From 3cbd4a98c07efbfbd4406ea688cde40569d84b56 Mon Sep 17 00:00:00 2001
From: andrewflowers
Date: Fri, 31 Oct 2014 12:48:41 -0400
Subject: [PATCH] update poll of pollsters with new data

---
 poll-of-pollsters/README.md                   | 20 ++++++++++++++++---
 poll-of-pollsters/poll-of-pollsters-3.tsv     |  1 +
 .../poll-of-pollsters-anonymous-answers-3.tsv |  1 +
 3 files changed, 19 insertions(+), 3 deletions(-)
 create mode 100644 poll-of-pollsters/poll-of-pollsters-3.tsv
 create mode 100644 poll-of-pollsters/poll-of-pollsters-anonymous-answers-3.tsv

diff --git a/poll-of-pollsters/README.md b/poll-of-pollsters/README.md
index 7cf5b7c..1433687 100644
--- a/poll-of-pollsters/README.md
+++ b/poll-of-pollsters/README.md
@@ -1,15 +1,19 @@
 ### Poll of Pollsters data
 
-This repo contains the responses from pollsters to FiveThirtyEight's two polls of professional political pollsters, as described in these articles:
+This repo contains the responses from pollsters to FiveThirtyEight's three polls of professional political pollsters, as described in these articles:
 
 [Pollsters Predict Greater Polling Error In Midterm Elections](http://fivethirtyeight.com/features/pollsters-predict-greater-polling-error-in-midterm-elections/)
 
 [Pollsters Say They Follow Ethical Standards, But They Aren’t So Sure About Their Peers](http://fivethirtyeight.com/features/pollsters-say-they-follow-ethical-standards-but-they-arent-so-sure-about-their-peers)
 
-We sent out the first poll starting Wed. Sept. 24, and 26 pollsters responded by deadline. We sent out the second poll starting Sunday, Oct. 12, and 24 pollsters responded by deadline.
-Respondents include commercial and academic pollsters who identify their polling organizations as liberal, nonpartisan or conservative. Some poll online, some by phone, some both.
+[Even Pollsters Don’t Know All The Details Of How Their Polls Are Made](http://fivethirtyeight.com/features/even-pollsters-dont-know-all-the-details-of-how-their-polls-are-made/)
+
+We sent out the first poll starting Wed. Sept. 24, and 26 pollsters responded by deadline. We sent out the second poll starting Sunday, Oct. 12, and 24 pollsters responded by deadline. We sent out the third poll starting Friday, Oct. 24, and 26 pollsters responded by deadline.
+Respondents include commercial and academic pollsters who identify their polling organizations as liberal, nonpartisan or conservative.
+Some poll online, some by phone, some both. Some answers have been edited, primarily for spelling, grammar, style and to protect anonymity, when requested.
+
 The responses are broken up into two files for each poll.
 
 Here are the files for the first poll:
@@ -31,3 +35,13 @@ This tab-separated file contains the names of 24 respondents, their polling orga
 `poll-of-pollsters-anonymous-answers-2.tsv`:
 
 This tab-separated file contains those responses that pollsters didn't want attributed to them. The heading of each column or set of columns contains a question and the next row contains the types of answer. Starting with the third row come the responses, in alphabetical order, grouped by question grouping. That means that each row doesn’t correspond to any one respondent. For example, the answers in the fourth row weren't all necessarily given by the same respondent. This sorting step was taken to better protect anonymity, by making it harder to figure out who gave which answer.
+
+Here are the files for the third poll:
+
+`poll-of-pollsters-3.tsv`:
+
+This tab-separated file contains the names of 26 respondents, their polling organizations, and those responses that they have agreed can be attributed to them. The heading of each column or set of columns contains a question and the next row contains the types of answer.
Starting with the third row, each row lists the answer given by the pollster listed in that row. Empty fields either mean the corresponding pollster didn't answer that question, or didn't wish to have the answer attributed.
+
+`poll-of-pollsters-anonymous-answers-3.tsv`:
+
+This tab-separated file contains those responses that pollsters didn't want attributed to them. The heading of each column or set of columns contains a question and the next row contains the types of answer. Starting with the third row come the responses, in alphabetical order, grouped by question grouping. That means that each row doesn’t correspond to any one respondent. For example, the answers in the fourth row weren't all necessarily given by the same respondent. This sorting step was taken to better protect anonymity, by making it harder to figure out who gave which answer.
diff --git a/poll-of-pollsters/poll-of-pollsters-3.tsv b/poll-of-pollsters/poll-of-pollsters-3.tsv
new file mode 100644
index 0000000..77a14e8
--- /dev/null
+++ b/poll-of-pollsters/poll-of-pollsters-3.tsv
@@ -0,0 +1 @@
+"Please enter your information, and your polling organization's information." "How do you determine how likely one of your respondents is to vote? Do you weight them, or is each respondent either a likely voter or not one? Please provide as much detail as possible." "Do you weight by party affiliation? If so, how do you determine what weights to use? Please provide as much detail as possible." Do you ever poll from registered voter lists rather than call at random? Why? "When you weight poll results, is there a maximum weight you use to increase the count of a demographic subgroup?" "Why or why not, and if yes, what is that weight?" Do you weight by race and party together? (Example: weighting African-American Democrats instead of African-Americans and Democrats separately.) "In what circumstances do you do so, and why?"
Do you ever deliberately call back prior poll takers [http://fivethirtyeight.com/features/oct-16-can-polls-exaggerate-bounces/]? "If you do sometimes or always do that, under which circumstances, how do you do so, and why? If not, why not?" Do you have off-the-record conversations with other pollsters to compare results before publishing them? Why or why not? Do you use Bayesian methods or frequentist methods? Why? Do you find traditional reporting of statistical margin of error to be credible? "If so, why? And if not, how do you think margin of error should be reported?" In what languages other than English do you ever ask your political polls? What percentage does it add to the cost of a poll to add one language? "If you field polls in Spanish, how do Hispanic respondents who choose to answer in Spanish differ from those who answer in English?" "How much does it cost you to poll one state's Senate race, on average?" "How much did it cost you to poll one state's Senate race, on average, in 2010?" "How do you account for the change, if any?" Do you always disclose who is funding your polls? Why or why not? Do you ever conduct and publish political polls without sponsors? "If not, why not and would you ever do so? If so, under what circumstances and why?" How many total employees (full-time or part-time) does your polling organization have? How many are men? How many are women? How many are white? How many are African-American? How many are Hispanic? How many are Asian American? Any comments on the demographics of your staff? Do you ever poll using an online panel? Do you cap the number of polls that panel members can take in a given time period? "Why or why not? And if you do, at what level do you cap it?" What percentage of your panel members leave or become inactive annually? Any comments on your panel turnover? Do you ever poll by phone?
Have decreasing response rates required you to change your techniques by using increased weighting or supplementing with different technologies? Please elaborate on your answer above. How much do you pay your interviewers per hour? What percentage of your interviewers are male? What is your interviewers' average age? How many hours of training do you require them to have before they can conduct interviews? How are your interviewers trained to handle invective from people who hate being called? What percentage of interviews do you monitor for quality? Do you ever interview by phone using Interactive Voice Response (IVR)? For what percentage of IVR polls do you use male voices? For what percentage of IVR polls do you use female voices? "Whose voices do you use? i.e. actors, local TV personalities?" "How much do you pay the people whose voices you record, by poll or by hour?" "How many seats do you expect Republicans will control in the Senate in 2015? (Yes, we're asking again.)" Why? What question or questions would you want us to ask your fellow pollsters in future rounds of this poll? 
Name Polling Organization Open-Ended Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Sometimes or Other (please specify) Open-Ended Response Response Open-Ended Response Spanish Mandarin or other Chinese Tagalog French Vietnamese German Korean Russian Arabic None Other (please specify) Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Response Response Open-Ended Response Open-Ended Response Open-Ended Response Response Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Shachi Kurl Angus Reid Global "Angus Reid Global reports two perspectives on the Canadian electorate. The first, as has been our practice for almost 40 years, involves examination of the intentions and attitudes of eligible Canadian voters. The second, in light of declining electoral turnout rates, particularly among younger voters, includes a separate commentary in our analysis on the intentions of Canadians most likely to vote. Our data has been analyzed through two sets of filters: those of all respondents who are eligible Canadian voters, and those of survey respondents who are most likely to vote. The data from all respondents uses standard census-based targets to ensure a national sample that is representative of the adult Canadian population as a whole by key demographics such as gender, age, education and region. 
Data from likely voters applies a weighting structure that further adjusts our sample to reflect known variations in voter turnout -- specifically across age groups -- while also filtering based on respondents' own self-reported past voting patterns and habits. We have developed this approach because we feel strongly that it is the responsible thing to do when reporting on electoral projections. With declining voter turnout, there exists an increasingly important divergence between general public opinion -- which still includes the still valid views of the almost 40 percent of Canadian adults who don't vote -- and the political orientation of the 60 percent of likely voters whose choices actually decide electoral outcomes." Yes Transparency is absolutely 100% key. Yes No John Anzalone Anzalone Liszt Grove Research "First off, we are driving our samples based off of voter history. We have an equation of what percent of the sample is only 2010 voters and then allow in a certain but small percent of new registrants since 2011 who voted in 2012 primaries and general, as well as new registrants since the 2012 election. But we start with 80% hardcore mid-term voter history. Then we screen down verbally on the interview between very likely and somewhat likely to vote." "Naturally we ask self-identified party affiliation and we record party registration if the state has that on the voter file. Early in the cycle, we will let party ID float, but when it gets late in the cycle, you should not be seeing big shifts in party ID, and we will at times weight it depending on what we see with the other demos. You have a model and you need to be consistent with that model." Yes "We almost exclusively use sample from voter files that have voter history and have been very successful with both our modeling and predictions. In midterms you have to use a voter file because it is such a restricted voting universe.
You can use RDD [random-digit dialing] in a presidential cycle but voter files update new voter registration so fast these days that using a voter file is superior to RDD because with RDD you are counting on the respondent to be truthful about whether they are going to vote. And that is not realistic. We see in our drop-off studies (those who voted in 2012 but not 2010) in 2014 that so many of those who did not vote in midterms say they are going to vote in 2014. We know that is not true, and it is not true when using an RDD of midterms." Yes "First of all, we run partial data every day and then make sure the phone bank has quotas on demographic groups if they get out of whack. Our goal is to do as little weighting as possible and get the right interviews. You can do that by reviewing partials and seeing where you are low on groups. You then have to double-down on getting those interviews. It may be more expensive, but it is the right way to do it. You can also help this by calling more evenings. Too many pollsters (and clients) are in a rush to get data. With caller ID, no call list, cellphones, etc., you need to take more days doing proper call-back procedures and that will help you get the correct demographics instead of just overly weighting." Yes "It just depends. If you are in a state like Florida where you have whites, blacks and Hispanics (and Cubans and non-Cuban Hispanics), you may have to do multivariable weighting. You don't usually have to do that in homogenous Iowa. In Florida you have white progressive Democrats in south Florida but conservative cultural Democrats in the Panhandle who call themselves Democrats and vote Republican. It is never as simple as it seems." Sometimes Panel backs [i.e. call backs] are difficult to pull off unless you start with a very large sample size. We rarely do it. No Just don't. Sometimes or Other (please specify) Are you fucking kidding me? Are you fucking kidding me? 
Yes That probably should be a yes-and-no answer reserved for a two-hour panel discussion. Spanish Mandarin or other Chinese "When using a bilingual bank for Hispanics, you also have high cost, because Hispanic households are at nearly 50% cellphone-only. You also have translation cost. In total, it can add 20%." "Spanish-speaking respondents are much more likely to be Democrats. But again, it depends. In Florida, we have a universe who take interviews in Spanish who are older Cubans and then younger, non-Cuban Hispanics." "There is no average because there are differences in sample size, length of questionnaire and whether or not you are using bilingual banks." again no avg "Phone-bank cost are definitely going up and they have been for over a decade. The combination of Caller ID, no call list, cell phones, marketing calls, robo calls reminding people of appointments, etc. all contribute to costs." No Depends on the state laws. We follow the laws. Yes "Yes, we internally do test polling and on occasion put parallel polls in the field to check our numbers on important race. But we never publish them." 22 9 13 18 2 0 2 We are proud of our diversity. We also have one LGBT staffer. You should ask that. Yes Yes The panel company does that as well Yes Yes "We really are strict on our call-back procedures and extending the number of days of our dialing. Again, we look at partial results every day and if any demographics get too out of whack, we will put a quota on it for the phone bank." No Gabriel Joseph ccAdvertising "No weight. We just ask them all. For example, we are surveying 107,555 homes in a county for a client today." "No weight. We just survey a very big universe. Once the sample is big enough, it does not matter on weight. We also target the mobile-phone channel. There are more mobile phones than people. This automatically provides us with a diverse and balanced response." Yes Clients want us to. No We do not weight surveys. We survey the entire universe. 
No We do not weight polls. No "We call an entire universe. If that includes respondents from past surveys, that is great. If not, it does not matter. When the sample size if big enough, no weighting is needed." No Never have been approached by one. Sometimes or Other (please specify) Our process does not require methods. "Our process does not require methods. When you talk to everyone, why do you need a method?" No When your sample size is big enough it makes margins of error ridiculously low. Why mention something no one would believe. Spanish Mandarin or other Chinese French German 25% Yes "Minimum fee $1,500" "Minimum fee $1,500" N/A No We always disclose who is responsible for the survey. Yes Because we can. It feeds a thirst for information. The public and our clients want to know. If we see a need sometimes it helps us get business as well. No Yes No We have not seen decreasing response rates. We target the mobile-phone channel that has increasing response rates. Yes 50% 50% "All of the above, including clients." They are staff. 54 Data How off are your polls from what happens on Election Day? Darrel Rowland Columbus Dispatch "We are the weird one -- a mail poll. Over the past several decades, we have determined that those who choose to fill out and mail back our poll form make up likely voters. If the demographic composition of those respondents is significantly different than U.S. census demographics, we will consider weighting the results. Again, with a mail poll, our weighting technique is somewhat unique. The polls are coded for each of Ohio's media markets. If one of those markets is heavily over- or under-represented, we will first weight on a geographical basis and determine whether that pulls the other demographic factors into line. It usually does." "No. Party affiliation has been shown to be highly volatile, especially in a swing state such as Ohio. We regard it as a variable, not a constant." 
Yes Our mail-poll sample is drawn from the state's official list of seven-million-plus registered voters. No "Obviously, the higher the weights, the more reluctant we become to use them. Far more often than not, we use unweighted figures." No "Again, we never weight on party." Yes "We call respondents who volunteer contact information to get comments elaborating on their answers. We use some of those comments in the stories we publish, and put the remainder on the web to give us a qualitative element along with the quantitative element of a standard poll." No "Not to polish our own apple, but our poll has been demonstrated to be the most accurate in Ohio in recent years plus we poll on far more races and issues than anyone else." Yes "Generally so. I would be more comfortable if pollsters would refer to a plus-or-minus ""sampling"" error, since other types of error are always possible." none N/A N/A Yes It's a simple matter of transparency that adds credibility. Yes This is the ONLY way we do it -- we conduct and fund the poll totally ourselves (The Columbus Dispatch). We are a newspaper that conducts polls. All employees are part of our operation. N/A N/A N/A N/A N/A N/A No No For those who use party ID [identification] in their LV [likely-voter] turnout scenarios how can you project this in a statistically -- even historically -- valid way? MaryEllen FitzGerald Critical Insights "We have a battery of four questions, gauging past participation in on- and off-year elections. We then score the respondents based on that." Stuart Elway Elway Research "We sample from registered-voter lists, which has the vote history on the record. In off years, we define a likely voter as one who has voted in at least two, sometimes three, of the last four elections. Depending on the proximity to the election, we also include a screening question about certainty to vote. In presidential years, we don't typically screen. 
Our objective is to explain the entire electorate, including those choosing not to vote -- not necessarily to predict the election outcome." "No. There is no party registration in Washington state, and party identification fluctuates month-to-month. There is no reliable anchor to weight to." Yes "It a more efficient and reliable way to contact registered voters. Plus, there is other information available, such as voting district, vote history and party registration that is useful in both the sampling and the analysis." No No "I can't recall having done that in a election poll, which I presume is what you are asking about. When we do polls for media outlets, we get numbers for reporters to call back respondents for in-depth interviews, but I don't think that is what you are asking." No "I have had conversations after publication in some instances (rarely), to discuss variance and possible reasons (timing, too many older voters, etc.)." Don't use them. Yes "By ""sponsors"" I assume you mean single entities. All of my political polls are done for The Elway Poll, which is funded by subscribers, or by media outlets." No Yes No "Not for election polling, except that we have to order larger samples. We sometimes supplement non-election surveys with an online component." Yes Bernie Porn EPIC-MRA No Yes Yes Mark DiCamillo Field Research Corporation (Field Poll) We use a scaled intention-to-vote question and couple this with their actual voting history from their voting record. (This information is available to us since we sample voters from the registered voter rolls.) "No. We weight by party registration, which has a population value to which we can project the sample. Each voter's registration comes directly from the voter file (again since we are sampling from the voter rolls). It is not a question asked in the survey." Yes This is the standard method used by The Field Poll in its pre-election surveys measures. 
Yes "Yes, we will trim weights when necessary, although this is fairly uncommon. When trimming is employed, we usually limits individual weights to 3." Sometimes "When we do so, we weight to the party registration of the ethnic subgroups (not party ID), again since the registered-voter population value of each ethnic subgroup is known." Sometimes We did this once when one of the candidates included in our pre-election poll dropped out and we called back voters who chose that candidate to ask who their second choice would be. No "We keep our poll results strictly confidential prior to their publication. After publication, we are open to discuss them with other pollsters and polling analysts." "We calculate and report each poll's margin of sampling error in all reports mainly because it is the usual practice in our industry. But I must confess that when reading the polls of other pollsters, I don't really pay much attention to its reported margin of error." Spanish Mandarin or other Chinese Tagalog Vietnamese Korean Korean "Including Spanish is standard when polling Californians. Since all of our polls over the past 15 years have included Spanish, I can't really say what our cost would actually be without Spanish. Over the past four years, we have increasingly included Asian languages, and probably have conducted about a third of then in the Asian languages noted above. When this is done, we typically include an over-sampling of each Asian American population to enable the poll to compare and contrast each of the Asian American voter segments, since they are quite distinct. This typically adds 20% to 25% to the total cost of the poll." "Spanish-speaking Latinos are quite different demographically than English speakers. They tend to be much more downscale, with lower levels of income and education. 
This creates greater affinity to the policy positions of the Democratic Party, since they place greater value on government-provided services, like health care, education and the schools, jobs, and, more recently, the minimum wage. However, we have found them to be more conservative in their views on many hot-button social issues, like same-sex marriage, marijuana legalization and abortion." Not applicable since each of our polls covers numerous poll topics and multiple election contests. Same answer. Yes We tell voters that our polls are nonpartisan and are funded in part by many of the state's leading news organizations. No Yes Yes "Increasingly, we stratify our voter samples by age to ensure a proper representation of voters of all ages. Sampling from the voter rolls makes this relatively straightforward since the age of the voter is included on their voting record. We believe stratifying the sample in this way is superior to applying larger weights to the younger voter segments that can be otherwise be underrepresented in our surveys." No 51 The odds still favor the Republicans winning enough of the contested races to put them over the top. "To the automated voice robopollsters: Since 90%+ of Americans now have cellphones and more Americans choose to cut the cord and do without access to a residential landline phone (it now includes fewer than two in three), what are the long-term prospects of this interviewing method since automated voice message calls are prohibited by law from dialing cellphones?" Berwood Yost Franklin & Marshall College "We use several different models, one based on prior voting history and the other based on perceived electoral interest and self-described intention to vote. These are discussed in our releases." "We do weight by party affiliation (as well as region and gender) since all of our samples are designed to include all registered voters. Likely-voter determinations are then made as discussed previously. 
The state of Pennsylvania provides detailed voter-registration files that we use to determine the appropriate proportions. PA voters must be registered to vote 30 days prior to the election, so we use the final registration file for surveys that take place after that date." Yes "Why not? There are several considerations. First, the voter files include data that can be used to assess the quality of the interviewing and how representative the sample is. Second, voting history can be used to build likely voter profiles. Finally, it is more cost-efficient to use voter lists than RDD [random-digit dialing] samples. We switched from RDD samples to RV [registered-voter] lists in 2012." Yes "We use a raking algorithm to adjust the sample results. If any weight is >= 3, it would be trimmed." No No No Frequentist None Yes I don't have that information immediately available. I'll try to pull it together before your deadline. No Yes Yes No Doug Kaplan Gravis Marketing "We ask them if they plan on voting. We weigh likely, very likely, and somewhat likely." Yes "Yes, we constantly update our lists to cover new voters. We call registered-voter lists unless otherwise noted." Sometimes No Yes Yes On larger races yes. On smaller races with low populations it becomes more of an issues. Example only 200 completes in a school board race. Yes AAPOR rules Yes Yes Yes $10/hour 20% 23 10 They have a manager always available and an internal do-not-call list. 15% Yes 75% 25% Professional voice actors $5 per message. We record thousands of messages 55 "The Republicans will win Kentucky, Georgia, Iowa, Montana, Arkansas, Colorado, Alaska and North Carolina. The Republicans will hold Kansas (possible they lose governor's race there). I am less confident in New Hampshire but believe the Republicans will win there, as well." Matt Towery InsiderAdvantage "Our samples are from Aristotle's registered voter files. 
[http://aristotle.com/] We then have questions to screen for likely voters and only include and weight responses from those who indicate they are likely to vote (or in the instance of early voting, if they have already done so)." No Absolutely not! Yes Yes Yes 51 There is a slight GOP wave in non-southern contested states such as Colorado and enough southern states will either hold GOP or go GOP to give Republicans a slim majority. But perhaps not until January (Georgia runoff possible). Do you seriously believe that legitimate and representative voters answer 70 questions on a cellphone? Julia Clark Ipsos "We utilize a likely-voter model. This contains questions on: registered voter, past vote at various election, self-reported likelihood of voting on a 10-point scale, and interest in the election. They are turned into a summated index from which we make cuts based on turnout assumptions. For more granularity, we also run a regression index. Weights are applied to the whole population (either at the total level or registered-voter level, depending on the type of survey) before the likely-voter model is applied." No No "We do not do campaign polling, and so we almost never need to target very small geographic regions over short periods of time, which is why lists are generally used. Plus we do primarily online polling, and lists do not have email addresses." No "We do not set fixed max weights. If a weight is high, we assess it and determine the best way to address the situation." No No We don't do phone polling. No "It would seem like a very strange thing to do. And there is never time, even if we were so inclined... the data needs to get out the door very quickly." Bayesian "For reporting ONLINE poll results, we use Bayesian methods to account for the non-probability nature of our recruiting processes for online samples. In many studies, we use outside information to help create informed priors to calibrate our online results. 
(Generally speaking for our business as a whole, we use a combination of methods depending on mode and client.)" Yes "Our real response is 'yes and no'! Traditional reporting of margin of error accounts for sampling error, and it does not account for other sources of error that may affect a poll's results and its variability far more than the sampling error calculation using the sample variance. Sampling error does not adequately account for the changing nature of telephone research in the age of cell phones, callerID, and respondent disaffection with participating in surveys. For the MoE to be credible, it would need to account for total survey error within a mean square error calculation. Traditional use of margin of error for online surveys is often noted as inappropriate for online surveys, and this is a simplification. Traditional use of margins of error can be used for online surveys with the use of an underlying model. Very often, this argument is dismissed. Our organization's response has been to adopt a Bayesian framework/model. This allows us to calibrate our results using informed priors to account for the differences in the population for technologically adept versus non-adept people. We have promoted this approach in print and at conferences to work within the system rather than outside of it. In all honesty, we still report traditional margins of error for telephone research, and we use Bayesian for our online polling work. To report our telephone results using traditional margins of error, we too have to implicitly assume an underlying model -- that the percent of respondents we can reach and get to cooperate respond similarly to those that we could not contact or refused to cooperate. For our online polls' results, we provide Bayesian Credibility Intervals. This provides a probabilistic measure of error in some ways analogous to Confidence Intervals and a margin of error."
Yes "We are supporters of the AAPOR Transparency Initiative [http://www.aapor.org/Transparency_Initiative.htm], and also fundamentally believe that this is important information for a poll consumer to have." No "Polling is an important part of Ipsos's identity in the U.S. and globally. If we did not have a media partner, it is likely we would continue to publish political polls under our own brand. Happily, we have always had the fortune to work with media partners on our polling work." "Our polling team is a small group of people within Ipsos Public Affairs, which is a part of Ipsos Group (2100+ U.S. employees and 15,500+ global). At any given point in time, between two and 15 people work on the U.S. polling work, and no one works on it exclusively (even me). Our core team is more women than men, but it varies enormously depending on the work at hand and the time of year." Yes Yes We want to avoid frequent responders for quality reasons. No more than once a month. Non-panel participants are tracked by IP address. Yes Yes "All our political polling this year and in 2012 has been online, although we do a great deal of phone research too (just not for political work). Across the industry, it is now standard to include at least 25% -- we usually do more -- cellphones in a phone sample." No Barbara Carvalho Marist College "Likely voters are defined by a probability turnout model. This model determines the likelihood every respondent who indicates they are registered to vote (or plan to register if the poll is being done prior to the deadline for registration in their state) will vote based upon their chance of vote, interest in the election, and past election participation." No No We prefer RDD [random-digit dialing]. Lists add another layer of uncertainty and non-response given the quality of registered-voter lists is not consistent across all states. We also try to measure new voters. Yes "Multipliers are capped to be no greater than 2. 
We prefer not to make statistical adjustments to make up for sampling that is not representative. Increasing the proportion of cellphone sample, increasing callbacks, scheduling callbacks, allowing respondents from the sample to call back the survey center at their convenience, contacting respondents at different times of the day, training and monitoring of interviewers -- all improve representativeness. The need for adjustments greater than 2 for our weighting demographics in samples is rare." No We never weight by party ID. Sometimes "1 -- For research purposes, we have called back respondents to pre-election polls after Election Day to compare their intention to support a candidate with the candidate they actually voted for. 2 -- We ask permission for recontact at the end of our surveys to have a reporter from our media partner(s) call them to do a follow-up interview or, in some circumstances, request participation in a focus group about their views." No No need to. Sometimes or Other (please specify) Not enough time or space to get into the Bayesians/frequentists debate Not enough time or space to get into the Bayesians/frequentists debate. No There is generally a misunderstanding as to what MOE [margin of error] is and for which types of surveys it may be calculated. It doesn't provide very much insight into the value or quality of the research although that is often the inference. Spanish Mandarin or other Chinese French German It depends upon the language and the population incidence of those who speak the language. It depends upon the issue/topic surveyed. Sometimes there are significant differences and sometimes there are not. "We're an educational research center, not a business." "In 2010, it took fewer interviewing hours to get the same number of completed survey interviews." Proportion of cellphones Yes Principle of transparency Yes We're a research center at Marist College and an educational program. So we often conduct surveys without sponsors. 
12 and about 450 interviewers Staff 3 Interviewers 45% Staff 9 Interviewers 55% Staff 12 Interviewers 70% Interviewers 15% Interviewers 10% Interviewers 5% Yes Yes We have worked with a probability-based outside vendor. N/A N/A Yes No The proportion of cellphone interviews has increased. It is a scale based on experience. 45% College age About 10 hours "Part of their training is in human subjects research. They are trained and evaluated in these specific skills, they role-play, have pat phrases, and a hierarchy of coaches and managers to assist." All interviewers are monitored and provided feedback during their shift. All interviews are recorded. No 51 "Unlike 2012, when Obama had many paths to winning, the GOP this time has expanded the playing field and has multiple paths to gaining a majority." Nothing at this time Brad Coker "Mason-Dixon Polling & Research, Inc." No "Our contracts specifically prohibit us from discussing a poll with any third party until the client publishes the results. Breaching a contract is not a smart business practice. Over the past 25+ years, we have developed cordial relationships with many of the major campaign polling firms. They know our situation and that about the only thing we can give them is an approximate day that the results will be released. However, once our poll results are in the public domain, we will discuss and compare notes with campaign consultants that have a degree of gravitas in the business." No Simple business principle -- make more money than you spend or else you'll go out of business. It's better use of resources to do nothing and go fishing than it is to spend money and staff resources to conduct a poll no one is paying for. No Yes No 54 "Most are saying 51-53, so I'm going bold. It feels a lot like 1994, when Democratic incumbents polling in the mid 40s all lost." Mark Mellman Mellman Group Yes Yes Yes Seth Rosenthal Merriman River Group No. No It's simply not the type of work we do. 
Spanish 25% "Lower socioeconomic status, more likely to identify as a Democrat, less likely to vote, more likely to be undecided in down-ballot races." No Yes Yes Steve Mitchell Mitchell Research & Communications "We first ask if they are registered voters. If registered, are they definitely going to vote, probably going to vote, not sure yet, or definitely not going to vote? We accept only the top two: definitely or probably. Probably is usually less than 2 percent. After absentee ballots are mailed, we ask: ""Thinking about the upcoming November General Election for U.S. Senate and Governor, have you voted by absentee ballot, are you planning to vote by absentee ballot, are you definitely voting on Election Day, probably voting on Election Day, not sure yet if you are voting, or definitely not voting?"" If they answer not sure yet or definitely not, we do not poll them." "Yes, we weight. We look at past election results in similar races. However, the closer we get, the less we weight by party." Yes "Michigan has an excellent list vendor and our results in all types of races have been very accurate using his lists. We have found that to be true in other states, as well." No Yes "In the past, we used to do a panel-back [call-back] poll the night before the election to see how the race would turn out the next day. We don't do that any more since we use automated phoning now. It was for internal use only." No Our numbers are our numbers. Polling cannot be done in a vacuum because we all look at what is happening in other states and in the states we are polling. But I don't want my results tainted by talking to another pollster. Arabic 50% It depends on whether it is automated or operator-assisted. Our costs are down. Yes Yes Yes Yes We do not use online panels for political polling. We use them very rarely. Yes Yes We have to weight for voters under 29 and for African-Americans. $9 per hour About 33% 21 10 "To be polite, thank them and hang up." 
33% Yes 25% 75% Local actors $50 per poll 52 Watching all states and blogs like FiveThirtyEight carefully. None Patrick Murray Monmouth University No "While there may be some pollsters who would like to be able to do that, they would be hard-pressed to find a willing accomplice. P.S. You guys really need a lesson or two on question wording." Sometimes or Other (please specify) "Neither really if you are basing this on a strict definition of the terms. Using RBS [registration-based sampling] rather than RDD [random-digit dialing] allows for a little more Bayesian application in the model, but even those who take samples ""as they lie"" apply some Bayesian thinking -- it's just not quantifiable." This question is a good reminder that few pollsters -- especially the good ones -- are primarily statisticians. My first job in polling was as a telephone interviewer in college. I learned a lot more about the practical application of polling methodology from that experience than any of my stats classes. Good polling requires an understanding of how to communicate with individual respondents (either by phone or in writing) as much as how to weight and model databases. Yes "The idea of a cap is not applicable for a true panel study. Your question asked if we use an ""online panel"" -- that could mean a variety of things now. You have to remember that many of us do non-election polling. For me a panel study is tracking individual changes in attitudes and behavior over time. That's different from a so-called cross-sectional ""panel"" which is modeled to ""look like"" a population at a specific point in time. You need to clarify what you are actually asking about." Yes Yes Christopher P. Borick Muhlenberg College We are using an RBS [registration-based sampling] sampling frame that utilizes past voting behaviors to determine preliminary screening for interviewing. Individuals qualify for interviewing if they have demonstrated certain voting patterns (e.g.
voted in two of the last three midterm elections). During interviewing we screen out individuals who express a low likelihood of voting. "Yes. The weighting varies depending upon if we are making inferences about registered or likely voters. For registered voters, we weight our sample to most recent party-registration statistics for the population of registered voters as was reported in the most current figures provided by the Pennsylvania Secretary of State. For likely voter samples, we incorporate previous party voting results in similar elections (e.g. midterms) as reported in exit polling during those elections." Yes We have transitioned in recent years from RDD [random-digit dialing] to RBS [registration-based sampling] samples only in our election polling because we feel that these sampling frames can provide us more validated voter-behavior information. Yes "In election polls, we cap our weights for a demographic subgroup at 2.5. Obviously as weights increase for any subgroup, there are risks that additional survey error may be introduced, so we have opted for a cap at a weight of 2.5." No Sometimes "If we are conducting an experiment where individual-level analysis is necessary, we have called respondents back again. For example, we have done tests where we call back respondents after an event (e.g. a debate, a high-profile ad) to see what changes may have occurred at the individual level. We wouldn't be able to test these individual changes with a new group of respondents. In our election polling that is reported publicly, we have only utilized new samples without calling back prior respondents. Our experiments with call backs have been done for academic studies." Yes Yes. I have shared results in advance of publication with other pollsters as a courtesy to give them a heads up of what we will be releasing. 
Other established pollsters in the state are often called on to comment on our release so I like to share what we find with them earlier so that they can get a sense of what will be coming out. Frequentist "We are getting more interested in Bayesian approaches but at this point don't feel comfortable using Bayesian techniques as a primary component of our methods. To be honest, we need to learn more about what Bayesian techniques can do to improve our survey quality and I imagine we will be thinking more about this issue in coming years." "I'm not really sure what you mean by traditional reporting. If you are talking about the difference between the standard MOE and TSE [total survey error], there is room for discussion about what should be shared." Spanish "The cost increase from English-only surveys ranges from about 15 percent to 20 percent higher. We don't do Spanish-language interviewing often (only about three times in the last four years), so we don't have a big N to look at." We haven't done enough research on this with our samples to be able to identify if there are any differences. Yes This should be a requirement for any publicly released poll. A pollster should give full disclosure of the funding of a poll upon request. Yes We do so as a core element of our institute's mission. "about 50 total, 6 who are not interviewers" "approximately 22, 1 who is not an interviewer" "approximately 28, 5 who are not interviewers" "about 37, 6 who are not interviewers" "about 8, 0 who are not interviewers" "2, 0 who are not interviewers" "1, 0 who are not interviewers" Yes No "We have done some survey experiments using online panels, but only occasionally. Thus repeating panel members have not been a significant concern for us." Yes Yes We have had to increase the number of call backs we conduct. About 40% Around 21 It depends on the survey project To always remain courteous and to anticipate that they will regularly deal with individuals who are upset with being contacted. 
We monitor interviewing at all times during calling hours. No 54 Latest polls (I check daily!) and other indicators such as the president's continued low approval ratings. "I'm not sure but this was a really long survey. I really like your work at FiveThirtyEight so I hung in there, but I think I'm tapped out for a while." Scott Keeter Pew Research Center "We build a likely-voter index from a set of seven or eight questions. Scale includes measures of engagement in the campaign and in politics more generally, past voting, and intention to vote. We add one or two bonus points for young people. We then establish a likely-voter turnout percentage based on historical averages and some estimate of whether the current election is likely to be higher or lower than average. We adjust the prediction to take account of the fact that survey samples tend to be a bit biased toward more engaged people. Then we use the adjusted turnout percentage to cut the likely-voter scale. It is usually the case that the target percentage does not correspond cleanly to a set of categories on the scale, and so we have to take a percentage of the people at a particular point, plus all of the people in categories higher than that. So, for example, if our scale is seven points, we may take all of the 7s and a portion of the 6s. The apportionment is done using weighting, rather than arbitrarily splitting the group." No No "Past concerns about bias in voter lists. This may be changing, and we are likely to experiment with voter lists in the future." No "We are mindful of the impact of extreme weights, but using a typical RDD [random-digit dialing] design, it is usually not necessary to adjust weights, other than the usual trimming that is done as a part of the normal weighting process." No We don't weight on party in our RDD [random-digit dialing] surveys. Sometimes "Occasionally we do this to determine whether voter intentions have changed. 
In non-political surveys, we sometimes call back respondents to gather additional information." No "Doesn't seem like a useful exercise. We have many conversations with our peers, but not at the point of deciding to publish a poll." Frequentist Yes But only if pollsters are calculating margin of error with design effects properly included. Spanish I don't have that information close at hand. "Yes, but the differences vary across topics." We don't poll in states. We don't poll in states. Yes We are a nonprofit organization whose mission is public polling. No "Our funder is a sponsor in the sense that they fund us to conduct research, but they are not a client in the usual sense of sponsorship." Yes Yes Our probability-based panel was designed to yield one survey per month. We haven't been through a full year yet. "After the initial attrition following recruitment, turnover has been very low." Yes No "Declining response rates could certainly lead to increased weighting, but at the same time we have been increasing the cellphone percentage in our samples (now 60%). That has had the effect of reducing the need for higher weights." No Evans Witt Princeton Survey Research Associates International Yes "Certainly. For certain surveys, especially state-level surveys on elections, RBS [registration-based sampling] can be an excellent choice." No Yes Call-back surveys are a valuable survey tool in a variety of circumstances. Yes There is a decades-long debate about the reporting of sampling margins of error. They should be reported because they provide one valuable piece of information about the survey. Yes Disclosure is central to credibility. Please check out 20 Questions a Journalist Should Ask About Poll Results. [http://www.ncpp.org/?q=node/4] Yes Yes No J. Ann Selzer Selzer & Company "No. There is a wonderful chart at Pollster.com that shows how widely polls vary when they ask party ID. It is not a fixed variable, therefore inappropriate for weighting. 
[http://elections.huffingtonpost.com/pollster/party-identification#!showpoints=yes&estimate=custom]" No "When I first joined The Register in 1987, we were calling about the 1988 caucus. A strategy in place was to bank ""warm"" caucus-goers. We'd simply call back a random sample of people we had already polled and mix them with new caucus-goers. Once I ran the crosstab and saw that George H. W. Bush was winning with one group and Bob Dole with the other, we halted that practice immediately. We had no explanation for the difference, but there it was." Yes "Well, the reporting of the margin of error is okay. It's how people interpret it. More than 90% of the time, if it is mentioned, it is to say that the race could be closer, or the other person could have the lead. It is equally likely the race is farther apart. It seems a matter of bias that one possibility is reported without the other, equally likely, possibility." Yes "It's a principle of disclosure. If the respondent is going to give us time, what we can give in return is transparency." No It's not part of our business model. Yes Yes No Don Levy Siena College Yes Yes No Jay H. Leve SurveyUSA No Never in 22 years. Never even occurred to me to do so until this minute. No "Public opinion polls should not be ""loss leaders."" A media sponsor is a critical second set of eyeballs on every questionnaire we draft and on every set of research results we release. The journalists who review our questions before we launch and who review our findings before we publish create an essential ""checks and balances"" system that is wholly absent for those pollsters who act unilaterally." Yes Yes Yes "Telephone polling died in 2008. 
See article by Bialik in WSJ 11/06/08: ""2008 is to telephone polling what 1948 was to passenger rail: The end of the line."" [That was a quote by Leve in this article: http://online.wsj.com/articles/SB122592455567202805]" Yes Andrew Smith University of New Hampshire "We ask several questions about voter registration, interest in the election and voting in past elections, to create a context in which it is easy for a respondent to say they are NOT going to vote. We continue only with respondents who say they will definitely vote or who will vote unless an emergency comes up. In addition, we read an option to each election question that allows respondents to say they will skip that particular race. We do not weight voters on their propensity to vote." "No, this is nuts! There is no parameter to weight to." No "The states we most typically poll in (New Hampshire, Massachusetts, Maine) have same-day registration. In New Hampshire, this typically is between 5 percent and 15 percent of the electorate. Using registration lists systematically excludes these people. Also, lists never have contact information for all voters. This systematically excludes those for whom no information is available." No Has never been an issue. No Yes Quality control. No What would be the point? We publish all of our results regardless of what others look like. Frequentist No "MSE [margin of sampling error] is least important source of error. Non-response, particularly for IVR [interactive voice response] polls, is a much greater source of error as are question-form effects. Focusing on MSE is misleading. Also, MSE should be reported with design effects." None Yes Readers have to know the sponsor so they can judge their interest in the results. Yes "About 100, 5 full-time staff" "About 40 percent, 3 full-time staff" "About 60 percent, 2 full-time staff" About 90 percent About 5 percent About 3 percent About 2 percent Not too many minorities in New Hampshire. 
No Yes No Between $9 and $15 per hour depending on level and experience About 40% 30 8-12 hours Thank them and hang up About 10% No 52 Bad year for Democrats Gregg Durham We Ask America No Yes Yes \ No newline at end of file diff --git a/poll-of-pollsters/poll-of-pollsters-anonymous-answers-3.tsv b/poll-of-pollsters/poll-of-pollsters-anonymous-answers-3.tsv new file mode 100644 index 0000000..1487c85 --- /dev/null +++ b/poll-of-pollsters/poll-of-pollsters-anonymous-answers-3.tsv @@ -0,0 +1 @@ +"How do you determine how likely one of your respondents is to vote? Do you weight them, or is each respondent either a likely voter or not one? Please provide as much detail as possible." "Do you weight by party affiliation? If so, how do you determine what weights to use? Please provide as much detail as possible." Do you ever poll from registered voter lists rather than call at random? Why? "When you weight poll results, is there a maximum weight you use to increase the count of a demographic subgroup?" "Why or why not, and if yes, what is that weight?" Do you weight by race and party together? (Example: weighting African-American Democrats instead of African-Americans and Democrats separately.) "In what circumstances do you do so, and why?" Do you ever deliberately call back prior poll takers [http://fivethirtyeight.com/features/oct-16-can-polls-exaggerate-bounces/]? "If you do sometimes or always do that, under which circumstances, how do you do so, and why? If not, why not?" Do you have off-the-record conversations with other pollsters to compare results before publishing them? Why or why not? Do you use Bayesian methods or frequentist methods? Why? Do you find traditional reporting of statistical margin of error to be credible? "If so, why? And if not, how do you think margin of error should be reported?" In what languages other than English do you ever ask your political polls? What percentage does it add to the cost of a poll to add one language? 
"If you field polls in Spanish, how do Hispanic respondents who choose to answer in Spanish differ from those who answer in English?" "How much does it cost you to poll one state's Senate race, on average?" "How much did it cost you to poll one state's Senate race, on average, in 2010?" "How do you account for the change, if any?" Do you always disclose who is funding your polls? Why or why not? Do you ever conduct and publish political polls without sponsors? "If not, why not and would you ever do so? If so, under what circumstances and why?" How many total employees (full-time or part-time) does your polling organization have? How many are men? How many are women? How many are white? How many are African-American? How many are Hispanic? How many are Asian American? Any comments on the demographics of your staff? Do you ever poll using an online panel? Do you cap the number of polls that panel members can take in a given time period? "Why or why not? And if you do, at what level do you cap it?" What percentage of your panel members leave or become inactive annually? Any comments on your panel turnover? Do you ever poll by phone? Have decreasing response rates required you to change your techniques by using increased weighting or supplementing with different technologies? Please elaborate on your answer above. Do you ever poll by phone using live interviewers? How much do you pay your interviewers per hour? What percentage of your interviewers are male? What is your interviewers' average age? How many hours of training do you require them to have before they can conduct interviews? How are your interviewers trained to handle invective from people who hate being called? What percentage of interviews do you monitor for quality? Do you ever interview by phone using Interactive Voice Response (IVR)? For what percentage of IVR polls do you use male voices? For what percentage of IVR polls do you use female voices? "Whose voices do you use? i.e. 
actors, local TV personalities?" "How much do you pay the people whose voices you record, by poll or by hour?" "How many seats do you expect Republicans will control in the Senate in 2015? (Yes, we're asking again.)" Why? What question or questions would you want us to ask your fellow pollsters in future rounds of this poll? Open-Ended Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Response Sometimes or Other (please specify) Open-Ended Response Response Open-Ended Response Spanish Mandarin or other Chinese Tagalog French Vietnamese German Korean Russian Arabic None Other (please specify) Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Response Open-Ended Response Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Response Response Open-Ended Response Open-Ended Response Open-Ended Response Response Response Open-Ended Response Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response Open-Ended Response "Cut-off. Since I use RBS [registration-based sampling], the sample is pre-screened by past voting history before I even dial with an additional cut based on self-reported likelihood." No. No No But we look carefully at our weights. No I do not weight by party ID. No It's simply not the type of work we do. No "Don't want to drive myself crazy. Also, on congressional districts and state senate districts, often no other public pollster is doing them." Bayesian We use Bayesian because it allows us to change our models as more information becomes available. 
No "For phone polling it's fine. We poll 100% online, so we provide a comparison-purposes only example of what an MoE on a statistical sample the same size would yield. There is a much wider debate at play regarding the reliability of phone versus online polling and the MoE is symptomatic of that." Spanish Mandarin or other Chinese Tagalog Arabic Less acculturated Hispanics tend to be more Democratic. "$2,200 " "$1,750 " No "For publicly released polls, yes. For private clients who request anonymity, no." No Funding constraints would prohibit us from conducting such polls. 2 0 2 2 0 0 0 No It's not our proprietary panel; we don't have access to this information. Yes No Only been polling since 2012. Yes $27/hour 55% 22 20+ hours They are required to add them to our do-not-call list and told to never be impolite. varies Yes 0% 100% Our main voice is simply someone who has a great phone voice and good instincts about how to read poll questions. We have also used people with a media background in special situations. $25/hr 50 "Alaska seems unknowable. Colorado still seems hard to believe (I'm probably unskewing that one, sad to say). And who knows about Georgia?" "Academic pollsters -- Do you pay your callers minimum wage? Do you withhold taxes, Social Security & Medicare? Do you match Social Security & Medicare, or do you consider your callers ""independent contractors"" and force them to pay 100%? Does your institution include coverage for callers in its Worker's Compensation Insurance policy?" Each respondent is either a likely voter or not. No. I weight by party registration from the RBS [registration-based sample] list. Turnout by party registration is remarkably stable -- e.g. there was very little shift on that demographic from the 2006 Democratic year to the 2010 Republican year -- whereas self-reported party ID per the exit polls did change. (I have only polled states with party registration). Yes "Have done both in past. 
Now using RBS [registration-based sampling] exclusively for midterms and primaries. Have yet to decide about 2016 general. Note: For other than ""likely voter"" polls (i.e. the policy and quality of life polling that make up the vast majority of our output -- an area FiveThirtyEight isn't as interested in) we still use RDD [random-digit dialing], since those findings are based on the general population" No "We may trim weights as dictated by the makeup of each sample, if necessary. But we do not have a specific, static cut-off that we apply across samples." No No We believe in the true randomization method. No "Our results are processed quickly and even were we inclined to, we could not do so. And we are not inclined to." Bayesian No "Margin of sampling error should not be reported. It's misleading, it's not applicable in 2014, and it mis-informs poll consumers as to the real sources of error that drive differences across polls and pollsters." Spanish 10% to 15% "$4,425, approximately, if in English only" "$4,410 " Need more sample to offset lower response rate(s). Yes "99% of our ""public"" polls are paid for by us. If someone else pays, we are up front about it." No We always have a sponsor as our company is owned in part by the owners of a national newspaper chain and the company owns its own news sites. 2 2 0 2 0 0 0 Yes Sometimes "Sometimes is not a good answer. We did a study of five data-collection methods using the same questionnaire in the same market at the same time. One method was a panel, so I've done it. Even though I told the company we would be comparing the quality of data across methods, the panel was pretty bad. That's the mail panel. The online panel had to be redone because it was managed so poorly. I'm not a fan." Our retention rates are very strong. Yes No "Same techniques -- weighting is routine now, though it used to be occasionally needed, in the olden days." 
Yes Again I don't have responses to all these questions -- it would take a while to get the answers from our phone unit (a separate business unit). "15%-20%+, depending on project and client requirements" Yes 50% 50% Local voice talent By poll: $50 for some; $75 for longer ones. 50 We aren't as convinced as others about the alleged GOP wave. Ask why pollsters use telephone? Ask why they use IVR [interactive voice response]? Ask why they use online? Focus on likely electorate using data in voter file on past participation. "Not by party self-ID. In states with party registration, only if the sample is way out of whack with actual voter-registration statistics." Yes "In congressional districts and state senate, as RDD [random-digit dialing] error is overwhelming in those geographies." No No No No "The only purpose in having such conversations would be to change your numbers to be closer to the results of others. We don't change our results to match or come closer to others. You would be changing the data, and that is just plain wrong." Bayesian No "There's nothing wrong with including the standard MOE in a report (as long as the phrase ""statistical dead heat"" doesn't rear its ugly head). But a total survey error approach is much more legitimate at this point. Also, the general public can't really consume effect sizes and p-values. But it would be much more informative if those were reported as well." Spanish 2x No consistently observable difference. "$8,000 (if part of larger polling program so we have a lot of infrastructure already set up)" "$10,000 (ditto caveats above)" "Use of nonpanel sample means large, geographically targeted base sizes are easier to achieve online than they used to be." Yes AAPOR [American Association for Public Opinion Research] No "What does sponsorship mean? We have media partners that we work with on our polls, but they do not provide any financial support for the polls." 
5 3 2 4 0 1 0 N/A Yes Sometimes This is something that is new for us. We have emails on 30% of all voters. I really don't know but we can request that data for you "No, but again we can probably get that data from our project staff or vendor." Yes No "We have used online surveys more often for specific types of surveys, but not for our political surveys. We have used screens to make sure we have adequate representation of younger age categories, and we have also increased the proportion of households with younger voters to increase the chances of including younger voters." Yes Between $8.50 and $14.75 per hour Yes 50% 50% Professional voice and robotics N/A 51 Because that's what the polls are showing! How do they get cellphone numbers -- is it RDD [random-digit dialing] or RVV? Do they own IVR [interactive voice response] software? Do they own the call center that they make the calls with? How do they handle calling cellphones in the call center since you can't use a predictive dialer? "Generally, our likely-voter screens are in the tradition of the Gallup screens, which have historically used a series of questions to identify likely voters. When clients request it, we can use a system that gives each respondent a weight representing their relative likelihood to vote." "Only when necessary to have the party affiliation within a range that is determined by examining our own polling data from the previous election cycle for governor or president that best applies to the upcoming election, looking at an average of polls for previous similar elections that were close to the election outcome, but most importantly, calculating the base partisan vote for statewide education posts as a percentage of the total vote for governor or president, whichever applies to the upcoming election. 
There has been great consistency in our party affiliation for most polls, and with rare exceptions, we have only had to weight party affiliation by 2 or 3 points, and any party reweighting has only been required about a third of the time." Yes "In election polling, we do this almost exclusively. (Although I should clarify that they are still random samples, just drawn from a more restricted sampling frame.) First, we believe that it simply doesn't make much sense to include people who can't vote in the frame parameters when trying to assess an electoral outcome. We wouldn't include Canadians, for instance, so why would we include non-registered Americans!? Second, voter lists usually provide more information about respondents (like voter-propensity scores, party propensity, etc.) than you'd get from an RDD [random-digit dialing] sample. Even if we don't generally use these scores for weighting, they're valuable to use as validity checks for the data. Third, many major campaign pollsters also seem to favor voter-list-based polling, and have based this decision on empirical evidence they have gathered." No No No No "We do most of our polling in races where there are few or no other polls. But even when we do work in more heavily polled races, I wouldn't want my judgment influenced by somebody else's choices. I do discuss polls with other pollsters after the fact, however." Bayesian No Spanish c. 10% "NOTE that our main polling is asked in English only. But other smaller polls we have run, such as polling for newspapers in Florida, incorporate Spanish. As you would expect for acculturated vs. unacculturated groups... there is a lot of variation in demographics and political attitudes." "Automated: about $500. Live interview: about $2,000." About $200. Cost of contacting cellphones. 
Yes "If it is published, we usually provide it to our media clients as an exclusive before other or all media outlets, and we always disclose who funded the poll if it was not commissioned by our media clients." No 5 4 1 5 0 0 0 "The larger staff on the election-management side is highly diverse in terms of sex, orientation, race/ethnicity, age, education level, etc. The much smaller polling operation evolved from that, based on people's specific training, skill sets, interest, and availability. Based on those parameters, it just happened to be a much less diverse subgroup of the larger organization." Yes Sometimes "We cap the number of OUR polls that a panelist can complete in a given period of time, but we do not cap the number of other polls that a panelist might participate in." Yes No "We supplement with cells, but because of the movement away from landlines, not because of decreasing response rates." Yes "I pay call centers about $22 per hour, all inclusive." Not sure Not sure Depends upon the call center -- but they are established firms that I am sure all have good training & monitoring in place. Not sure We remote call monitor about 15% Yes about 20% about 80% "TV news anchors combined with professional ""voice-over"" talent" We pay the talent annually. 51 Just a guess. "How many hours did it take you to complete this ""20 minute"" survey? (That's not really a suggested question, obviously!)" "It depends on the type of election, how close the poll is to Election Day, and whether there are voter propensity scores available on the particular voter file. So, there's no single formula. But typically, we first apply a liberal likely-voter screen when drawing a sample. 
For example, in state-level Senate & gubernatorial polls we have conducted within the past two weeks, we have selected our random samples from among registered voters who have voted at least once in the past three even-year elections (2008, 2010, or 2012), or first registered in December 2012 or more recently. We have experimented with including (registered) individuals with no recent history of voting, but their response rate tends to be negligible, and those who do respond tend to select ""undecided"" in matchup questions at a very high rate. This effect may be heightened in IVR [interactive voice response] polling (which is the method we use nearly exclusively), but I don't have live-caller numbers with which to compare. Our second step is a self-rated voter-likelihood question. When we are more than a few weeks out from an election, we usually use that question simply as a crosstab, rather than a weighting variable. But in a poll close to an election, particularly when early voting is underway, we will use the likelihood question as one of our weighting factors, advantaging people who say they have already voted, and reducing the impact of those who say they're not yet sure when or how they're voting (if they say they're NOT voting, they are typically dropped). We have found that self-identified early voters tend to track with final results pretty closely. On the other hand, those who aren't sure when or where they're voting also tend to have high ""undecided"" numbers, so weighting them down affects the undecided number much more than the head-to-head candidate numbers. Finally, in a general-election poll just before an election, we may also apply a voter propensity score adjustment (we wouldn't do this in a primary or a local election, because nearly everyone in those samples is likely to have a very high propensity score, and there's very little variability in those scores). 
However, applying a propensity adjustment often makes the weights more extreme for a small number of respondents (i.e., high-propensity individuals from low-propensity demographic groups) without changing the topline result in any appreciable way. In such cases, we tend to stick with our demographically adjusted results without adding propensity scores to the mix." "There is not ""party registration"" in every geography in which we poll, so it's impossible to do any one thing consistently and uniformly." Yes It is impossible to do an RDD [random-digit dialing] poll in a congressional district. No No No No Frequentist "It's what I was trained to do in graduate school. And it seems to work well in polling. (Ironically, it probably doesn't work as well in psychology, which is the field in which I was trained.) We don't, however, go out of our way to stress the binary nature often associated with frequentist methods (i.e., is the result within or outside the MOE [margin of error]?) -- I prefer to opt for an effect-size approach over a significance-testing approach to the degree possible." Yes "If a poll uses Bayesian sampling techniques, then the margin of error should be reliable." Spanish Depends on the language and the proportion. It doubles the cost on an individual interview. Depends on the sample size Depends on the sample size Yes "If the polls are released to the public. One exception is if we choose to piggy-back some political questions for our own use onto a private poll. If we decide to release the political findings, we do not identify the client whose poll we added them to. That's primarily a contractual requirement." Yes "At times in the past, we have conducted a poll on a congressional or state legislative race that is hotly contested -- if there has not been polling done, if we question the results of other polls that were published, or if we are just curious about an election." 8 7 1 7 0 1 0 Irrelevant. Yes Yes Good luck with that! 
Yes No "We've had to call more people, but the techniques remain the same." Yes I use an independent call center. I pay by the project. Yes Rarely. And not for election polls 100% 0 Can't remember. 51 "Looks like the terrain is just too tough for Democrats to hold on to their majority, but enough races are close that Democrats should be able to pull out a couple of the close ones, which would prevent the GOP from getting a bigger majority." Why the fuck they stay in this god-awful business Several screener questions: 1) Are they registered; 2) Have they already voted absentee or voted early; 3) Five-point scale -- keep top two. "We do NOT weight by self-described party affiliation, but we do weight on voting history based on declared party affiliation in primaries." Yes More accurate; more cost-effective. No No Sometimes For anecdotal information No Frequentist Yes "By pollsters, not by journalists. And would prefer all report design effect so as to add the impact of sampling error to the MoE." Spanish Depends upon the percentage of Hispanic voters in the electorate. I haven't noticed a consistent pattern. "Depends upon the sample size and questionnaire length, etc." "About 20%, less on average." "Primarily, the need to include an increasing number of cellphone interviews." Yes Obligatory. Yes We conduct public polls for publicity. 10 4 6 9 0 0 1 Yes Yes We subscribe to a company that provides fresh panels for each survey. Yes No Yes "I work with a vendor, so this information is not at my command" Yes We use the same male voice for IVR calls because of the quality of his voice. No "It is provided as part of the cost of the project, usually without cost." 51 "Use a version of Pew's series -- 4 questions on likely, knowledge of election, politics and candidates -- which yields a score. We, as election approaches, keep only those with a progressively higher score. Then at the end, we ask surety to vote 1 to 10, so we have a front and back score." 
"We weight by party affiliation. We usually conduct completely separate surveys with only the question of, ""What is your political affiliation?"" and then the usual demographic questions. In states where there is no registration by party, this has proved to be essential." Yes "That is the way we have always done it, and it has been successful for us, not only for our statewide polls for our media clients, but also for polls we do for our other clients when they are having us test ballot proposals, such as school districts, community colleges, counties, local governments, libraries and transit authorities. Also, we maintain a voter file with complete vote history, which includes new registrants." Yes 10%-15% No Yes For panel design or reusing special groups at a later time. No Frequentist Yes "It's credible as far as it goes. The problem is journalists are crap with numbers, so they don't really know how to interpret it. A lead is a lead, not a 'statistical tie.' And in many cases questionnaire design may introduce more error than sampling." Spanish Depends. Yes I'm the methodologist. I actually don't have this information. Yes Transparency Yes "We do our own; as such, we are the sponsor." 10 7 3 8 1 1 0 Yes Yes Yes No Yes "Since we use contractors, I don't have this information readily available." Yes 51 "We have a database of voters with vote history. Each sample that we pull includes households with any general-election vote history of household members, or households with new registrants. Our screen asks if the respondent voted in either of the past two general elections, or both, and if the respondent voted in neither, they are asked if the reason they did not vote in either election is because they were too young or not registered to vote at the time of the election, in which case all respondents who qualify continue and are asked their intent to vote in the upcoming election. 
If they are very certain, somewhat certain, or will likely vote, they continue. If not, their interview is terminated." "Where there is party registration, weight to best estimate of registration in likely electorate; weight party ID to rolling average of our polls." Yes There is a difference in taking a random sample of registered voters from a source such as Aristotle [http://aristotle.com/] and then randomly dialing from that list. Most of our colleagues whom we normally see polling in our area do the same. Yes "For the majority of our surveys, the weights are trimmed if they exceed a certain value, which can vary survey to survey." No Yes "Only with the respondent's pre-permission, and only under idiosyncratic circumstances. Not systemically." Yes We often get called by others to compare notes since we're one of the few willing to publish results. Frequentist Yes "It's good enough given that the average media audience has no clue how to correctly interpret the ""horsey-ducky"" version." Spanish Double "They really don't differ from Hispanics who take polls in English; we do get more responses from the bloc in Spanish, though." N/A unknown N/A Yes We do not poll for partisan organizations or campaigns. Yes We have only done this a couple of times. We did it to test out some methodological adjustments and to increase our public exposure. 14 8 6 12 1 0 1 It changes. Yes Yes Yes Yes More weighting and switch to voter file. Yes "We contract for our live interviews, but I believe they are paid above minimum wage, depending on experience." "Unknown, since we contract for the service." Unknown "Unknown, but there are rarely any problems." "We stress the value of their opinion in a variety of tactful ways, but if they are upset, we just tell them they will be removed from our calling list, which we do." 
25% Yes 52 Because Mark Pryor [Arkansas Democratic candidate] and Bruce Braley [Iowa Democratic candidate] ran shitty campaigns and the map was a bad-luck draw for Democrats. "We have experimented with likely-voter screens that contain as many as six questions and as few as one question. There is no simple relationship between the number of screening questions and the accuracy of the final vote estimate. In 2014, we are using a single question which offers respondents a range of options from ""absolutely certain"" you will vote to ""absolutely certain"" you will not vote." "Yes, based on prior elections." Yes Use voter-registration list matched to cellphone numbers to get cell voters in statewide polls. Random-dialing for cellphone numbers just isn't practical. Also use them in elections such as congressional or state legislative races that involve district boundaries. Yes "I try not to weight something up by more than about 15% of what is in-tab. If I'm targeting to get 200 black voters in a sample and end up with 175-180, I'll weight. If I only get 150, I'll selectively oversample. In those circumstances I will use a voter list to target." Sometimes "In some states, where voters register by party, data is available showing exactly how many voters in each party are white, black, Hispanic or other. In those states, we keep an eye on it. Most common is making sure Hispanic voters generally reflect their party registration and white vs. black among Democratic voters is relatively in line." "Not sure what you mean by that. We sometimes conduct panel studies (usually not around elections, though)." Yes Yes is too much of an answer. It has happened. I can count on one hand over 25 years. I think the pressure for conformity is a problem in the industry. 
Sometimes or Other (please specify) BCS Computers Yes "That is the error rate for responses near 50 percent, but when a response to a specific question is much closer to 0 or 100, the smaller error rate should be noted, depending on the importance of the question asked." French Minimal "We have very little cost on IVR [interactive voice response] -- maybe a few hundred dollars. We own the data and one of the largest robocall systems in the country. For live calls, the cost with cellphones is in the $3,000 range. We charge $2,500 for IVR and $7,000 for live interviews." N/A In some way Yes 200 Yes N/A N/A Yes Yes The raking has changed somewhat. The main issue has been Hispanic non-response, requiring a slight further adjustment to age (i.e., weighting up Hispanics to the population proportion tends to overweight young adults -- so the final weighting has to be smoothed a little more). Yes We use a call center. The cost is masked to us. Unknown Unknown Unknown Politely All live interviews are recorded and can be re-examined in the event of a complaint. 52 Same reasons as before -- whatever those were. "We primarily use voting history. For new voters, we assign voting likelihood based on a number of demographic factors (age, gender, ethnic origin, and 64 others)." Yes. State board of elections turnout figures. This is for pre-election polls. Yes "Vastly richer and more accurate information about past participation and many other useful variables for modeling, etc." Yes "I try to stay under 2. I want to stay as close to the actual answers as possible. Weighting is a distortion of the respondents' answers. The more the weight, the further away from what the respondents said." Sometimes In Voting Rights Act states [http://www.civilrights.org/voting-rights/vra/map.html] with party registration where we have very accurate data. Sometimes or Other (please specify) It depends on goal and situation. 
Yes "While the margin of error on early election polls has little bearing on final results, we find that it does indeed play its proper role as elections draw near." Arabic About 30% to 35% "We do a statewide mail poll of whatever is on the statewide ballot, whether that's a Senate race, governor, a referendum, whatever. So any cost increases are due to printing and postage increases for 12,000-plus ballots and twice that many envelopes." The Elway Poll is funded by subscribers. Yes 14 employees in house (20 employees in the call center) 8 6 Yes New for us. Yes Yes We added online and mobile. Yes "We would have to get that from our vendors. [Followup from one of the vendors: ""Pay is dependent upon geographic location, tenure and performance (ability to deal with difficult respondents, attendance and floor behavior) and varies from $8 per hour up to over $11 per hour. All interviewing staff qualify to receive vacation and sick benefits as well as paid breaks and other company-designed benefits.""]" "We would have to get that from our vendors. [Followup from one of the vendors: ""The percentage of males (as well as other demographic characteristics) varies by geography as well but overall ranges between 30 and 40%.""]" "We would have to get that from our vendors. [Followup from one of the vendors: ""The average age varies by center as well, with our Ohio and California call centers trending older (40+ ish). Texas, Florida and Washington trend younger.""]" "We would have to get that from our vendors. [Followup from one of the vendors: ""All interviewers complete four hours of classroom training followed by at least two hours on a training survey before being able to work on client-sponsored surveys. Additionally, each interviewer is briefed on the specific survey that they are conducting. Briefing includes discussion of the topic, geography involved, special circumstances which may exist among the respondents and pronunciations. 
No interviewer is allowed to leave the briefing until they have properly pronounced key words to the briefing supervisor.""]" "We would have to get that from our vendors. [Followup from one of the vendors: ""As part of the initial training, interviewers are taught to be polite and accommodating during all interactions with respondents. They are taught from the perspective of, 'Imagine if it was you who just was woken up with the call.' If the respondent becomes increasingly abusive, they are instructed to inform their supervisor and to put the phone number on our internal do-not-call list. Numbers on this list are not recalled for 12 months.""]" We monitor interviews from our office every night. 52 Yes Yes "If you weight a subgroup by more than a factor of about 20 percent to no more than 25 percent of the original count of the subgroup, you are creating data that is unreliable. When the subsample is too small to reflect the voting N-size of a particular group, supplement the data by making more phone calls or gather more data by other means, but don't make a mountain out of a molehill of small N-sizes of subgroups, because the higher error rate of the smaller number is carried with that reweighting and seldom noticed by most observers, but it can determine the overall outcome of the poll." Yes "Our system combines numerous demographic, psychographic and voting-history data fields into 335 distinct categories. So, yes, we do weight by race and party together, but there are many more data points involved." Sometimes or Other (please specify) Trick question? Yes None Not lately. Yes 3 full-time and 1 part-time on the analysis side -- field is farmed out or hired ad hoc 3 1 4 0 0 0 "This is a great question. This is a big issue in our field. Few minorities end up choosing quant-related fields in graduate school, so the pool is small." Yes "I don't have this information to hand, and it would take our online team a few days to get me all the details. 
Panel attrition is an issue, though, across the industry." Yes Varies Varies 3 days of training "Role-playing, demonstrations, mentoring" 100% 52 Yes Reasonable; case by case, we also do sensitivity analysis to see what difference it makes. Sometimes or Other (please specify) Yes and yes. "It is credible as far as it goes, but it is typically given the one-sentence +/-. Explaining the margin of error and other sources of potential error takes up air-time or column inches that media outlets don't have to spare. Or the patience to explain." None Negligible. "31, 5 full-time staff" "9, 2 full-time staff" "22, 3 full-time staff" "26, 4 full-time staff" 3 1 1 Yes About 20% 51 or 52 The trendlines on polling in key states have shifted to the GOP in the past couple of weeks. Yes Usually between 0.5 and 4.0. Not sure what that means... None "In non-election polls where I have used Spanish interviews, the difference has been negligible." five four one five none none none Yes N/A N/A Yes "When our weighting processes push results too far in either direction, we tend to believe our sample is too far from the norm to trust the results." None varies about 1/2 about 1/2 all none none none Yes Yes "I don't have the demographic breakdowns readily available, but we are a large organization that is much broader than our political polling unit. We subcontract our telephone surveys. Whether we included or excluded our field houses would affect the demographic profile of our employees." I have gone to a network of independent contractors and consultants.