Political polling is risk management, not data collection

A campaign team commissions a poll three weeks before an election. The numbers come back quickly. The sample size looks solid. The data looks clean. They adjust their strategy based on the results.

Two weeks later, a journalist asks how the respondents were selected. Whether anyone verified they were real voters. Whether the methodology could be independently reviewed. The campaign manager realises nobody asked those questions before the fieldwork started.

The problem was not the data. The problem was that nobody treated polling as a risk exercise.

What polling is actually for

Most organisations treat polling as an input to strategy. They want numbers that tell them what voters think, how public sentiment is shifting, or whether a policy position has support. That is half the picture.

Polling is also a liability. Every poll a political party commissions, every piece of community consultation a government body publishes, every public opinion survey an advocacy group releases becomes an artefact that can be questioned, audited and publicly scrutinised.

The question is not just “what did the numbers say?” It is “can you explain how you got them?”

For government work, this is not theoretical. Freedom of information requests, parliamentary questions, and media scrutiny all create situations in which polling methodology must be explained and defended. The Australian Polling Council exists because the industry recognised that its transparency standards needed to improve. Its code of conduct requires members to publish detailed methodology statements, including weighting procedures, question wording and response rates.

Where risk hides in polling

Four specific risks sit beneath every polling project. They are rarely discussed until the results are challenged.

Sample composition risk.

Who actually answered? Were they verified? How were they selected? If the sample skews and the methodology cannot explain why, the results are exposed. In automated telephone polling (IVR), there is no way to verify respondent identity. The system accepts whoever presses the buttons. In online panels, the problem is different but equally serious. A 2025 study published in the Proceedings of the National Academy of Sciences found that AI bots can now complete online surveys with a 99.8% pass rate on standard quality checks. The bots maintain consistent demographic personas, adjust their answers based on previous responses, and are essentially undetectable.

Interpretation risk.

Did respondents understand the questions? Were complex or sensitive questions clarified in real time, or were they left to be misunderstood? In both automated polling and online surveys, there is no mechanism for clarification. If a voter is unsure whether a question about “government policy” refers to the state or federal government, the ambiguity is baked into the response. The answer is recorded as given.

Respondent authenticity risk.

This is the newest and fastest-growing problem. Beyond the bot issue in online panels, a separate study found that 34% of online respondents admitted to using generative AI tools to answer open-ended survey questions. In IVR polling, there is no way to confirm who pressed the buttons or whether they were paying attention. Response rates in IVR can fall below 2%. When 98 out of 100 people hang up, the question is not just whether the data arrived fast. It is whether it represents anyone.

Audit trail risk.

Can you produce a methodology statement that explains how the poll was designed, conducted and validated? For government and public sector work, this is not optional. For political work, it is increasingly expected. Media organisations, opposition researchers and data journalists now routinely request methodology documentation. If it does not exist or cannot withstand scrutiny, the results are undermined regardless of what the numbers said.

What risk management looks like in practice

A polling approach designed for defensibility has specific characteristics. The data collection is interviewer-led. Questions are clarified in real time when respondents are unsure. The interviewer verifies the respondent is real, eligible and engaged before recording any data.

Calling procedures are structured and documented. Sampling methodology is transparent. Back-checking and validation are built into the process, not bolted on after the fact.

The point is not that one method is universally better than another. Different research questions call for different approaches. Online surveys and automated methods serve perfectly well for quick pulse checks, large-scale consumer tracking and low-stakes internal research.

But when results will be published, quoted, challenged or used to make decisions that carry reputational or political risk, the methodology needs to be explainable. And the respondents need to be verifiable.

Before the results are challenged, or after

When a campaign team, government department or advocacy group commissions a poll, they are not just buying data. They are accepting a level of methodological exposure. The sampling design, the data collection method, the weighting approach and the respondent verification process all become part of a public record the moment results are released.

The only question is whether they thought about that before fieldwork started, or after someone asked a question they could not answer.

If you are planning a polling project and want to talk through methodology options, we are happy to help.

Sources and references

  • Westwood, S.J. (2025). “The potential existential threat of large language models to online survey research.” Proceedings of the National Academy of Sciences, 122(47).
  • Stanford/NYU study on AI-assisted survey responses (2024): 34% of online respondents reported using AI to answer open-ended questions.
  • Australian Polling Council Code of Conduct (australianpollingcouncil.com)
  • Lowy Institute Poll 2025 Methodology (mixed online + 1% CATI for offline populations)
  • AAPOR guidance on IVR disclosure and transparency requirements
