In this age of fake news and widespread cynicism, it’s perhaps unsurprising that online polls and market research discussions aren’t always trusted. In fact, there’s good reason for viewing them with suspicion. Automated scripts are being used to distort the results of polls around the world, including a recent debate about the future of net neutrality. Of the 22 million comments submitted to the Federal Communications Commission (FCC) in June, 82.6% were identified as being sent by bots, rendering the results largely meaningless.
Putting aside any debate about why online polls are being distorted (and who is responsible), identifying this phenomenon is currently quite easy. The FCC survey received a million responses from ‘users’ with pornhub.com email addresses, even though that domain provides no email service. Huge swathes of FCC replies contained data that betrayed their computerized origins; one common theme was identical wording with different letters randomly capitalized, while thousands of messages had been sent in periodic bursts with near-identical time stamps.
Unfortunately, the ever-increasing sophistication of algorithms (and hackers) means votebots and chatbots will gradually become more adept at blending in. It won’t be long before the generic wording of current submissions is subjected to greater alteration, making it harder to batch-match comments. Spammers are now feigning authenticity by using contact details from stolen customer databases, following a spate of high-profile data breaches at firms like Yahoo and Verizon. And bots will be programmed to distribute responses randomly over longer timeframes, helping false responses bury themselves amongst genuine ones.
Deus ex machina
So is there any hope for online polls and discussions in future? Can we still trust website surveys, or believe the comments posted on message boards? The short answer is yes. By designing polls and forms more carefully, it’s possible to weed out automated responses without deterring genuine involvement.
These are some of the techniques recommended by experts for ensuring the accuracy and legitimacy of online surveys and forums:
Only accept votes and comments from registered users. While foreign agents might still try to distort a debate with biased comments, it’ll be harder to spam-bomb forums and online polls if every account has to be registered with a unique email address.
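As a minimal sketch of that registration check, the snippet below rejects any sign-up that reuses an email address already on file (the function names and in-memory store are illustrative; a real system would persist accounts and verify each address by sending a confirmation email):

```python
# Hypothetical registration gate: one account per unique email address.
_registered: set[str] = set()

def normalize(email: str) -> str:
    """Treat Voter@Example.com and voter@example.com as the same address."""
    return email.strip().lower()

def register(email: str) -> bool:
    """Return True for a new address; reject duplicates."""
    key = normalize(email)
    if key in _registered:
        return False
    _registered.add(key)
    return True
```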
Block respondents from taking a survey more than once. Duplicate protection often involves adding a cookie to a respondent’s web browser, requiring spammers to clear their cookies after every vote – or use numerous different browsers.
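The cookie technique can be sketched in a few lines: before counting a vote, check the request’s Cookie header for a marker, and set that marker once the vote is accepted. The cookie name and lifetime below are assumptions for illustration:

```python
from http.cookies import SimpleCookie

VOTED_COOKIE = "poll_voted"  # hypothetical cookie name for this poll

def already_voted(cookie_header: str) -> bool:
    """Check the browser's Cookie header for our duplicate-protection marker."""
    jar = SimpleCookie()
    jar.load(cookie_header or "")
    return VOTED_COOKIE in jar

def voted_set_cookie() -> str:
    """Set-Cookie value to send back after a successful vote (1-year lifetime)."""
    return f"{VOTED_COOKIE}=1; Max-Age=31536000; Path=/; SameSite=Lax"
```

As the article notes, this only raises the cost of spamming: clearing cookies or switching browsers defeats it, which is why it is usually combined with other checks.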
Analyze social media responses for telltale signs of bot activity. Bot accounts rarely use profile images, and often post in the middle of the night. They lack followers despite following many accounts themselves, and only post about one or two topics.
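Those telltale signs lend themselves to a simple scoring heuristic. The sketch below counts how many red flags an account exhibits; the thresholds are illustrative guesses, not values tuned against real data:

```python
def bot_score(account: dict) -> int:
    """Count telltale bot signs for a social media account; higher = more suspicious."""
    score = 0
    if not account.get("profile_image"):          # no profile picture
        score += 1
    if account.get("followers", 0) < 10 and account.get("following", 0) > 500:
        score += 1                                # follows many, followed by few
    hours = account.get("post_hours", [])         # hours-of-day of recent posts
    night = [h for h in hours if 2 <= h <= 5]
    if night and len(night) > len(hours) / 2:     # mostly posts in the small hours
        score += 1
    if len(set(account.get("topics", []))) <= 2:  # fixated on one or two topics
        score += 1
    return score
```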
Generate a one-time link. Rather than allowing anyone to participate in an online poll, a unique single-use token is generated and embedded in each invitee’s link, typically delivered by email. People who don’t click on the link can’t contribute, which prevents duplication and stifles bot activity.
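A minimal sketch of single-use links, assuming an in-memory token store (a real deployment would persist tokens and tie each one to an invitee):

```python
import secrets

_issued: set[str] = set()

def issue_link(base_url: str) -> str:
    """Create a survey link carrying a fresh single-use token."""
    token = secrets.token_urlsafe(16)
    _issued.add(token)
    return f"{base_url}?token={token}"

def redeem(token: str) -> bool:
    """Accept each token exactly once; later attempts are rejected."""
    if token in _issued:
        _issued.remove(token)
        return True
    return False
```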
Use verification bots won’t understand. Traditional Captcha forms with distorted lettering are now unpopular, but alternatives involve simple tasks like rotating a picture or answering a sum. Even basic interpretation tasks will stump most bots.
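The “answer a sum” idea can be sketched as a tiny server-side challenge: generate a question, keep the expected answer on the server, and compare it with what the respondent types. (This is a toy illustration; determined bots can parse plain-text arithmetic, so real deployments use harder interpretation tasks.)

```python
import random

def make_challenge() -> tuple[str, int]:
    """Return a human-readable question and the expected answer (kept server-side)."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", a + b

def check_answer(answer: str, expected: int) -> bool:
    """Tolerate whitespace; reject anything that isn't the right number."""
    try:
        return int(answer.strip()) == expected
    except ValueError:
        return False
```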
Run IP address tracking. This is a flawed system, since it prevents voting by multiple people connected to Wi-Fi hotspots in public environments like cafés or colleges. However, it also stops a single device repeatedly contributing to a poll or form.
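In its simplest form, IP tracking is just a set lookup: record each address that has voted and refuse repeats. The sketch below makes the café caveat visible — everyone behind one shared address counts as a single voter:

```python
_seen_ips: set[str] = set()

def accept_vote(ip: str) -> bool:
    """Allow at most one vote per IP address (shared hotspots are over-blocked)."""
    if ip in _seen_ips:
        return False
    _seen_ips.add(ip)
    return True
```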
Use rate limiting. To resolve the Wi-Fi issue above, a few lines of code on a server can restrict the speed at which votes are accepted from a particular IP address. It can also respond to volume breaches with progressively longer automatic timeouts.
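Those few lines of server code might look like the sketch below: a sliding-window limiter that allows a burst of votes per IP, then imposes a timeout that doubles on every repeat breach. The window size, vote cap, and base timeout are illustrative values, not recommendations:

```python
import time
from collections import defaultdict, deque

WINDOW = 60.0        # seconds covered by the sliding window
MAX_VOTES = 5        # votes allowed per IP within the window
BASE_TIMEOUT = 30.0  # first penalty; doubled on each repeat breach

_history = defaultdict(deque)   # ip -> timestamps of recent accepted votes
_blocked_until = {}             # ip -> monotonic time when its timeout ends
_breaches = defaultdict(int)    # ip -> how many times it has hit the cap

def allow(ip, now=None):
    """Return True if a vote from this IP should be accepted right now."""
    now = time.monotonic() if now is None else now
    if now < _blocked_until.get(ip, 0.0):
        return False                        # still serving a timeout
    q = _history[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()                         # drop votes outside the window
    if len(q) >= MAX_VOTES:
        _breaches[ip] += 1                  # progressive penalty: 30s, 60s, 120s...
        _blocked_until[ip] = now + BASE_TIMEOUT * 2 ** (_breaches[ip] - 1)
        return False
    q.append(now)
    return True
```

Unlike a hard one-vote-per-IP rule, this lets a café full of legitimate voters trickle through while still choking a script firing hundreds of submissions a minute.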
This article was brought to you by Midphase. For shared hosting, cloud servers and 24/7 support, visit our site at www.midphase.com