Students Fight Digital Robots and Fake Accounts on Twitter

Ash Bhat and Rohan Phadte, computer science students, recently stared at a screen in their Berkeley apartment showing the Twitter account of someone called “Red Pilled Leah.” They suspected she was a “bot,” short for robot.

Red Pilled Leah joined Twitter in 2011 and had 165,000 followers. The majority of her 250,000 tweets were retweets on political topics. Was she a human with strong opinions, or an automated account that was part of a digital army focused on riling up Americans over divisive social and political issues?

Internet companies are under pressure to do more to crack down on such automated accounts, following scrutiny of Russian-backed efforts to influence the last U.S. presidential election. But companies are struggling to distinguish malicious bots from merely opinionated human users.

The two students at the University of California, Berkeley, developed a software program called botcheck.me that looks for 100 characteristics in Twitter accounts that they say are common among bots.

Among them: tweeting every few minutes, gaining a large number of followers in a short span, and retweeting other accounts that are likely bots. In addition, bot accounts typically endorse polarizing political positions and propagate fake news, they say.
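The students haven't published their code, but as a rough sketch of how a handful of such heuristics could be combined into a score, consider the Python below. The AccountStats fields, thresholds, and equal weighting are invented for illustration; they are not botcheck.me's actual features.

```python
from dataclasses import dataclass


@dataclass
class AccountStats:
    """Summary statistics for a Twitter account (hypothetical fields)."""
    tweets_per_hour: float      # average posting rate
    followers_per_day: float    # average follower growth
    retweet_ratio: float        # fraction of tweets that are retweets
    bot_neighbor_ratio: float   # fraction of retweeted accounts already flagged as bots


def bot_score(stats: AccountStats) -> float:
    """Combine a few heuristic signals into a 0-1 "bot-likeness" score.

    Thresholds and weights here are made up for illustration; a system
    like botcheck.me reportedly weighs roughly 100 such characteristics.
    """
    signals = [
        stats.tweets_per_hour > 20,        # tweeting every few minutes
        stats.followers_per_day > 500,     # rapid follower growth
        stats.retweet_ratio > 0.8,         # mostly retweets
        stats.bot_neighbor_ratio > 0.3,    # amplifies other likely bots
    ]
    return sum(signals) / len(signals)


if __name__ == "__main__":
    suspect = AccountStats(tweets_per_hour=35, followers_per_day=800,
                           retweet_ratio=0.9, bot_neighbor_ratio=0.4)
    print(f"bot-likeness: {bot_score(suspect):.2f}")  # -> 1.00
```

A real classifier would learn its weights from labeled accounts rather than hard-coding thresholds like these.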

Concern about Twitter bots

Twitter says that less than five percent of its 69 million monthly active users in the U.S. are automated, but some researchers have pegged the share of bots at closer to 15 percent.

U.S. lawmakers say that bots on Twitter played a role in trying to upend the democratic process.

“Bots generated one out of every five political messages posted on Twitter over the entire presidential campaign,” said Senator Mark Warner, a Democrat from Virginia.

Facebook requires its users to prove their identity. To get an account on Twitter, on the other hand, a user needs a phone number. And Twitter allows automated accounts for legitimate purposes, such as for companies to provide customer service or for public safety officials to spread the word regarding a possible danger.

But Twitter is also trying to crack down on bots and “other networks of manipulation,” the company said in a blog post.

Nonhuman Twitter behavior

Bhat and Phadte have worked on other projects, such as developing technology to determine the political bias of a news article or to detect fake news on Facebook.

They turned to Twitter after they noticed that many accounts tweeting about politics appeared to be “nonhuman,” Bhat said. These accounts gained a large number of followers quickly and tweeted and retweeted frequently, about five times as often as a typical human account. They promoted polarizing views and fake news.

Launched in October, Botcheck.me has a 93.5 percent accuracy rate, the students say. However, they have heard from real people complaining that their accounts have been falsely identified as “bots.” When the algorithm makes a mistake, the two students say they investigate what went wrong and improve the program.

Phadte says it matters if a Twitter account is a human being or a robot.

“People are seeing political, polarizing opinions that aren’t accurate,” Phadte said. “People are getting angry at each other about stereotypes that are not really true.”

Bot-like characteristics

Bhat pressed a blue button next to Red Pilled Leah’s Twitter account. Botcheck.me scanned the account’s Twitter history and ran the tweets through an algorithm to predict whether the account was a bot.

Sure enough, Botcheck.me said Red Pilled Leah, who claims to be an entrepreneur with a master’s degree in psychology, exhibits “bot-like characteristics.”

The students said they have contacted Twitter about their software, but haven’t heard back. Twitter didn’t respond to a request for comment from Voice of America. However, the company has said that it can’t share details of how it’s determining which accounts are bots.

From their vantage point, the students say the bots are getting more sophisticated. A whole network of accounts will retweet a single tweet to spread a message quickly. And programmers can change bots’ behavior as detection methods improve. But the students say that only makes it more important to determine when messages are being spread by malicious actors.
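As a minimal sketch of one way that kind of coordinated amplification could be flagged, assuming you had (account, tweet, timestamp) retweet records, consider the Python below. The flag_bursts function, its thresholds, and the sample data are all hypothetical; this is not the students' method.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# (account, original_tweet_id, retweet_time) records; hypothetical sample data
retweets = [
    ("acct_a", "tweet_1", datetime(2017, 11, 1, 12, 0, 5)),
    ("acct_b", "tweet_1", datetime(2017, 11, 1, 12, 0, 9)),
    ("acct_c", "tweet_1", datetime(2017, 11, 1, 12, 0, 14)),
    ("acct_d", "tweet_2", datetime(2017, 11, 1, 15, 30, 0)),
]


def flag_bursts(records, window=timedelta(seconds=60), min_accounts=3):
    """Flag tweets retweeted by many distinct accounts inside one short window.

    A crude proxy for a retweet network pushing a single message; a real
    detector would also weigh account age, timing regularity, and so on.
    """
    by_tweet = defaultdict(list)
    for account, tweet_id, ts in records:
        by_tweet[tweet_id].append((ts, account))

    flagged = []
    for tweet_id, events in by_tweet.items():
        events.sort()  # order retweets of this tweet by timestamp
        first, last = events[0][0], events[-1][0]
        accounts = {a for _, a in events}
        if last - first <= window and len(accounts) >= min_accounts:
            flagged.append((tweet_id, sorted(accounts)))
    return flagged


print(flag_bursts(retweets))  # -> [('tweet_1', ['acct_a', 'acct_b', 'acct_c'])]
```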

“The reason why this really matters is that we formulate our views based on the information we have available to us,” Bhat said of the social media content. “When certain views are propagated on the network that is very artificial, it tends to influence the way we think and act. We think it is very horrific.”

To use botcheck.me, users can download a Google Chrome extension, which puts the blue button next to every Twitter account. Or users can run a Twitter account through the website botcheck.me.


