In the 1950s, the British computer scientist Alan Turing wanted to see if a machine could behave like a human. So he came up with a test in which a panel of judges would converse with the machine. If it tricked at least a third of the group into thinking it was human, it passed. Originally called the imitation game, it's now known as the Turing test. For decades no machine could pass, but in 2014 a computer program called "Eugene Goostman" convinced a team of judges that it was a 13-year-old boy.
Fast forward to today: computer programs like Eugene play a key role on the internet, where they are known as bots, short for robots. A bot is a computer program set up to do a task so a human doesn't have to. Bots can respond to automated customer queries, or help companies post articles on multiple platforms at the same time. And now, they play a vital role as social media bots.
The Dark Side of Social Media Bots.
But there is a darker side to them too. Armies of bots disguised as humans can be hired to hijack and manipulate debates by spreading false information online, or to support groups inciting hate speech. The web has become a primary platform for discussing social and political views, and the role of bots hangs in the balance: to what extent will bots be deployed as tools for efficient customer service and other legitimate uses, and will those intent on manipulating public opinion double down on bots and, in the long term, endanger democracy? Bots became popular in the mid-90s, when digital giants like Google, AOL and Microsoft took the global economy by storm and leaned on these automated programs to streamline their services. You can have bots that scrape websites to check them, archive them or catalog them. There are bots that create poetry, and bots that create art.
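To illustrate the benign kind, a website-checking bot of the sort described above can be sketched in a few lines of Python. This is a minimal sketch, not any particular service's implementation; the function name and user-agent string are illustrative assumptions.

```python
from urllib.request import Request, urlopen
from urllib.error import URLError

def check_site(url: str, timeout: float = 5.0) -> bool:
    """A tiny 'checker' bot: return True if the site responds with HTTP 200."""
    # Well-behaved bots identify themselves via the User-Agent header.
    req = Request(url, headers={"User-Agent": "example-check-bot/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        # DNS failure, connection refused, timeout, etc.
        return False
```

A real checker or archiver would run this on a schedule over a list of URLs and respect each site's robots.txt, but the core loop is no more complicated than this.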
So there are all kinds of uses for bots. In 2019, bots accounted for nearly one-third of all internet traffic, but the double-edged nature of these programs becomes particularly evident when it comes to politics. The malicious bots, the social media bots that are part of the conversation around political interference right now, are social media accounts set up to pretend to be people. The idea is to fool people into thinking there's a real person on the other end, whereas in fact it's one of a collection of fake accounts. In recent years bots have been accused of shifting people's perceptions of political events and even swaying an election or a referendum result. But while this feels like a new phenomenon, for some experts there's really nothing new about it.
Manipulating the Masses, Not a New Concept.
When the printing press was developed, there was initially an explosion of enthusiasm that it would bring a whole new wave of learning to the world. Then came an explosion of paper propaganda, and gradually, over time, people started realizing that just because something is written on paper doesn't necessarily mean it's true. With social media, we've seen much the same development over the last decade. Social media has given a vast online audience a platform for political conversation and has helped spark some of the biggest political movements of the century.
The Arab Spring in 2011 was hailed as the social media revolution, because you had people getting together and discussing online what to do about their political activism. In the political sphere, authoritarian actors in particular realized that people are talking about this, and:
- we’re not in the game
- we’re not in the conversation
- and our supporters aren’t online.
So let's create fake supporters. We saw this in the 2010 U.S. midterm elections, when bots were used to support or discredit candidates using tailored content or fake news. Since then, programmers have crafted their bots to be more sophisticated and harder to tell apart from real people. That was when they started to really hit the headlines. During the 2016 U.S. elections, rumors were going around that massive botnets were distorting the debate on Twitter, and there were questions about whether some of this was being run from Russia. It became an intellectual challenge: what are the ways to tell apart a bot from a human?
One widely cited answer is a heuristic known as the three A's: Activity, Anonymity and Amplification.
- Activity: what volume does the account post, not just per day, but per week, per month and per year?
- Anonymity: does the account give any kind of information that there's a real person behind it?
- Amplification: does the account mostly repeat others? A retweet bot, for example, only retweets other people and never posts its own content.

When an account scores on all three, you have reason to believe it is a bot.
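The three checks above can be sketched as a simple scoring function. This is only an illustration of the heuristic as described here: the field names and the numeric thresholds (72 posts per day, 90% retweets) are assumptions for the sketch, not established cutoffs.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float    # Activity
    has_profile_info: bool  # Anonymity: real name, bio, photo, etc.
    retweet_ratio: float    # Amplification: share of posts that are retweets

def bot_score(acct: Account) -> int:
    """Score an account 0-3 against the three A's; higher means more bot-like."""
    score = 0
    if acct.posts_per_day > 72:     # sustained, inhumanly high activity
        score += 1
    if not acct.has_profile_info:   # anonymous, no sign of a real person
        score += 1
    if acct.retweet_ratio > 0.9:    # almost pure amplification
        score += 1
    return score
```

An account that posts hundreds of times a day, gives no personal details and only retweets would score 3; a typical human account scores 0 or 1. In practice no single signal is conclusive, which is why the heuristic combines all three.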
How Bots on Social Media Have Impacted Important Decisions.
In 2016, researchers uncovered examples of mass social media manipulation during the U.S. presidential election. They showed that nearly 400,000 accounts, approximately 15 percent of the entire U.S. Twitter population, were social media bots. In the UK, similar programs were responsible for about 20% of online conversation. Research found that political bots played a strategic role in shaping discourse on Twitter about the polarizing Brexit referendum: hashtags associated with Leave groups dominated the conversation, with less than one percent of sampled accounts generating almost a third of all messages.
Even today, bots are participating in the discourse around the Covid-19 pandemic, often in dangerous ways. According to a recent study from Carnegie Mellon University, nearly half of the Twitter accounts discussing reopening America may be bots, and of the top 50 influential retweeters in the Covid-19 conversation, 82 percent are computer programs. In the lead-up to the 2020 U.S. elections, there were also attempts to create false personas and news outlets to spread divisive and polarizing messages.
Not only have bots played a big part in national debates, they've been known to influence hyper-local and niche issues too. In 2017, an army of bots was mobilized to flood public agencies with comments about net neutrality in the US.
In early 2018, Twitter cracked down hard on mass automation. The evolution we've seen over the last couple of years in particular is that botnets have been getting smaller and generally posting at a lower level. It feels like they're trying to fly under the radar. That doesn't mean they have gone away, but they're having to spend more effort on hiding, and the more they hide, the less they get noticed. Legislators in California have tried to step in and regulate bots by requiring automated accounts to identify themselves as such, and U.S. Senator Dianne Feinstein introduced a federal bill that would do the same thing. In spite of these efforts, there remains the common problem of regulating new technology.
It's unclear whose job it should be to regulate social media bots, or how malicious operators should be punished. As experts point out, bots are only a small part of a bigger picture: they don't exist on their own, or in a vacuum; they're part of a bigger ecosystem. So we need to be aware of bots and of the role they play, but we shouldn't exaggerate them. Bots can be amplifiers: people can use them to make an unpopular opinion look more popular. But ultimately they are only one part of the problem, and the real task is to keep the whole ecosystem healthy.
For More Articles, visit DataFifty.