Today, I want to highlight two interrelated and significant developments in the world of AI, or Artificial Intelligence. The first has to do with a computer program called “Eugene Goostman,” which has reportedly become the first program to fool more than 30% of judges into believing it is a real person. The test was recently conducted at the Royal Society in London, and the UK’s Independent reported yesterday that:
A program that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.
Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations.
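As a minimal illustration of the pass criterion described above (a hypothetical Python sketch; the verdict data is invented for the example, not taken from the actual event):

    # Hypothetical sketch of the Turing Test pass criterion described above.
    # Each verdict is True if a judge believed the program was human after
    # a five-minute text conversation (invented example data).
    verdicts = [True] * 10 + [False] * 20  # 10 of 30 judges fooled

    PASS_THRESHOLD = 0.30  # Turing's 30 per cent criterion

    fraction_fooled = sum(verdicts) / len(verdicts)
    print(f"Fooled {fraction_fooled:.0%} of judges -> "
          f"{'pass' if fraction_fooled > PASS_THRESHOLD else 'fail'}")

On these invented numbers the program fools 33% of the judges and clears the 30% bar, which mirrors the figure reported for Eugene Goostman below.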
Eugene Goostman, a computer program made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organized the test.
It is thought to be the first computer to pass the iconic test. Though other programs have claimed successes, those included set topics or questions in advance.
The computer program claims to be a 13-year-old boy from Odessa in Ukraine.

“In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human,” said Kevin Warwick of the University of Reading.

“Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.”
This is particularly interesting in light of a recent piece from the MIT Technology Review titled, How Advanced Socialbots Have Infiltrated Twitter. The article highlights the work of Carlos Freitas from the Federal University of Minas Gerais in Brazil, who demonstrated the ability of Twitterbots (i.e., fake, computer-generated Twitter accounts) not only to gain more followers than actual human users, but also to infiltrate social groups within Twitter and exercise influence within them. Furthermore, while Twitter monitors the site for bots and suspends them when uncovered, 69% of the bots created by Mr. Freitas escaped detection. From MIT Technology Review:
You might say that bots are not very sophisticated and so easy to spot. And that Twitter monitors the Twittersphere looking for, and removing, any automated accounts that it finds. Consequently, it is unlikely that you are unknowingly following any automated accounts, malicious or not.
If you hold that opinion, it’s one that you might want to revise following the work of Carlos Freitas at the Federal University of Minas Gerais in Brazil and a few pals, who have studied how easy it is for socialbots to infiltrate Twitter.
Their findings will surprise. They say that a significant proportion of the socialbots they have created not only infiltrated social groups on Twitter but became influential among them as well. What’s more, Freitas and co have identified the characteristics that make socialbots most likely to succeed.
These guys began by creating 120 socialbots and letting them loose on Twitter. The bots were given a profile, made male or female and given a few followers to start off with, some of which were other bots.
The bots generate tweets either by reposting messages that others have posted or by creating their own synthetic tweets using a set of rules to pick out common words on a certain topic and put them together into a sentence.
The bots were also given an activity level. High activity equates to posting at least once an hour and low activity equates to doing it once every two hours (although both groups are pretty active compared to most humans). The bots also “slept” between 10 p.m. and 9 a.m. Pacific time to simulate the down time of human users.
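To make the mechanics of the two preceding paragraphs concrete, here is a rough Python sketch of that kind of rule-based tweet generator and posting schedule. The word lists and sentence template are hypothetical; the article does not specify the actual rules Freitas and co used:

    import random
    from datetime import datetime, timedelta, timezone

    # Hypothetical common-word lists for a topic; the study's real rules
    # and vocabularies are not given in the article.
    SUBJECTS = ["the market", "this update", "the new release", "my timeline"]
    VERBS = ["looks", "seems", "feels", "is"]
    ADJECTIVES = ["promising", "overrated", "strange", "solid"]

    def synthetic_tweet() -> str:
        """Pick common topic words and put them together into a sentence."""
        return (f"{random.choice(SUBJECTS)} {random.choice(VERBS)} "
                f"{random.choice(ADJECTIVES)} today")

    def is_awake(now_utc: datetime) -> bool:
        """Bots 'slept' between 10 p.m. and 9 a.m. Pacific time
        (approximated here with a fixed UTC-7 offset)."""
        pacific = now_utc.astimezone(timezone(timedelta(hours=-7)))
        return 9 <= pacific.hour < 22

    def posting_interval(high_activity: bool) -> timedelta:
        """High activity: at least once an hour; low: once every two hours."""
        return timedelta(hours=1) if high_activity else timedelta(hours=2)

    now = datetime.now(timezone.utc)
    if is_awake(now):
        print(synthetic_tweet(), "| next post in", posting_interval(True))

Even a generator this crude can pass casual inspection, which is the point the researchers make further down about the informal, grammatically loose style of most tweets.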
Having let the socialbots loose, the first question that Freitas and co wanted to answer was whether their charges could evade the defenses set up by Twitter to prevent automated posting. “Over the 30 days during which the experiment was carried out, 38 out of the 120 socialbots were suspended,” they say. In other words, 69 percent of the social bots escaped detection.
The more interesting question, though, was whether the socialbots can successfully infiltrate the social groups they were set up to follow. And on that score the results are surprising. Over the duration of the experiment, the 120 socialbots received a total of 4,999 follows from 1,952 different users. And more than 20 percent of them picked up over 100 followers, which is more followers than 46 percent of humans on Twitter.
More surprisingly, the socialbots that generated synthetic tweets (rather than just reposting) performed better too. That suggests that Twitter users are unable to distinguish between posts generated by humans and by bots. “This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style, so that even simple statistical models can produce tweets with quality similar to those posted by humans in Twitter,” suggest Freitas and co.
Gender also played a role. While male and female bots were equally effective when considered overall, female socialbots were much more effective at generating followers among the group of socially connected software developers. “This suggests that the gender of the socialbots can make a difference if the target users are gender-biased,” say Freitas and pals.
Hahaha, software developers trying to get frisky on Twitter.
So the work of Freitas and co is a wake-up call for Twitter. If it wants to successfully prevent these kinds of attacks, it will need to significantly improve its defense mechanisms. And since this work reveals what makes bots successful, Twitter’s research team has an advantage.
The threat posed by such systems cannot be overstated when it comes to both government and corporate propaganda, and it seems we have already reached the point where they are sophisticated enough for us to become concerned.
While this hasn’t been a key topic on this site up to this point, I have covered it from time to time in the past. One example is the post from one year ago titled: The CIA’s Latest Investment: Robot Writers.
Furthermore, the threat posed by AI isn’t limited to media. We have also seen it creep into surveillance methods. For example: Meet AISight – The Artificial Intelligence Software Being Installed on CCTV Networks Globally.
This is definitely a trend that deserves more scrutiny going forward.
In Liberty,
Michael Krieger