A cybersecurity group in New Zealand recently shared the results of some experiments they ran using fake accounts on Twitter. These are the same variety that have been used to roast celebrities like Steve Jobs and Rahm Emanuel, or to give voice to an escaped cobra. The difference is that these fake accounts are trying to pass as human.
Organized by the Boston-based Web Ecology Project, the experiment called for three teams to program socialbots: Twitter accounts that could mimic human conversation. The organizers selected 500 real users (I presume they had a way of confirming that), most of whom shared an affinity for cats. Accounts like @JamesMTitus relied on a database of generic responses, focusing on the most responsive people in the target community. In the second week, additional bots were added so that teams could try to thwart other bots' efforts to be perceived as human.
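The contest bots are only loosely described, but the basic mechanic is simple enough to sketch. This is a hypothetical toy version (the user names, reply list, and helper functions are all invented, and it does not touch the real Twitter API): rank community members by how often they have engaged with the bot, then pair the top targets with canned generic replies.

```python
import random

# Hypothetical sketch of a contest-style socialbot: canned generic
# replies aimed at the most responsive members of a target community.
GENERIC_REPLIES = [
    "Ha, that's so true!",
    "My cat does exactly the same thing.",
    "Couldn't agree more.",
]

def pick_targets(reply_counts, n=3):
    """Rank target users by how often they have replied to the bot."""
    ranked = sorted(reply_counts, key=reply_counts.get, reverse=True)
    return ranked[:n]

def compose_reply(user):
    """Pair a target user with a randomly chosen generic response."""
    return f"@{user} {random.choice(GENERIC_REPLIES)}"

# Toy data: reply counts from three users in the target community.
counts = {"cat_lover_1": 5, "kitty_fan": 2, "whiskers99": 9}
targets = pick_targets(counts, n=2)  # ["whiskers99", "cat_lover_1"]
messages = [compose_reply(u) for u in targets]
```

The trick, of course, is that generic responses sent to highly responsive people look a lot like conversation, which is what made the contest interesting.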
Although the evil applications are readily apparent (in February, Anonymous hackers revealed government interest in infiltrating online groups), Tim Hwang also sees the potential for great good. A new version of this social experiment, called “The Narrows,” will attempt to construct a community where one does not yet exist, raising the hope of using bots as connective mechanisms to help shape large online communities.
This immediately reminded me of a couple of other projects related to fakery on Twitter.
Truthy (@truthyatindiana) is an Indiana University research project on detecting astroturfing and other misinformation around political topics. While it takes a little practice to understand the meaning of the network visuals (the site now offers a nice visual guide that explains some of the common patterns, with specific examples), the work has produced some new insights about political use of Twitter, as well as statistically confirming other assumptions.
One of the findings most relevant to my perpetually delayed dissertation is the analysis of the #gop hashtag, which clearly shows a polarized community.
An example of a grassroots meme, the #gop hashtag is widely used on Twitter, but in two very distinct ways. One cluster reflects use by conservatives, while the other contains liberal critics. People retweet others in their own community; when they do mention someone in the other community, it is typically to express disagreement. This might support known patterns within online political forums, where members engage with opposing views while reinforcing the flow of information from their peers.
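The polarization pattern can be quantified very simply. As a toy sketch (the users, cluster labels, and edges below are invented, not Truthy's data or method): label each user with a cluster, then measure what fraction of retweet edges stay inside a cluster.

```python
# Hypothetical illustration of the #gop pattern: retweets stay within
# a cluster, while cross-cluster edges are rare. All data is invented.
cluster = {"alice": "conservative", "bob": "conservative",
           "carol": "liberal", "dave": "liberal"}

# (retweeter, retweeted) pairs; only the last one crosses clusters.
retweets = [("alice", "bob"), ("bob", "alice"), ("carol", "dave"),
            ("dave", "carol"), ("alice", "carol")]

def within_fraction(edges, labels):
    """Share of edges that connect users in the same cluster."""
    same = sum(1 for a, b in edges if labels[a] == labels[b])
    return same / len(edges)

frac = within_fraction(retweets, cluster)  # 4 of 5 edges stay in-cluster: 0.8
```

A value near 1.0 for retweets, paired with a much lower value for mentions, is exactly the signature of two communities that talk past each other.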
The other is Cyc (@cyc_ai), a non-profit project to manage and grow an ontology of general knowledge that can allow computers to reason like humans. The Cyc systems leverage a natural-language interface, detailed background knowledge, and deep inference to create conversational knowledge. Cycorp and the Cleveland Clinic Foundation built the Semantic Research Assistant (SRA) to answer clinicians’ ad-hoc queries. Cyc, which began back in 1984, is using Twitter to help train its knowledge base. Recently, the tactics have changed to use a variety of inquisitive wordings that prompt users to confirm data.
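To give a flavor of what ontology-based inference means, here is a minimal sketch in the spirit of Cyc; the concept names, the parent-link table, and the `is_a` function are all invented for illustration and are not Cyc's actual vocabulary or API. The idea is that a query about category membership is answered by walking "is-a" links rather than by looking up a stored fact.

```python
# Toy ontology: each concept maps to its single parent concept.
# (Invented example; real Cyc uses a far richer relation set.)
GENLS = {
    "Cat": "Mammal",
    "Mammal": "Animal",
    "Animal": "LivingThing",
}

def is_a(concept, category):
    """True if `category` is reachable from `concept` via parent links."""
    while concept is not None:
        if concept == category:
            return True
        concept = GENLS.get(concept)  # climb one level; None at the root
    return False

is_a("Cat", "Animal")   # True: Cat -> Mammal -> Animal
is_a("Animal", "Cat")   # False: links only go upward
```

Even this trivial version shows why a question like "is a cat an animal?" can be answered without that exact fact ever being entered, which is the payoff Cyc is after at a vastly larger scale.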
The Twitter account claims to allow you to send a direct message and get an answer, but that hasn’t worked for me yet. The Cyc project isn’t without criticism, not the least of which concerns scalability and responsiveness to cultural shifts in meaning.

Tags: AI, artificial intelligence, astroturfing, bots, community, computers, Cyc, dissertation, evil, fake, growth, humanity, Politics, Social, structure, Truthy, Twitter, Web Ecology Project