Knowlesys

How can intelligence analysts distinguish social media bots from real people?

You may be dealing with a social media bot if the account exhibits the following characteristics.

1. Unusual activity

Bot accounts post far more frequently than the average person, sometimes around the clock. The posting rate is easy to estimate from the profile page: divide the number of posts by the number of days the account has been active. Benchmarks for suspicious activity vary; the Oxford Internet Institute's Computational Propaganda project treats an average of more than 50 posts per day as suspicious. This benchmark is widely accepted and applied, but may be on the low side.
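The calculation described above can be sketched as follows. This is a minimal illustration, assuming the 50-posts-per-day threshold cited above; the function names are hypothetical, not part of any real platform API.

```python
from datetime import date

def posts_per_day(post_count: int, created: date, today: date) -> float:
    """Average daily posting rate over the account's lifetime."""
    days_active = max((today - created).days, 1)  # avoid division by zero
    return post_count / days_active

def is_suspiciously_active(post_count: int, created: date, today: date,
                           threshold: float = 50.0) -> bool:
    # Assumed benchmark: more than `threshold` posts/day is suspicious.
    return posts_per_day(post_count, created, today) > threshold

# Example: 20,000 posts over 100 days -> 200 posts/day, well above 50.
print(is_suspiciously_active(20_000, date(2023, 1, 1), date(2023, 4, 11)))
```

In practice the post count and registration date come straight from the profile page, so this check requires no special API access.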

2. Publish in multiple languages

Bots often publish in multiple languages in an attempt to reach a global audience.

3. High level of anonymity

The account provides little personal information, uses a profile that looks unnatural, or uses a generic profile picture. In general, the less personal information an account provides, the more likely it is to be a bot. A username made up of random letters and numbers is another warning sign.
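The "random letters and numbers" signal can be approximated with a simple heuristic. This is an illustrative sketch only, with assumed thresholds; real bot-detection tools use more sophisticated character-level models.

```python
import re

def looks_random(username: str) -> bool:
    """Rough heuristic: does this username look auto-generated?"""
    # Long trailing digit runs are a common auto-generation pattern.
    if re.search(r"\d{6,}$", username):
        return True
    letters = [c for c in username if c.isalpha()]
    if not letters:
        return True
    # Pronounceable names usually contain vowels; random strings often don't.
    vowel_ratio = sum(c in "aeiou" for c in username.lower()) / len(letters)
    return vowel_ratio < 0.15  # assumed cut-off for illustration

print(looks_random("jk8x7q2z9m"))    # random-looking string
print(looks_random("jane_smith"))    # plausible human-chosen name
```

A heuristic like this produces false positives on its own; it is most useful combined with the other indicators in this list.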

4. Little original content

A major role of bots is to amplify content from other users by retweeting, liking or quoting it. A typical bot timeline is therefore a stream of retweets and verbatim quotes of news headlines, with few or no original posts: plenty of likes and shares, but few original comments.
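The retweet-heavy timeline described above can be quantified as an originality ratio. The sketch below assumes each post carries a hypothetical "kind" label; real platform APIs expose this differently (e.g. retweet/quote flags on a tweet object).

```python
def original_ratio(posts: list[dict]) -> float:
    """Fraction of a timeline that is original content rather than
    retweets or quotes. Field names are illustrative, not a real API."""
    if not posts:
        return 0.0
    original = sum(1 for p in posts if p["kind"] == "original")
    return original / len(posts)

# A timeline that is 95% retweets and 5% original posts.
timeline = [{"kind": "retweet"}] * 95 + [{"kind": "original"}] * 5
print(original_ratio(timeline))
```

A ratio near zero is consistent with the pure-amplification pattern typical of bots, though some legitimate aggregator accounts score low too.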

5. Retweeting the same content

Bots achieve their effect by amplifying content in volume across many accounts. On Twitter, for example, an operator creates a large number of accounts, each of which retweets the same post once. Accounts that all post the same or similar content at the same time are likely to be program-controlled bots.

6. Stolen or shared avatar images

The most primitive bots are particularly easy to identify because their creators never bother to upload an avatar image. Other bot makers are more meticulous and try to preserve their anonymity by taking pictures from other sources. A good test of an account's authenticity is therefore a reverse search of its avatar image. In Google Chrome, right-click the image and select "Search image with Google". In other browsers, right-click the image, select "Copy image address", paste the address into Google Images, and click "Search by image". Either way, the search returns a page of matching images, indicating whether the account may have stolen its avatar.
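When avatar files can be collected locally, byte-identical avatars shared across a family of accounts can be found by comparing content hashes. This is a complementary sketch, not a substitute for reverse image search, which also catches resized or cropped copies that an exact hash misses.

```python
import hashlib

def duplicate_avatars(avatars: dict[str, bytes]) -> dict[str, list[str]]:
    """Map image-content hash -> list of accounts sharing that exact
    avatar file. Input mapping (account -> image bytes) is assumed."""
    by_hash: dict[str, list[str]] = {}
    for account, image_bytes in avatars.items():
        digest = hashlib.sha256(image_bytes).hexdigest()
        by_hash.setdefault(digest, []).append(account)
    # Keep only hashes shared by more than one account.
    return {h: accts for h, accts in by_hash.items() if len(accts) > 1}

# Two accounts using identical image bytes, one with a distinct image.
avatars = {"bot_a": b"<png bytes>", "bot_b": b"<png bytes>",
           "human": b"<jpeg bytes>"}
print(duplicate_avatars(avatars))
```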

7. Content with a specific purpose

The content focuses on a specific political narrative, propaganda, or misleading claims.

8. The account has been registered for a short period of time

The account was created within the last year or less. Alternatively, it was registered long ago, lay dormant for an extended period, and suddenly became active recently.

9. Repeated use of the same emoticons and symbols

The account repeatedly uses the same emoticons and punctuation marks, such as strings of exclamation marks.

10. Commercial content

Advertising is also a typical indicator of some botnets. Some appear to exist primarily for commercial promotion, with only occasional forays into politics; when they do stray into politics, their continued attention to advertising tends to betray them.

11. Automation Software

Another clue to potential automation is heavy use of URL shorteners. These are primarily used to track traffic on specific links, but the frequency with which an account uses them can itself be an indicator of automation.
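The shortener-frequency signal can be measured by extracting link domains from a timeline and counting how many belong to common shortening services. A minimal sketch; the domain list is illustrative and far from exhaustive.

```python
import re

# Assumed (non-exhaustive) list of common URL shortener domains.
SHORTENERS = {"bit.ly", "t.co", "tinyurl.com", "ow.ly", "goo.gl"}

def shortener_ratio(posts: list[str]) -> float:
    """Fraction of URLs in a timeline that go through a shortener."""
    urls = [domain for text in posts
            for domain in re.findall(r"https?://([^/\s]+)", text)]
    if not urls:
        return 0.0
    return sum(domain in SHORTENERS for domain in urls) / len(urls)

posts = ["read this https://bit.ly/abc", "more https://bit.ly/def",
         "source https://example.com/page"]
print(shortener_ratio(posts))
```

Note that on Twitter all links are wrapped in t.co by the platform itself, so this check is most meaningful on the original (unwrapped) URLs.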

12. Retweets and Likes

The ultimate indicator of a botnet at work comes from comparing retweets and likes on specific posts. Some bots are programmed to both retweet and like the same tweet; in that case the retweet and like counts are almost identical, and the family of accounts performing the retweets and likes may also match.
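The near-identical-counts pattern can be checked numerically. The sketch below assumes hypothetical post records with `retweets` and `likes` fields and an arbitrary 5% tolerance; organic posts typically show far more likes than retweets.

```python
def retweet_like_similarity(posts: list[dict], tolerance: float = 0.05) -> float:
    """Fraction of posts whose retweet and like counts are within
    `tolerance` of each other. Thresholds are illustrative."""
    if not posts:
        return 0.0
    close = 0
    for p in posts:
        rts, likes = p["retweets"], p["likes"]
        if abs(rts - likes) <= tolerance * max(rts, likes, 1):
            close += 1
    return close / len(posts)

organic = [{"retweets": 12, "likes": 340}, {"retweets": 3, "likes": 95}]
botlike = [{"retweets": 500, "likes": 502}, {"retweets": 498, "likes": 500}]
print(retweet_like_similarity(organic), retweet_like_similarity(botlike))
```

A similarity score near 1.0 across many posts, combined with a matching family of amplifying accounts, is the pattern the paragraph above describes.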

13. Fake followers

The account's followers may themselves be fakes engaged in similar activity.

However, according to a 2019 study by the University of California, bots are becoming more human-like and harder to detect. As adversaries develop more advanced technologies, it may be difficult to detect social media bots with critical analysis skills alone, especially at scale. The widespread use of social media bots means analysts need advanced tools to support monitoring.

Over the past decade, social media platforms have become an important resource for national security teams as open source data has become increasingly valued. These networks are valuable for revealing public security risks such as disinformation and terrorist recruitment, which rely on social media to reach target audiences.

Bots are now widely used to extend and target malicious social media activity. As this tactic evolves, intelligence teams must leverage advanced solutions to monitor malicious social media bots and protect vulnerable populations from their influence.


