In defining inauthentic accounts (potential bots), Osavul relies on three distinct signals: immature accounts, accounts suspended by the platform, and inauthentic behavior.

These signals are aggregated into the Inauthentic Accounts widget (Case → Actors tab), which provides a breakdown of inauthentic accounts by detection criterion: immature or inauthentic.


Search and cases can also be filtered with the ‘Inauthentic behaviour’ filter, which returns content whose authors are flagged as either immature or inauthentic.


<aside>

The ‘Inauthentic behaviour’ filter currently performs slowly, which may affect usability in search and case views. We are actively working on improving its performance in a future update.

</aside>

Osavul’s bot detection methodology is as follows:

Immature accounts:

  1. Any account with fewer than 10 followers. This label is supported on all platforms.

Suspended by platform:

  1. Any account that was blocked by the platform, a status we detect while attempting to collect data from it. Supported mainly for Twitter and Facebook.
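The two labeling rules above can be sketched as a small classifier. This is an illustrative sketch only, assuming a hypothetical `Account` record; it is not Osavul's actual implementation.

```python
from dataclasses import dataclass

IMMATURE_FOLLOWER_THRESHOLD = 10  # "fewer than 10 followers" rule from the text


@dataclass
class Account:
    followers: int
    suspended_by_platform: bool  # status observed during data collection


def label_account(account: Account) -> list[str]:
    """Return the inauthenticity labels that apply to an account."""
    labels = []
    if account.followers < IMMATURE_FOLLOWER_THRESHOLD:
        labels.append("immature")
    if account.suspended_by_platform:
        labels.append("suspended")
    return labels


print(label_account(Account(followers=3, suspended_by_platform=False)))    # ['immature']
print(label_account(Account(followers=5000, suspended_by_platform=True)))  # ['suspended']
```

An account can carry both labels at once, which is why the function returns a list rather than a single category.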

Inauthentic behavior:

  1. We collect a large array of user-generated content from various pages and platforms: Telegram, Facebook, Twitter, YouTube — in total, more than 60 million content units per month;
  2. We analyze this content and identify suspicious groups of non-unique content. Such content may be posted by different accounts, across different places, in different languages and with different wording, yet remain semantically similar.
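The grouping step above can be illustrated with a toy sketch: represent each content unit as an embedding vector (a stand-in for a real multilingual sentence-embedding model) and greedily cluster vectors whose cosine similarity exceeds a threshold. The threshold value and the greedy clustering are assumptions for illustration, not Osavul's actual pipeline.

```python
from math import sqrt


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))


def group_similar(vectors: list[list[float]], threshold: float = 0.9) -> list[list[int]]:
    """Greedily assign each vector to the first group whose seed is close enough."""
    groups: list[list[int]] = []  # each group is a list of indices; groups[g][0] is the seed
    for i, v in enumerate(vectors):
        for group in groups:
            if cosine(vectors[group[0]], v) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups


# Toy embeddings: items 0 and 2 point in nearly the same direction
# (semantically similar despite different wording), item 1 is orthogonal.
vecs = [[1.0, 0.0], [0.0, 1.0], [0.98, 0.05]]
print(group_similar(vecs))  # [[0, 2], [1]]
```

In production, near-duplicate detection at this scale would typically use approximate nearest-neighbor search rather than pairwise comparison, but the grouping principle is the same.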