In defining inauthentic accounts (potential bots), Osavul relies on three distinct signals:
- inauthentic behavior
- immature accounts
- suspended by platform
All three categories are aggregated into the Inauthentic accounts widget (Case → Actors tab), which provides a breakdown of inauthentic accounts by detection criterion: immature, inauthentic behavior, or suspended by platform.

Search and cases can also be filtered with the ‘Inauthentic behaviour’ filter, which returns content whose authors are flagged as either immature or inauthentic.

<aside>
The Filter by Inauthentic Behavior feature currently performs slowly, which may affect usability in search and case views. We are actively working on improving its performance in a future update.
</aside>
Osavul’s bot detection methodology is as follows:
Immature accounts:
- Any account with fewer than 10 followers. This label is supported on all platforms.
Suspended by platform:
- Any account that the platform has blocked; we detect this status while attempting to collect data from the account. Supported mainly for Twitter and Facebook.
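The two account-level labels above reduce to simple checks. Here is a minimal sketch in Python; the `Account` fields and label names are illustrative assumptions, not Osavul's actual schema.

```python
from dataclasses import dataclass

# Hypothetical account record; field names are illustrative only.
@dataclass
class Account:
    followers_count: int
    is_suspended: bool = False  # status observed while collecting data from the platform

IMMATURE_FOLLOWER_THRESHOLD = 10  # accounts with fewer than 10 followers are "immature"

def label_account(account: Account) -> list[str]:
    """Return the account-level inauthenticity labels that apply."""
    labels = []
    if account.followers_count < IMMATURE_FOLLOWER_THRESHOLD:
        labels.append("immature")
    if account.is_suspended:
        labels.append("suspended_by_platform")
    return labels

# Example: a 5-follower account that the platform has blocked gets both labels
print(label_account(Account(followers_count=5, is_suspended=True)))
```

Note that the labels are independent: one account can be both immature and suspended at the same time.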
Inauthentic behavior:
- We collect a large array of user-generated content from various pages and platforms (Telegram, Facebook, Twitter, YouTube), totaling more than 60 million content units per month;
- We analyze this content to identify suspicious groups of non-unique content. Such content may be posted by different accounts across different places, in different languages and with different wording, while remaining semantically similar.
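The grouping step above can be sketched as pairwise similarity plus clustering. The toy example below uses plain bag-of-words cosine similarity as a stand-in for semantic similarity (a production pipeline would use multilingual embeddings to catch paraphrases and translations); only clusters spanning multiple accounts are kept, since a single account repeating itself is not evidence of coordination. All names and the threshold are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_similar(posts: list[tuple[str, str]], threshold: float = 0.8) -> list[set[str]]:
    """posts: (account_id, text) pairs.
    Returns groups of distinct accounts posting near-duplicate content."""
    vecs = [Counter(text.lower().split()) for _, text in posts]

    # Union-find over post indices to merge similar posts into clusters
    parent = list(range(len(posts)))
    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(posts)), 2):
        if cosine(vecs[i], vecs[j]) >= threshold:
            parent[find(i)] = find(j)

    clusters: dict[int, set[str]] = {}
    for idx, (account, _) in enumerate(posts):
        clusters.setdefault(find(idx), set()).add(account)

    # Only clusters spanning multiple accounts suggest coordinated behavior
    return [accs for accs in clusters.values() if len(accs) > 1]

posts = [
    ("acct_a", "breaking news the dam was destroyed by sabotage"),
    ("acct_b", "breaking news the dam was destroyed by sabotage"),
    ("acct_c", "weather today is sunny"),
]
print(group_similar(posts))  # one coordinated group: acct_a and acct_b
```

Token overlap is deliberately the weakest part of this sketch: it only catches verbatim or near-verbatim copies, whereas the cross-language, reworded duplication described above requires embedding-based similarity.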