Social media sites are dealing with an unprecedented number of bots, and Reddit is taking steps to combat the problem. The company announced on Wednesday that it will begin labelling bots and asking suspected bot accounts to verify that they are human.
The company will use technical signals, such as posting speed, to identify suspected bot accounts. Flagged accounts will then be asked to verify that they are human through tools such as Apple and Google passkeys, YubiKey, Face ID, Sam Altman's World ID, and, in some countries, government IDs.
Automated accounts commonly referred to as "good bots" will receive transparency labels, much as they do on X. Reddit confirmed that AI-generated content remains allowed on the site, provided it complies with existing platform rules, which permit ethical AI use while curbing spam and harmful activity.
Reddit's move reflects a broader problem affecting the entire web: Cloudflare predicts that bot traffic will exceed human traffic by 2027, driven by AI agents and web crawlers.
Bots on platforms such as Reddit are increasingly used to manipulate online conversations, market products, spread links, and generate AI training material. Co-founder Alexis Ohanian has previously voiced concern about a "dead internet", where automated activity increasingly dominates online interactions.
The company will continue to remove an average of 100,000 spam accounts per day. Developers running ethical bots can now mark them with a new "APP" tag via the r/redditdev community. Reddit is also working on long-term verification options that will be decentralised, private, and will not require users to provide IDs.