Chatroulette is on the rise again – with the help of AI

A decade ago, Chatroulette was an Internet supernova, exploding in popularity before collapsing under a torrent of male nudity that repelled users. Now, the app, which pairs strangers randomly for video chats, is getting a second chance, partly thanks to a pandemic that has restricted personal social contact, but also thanks to advances in artificial intelligence that help filter out the most questionable images.

User traffic has almost tripled since the beginning of the year, to 4 million unique monthly visitors, the highest since early 2016, according to Google Analytics. Founder and president Andrey Ternovskiy says the platform offers a refreshing antidote of diversity and serendipity to the cozy echo chambers of mainstream social media. On Chatroulette, strangers meet anonymously and don't have to hand over personal data or wade through advertisements.

One sign of how thoroughly Chatroulette has cleaned up its act: a nascent corporate-conference business. Bits & Pretzels, a German startup conference, held a three-day event on Chatroulette in September, including a Founders Roulette session that paired attendees at random. “Without nudes, but full of surprising conversations,” the conference announced. Another change: women now make up 34% of users, up from 11% two years ago.

The AI that helped rid visitors of unwanted nudity and masturbation was a good investment, Ternovskiy says. It may also offer lessons for much larger social networks struggling to moderate content that can veer into falsehood or toxicity. But Ternovskiy still dreams of a platform that creates happy human connections, and he warns that technology alone can't deliver that. “I doubt that the machine will be able to predict: is this content desirable for my user base?” he says.

A 17-year-old Ternovskiy coded Chatroulette in November 2009 from his bedroom in Moscow as a way to kill boredom. Three months later, the site was drawing 1.2 million daily visitors. Then came the exodus. Ternovskiy entered into some ill-fated partnerships with Sean Parker and others to try to keep Chatroulette relevant. In 2014, he launched a premium offering that matched users based on desired demographics, which generated some revenue. He invested part of that money in cryptocurrency ventures that brought in additional earnings. Chatroulette is now based in Zug, Switzerland, a cryptocurrency hub.

In 2019, Ternovskiy decided to give Chatroulette another spin as a more respectable business, led by a professional team, with less “adult chaos.” The company was incorporated in Switzerland. Ternovskiy hired Andrew Done, an Australian with a background in machine learning, as CTO; earlier this year, Done became CEO. He was joined by a senior product researcher with a PhD in psychology, a community manager, a talent-acquisition manager, and more engineers. Then Covid-19 hit, and traffic surged.

The new team took advantage of the increased traffic to run user surveys and test ways of moderating content, including AI tools from Amazon and Microsoft. It created a filtered channel, now known as Random Chat, designed to exclude nudity, alongside an unmoderated channel. By demarcating the two, Chatroulette hoped to make the filtered feed safer and attract users interested in human connection. The unfiltered channel remains popular, but usage is declining, and Ternovskiy plans to eliminate it by mid-2021.

In June, Chatroulette brought in Hive, a San Francisco-based AI specialist whose software also moderates content on Reddit, for a nudity-detection trial. Executives were quickly impressed by Hive's accuracy, especially at not flagging innocent users and activities. At the same time, Chatroulette tested moderation tools from Amazon Rekognition and Microsoft Azure; it had previously tried Google Cloud's Vision AI.
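The article doesn't describe Chatroulette's actual pipeline, but as a rough illustration, a per-frame check against a cloud moderation API such as Amazon Rekognition could look like the sketch below. The label set, confidence threshold, and function names here are assumptions made for the example, not Chatroulette's real configuration.

```python
# Illustrative sketch of per-frame moderation with Amazon Rekognition.
# FLAG_LABELS and MIN_CONFIDENCE are assumed values, not a real service config.

FLAG_LABELS = {"Explicit Nudity", "Suggestive"}  # Rekognition top-level categories
MIN_CONFIDENCE = 80.0  # percent; labels below this are ignored

def should_flag(moderation_labels,
                flag_labels=FLAG_LABELS,
                min_confidence=MIN_CONFIDENCE):
    """Return True if any detected label (or its parent category)
    matches the flag list with sufficient confidence."""
    for label in moderation_labels:
        names = {label.get("Name"), label.get("ParentName")}
        if names & flag_labels and label.get("Confidence", 0.0) >= min_confidence:
            return True
    return False

def check_frame(image_bytes):
    """Send one video frame to Rekognition's image-moderation endpoint
    and decide whether to flag it. Requires AWS credentials."""
    import boto3  # imported here so should_flag stays dependency-free
    client = boto3.client("rekognition")
    resp = client.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=MIN_CONFIDENCE,
    )
    return should_flag(resp["ModerationLabels"])
```

A real-time service would sample frames from each video stream at some interval and disconnect or warn users whose frames are flagged; the decision logic in `should_flag` works on the label dictionaries Rekognition returns.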

“Hive is at a level of precision that makes it practical to use this technology at scale, which was not previously possible,” says Done. He says Hive is “so accurate that using humans in the moderation cycle is detrimental to system performance. That is, humans introduce more errors than they remove.”
