From the beginning, there were signs that Clubhouse was living through an accelerated version of the platform lifecycle. Weeks after launch, the app faced allegations that it was enabling the proliferation of harassment and hate speech, including large rooms in which speakers allegedly made anti-Semitic comments. The start-up scrambled to update its community guidelines and add basic blocking and reporting features, and its founders went on the requisite Zuckerberg-style apology tour. (“We unequivocally condemn anti-Blackness, anti-Semitism and all other forms of racism, hate speech and abuse on Clubhouse,” a company blog post said in October.)
The company has also faced accusations of mishandling user data, including a Stanford report that found it may have routed some data through servers in China, potentially giving the Chinese government access to sensitive user information. (The company has pledged to lock down user data and to commission an external audit of its security practices.) And privacy advocates have objected to the app’s aggressive growth tactics, which include asking users to upload their entire contact lists in order to send invitations to others.
“Major privacy and security concerns, lots of data extraction, use of dark patterns, growth without a clear business model. When will we learn?” Elizabeth M. Renieris, the director of the Notre Dame-IBM Technology Ethics Lab, wrote in a tweet this week comparing Clubhouse to the early days of Facebook.
To be fair, there are some important structural differences between Clubhouse and the incumbent social networks. Unlike Facebook and Twitter, which revolve around central, algorithmically curated feeds, Clubhouse is organized more like Reddit – a cluster of topic-based rooms, moderated by users, with a central “hallway” where users can browse the rooms in progress. Clubhouse rooms disappear when they end, and recording a room is against the rules (though it happens anyway), which means that “going viral” in the traditional sense is not really possible. Users must be invited onto a room’s “stage” to speak, and moderators can easily boot unruly or disruptive speakers, so there is less risk of civil discussion being hijacked by trolls. And Clubhouse has no ads, which reduces the incentive for profit-driven scams.
But there are still plenty of similarities. Like other social networks, Clubhouse has a number of “discovery” features and aggressive growth-hacking tactics designed to pull new users into the app, including algorithmic recommendations, personalized push alerts and suggested-user lists. These features, combined with Clubhouse’s ability to host private and semi-private rooms with thousands of people in them, create some of the same perverse incentives and opportunities for abuse that have undermined other platforms.
The app’s reputation for relaxed moderation has also attracted a number of people who have been barred from other social networks, including figures associated with QAnon, Stop the Steal and other extremist movements.
Clubhouse has also become a home for people disillusioned with social media censorship and critical of various gatekeepers. Attacking The New York Times, in particular, has become something of an obsession among Clubhouse devotees, for reasons that would require another entire column to explain. (A room titled, in part, How to Destroy the NYT ran for many hours, attracting thousands of listeners.)