Mastodon, which positions itself as a better alternative to Twitter, is one of the rare social media sites to have suspended a law enforcement agency's account: it recently suspended the Assam Police's handle, saying the platform would 'not welcome cops'.
Angered by Twitter's suspension of the account of Supreme Court advocate Sanjay Hegde, a trickle of Indian Twitter users has been joining Mastodon.
Several liberal Twitter users in India had accused the micro-blogging site of censoring anti-government handles and failing to control hate speech, allegations which Twitter India refuted.
Mastodon, which is not owned by any one person and is decentralised, was proposed as an alternative to Twitter.
It is a free and open-source social networking service where users are allowed to host their own servers in the network.
The platform is similar to its rival in design, with 'toots' replacing tweets and 'boosts' replacing retweets. It also supports hashtags and has a 500-character limit per toot.
For a fee, anyone can host their own server, known as an 'instance', or users can simply join an existing one. These servers are connected as a federated social network, allowing users on different servers to interact with each other.
Mastodon is so committed to decentralisation that it does not have an official app; users instead rely on apps created by third-party developers.
Instances can choose to remain private or interact with each other.
Several news organisations, such as Live Law, News 18 and The Quint, as well as BOOM, have joined Mastodon.
We spoke to Eugen Rochko from Germany, who founded Mastodon in 2016.
Rochko administers the main instance, Mastodon.social, which is currently home to over 4 lakh users, while other instances are owned and moderated entirely separately from his flagship instance.
We started by asking Rochko the reason behind his instance's decision to ban the Assam Police and how Mastodon plans to tackle misinformation.
Below are edited excerpts from the conversation.
A recent development on Mastodon was that a state police account (Assam Police) was suspended.
Yes, our community felt unsafe in the presence of law enforcement, essentially. It puts a chilling effect on people's speech, and it was decided by all of our moderators to do this.
As the number of users increases, how will Mastodon tackle problems like targeted harassment and trolling? What about other instances that are smaller?
Well, the answer is two-fold. On one hand, it is true that some problems are absent on Mastodon due to its size relative to other social media platforms, but some are absent because of the structure. Because Mastodon is a decentralised platform, we have a higher number of moderators per user, and the responsibility to keep the network clean is shared across the whole network. It also allows multiple communities with different values to coexist in spaces where they can set their own rules.
In that sense, Mastodon is fundamentally different from Facebook and Twitter, and you won't see the same problems because of its design.
How many moderators does your instance (mastodon.social) have?
We have five moderators, one of whom is new; just a few days ago we added a moderator who speaks Hindi. So that's five moderators plus me. I don't count myself as a moderator, but I help with moderating sometimes.
They cover user-flagged reports in a variety of languages and from different time zones. We'll have to see what the demand and reporting are like in other Indian languages like Bengali, Tamil, etc. before adding new moderators.
On Mastodon.social we have paid moderators, but of course you are not obliged to structure it that way. On smaller, tighter-knit servers, it is fully acceptable for moderators to simply be volunteers who want to help their community.
BOOM spotted a fake account impersonating journalist Rana Ayyub and confirmed with her that it wasn't her account. How would you tackle the problem of fake accounts, as there is no verification badge on Mastodon?
I see verification and fake accounts as two different things, because Twitter has verification, but that does not stop people from making fake accounts. There are a variety of accounts that impersonate or parody others, and verification does not help with them.
So there are two ways to address it: the first is getting rid of fake accounts, and the second is how you know the account you are seeing is the right one. Getting rid of fake accounts depends on reporting. A few days ago, a person reported that a fake Mastodon account had been created in their name and asked that the account be transferred to them. After investigating, we did so, as we do whenever someone creates a fake account impersonating someone else.
We noticed that Inditoot has a more lenient moderation policy. How does this work, given that instances can define their own moderation policies?
As a user, one has a few different options. One can mute and block, which can hide individual trolls. The next step is that you can hide all content from a particular instance, though you will lose all followers from that instance. You can also contact your admin to take action on behalf of your instance; the harshest step is for the admin to cut off access to and from a particular instance.
We have received reports about Inditoot, and we have seen that its admin has claimed that hate speech will not be moderated. For that reason, we have 'silenced' Inditoot from our side on mastodon.social, which means its content does not appear in people's notifications.
How would you deal with hate driven hashtags? Do you plan to have an option where people can report something fake?
The trending hashtag system on Mastodon is developed with this kind of problem in mind. We expect that hashtags can be misused by bots or coordinated masses to make something trend, be it harassment campaigns or misinformation, so before anything trends, it has to be approved. We try to make sure that the hashtags are clean, that they are not ads, and that they are not calling for violence.
Is there a focus on moderating content?
We do not watch all content that goes through the system; we rely on people's reports. Reports from the community help us find content that violates the code of conduct. Some of us watch the local timeline (where toots from the local instance appear), where we will notice if something is happening, but it is very helpful when people report things rather than us having to review hashtags as they are used.
Once something is flagged, for instance a fake article, can the platform detect it going forward and warn users?
We do not have a flagging system for links, but that sounds like a good idea, so maybe we'll have it in the future.
Are direct messages (DMs) on Mastodon encrypted?
Twitter DMs are not encrypted either; Mastodon DMs are encrypted the same way as Twitter's, in that they are not. There is encryption between the browser and the website, of course, but that's it. It is not out of the question that Mastodon might implement end-to-end encryption for DMs in the future; however, it is not a hundred percent obvious why a social media website would need that, because it is not primarily oriented towards private one-on-one communication.