Last Sunday, thousands of Twitter users took part in a mass boycott of the social network. #Twittersilence, as the protest became known, was a response to the recent surge of threatening tweets aimed at several high-profile women, including the TV historian Mary Beard and Caroline Criado-Perez, who ran the campaign to get Jane Austen onto the new £10 note. These ranged from threats of violence to statements of intent to rape. In one particularly dark turn, a number of women received anonymous tweets claiming that a bomb had been placed outside their homes.
Naturally, the police have been involved in many of these cases and, at the time of writing, at least two arrests have been made alongside several police cautions. Despite police involvement, the abusive tweets show little sign of letting up. The reaction from Twitter’s management was, many felt, less than impressive, and so the Times writer Caitlin Moran suggested a 24-hour boycott of the site as a form of solidarity and protest.
This protest – and the abusive behaviour which prompted it – has come at a time when Twitter is making it more difficult than ever for people to manage their own Twitter experience, by restricting the availability of third-party Twitter apps that offer features such as muting. Shortly before the #twittersilence fell, Twitter’s UK director Tony Wang announced some changes to the site’s terms of service and interface that may help. But is Twitter doing enough?
The long arm of the law
One argument against taking action against people saying abusive things online is that of freedom of speech. However, in the UK it is currently considered a criminal act to threaten someone’s life or to say that you plan to sexually assault someone. The police sort of frown on things like bomb threats too.
It is not too surprising, therefore, that there have been a number of arrests and that Twitter has agreed to help the police with their inquiries. The problem is proving a tricky one to tackle, however. It is very easy for a dedicated abuser to open temporary or multiple accounts, and it is not always possible to discover the real identity of the account’s creator, making any real-world police action impossible.
In the light of this, can Twitter offer users any other ways of protecting themselves and preventing abuse?
One of the most recent proposed solutions is an Abuse Button – a one-click way of alerting Twitter’s abuse monitoring staff about someone sending you threatening or offensive messages. It is also the solution that Twitter says it has just implemented. Twitter’s interpretation of the button is somewhat different to the one proposed by Twitter users, however.
“We introduced an in-Tweet report button in the latest version of the iOS Twitter app and on the mobile web. Rather than going to our Help Centre to file an abuse report, users can report abusive behaviour directly from a Tweet.”
The Report Abuse button that Twitter’s Tony Wang announced (and which will come to the Android app soon) is actually just a link to the standard abuse report form, albeit one which fills out some of the details for you. What many people actually wanted was a one-click way of getting someone ejected from the site, or at least bumping them up the list of users to be investigated by Twitter’s moderators.
An obvious problem with an Abuse Button is that it could easily find itself repurposed as a tool for abusers. If clicking a button on someone’s tweet is enough to get them targeted by Twitter’s abuse monitors, then could a group of trolls use it to hound someone they don’t like off the site? Beyond trolling, such a tool might have an impact on genuine freedom of speech – those with unpopular political opinions or lifestyles could be shut down and silenced by the sheer weight of complaints.
Blocking is a powerful tool against people abusing you on Twitter. When you block someone, you no longer see their tweets unless you actively seek them out and they are unable to follow you, making it slightly harder for them to see what you are doing and send you abusive messages about it. It is a bit of a blunt instrument, however, and there may be some ways that Twitter could make it more effective.
“Soft blocking” is an informal term used by some Twitter users for quickly blocking and then unblocking someone. In effect, this forces them to unfollow you (and you them) without the permanent and detectable – and thus likely to enrage – extra step of a full block. Could Twitter make this a more official feature and add an ‘unfollow me’ button? It wouldn’t deter the most committed trolls, but it could offer a way to defuse some arguments or rants before they escalated.
Another not-quite-blocking solution is muting. Some Twitter apps, such as Tweetbot or UberSocial, offer a filter on your incoming Twitter feed. You can add keywords such as ‘Royal Baby’ or ‘Olympics’ to your filter and the app will automatically hide any tweets containing those words.
You can mute individual users too. This can be handy for a temporary breather if someone goes off on an extended rant, or just starts tweeting spoilers for an episode of Doctor Who you haven’t iPlayer’d yet.
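The mute filters that apps like Tweetbot offer boil down to a simple check on each incoming tweet. Here is a minimal sketch in Python of how such a filter might work; the keywords, usernames and tweet data are invented for illustration, and a real app would of course work on the Twitter API’s tweet objects rather than simple pairs:

```python
# Hypothetical sketch of a mute filter: hide any tweet that contains
# a muted keyword or comes from a muted user.

MUTED_KEYWORDS = {"royal baby", "olympics"}
MUTED_USERS = {"@spoilerbot"}

def is_visible(author, text):
    """Return False if the tweet should be hidden by the mute filter."""
    if author.lower() in MUTED_USERS:
        return False
    lowered = text.lower()
    return not any(keyword in lowered for keyword in MUTED_KEYWORDS)

timeline = [
    ("@friend", "Lovely weather today"),
    ("@newsfeed", "Royal Baby latest: ..."),
    ("@spoilerbot", "The Doctor regenerates into..."),
]

# Only tweets that pass the filter reach the displayed timeline.
visible = [(a, t) for a, t in timeline if is_visible(a, t)]
```

The point is that the filtering happens entirely on the user’s side: the muted account never knows it has been muted, which avoids the escalation a visible block can provoke.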
A little help from their friends
In her blog post about the #twittersilence, Caitlin Moran made the point that simply telling people to block abusive people on Twitter doesn’t scale well. “If a woman is getting fifty of these messages an hour, blocking all the abusers becomes something of a thankless, full-time job.”
Could Twitter offer users a way of blocking whole lists of people? If a single user or a small community maintained a shared list of blocked users, then everyone who subscribed to it could block them automatically, making it much harder for trolls to reach their targets. If, for example, Caroline Criado-Perez had blocked a hundred or so people who were sending her threatening messages, she could share her block list with the wider community. This would not only mean that fewer people would see abusive tweets, it might also act as a way to discourage people from abusing at all – abuse could start to mean a sort of social death for the abusers.
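Mechanically, subscribing to shared block lists is just set arithmetic: a subscriber’s effective block set is the union of their own blocks and every list they follow. A rough Python sketch, with all the list names and usernames invented for illustration:

```python
# Hypothetical shared block lists: each is simply a set of usernames.
my_blocks = {"@troll1"}

shared_lists = {
    "ccriadoperez/blocklist": {"@troll2", "@troll3"},
    "community/abuse-list": {"@troll3", "@troll4"},
}

subscriptions = ["ccriadoperez/blocklist", "community/abuse-list"]

# The effective block set is the union of personal blocks and all
# subscribed lists; duplicates across lists cost nothing.
effective_blocks = set(my_blocks)
for name in subscriptions:
    effective_blocks |= shared_lists[name]

def is_blocked(user):
    return user in effective_blocks
```

One design question such a feature would raise is trust: subscribing to a list means delegating your blocking decisions to its maintainer, which is exactly why a curated community list could carry the social weight the article describes.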
Shared block lists are only effective against people who care about their Twitter usernames. If people start using temporary aliases (as they have in recent cases such as the aforementioned bomb threats) then there are other methods of blocking available.
Some USENET news readers pioneered the concept of the Score File: a kind of ‘spam filter’ for USENET posts in which you give weighted scores to different keywords. By clever use of Score File rules it would be possible to block everything from a particular set of users except when they posted about a certain range of subjects, or to hide any posts containing X-Files spoilers, or whatever. A Twitter app that could use score files to filter your timeline could give automatic protection from a lot of abusive messages, or even funnel them into a ‘report abuse’ account without you having to do anything.
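Applied to tweets, a score file might look something like the sketch below: each rule adds a weighted score when its pattern matches the author or the text, and tweets whose total falls below a threshold are hidden. The rules, weights and threshold here are all invented for illustration:

```python
# Hypothetical score-file filter for tweets, USENET-style.
SCORE_RULES = [
    # (field, pattern, score)
    ("author", "@ranty_uncle", -50),    # mostly hide this account...
    ("text", "linux", +60),             # ...except when they talk Linux
    ("text", "x files spoiler", -100),  # hide spoilers from anyone
]

HIDE_BELOW = 0  # tweets scoring below this threshold are hidden

def score(author, text):
    """Sum the scores of every rule that matches this tweet."""
    total = 0
    lowered = text.lower()
    for field, pattern, points in SCORE_RULES:
        target = author.lower() if field == "author" else lowered
        if pattern in target:
            total += points
    return total

def is_hidden(author, text):
    return score(author, text) < HIDE_BELOW
```

Because the scores are additive, a positive rule can ‘rescue’ posts from an otherwise down-weighted user, which is exactly the kind of nuance a plain block cannot express.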
The panic room
One further idea is a ‘Panic Mode’ button. If a user starts getting a string of abusive or bullying tweets, they could hit the button and their account would be put into a state where they could only see tweets from people they follow or who follow them. They would also have the ability to screen new followers, in the same way that a standard locked account can.
As well as filtering incoming tweets, Panic Mode could also open a log of abusive tweets that could then be used as evidence for Twitter’s moderators or the police, should that become necessary. What’s more, a user going into Panic Mode, and the sudden flurry of activity associated with their account, would be a pretty good indicator to Twitter’s abuse team that something was occurring which needed their attention.
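The core of the idea is a simple delivery rule: while Panic Mode is on, only tweets from accounts the user follows, or that follow the user, get through, and everything else is diverted into the evidence log. A minimal sketch, with all accounts invented for illustration:

```python
# Hypothetical Panic Mode: mutual strangers' tweets are logged, not shown.
following = {"@friend", "@colleague"}
followers = {"@friend", "@fan"}

panic_mode = True
abuse_log = []   # retained as evidence for moderators or the police

def deliver(author, text):
    """Return the tweet if it should be shown; log and drop it otherwise."""
    if panic_mode and author not in (following | followers):
        abuse_log.append((author, text))
        return None
    return (author, text)
```

A nice side effect, as noted above, is that the log itself is a signal: a rapidly growing `abuse_log` on an account in Panic Mode would flag that account for the abuse team automatically.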
Is Twitter discouraging development?
Any of the above ideas could have some degree of impact on abuse, or at least empower users to protect themselves. There is a real incentive for app developers to offer these or similar features. Unfortunately, that may be difficult to achieve.
Twitter has made it a matter of policy that third-party client apps are to be discouraged. Rather than ban them outright, it has attempted to soften the blow by limiting the number of users any one client can have via its API. There are sound business reasons for doing this, but it has had the undeniable effect of pouring cold water on what was once a source of innovative features that helped users interact with their timelines.
Allowing apps that can filter an incoming timeline is problematic for Twitter, as a growing part of its revenue comes from ‘promoted’ tweets bearing advertising, and letting users run their timelines through a sieve might mean they choose not to see those either. One solution would be to embrace filtering and make it part of the Twitter API, but make promoted tweets un-filterable.
More broadly, Twitter needs either to provide users with the tools to manage their experience of the site or to relax its restrictions on third-party apps – or face losing its user base to alternatives such as Google Plus, which is built specifically for privacy and the kind of granularity that makes trolling difficult. Keeping things tightly controlled to protect your business model is fine, but you need to keep your users too.