Twitter is Testing More Options to Help Users Avoid Negative Interactions in the App
These new features could have a big impact - and Twitter's not done yet. This week, the platform previewed a few more control options that could help users avoid negative interactions, and the mental stress that can come with them, when their tweets become the focus of abuse. Twitter is developing new 'Filter' and 'Limit' options which, as Twitter notes, would be designed to help users keep potentially harmful content - and the people who create it - out of their replies.
The new options would enable you to automatically filter out replies that contain potentially offensive remarks, or that come from users who repeatedly tweet at you but whom you never engage with. You could also block these same accounts from ever replying to your tweets in the future. Even more significant, the Filter option would also mean that any replies you choose to hide would not be visible to anyone else in the app either, except the person who tweeted them - similar to Facebook's 'Hide' option for post comments.
Until now, Twitter has enabled users to hide content from their own view in the app, but others have still been able to see it. The Filter control would increase the power of individual users to hide such comments entirely - which makes sense, in that they're replies to your tweets. But you can also imagine that it could be misused by politicians or brands who want to shut down negative mentions.
In addition to this, Twitter's also developing a new 'Heads Up' alert prompt, which would warn users about potentially divisive comment sections before they dive in. That could save you from misstepping into a quagmire of toxicity, and unwittingly becoming a focus for abuse. As you can see in the second screenshot, the prompt would also call on users to be more considerate in their tweeting process.
Twitter is also developing new 'Word Filters', an extension of its existing keyword blocking tools, which would rely on Twitter's automated detection systems to filter out more potentially offensive comments. The option would include separate toggles to automatically filter out hate speech, spam and profanity, based on Twitter's system detection, providing another means to limit unwanted exposure in the app.