Facebook Should Trust Our Innocence

… while still reacting quickly and with clear intent once it becomes apparent that any of us, its users, has abandoned that innocence and is tempting others to do the same.

Plenty of commenters have criticized Mark Zuckerberg over an interview with Recode in which he seemed to suggest that those who deny the existence of the Holocaust, a position the social media mogul personally finds offensive, should not be outright banned from Facebook, because it is very hard, if not impossible, for the company to know their intent and their real beliefs.

Zuckerberg has since clarified his comments, and there is nuance in the interview that many critics should have picked up on. But the biggest mistake this line of criticism makes is that it fails to show trust in individuals and in the better angels of our nature.

When an individual engages in speech suggesting the Holocaust did not happen, Facebook should take note of that position and even flag their posts in a non-public way, while offering them more information, alternatives, and clear signs that their position is not supported by the facts. Only if that user fails to engage with this new information and doubles down on their position should the platform take down the offending post, and perhaps consider removing the account if the offense is repeated.

Humans do not have the time or the energy to investigate everything and make sure that all their ideas are entirely based on facts. Sometimes their wrong ideas or biased judgements have no direct impact on their conduct, or simply fail to be offensive in any way. There's no need for a social network to police those, but there might be a place for giving a poster's friends or acquaintances a way to do so.

Only when someone posts something that clearly aims to recruit others to a false opinion, or when they are trying to weaponize a false piece of information, should Facebook step in, gently at first, and then (although it is impossible to perfectly evaluate the inner life of a social media user) make a decision on what it can remove and why.

When and if an official page denies the Holocaust or delivers clearly false information (the type of situation InfoWars often finds itself in), Facebook needs to be more forceful in its enforcement, mainly by cutting down the reach of offending pages while taking down individual posts, and then by banning them outright.

Facebook and Zuckerberg are often in the crosshairs these days, and rightfully so, but we should not force a very negative view of humanity on the company and its employees. Humans, the companies they build, and the social spaces they create are often flawed and filled with falsehoods. Bans and takedowns are necessary tools, but we cannot rely on them entirely to create a pristine world, one filled only with beliefs and opinions based on perfect knowledge.

Lack of Clarity from Microsoft Will Hurt New Offensive Language Policy for Xbox One

Microsoft has revealed, buried inside a longer statement about updates to its terms of service, that the company is actively prohibiting the use of offensive terms on Xbox-based services, on both the One console and the PC. Those who violate the new terms can have their accounts suspended or banned and can lose access to licenses and devices.

The section of the wider update that’s most relevant for gamers reads: “In the Code of Conduct section, we’ve clarified that use of offensive language and fraudulent activity is prohibited. We’ve also clarified that violation of the Code of Conduct through Xbox Services may result in suspensions or bans from participation in Xbox Services, including forfeiture of content licences, Xbox Gold Membership time and Microsoft account balances associated with the account.”

It is commendable that Microsoft is working to limit the impact of offensive language on the gaming platforms it manages, but the company is doing a very poor job of communicating about its efforts. The above statement is vague enough that the community has read it both as a sign that bans and moderation will increase and as confirmation that nothing will change in the actual policy, with the company merely updating the terms to make them clearer for those who might have misinterpreted them before.

Toxicity, mostly related to voice and text chat and the deluge of offensive terms that some players use, is a major issue for the gaming industry, and many gamers claim that multiplayer experiences cannot improve unless moderation, including suspensions and bans, improves significantly.

But before it can act, Microsoft needs to become much better at communicating with those who bought and use the Xbox One or the associated services on the PC. This means offering clear statements about its intentions, a list of terms that can trigger moderation, and a clear goal it wants to reach when it comes to creating a welcoming platform where gamers can express themselves without offending others.