
Tech giants team up to arrive at principles for handling online harms

Nine of the world's biggest tech companies have come together to establish an industry framework for handling harmful content and conduct online.

Why it matters: Tech companies, facing a threat from U.S. lawmakers who are considering changing the rules around what content they are liable for on their platforms, are eager to win back public trust.

Details: The Digital Trust & Safety Partnership — with Facebook, Google, Microsoft, Twitter, Discord, Pinterest, Reddit, Shopify and Vimeo listed as its members — will develop a best practices framework for handling harmful content and behavior online, based on five commitments to:

- Anticipate the risks of misuse as part of product design, and develop ways to prevent misuse or abuse.
- Adopt rules for user conduct and content that are clear and consistent.
- Enforce the rules. The framework includes examples of best practices for enforcement operations, including investing in the wellness and resilience of teams dealing with sensitive materials.
- Keep up with changing risks and review whether policies are effective in limiting harmful content or conduct.
- Report periodically on actions taken on complaints, enforcement and other activities related to trust and safety.

Yes, but: This is unlikely to affect the individual companies' existing approaches to moderating content created by their users. The partnership says it's not aiming to create an industry-wide definition of, say, hate speech or misinformation, but to define the internal processes companies should use to develop their own policies.

"The goal is that there should be sufficient flexibility such that the different companies can have different substantive definitions of these things and still agree on whether you have a set of institutional practices that are addressing them," said Alex Feerst, former general counsel at Medium, who is advising the group.

Between the lines: Lawmakers have been calling for increased transparency into how tech companies moderate content, as well as proposing new laws to address concerns about online harms. This new effort is a way for the companies to respond.

"Improving all the best practices and getting this out there I think is a step on the road, hopefully to establishing more trust between government and tech companies, especially as it really gets into the details of, 'Here's much more concrete information on exactly what we do,'" Feerst said.

What's next: The companies want feedback and will conduct both internal and external reviews of how they are implementing the practices.

"This puts us on a path to having both transparency, but also having external assessments of whether the company is concretely meeting these commitments," Feerst said.

Our thought bubble: Tech firms, under pressure, have been announcing lots of sweeping efforts to tackle content moderation problems. While they've gotten better about issuing reports on content takedowns, only third-party audits will be able to determine whether the companies are really successful. Feerst says an impartial outside reviewer will assess how well the companies are living up to their commitments.
