Tech giants’ transparency reports are vague

Meta, Google and Twitter have released their 2021 annual transparency reports, documenting their efforts to tackle disinformation in Australia. Despite their name, however, the reports offer only a narrow view of the companies' strategies for combating misinformation. They remain vague about the reasoning behind those strategies and how they are implemented. In doing so, they highlight the need for effective legislation to regulate Australia's digital information ecosystem.

The transparency reports are published under the voluntary code of practice of the Digital Industry Group Inc. (DIGI), which Meta, Google and Twitter joined in 2021 (along with Adobe, Apple, Microsoft, Redbubble and TikTok). DIGI and its code of practice were created after the Australian government demanded in 2019 that major digital platforms do more to address misinformation and content quality.

What do the transparency reports say?

In its latest report, Meta claims to have removed 180,000 pieces of content from Australian Facebook and Instagram pages or accounts for spreading health misinformation in 2021. It also describes several new products, such as Facebook's Climate Science Information Center, which aims to provide "Australians with authoritative information on climate change". Meta also describes initiatives such as funding a national media literacy survey and a commitment to fund training for Australian journalists in identifying disinformation.

Similarly, Twitter's report details various policies it uses to identify misinformation and curb its spread. These include alerting users when they interact with misleading tweets, directing users to authoritative information when they search for certain keywords or hashtags, and punitive measures such as deleting tweets, blocking accounts and permanently suspending users who violate company policies. In the first half of 2021, Twitter suspended 7,851 Australian accounts and removed 51,394 posts from Australian accounts.

Google reports that in 2021 it removed more than 90,000 YouTube videos uploaded from Australian IP addresses, including more than 5,000 videos containing misinformation about COVID-19. Google's report also notes that it blocked more than 657,000 advertising creatives from Australia-based advertisers for violating the company's "misrepresentation policies (misleading, clickbaiting and unacceptable business practices)".

Google's senior head of government affairs and public policy, Samantha Yorke, told The Conversation: "We recognize that misinformation and associated risks will continue to evolve, and we will reassess and adapt our measures and policies to protect people and the integrity of our services."

The Underlying Problem

When reading these reports, we should keep in mind that Meta, Twitter and Google are essentially advertising companies.

Advertising accounts for about 97% of Meta's revenue, 92% of Twitter's and 80% of Google's. These companies design their products to maximize user engagement and to extract detailed user data, which is then used for targeted advertising. Although they dominate and shape much of Australian public discourse, their primary concern is not to improve its quality or integrity. Rather, they hone their algorithms to amplify whatever content grabs users' attention most effectively.

Who decides what "misinformation" is?

Despite their apparent specificity, the reports leave out some important information.

First, while each company emphasizes its efforts to identify and remove misleading content, none of them discloses the exact criteria by which it does so, or how those criteria are applied in practice. There are currently no accepted, enforceable standards for identifying misinformation (DIGI's code of practice is voluntary), which means each company can develop and apply its own interpretation of the term. Because the companies do not disclose these criteria in their transparency reports, it is impossible to gauge the true extent of the misinformation problem on each platform, or to compare its severity across platforms.

A Twitter spokesperson told The Conversation that its misinformation policies focus on four areas: synthetic and manipulated media, civic integrity, COVID-19 misinformation, and crisis misinformation. But it is not clear how these policies are applied in practice. Meta and YouTube (which is owned by Google's parent company, Alphabet) are similarly vague about how they enforce their misinformation policies.

There's Little Context

The reports also don't provide enough quantitative context for their claims about content removal.

Although the companies give specific counts of posts deleted or accounts actioned, it is not clear what proportion of overall activity on each platform these actions represent. For example, it is difficult to interpret the claim that 51,394 Australian posts were removed from Twitter in 2021 without knowing how many posts Australians published that year. We also don't know what proportions of content are flagged in other countries, or how these numbers are changing over time. And while the reports detail various features introduced to combat misleading information (such as directing users to authoritative sources), they provide no evidence of those features' effectiveness in reducing harm.

What Next?

Meta, Google and Twitter are among the most powerful players in the Australian information landscape. Their policies can affect the well-being of individuals and of the country as a whole. Concerns about the harm caused by misinformation on these platforms have been raised in relation to the COVID-19 pandemic, federal elections and climate change, among other issues. It is crucial that they operate under transparent and enforceable policies whose effectiveness can be readily assessed and independently verified.

In March, the government of then Prime Minister Scott Morrison announced that, if re-elected, it would introduce new laws giving the Australian Communications and Media Authority "new regulatory powers" to compel big tech companies to report on harmful content on their platforms. It now falls to Anthony Albanese's government to keep that promise. Local policymakers could follow the example of their counterparts in the European Union, who recently agreed on the parameters of the Digital Services Act. The law will force big tech companies to take greater responsibility for the content that appears on their platforms.

(The author is Professor of Business Information Systems, University of Sydney)
