“We have strong policies that we continue to enforce, including banning hate organizations and removing content that praises or supports them,” a Meta spokesperson told CNN, adding that the company has been in touch with law enforcement agencies, including the FBI and Capitol Police, ahead of the anniversary. As part of those efforts, the company said, Facebook is actively monitoring content that praises the Capitol riot as well as calls to carry or use weapons in Washington, DC.
A Meta spokesperson said: “We will continue to actively monitor threats on our platform and will respond accordingly.”
A Twitter spokesperson told CNN that the company has convened an internal working group, made up of members from various departments, to ensure the platform can enforce its rules and protect users around the January 6 anniversary.
“Both before and after January 6, we took strong enforcement action against accounts and tweets that incited violence or could lead to offline harm,” the spokesperson said, adding that Twitter also maintains open lines of communication with federal officials and law enforcement agencies.
YouTube’s intelligence desk, a group responsible for proactively detecting and managing problematic content, has been monitoring trends in content and behavior related to the Capitol riot and its anniversary. Spokesperson Ivy Choi said that as of Wednesday, the company had not seen an increase in policy-violating content pushing new conspiracy theories related to January 6 or the 2020 election.
“Our systems are actively pointing viewers to authoritative channels and limiting the spread of harmful election-related misinformation,” Choi said in a statement.
These efforts come after a year in which Facebook (FB), Twitter (TWTR), YouTube (owned by Google) and other social media platforms faced intense criticism for their role in the crisis. The companies, meanwhile, have largely argued that they had strong policies in place even before the Capitol riot and have since strengthened their protections and enforcement.
As the mob escalated its attack on the Capitol on January 6 last year, vandalizing the building, ransacking congressional offices and assaulting law enforcement officers, social media platforms scrambled to respond to the fallout: first labeling then-President Trump’s posts, then removing them, and eventually suspending his accounts altogether.
But some experts question whether the platforms’ approach to content moderation has changed substantially in the past year.
“While I certainly hope they’ve learned from what happened, if they have, they have not really communicated it openly,” said Laura Edelson, a researcher of online political communication at New York University.
Edelson said this is particularly worrying because misinformation about the attack may resurface, with conspiracy theories about a stolen election reemerging around the anniversary of the insurrection. “A lot of the narratives inside the far-right movement are that [the insurrection] wasn’t that bad, and that it was actually other people who did it,” she said.
In interviews before the January 6th anniversary, some Trump supporters in Washington, DC told CNN that they believed the Democrats or the FBI were responsible for the attack.
Facebook’s response to January 6
Facebook, now a division of Meta, drew perhaps the most scrutiny of any social media platform around January 6, in part because whistleblower Frances Haugen, citing leaked internal documents, alleged that the company rolled back protective measures for the 2020 election before January 6 last year. Haugen told the US Securities and Exchange Commission in a filing that the company re-implemented some of those safeguards only after the insurrection began.
In the days after the Capitol riot, Facebook banned “Stop the Steal” content, and its researchers internally analyzed why the company had failed to stop the movement, according to documents released by Haugen (and obtained by CNN from a congressional source). Guy Rosen, Meta’s vice president of integrity, said in an October blog post that Meta also took steps to “disrupt militarized social movements” and to keep QAnon and militia organizations from organizing on Facebook, efforts the company made before and after the 2020 election.
Meta has disputed Haugen’s claims and sought to distance itself from the attack. Nick Clegg, the company’s vice president of global affairs, told CNN in October that it was “ridiculous” to blame the riot on social media. “The responsibility for the violence of January 6 and the insurrection on that day lies squarely with the people who inflicted the violence and those who encouraged it,” Clegg said.
But researchers say the company still struggles to combat misinformation and extremist content.
“We haven’t really seen any substantive changes to Facebook’s content moderation that have been publicly discussed or are externally perceptible,” Edelson said. “On the surface, they’re still using fairly basic keyword-matching tools to identify problematic content, whether it’s hate speech or misinformation.”
Meta noted in a September blog post that its artificial intelligence systems have improved at proactively removing problematic content such as hate speech. In its November Community Standards Enforcement Report, the company said views of hate speech content, as a share of views of all content, had fallen for the fourth consecutive quarter.
A new report released Tuesday by the Tech Transparency Project (TTP), a tech advocacy and research group, found that content related to the “Three Percenters” militia movement remains widely available on Facebook, with some accounts and pages using “militia” in their names or featuring well-known symbols associated with the group. When TTP researchers viewed that content, the report said, Facebook’s “suggested friends” and “related pages” features recommended accounts or pages with similar imagery. (TTP is funded in part by an organization founded by Pierre Omidyar.)
“As Americans approach the first anniversary of the insurrection, TTP has found many of the same troubling patterns on Facebook, with the company continuing to overlook militant groups that pose a threat to democracy and the rule of law,” the report said, adding that Facebook’s “algorithms and advertising tools often promote this kind of content to users.”
“We removed several of these groups for violating our policies,” Meta spokesperson Kevin McAllister said in a statement to CNN about the TTP report.
Facebook said it has removed thousands of groups, pages, profiles and other content tied to militarized social movements, and has banned militia groups including the Three Percenters. It noted that the pages and groups cited in the TTP report have relatively few followers.
To be sure, the scope of misinformation extends well beyond Facebook, including to fringe platforms such as Gab, which surged in popularity after January 6 by promising not to moderate content as the major platforms faced calls to crack down on hate speech, misinformation and violent groups.
In August, the House select committee investigating the January 6 Capitol riot sent letters to 15 social media companies, including Facebook, YouTube and Twitter, seeking to understand how misinformation spread on their platforms and what efforts foreign and domestic actors made to overturn the election.
Six days after the attack, Twitter said it had removed 70,000 accounts that spread conspiracy theories and QAnon content. Since then, the company has said it removed thousands of accounts for violating its “coordinated harmful activity” policy, and it says it prohibits violent extremist groups.
“The participation and attention of the government, civil society and the private sector are also important,” a Twitter spokesperson said. “We recognize that Twitter can play an important role, and we are committed to doing our part.”
YouTube has said that in the months leading up to the Capitol riot, it removed channels belonging to various groups later involved in the attack, such as those associated with the Proud Boys and QAnon, for violating its existing policies on hate, harassment and election integrity. During the attack and in the days that followed, the company removed livestreams of the riot and other related content that violated its policies. YouTube also said its systems are more likely to point users toward authoritative sources of information about the election.
“Over the past year, we have removed tens of thousands of videos for violating our US elections-related policies, the majority before they reached 100 views,” YouTube’s Choi said. “We remain vigilant ahead of the 2022 election, and our teams will continue to closely monitor and quickly address election misinformation.”
— CNN’s Oliver Darcy contributed to this report.