A timely CRS report (updated 27 January 2021) by Jason A. Gallo and Clare Y. Cho: 'Social Media: Misinformation and Content Moderation Issues for Congress'.
Download CRS_Report_Social_Media_Misinformation_27Jan2021
Overview
Social media platforms disseminate information quickly to billions of global users. The Pew Research Center estimates that in 2019, 72% of U.S. adults used at least one social media site and that the majority of users visited the site at least once a week. Some Members of Congress are concerned about the spread of misinformation (i.e., incorrect or inaccurate information) on social media platforms and are exploring how it can be addressed by companies that operate social media sites. Other Members are concerned that social media operators’ content moderation practices may suppress speech. Both perspectives have focused on Section 230 of the Communications Act of 1934 (47 U.S.C. §230), enacted as part of the Communications Decency Act of 1996, which broadly protects operators of “interactive computer services” from liability for publishing, removing, or restricting access to another’s content.
Social media platforms enable users to create individual profiles, form networks, produce content by posting text, images, or videos, and interact with content by commenting on and sharing it with others. Social media operators may moderate the content posted on their sites by allowing certain posts and not others. They prohibit users from posting content that violates copyright law or solicits illegal activity, and some maintain policies that prohibit objectionable content (e.g., certain sexual or violent content) or content that does not contribute to the community or service that they wish to provide. As private companies, social media operators can determine what content is allowed on their sites, and content moderation decisions could be protected under the First Amendment. However, operators’ content moderation practices have created unease that these companies play an outsized role in determining what speech is allowed on their sites, with some commentators stating that operators are infringing on users’ First Amendment rights by censoring speech.
Two features of social media platforms—the user networks and the algorithmic filtering used to manage content—can contribute to the spread of misinformation. Users can build their own social networks, which affect the content that they see, including the types of misinformation they may be exposed to. Most social media operators use algorithms to sort and prioritize the content placed on their sites. These algorithms are generally built to increase user engagement, such as clicking links or commenting on posts. In particular, social media operators that rely on advertising placed next to user-generated content as their primary source of revenue have incentives to increase user engagement. These operators may be able to increase their revenue by serving more ads to users and potentially charging higher fees to advertisers. Thus, algorithms may amplify certain content, which can include misinformation, if it captures users’ attention.
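The report describes these ranking systems only at a high level. As a purely illustrative sketch of the incentive it points to, the toy Python snippet below ranks posts by a simple engagement score; the Post fields, weights, and flag are hypothetical and are not drawn from the report or from any platform's actual system.

```python
# Toy illustration (not from the CRS report): engagement-driven ranking.
# Fields and weights are hypothetical; real platform ranking systems are
# far more complex and proprietary.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    clicks: int
    comments: int
    shares: int
    flagged_as_misinformation: bool  # hypothetical moderation signal


def engagement_score(post: Post) -> float:
    """Score a post purely by engagement signals, ignoring accuracy."""
    return 1.0 * post.clicks + 3.0 * post.comments + 5.0 * post.shares


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement alone; content that attracts attention,
    including misinformation, can outrank accurate but less engaging posts."""
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("accurate-update", clicks=120, comments=4, shares=2,
             flagged_as_misinformation=False),
        Post("sensational-claim", clicks=300, comments=80, shares=150,
             flagged_as_misinformation=True),
    ]
    for p in rank_feed(feed):
        print(p.post_id, engagement_score(p))
```

In this toy example the flagged post ranks first simply because it draws more clicks, comments, and shares, which is the dynamic the report associates with engagement-optimized algorithms.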
The Coronavirus Disease 2019 (COVID-19) pandemic illustrates how social media platforms may contribute to the spread of misinformation. Part of the difficulty addressing COVID-19 misinformation is that the scientific consensus about a novel virus, its transmission pathways, and effective mitigation measures is constantly evolving as new evidence becomes available. During the pandemic, the amount and frequency of social media consumption increased. Information about COVID-19 spread rapidly on social media platforms, including inaccurate and misleading information, potentially complicating the public health response to the pandemic. Some social media operators implemented content moderation strategies, such as tagging or removing what they considered to be misinformation, while promoting what they deemed to be reliable sources of information, including content from recognized health authorities.
Congress has held hearings to examine the role social media platforms play in the dissemination of misinformation. Members of Congress have introduced legislation, much of it to amend Section 230, which could affect the content moderation practices of interactive computer services, including social media operators. In 2020, the Department of Justice also sent draft legislation amending Section 230 to Congress. Some commentators identify potential benefits of amending Section 230, while others have identified potential adverse consequences.
Congress may wish to consider the roles of the public and private sector in addressing misinformation, including who defines what constitutes misinformation. If Congress determines that action to address the spread of misinformation through social media is necessary, its options may be limited by the reality that regulation, policies, or incentives to affect one category of information may affect others. Congress may consider the First Amendment implications of potential legislative actions. Any effort to address this issue may have unintended legal, social, and economic consequences that may be difficult to foresee.
Let's cut to the chase:
Among the overarching questions regarding misinformation and content moderation practices on social media are the following:
Should Congress or the Executive Branch take action to address misinformation or content regulation?
Is action necessary to reduce the spread of misinformation or to prevent censorship?
If action to address the spread of misinformation and prevent censorship is deemed necessary, which institutions, public and private, should bear responsibility for it?
Who defines misinformation, how, for what purpose, and under what authority?
While Congress may choose not to take any action to address social media operators’ content moderation practices, if it does act, it could consider a range of legislative options, from legislation designed to support existing practices to regulation of social media operators.
The last section:
Concluding Thoughts
If Congress chooses to address the spread of misinformation on social media or content moderation practices generally, it might consider the intended scope of proposed actions, under what conditions they would be applied, and the range of potential legal, social, and economic consequences, both intended and unintended, that may result. It might consider whether any action that it takes imposes costs, monetary or otherwise, that further entrenches the market power of incumbent operators. It might also consider how U.S. actions, such as regulating social media companies’ content moderation practices, would fit within an international legal framework. Major social media operators are multinational corporations, and the internet provides access to their websites worldwide, unless governments erect firewalls to block access. Crafting legislation to address the activities of U.S.-based social media sites in other countries may be difficult, particularly if another country seeks to impose obligations that are in conflict with U.S. law. Conversely, it may not be possible for U.S. legislation to regulate the internal activities—such as algorithms or content moderation practices—of foreign-based social media platforms.
Very informative!
Enjoy!
“Content is fire. Social media is gasoline.” - Jay Baer