"*" indicates required fields

Image source: Flickr

Social media in the time of pandemic


The United States and the rest of the world are fighting an “infodemic,” a term coined by the World Health Organization. Social media platforms must work together to decisively curb the proliferation of disinformation and misinformation. Disinformation is intentionally false information spread by state actors, political organizations, individuals, or other “bad actors”; misinformation is false or misleading information spread unintentionally. During the coronavirus pandemic, it is essential that social media platforms stop the spread of misleading information that can harm public health and safety.

According to the U.S. State Department, Russia, China, and Iran have launched coordinated disinformation campaigns against the United States related to the coronavirus outbreak. All three countries rely on social media to disseminate information intended to confuse and harm the American public. Beyond state-run campaigns, social media influencers and celebrities also risk sharing harmful misinformation. A Reuters Institute study finds that prominent public figures produce only 20% of misinformation but attract 69% of all social media engagements. Platforms such as Facebook, Instagram, Twitter, and WhatsApp need to do more to combat misinformation and disinformation that undermines efforts to slow the spread of the coronavirus. This will be a challenge: social media companies have struggled for years to develop effective methods for doing so, especially since Russian interference in the 2016 election.

Current platform policies include using artificial intelligence tools to flag false information, employing human fact-checkers to remove or label misleading content, and limiting advertising of ineffective medical remedies for the virus. However, reporting by The New York Times suggests that publicly flagging misleading information could backfire. Twitter, for example, is debating whether to display badges on content proven to be misinformation; if it does, misinformation that has not yet been reviewed may appear legitimate to readers who rely on the badges. Celebrities may also share claims that look credible but are inaccurate. Tesla, Inc. and SpaceX CEO Elon Musk, for instance, recently tweeted that children are “essentially immune” to the novel coronavirus. As of May 11, 2020, the tweet had been shared 421 times and liked over 5,000 times, and Twitter has yet to remove the false claim. This type of dis- and misinformation is dangerous to public health.

Additionally, little has been done to hinder the spread of misinformation through direct messaging. Hoaxes spread rapidly through WhatsApp group chats and voice notes. In response, WhatsApp and Facebook have created coronavirus information chatbots so users can reliably find answers to frequently asked questions. This response is insufficient: it neither removes misleading information nor restricts the ability of fake accounts to rapidly forward it. Nor does it address why users find false information credible in the first place, or the credibility of the platforms themselves.

Social media platforms would be well advised to empirically study the effectiveness of their policies for stopping the spread of misinformation. Promoting World Health Organization sources may be an effective public relations strategy, but promoting certain websites does not curtail the proliferation of harmful information: Twitter, for example, fails to remove 59% of COVID-19 misinformation posts from its platform. As privately owned companies, social media platforms have the discretion to remove messaging that violates their community standards.

During a public health crisis, platforms have a responsibility to protect their users both from intentionally harmful information and from misinformed users who could put people at risk of transmitting COVID-19. The consequences of people following incorrect information about this disease can literally cost lives. Removing harmful information is the first step toward public policies that effectively undermine disinformation and misinformation, but it is only a band-aid for the bigger issue of why people believe that information in the first place.