
Assessing current platforms' attempts to curb misinformation

  • alysahorton
  • Apr 19
  • 4 min read

Meta (Facebook & Instagram):


Summary: Meta is attempting to transform fact-checking on Facebook and Instagram into user-led accountability. According to Meta, the “Community Notes” feature launched in April 2025 as a test of whether users themselves can help control the spread of misinformation. Meta has brought “Community Notes” only to the United States; in the rest of the world, the business giant still relies on independent fact-checkers.


In Practice: Observers outside the United States have examined Meta’s new fact-checking approach and noted many similarities to X. An article from France 24 interviewed international researchers who said the “Community Notes” feature could help make American users’ feeds more accurate. Community notes appear at the bottom of a post and can provide additional context or corrections. Mark Zuckerberg, Meta’s CEO, said fact-checking had become “a program intended to inform (that) too often became a tool to censor.” Zuckerberg is using Community Notes as a way to keep content up, even if it’s false, while providing additional information or correction alongside it.


Personal Experience: I am on both Facebook and Instagram. As a user of Meta’s biggest platforms, I have seen the impact misinformation can have on people in my circle. My mom, who regularly keeps up with the news and tries to critically analyze information, has fallen for misinformation because of AI-generated images and captions that appear on Facebook. AI-generated spam is infiltrating more corners of social media, and the steady stream of false information makes it harder for people to discern what is true and what isn’t. According to an NPR article, these AI-generated images are increasingly surfacing on Facebook as content suggestions, even for people who don’t follow the accounts posting them. NPR also said the motive behind these falsehoods is unclear because Meta hasn’t yet rolled out a widespread financial incentive for users — but it may be headed that way, according to ProPublica.


Evaluation: Meta has experienced years of misinformation on both Facebook and Instagram. It appears the company is looking to try something new, but the timing is perplexing given Zuckerberg’s close connections with President Donald Trump. A Vox opinion article said this change reflected “a willingness among tech companies to cater to Trump.” By keeping up content that contains falsehoods, it is likely misinformation will perpetuate and more echo chambers could emerge.


Improvement Suggestions: 

ProPublica, known as a newsroom that “investigates abuses of power,” wrote an article criticizing Meta’s changes and reviewing the company’s history of attempting to fight misinformation. ProPublica author Craig Silverman explains that Meta is also planning to start paying users for viral content. By paying out users while taking away third-party fact-checking, I believe Meta could be incentivizing users to spread outlandish information that can go viral in exchange for money. I believe Meta should offer both Community Notes and third-party fact-checking to ensure audiences have transparency about the information they are absorbing.


YouTube:


Summary: YouTube uses what it calls the “4 Rs” as guiding principles for combating misinformation. The company says it will “remove content that violates our policies, reduce recommendations of borderline content, raise up authoritative sources for news and information, and reward trusted creators.” YouTube takes down content it deems a threat to the public, with extra emphasis on removing election and vaccine misinformation.


In Practice: Since 2022, YouTube has faced criticism from numerous fact-checking organizations and citizens worldwide. That year, more than 80 fact-checking organizations signed a letter to YouTube demanding the company do more to prevent misinformation from circulating on its platform. In practice, YouTube has been unable to remove all the videos that violate its misinformation guidelines.


Personal Experience: From what I’ve learned in media law classes, I think YouTube is in a complex position when it simply removes content it deems “misinformation.” While in some cases I believe content could hurt the public at large, in others YouTube could slide down a slippery slope of censorship. One Google user filed a public complaint about YouTube’s policy, saying “comments I make on controversial or general topics are deleted or just flat or not posted…” The user’s complaint raises a valid concern, although I can’t fully defend it because I am unsure of the content they were trying to post.


Evaluation: An article by Poynter followed up on the 2022 letter from the fact-checking organizations and reported that fact-checkers around the world remain dissatisfied with YouTube’s policies. According to the article, a leading concern is live streams “with falsehoods (that) racked up hundreds of thousands of views.” YouTube’s policies seem clear for posted content, but livestreams appear to be a separate problem the platform hasn’t yet attempted to conquer.


Improvement Suggestions: YouTube should add a feature that attaches context, warnings or community notes to a video. Outright removing or blocking content because it might contain misinformation could lead to censorship. To avoid that, YouTube could instead apply warnings to specific users to prevent falsehoods from perpetuating on livestreams.
