I read an article about the illusory truth effect, a phenomenon in which people are more likely to believe information they've heard multiple times. This is concerning in a world where information can be shared at the click of a button. The article specifically discusses research by several psychologists at Yale University. The researchers created fake news headlines, showed them to people alongside real headlines, and asked them to evaluate how accurate each one was. They then distracted the participants with an unrelated task before incorporating the fake headlines into a longer list of headlines to evaluate. What they found was that people rated the fake headlines as more accurate the second time they saw them.
What’s interesting to me is that, according to the graph, when a fake headline is new, people’s accuracy ratings peak at about 1.5 out of 4. When they see it again, the peak shifts to around 2.5, and once the headline is familiar, the distribution flattens out with less of a clear peak. I do find this data fairly compelling: the graph shows a real difference once people have seen the fake news again. If we were looking at a small-scale event, this shift would not be very compelling on its own. However, there are millions of people on social media, so even a slight increase in perceived accuracy means millions more people believing fake news. This is particularly concerning when so many articles get shared multiple times by multiple people. Misinformation can spread very quickly on social media, especially because people can make up anything they want. Notably, the study found that even when the information was not very plausible, people still believed it more the second time.
A quick note about this article: the study hasn’t been peer reviewed yet. The researchers took some steps to keep the study objective, but it hasn’t been through the formal review process. However, other studies in the past have examined the illusory truth effect and found similar results.
I think this study has concerning implications for the spread of information. People want to think they’re good at identifying misinformation, but they may not be as good as they believe. It’s particularly concerning because people can spread misinformation incredibly quickly. I also suspect people are more likely to believe misinformation when it fits a personal agenda: information that affirms our existing beliefs is easier, and more personally profitable, to believe than information that counters them.
To combat this, we need to get better at recognizing fake news sources, and at checking where information came from when we find it suspicious. However, this may be a difficult task for many people. The author of the Vox article said that sites like Facebook and Google need to do better at identifying and labeling misinformation. While I agree, I wonder how well this will work. I’ve seen many people believe articles from the Onion to be true, even though it’s relatively well known that the Onion is a satire site. I wonder if people will ignore labels about the accuracy of a news source when a story affirms their beliefs. I don’t think this is inherently a reason not to label misinformation, but it may mean that labeling doesn’t have as great an impact as we might hope.