How can Anti-Muslim Rhetoric cause Dehumanization?

On Monday, October 2nd, I was sitting in math class when a friend came to me with a look of relief on his face. I asked him about it and he told me simply, "Yesterday's shooter was not Muslim." Up until that point, I had heard about many hate crimes that Muslims experience in the United States because of terrorist acts conducted by a small group of people, but I had never witnessed the day-to-day concern about the othering of Muslims. I came across an article in Vox called "The dark psychology of dehumanization, explained," by Brian Resnick. In this article, Resnick describes some of the experiments conducted by Nour Kteily, a psychologist at Northwestern University, and his colleagues. Through a series of experiments, Kteily, Emile Bruneau, Adam Waytz, and Sarah Cotterill attempted to "measure people's levels of blatant dehumanization of other groups" and to examine the correlations between that dehumanization and support for current anti-Muslim policies.

According to psychologist Nour Kteily, dehumanization is the ability to see fellow people as less than human. This so-called “ability” allows people to think about murder and torture, universally considered taboo, as things that can be justified provided that they are used against a group considered not fully human. Kteily and his team created the following tool:

picture 1
Ascent of Man tool

This image shows inaccurate representations of human ancestors slowly evolving into a modern human. With the sliders shown in the picture, participants, who were mostly white Americans, were asked to "[rate] members of different groups ─ such as Muslims, Americans, and Swedes ─ on how evolved they are on a scale of 0 to 100." *It is important to note that these categories are mostly nationalities, with the exception of Mexican immigrants, Arabs, and Muslims. Mexican immigrants are a group classified by their immigration status and origin, Arabs are the people who belong to the Arab states (22 Arabic-speaking countries), and Muslims are the followers of Islam.

Overall, their findings show the following:

picture 2

Through the Ascent of Man tool, the scientists measured the levels of dehumanization assigned to the previous categories. The scientists write, "On average, Americans rate other Americans as being highly evolved, with an average score in the 90s. But disturbingly, many also rated Muslims, Mexican Immigrants, and Arabs as less evolved… We typically see scores that average 75, 76 for Muslims." *It appears that the target groups are ordered from high to low score; note, however, that "American" is placed at the top of the target group column even though the "European" category was rated higher.

Correlations to Dehumanization

Nour Kteily, Emile Bruneau, Adam Waytz, and Sarah Cotterill found many interesting correlations (too many to discuss individually, so I will list them). They found that people who show more willingness to dehumanize in the test above are more likely to:

  1. Show aggressive attitudes towards the Muslim world.
  2. Blame all Muslims for the actions of a few perpetrators.
  3. Support policies restricting the immigration of Arabs to the United States and the “Muslim ban.”
  4. Score higher on a measure called “social dominance orientation”, which means they “favor inequality among groups in society, with some groups dominating others.”
  5. Agree with statements such as:
    – Muslims are a potential cancer to this country
    – The attacks in San Bernardino prove it: Muslims are a threat to people from this country
  6. Support Donald Trump

Anti-Muslim Rhetoric

This article also mentions that hate crimes against Muslims in the United States are at their highest levels since 2001, which brings us to the anti-Muslim rhetoric that has circulated since the last presidential campaign. Between February and March of this year, four mosques were burned in America, but we do not call this terrorism. While Trump places temporary bans on refugees and on tourist visas for people from six Muslim-majority countries, the misconception that Muslim immigrants are dangerous keeps spreading.

Kteily and his team administered their "Ascent of Man" scale before, during, and after the Boston Marathon bombing in 2013:

picture 3
After the attack, Muslims as a whole were dehumanized significantly more than they were two months prior and six months after the attack. These results show that we tend to think of certain groups as less human when only particular members commit atrocities. In contrast, in another study, Kteily and Bruneau created a fake article that highlighted Dalia Mogahed's research. She is the director of research at the Institute for Social Policy and Understanding, a nonprofit that studies Islamophobia. After reading this article, "mostly white participants that read that Muslims actually admired Americans [didn't] dehumanize them as much on the Ascent scale." They also tried to make participants who harshly graded minorities on the Ascent of Man tool realize how extreme it was to blame all Muslims for an act committed by a few. Emile Bruneau found that if white people are asked, "Are all Christians responsible for the actions of the Westboro Baptist Church?" they are less likely to dehumanize Muslims.

Going back to the Las Vegas shooting on October 1st, I was stunned by how relieved my friend felt that the mass shooting was not committed by a Muslim. This fear is experienced by only some people because our current political climate and all of its anti-Muslim rhetoric tend to naturalize dehumanization. The writer of the article, Brian Resnick, interviewed a number of experts on dehumanization and othering and writes that they all came to the same conclusion: "The No. 1 way to combat dehumanization is simply getting to know people who are different from us." I think it is really important to understand that all the anti-Muslim news we see on social media and all the hate we are exposed to can lead us to judge others as less human and can affect our decision-making. Anti-Muslim rhetoric, in the form of policies and publicity, causes dehumanization in the sense that it gives people a reason to believe that minorities in America, such as the Islamic community, are dangerous. Resnick ends his article with the following thought: "Just as we have the mental capacity to dehumanize, we're equipped with the mental programs that forge trust and understanding. It's up to us to turn them on."

Are Trump Tweets Backed Up by Data?

trump 1

I chose an article in The New York Times titled The Markets Are Up, Unemployment Is Down. How Much Credit Should Trump Get? In this article, writers Alicia Parlapiano, Nelson D. Schwartz, and Karen Yourish use data from Yahoo Finance, Google Finance, the Bureau of Labor Statistics, and the Federal Reserve Bank of St. Louis to investigate whether the information that the current United States president, Donald Trump, provides in his tweets is accurate.

The Stock Market
trump 2

The Dow Jones Industrial Average is a price-weighted average of the stocks of 30 large American publicly traded companies. *Some of the current 30 stocks include American Express, Apple, Coca-Cola, Goldman Sachs, Intel, Microsoft, Nike, Visa, and Walt Disney.* Trump's tweet claims that the United States is experiencing the "Highest Stock Market EVER," which is not necessarily wrong. As the previous graph indicates, the stock market has maintained overall growth since the financial crisis of 2008. The new trend of the gig economy (leaning toward hiring independent contractors and freelancers instead of full-time employees), the creation of new products in the tech industry, and steady profits from S&P 500 companies are fueling the rise of stock prices. How much of this Donald Trump can take credit for is unclear.
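To make the "price-weighted" part concrete, here is a minimal sketch in Python. The share prices are invented for illustration, and the divisor is simplified (the real Dow uses a proprietary divisor adjusted for stock splits and substitutions):

```python
# Illustrative sketch of a price-weighted average like the Dow.
# Prices are hypothetical, and the divisor is simplified to the
# number of stocks; the real Dow divisor is much smaller.
prices = {
    "Apple": 155.30,
    "Goldman Sachs": 242.10,
    "Coca-Cola": 45.80,
    "Nike": 52.40,
}

total = sum(prices.values())
divisor = len(prices)  # simplified stand-in for the Dow divisor
index_value = total / divisor

# In a price-weighted index, each stock's weight is proportional to
# its share price, regardless of the company's actual size.
weights = {name: price / total for name, price in prices.items()}

print(round(index_value, 2))
print(max(weights, key=weights.get))  # the highest-priced stock dominates
```

The takeaway is that a high-priced stock like Goldman Sachs moves a price-weighted average more than a much larger company with a lower share price would.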

Unemployment Rates
trump 3

In July of this year, the unemployment rate reached a low of 4.3%, the lowest it has been since 2001. However, it is important to notice in the graph that "nearly all of that drop occurred on the watch of his predecessor, Barack Obama." In January, when Obama left office, the unemployment rate was already at 4.8%. Again, we can see that Donald Trump's tweet is not incorrect; it just paints an incomplete picture. It is also worth mentioning that, during his campaign, he continuously stated that government agencies and mainstream economists were presenting fake unemployment numbers. At one point in his campaign, he said that the real unemployment rate was "not some number below 5 percent widely cited by economists, but something like 42 percent."

Gross Domestic Product

trump 4

The gross domestic product (GDP) is "one of the primary indicators used to [measure] the health of a country's economy." Usually it is expressed as an annualized growth rate compared with the previous quarter. The United States economy "has grown 2.6 percent or more in 81 of the 145 quarters [since 1981], including 14 times during the Obama administration." On July 28, Donald Trump stated that the annual rate of 2.6% growth was an "unbelievable number" for the second quarter of the year.

trump 5

Trump tweeted an inflated figure. Either he knows that 2.6% is not an extraordinary quarterly growth rate and is trying to inflate the number by rounding it up, or he ran out of his 140 Twitter characters. I would like to dismiss the second option because his tweet contains 135 characters; I counted.
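For context on the arithmetic, here is a hedged sketch of how a quarterly GDP figure is annualized, and of what rounding 2.6 up to 3 actually does. The GDP levels below are hypothetical, not actual Bureau of Economic Analysis data:

```python
# Quarterly GDP growth is usually reported as an annualized rate:
# the quarter-over-quarter change compounded over four quarters.
def annualized_rate(gdp_now: float, gdp_prev: float) -> float:
    """Annualize one quarter's growth by compounding it four times."""
    quarterly_growth = gdp_now / gdp_prev
    return (quarterly_growth ** 4 - 1) * 100

# Hypothetical GDP levels: a ~0.64% quarterly rise annualizes to ~2.6%.
rate = annualized_rate(17234.0, 17124.0)
print(round(rate, 1))

# Rounding 2.6 to 3 is only defensible at one significant figure;
# at the precision the data is actually reported in, it inflates the number.
print(round(2.6))
```

The point of the second print is that 2.6 rounds to 3 only when you throw away the tenths digit that the official statistic actually carries.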

My problem with these two tweets by Donald Trump is that the information they present blurs the context of the data. We have been discussing in class how, in big data, it is necessary to have a human component, one that seeks to explain the context and relationships around the data in a reasonable way. I think this element is missing in many of Donald Trump's tweets. My question, "Are Trump tweets backed up by data?" has both a yes and a no component. Yes: if we ignore the history of how these numbers came to be and basic significant-figure rules, then we could say that the stock market is up, the unemployment rates are down, and the GDP is at 3%. And no: it is unethical to present information as factual when so much context is missing and numbers are being inflated. Twitter is a prominent social platform that many use as a source of information. The president of the United States should be a "trustworthy" source; is he being completely honest about the information he presents? Let's remember that ignorance is as much of a construct as knowledge is, and that a person with the vast resources and advising that Donald Trump has should be held more accountable for the fabrications they present as truth.

Data Exposure and the Knobe Effect

For this post, I chose an article named Intentionality and Morality in Human Judgement by Sudhakar Nuti. In this article, he discusses the Knobe effect, a phenomenon whereby people tend to judge that a bad side effect was caused intentionally, whereas a good side effect was not intentional (Feltz 2016). Nuti explains how Joshua Knobe showed through a survey that subjects are inclined to connect intentionality with negative side effects. My question is: can the nature of the information that people are exposed to explain the Knobe effect?

These are the two scenarios:

The CEO of a company is told, “We are thinking of starting a new program. It will help us increase profits, but it will also hurt the environment.” The company starts the new program and the environment is harmed. In an alternate scenario, the CEO is told that the same profit-increasing program will help the environment. Surely enough, upon implementation, the program helps the environment.

Joshua Knobe, an assistant professor at Yale, conducted a survey asking whether people believed the CEO had intentionally harmed or helped the environment. While the sample demographics for this particular survey are not specified in the article or anywhere on the web, these results have been replicated in other studies across multiple side effects, cultures, and ages, with similar results (Feltz 2006). About 82% of the participants said that the CEO had intentionally harmed the environment, while only 23% said that the CEO had intentionally helped it. This large gap between perceptions demonstrates the Knobe effect.

knobe
Image by Joshua Knobe

Knobe suggested that instead of looking at intentionality as a binary choice, it should be thought of as a gradient. In the diagram above, the dot and its position relative to the x-axis allow us to locate the intentionality of the side effect. This diagram, as explained by the previous percentages, indicates that people believe harmful side effects are most likely intentional, while helpful side effects are considered most likely unintentional. (Ideally, the dots in the diagram would have confidence intervals and percentages, but this is just a relative model.) Sudhakar Nuti provides evidence that these judgments are not based on emotion. He incorporates the work of Liane Young and her studies of patients with damage to the ventromedial prefrontal cortex to show that the Knobe effect "isn't actually due to people's emotions getting in the way." Through the rest of his article, he suggests that our understanding of the world is "colored by our moral judgment" but not by our sentiment. Nuti's article is very open-ended; it suggests that "there is now a search for a deeper theory that explains how moral judgments affect our different conceptions of the world," but it offers no conclusion. For this reason, I want to suggest that the idea of priming we discussed while reading Thinking, Fast and Slow by Daniel Kahneman can help explain the Knobe effect.

Priming is the idea that cues in our environment may have significant effects on behavior. When I first read the two CEO scenarios, my mind went directly to corrupt Peruvian politicians and media examples of developed countries with overwhelming carbon dioxide pollution. The Knobe effect is the phenomenon of people believing that negative side effects are intentional, but how can we expect otherwise when most of the news (and headlines) we are exposed to carries a negative connotation? I could not find exact numbers on how many positive versus negative stories media sources run, but I did go through a couple of news pages and noticed that the majority of their headlines contained threatening words. I also came across other articles that discuss why bad news dominates the headlines and how people pay more attention to it due to negativity bias.

While news priming may be only a small factor behind most people's belief that harmful side effects are more likely intentional than helpful ones, I think it is an area that would be interesting to explore. To test whether the Knobe effect is related to priming, I would expose participants to positive, negative, or varied news and see whether this changed the number of participants who say that the CEO intentionally helped the environment. I think the study of data exposure and its effect on behavioral perceptions is an area this particular study would benefit from. If we were more exposed to news about ordinary people doing incredible things, we might even reconsider the morality of Knobe's CEO. The nature of the information we are exposed to can deeply impact our actions; why shouldn't it affect the way we perceive morality in other people?
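As a sketch of how the experiment I am proposing could be analyzed, here is a simple two-proportion z-test comparing the share of "intentionally helped" answers between two priming conditions. All the counts are invented for illustration; a real study would collect them from participants:

```python
import math

def two_proportion_z(yes_a: int, n_a: int, yes_b: int, n_b: int) -> float:
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical outcome: 23 of 100 say "intentionally helped" after
# negative-news priming, versus 40 of 100 after positive-news priming.
z = two_proportion_z(40, 100, 23, 100)
print(round(z, 2))  # |z| > 1.96 would suggest priming shifted judgments
```

With these made-up numbers the z statistic is well above 1.96, which in a real study would suggest the priming condition shifted people's intentionality judgments.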

Can Big Data Decrease the Number of College Dropouts?

College-dropouts
Image retrieved from Tuition.io

For this week I chose an article titled Will You Graduate? Ask Big Data from The New York Times. The article does a great job presenting some of the positive and negative outcomes of the use of big data in colleges and even elaborates on the idea of developing ethical guidelines for its use. I picked this article because last class we mentioned that positive consequences of the use of big data exist and, even though there is room for improvement, the use of big data in academics presents a promising future.

The article's central idea is using predictive analytics to foretell when students will be in danger of dropping out. The process consists of tracking the academic paths of successful students, such as higher scores in introductory classes, to predict which students will need more university resources to graduate. Dr. Richard Sluder from Middle Tennessee State University says that before predictive analytics, many D grades went unnoticed because advisers were mainly monitoring GPA, not grades by course. Dr. Sluder then explains that big data is allowing advisers to understand that a lower grade in certain courses, especially those that involve reading comprehension or basic mathematics, can indicate which students need help. The use of resources like writing coaching and tutoring is an important factor in determining whether students will drop out. In this case, big data allows institutions to target students who need extra help, which I think will both decrease the number of college dropouts and create a positive feedback loop in which more people learn about campus resources.
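Here is a toy sketch of the kind of course-level early-warning rule the article describes. Everything in it, the course names, the grade threshold, and the single-rule model, is an assumption for illustration, not MTSU's actual system, which would use far richer data:

```python
# Flag students whose grades in key introductory courses fall below a
# threshold, even when their overall GPA still looks passable.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
KEY_COURSES = {"Intro Writing", "Basic Mathematics"}  # hypothetical predictors

def flag_at_risk(transcript: dict) -> bool:
    """Flag a student if any key introductory course grade is a D or worse."""
    return any(
        GRADE_POINTS[grade] <= 1.0
        for course, grade in transcript.items()
        if course in KEY_COURSES
    )

student = {"Intro Writing": "D", "Basic Mathematics": "B", "History": "A"}
gpa = sum(GRADE_POINTS[g] for g in student.values()) / len(student)

print(round(gpa, 2))         # GPA alone looks unremarkable
print(flag_at_risk(student))  # but the D in a key course raises a flag
```

The point of the example is exactly Dr. Sluder's: a GPA-only view averages away the one D that matters, while a course-level rule catches it.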

Other areas of improvement come from the advising department. Even though the use of big data in academic analysis is fairly new compared to its use in other areas, many institutions are already seeing positive outcomes and encouraging them. For example, Georgia State has invested significantly in advising because of analytics results; instead of a 1:750 adviser-to-student ratio, it now has a 1:300 ratio. Another example is Stanford University, which developed a digital tool based on 15 years of data that helps students choose among 5,000 undergraduate classes. Of this, Mitchell L. Stevens, an associate professor who led the development of the tool, says: "No single adviser, however wise and alert, can possibly be aware of all the instructional opportunities." What I really like about this overall improvement is the combination of data analytics and human expertise. Because academics is such a personal topic, many institutions are making the good decision of not letting data guide every choice. The previous examples show that academic big data, when combined with advising, can be successful.

The last "positive" outcome of data analytics, according to the article, is a little more controversial. Personally, I feel more comfortable with the idea of using previous, anonymous data to lay out the best academic path for a student. However, what Sudha Ram, the director of the Center for Business Intelligence and Analytics at the University of Arizona, is doing is quite different. According to her research, students who are not socially integrated into college tend to drop out. Because of this, she is now observing freshman conduct by tracking the information on students' identification cards when they access the library, gym, cafeteria, etc. As we discussed in class, the fact that data is available does not mean that its use is ethical; there is a risk that students' details can become public, and tracking students' actions compromises their right to privacy.

college

There are other problems associated with the use of big data to predict and prevent college dropouts. Students whose initial academic performance is low can be discouraged from trying harder in their chosen field because data patterns can produce a feeling of predestination. Martin Kurzweil, a program director at the education research organization Ithaka S+R, also expresses his concern that predictions could present a temptation to "weed out at-risk students to improve a school's ranking." (For more drama, see the coverage of Mount St. Mary's University ex-president Simon Newman and his administration's plan to "cull struggling freshmen as a part of an effort to improve retention numbers.") However, I think the use of big data in academics is quite promising for the following reason:

“In June, Ithaka S+R and a team from Stanford brought together 73 specialists from universities, analytics companies, foundations and the Department of Education for three days of discussion on developing standards and ethical guidelines for big data on college campuses.”

This discussion represents an effort to regulate the power that big data can give institutions. While I believe we are still far from developing guidelines for the use of big data in general, standardizing its use in smaller areas can be a way to tackle this huge task. Big data, used correctly, is decreasing the number of college dropouts, demonstrating that big data can have positive outcomes… shocking, right?

Dominant Leaders and Economic Uncertainty

I read an interesting article from Harvard Business Review titled Why We Prefer Dominant Leaders in Uncertain Times by Hemant Kakkar and Niro Sivanathan. I chose this article because it took me back to the section "Attention and Effort" in Thinking, Fast and Slow, which says that System 1 takes over in emergencies and assigns total priority to self-protective actions (35). In the article, the authors explain a series of observations they made to test the correlation between economic uncertainty and support for dominant leaders. With this correlation in mind, they hypothesized that in times when individuals feel they lack a sense of personal control, they "try to compensate by supporting leaders who they believe hold greater agency and control." The whole setup for their three experiments is very neat, in my opinion, because it takes multiple factors into consideration to account for economic uncertainty, and the three experiments form a sequence to test whether their findings can be generalized to other parts of the world.

The researchers focused on the preferences of socioeconomic groups, divided by zip code, in regard to two alternative styles of leadership: dominance and prestige. Kakkar and Sivanathan's proposal for these two leadership styles is rooted in their previous research in evolutionary and social psychology, which provided a basis for placing the political leaders compared in the experiments on a dominance-prestige scale. Additionally, the economic uncertainty of an area was calculated by aggregating its poverty, unemployment, and housing vacancy rates.
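One plausible way to aggregate those three rates into a single uncertainty score is to standardize each indicator across areas and average the z-scores. The authors' exact aggregation method is not specified in the article, so this sketch, with invented zip-code data, is only an assumption about how such an index might work:

```python
import statistics

def zscores(values):
    """Standardize a list of values: (value - mean) / population std dev."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical rates (%) per zip code: (poverty, unemployment, vacancy).
areas = {
    "zip_A": (8.0, 4.1, 5.0),
    "zip_B": (22.0, 9.3, 12.0),
    "zip_C": (14.0, 6.0, 8.0),
}

# Standardize each indicator across areas, then average per area.
columns = list(zip(*areas.values()))
standardized = list(zip(*(zscores(col) for col in columns)))
uncertainty = {name: statistics.mean(zs) for name, zs in zip(areas, standardized)}

print(max(uncertainty, key=uncertainty.get))  # highest-uncertainty area
```

Standardizing first keeps one indicator (say, poverty, which runs on a larger scale than vacancy) from dominating the composite score.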

hilarry
Retrieved from: CNN

The first experiment involved 750 random participants from 46 states in the United States. Their voting preferences, political ideologies, and demographic characteristics were recorded "on the day of the third and final presidential debate" to ensure the participants had had sufficient exposure to the candidates. I think both the sample size and the moment the data was collected were critical for the eventual generalizability of the findings. I appreciate how many different states and participants per state (about 16) were included in this study, and the fact that the researchers selected a pertinent time when the test subjects were likely to be well informed. To place their findings in context, Kakkar and Sivanathan used a pretest in which a separate group of people indicated Clinton's and Trump's levels of dominance and prestige. As a result, Trump was rated significantly higher on dominance, which is associated with assertiveness, confidence, control, decisiveness, narcissism, aggression, and uncooperativeness, and Clinton was rated significantly higher on prestige, which is associated with respect, admiration, and high esteem. About their first test, the researchers write:

After controlling for participants’ ideology, demographics, personal income, and time spent living in the zip code, as well as the total population and population density of the zip code, we found that the greater the economic uncertainty in the area, the more people preferred voting for Trump.

They used these results to conclude that economic uncertainty is correlated with people's preference for dominant leaders, but they wanted to test this hypothesis outside the context of the last election to make sure the experiment was not influenced by general impressions of Hillary Clinton and Donald Trump. Kakkar and Sivanathan extended the experiment using similar procedures. This time they asked about 1,400 participants from all 50 states to participate and based the questions on local leaders who demonstrated more dominance or more prestige. Because the results were remarkably similar, the researchers were inclined to generalize their findings beyond the United States. With data from the World Values Survey, a global research project carried out by a large network of social scientists, and unemployment data provided by the World Bank, they further explored this connection. While I understand that generalizations like this create a greater degree of uncertainty in the results because of all the external variables, I think this research is a great starting point for identifying stronger connections between economic doubts and types of leaders, especially in countries that have a history of dictatorship and political abuse.

Reading this article helped me connect some of the decision-making ideas we discussed in class to a real-life event. In Thinking, Fast and Slow, the idea of self-protective actions controlled by System 1 in emergencies seemed to me more like an ancient survival instinct. However, after analyzing the correlation between economic uncertainty and dominant leaders, and tying this idea to the desire to restore a sense of personal control, it makes more sense to me why some people would decide to vote for someone like Donald Trump. When the election results came out, I was really shocked because nobody close to me at Westminster had ever expressed approval for this candidate. Looking back, I can now see that I live in quite an affluent area, where economic instability may not be as high as in other places. Because of this, selecting a dominant leader like Trump is not perceived as a self-protective action by me or by many of my peers. This is not true for many people living a different economic reality.

The article states that the implications of these findings are worrisome because "dominant leaders are propped into power under uncertainty, but once in power they can fuel more uncertainty and further solidify their appeal." I think this article puts many things into question, among them the economic uncertainty that can prime our decisions and the future consequences of acting in a seemingly self-protective way. This article teaches us that it is really important to use some of our System 2 thinking, realizing that we are prone to choose some electoral candidates over others, and to give them power, based on our current financial situation.

Data privacy protection laws in Russia

lock

I read an article in The New York Times titled Russia Threatens to Block Facebook Over Data Storage, and it took me back to some of the things we discussed in class about big data. The article describes how the Russian federal communications agency wants to make the American company Facebook comply with the country's laws on personal data, which say that the personal information of Russians must be stored locally. The article refers to the Russian Federal Law on Personal Data No. 152-FZ, which requires data operators to:

Take necessary organizational and technical measures, including the use of encrypting (ciphering), to protect personal data from unlawful or accidental access to them, destruction, modification, blocking, copying, [and] distribution of personal data.

The previous citation is written in a way that makes the privacy of personal data a priority for the Russian government. However, I want to point out that this law protects personal data against "unlawful or accidental access"; it says nothing about whether this information will be accessible to third parties approved by the law, or to the state itself. When I was living in China, I experienced a similar situation. In China (Hong Kong has some exceptions), internet sites like Facebook, Google, YouTube, Twitter, and other social media services are blocked. Initially, I thought this approach was a socialist attempt to gain more control over people and an opportunity to create apps that could serve both as social media platforms and as examination tools for government agencies. However, our later discussions about big data got me started on the idea that if certain companies and governments have access to big data that independent researchers cannot question, then the topic of data protection should be discussed in a new light. My question is: with laws like the one stated above, does the Russian government want to control its people or protect them?

According to the NY Times article, Russia's "most recent step to crack down on Internet freedom" was blocking virtual private networks (VPNs), which hide routing connections through servers in other countries (the foreign community in China uses these services quite often). I want to make it clear that blocking VPNs is not as uncommon as this article makes it sound. Netflix, for example, does not allow the use of VPNs or proxy servers because its content varies according to your geographical location. It is a very common service using a very common practice, yet we do not really think about it unless the news seems to disagree. The article also claims that "the law obliging companies to store personal data about Russian citizens in Russia… has been widely viewed as the Kremlin's attempt to expand control over the Internet." The Kremlin is the center of Russian political affairs, and the term is used to refer to the Russian government. I think that claiming the Russian government is trying to take over the internet is out of place in an article that offers little evidence and sounds completely one-sided.

So here is the other side of the argument in an interesting excerpt of the Russian Federal Law on Personal Data:

This Federal Law is aimed at ensuring the protection of the rights and freedoms of a human being and a citizen in the course of processing his [or her] personal data, including protection of the rights to inviolability of private, personal and family life.

I have copied part of the translation of the Russian federal law above because it is important that we know both sides of the argument. While the article suggests that the Russian government seeks control over Russian citizens, the law says its goal is actually to protect the rights and freedoms of its citizens by making sure their data is safe. That said, let's also not forget that the "unlawful or accidental access" mentioned in the first excerpt could mean that "the law" itself may be able to access this information. We have been discussing in class some questions for big data, and maybe these Russian laws are an attempt to regulate the consequences that might stem from a company like Facebook having access to a ridiculous amount of information. I am not entirely sure how to answer my own question because there are so many perspectives to take into consideration; what seems clear to me is that exploring different stances like the two above, and not being too quick to judge, is essential for a discussion about big data.

Marshmallows and Self-Control

I read an article named "How children's self-control has changed in the past 50 years" by Christopher Ingraham in the Washington Post. I had heard about the marshmallow experiment before: you place a kid in front of a marshmallow and explain that if they wait long enough without eating it, they will get a second one. The article claims that, by using the marshmallow experiment as an indicator of self-control, a "child's ability to delay gratification" was measured. Is this article correctly presenting information about the original study it references? Is the experiment a reliable one?

My first reaction was that the article was sketchy. I say this because many of the links in the text pointed to other pages on the same website, and because there was no link to, or name of, the study it was discussing. After some research, I found the study, Kids These Days, and started comparing some of the information presented. The Washington Post article presents a graph of the data collected over a period of 50 years. This graph has marshmallows and a line of best fit; however, the line does not seem to account for some of the peaks around 1997 and 2007. When I compared this graph to the one in the study, I noticed that a couple of very important elements had been dropped while the marshmallows were being added. First, the size of the bubbles in the original study varies to indicate the standard errors: smaller bubbles correspond to studies with a larger number of participants and, therefore, a smaller standard error. Second, the black bubbles indicate the outliers of the experiment, and those values could have altered the trend line had they not been flagged.

chart 1
1. Graph retrieved from "How children's self-control has changed in the past 50 years"
chart 2
2. Graph retrieved from Kids These Days by John Protzko
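To see why the flagged outliers matter, here is a minimal sketch with made-up numbers (not Protzko's data, which I do not have in raw form): a couple of inflated points, like the peaks around 1997 and 2007, noticeably shift an ordinary least-squares trend line.

```python
# Sketch: how including vs. excluding flagged outliers changes
# a least-squares trend line. All numbers below are invented.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical "year vs. minutes waited" points with two high outliers.
years   = [1970, 1980, 1990, 1997, 2000, 2007, 2010, 2017]
minutes = [3.0,  3.5,  4.0,  9.5,  4.8,  10.0, 5.2,  5.8]

outlier_years = {1997, 2007}  # the studies marked with black bubbles
kept = [(x, y) for x, y in zip(years, minutes) if x not in outlier_years]

slope_all  = ols_slope(years, minutes)
slope_kept = ols_slope(*zip(*kept))
print(slope_all, slope_kept)
```

Both slopes come out positive here, but they differ, which is exactly why a graph that hides which points are outliers can quietly change the story the trend line tells.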

I determined that this article is not reliable. It uses some of the study's claims out of context and introduces unrelated information (like Greek playwriting from 418 B.C.). Usually, knowing that the Washington Post is an acclaimed news source, I would not dare criticize the information it presents. This gave me a moment to reflect on the authority over information that famous newspapers have. Personally, I tend to give more credit to something published by a well-known outlet than to, say, a Facebook post. This time the article did not present the study's information in a pertinent manner, and that will probably make me double-check sources from this site more often.

I decided to concentrate more on the actual study. A researcher from the University of California named John Protzko collected and analyzed data from 30 published marshmallow-test trials between 1968 and 2017. The raw data presented in the study shows that many factors could be responsible for the increase in the number of minutes it took for children to give in and try the marshmallow: there is variation in characteristics such as the average age of the kids, their socioeconomic status, and their countries of residence. We have seen in class how studies are meant to be specific. Even if the goal is to investigate the change in children's self-control over the years, I think it would be optimal to evaluate the same area with kids of similar ages and similar socioeconomic backgrounds. Also, the document presents a pie chart indicating the responses of 260 experts in the field of cognitive development when asked to predict whether the latest generations have better self-control. The pie chart contains a pie chart within a pie chart (inception), which is honestly a very confusing way to present a couple of percentages.

pie chart
Image retrieved from Kids These Days by John Protzko
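The bubble sizes in the original graph hint at a standard idea in this kind of analysis: weighting each study by the inverse of its squared standard error, so that large, precise studies pull the combined estimate harder than small, noisy ones. A toy sketch with invented numbers (I am assuming inverse-variance weighting here; the study's exact method may differ):

```python
# Sketch of inverse-variance weighting with made-up numbers:
# three hypothetical studies reporting mean wait time (minutes).
effects = [4.0, 5.0, 12.0]   # the last study has an extreme estimate
ses     = [0.5, 0.4, 3.0]    # ...but also few participants (large SE)

# Weight each study by 1 / SE^2.
weights = [1 / se ** 2 for se in ses]

unweighted = sum(effects) / len(effects)
weighted = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(unweighted, weighted)
```

The unweighted average is dragged toward the noisy third study, while the weighted one stays close to the two precise studies, which is the information the bubble sizes encode and the newspaper's marshmallow graph throws away.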

In conclusion, I think that both of the sources I investigated could have done a better job presenting information. The article made claims out of context and, rather than summarizing or commenting on the study, it simply stated over and over that children's self-control has increased. Additionally, the researcher could have done a better job accounting for the differences in origin among his own sources. Making claims that connect self-control, or a child's ability to delay gratification, to marshmallow eating seems a little too bold to me. I would like to know if there is further research that follows up with the young participants to see whether there is a correlation between the time it took them to eat the marshmallow and their later development. For now, this has been a really eye-opening experience about checking sources and not trusting every single graph just because the y-axis seems to make sense.