Are we making our days shorter with artificial lighting?

Developing countries are good at doing just that: developing. What they can fail to do, however, is recognize when their development may be more detrimental than helpful to their residents.

A recent article by The Washington Post highlights a new, growing problem the world may come to face as we continue to develop and grow our infrastructure. The study found that the artificially lit area of the Earth’s surface grew by 2.2%** per year from 2012 to 2016, which is a pretty big amount of land.

** 2.2% of the Earth’s habitable land is roughly 3,000,000 km^2. The majority of results online estimate the amount of habitable land on Earth at about 150,000,000 km^2.

Scientists in this study examined high-resolution satellite images country by country (excluding regions at latitudes similar to Iceland’s, because the data was not accurate beyond a certain latitude) to measure just how much artificial lighting was growing and affecting the light in our atmosphere at night. They compared measurements from 2012 (NOTE: taken with a DIFFERENT satellite imaging technology than the 2016 images) against images from 2016. The growth they observed startled them.

This figure illustrates the lighting change between 2012 and 2016. The blue indicates decreases in light, and the red indicates an increase in light.

What concerned the scientists even more is that the satellite images were not sensitive to blue LED lights, which have recently become more commonplace in work and home spaces. These blue LEDs have been linked to sleep disorders, sleep deficiencies, and other health issues. Because the satellites do not pick up this kind of light, scientists fear the numbers produced by their study may underestimate the actual levels of light in the atmosphere. Furthermore, while LEDs may reduce the overall “brightness” of a city, the health impacts mentioned before could still be amplified as LEDs become more readily available and encouraged. The bottom line is that no one knows what the effects could be at a global scale.

The results of this study could have wide-ranging implications. Literally globe-spanning implications. Exposure to artificial light is being blamed for numerous sleep disorders. The blue LED light emitted from our phone and computer screens has been shown to keep our brains awake longer if we use the screens before bed. It makes me question whether the obscenely bright lights out on the village green are having an impact on my sleep.

If this 2.2%-per-year trend continues, within the next 50 years we could, in theory, illuminate all of Earth’s atmosphere with artificial light. What would this look like? The article doesn’t offer a picture, but could we truly get rid of nighttime altogether? I’m not sure, and the article does not really state any negative implications other than a possible increase in sleep disorders.
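As a quick sanity check on that 50-year claim, here is my own back-of-envelope arithmetic (not from the article or the study; the starting area is the rough footnote estimate above):

```python
# Compound growth of the artificially lit area at 2.2% per year.
# The 3,000,000 km^2 starting point is the rough estimate from the
# footnote above; treat it as an assumption, not a measurement.
RATE = 0.022
LIT_2016 = 3_000_000  # km^2

def lit_area(years, start=LIT_2016, rate=RATE):
    """Projected lit area after `years` more years of compound growth."""
    return start * (1 + rate) ** years

growth_factor = (1 + RATE) ** 50
print(round(growth_factor, 2))  # ~2.97: the lit area roughly triples
print(round(lit_area(50)))      # still far short of 150,000,000 km^2
```

If these numbers are anywhere close, a steady 2.2% per year roughly triples the lit area in 50 years rather than covering the planet, so the “all of Earth” scenario would seem to require the growth rate itself to accelerate.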

The study, though widely mentioned, was not actually linked by this article. I had to Google it to find the published study.

It is interesting to me that there would still be a 2.2% increase even when there are such large areas of land, possibly uninhabited or more rural, where there is little to no light. It seems like a significant number to obtain.

While this is interesting data, I find it all to be pretty logical and easy to explain: an increase in population will lead to an increase in the demand for energy and electricity, therefore leading to more lights and more light pollution. The data would be more compelling if it included some possible implications of smaller increases in atmospheric light, to compare against some of the more prominent research dealing with direct exposure to intense LED lights before bed. Furthermore, the article never actually says how much nighttime we are losing because of this increase in artificial light. I would be interested to see how much night we lose with every percent increase.

Nevertheless, if more studies similar to this one come out, perhaps it will make us think twice about our energy consumption, and turn out the lights at night.



Are female surgeons more likely to be punished than male surgeons following a mistake?

The wage gap is no secret. Nor is it a secret that there are far more male surgeons and physicians in the medical field than female ones. More and more studies are emerging that confirm the presence of gender bias.


This figure from Heather Sarsons’s paper illustrates the very large difference between the number of women and men employed as surgeons.

In a field dominated primarily by men, women in medicine face many more challenges than their male counterparts. According to a recent article by Vox, female doctors can make up to 27% less than their male counterparts in the same specialty. In fields such as medicine, it can be difficult to determine whether a gap like this is due to bias or due to differences in rank and experience. Not only do women make less, but the article goes on to say that people are less likely to forgive a female doctor for making a mistake than a male doctor.

The research was conducted by Heather Sarsons, a PhD candidate at Harvard. In her “working paper” (which I assume means it is still being revised), she designed a way to separate whether this trend was due to bias or simply due to ranking or experience. She obtained a 20% random sample of all Medicare fee-for-service claims compiled between 2008 and 2012. She then limited this data to surgical procedures, to physicians who had at least two options for referral, to surgeries performed by one surgeon, and to instances where the physician had referred the surgeon in question at least once before. This allowed her to compare similar surgeries, each with a definitive outcome (success or failure), how surgeons performed over time (relative to success and/or failure), and how their referral ratings compared between 2008 and 2012.
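Sarsons’s sample restrictions can be sketched as a simple filter. This is only an illustration: the field names below are my invention, and the real Medicare claims data has a far more complicated schema.

```python
# Hypothetical claim records; the real claims data looks nothing like this.
claims = [
    {"type": "surgery", "n_surgeons": 1, "pcp_referral_options": 2, "prior_referral": True},
    {"type": "office_visit", "n_surgeons": 1, "pcp_referral_options": 3, "prior_referral": True},
    {"type": "surgery", "n_surgeons": 2, "pcp_referral_options": 2, "prior_referral": True},
    {"type": "surgery", "n_surgeons": 1, "pcp_referral_options": 1, "prior_referral": False},
]

def in_sample(claim):
    """Apply the four restrictions described above."""
    return (claim["type"] == "surgery"              # surgical procedures only
            and claim["pcp_referral_options"] >= 2  # PCP has at least two referral options
            and claim["n_surgeons"] == 1            # performed by a single surgeon
            and claim["prior_referral"])            # PCP has referred this surgeon before

sample = [c for c in claims if in_sample(c)]
print(len(sample))  # only the first record passes all four filters
```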

In her paper, Sarsons also took numerous outside factors into account, such as patient risk, time, and other “bad outcomes”. A lot of her results section went way over my head, as she created numerous complicated statistical equations to account for these factors. She also tested other possible explanations in order to rule them out in favor of her theory. The Vox article I read did not examine these other factors; instead, it focused on the results. I cannot cover all of them, because it would take a much longer blog post to analyze all 70 pages of her research.

***NOTE: Here is a link to her paper if you are interested in looking into the complex statistical modelling that went into this research.

Her findings were in sync with what we know regarding gender bias:

“Primary Care Physicians (PCPs) increase their referrals more to a male surgeon than to a female surgeon after a good patient outcome but lower their referrals more to a female surgeon than a male surgeon after a bad outcome. Furthermore, a PCP’s experience with one female surgeon influences his or her referrals to other female surgeons in the same specialty, while an experience with a male surgeon has no impact on a PCP’s behavior toward other male surgeons.” – Heather Sarsons 

Overall, she found women lose 60% of their Medicare billings from the referring PCP per quarter when they experience a bad patient outcome, whereas men lose 30%. That’s a pretty significant difference. We would expect the number to drop, as a bad outcome in surgery is never what a PCP wants.

What is so interesting about this study is how Sarsons examined not only the statistical data but also used sociology as a lens to create other statistical scenarios that could explain this trend, rather than just arguing for her own position. I wish I could have understood the underlying statistics better. This research, though preliminary, could lead to further investigation of the origin of bias in workplaces. It could also be a gateway to explaining why these kinds of biases continue to cycle rather than disappear or resolve themselves.

Does incentivizing prescription medication make us more likely to take our medication?

My previous blog posts have focused on the negative impacts of the opioid epidemic and the increase in opioid-related deaths. Statistics like those have, no doubt, had an impact on who takes what prescriptions when. The article I read this week focused on a sort of opposite trend: the negative effects of not taking prescribed medication.

**Side note: Prescriptions can be a huge pain. I have recently been prescribed four different pills to take following my tonsillectomy, and I have been told that swallowing will be very difficult after the procedure. I’m hoping this blog post will make me more motivated to take those pills.

In the article, recently posted on The Upshot, the author cites a review done by the New England Journal of Medicine, which estimated the cost of noncompliance-related hospital visits in America at $100 billion. Not only that, but deaths due to noncompliance with medication are estimated at around 100,000 per year. That is a lot of people not taking their medication.

Further research in this article indicated that steps taken to increase compliance have been largely ineffective. One study even tried to incentivize taking prescription medication, and failed!

The study was conducted over 12 months and was composed of 1,509 individuals out of 7,179 contacted. All of the patients were between 18 and 80 years old, had suffered a myocardial infarction (heart attack), and were prescribed at least two of four pre-determined medications. Patients were split in a 2:1 ratio between treatment groups: those who received the intervention and those who received “typical” treatment. The intervention had three components per person:

  1. Entering cardiovascular patients (with prescriptions for their condition) into lotteries where they could win $5 or $50 every day for a year.
  2. Giving participants an electronic pill bottle that notified their caretakers when the patient had not taken their medication.
  3. Having a staff member remind patients of the importance of adhering to their medication.
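For the curious, a 2:1 randomization like the one described can be sketched in a few lines. The group names and the seeded shuffle are my own choices, not details from the trial:

```python
import random

def assign_arms(patient_ids, ratio=(2, 1), seed=0):
    """Shuffle patients and split them between an intervention arm and a
    control ("typical" care) arm in the given ratio."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(patient_ids)
    rng.shuffle(ids)
    cut = len(ids) * ratio[0] // sum(ratio)
    return {"intervention": ids[:cut], "control": ids[cut:]}

arms = assign_arms(range(1509))  # 1,509 enrolled patients, as in the study
print(len(arms["intervention"]), len(arms["control"]))  # 1006 503
```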

Despite all of these incentives, people still elected not to take their medication. As a result, the lengths of time from first hospitalization to second were about the same in both groups. The findings are summarized below:

“In this randomized clinical trial of 1509 patients following acute myocardial infarction, there were no statistically significant differences between study arms in time to first re-hospitalization for a vascular event or death, medication adherence, or cost.”

This is a super interesting trend to me. Why, even after all the incentives and prompting given, are people still unwilling to take their medications? How are we still having an opioid crisis if rates of noncompliance are this high? Are these trends consistent in other conditions (besides heart conditions) as well?

It would also be interesting to see the rates of noncompliance in individuals with non-life threatening conditions. I would guess that the rates would be even higher, because there is less incentive to take the pills.


Can Labeling an Ailment Increase the Likelihood a Parent Will Opt for More Dramatic Treatment?

Doctors’ opinions are (rightfully, in most cases) often taken as gospel. When we are faced with terminology and/or a diagnosis that we are unfamiliar with, and an expert who understands said unknown, the most logical option is to listen to the expert, even if the treatment options we are given may not be effective.

We have long been told that the words we say can have significant impacts not only on our mental health, but on our physical health as well. According to recent studies, these words can affect a patient’s outlook not only on their diagnosis and/or prognosis, but on the image they have of themselves and what they can and cannot handle. For instance, if a doctor uses aggressive or unfamiliar words to describe an ailment, a patient may be inclined to take more drastic measures for treatment. Again, even if they are told that the treatment may not be effective.

This behavior is feeding a commonly held fear about antibiotics and other prescription drugs: “over-diagnosing” may be contributing to the rise in antibiotic-resistant bacteria. It could also allow potentially harmful drugs to become a “quick fix” for patients’ symptoms. An article I found on FiveThirtyEight highlights research done regarding physicians’ use of words and patients’ subsequent choices of treatment.

The researchers behind one study were particularly interested in the overuse of proton pump inhibitors in infants experiencing mild acid reflux. The drugs have been shown to cause more harm than good, despite providing relief of symptoms (note: a whole other blog post could be done on the reliability of the research on proton pump inhibitors; it was cited by the original article, so I thought it would be consistent to include it). In the study, doctors gave “real parents” (an interesting choice of words by the author, I know) a hypothetical situation in which their child was spitting up and crying. One group of parents was told that their child had GERD (gastroesophageal reflux disease), and the other group had no medical diagnosis attached to the symptoms (meaning, I assume, that they were just told their child was exhibiting the symptoms, then given options for treatment). Parents who had been given the medical diagnosis of GERD were more likely to opt for prescription drugs to treat their child, even though they were told the drugs were unlikely to help.


Something I am confused about is the scaling in the study’s results. The results were obtained through an ANOVA analysis, but the study never mentions how many individuals were surveyed. This could lead to a false confirmation: even if there were no real relationship, a small enough sample could produce an apparent one by chance. The authors are, however, careful to state that labeling diseases “may” promote over-treatment, never that it “will”.
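To see why the missing sample size matters, here is a toy simulation of my own (nothing to do with the study’s actual data): two groups are drawn from the same distribution, so any gap between their means is pure chance, and those chance gaps are much larger when the groups are small.

```python
import random
import statistics

def chance_gap(n_per_group, seed):
    """Difference in means between two groups drawn from the SAME
    distribution; any gap is sampling noise, not a real effect."""
    rng = random.Random(seed)
    a = [rng.gauss(0, 1) for _ in range(n_per_group)]
    b = [rng.gauss(0, 1) for _ in range(n_per_group)]
    return abs(statistics.mean(a) - statistics.mean(b))

# Largest purely-by-chance gap across 200 simulated studies:
small_n = max(chance_gap(10, s) for s in range(200))
large_n = max(chance_gap(1000, s) for s in range(200))
print(small_n > large_n)  # small samples yield far larger spurious "effects"
```

This is the flip side of the study’s careful “may”: without knowing the sample size, readers cannot tell whether a reported effect clears this noise floor.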

In addition, the data was gathered at a single pediatric clinic, which is another indication that the study may not be representative of a larger population.

Though this study may not be the best indication of the effects of labeling and over-diagnosing, this is still a legitimate issue. Medical jargon is not widely understood, and it can be inherently scary to hear it used to describe your health. Recently, I have been having issues with my tonsils, and an interaction I had with one clinician brought up a similar kind of fear in me. He spewed a bunch of medical terms at me, then immediately told me that a tonsillectomy was the best option for treatment. At first, I felt so hopeless that I almost scheduled the surgery. However, after talking with my parents, I realized I had many more options open to me than the doctor had made it seem.

The article on FiveThirtyEight looks into labeling other diseases and the impacts they have on patients. I am interested in doing another blog post about the cancer studies they cite.




VR: More than Just a Game?

It’s probable…

In a prior blog post, I explored the opioid epidemic and its widespread effects. According to a more recent Wired article, approximately 100 people die of opioid-related causes in America each day. Discussion of possible solutions to the epidemic has become more prominent in the media, so for this week’s post I wanted to explore some of the alternative therapies being introduced. I found something pretty interesting.

Scrolling through the “Most Popular” section on Wired, I stumbled across a video entitled “Doctors Are Giving Real Pain The Virtual Treatment“. This short video illustrates research conducted jointly by doctors at Cedars-Sinai hospital and researchers from Applied VR in Los Angeles. Their aim? To explore the possibility of utilizing virtual reality (VR) as a pain reliever.

I was doubtful at first, but the article did present some very convincing evidence.

They conducted a study of 100 patients in which 50 received 10 minutes of VR therapy, while the other 50 watched “relaxing videos in 2 dimensions”. According to Dr. Spiegel, a gastroenterologist and the man in charge of implementing the research, the VR pool significantly outperformed the relaxation pool. This research is only preliminary, and the researchers are hoping to move forward and examine the effects VR can have on chronic pain.

Applied VR boasts a large collection of 3-D technology, all of the ‘experiences’ designed to combat pain. Some feature voices talking you through meditative breathing exercises while you soar through amazing scenery from around the world; others are more task-based, like ‘Bear Blast’, shown below. Their theory: the more distracted your peripheral nervous system is, the less pain you will feel.

An image from one of the first ‘experiences’ they created, called Bear Blast. This activity was shown to decrease acute pain in 25% of its initial research participants.

Prior research has shown similar decreases in pain following the use of VR. In the early 2000s, two burn patients were observed during physical therapy. Virtual reality painted a scene of frosty mountains and allowed participants to fling snowballs at snowmen, mammoths, and other creatures. Patients were found to have “significant decreases in pain and increases in mobility and range of motion” when using VR. Since this research was conducted on only two people, we cannot say with any certainty that it would apply to other, larger populations. It did, however, inspire other researchers, like Spiegel, to explore this field in more detail.

An illustration of one of the two subjects’ results from Hoffman’s experiment on VR and pain management.

Several other studies highlighted in the prior article found similar correlations in areas such as cancer treatment, post-surgical pain, and other routine medical procedures.

Though this may be exciting research, many of the studies conducted have used small populations (usually around 100 individuals or fewer). Sample sizes that small could lead to false confirmation if the results were projected onto a larger population. To be fair, none of the researchers in these studies have attempted such a projection, but someone seeing this data for the first time might be tempted to apply it to a larger population.

Another possible concern is simply that everyone reacts differently to pain. How can we say for certain that all of the individuals in the study were ranking their pain on a similar scale? How can one design a program to address every single person’s pain and/or anxiety associated with pain? Is there a level of pain at which VR can no longer distract the brain?

Though there are several pieces missing from the research that make it appear inconclusive, it is very possible that this research could expand rapidly in the near future. With the expansion of VR accessibility (the cost of an Oculus Rift recently dropped from $800 to $399) and popularity, more individuals could look into VR as a possible pain-management tool. By preventing the need for medication in the first place, VR could serve as a preemptive strike against opioid dependency.

Who knows? Maybe we will leave the doctor’s office with a VR headset instead of a pain pill prescription in the near future. For now, more research must be conducted.




Why do I enjoy haunted houses?


Turns out there really isn’t a concrete answer…at least not for now.

So first of all, I apologize for the awful photo. But I think it illustrates my feelings towards haunted houses very accurately. I love Halloween. I love pumpkins, fall, leaves, planning my costumes, cooler weather, and hoodies. But above all, I love haunted houses.

Or at least I think I do…

Look at my face in that picture. Is that enjoyment, or pure terror? Maybe it’s a mixture of both. Nevertheless, each time I decide to enter another haunted house, I find myself asking the same questions: Why am I doing this again? Why am I giving some random people money to scream in my face and scare me?

An article on the New York Times website examines these same questions. It reveals a number of possible factors that could explain why some people love getting scared while others don’t, including the following:

  1. Possible social pressures. According to a social scientist cited in the article, the social interaction with people in your group can help you gain positive experiences from the fear. If your friend is captivated by a horror movie, you’re more likely to recreate those feelings in your own mind, because it can bring you closer to that person. Once you have that positive experience, you will want to have it again and again, which may explain why people want to visit haunted houses over and over.
  2. Another psychologist suggests that people who are more inclined to take risks (“type-T” personalities) are also going to be more inclined to enjoy haunted houses. A mixture of environmental factors, genes, and early developmental experiences adds up to make this type of personality.
  3. Yet another psychologist suggests that the haunted houses may be a way of testing ourselves. Seeing how much fear we can handle.
  4. Finally, different people have different levels of dopamine, a chemical central to our brain’s response to reward. Those with more dopamine (more closely related to the type-T personality) tend to respond more positively to fear and want to repeat haunted houses or similar experiences.

Oddly enough, that is where the article basically ends. When I tried to search further into why people enjoy being scared, I was met with little to no information. Similar claims were made in article after article, each with psychologists discussing their respective theories but never citing any data.

I would be super interested to know why these studies are not more readily available. In 2013, NBC reported that haunted attractions generated roughly $300 million in revenue per year. In 2016, total Halloween spending reached $8.4 billion. Salt Lake’s own Nightmare on 13th attracts 60,000 people annually. There is obviously an interest in the Halloween industry. So why is there so little study of the reasons why? Why are people willing to spend money on scaring themselves? How much money are they willing to spend? How have haunted houses changed to accommodate people’s fears and become successful?

Obviously this post left me with more questions than answers. I hope to look more into studies about fear and how people respond to high-stress, fear inducing situations.

Can seeing fake news multiple times make it more believable?

Vox recently shared an article covering several interesting trends I had also noticed in the aftermath of the Las Vegas shooting. All over my Facebook in the days following were articles with titles such as “CONFIRMED: TWO Shooters Responsible for Las Vegas Tragedy” or “What the media won’t tell you about the Las Vegas Shooting”. Some of the “Trending Stories” on Facebook were articles about the shooter by Sputnik (a Russian propaganda outlet). TechCrunch and Vox both noted how a rumor spread by a 4chan thread ended up in the top Google search results for a period of time, naming the wrong man as the shooter. All of that is just the beginning, according to Vox.

Why is it so easy for these falsified reports to spread so quickly? The internet is wide-spread, and easy to access, but shouldn’t Google and other search engines have algorithms in place to prevent these kinds of things from happening?

Brian Resnick from Vox argues that there are not enough of those filters in place, and because of the lack of security, many of us are falling victim to an “illusion of truth”. The more fake news stories are circulated, the more we see them. The more we see them, the more likely we are to start to believe they are true. A study cited by Vox highlights these ideas.

In this study, participants were exposed to 4 true and 4 fake news stories involving the 2016 election, then asked to determine whether they believed the headlines to be true.

An example of fake news headlines given.

When participants were distracted by another task (not specified) and then shown a larger list of headlines (including some of the fake ones they had already seen), they were more likely to call the repeated stories “more accurate” after seeing them a second time. The researchers even found that putting a warning on the headlines stating the source had been discredited by third-party fact-checkers made no difference: participants still ranked the sources as “accurate”.


While the research is compelling, Vox does make it clear that it is preliminary, and has not yet gone through the peer review process. After examining the cited paper (available here) some questions were raised for me as to the validity of their claims.

First, I wonder whether choosing such a politically charged topic would lead to results as accurate as they claim. I could see it producing relatively accurate data, in that people could be more easily swayed by articles leaning towards their political bias. On the flip side, if the study included more Democrats than Republicans, and the chosen headlines opposed Democrats’ views, the participants could be less inclined to agree, skewing the results. The opposite could also be true.

**NOTE: The study could have addressed this with a questionnaire for participants, and I vaguely remember noticing something to that effect when I skimmed the paper. However, I was only able to access the paper once, so I missed that information. When I navigated away from the page and back, the website asked me to create an account, which I did not really want to do.

It was also difficult to assess the study’s reliability because the Vox article left out some key information about it. For example, we do not know whether the study was done on 50 individuals or 5,000. We do not know whether they were Democrat or Republican, male or female, nor do we know which headlines they were given or whether everyone saw the same headlines. From the first graph, you can vaguely determine the scale used to rank the headlines by accuracy, but even that involves some guesswork.

I would also be interested to know about the participants’ education in web literacy, to see whether or not that played a role in their perceptions.

While the study may appear shaky on the surface, I am not discrediting the fact that fake news can circulate and spread at alarming rates. Nor am I saying that people do not believe fake news. On a closer read (which I was not able to do), the study could prove remarkably solid, and the illusion of truth could really be a measurable thing. Because of the many questions still at play, I am simply not sure that the relationship between viewing fake news multiple times and believing it is causal.