1. p-Hacking and False Discovery in A/B Testing by Ron Berman (University of Pennsylvania – The Wharton School), Leonid Pekelis (OpenDoor), Aisling Scott (Independent), and Christophe Van den Bulte (University of Pennsylvania – Marketing Department)
“p-Hacking and A/B testing have received a lot of attention recently, with many attempts to devise methods to help experimenters make better decisions and avoid false discoveries. p-Hacking is an umbrella term for behaviors that generate statistically significant effects where none exist, and it is known to produce false discoveries in academic and biomedical research. Commercial experimenters may also engage in p-hacking, and the resulting false discoveries are likely to lead to bad business decisions. It is therefore important to understand the extent and consequences of p-hacking among experimenters who run A/B tests.
We were excited when Optimizely, the leading A/B testing platform, generously allowed us to collect and analyze data on 2,101 experiments. The data stem from just before Optimizely put in place protections against p-hacking. One key finding is that p-hacking is quite common; more specifically, when running a fixed-horizon experiment, slightly more than half of experimenters stop their experiments based on the level of the p-value reached. Another key finding is that about 70% of experiments (whether p-hacked or not) involve treatments that generate no effect at all. Finally, the data allowed us to conservatively estimate the substantial economic damage that p-hacking causes by increasing the number of false discoveries.
These findings have already proven useful for Optimizely, which now uses always-valid tests that adapt to stopping behavior, and we hope they will be equally useful for researchers and practitioners alike, whether they are running A/B tests or researching and designing A/B testing platforms.” – Ron Berman
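To make the mechanics behind these findings concrete, here is a minimal simulation sketch (in Python) of an A/A test that is repeatedly “peeked at.” It is not the authors’ analysis or Optimizely’s implementation: the Gaussian data, the peeking schedule, and the mixing-prior scale are illustrative assumptions, and the always-valid p-value is built from a standard Gaussian mixture sequential probability ratio test (mSPRT), one common construction of such tests.

```python
# Illustrative sketch only: an A/A test (no true effect) inspected every
# `peek_every` observations. Stopping the fixed-horizon t-test at the first
# p < 0.05 inflates the false-positive rate well above 5%; an always-valid
# p-value built from a Gaussian mSPRT stays controlled under the same peeking.
# Assumptions: known noise sd `sigma`, illustrative mixing-prior sd `tau`.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_max, peek_every, alpha = 2000, 2000, 100, 0.05
sigma, tau = 1.0, 1.0

def msprt_lambda(n, mean_diff, sigma, tau):
    """Gaussian mixture likelihood ratio for H0: zero difference in means."""
    s2 = 2 * sigma**2 / n  # variance of the observed mean difference with n per arm
    return np.sqrt(s2 / (s2 + tau**2)) * np.exp(tau**2 * mean_diff**2 / (2 * s2 * (s2 + tau**2)))

peek_fp = msprt_fp = 0
for _ in range(n_sims):
    a = rng.normal(0.0, sigma, n_max)   # control arm, no treatment effect
    b = rng.normal(0.0, sigma, n_max)   # "treatment" arm, identical distribution
    rejected_peek = rejected_msprt = False
    p_always_valid = 1.0                # always-valid p-value: starts at 1, only shrinks
    for n in range(peek_every, n_max + 1, peek_every):
        _, p = stats.ttest_ind(a[:n], b[:n])
        rejected_peek |= (p < alpha)    # would a p-value peeker have stopped by now?
        lam = msprt_lambda(n, a[:n].mean() - b[:n].mean(), sigma, tau)
        p_always_valid = min(p_always_valid, 1.0 / lam)
        rejected_msprt |= (p_always_valid < alpha)
    peek_fp += rejected_peek
    msprt_fp += rejected_msprt

print(f"false-positive rate, stop at first p < 0.05: {peek_fp / n_sims:.3f}")   # well above 0.05
print(f"false-positive rate, always-valid mSPRT:     {msprt_fp / n_sims:.3f}")  # at or below 0.05
```

Under this kind of repeated peeking, stopping at the first nominal p &lt; 0.05 pushes the false-positive rate far above the advertised 5%, while the always-valid p-value stays at or below 5% by construction. The mSPRT shown here is only one way to build an always-valid procedure; the summary above does not specify which construction Optimizely adopted.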
2. Pulling the Goalie: Hockey and Investment Implications by Clifford S. Asness (AQR Capital Management, LLC) and Aaron Brown (New York University (NYU) – Courant Institute of Mathematical Sciences)
3. A Brief Introduction to the Basics of Game Theory by Matthew O. Jackson (Stanford University – Department of Economics)
4. Coin-Operated Capitalism by Shaanan Cohney (University of Pennsylvania Law School; University of Pennsylvania School of Engineering and Applied Sciences), David A. Hoffman (University of Pennsylvania Law School; Cultural Cognition Project at Yale Law School), Jeremy Sklaroff (University of Pennsylvania Law School; The Wharton School), and David A. Wishnick (University of Pennsylvania Law School)
5. Why Do Some Terrorist Attacks Receive More Media Attention Than Others? by Erin Kearns (University of Alabama), Allison Betus (Georgia State University), and Anthony Lemieux (Georgia State University – Global Studies Institute)
Terrorist attacks often dominate news coverage as reporters seek to provide the public with information about the event, its perpetrators, and the victims. Yet not all incidents receive equal attention. Why do some terrorist attacks get extensive media coverage while others get little or even none? Building on previous research on biases in entertainment and news media, we expect the perpetrator’s social identity to be a driving factor behind this discrepancy. Drawing from our own research and that of colleagues, we also expect media coverage to be influenced by the type of target and the number of fatalities. Finally, we expect perpetrators who are arrested to receive more coverage, particularly as their cases move through the criminal justice system.
To test our argument, we examined media coverage of all terrorist attacks within the United States between 2006 and 2015. The Global Terrorism Database (GTD) provides systematic, unbiased coding of terrorism around the world from 1970 to 2015. We drew on the GTD’s coding to identify 136 terrorism incidents that meet its definition and thus should be reported on as such in the media. Media coverage came from two sources: LexisNexis and CNN.com. We limited coverage to US-based sources published between the date of the attack and the end of 2016. To be included in our dataset, an article’s primary focus had to be the event, the perpetrator(s), or the victim(s). We identified 3,541 articles on the 136 events.
Using negative binomial regression, we estimated models to test our argument and compared it against a number of counterarguments. We found that attacks by Muslim perpetrators receive over 350% more coverage, even when controlling for target type, fatalities, and whether the perpetrator was arrested. In terms of relative effects, a non-Muslim perpetrator would have to kill about seven more people than a Muslim perpetrator to receive the same degree of coverage on average. The Boston Marathon bombing and the Fort Hood shooting comprised nearly 25% of all media coverage in our dataset, which raised the concern that these incidents were driving our results. We estimated models without these cases and the results were approximately the same; in fact, the impact of a Muslim perpetrator was slightly larger. Our findings were also robust to additional counterarguments: the attack occurring near a significant date, targeting Muslims or minorities more broadly, having an unknown perpetrator and group, not meeting all criteria of the GTD’s definition, and accounting for casualties. In sum, the media disproportionately covers attacks when the perpetrator is Muslim. This may have important implications for how Americans think and feel about both terrorism threats and the Muslim community. To better understand media coverage of terrorism, we are building on the current project to explore the factors that affect whether “terrorism” and “terrorist” are used to describe incidents and their perpetrators. – Erin M. Kearns
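For readers curious what the modeling step looks like in code, here is a minimal, hypothetical sketch of a negative binomial regression of per-incident article counts on attack characteristics, using Python’s statsmodels. The file name and column names are placeholders rather than the authors’ data or exact specification, and the GLM form below fixes the dispersion parameter for brevity; a count model that also estimates dispersion is another common choice.

```python
# Hypothetical sketch of a negative binomial regression of per-incident article
# counts on attack characteristics. The file and column names are illustrative
# placeholders, not the variables or coding used in the study summarized above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per incident: article count plus perpetrator and attack covariates.
incidents = pd.read_csv("incidents.csv")

model = smf.glm(
    "articles ~ muslim_perp + fatalities + arrested + C(target_type)",
    data=incidents,
    family=sm.families.NegativeBinomial(),  # count outcome with overdispersion
).fit()
print(model.summary())

# Exponentiated coefficients are incidence-rate ratios, i.e. multiplicative
# effects on expected coverage; a ratio of roughly 4.5 on the perpetrator
# indicator would correspond to the ~350% difference reported above.
print(np.exp(model.params))
```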