Weekly Top 5 Papers – February 18, 2019

1. Global Factor Premiums by Guido Baltussen (Erasmus University Rotterdam (EUR)) and Laurens Swinkels (Erasmus University Rotterdam (EUR)) and Pim van Vliet (Robeco Asset Management – Quantitative Investing)

This paper presents very strong evidence on the main strategies underlying factor-based investing. Over the years, several anomalous but persistent patterns in returns have been discovered by fellow academics and ourselves. These patterns are also known as ‘return factors’ and are a hot topic in the investment industry, with tremendous growth in assets managed based on factors.

At the same time, many studies turn out to be hard to replicate. There is a bias toward positive results, which is referred to as ‘p-hacking’. This p-hacking is a serious concern, but it can be addressed, for example by raising the statistical bar. Furthermore, replication studies have become more common in the social sciences. Campbell Harvey put p-hacking on the financial research agenda with his AFA Presidential Address.

Nobody knows for sure whether factors will keep working in the future. But what if the results were fake to start with? We therefore apply the same cures proposed by leading scientists: replicate previous studies, raise the statistical bar, and use extensive, deep datasets. It is our duty to apply the highest possible standards when doing research.

Over the past few years, we have constructed a very extensive and deep historical dataset stretching back to 1800, with which we test 24 global factor premiums. We replicate several previous studies, which typically go back ‘only’ 30-40 years, and find some very strong and remarkable results. – Guido Baltussen, Laurens Swinkels, and Pim van Vliet
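For readers curious what “raising the statistical bar” means in practice, here is a minimal, illustrative sketch (not from the paper, and using simulated rather than historical returns): it computes the t-statistic of a hypothetical factor’s mean monthly return and checks it against both the conventional threshold of about 2 and the stricter threshold of 3 that Campbell Harvey and co-authors have advocated to guard against p-hacking when many candidate factors are tested.

```python
# Illustrative sketch only: simulated monthly excess returns for one factor.
# The paper itself uses deep historical data back to 1800 across 24 premiums.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical factor: 0.3% average monthly excess return, 3% monthly volatility,
# observed over roughly 40 years of monthly data.
monthly_returns = rng.normal(loc=0.003, scale=0.03, size=12 * 40)

mean = monthly_returns.mean()
std_error = monthly_returns.std(ddof=1) / np.sqrt(len(monthly_returns))
t_stat = mean / std_error

print(f"t-statistic: {t_stat:.2f}")
print("Clears the conventional bar (t > 2):", t_stat > 2)
print("Clears the raised bar (t > 3):     ", t_stat > 3)
```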


2. Alice’s Adventures in Factorland: Three Blunders That Plague Factor Investing by Robert D. Arnott (Research Affiliates, LLC) and Campbell R. Harvey (Duke University – Fuqua School of Business) and Vitali Kalesnik (Research Affiliates LLC) and Juhani T. Linnainmaa (USC Marshall School of Business)

3. Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice by Rashida Richardson (AI Now Institute) and Jason Schultz (New York University School of Law) and Kate Crawford (AI Now Institute)

We are at a critical moment in the debate over the use of predictive systems for policing. From the Deputy Attorney General to the Mayor of Baltimore, we’ve seen more public officials turn to predictive policing and other computational techniques both to improve the efficient use of law enforcement resources and to counteract problems of human bias and discrimination. Yet even as these tools are being tested and procured, there continues to be a lack of transparency and public oversight concerning the risks they may pose. While some studies have documented concerns about bias, fairness, and accountability generally, none have looked at the risks of reinforcing bias in jurisdictions where documented corruption and unlawful police practices have potentially influenced the data that is used to “train” the underlying model of crime patterns.

In this paper, we provide the first study of these risks. Specifically, we document overlaps between 13 jurisdictions where legal adjudications have found so-called “dirty” police practices and public records indicating interest or efforts to develop predictive systems. We then identify various categories of risk that stem from implementing such systems, including the potential for illegal or unethical police practices to create “dirty data” – data that could bias the outputs of the predictive systems and create further bias via feedback loops, data sharing practices, and so on. In teasing apart these risks, we identify a spectrum ranging from cases where the link between dirty police practices and dirty data is most direct – for example, person-based systems such as the Chicago “heat list” – to systems where the link is less certain – for example, place-based systems that focus on the location of potential criminal activity rather than on specific people. However, even for place-based systems, findings of systemic corruption in law enforcement raise serious concerns over the use of any such data for future predictions – concerns that governments and police technology vendors have done some work to redress, but not nearly enough. As discussions on how best to use predictive technologies in public services move forward, we must closely examine the risks that problematic and illegal historical practices create for data systems. It is extremely difficult to weed out the ‘dirty data’ from the ‘clean data.’ Instead, stringent public review and transparent accountability mechanisms are needed to ensure these dark legacies are not perpetuated under the guise of technological progress. – Rashida Richardson, Jason Schultz, and Kate Crawford

4. The Decline of Computers As a General Purpose Technology: Why Deep Learning and the End of Moore’s Law are Fragmenting Computing by Neil Thompson (MIT Computer Science and Artificial Intelligence Lab (CSAIL)) and Svenja Spanuth (RWTH Aachen University)

In the past 50 years, computers have fundamentally reshaped our society. This paper is part of a research stream to understand how this has happened and what the future will look like. In previous work, one of the authors (Thompson) looked at how one of the most important technical trends in computing (Moore’s Law) made firms more productive. In other work (in review at Science), he also looked at what technical options will be available for getting more computing performance as Moore’s Law winds down. In this work, the authors look at how the economics of computing are changing as Moore’s Law comes to an end. They find that a key economic force that was pushing towards mutually compatible, ‘lift all boats’ computing is now coming to an end. In its place, a force is starting to fragment computing, producing ‘winners’ and ‘losers’ that use very different technology and get very different economic outcomes. – Neil Thompson

5. ‘A Diamond is Forever’ and Other Fairy Tales: The Relationship between Wedding Expenses and Marriage Duration by Andrew Francis-Tan (National University of Singapore (NUS) – Lee Kuan Yew School of Public Policy) and Hugo M. Mialon (Emory University – Department of Economics)
