Professor Joshua Gans

Meet the Author: Professor Gans on Monopolies and AI

We’re excited to present the next instalment of our author spotlight series, featuring Professor Joshua Gans. Professor Gans is currently a Professor of Strategic Management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management at the University of Toronto (with a cross-appointment in the Department of Economics). Joshua is also Chief Economist of the University of Toronto’s Creative Destruction Lab. 

Professor Gans has worked on a swathe of topics, including monopolies and the impact of AI, and since we’ve been focusing on both of those subjects for the last two months, we were delighted to check in with him.

Q. We really enjoyed the book you co-authored on AI as a tool that helps to lower the cost of prediction. Have you seen recent developments in AI that have surprised you? Azeem Azhar wrote recently in his newsletter that the biggest input he has replaced in his business using AI tools is grad students…

I think there are implications in a few places as we’ve somehow hit an AI moment, and it’s quite interesting. A lot of the fanfare and the detraction around AI comes from a misconception of what AI really is. As soon as you use AI with the idea that you’re talking to an intelligent entity, you end up trying to evaluate how intelligent it is. That’s a bad way to approach it. You have to understand that you’re only talking to an entity that is trying to predict what response you would like to see. If you have a two-hour conversation with ChatGPT and at the end it tells you it loves you and wants to pursue a relationship with you, it’s not actually because it loves you. It’s because the conversation has gone in a direction that mirrors literature all over the world. You know, there are plenty of examples where the end response to this trope would be, “I love you.” If you can make the dialogue trope-y enough, you can tap into a consistent set of expectations. People hate to talk about AI as being just a much better autocorrect, but that’s what it is. It’s trying to respond in a way that makes the most sense.

The amazing thing is how valuable autocorrect is! The idea that you can replace a whole load of tasks that graduate students do is extraordinary – it’s just completely fascinating. There’s a difference between the people who write about AI in the New York Times and everyone else, who are now using it for other stuff. And for the ones using it for other stuff, the main danger they face is that the AI output still requires review. For example, I saw a case recently in which someone used AI to write a condolence letter, and people noticed an error which they presumably just didn’t check for. Where’s the problem there? What’s the condolence letter supposed to be? It’s not giving instructions, it’s just a template. Maybe the offensive thing here is that you spent twenty minutes writing it using AI rather than an hour on your own. But for most of the stuff that we do, it’s great to spend less time on it if you can. I didn’t read a lot of university admin emails before AI, so if they use AI, that will save time, and who cares about the difference?

We have to ask ourselves, for example with academics thinking about their students’ exams: if AI can answer an exam question to a good degree, was that a really good exam question, or was that some hurdle you designed to get students to pay minimal attention in a lecture? AI may pass a medical exam, but it can’t be a doctor. The issues arising with AI are not new: we had similar issues before with calculators and with Excel. If everyone understood the message from Prediction Machines, they would be much more effective using AI, and panic about it much less. In my department, we were playing around with AI to see if it could write an entire paper in twenty minutes. We tried it, and it’s not a great paper, but boy, does it look like it is. It has a proposition, which was wrong – interesting, but still wrong – and it has references. It’s even got the right format for an academic paper.
Does that mean that SSRN will be flooded by plausible papers that don’t mean anything? I don’t think so. But is AI going to get some researchers started on a paper and get them set up? Yes it is. I expect people to become vastly more productive by using this new AI.

Q. There’s a lot of interest in digital platforms such as Google, Amazon and Apple and their huge incumbency advantages. It seems that current antitrust legislation is unable to deal with their formidable market positions. We’ve seen huge growth, for example with Google building in Search and Hosting, and Amazon in Online Retail. How do you think about digital markets and the challenge of monopolies and monopsonies?

I was disappointed in my fifty-year-old self for looking at these cases and discovering that actually our existing tools for regulation are doing just fine. Every issue you look at, such as Facebook taking over WhatsApp, could have been stopped under previous rules – but it wasn’t tried. I think there are a few things that are a problem, such as tech monopolies tending to acquire a lot of start-up firms. Antitrust can’t currently see when there’s a pattern of acquiring smaller firms that could have become competition, or at least it hasn’t really been tested on that. We may see some of that going forward. Could we go back and examine old mergers? I think that on the jurisprudence side of the law, it’s not really fair to revisit deals once they’ve gone through. Once you’ve done a thing you can’t really go back, although I’m sure there will be discussion about that. What people forget is that at the time of these cases, things are uncertain, and of course in hindsight it’s easy to see what could have been done. I think that our typical tools of antitrust still do a good job, but there are broader issues that we are starting to face that we’re not sure how to deal with. It’s sometimes easier to think about factors like more or less competition, but then there are other factors, like deadweight loss and measures of efficiency, that interconnect and are more difficult. Partial equilibrium notions like that – are they really the right measures, market by market?

Q. A lot of the time regulators have looked at the impact of monopolies purely in terms of lowering prices – does this monopoly mean cheaper products and services for consumers? Do you think that definition is enough, given the broader impact on markets that monopolies can create?

What you have to remember is that sometimes there are only a few things that can be measured accurately, so price, for example, seems to capture a lot of what we care about. In actuality, we are relying on very textbook theories to tell us whether something looks like a bad monopoly issue, rather than something obvious that creates a bright line and triggers a process from there. The value added to discussion about monopolies is not in the straightforward cases, but in the difficult ones. A great example of this is what has happened with online retailing – there is discussion about whether online retailing is in competition with offline retailing. Yet you have firms such as Walmart that were offline at first but now have a huge online presence. It’s just difficult! The biggest variable is how motivated the regulators are to write good antitrust laws. You can see the difference between the European regulators and the US: the EU regulators were very concerned with cell phone data prices, and as a result plans are much cheaper than in the US. The legal frameworks are there – but you have to be willing to use them. The US enforcers used to be very worried about losing in court, although they seem to be less worried about that now. From an economics point of view, the main role of antitrust is not to fight cases, but to stop companies from committing the antitrust violation in the first place because their lawyers are advising them against it. The role isn’t to see them in court – it’s to prevent them getting there in the first place.

Q. I have a very specific question for you: do you think Search is a natural monopoly?

I have to declare an interest on that one, given some work I’m doing at the moment, so it’s hard to comment. But just to say, as usual, here’s one dimension of antitrust that we struggle with. Is search a monopoly? Absolutely. The issue is, will that monopoly persist into the future, and is Google doing things now that make it more likely that it will persist? That is a more difficult question. On the one hand there is a feeling that an intervention should be made now if we’re worrying about these things, but at the same time to intervene now could turn out to be the wrong thing. You might stop Google doing a whole bunch of things, but then you might also stop other competitors emerging to challenge Google. But then, putting in place restrictions will send a message to other big firms. It’s confusing!

If antitrust enforcement works as it should, we should start to see a less centralised search market. For instance, academic search is not a big earner for Google, so the question for interested parties should be: why do academics prefer using Google Scholar? It just works better than any other option out there. Other competitors are popping up, though, like Elicit. We have also seen people searching with ChatGPT, asking it to show all the studies with a sample size of 10,000 or more. That sort of innovation in the academic search space is very interesting.

Q. How do you see the collision of Big Tech and AI – are we going to see a thousand AI flowers bloom, or are well-funded players such as Google, Amazon, Apple, and Microsoft likely to dominate as they have in mobile and Search?

I think that AI, for reasons that historians will need to unpack, has a great start on remaining unmonopolised. Yes, all these big tech companies have AI, but most of them are providing tools to other researchers, who are then building on them. There is just a lot of free flow, and that is terrific. History tells us that innovation happens and then particular complements turn it into the next big thing, through network effects. For example, we used to worry about that with the Internet: what would be the thing that becomes important? We thought it would be landing pages – but it was actually the browser, and then it turned out to be search. These complements became very important. If I knew what the next big complement was going to be in AI, I wouldn’t be telling you, I would be getting in on the ground floor… If history is any guide, there is some complement to AI that will rationalise everything that’s happening. We are really only a few years into this, and I don’t think we know enough about the form of AI that will emerge over time, but I don’t think GPT-type things will be monopolised. But who knows? It’ll be interesting to watch.

These big companies also don’t really have great insight into buying and exploiting new technologies. Facebook has gone for virtual reality: I’m not so sure. One of the problems is that they have the capacity to throw a lot of money at something because of the perceived threat. Microsoft did this for years, although they finally stopped. Google and Apple have done a lot of that too, as with self-driving cars. They spent a whole lot of money on the next big thing, but it turned out not quite to be there.

Q. What role has SSRN played in your research process?

I’ve had a good run with SSRN – one year I had two of the top five downloaded papers, around the economics of the blockchain. There was huge interest in those papers, although they were very hard to publish! SSRN is huge. Back in the day when it first started (I put up papers the week that SSRN was launched), SSRN was all about the newsletters we sent out, and we would go back and forth with those. In recent times it’s acted more as a repository, and although there are other valuable sites, I use SSRN mostly because I know how to use it and update it, and I like that I can link to it from my website.

The most attractive feature is the number of downloads, because it’s great for academics to get anything that gives signals of value about what is important. SSRN produces rankings, and everyone loves a good ranking. Google Scholar was one site where people would say that its citations aren’t as credible as Web of Science’s – but in the end we all agreed we liked that we knew how to use it, and we really liked that the citation counts are a bit higher! A hundred citations on Google Scholar and two on Web of Science – that doesn’t make you feel good.

Q. With your crystal ball, do you see big impacts on big tech from recent legislation coming from either the European Union or the US? Or do you think the technical genie is well out of the regulatory bottle?

Europe has the Digital Markets Act, which is a process by which they can designate a big technology firm and change the way it deals with suppliers. As with a lot of European ‘big think’ legislation, such as GDPR, they throw a big brush at it and it turns out to have huge unintended consequences. Big companies can afford to comply, but for a small company compliance is a big share of your costs. Any legal costs to small companies really upset these markets. They are saying, ‘Can we change the balance of bargaining power between small and large players…’ It will be interesting to see!

Professor Joshua Gans’ Most Recent Papers on SSRN

A Solomonic Solution to Ownership Disputes: Theory and Applications

Mechanism Design Approaches to Blockchain Consensus

Prediction Machines, Insurance, and Protection: An Alternative Perspective on AI’s Role in Production

A Theory of Visionary Disruption

Internal Disagreement and Disruptive Technologies
