We’re excited to present our first author spotlight blog post, featuring Professor Luciano Floridi. Professor Floridi is currently a Professor of Philosophy and Ethics of Information at the University of Oxford. He also holds a position as a Professor of Sociology of Culture and Communication at the University of Bologna.
In June, Professor Floridi will be leaving Oxford to become the Founding Director of the Digital Ethics Center at Yale University, and a Professor in the Cognitive Science program (as of late January 2023). We asked him about his new appointment, and his thoughts on the recent developments in the field of artificial intelligence, which is SSRN’s topic focus this month.
SSRN: Can you tell us about your new role as the Founding Director of the Digital Ethics Center at Yale?
Yale is investing significantly in expanding its leading role in social sciences and humanities towards STEM. As part of that wider approach, Yale has decided to invest in studying, researching, teaching, and communicating the impact of digital technologies on human life and society, the environment, and politics.
A significant part of that investment is the creation of the new Digital Ethics Center (DEC) at Yale. I’m the lucky person who got headhunted to create the DEC and lead it. The area that we are going to cover could be described as GELSI: the Governance, Ethical, Legal, and Social Implications of digital technologies. That acronym builds on some terminology used in Brussels, but I’ve added the ‘G’ at the beginning, which is significant, because I am convinced that the governance of the digital will make a huge difference. Governance means deciding what to do with digital innovation.
SSRN: It seems as though the culture has suddenly become obsessed with AI, following the big breakthroughs in text-to-image AI (DALL-E) and conversational AI (ChatGPT). What’s it like to find that the issues you’ve been talking about for a while have suddenly become fashionable?
On the one hand, there is a sense of relief and excitement. I have been working on these topics for decades and, with other colleagues, saw these issues coming when no one really cared. We were just a small group of academics; you could go to a conference on so-called “computer ethics” in the 80s and 90s, but it was not at the centre of our culture, or even of mainstream academic debates. Now these issues have become mature and current, of interest to anyone who has had any interaction with digital technologies. This digital revolution is affecting billions of people. I’m delighted.

On the other hand, there is also some frustration that we didn’t make the right decisions at the right time, when it would have been less costly. Philosophers are used to being ignored, but it still hurts. It’s just like the classic dentist’s advice: if we had brushed and flossed at the right time, we would have had fewer problems. For example, consider the Cambridge Analytica scandal and its political impact. We could have avoided that. Brexit, Trump – we are talking about significant damage to our democracies and societies which perhaps we could have minimised, if not avoided, by dealing with privacy, fake news, content moderation, and other issues at an earlier stage.

However, I look forward to what we can do to improve things now. There are plenty of opportunities available to us in social interactions, and in developing good legislation. I am not the usual philosopher who sees things only going from bad to worse. What good can we use these technologies for? We have two huge issues we need to tackle urgently: climate change and social inequality. Both could benefit from better use of AI, for example, and of the data we are accumulating. To put it simply, we need a good ethical framework to ensure that the bad stuff doesn’t happen, or happens less frequently, and the good stuff does, at least more frequently. There is a lot of work to be done! And we need some of the best intellects to do it.
SSRN: There are a number of fascinating ethical challenges thrown up by this new wave of AI. When AIs are trained on copyrighted work, is there a tension between protecting the copyright of existing artists and the risk of stifling innovation?
With novelty comes excitement about new opportunities, but also its counterpart: uncertainty, and fear of what may happen or go wrong. One mistake we should avoid is thinking that good solutions from the past can simply be used to cope with the future. Copyright did its job, more or less well, in the age of printed books and recorded music, but it has become less and less fit for purpose. Consider the discussion about music online – streaming, buying a single song rather than an album – how do you reward the artist? We need new tools for these new features of our society.
Rather than considering how we can stretch existing solutions developed for an analogue world, we should think afresh and see what new solutions this new digital reality needs. To use a metaphor: we do not need to abandon or rewrite the chapter of past solutions; we need to add a new chapter. For example, if a commercial AI content production system has been trained on the paintings of a museum, something may need to be done to reward that museum. We need new forms of contracts to understand what a fair way of handling resources and profits is.
I just got a note last week from a publisher saying approximately this, in a more legal vocabulary: “Here are the new rules involving AI: if you’ve used an AI to generate even part of your paper, it has to be declared, and the AI can’t be listed as an author, because an AI has no legal responsibility.” Authorship means responsibility, so software cannot be an author. Wrong data in a medical journal, for example, carries very serious responsibility. So we are already adapting our context – in this case, our legal responsibilities – in the face of this new technology.

We are going to see a lot of novelties in the ethical, legal, and social perspectives to which we’re going to have to adapt. We have to match the novelty of the technology with innovation in the conceptual frameworks we use to think about it and regulate it. Some time ago, I suggested to the European Commission that we should have a beta-testing framework for AI legislation, for example. You beta test a piece of technology, so you should be able to beta test a piece of corresponding legislation – in a particular city or region, say, as a sandbox. I’m very happy to see that this is now something they are considering.
SSRN: I really enjoyed your paper on AI and Accountability, and it made me think of the news that CNET’s AI bot had been accused of plagiarising – quite elegantly – other articles in the personal finance space. What do you think of this?
This form of plagiarism is going to be harder to spot, but we are also going to see something that has already happened in other contexts. There has been a lot of mediocre content for a long time; you get it for free for a reason, and sometimes it’s worth very little. You get what you pay for: formulaic novels that can be churned out by the kilo, and mediocre content of all types – movies, songs, writing, elevator music. Think of some soundtracks for some movies – they can sound completely generic. Mass-produced things are a great step forward – we can all afford mass-produced shoes, even if not handmade ones. We are now seeing the mass production, on an industrial scale, of content.

So many jobs we thought were irreplaceable are going to go out of the window. Let me give you an example. There used to be a job translating instructions for gadgets, but that disappeared a long time ago. Yet someone able to translate Shakespeare into Italian will still be irreplaceable. And new jobs will appear because we have to manage all this – I call them green collars. I recently got involved with the first comic book entirely drawn by an AI, called “Abolition Of Man”. I know it took a special cartoonist like Carson Grubaugh to create such a unique book. In general, these new tools require unprecedented skills, abilities, imagination, aesthetic visions and so forth.
SSRN: You’ve said previously that private organisations feel obliged to present themselves as green these days. Do you see a similarity with regard to AI – that organisations will need to present themselves as ethical in their use of AI for commercial and reputational reasons?
As more and more legislation arrives, a bit like with the green movement, at some point fines and compliance will come in. Yet, increasingly, it’s no longer just about what is required for compliance. In the same way that being green means much more than complying with local laws, compliance with digital legislation will be necessary but not sufficient for competitive advantage. You will have to show you are doing much more than what the law asks. Look at some digital companies and their policies on energy consumption and carbon neutrality: I expect something similar to happen in the digital context. Companies will increasingly feel the pressure to go beyond mere compliance.
It took decades for green legislation to make a difference; it took a new culture, new rules, and new people coming into power. As the demographic profile changes, there’s more pressure and new legislation. Of course, I hope this will happen more quickly – both environmentally and digitally. I would like to see improvements happen more radically and much more quickly. There’s not much time left for either – bad digital governance can inflict real damage; disinformation and fake news cost lives during Covid-19. We need to turn the page much more quickly. But have we learned? I am not quite sure. One of the tasks of the new centre at Yale will be to push to get this done more quickly – we can’t wait another thirty years.
I actually believe climate change and digital change are part of the same challenge. The two worlds have become one. We do not live online or offline, we live ‘onlife’ – neither on nor offline, but in both digital and analogue realities. That’s the experience we have… more and more people live in one, seamless world. We can cope with all this by building a new alliance between the ‘green’ of all our habitats, natural and artificial, and the ‘blue’ of all our digital technologies. This is something that I cover in my forthcoming book, The Green and The Blue – Naïve Ideas to Improve Politics in the Information Society. I believe this should be the human project for the 21st century. We need to bring environmental and digital solutions together to save both. We have a single experience of the world – we have one world to save.
SSRN: What role do preprints play in your work? I can see you have over 230 papers on SSRN – how has it become part of your approach to research?
Preprints are the way forward for the actual dissemination and growth of knowledge. Published research behind paywalls cannot be accessed by many people, and it’s rigid: once you have published something, it can hardly be changed. The preprint has that 21st-century quality of being flexible – you can re-upload a new file anytime, download and transform it, comment on it and change it. Through preprints you can build new relationships with other people interested in the same topics, so in terms of the growth of knowledge it’s more agile, more dynamic, and more in line with the pace of knowledge creation these days. You still need the printed material because that is a record and has other kinds of advantages – a permanent presence, for example, almost like a blockchain – but that’s more about history than the experience of knowledge work today.
I’ve always been a big fan of SSRN. Whenever we finish an article with my research group, we put it on SSRN. I recommend SSRN to all my students as the right place for them to share their research. There are other places where you can post preprints online, but for me the only platform that has a real reputation for added value is SSRN. I only wish the interface to upload papers were more intuitive!
SSRN: I enjoyed your paper on Tragedy and Catastrophe. It almost made me think we should pray for a Catastrophe in order to prevent a Greater Tragedy… it’s appalling to imagine, but if we lost a major city such as Miami to climate disaster, do you think the world would wake up?
I’m afraid so. It’s the ABC of every conspiracy movie: some good people do something evil to make it possible to cope with something even more evil. More seriously, there is not enough proaction; you can read that piece as a prediction. I’m afraid we will not take significant action until a city such as Miami is underwater. The moment the sea rises, the Maldives will disappear. Will that be enough? Like many people, I am worried. The hope is that Nature’s slap on humanity’s face – what I call the catastrophe – will not be too violent and yet will be sufficient to change direction. Perhaps we could use some rhetoric and start linking all disasters to climate change, to ensure that humanity takes it seriously. I won’t blame politicians for playing that card. We must get the world moving in a different direction. The current one is leading to a tragedy.
Professor Luciano Floridi’s Most Recent Papers on SSRN
Using Twitter to Detect Polling Place Issues on U.S. Election Days
The Doctrine of Double Effect & Lethal Autonomous Weapon Systems
Climate Change and the Terrible Hope
The Ethics of Online Controlled Experiments (A/B testing)
The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction
Professor Luciano Floridi on SSRN
Professor Luciano Floridi is on Twitter at @floridi