
  • Top Papers on AI in Law Q1 2026 

    This list includes the top downloaded papers on AI in Law posted in Q1 2026.  

    • The Artificial in “Artificial Intelligence”: How Imagination Shapes AI Regulation by Claudio Novelli (Yale University – Digital Ethics Center), Luciano Floridi (Yale University – Digital Ethics Center; University of Bologna – Department of Legal Studies), Stefan Larsson (Lund University – Department of Technology and Society), Mariarosaria Taddeo (University of Oxford – Oxford Internet Institute) and Steven L. Winter (Wayne State University Law School)
    • Code Is Not Law by Carla Reyes (Southern Methodist University – Dedman School of Law), Andrea Tosato (Southern Methodist University – Dedman School of Law) and Andrew Hinkes (New York University School of Law) 
    • Legal Alignment for Safe and Ethical AI by Noam Kolt (Hebrew University of Jerusalem), Nicholas Caputo (Oxford Martin School), Jack Boeglin (University of Pennsylvania Law School), Cullen O’Keefe (Institute for Law & AI; Centre for the Governance of AI), Rishi Bommasani (Stanford University), Stephen Casper (Massachusetts Institute of Technology (MIT)), Mariano-Florentino Cuéllar (Carnegie Endowment for International Peace; Stanford Law School), Noah Feldman (Harvard Law School), Iason Gabriel (School of Advanced Study University of London), Gillian K. Hadfield (University of Toronto; Vector Institute for Artificial Intelligence; OpenAI; Center for Human-Compatible AI), Lewis Hammond (University of Oxford), Peter Henderson (Princeton University – Program in Law & Public Policy), Atoosa Kasirzadeh (Carnegie Mellon University), Seth Lazar (Australian National University (ANU)), Anka Reuel (Stanford University), Kevin Wei (RAND Corporation; Harvard University – Harvard Law School) and Jonathan L. Zittrain (Harvard Law School; Harvard School of Engineering and Applied Sciences; Harvard University – Harvard Kennedy School (HKS); Harvard University – Berkman Klein Center for Internet & Society)
    • Questioning the Digital Markets Act’s Legality by Thibault Schrepel (Vrije Universiteit Amsterdam; Stanford University’s Codex Center) and Godefroy de Boiscuille (University of Côte d’Azur; Paris-Panthéon-Assas University & CRED) 
    • Liberal AI by Cass R. Sunstein (Harvard Law School; Harvard University – Harvard Kennedy School (HKS)) 

    To read more about AI in Law, subscribe to SSRN’s Artificial Intelligence – Law, Policy, & Ethics Research Updates or view other papers here.  

  • The Latest Research on Cryptocurrency 

    This list includes a selection of the latest research on cryptocurrency posted to SSRN in 2026. 

    Cryptoasset Ecosystem in Latin America and the Caribbean by Roman Proskalovich (University of Cambridge, Judge Business School, Cambridge Centre for Alternative Finance), Christopher Jack (University of Cambridge – Cambridge Centre for Alternative Finance), Alex Zarifis (University of Cambridge – Cambridge Centre for Alternative Finance), Diego Montes Serralde (University of Cambridge – Cambridge Centre for Alternative Finance) and Damaris Njoki (University of Cambridge, Judge Business School, Cambridge Centre for Alternative Finance) 

    When Markets Never Sleep: Intraday Liquidity Patterns and Volatility Effects in Cryptocurrency Trading by Aleksander R. Mercik (Wroclaw University of Economics and Business) and Barbara Bedowska-Sojka (Poznań University of Economics and Business)

    OmniFormer: A Patch Transformer for Joint Long-Term Multi-Dimensional Cryptocurrency Time Series Forecasting by Trung Nam Nguyen (Ho Chi Minh City University of Economics and Finance), Nguyen Quoc Anh (Hitachi Digital Services), Son Ha (RMIT University), Phien N. Nguyen (Ton Duc Thang University), Trung Phan Hoang Tuan (FPT University), Anh N. Le (FPT University) and Nguyen Gia Chan (FPT University) 

    Code Is Not Law by Carla Reyes (Southern Methodist University – Dedman School of Law), Andrea Tosato (Southern Methodist University – Dedman School of Law) and Andrew Hinkes (New York University School of Law)

    Tokenized Gold by Campbell R. Harvey (Duke University – Fuqua School of Business; National Bureau of Economic Research (NBER)), Chen Lin (The University of Hong Kong – Faculty of Business and Economics), Daniel Rabetti (National University of Singapore (NUS); Harvard Business School) and Che Zhang (Tsinghua University)

    The Moneyness of Stablecoins by Christopher K. Odinet (Texas A&M University School of Law), Andrea Tosato (Southern Methodist University – Dedman School of Law) and Yesha Yadav (Vanderbilt University – Law School; European Corporate Governance Institute (ECGI))

    Pairs Trading in Crypto by Sasha Stoikov (Cornell Financial Engineering Manhattan), Dora Xu (Cornell University – Cornell Financial Engineering Manhattan), Shijie Shao (Cornell University), Yourui Wang (Cornell University), Tongshu Zhang (Cornell University) and Jinxuan Hu (Cornell University)

    Stablecoins and Banking: Deposit Dynamics, Financial Stability, and Regulatory Design by Lin William Cong (Nanyang Technological University; Cornell University)  

    Regulating Decentralized Stablecoins: Comparing MiCAR and the GENIUS Act by Christopher K. Odinet (Texas A&M University School of Law) and Andrea Tosato (Southern Methodist University – Dedman School of Law) 

    The Contest Between Central Bank Digital Currencies, Stablecoins and Tokenised Deposits: Which Will Likely Win, and Why? by Ross P. Buckley (University of New South Wales (UNSW) – UNSW Law & Justice)

    Discover more research on cryptocurrency in SSRN’s Cryptocurrency Research Updates here.

  • SSRN Strategic Update: Renewed Focus on Core Research Sharing Mission 

    At SSRN, our mission is to rapidly share preprints and other early-stage research, empowering global scholars to help shape a better future. Today we are announcing an important change that reflects where we believe we can make the greatest contribution to that mission. 

    We have decided to focus entirely on SSRN’s core function as a free, world-class preprint platform. As a result, we will be closing our commercial products (Research Paper Series, Sponsored Networks and Site Subscriptions, paid Conference Proceedings, Data Analytics Dashboards, Partners in Publishing, Jobs and Announcements, and Data Feeds) by the end of December 2026. 

    This is not a decision we have taken lightly. These products have supported thousands of researchers and institutions over many years, and we are extremely grateful to all our institutional partners. However, running a commercial operation alongside a free research platform has required very tough trade-offs in technology investment, operational focus, and our ability to keep pace with what researchers actually need from a preprint server in 2026. Stepping back from commercial products will allow us to put everything into what SSRN does best.

    We hope that in the future this will mean free Research Alert subscriptions, faster posting times, improved CrossRef metadata, stronger transparency through versioning, ORCID integration, and clear links between preprints and published versions. SSRN will remain publisher-neutral and committed to serving researchers across all disciplines.

    What this means for existing customers 

    Every existing contract will be honored in full through its term or until December 31, 2026, whichever comes first. Our team will be in touch with each customer directly to talk through the timeline, answer questions, and plan the transition, including any applicable refunds. We will not be onboarding new customers for these commercial products, and automatic renewals will not proceed. 

    If you have questions about your contract, data, or transition planning in the meantime, please reach us at ideas@ssrn.com.

    Looking ahead 

    SSRN has been part of the research ecosystem for over 25 years. This change is about making sure it will survive and thrive for the next 25. We’d like to thank all our commercial partners for their support for SSRN over the last two decades: we really wouldn’t be here without you. However, we now look forward to building an SSRN that is completely focused on the needs of researchers around the world. As always, we’d love to hear your thoughts at ideas@ssrn.com.

    FAQ: 

    Why are SSRN’s commercial services being discontinued, and why now? 

    The preprint landscape has changed significantly. Expectations around posting speed, licensing, metadata transparency, and discoverability have all increased, and SSRN has had to make difficult trade-offs to maintain commercial products alongside its free platform. Stepping away from our commercial products will allow us to simplify our model and prioritize investing properly in the things that researchers tell us matter the most to them. The timing reflects both the strategic direction set by our parent organization and a genuine belief that now is the right moment to make this shift. 

    When will the process be complete? 

    We will complete the transition by the end of December 2026. 

    Will SSRN continue to operate after the transition? 

    Yes. SSRN will continue as a free, world-class preprint platform. Sunsetting our commercial products is about sharpening our focus, not shrinking our ambition. We intend to strengthen SSRN’s platform with faster posting, better licensing options, improved discoverability, and higher research integrity standards. We hope that many of our commercial customers will transition and make full use of our ongoing free services.  

    Which services are being discontinued? 

    Research Paper Series (RPS), Sponsored Networks and Site Subscriptions, paid Conference Proceedings, Data Analytics Dashboards, Partners in Publishing, Jobs and Announcements, and Data Feeds will all close by the end of December 2026. All existing content will remain permanently archived and accessible. 

    What happens to my existing contract? 

    All existing contracts will be honored through their term or until December 31, 2026, whichever comes first. An SSRN manager will be in touch to confirm the details for your specific arrangement and to work through the transition with you.

  • Meet the Author: Alicia Solow-Niederman

    Alicia Solow‑Niederman is a leading scholar at the forefront of algorithmic accountability, data governance, and information privacy. As an Associate Professor at George Washington University Law School, she examines how emerging technologies, especially artificial intelligence, expose the limits of existing legal frameworks and reveal deeper questions about power, governance, and the values embedded in our regulatory systems. In this interview, Alicia discusses the challenges of governing AI across overlapping legal regimes, the politics of technical standards, and the evolving role of courts, agencies, and private actors in shaping the digital landscape. Her insights illuminate the tensions among privacy, transparency, and innovation, offering a nuanced view of how law can adapt to technological change while remaining grounded in democratic principles.

    Q: Your recent work discusses the concept of “inter-regime doctrinal collapse” in data governance. Could you explain what this phenomenon entails and its implications for AI regulation?

    A: Broadly speaking, my article explores how AI is challenging existing legal frameworks. Not in a literal sense; more fundamentally, AI is revealing that the doctrines and structures governing it are not operating in clear, consistent, or principled ways. This has significant political, economic, and rule-of-law implications.

    To make this concrete, I focus on data, especially data acquisition, since AI systems today require vast amounts of data to function effectively. How we regulate data directly impacts AI regulation. Because multiple legal fields like copyright law and privacy law apply to data, the regulatory landscape becomes complex. These fields have different rules and underlying goals. When the boundaries between copyright and privacy law blur, and their rules and rationales no longer remain distinct, the legal regimes can start to lose their structural integrity and effectively collapse into one another. I refer to this phenomenon as inter-regime doctrinal collapse.

    A key point about the term: the word “collapse” might sound alarming, like a bridge falling down. But in this context, I use it descriptively. Whether this collapse is ultimately beneficial or harmful depends on what it enables, who gains power from it, and the broader political and legal consequences. So, it’s a phenomenon worth understanding.  Moreover, it matters for AI regulation because data is a key input to develop and deploy AI systems, and we’re seeing doctrinal collapse with two regimes that govern data—information privacy law and copyright law.  

    Because I’m a big believer in showing, not just telling, I want to offer a concrete illustration that connects this point to AI regulation. Suppose a company scrapes data from the internet to train an AI model. Initially, the company claims that the data it uses is public and therefore not subject to privacy restrictions, because users voluntarily shared it. Simultaneously, it asserts that it’s not liable for copyright infringement because the data was publicly available. The term “publicly available” isn’t a legal term of art, but it appears frequently in AI disputes, litigation briefs, and public rhetoric. At the same time, the company refuses to disclose its training data, citing confidentiality and proprietary interests. Subsequently, in litigation, the same company argues that user privacy requires non-disclosure, despite having previously denied privacy concerns on the ground that the data was shared voluntarily with a third party.

    This example highlights how legal boundaries become fuzzy, even though copyright law and privacy law have very different doctrines and normative goals. In practice, what’s considered public versus proprietary, in copyright law, and public versus personal, in privacy law, become blurred as companies toggle between them.  It becomes extremely difficult to determine which legal doctrine applies at any given moment. This is what I refer to as doctrinal collapse on the ground: the boundaries between privacy law and copyright law become indistinct, and the legal system’s structure begins to weaken. That connects back to AI regulation because the choices we are making about privacy law and copyright law regulate data—and because data acquisition is required for AI development, these choices about data regulation will affect AI governance. 

    Q: As you’ve just explained, your research suggests that the legal regimes governing data (privacy law and copyright law) are becoming increasingly blurred. What challenges does this pose for effective regulation of AI systems, and what institutional responses do you propose?

    A: Here, I want to sharpen the political economy and rule of law stakes, and then consider some potential institutional responses. 

    First, collapse enables companies to manipulate the legal and social meanings of “public.” Not all companies are equally well-positioned to exploit this blurring of domains. Collapse tends to favor the “haves” – the well-resourced incumbents – by allowing them to acquire data at the expense of ordinary people or less well-resourced actors. In some cases, well-resourced incumbents have the money and influence to execute and leverage licensing agreements. In other cases, dominant platform firms are best-positioned to rely on broad user consent through privacy policies and terms of service.

    Second, from a rule of law perspective, this fluidity allows private actors to switch between conflicting claims depending on what serves their interests, undermining legal predictability, coherence, and legitimacy. When laws become ambiguous and actors can switch between different legal regimes, it erodes public trust and the legitimacy of the legal system itself. This toggling creates a situation where the law no longer functions as a clear, predictable framework for accountability and justice, especially in the rapidly evolving context of AI.

    Now, what might we do about this? I don’t believe that collapse itself is a problem we can solve. We can’t, and shouldn’t, try to create perfect clarity in the law or impose artificial boundaries between different fields of law. Instead, we should recognize that collapse becomes a problem when it undermines the law’s ability to govern effectively.

    In the paper, I suggest some institutional responses. Some are incremental, focusing on adapting the current legal framework. In particular, we might draw on conflict of laws and empower courts as managers of collapse. For example, if a court is resolving a dispute where parties raise both privacy and copyright claims, the judge might insist on a rebuttable anti-switching presumption, saying, “You can’t assert mutually incompatible claims at different points in the lawsuit unless you provide a compelling reason to justify the switch.” These are strategies to manage the collapse without overhauling the entire legal system.

    Other responses are more reformist, aiming to change the legal structure itself and make it harder to manipulate the lines between domains.  Notably, we might adopt stronger privacy laws that make the initial relationship between copyright and privacy law less asymmetrical. I believe that reducing or eliminating the underlying weaknesses that lower the legal and social costs of privacy violations compared to copyright would decrease incentives for companies to exploit privacy loopholes and avoid copyright obligations.

    Q: Let’s turn next to some of your other projects. In your paper on AI standards, you argue that standards are not neutral but have embedded politics. How can policymakers and regulators ensure that AI standards promote fairness and accountability rather than reinforce existing power structures?

    A: Standards are tricky regulatory devices. One problem with standards, as I discuss in the paper, is that they tend to work much better in purely technical settings, like whether an outlet must have two or three prongs. But when we start dealing with socio-technical contested issues, such as fairness, accountability, or transparency, standard setting becomes much more difficult. I begin there because I think it’s important to be honest about that from the start. We can talk about how to improve standard setting, but standards are always political artifacts, in Langdon Winner’s sense of politics. They will inevitably reflect the power structures that create them.

    For AI standards, one key step is to carefully consider whether a particular issue should be addressed through standard setting at all, or if it should be handled via public legislation or more binding legal procedures. If we decide that standard setting is appropriate, then we need to think about the relative influence of public versus private actors.

    Another important aspect is ensuring ongoing deliberation and rethinking over time. I believe standards can sometimes be better than hard law because they can adapt more quickly and be nimbler. Building space for contestation and re-contestation is crucial. Without that, I worry that a powerful private actor could entrench a standard through market dominance, locking it in without democratic oversight or opportunities for re-evaluation. These chances to rethink things are vital, especially if what fairness means turns out to harm certain populations more than others, or if the initial requirements for transparency aren’t sufficient for outside parties to contest decisions made with the AI system, or if other problems with the standard emerge.

    Q: Your work often emphasizes the importance of understanding the political and social context of technological standards. How can stakeholders ensure that AI governance frameworks are inclusive and reflect diverse societal values?

    A: If I had a one-shot answer, I’d be selling it to the highest bidder; there’s no silver bullet. But I do think recognizing that technology is not a shiny, isolated object is a crucial first step. These tools are shaping our democracy and social future, and we must see that.

    Technology is not neutral. Design choices, what data to use, how to define goals, how to align AI systems, or even whether to use AI at all, are deliberate decisions with real outcomes. The structure of our legal system also reflects choices; for example, how we regulate privacy or how lightly we regulate tech companies are policy decisions that shape society. My hope, and what I aim to help others see, is that these are choices and recognizing that empowers us. It means we’re not stuck; even without a perfect, one-size-fits-all solution, understanding these choices allows us to advocate for more inclusive and equitable governance.

    Q: Your research also speaks to other aspects of AI regulation, such as the role of judicial decisions in shaping AI governance. How do you see the courts influencing the development of AI law, and what are the risks and benefits of relying on litigation as a form of regulation?

    A: I see courts affecting AI law in many ways. It’s happening quite a lot. Sometimes, it’s direct, like the copyright lawsuits I discuss in AI and Doctrinal Collapse. Other times, it’s less direct, such as a First Amendment case that influences what policymakers believe is possible for AI regulation. Additionally, a company’s decision to settle a case rather than litigate can shape the regulatory landscape, depending on the outcome.

    Whether this is good or bad is the key question. That’s why my piece is titled, “Do Cases Generate Bad Law?,” with a question mark. It’s a real question. Cases involve adversarial parties and concrete issues where a harm has already occurred. This can be beneficial because it helps focus attention on specific legal issues and concerns. Judges are often well-positioned to uncover detailed facts and understand how AI companies operate, which can be valuable.

    However, whether courts make good or bad AI law depends a great deal on their interaction with legislators and regulators. AI cases are more likely to produce beneficial outcomes if they prompt legislative action to fill regulatory gaps and provide remedies for harms.

    For example, there’s a recent lawsuit alleging a violation of Illinois’ Biometric Information Privacy Act (BIPA). To establish a violation, there must be collection of biometric data, like faceprints or voiceprints, without prior informed consent. In this case, the complaint alleges that an AI company collected voiceprints to develop its system. If the case proceeds to discovery, it could reveal how these systems work, informing other plaintiffs, the public, and policymakers. It might lead Illinois legislators to update the law if it turns out that the data at stake is not covered but they intended to cover it. Or, if the voiceprints are covered by BIPA, the disclosures about how biometric information is used to create AI systems might inspire similar legislation elsewhere.

    That said, there are risks to case-made AI law. Relying on litigation can mean that less tangible or emergent harms go unrecognized, either because there’s no legal cause of action or because the harm isn’t understood as such. It can also lead to spurious or costly litigation, which is especially problematic for startups.

    Another concern is the concentration of cases in a few jurisdictions, which can result in a limited number of courts deciding complex social issues with nationwide implications. This lack of diversity in outcomes and limited access for plaintiffs is troubling. Therefore, I see litigation as an access-to-justice issue for individuals who may be left without a remedy for the negative impacts of AI systems. That is also why we need both litigation and legislation. It’s essential for functioning legislatures at both the state and federal levels to recognize how AI systems are affecting people, to realize the issues that cases are not likely to address, and to take action on these vital issues.

    Q: Your research covers both AI law and information privacy law. In your view, how can the concept of the “Overton Window” be applied to improve the enforcement of privacy protections in the rapidly evolving digital landscape? 

    A: I’m not sure the Overton Window directly improves privacy enforcement. Instead, it helps reveal the range of actions that regulators believe are feasible at any given moment. What agencies do depends on internal norms, political will, resources, and external pressures from courts, industry, and social movements.

    Applied to privacy, this means that privacy‑minded regulators can and should use broad legal authorities, like unfair and deceptive trade practices laws, to address emerging privacy and AI harms. They’re most likely to act where social consensus is already strong, such as with location tracking or children’s privacy. But the scope of enforcement ultimately turns on politics. Some FTC administrations have embraced a more expansive view of “unfairness,” enabling more aggressive interventions; others have taken a narrower approach.

    Two additional points matter. First, enforcement requires resources. Privacy investigations are complex and time‑intensive. Without adequate funding and staffing, even strong legal tools won’t translate into meaningful action. Second, institutional design is crucial. When lawmakers create new privacy or AI rules, they need to consider whether agencies are insulated from political pressure and whether they have the capacity to act. Otherwise, even well‑intentioned laws risk being symbolic rather than effective.

    Q: The broader question of how to regulate emerging technologies runs through your work. Given your expertise in algorithmic accountability and data governance, what are some practical steps that governments or organizations can take to enhance transparency and fairness in AI systems?

    A: Much of my work is about reframing the problem rather than offering a single, crisp solution. It’s crucial to identify the assumptions underlying policymakers’ proposed paths forward. For example, governments or organizations should ask: What am I assuming about the law and the values I want to uphold? What about human actions, both by developers and end users? What assumptions are being made about the technology itself? By posing these threshold questions, we can reveal our underlying assumptions and develop interventions that better address complex human and technical interactions in specific contexts.

    In addition, we should not latch onto just transparency, or just fairness, as the key to AI regulation. Although transparency and fairness are vital, focusing solely on those concepts can be limiting. Ultimately, the core issue is power: who controls the means to produce, refine, and deploy AI systems, and who has the voice to influence their governance. Overemphasizing just one aspect risks missing the relational dynamics at play. Recognizing these power relations is essential for meaningful accountability and equitable governance.

    Q: As a member of the EPIC Advisory Board, a faculty affiliate at Harvard’s Berkman Klein Center, and an affiliated fellow at Yale Law School’s Information Society Project, how do collaborations across academia, civil society, and government influence the development of effective AI regulation?

    A: The intersection of public and private sectors is vital for AI governance and my scholarship. First, collaborative governance, meaning active engagement between public and private actors, is valuable in theory, but in practice it carries risks. Without a strong, well-funded, and influential state, private actors can overshadow public voices, effectively replacing regulation with private interests. I believe that public regulation and democratic accountability are essential. They help ensure that AI development aligns with societal values, and contrary to some fears, regulation can foster innovation by guiding technology in lawful and ethical directions.

    Second, the traditional public-private divide is increasingly strained by how social and governmental uses of AI are evolving. For example, in a forthcoming essay called Clickwrap Accountability, I discuss how government agencies are using generative AI chatbots to provide non-binding advice on matters such as the Supplemental Nutrition Assistance Program (SNAP). This guidance often replaces in-person visits, government pamphlets, or static FAQ pages about benefits programs. These chatbot interactions are often mediated through private tools and rely on third-party terms of service, with limited accountability. When a chatbot provides advice, and there’s no formal government decision or official process involved, it doesn’t fall neatly within existing procedural due process frameworks. The current legal doctrine offers limited redress. At best, there are contract disclaimers or clickwrap-style “agree” buttons for the end user. And because the state has sovereign immunity, the forms of contract or tort law redress that might be available in private law are not generally available in this public law context.

    This example highlights a broader trend: the blurring of lines between public and private as AI tools become embedded in public services. Developments like this challenge our existing legal and regulatory frameworks and underscore the need for scholarship and policymaking to adapt to these new realities. We must rethink how accountability, transparency, and oversight are structured when government functions are mediated through private AI systems, and I’m working on these questions in several future projects.

    Q: Looking ahead, what do you see as the most pressing legal or regulatory challenges in AI governance, and how should scholars and policymakers prepare to address them? 

    A: I see two major sets of challenges in AI governance. First, the friction between different legal regimes and values. AI systems sit at the crossroads of copyright, privacy, discrimination, consumer protection, and more. These areas don’t always align, and the trade-offs, like balancing privacy with goals such as reducing bias or increasing transparency, rarely have clean answers. Scholars can help by stepping outside disciplinary silos and examining how their preferred doctrines interact with others. Policymakers, meanwhile, should resist the urge for simple narratives. Effective regulation starts with mapping which bodies of law are implicated, where the gaps are, and what values are being traded off. There’s rarely a perfect solution, but there can be a principled one.

    Second, we’re dealing with both known unknowns and unknown unknowns. Technological shifts like changes in data needs or the limits of so-called “scaling laws” could reshape incentives and alter how existing laws function. And then there’s the Collingridge dilemma: intervene too early and policymakers won’t have the information they need; intervene too late and harmful practices will already be entrenched. I tend to favor precaution where human interests are at stake, but with humility, flexibility, and mechanisms for revision as evidence evolves.

    A final challenge is institutional capacity. Technology doesn’t inherently outpace law, yet knowing when to adapt and ensuring that agencies have the expertise to understand it is extraordinarily difficult. Robust governance will depend as much on institutional design as on the substance of any particular rule.

    Q: What are some of the most exciting upcoming projects or research initiatives you are currently involved in or looking forward to?

    A: I have several writing projects and initiatives that I’m looking forward to. First, I’m very excited about my forthcoming Clickwrap Accountability piece, as well as a forthcoming essay called The Supply Chain as a Circle: AI, Privacy, and People. Both projects focus on the users of technology and what existing law says or doesn’t say about these interactions. In the Supply Chain piece, I argue that the AI supply chain often overlooks the interactions between users and generative AI systems. If we don’t account for this interaction, then harm and responsibility tend to fall on end users, who often lack the knowledge or ability to prevent bad outcomes. They should be part of the regulatory calculus. In addition, I am developing a few other pieces, including one that examines relationships between administrative agencies and platform companies. All of these projects reflect my conviction that technological developments expose weaknesses in legal and regulatory frameworks and offer opportunities to re-examine institutions and doctrines.

    Beyond scholarship, I’m engaging in conversations with EPIC about regulating AI chatbots and companion AI. I’m also involved in a Uniform Law Commission project on mental privacy, neural data, and cognitive biometrics, exploring potential model state legislation. This is a vital area that links privacy and AI.  Think of the potential health benefits, like restoring hearing—but also the privacy risks of accessing our brains and most personal data.

    Finally, I’m developing a new seminar called “Frontiers and Flashpoints in Tech Law,” which will cover cutting-edge issues like agentic AI, neural tech, and robotics. It aims to help students understand both the technology and the legal challenges. I am really excited for this course, and for all that undoubtedly lies ahead in tech law in the year to come.

    Q: What do you think SSRN contributes to the world of modern research and scholarship?

    A: I appreciate SSRN as a centralized place to find recent research. With so much information available online, it can be difficult to know where to look, so it’s incredibly helpful to have an open‑source community that makes research accessible and provides a clear, reliable destination for new work.

    In addition, as a junior scholar, I’m especially grateful for SSRN as a research platform and as a way for my work to reach other scholars. I see it as a privilege to participate in these scholarly conversations, and I’m thankful for the infrastructure that makes it possible to sustain and expand them.

    MORE ABOUT ALICIA SOLOW‑NIEDERMAN

    Alicia’s scholarship explores how digital technologies disrupt traditional legal categories and institutional assumptions. Her influential work on inter‑regime doctrinal collapse shows how AI blurs the boundaries between privacy, copyright, and other legal domains, creating both regulatory challenges and opportunities for reform. She has written extensively on the politics of AI standards, the role of courts in shaping AI governance, the ways that inferences challenge privacy law on the books, and the need for institutional designs that can withstand uncertainty and technological evolution.  

    Her research has appeared or is forthcoming in leading journals, including the Stanford Law Review, Northwestern University Law Review, Harvard Journal of Law & Technology, Journal of Law & Innovation, and Southern California Law Review. A graduate of Harvard Law School, where she served as Forum Editor of the Harvard Law Review, Alicia has held fellowships at UCLA Law’s PULSE program and Harvard Law School, clerked on the U.S. District Court for the District of Columbia, and worked at the Berkman Klein Center for Internet & Society. She also serves on the EPIC Advisory Board and is a faculty affiliate at the Berkman Klein Center as well as an affiliated fellow at the Yale Law School Information Society Project, where she contributes to cutting‑edge conversations on AI regulation, mental privacy, and the future of public‑private governance.

  • The Latest Research on Medical Law

    The Latest Research on Medical Law

    This list includes a selection of the latest research on medical law posted to SSRN in 2025-2026.

  • SSRN’s New Advanced Search Makes Finding Papers Easier and Faster

    SSRN’s New Advanced Search Makes Finding Papers Easier and Faster

    Michael Parsons

    SSRN’s search has been sorely in need of some love for a while, so we’re very happy to share that we’ve recently been able to make some improvements to how it works. Previously, if you wanted papers about corporate governance but not banking, you couldn’t say so. If your search terms were slightly off, you may have struggled to find what you were looking for. We’ve now added two new search modes to the Advanced Search page: Fuzzy Search and Boolean Search. They solve different problems, and you can choose between them depending on what you need.

    Fuzzy Search: A Broader Net

    Fuzzy Search is more forgiving than a traditional keyword search. It will tolerate slight variations in your search terms, typos, and near-matches, returning results that a strict search might miss. You’ll typically see a larger set of results, which makes it useful for exploratory searches, when you’re not yet sure of the precise terminology a field uses.

    It works across all three search field options (Title Only, Title Abstract & Keywords, and Title Abstract Keywords & Full Text) and applies to the Author(s) field as well. If your query is in the right neighbourhood, fuzzy matching will try to get you there. A word of calibration: fuzzy search broadens your results, but it isn’t a spell-checker. It works best when your terms are close to the target. The further off your query, the noisier the results.
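    To make the “near-match” behaviour concrete, here is a toy sketch using Python’s standard-library difflib. This is purely illustrative; the term list is invented for the example, and this is not how SSRN’s search is actually implemented.

```python
# Illustration only -- not SSRN's implementation. difflib ranks candidates
# by string similarity, so a query with a typo can still find its target.
import difflib

# A tiny stand-in for an index of search terms (invented for this example).
index_terms = ["corporate governance", "corporate finance", "central banks"]

# "goverance" is misspelled, but it is close enough to match.
matches = difflib.get_close_matches(
    "corporate goverance", index_terms, n=1, cutoff=0.8
)
print(matches)  # ['corporate governance']
```

A query far from any indexed term (say, a completely different topic) falls below the similarity cutoff and returns nothing, which mirrors the calibration point above: the further off your query, the noisier or emptier the results.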

    Boolean Search: A Precise Tool

    Boolean Search lets you build structured queries using standard logical operators:

    AND requires both terms to appear. "corporate governance" AND disclosure returns only papers addressing both topics.

    OR broadens to either term. "machine learning" OR "deep learning" captures papers using either framing.

    NOT excludes a term. cryptocurrency NOT bitcoin finds crypto papers that aren’t specifically about Bitcoin.

    Parentheses let you group expressions. (fintech OR regtech) AND regulation finds regulation papers that mention either fintech or regtech, without running two separate searches.

    As with Fuzzy Search, Boolean mode works with all three advanced search options. Narrow your scope to Title Only for precise results; expand to Full Text to surface papers where the terms appear anywhere in the document.
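    For readers who like to see the logic spelled out, the operators above behave like set operations over the papers that match each term. The following Python sketch is purely illustrative: the toy corpus and the `matching` helper are invented for the example and have nothing to do with SSRN’s actual index.

```python
# Illustration only -- a toy corpus showing how AND, OR, and NOT
# combine as set intersection, union, and difference.
papers = {
    1: "corporate governance and disclosure rules",
    2: "corporate governance in banking",
    3: "fintech regulation and consumer protection",
    4: "regtech regulation after the crisis",
}

def matching(term):
    """Return the set of paper ids whose text contains the term."""
    return {pid for pid, text in papers.items() if term in text}

# "corporate governance" AND disclosure -> intersection
and_result = matching("corporate governance") & matching("disclosure")

# (fintech OR regtech) AND regulation -> grouped union, then intersection
grouped = (matching("fintech") | matching("regtech")) & matching("regulation")

# cryptocurrency NOT bitcoin -> set difference (empty in this toy corpus)
not_result = matching("cryptocurrency") - matching("bitcoin")

print(and_result)  # {1}
print(grouped)     # {3, 4}
```

Note how the parenthesised query finds papers 3 and 4 in one pass, where two separate searches would otherwise be needed.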

    Two Modes, One Search

    These are separate modes, not features you stack. You choose one via the radio buttons on the Advanced Search page. The right choice depends on where you are in your research.

    Use Fuzzy Search when you’re exploring a new area, working from partial memory, or casting around for how a topic is discussed in the literature. It’s the mode for early-stage discovery, when missing a relevant paper is worse than wading through some noise.

    Use Boolean Search when you know what you’re after and want to carve out a specific slice of the literature. It’s the mode for systematic reviews, targeted citation searches, and any query where you need to include or exclude particular terms with confidence.

    Both modes share the same controls: scope selection, the Author(s) field, date filters, and sort options. The only difference is how the search engine reads your query.

    A Two-Step Workflow for Search

    You can now think in terms of a two-step workflow for Search on SSRN. Start in Fuzzy Search with a broad query. Scan the first page of results to pick up the vocabulary the literature actually uses. Then switch to Boolean Search and build a precise query with those terms and operators. The two modes complement each other when used in sequence.

    In Boolean mode, wrap multi-word phrases in quotes: "corporate governance" behaves differently from the two words searched separately. And remember that the Author(s) field is independent of the main search box. If you want a specific person’s work on a specific topic, use both fields rather than cramming everything into one query.

    These changes are live now on papers.ssrn.com, and we really hope you find them useful. Send us your feedback at ideas@ssrn.com; we’d love to hear from you.

  • The Latest Research on Central Banks

    The Latest Research on Central Banks

    This list includes a selection of the latest research on Central Banks posted to SSRN in 2025-2026.

  • What SSRN’s Copyright Policy Really Means — and How to Navigate It  

    What SSRN’s Copyright Policy Really Means — and How to Navigate It  

    Copyright can feel like one of those topics everyone knows is important, but no one really wants to untangle. If you’re sharing your research on SSRN, though, understanding the basics goes a long way toward keeping your work accessible and compliant. 

    At its core, copyright gives creators a bundle of rights over their work: the right to reproduce it, share it, adapt it, and decide who else can do the same. Copyright protects “original works of authorship,” giving creators exclusive rights to their creations. 

    SSRN’s job is to help you share your research widely, but only when doing so doesn’t infringe on someone else’s rights. That means we verify that the version you upload is one you’re actually allowed to post. 

    Which Version of Your Paper Can You Share? 

    This is the question authors ask most often, and the answer depends on your publisher. 

    Publishers have their own self‑archiving rules that determine whether you can post: 

    • A working paper (preprint) — usually the most flexible version to share. Working papers “have not yet undergone peer review” and are commonly posted for early feedback. 
    • An accepted manuscript — many publishers allow this version, but often with conditions such as embargo periods. 
    • The final published version — this is the most restricted. Publishers often reserve exclusive rights to distribute the Version of Record, including its formatting, pagination, and branding, and many have specific policies about sharing this version. 

    The safest move is to check your publishing agreement or the publisher’s copyright policy before uploading anything. SSRN’s Terms of Use also outline what you can and can’t post. Bear in mind that some publishers’ terms may not permit sharing your work on a platform such as SSRN, so it’s always a good idea to check if you’re not sure. 

    Do You Need Permission? Sometimes, and Here’s When 

    If the publisher owns the rights to the version you want to upload, you’ll need written permission. Written permission must be obtained from the rightsholder to re-use any copyrighted material, and the rightsholder is typically the publisher unless explicitly indicated otherwise. 

    You can usually request permission through: 

    • The publisher’s permissions department 
    • Rightslink (via the article’s webpage) 

    And no, silence does not count as approval. 

    Does SSRN Own Your Copyright? Absolutely Not. 

    Uploading to SSRN does not transfer your copyright. You simply grant us a non‑exclusive right to post and distribute your paper. You can remove it at any time. 

    You also confirm that your submission doesn’t violate anyone else’s rights, which is a standard requirement for any scholarly repository. 

    What If You Spot a Copyright Problem on SSRN? 

    If you believe a posted paper infringes copyright, we have a formal process for reporting it, which you can read here. Depending on the situation, we may remove the paper, warn the user, or restrict account access. These actions are part of enforcing the platform’s Terms of Use. 

    Why All This Matters 

    Copyright isn’t just a legal technicality; it’s what allows authors, publishers, and platforms like SSRN to coexist without stepping on each other’s toes. Our Copyright Reference Guide encourages authors to: 

    • Review their agreements 
    • Check publisher policies 
    • Request permission when needed 
    • Provide documentation during submission 

    Determining posting rights of a work can be complex, but being proactive helps ensure your research is shared responsibly and effectively. 

  • Meet the Author: Greer Donley

    Meet the Author: Greer Donley

    Greer Donley is the Associate Dean for Research and Faculty Development, the John B. Nicklas, Jr. Faculty Fellow, and a Professor of Law at the University of Pittsburgh School of Law. She is one of the nation’s leading experts on abortion law, with widely cited scholarship on medication abortion, interjurisdictional conflicts, and the far‑reaching consequences of abortion bans on reproductive healthcare. Her work has been published in top law reviews and featured across major media outlets, shaping national conversations in the post‑Dobbs era. She spoke with SSRN about abortion shield laws, the evolving legal landscape, and the most pressing challenges and strategies for ensuring reproductive access today.


    Q: Can you tell us about the motivation behind your recent co-authored paper, “Abortion Shield Laws in Action,” and what you see as the most significant legal challenges these laws face today?

    A: David Cohen, Rachel Rebouche, and I have been thinking and writing about shield laws since their first conception. We wanted to write this paper after observing how quickly US states began implementing shield laws after Dobbs. These laws are a relatively new legal innovation, first becoming effective in the summer of 2023. Our paper looks at how they are working a few years later. The primary purpose of shield laws is to protect providers and individuals involved in abortion care in shield states from civil and criminal liability instigated by states where abortion is heavily restricted or banned.

    After the overturning of Roe v. Wade, there was a real concern that legal and political attacks would severely limit access to abortion and threaten providers. Shield laws have played a crucial role in mitigating some of these threats. For example, recent data indicates that more than 10,000 boxes of abortion pills are mailed into restrictive states each month, providing vital access to those who cannot travel out of state, many of whom lack the resources to do so. These laws have enabled providers to continue offering care despite the legal threats.

    However, the legal landscape is still very much in flux. Some early cases, like those in New York, have shown clerks refusing to file lawsuits against providers, citing shield laws’ protections. Another case in federal court is ongoing. Our hope is that these laws will continue to serve as a shield, allowing states to uphold their own abortion policies as intended, especially in a post-Roe landscape where states are increasingly diverging in their laws.

    Q: Your work discusses the concept of interjurisdictional abortion wars. How do shield laws and telehealth intersect in this context, and what are the potential legal and practical implications?

    A: The intersection of shield laws and telehealth is a critical aspect of the ongoing interjurisdictional battles over abortion access. These battles are complex because they involve different legal standards across jurisdictions, including federal, state, and even local laws. It’s important to clarify that shield laws address conflicts between states, not conflicts between federal and state authority. The federal supremacy clause makes clear that federal law generally trumps state law, but shield laws are designed to protect providers and patients from state-level legal actions, especially when states attempt to reach across borders to regulate out-of-state care.

    Post-Dobbs, my co-authors and I anticipated and indeed observed that states would adopt divergent abortion laws, with some banning and others protecting access. This divergence enabled states to try to influence or restrict out-of-state care. For example, Texas suggested that if any part of a medication abortion was consumed in Texas, it was an illegal abortion, even if the Texas patient travelled out of state and the out-of-state provider fully followed their home state’s laws. Similarly, some states like Idaho passed laws making it illegal to help minors leave the state for abortion without parental consent, aiming to prevent out-of-state travel for abortion care.

    Shield laws have evolved to address these challenges. Initially, they focused on protecting providers offering care to patients that travelled to them but returned to the ban state. But over time, some have expanded to shield providers treating patients across state lines through telehealth, even if the patient is physically located in a ban state.

    Practically, this means that shield laws are becoming a vital tool to facilitate cross-border care and telehealth services, which are essential for maintaining access in a highly polarized legal landscape. The broader implication is that shield laws are becoming a key part of the legal infrastructure supporting reproductive access, especially as states attempt to regulate beyond their borders.

    Q: Your paper “From Medical Exceptions to Reproductive Freedom” discusses using pregnancy complication cases as a legal strategy against abortion bans. How might this influence future litigation?

    A: This paper was motivated by the observation that, despite strict abortion bans, many pregnant individuals face severe health risks or complications that require medical intervention. These cases often reveal the dangerous gaps in the bans, particularly because many laws include narrow or vague exceptions for health or life, which are difficult to interpret and apply consistently.

    In our analysis, David Cohen and I argue that pregnancy complication cases can be powerful legal tools to challenge these bans. They demonstrate that abortion is not just a matter of personal choice but a critical component of healthcare for everyone. Publicized cases of women denied care, suffering harm or even dying have shifted public perception by emphasizing that abortion bans threaten real lives.

    Legally, these cases can be used to expose the flaws and inconsistencies in the laws. For example, some laws contain vague language that leaves too much to interpretation, draw arbitrary lines, or prioritize secular exceptions over religious ones. We suggest strategies like challenging laws based on vagueness, religious discrimination, and rationality. These cases can serve as a wedge to argue that abortion bans violate constitutional rights by endangering health and life for all. They can also help shift the legal narrative from abstract rights to concrete health and safety concerns, making it harder for courts to justify bans that cause harm. We hope that ultimately, they can also be used down the road to challenge Dobbs itself as unworkable.

    Q: Your chapter in “Regulation in a Turbulent Era” examines the regulatory landscape post-Dobbs. What do you see as the biggest hurdles for regulators trying to adapt to this rapidly changing environment?

    A: The post-Dobbs environment presents a host of complex challenges for lawmakers, and these hurdles differ depending on which side of the debate you’re on. For anti-abortion lawmakers, the primary concern has been how to “expand” exceptions without undermining the core restrictions. They are trying to craft laws that appear to provide some leeway for health-or-life exceptions but are often so narrowly defined that they effectively do almost nothing to expand access.

    Supporters of reproductive rights, on the other hand, are focused on expanding abortion access. Shield laws are a key part of this puzzle. Many advocates are pushing to expand and strengthen these laws to better protect providers and patients. Some states are experimenting with innovative measures, such as removing provider names from pill bottles. But there are other critical efforts too, like expanding state Medicaid coverage for abortion or removing unnecessary state abortion restrictions.

    Another layer is the broader regulatory landscape, where federal agencies such as the FDA are involved in shaping the environment. The rapid pace of legal and policy changes makes it difficult for regulators to keep up, and there are tremendous threats to the agencies’ independence. Overall, it is a big challenge to navigate this shifting terrain, balancing legal risks, public health considerations, and political realities while trying to ensure access and safety for those seeking reproductive care.

    Q: You played a key role in drafting Connecticut’s abortion shield law and other legislative efforts. What have been some of the most important lessons learned from these advocacy efforts?

    A: It’s been interesting and exciting to have our scholarship turn into a real-world impact. It has been a privilege for me and my co-authors, David and Rachel. We feel the responsibility of wanting shield laws to remain effective over the long run.

    I teach a course on legislation and regulation, which focuses on how courts interpret statutes. For me, it’s been really rewarding to bring real-world experience into the classroom, something that complements my scholarly work. Having done some actual bill drafting, I can provide students with an inside perspective on how the process works, which I think students have enjoyed.

    Q: Given your experience with drafting laws and amicus briefs, what advice would you give to legal scholars and practitioners interested in influencing reproductive health policy? 

    A: I think it’s important for legislators to communicate with a wide range of stakeholders. Talking to advocates is crucial, but it’s also valuable for legislators to engage with academics, healthcare providers, and others who serve different roles within a movement.

    Being open to creative ideas is also important. For example, when shield laws first emerged, they were often packaged with other bills aimed at reducing unnecessary abortion restrictions or funding reproductive health services. Combining related issues can be an effective strategy.

    Reproductive rights have been less in the spotlight recently, but I hope people continue to prioritize them, as they remain critically important and currently endangered.

    Q: Your work often discusses the intersection of law, ethics, and medicine. How can these fields collaborate more effectively to advance reproductive justice?

    A: I’m currently starting several new projects related to the fetal personhood movement. It’s been an exciting time to revisit my background in philosophy and ethics. I previously completed a fellowship in bioethics, which I really enjoyed, and I’m now exploring questions like: if the fetus is not considered a person under the Constitution, then what is it? This question intersects law and philosophy and has been both challenging and intellectually stimulating.

    I also do a lot of work at the intersection of law and medicine. For example, after Dobbs, I had the opportunity to work with rheumatologists whose rheumatoid arthritis patients were struggling to access a common medication called Methotrexate, which can also cause abortions. Collaborating with them helped me understand the legal issues and communicate how certain patients are facing difficulties accessing these drugs.

    Additionally, I am part of a centre at the University of Pittsburgh called CONVERGE, an interdisciplinary hub focused on sexual and reproductive health equity. It includes members from the Schools of Medicine, Public Health, Psychiatry, and Law. Collaborating with colleagues across these fields has been very rewarding and enriching for various projects.

    Overall, I think reproductive rights and justice are inherently interdisciplinary fields, and the more that experts from different fields work together, the better the work will be.

    Q: What do you think are the most common misconceptions the public or policymakers have about abortion laws and their impacts?

    A: One of the most common misconceptions about abortion, reflected in recent polling, is that people have abortions for selfish reasons or because they are irresponsible. These gendered biases influence public perception. For example, a late-2024 poll found that about 57% of likely voters – both pro-choice and anti-abortion – believed most abortions are obtained for selfish reasons. I was surprised by how widespread this misconception is.

    Most people seek abortions because they cannot afford to have a child. Many are already mothers struggling to care for their existing children. Others feel they are not emotionally, financially, or physically capable of being the parent they want to be. There are also medical reasons and a variety of other circumstances that lead to abortion. Framing any of these reasons as selfish is simply incorrect. There are also deep stereotypes about why women get pregnant, such as the idea that pregnancy is their fault or that they were irresponsible. These biases are ingrained, and supporters of abortion rights are working to correct them.

    Historically, during the Roe era, much of this discussion was silenced; people who had abortions did so quietly. Since Dobbs, however, more open conversations are happening, which I believe will lead to greater understanding. I hope this will help people see why abortions happen and understand that all abortions are health-saving. Pregnancy is physically and emotionally demanding, more than many realize.

    Having been pregnant myself, I can say that pregnancy’s physical demands can be overwhelming, even when the pregnancy is wanted. Whether someone seeks an abortion because they don’t want to be pregnant or for other reasons, they are making a medical decision that prioritizes their health. Pregnancy is physically and emotionally challenging, even under the best circumstances. Addressing gender biases and misconceptions about abortion is crucial for changing hearts and minds. Doing so is essential for protecting abortion rights and recognizing abortion as a fundamental healthcare issue.

    Q: What are the next big questions or challenges in abortion law that you hope to explore in your future research?

    A: I’m currently working on a two-part series that argues against the fetal personhood movement. I see the anti-abortion movement as being on the defensive right now; they are surprised and struggling with the deep unpopularity of their abortion bans. However, their long-term goal remains to ban abortion nationwide. If they can’t achieve this through legislation, they will try to do so through the courts, using fetal personhood. I’m working on some papers related to this.

    Additionally, I’ve done a lot of work on medical exceptions in pregnancy and plan to continue exploring that area. I also work extensively on FDA regulation of abortion pills, and there could be significant developments in that area in the coming year.

    Q: What do you think SSRN contributes to the world of modern legal research and scholarship?  

    A: When my co-authors and I uploaded our paper “The New Abortion Battleground” to SSRN, it was downloaded around 10,000 times in just six months. That was incredible. It’s a great example of how SSRN provides free, open access to legal scholarship, allowing ideas to reach media, policymakers, and the public. It’s rare for an academic work to have that kind of immediate impact: where people want to read, discuss, and engage with your ideas.

    I’m very grateful for SSRN because it helped get our paper out into the world. In law, posting drafts on SSRN is common, and that proved especially important for us: our paper was cited by the Supreme Court’s dissent in Dobbs while it was still a draft. Without SSRN, it wouldn’t have been accessible at that stage.

    Beyond that, the habit of posting drafts fosters a vibrant exchange of ideas. I subscribe to various legal journals and receive updates on what scholars are working on, which helps me stay informed and share my own work. It encourages collaboration and the flow of ideas, with many of us working together (consciously or not) as part of a larger effort to restore lost rights.

    More About GREER DONLEY

    Professor Greer Donley’s scholarship and advocacy have been influential in courtrooms, legislatures, and public discourse. Her work has appeared in leading journals including the Stanford Law Review, Columbia Law Review, and Duke Law Journal, and her public writing has been featured in outlets such as The New York Times, The Atlantic, The Washington Post, and Slate. Donley co‑authored the widely discussed paper The New Abortion Battleground, which was cited by the U.S. Supreme Court’s dissent in Dobbs v. Jackson Women’s Health Organization. Beyond scholarship, she has played a central role in drafting transformative reproductive‑rights legislation, including Connecticut’s pioneering abortion shield law. Before joining academia, she practiced at Latham & Watkins and clerked on the U.S. Court of Appeals for the Second Circuit. She is a graduate of the University of Michigan Law School, where she served as Editor‑in‑Chief of the Michigan Journal of Gender & Law. Her work continues to shape the national landscape of reproductive health, law, and policy.

  • The Latest Research on Law & Political Economy

    The Latest Research on Law & Political Economy

    This list includes a selection of the latest research on law & political economy posted to SSRN.