What Should We Do About ChatGPT?

Almost everyone seems to be talking about ChatGPT. This new AI-enabled chatbot, which can swiftly produce answers that feel as if a human wrote them, promises to revolutionize how we generate text. Although ChatGPT emerged only a few months ago, it is already causing turmoil across various fields. Microsoft embedded it into its Bing search engine. Google declared a “code red” and rushed to introduce a competing bot. Universities are rethinking how they evaluate student performance, as ChatGPT can apparently pass exams in law, business, and medicine.

Despite the hype, ChatGPT is not perfect. It is prone to inaccuracies and to a phenomenon colloquially known as “hallucinations,” where the output confidently departs from reality. For instance, shortly after ChatGPT was released, Dr. Hadar Jabotinsky and I asked ChatGPT to provide academic references in support of its answers. Surprisingly, some of the citations were not merely inaccurate but non-existent: ChatGPT invented titles and arbitrarily credited them to authors. This raises an important question: can we trust ChatGPT without regulatory oversight?

The United States has lagged behind in responding to the AI revolution. Proposals such as an “AI Bill of Rights” and a voluntary AI risk-management framework neither directly address ChatGPT nor entail meaningful interventions. Meanwhile, the European Union has been working on stronger proposals: a comprehensive regulatory framework (the “AI Act”), a revision of its Product Liability Directive, and a new AI Liability Directive.

In a recent paper, I evaluated these EU proposals from the perspective of law and economics and identified three main problems. The first concerns the EU’s general approach: instead of asking whether the AI market actually needs fixing (i.e., whether it suffers from a market failure), the AI Act takes a risk-based approach. This approach sorts uses of AI into categories such as unacceptable risk, high risk, and limited risk, but it does not distinguish between risks that do and do not constitute a market failure. In particular, not all risks are externalities; some are governed by contractual terms and conditions or are otherwise subject to negotiation. Moreover, market forces may already be sufficient to induce AI creators to fix inaccuracies, as failing to do so will cost them business to competitors. Hence, the need for legal intervention may be independent of whether the risk is high or low.

The second problem is that the EU intertwined regulation with liability. Because breaches of regulatory obligations under the AI Act make it easier for victims to sue, an imperfectly set regulatory standard (e.g., one that is homogeneous even though AI creators are heterogeneous) creates concerns of either under-compliance or over-compliance. For instance, if an AI creator can automatically escape liability by complying with the regulatory standard, the creator has no incentive to take additional precautions, even when doing so would be efficient (under-compliance). Similarly, if failure to comply with the regulatory standard automatically gives rise to liability, some AI creators will comply even when it would be inefficient for them to do so (over-compliance).
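To see the distortion in numbers, consider a stylized illustration (the figures are hypothetical and chosen only to make the logic concrete; they are not taken from the paper):

\[
\begin{aligned}
\text{Regulatory precaution } x_R :&\quad \text{cost } 10,\ \text{reduction in expected harm } 30,\\
\text{Additional precaution } \Delta x :&\quad \text{cost } 5,\ \text{further reduction in expected harm } 20.
\end{aligned}
\]

If compliance with \(x_R\) works as a safe harbor, the creator stops at \(x_R\), and society forgoes a net gain of \(20 - 5 = 15\) (under-compliance). Conversely, if non-compliance automatically triggers liability, a creator for whom \(x_R\) happens to cost 40 rather than 10 will still adopt it, even though \(40 > 30\) (over-compliance).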

The third problem is the standard of liability. The EU’s proposals apply three different standards – strict liability, negligence, and no liability – depending on factors such as whether the risk is high or low and whether the parties face different activity levels. However, the type of risk is not usually what matters for the choice among liability regimes. What matters for efficiency are factors such as whether the victim can take precautions to avoid harm. For instance, if AI creators are strictly liable for all harm, even harm that occurs despite their precautions, victims receive implicit insurance. That implicit insurance eliminates victims’ incentive to take precautions of their own, because they will be compensated through damages in a lawsuit anyway. At the same time, if AI creators are not strictly liable, they may choose an inefficiently high level of activity (e.g., releasing more algorithms), so long as they do not behave negligently. Because these factors are not a function of the type of risk, a risk-based approach does not capture the relevant distinctions.
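For readers who want the formalism behind this point, here is a compressed sketch of the textbook law-and-economics accident model (offered as an illustration of the argument, not as a formalization taken from the paper itself). Expected social cost is

\[
SC(x, y) \;=\; x \;+\; y \;+\; p(x, y)\, D,
\]

where \(x\) is the AI creator’s precaution cost, \(y\) the victim’s precaution cost, \(p(x, y)\) the probability of harm, and \(D\) the magnitude of harm; efficiency requires choosing \(x\) and \(y\) to minimize \(SC\). Under strict liability with full compensation, the victim’s expected loss is zero regardless of \(y\), so the victim rationally sets \(y = 0\) even when some victim precaution would be efficient. Under a negligence rule, a creator who meets the due-care standard \(\bar{x}\) escapes liability entirely, so the residual expected harm \(p(\bar{x}, y)D\) plays no role in deciding how much activity to undertake (for example, how many algorithms to release), and the activity level tends to be inefficiently high. Neither distortion turns on whether the risk is labeled “high” or “low.”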

In light of these problems, my paper calls upon AI policymakers to pay closer attention to principles of law and economics to ensure that the most relevant distinctions are taken into account.

As a final remark, I fed ChatGPT this blog post (which, to be clear, was written by me) and asked for a final paragraph entailing policy implications. Here is the result:

Q (my prompt). The following text is a draft of a blogpost. Could you add a brief final paragraph that entails policy implications, given the problems I described? The text is as follows:

A (ChatGPT): “In summary, while ChatGPT has tremendous potential, it is not without flaws, and its increasing use raises concerns about regulatory oversight. The European Union has taken an active approach to regulate the AI market, but its current proposals have limitations. Therefore, policymakers need to pay closer attention to principles of law and economics to ensure that regulations address market failures and do not stifle innovation. By doing so, they can create a regulatory environment that fosters the development of AI technologies while protecting users from harm.”[1]

ENDNOTE

[1] The author generated this paragraph in part with ChatGPT, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to his own liking and takes ultimate responsibility for the content of this publication.

This post comes to us from Professor Roee Sarel at the University of Hamburg’s Institute of Law and Economics. It is based on his recent paper, “Restraining ChatGPT,” forthcoming in the UC Law SF Journal and available here. A version of this post appeared on the Oxford Business Law Blog.

1 Comment

  1. Kiers So

    On an optimistic note, perhaps ChatGPT will show us the true value of human critical thinking! (I doubt it).

    The “hallucination problem” is fascinating. We humans rely heavily on critical thinking in our higher learning. Critical thinking keeps us on the straight and narrow as our minds move from “accepted given truth” to “derived truthful statement implied/deduced/induced from the given truth.”

    In hallucinating, ChatGPT betrays its literal feedback-circuit design. It cannot distinguish the direction of travel between the given truth and the implied/deduced/induced statement. It goes round and round, traversing the feedback loop in both directions: taking the statement it makes as truth and implying/deducing/inducing the more basic given truth (here, the citation reference). It’s fascinating.
