GOOGLE AND OPENAI PLAN TECHNOLOGY TO TRACK AI-GENERATED CONTENT

The rise of AI raises many urgent questions for content creators and the media industry. One of the most fundamental is this: how can we distinguish AI-generated images, videos, or pieces of music from human creations? When President Biden announced yesterday that seven major technology companies were taking voluntary steps to regulate their AI technologies, a potential answer emerged: digital watermarking. Google and OpenAI in particular committed to developing watermarking schemes to help identify content created with their AI tools.

Digital watermarking takes its name from centuries-old techniques for embedding invisible markings into paper to signify its source and authenticity, markings that could be seen if the paper were soaked in liquid. Most paper currency today, for example, contains various types of watermarks. In digital watermarking, an algorithm operates on a digital content file (a JPEG image, an MP3 audio file, an MP4 video file) to insert a small piece of data in a way that doesn't affect what a user sees or hears. A software program can then be run on the file to extract that small piece of data, called the payload.
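To make the mechanics concrete, here is a toy sketch in Python of the simplest possible approach: hiding a payload in the least significant bits of an image's pixels. Real watermarking schemes use far more robust methods (spread-spectrum and frequency-domain embedding, for example), and the payload value below is made up, but the embed-and-extract workflow is the same.

```python
import numpy as np

def embed_payload(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the payload bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    # Overwrite only the lowest bit of each pixel; the visible image barely changes.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_payload(pixels: np.ndarray, num_bytes: int) -> bytes:
    """Read the payload back out of the least significant bits."""
    bits = pixels.flatten()[: num_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Example: hide a 12-byte identifier in a synthetic 64x64 grayscale image.
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_payload(image, b"AITOOL-00042")
print(extract_payload(marked, 12))  # b'AITOOL-00042'
```

A scheme this naive would not survive even a screen grab, which is exactly why production watermarking algorithms are far more elaborate.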

This technology is neither new nor rare. Digital watermarking techniques first appeared in the 1990s. They’re used routinely in many types of content today, from movies shown in theaters to photos from stock agencies to e-books to digital music files sold online. In most cases, they are used to trace the origins of content suspected to be pirated. Content creation tools today such as Adobe Photoshop have add-ins that enable users to embed invisible watermarks.

A good watermarking algorithm makes the payload very difficult to remove without damaging the content, yet robust enough to survive transformations such as screen grabs of images or analog recordings of digital music. Those constraints keep the data capacity of a watermark payload very small, typically a few dozen bytes, so it’s not possible to cram much useful information into a watermark. Instead, the payload is usually an identifier that indexes an entry in a database, where information about the content can be stored.

That leads to the use of watermarking in AI. Generative AI tools can readily be modified to embed a watermark in every piece of content they produce. The payload can point to an entry in an online registry that stores information such as the name of the AI tool, the date and time, the identity of the user, and perhaps how, or whether, the user was involved in shaping the content. That last item matters in determining whether the user qualifies as an “author” of the content under copyright law. AI tool vendors can make watermark extraction tools freely available to the public, so that anyone can examine a piece of content they come across to see what AI origins, if any, it may have. These tools would be like “X-ray vision glasses” for finding information about the content.
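A hedged sketch of what such a payload-to-registry lookup might look like is below. The field names, the registry contents, and the identifier are all illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    tool_name: str          # generative tool that produced the content (hypothetical name)
    created_at: str         # ISO 8601 timestamp of generation
    user_id: str            # account that ran the tool
    human_involvement: str  # e.g., "prompt only" vs. "heavily edited"; relevant to authorship

# The real registry would be an online database; a dict stands in for it here.
REGISTRY = {
    "AITOOL-00042": ProvenanceRecord(
        tool_name="ExampleImageGen v2",
        created_at="2023-07-21T14:03:00Z",
        user_id="user-8841",
        human_involvement="prompt only",
    ),
}

def look_up(payload_id: str) -> ProvenanceRecord | None:
    """Resolve a watermark payload to its provenance record, if one was registered."""
    return REGISTRY.get(payload_id)

print(look_up("AITOOL-00042"))
```

The point of the design is that the watermark itself stays tiny; everything interesting lives in the registry, which can be updated or corrected after the content ships.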

Yet although the technology exists, challenges lie ahead. One is that watermarking schemes differ from one type of content (e.g., images) to another (e.g., music). Another is that there are no standard watermarking algorithms even within a single content type; several vendors each have their own proprietary schemes. There is also what IP experts call a “patent thicket” in the technology: many watermarking-related patents exist, some of them owned by firms that maintain them and use them to extract license fees, for example by threatening or filing patent-infringement lawsuits. Creating watermarking standards that all AI content creation tool vendors could adopt would require a lengthy, contentious process of patent identification and licensing.

This means that for the foreseeable future, each AI tool vendor would most likely have to develop its own watermarking scheme and make its own determinations about patent liability and technology licensing. As a result, it would be necessary to use several sets of “X-ray vision glasses” to find watermarks in content.

On the other hand, it should be possible for the AI technology vendors to agree on a standard format for payload data and even a common registry (database) for storing the information that payloads point to. Back in 2009, the RIAA created a standard watermark payload for music files that was designed to work with multiple audio watermarking schemes.
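What might a standard payload format look like? Here is a speculative sketch: a fixed 12-byte layout with a registered vendor code, a format version, and a content ID that keys into the shared registry. The layout is purely an illustration, not a proposal any of these companies has published; the point is that every vendor's extraction tool could parse the same few dozen bytes regardless of which watermarking algorithm carried them.

```python
import struct

# Big-endian: 16-bit vendor code, 8-bit version, 8-bit reserved, 64-bit content ID.
PAYLOAD_FORMAT = ">HBBQ"

def pack_payload(vendor_code: int, version: int, content_id: int) -> bytes:
    return struct.pack(PAYLOAD_FORMAT, vendor_code, version, 0, content_id)

def unpack_payload(payload: bytes) -> dict:
    vendor_code, version, _, content_id = struct.unpack(PAYLOAD_FORMAT, payload)
    return {"vendor": vendor_code, "version": version, "content_id": content_id}

blob = pack_payload(vendor_code=17, version=1, content_id=982_451_653)
print(len(blob), unpack_payload(blob))  # 12 bytes, readable by any compliant tool
```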

These AI tech companies should go down the path of standard payload formats, a common registry, and freely available watermark detection tools. This is the better side of the 80/20 rule when it comes to standardizing this technology to make it as useful as possible in a reasonable amount of time—and time is of the essence here. This type of standard-setting will enable other AI tech vendors, including the multitude of startups to come, to join in easily.

There are many good reasons to identify AI-generated content and distinguish it from content created by humans and even content created by people with AI assistance. AI is likely to lead to an explosion of content that dwarfs what humans have created, even with the powerful digital tools we have today. For example, just last week AI music startup Mubert boasted that its technology has created over 100 million tracks, equal to the size of the entire Spotify library. And although Mubert hasn’t attempted to upload all that music to Spotify, a step like that is inevitable. This will surely be a long process of disruption for music and other types of content, and the outcome is far from clear.

Of course, the use of watermarking to identify AI-generated content would be voluntary; AI tech vendors who refuse to use watermarking are inevitable, even if the technology is free to use. (And, of course, hackers will look for ways to remove AI watermarks without altering the content.) That leads to a need to identify AI-generated content after it’s created.

This technology exists today as an offshoot of tools to detect plagiarism in written assignments at schools and colleges. Other companies are developing technology to detect AI-generated visual and audio content, mostly with the objective of rooting out deepfakes. This will lead inevitably to an arms race between AI detection tools and AI content creation tools. And as the commercial needs for AI content detection grow—for example, if Spotify were to decide not to accept certain forms of AI-generated music into its vast catalog—the arms race will accelerate.

Some say that detecting AI-generated content is a quixotic quest. Yet similar things were said about content recognition technology to detect copyrighted music, text, and video online back in the 1990s, technology that is related to AI detection in various ways. At first, content recognition technology was not very accurate, but as the need for it increased with the rise of online file-sharing and copyright liability, the tech improved, to the point that it’s used every day in services like YouTube and Facebook. It’s not perfect, but it works well enough to satisfy copyright owners most of the time. The same may happen with AI detection; we’ll just have to wait and see.
