Podcast: Artificial Intelligence and Intellectual Property Considerations
Artificial intelligence technology is driving a new wave of innovation, presenting unique challenges and opportunities for intellectual property law and policy. In this podcast, IP litigation partners Matt Rizzolo and Leslie Spencer, and IP transactions partner Regina Sam Penti, discuss factors impacting patenting AI technologies, determining infringement liability, protecting training data and attributing inventorship or authorship to AI-generated solutions and creations.
Matt Rizzolo: Hello everyone. I’m Matt Rizzolo, an IP litigation partner based in Ropes & Gray’s Washington, D.C. office. Today I am joined by my colleagues Leslie Spencer, an IP litigation partner in our Silicon Valley and New York offices, and Regina Sam Penti, an IP transactions partner with a dual practice in our Boston and London offices.
In today’s podcast, we’re going to talk about artificial intelligence technology and how it’s driving a new wave of innovation, presenting unique challenges and opportunities for intellectual property law and policy. AI is having a profound impact across a range of industries and organizations, and is spurring research and collaboration in exciting new ways. For example, MIT recently announced a $1 billion commitment to establish a new AI-focused college – the MIT Stephen A. Schwarzman College of Computing – anchored by a foundational gift from Stephen A. Schwarzman, CEO of the Blackstone Group. With the widespread deployment and use of AI technologies, IP issues abound – including concerns about patenting AI technologies, determining infringement liability, protecting training data and attributing inventorship or authorship to AI-generated solutions or creations. But before we get into that, let’s take a step back. Regina, I’ll begin with you – AI can be such a broad concept. What does AI mean to you?
Regina Sam Penti: Thanks, Matt. So I think of AI as a demonstration of intelligence by machines. It generally refers to the concept of machines mimicking human intelligence, such as learning and problem-solving. However, some will argue that this definition underestimates the true power of AI to surpass human intelligence. So another way to think about AI is by reference to systems that change behaviors without being explicitly programmed to do so – they can change behaviors based on data collected, usage analysis and other observations. Not long ago, many of us thought of AI as something of science fiction – that’s no longer the case. AI is already here – it has been for a long time, and it’s part of our everyday life. For example, machine translation and natural language searches – these are all things that use AI. If you think about how many people use search engines every day, these are all examples of AI demonstrating learning and problem-solving abilities. Earlier this year, for example, Google and Microsoft both rolled out AI-driven offline translation apps intended to help international travelers, and these apps can assist travelers in dozens of languages.
AI uses large data sets, so in a sense, the revolution and explosion of AI has really been enabled by the explosion of data – the fact that we’re creating so much data now that we’re practically swimming in it, and that we’ve managed to harness this data at a fraction of the cost, using relatively simple everyday devices like cell phones and laptops. So in some of the examples we gave, the data may include how a certain word or phrase is used, sentence structure and how it appears in oodles and oodles of documents. AI then problem-solves by using algorithms to process that data and extract useful relationships from it. So for example, in natural language search, algorithms would calculate how closely the search terms and a document are related and can return relevance-ranked results.
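To make that relevance-ranking idea concrete, here is a toy sketch in Python of one classic approach – weighting terms by TF-IDF and scoring documents against a query with cosine similarity. This is an illustration only, not how any production search engine actually works, and all function names here are made up for the example.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build simple TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    # document frequency: how many documents each term appears in
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # weight = term frequency * inverse document frequency
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank(query, docs):
    """Return document indices ordered from most to least relevant to the query."""
    vectors = tf_idf_vectors(docs + [query])  # treat the query as one more "document"
    doc_vecs, q_vec = vectors[:-1], vectors[-1]
    scores = [(cosine(q_vec, v), i) for i, v in enumerate(doc_vecs)]
    return [i for score, i in sorted(scores, reverse=True)]
```

Given a query like `["machine", "learning"]`, `rank` would place a document sharing both terms ahead of one sharing only one, with unrelated documents last – the same "how closely are these related" calculation described above, in miniature.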
Matt Rizzolo: So those examples are really helpful to give some context. I know most people are aware of IBM’s “Deep Blue” system beating Garry Kasparov at chess years back, and more recently, its “Watson” cognitive computing system famously winning at Jeopardy, but AI is being more widely used in a lot of other critical areas, like in the medical field as a diagnostic tool. For example, the ability to recognize the pattern of a tumor in an imaging scan, or to detect the signs of a stroke in advance to allow doctors to intervene sooner. Some studies have shown that AI performs on par with medical professionals – now there’s still great debate about this in the medical profession, but there are several health care start-ups organized around this idea. A few months ago, the FDA even approved the first AI-based diagnostic system that doesn’t separately require a clinician to interpret the data.
So AI has demonstrated that it can make intelligent decisions that have traditionally been made by humans, and can potentially do so even faster and more efficiently than we can. It raises for me the question of whether the current IP framework is adequate for AI-related inventions. Leslie, what kind of IP issues should we be on the lookout for?
Leslie Spencer: Well, AI technology raises fundamental IP-related questions. From a patent law perspective, three issues immediately come to mind—inventorship, patentability and infringement liability. Inventorship issues may come up when AI technology is developed to a point where the AI independently comes up with the inventive ideas. Currently, the U.S. patent system recognizes only individuals as inventors. It is interesting to consider whether an AI system should itself be regarded as an inventor, or whether an AI’s invention can be traced back to a human contribution.
For example, nearly two decades ago a patent was filed titled “Neural Network Based Prototyping System and Method.” The invention claimed in this patent was apparently generated by an AI system called the “Creativity Machine,” which had been developed by Dr. Stephen Thaler. Dr. Thaler has claimed that the Creativity Machine mimics the human brain using two artificial neural networks and is therefore capable of producing creative outputs in new scenarios without human input. As one might expect, Dr. Thaler didn’t name the Creativity Machine as the inventor in his disclosures to the Patent and Trademark Office. Rather, he named himself as the inventor on the patent. But there is a real question there as to whether the invention was machine-generated or not.
Regina Sam Penti: Yes, and you know, this parallels a question that has been raised for some time now in the copyright realm, as AI systems have already been known to create music and art of their own. But under U.S. copyright law, a human author or creator is required – the U.S. Copyright Office will refuse to register a claim if it determines that a human being did not create the work. Some people may recall the recent example of the so-called “Monkey Selfie” case, where the Ninth Circuit determined that no one owned the copyright in a picture taken by a monkey. I should point out that these issues aren’t as clear-cut universally, since IP systems do vary across jurisdictions worldwide. The law is very much struggling to catch up in this area, so it’s certainly worth exploring the specific rules in the country or jurisdiction where protection is sought – but just have an awareness that these issues are evolving and the law is going to change quickly in these areas.
So these types of inventorship and creation questions inevitably lead, at least for me as a transactional attorney, to ownership and licensing issues with respect to AI-created inventions – and some interesting examples come to mind. If a company agrees to license out its AI technologies, does it also automatically give away inventions created by the AI during the license term? For instance, you can think of a scenario where two parties are in a collaboration, and one brings the AI algorithms while the other provides massive supplies of internal data to train the AI. Who would own the resulting inventions – the licensor or the licensee? And if the deal falls apart, can you un-train the AI if it’s your data that has improved it, or is one party left with its internal relationships and data trapped, extracted and exposed in the other party’s AI’s logical processes? The ability of these systems to train themselves to perform functions that are not expressly programmed also raises important questions, not just in the licensing context but in other contexts as well. For example, what happens if your AI algorithm trains itself to do something that’s either wrong or illegal – say, it trains itself to violate privacy laws? Should we, as we build these systems, be thinking about whether each of them should have some kind of kill switch as a reversibility measure? Suddenly, issues like indemnification and insurance, at least in the transactional context, can take on new complexities that must be addressed in the agreement. And given that the law is very much struggling to catch up with the technology, you have to consider as a transactional attorney the very real possibility that future regulations may render illegal whatever it is that your AI robot is trained to do today.
Matt Rizzolo: So let’s shift gears and turn away from inventions created by AI technology and toward inventions directed to AI technology. There’s the important question of how to protect them – with a patent, or some other way? The Supreme Court’s decision in Alice set forth a two-part test to determine patent eligibility, which, to put it mildly, makes things difficult for a lot of AI-related patent applications. The first part of the test asks whether the claims are directed to a patent-ineligible concept, such as a law of nature, a natural phenomenon or an abstract idea. And the second part of the Alice test asks whether the claims add an inventive concept – something significantly more than the idea itself. As we discussed earlier, processing large amounts of data with algorithms is central to AI technology. AI patents are often directed to algorithms, as well as to the computers programmed to generate those algorithms. Algorithms compare similarities and differences among various data points. For example, in the diagnostic tool context, an algorithm may be used to compare a patient’s scan to numerous known diagnoses or maps of diseases. Courts may consider these algorithms to be directed to an abstract idea because these comparisons could be, or may long have been, performed by human physicians.
Leslie Spencer: Well, I would agree that the first step of Alice does present a challenge for AI technology to overcome, given that the very nature of AI technology is to develop something specifically tailored to mimic human activity. So even if the technology is too complex to be fairly characterized as mere mental steps, it may well be characterized as an abstract idea or algorithm that would be unpatentable. But there may be situations where the second step of Alice could save a claim like that, in that it recites more than steps that are “well-understood, routine, and conventional.” For example, claims may be drafted to recite novel technologies for extracting crucial information from a data set, or improvements to the algorithms such that they make the computer on which they run process data more quickly or use less energy. Fundamental improvements to the underlying computer technology such as these may be enough to clear the threshold for adding an inventive concept in step two of Alice, and therefore make the invention patentable.
Regina Sam Penti: Alright, so just chiming in to add that some jurisdictions – Europe, for example – may be somewhat more welcoming of AI-based innovations than others, or at least may provide more clarity than we have seen in the U.S. For instance, the European Patent Office recently unveiled guidelines making clear that while AI innovations that focus on mathematical methods will face an uphill battle in securing patent protection, innovations that focus on the technological applications of those algorithms might fare better.
In addition, companies should not forget about other methods of protecting AI technology. In the U.S., patents have traditionally been the currency of innovation, so it’s easy to bring them to the forefront. However, trade secrets can be particularly impactful – especially considering that two years ago Congress passed the Defend Trade Secrets Act, which provides a federal cause of action and strong remedies for trade secret misappropriation – and we know several states also have their own statutes. Many elements of AI systems really are quite well-suited to protection as trade secrets – think of the structure and components of neural networks, the training sets, test data, source code and other algorithms that drive the system. In one case involving AI-driven online chat engagement systems, the Northern District of California recently held that XML data generated by the chat platform’s analytics could constitute a trade secret under New York law, as it reflected the application of the plaintiff’s rules and models to test real-world situations. And of course, to the extent AI is implemented in software, that software can be protected using copyright law – but that protection would not extend to functionality, so it is a bit more limited. Still, it’s certainly something worth thinking about.
Matt Rizzolo: So that brings us to infringement liability. As a litigator, I’m thinking, “Who can I sue, or how do I avoid being sued? Can an AI system be liable as a patent infringer independent of any human involvement at all, and if so, who pays?” The scenario I have in mind, for example, is that a person programs an AI system to perform a specific task – and over time, through machine learning, the AI system independently develops a way to perform that task more quickly and more efficiently, but that approach happens to infringe on a patent. So in that scenario, who, if anybody, should be held responsible for the infringement?
Leslie Spencer: So first and foremost, it’s important to point out that while we often speak of “infringing products” in patent litigation, U.S. patent laws actually require the action of a person to find infringement. The statute says that “whoever” makes, sells, uses, etc. is liable for infringement. And then there’s the complicated issue of whether the liability is for direct infringement alone—the actual making/using of a patented invention—or for indirect infringement—inducing a third party to infringe, or contributing to some third party’s infringement.
For direct infringement, would the infringer be the developer of the AI program, or the owner of the AI program at the time of infringement? Certainly, one would think, it wouldn’t be the AI machine itself. And does it make a difference if the AI program, when originally developed or purchased, was not programmed to operate in an infringing manner? For example, if the AI program is now generating, through machine learning, algorithms and heuristics that are considered infringing, how do you then identify the infringement and the infringer?
Regina Sam Penti: Yes, you know, there are a lot of tricky issues to consider, and as you note, Leslie, it’s arguably even harder for indirect infringement. Contributory infringement, for example, requires a finding that the product at issue has no substantial non-infringing uses. So what if you have an AI program that had many different non-infringing uses at the time it was created, but over time it learned to become exclusively infringing, perhaps based on data that told it that approach was particularly profitable? And a finding of inducement similarly requires that an entity knowingly cause another party to infringe – since the ultimate goal of an AI system is for it to learn and develop on its own, would it be fair to ever really hold a developer or seller of such a system liable for inducing infringement that occurs weeks, months or years down the road, long after they programmed their system? So these are all rather complicated and as-yet-unanswered questions – and they are something that we as transactional lawyers have to think about. Even if these questions are unsettled in the law, we always have to consider whether the parties at the negotiating table can bring more certainty to some of these arrangements and allocate risks using contractual provisions that hopefully will reduce the likelihood of conflicts down the road.
Matt Rizzolo: So there are a lot of complex issues to consider – I think we could probably do an individual podcast on each one, but unfortunately, that’s all the time we have today. So thanks to you both, I really appreciate you joining me for this interesting discussion. And thank you all for listening. If you’re interested in listening to more Ropes & Gray podcasts in which we address noteworthy legal developments, court decisions, changes in legislation and regulations, etc., please feel free to subscribe to us on iTunes, or visit us at www.ropesgray.com.