PATENTING ARTIFICIAL INTELLIGENCE: LEGAL IMPLICATIONS
Patent protection must keep pace with the growing applications of artificial intelligence in diverse technologies. The rise of "thinking machines" raises questions regarding the application of personhood to patent law, including the definition of a "person" skilled in the art, contractual relationships, inventorship, subject matter eligibility, and liability. This article addresses these questions in light of recent technological advances.
INTRODUCTION
"AI is a new digital frontier that will have a profound impact on the world, transforming the way we live and work."
- WIPO Director General, Francis Gurry
The term "Intellectual Property (IP)" is defined as the property resulting from creations’ of the human mind, the intellect. In this regard, it is fair that the person making efforts for an intellectual creation has some welfare as a result of this endeavour. Probably, the most consequential among intellectual properties is “patent.” A patent is a prerogative granted by a government for an invention, which is a product or a process that provides, in general, an incipient way of doing something, or offers a unique technical solution to a quandary. A few decenniums ago, it was only humans who could play chess or read handwriting. Since then a lot of research and development has been conducted in the field of artificial intelligence and it has changed the terrain completely. Today, researchers are working on many more applications of AI which will remodel the ways in which we work, communicate, study and relish ourselves. Products and accommodations incorporating such innovation will become a component of people’s day-to-day lives within the next few years as we embark on what some AI experts construe as the age of implementation. Yet AI remains a trying issue for many people. Definitions vary, have transmuted over time and are in some cases contentious. The technology is intricate and wide-ranging, potentially affecting many different areas of human activity. And AI raises involute questions about privacy, trust and autonomy that are arduous to grapple with, and this has led to fears about humans themselves being under threat.
Artificial Intelligence Legal Landscape
Artificial intelligence has shaken up the integrated technology ecosystem and has unlocked avenues that were unimaginable only a few years ago. A great deal of activity has been seen in the field of AI, with R&D being carried out to implement AI in various industries on a macro level.
Key Issues Stemming from AI and Policy Responses
“It is important that the various regulatory and other governance mechanisms are thought about now; the fast pace of change in this technology is such that we cannot wait.”
- Kay Firth-Butterfield, WEF
As a transformational technology, AI has the potential to challenge any number of legal assumptions in the short, medium and long term. Precisely how law and policy will adapt to advances in AI, and how AI will adapt to the values reflected in law and policy, depends on a variety of social, cultural, economic and other factors, and is likely to vary by jurisdiction. The most prominent legal issues that arise are as follows:
Legal Personality of AI
Legal personhood is invariably linked to individual autonomy, but it has not been granted exclusively to human beings. The law has extended this status to non-human entities as well, whether corporations, ships or other artificial legal persons. No law currently in force in India recognizes artificially intelligent entities as legal persons, which has prompted the question of whether the need for such recognition has now arisen. The question of whether legal personhood can be conferred on an artificially intelligent entity boils down to whether the entity can and should be made the subject of legal rights and obligations. The essence of legal personhood lies in whether such an entity has the right to own property and the capacity to sue and be sued. There are various arguments against granting AI legal personhood:
The Responsibility Objection: that AI, by its nature, would not be responsible. This objection focuses on the capability of an AI to fulfil its responsibilities and obligations, as well as the consequent liability for breach of trust.
The Judgment Objection: that AI entities cannot be trusted to make the judgment calls that humans are faced with in their work.
Corporations are a prime example of an artificial person. The legal fiction created for corporations serves as a useful precedent for the argument that the same status should be granted to AI. However, the status of AI needs to be examined further, and a simple analogy with corporations would not suffice. Further, it would be unfair to treat AI on par with natural persons, as AI lacks (i) a soul, (ii) intentionality, (iii) consciousness, (iv) feelings, (v) interests, and (vi) free will.
However, with Sophia, a social humanoid robot developed by Hanson Robotics, a Hong Kong-based company, and launched in April 2015, being granted citizenship by Saudi Arabia in 2017, it has become the need of the hour for legal systems across the world to address issues pertaining to the legal standing of AI at the earliest. The urgency of this need is highlighted by a recent accident caused by an autonomous / self-driving car being tested by Uber, in which an individual died and there was no certainty as to whether Uber Technologies Inc should be held responsible or whether the AI running the autonomous car should be held responsible on its own.
In order to find a middle ground, Migle Laukyte suggests the possibility of granting AI hybrid personhood: a quasi-legal person that would be recognized as having a bundle of rights and obligations drawn from those currently ascribed to natural and legal persons.
Uncertainties in Patenting AI—Subject Matter Eligibility
Subject matter eligibility is among the fundamental requirements for receiving a patent, alongside novelty, inventive step (non-obviousness) and capability of industrial application. An invention must contain patent-eligible subject matter in order to receive patent protection. Sections 3 and 4 of the Patents Act list out the non-patentable subject matter. As long as the invention does not fall under any provision of Section 3 or 4, it has patentable subject matter (subject to the satisfaction of the other criteria).
Hon'ble Supreme Court of India on inventive step: In Biswanath Prasad Radhey Shyam vs Hindustan Metal Industries Ltd it was held that "The expression 'does not involve any inventive step' used in Section 26(1)(a) of the Patents Act and its equivalent word 'obvious' have acquired special significance in the terminology of Patent Law. The 'obviousness' has to be strictly and objectively judged."
The Hon'ble Delhi High Court in the F. Hoffmann-La Roche v Cipla case observed that the obviousness test is the one laid down in Biswanath Prasad Radhey Shyam vs Hindustan Metal Industries Ltd, and that "Such observations made in the foreign judgments are not the guiding factors in the true sense of the term as to what qualities that person skilled in the art should possess. The reading of the said qualities would mean qualifying the said expression and the test laid down by the Supreme Court."
The "obviousness" must be strictly and objectively judged. While determining the inventive step, it is important to examine the invention as a whole. It must be ensured that the inventive step is a feature that is not itself an excluded subject. Otherwise, a patentee could, by citing economic significance or technical advance in relation to any of the excluded subjects, insist upon the grant of a patent. Consequently, this comparison of technical advance should be made with the subject matter of the invention, and it should be found that it does not relate to any of the excluded subjects.
There is a perennial debate on whether awarding patent rights to CRIs (computer-related inventions) can encourage investment in software-related research and thereby promote innovation. The naysayers argue forcefully that granting patents on software stifles innovation. So, where would that leave AI? It is an over-simplistic approach to suggest that patents should not be awarded to AI-based inventions, which would essentially fall under CRIs (albeit with far greater potential than general software). The middle ground is a more sensible option, and the Indian Patent Office should address this issue expeditiously. Open discussions are indispensable for creating a solid framework for patenting AI inventions – that is, one that establishes predictability in the patent office's approach through updated and comprehensive examination guidelines. The EPO has already held its first conference on AI and patenting; the government must ensure that AI's impact on patents is dealt with systematically and to the benefit of the technology community, especially if India is to become a creator of AI and not a mere adopter.
Patentability and inventorship issues for AI-generated inventions
The patent-eligibility issue for AI-generated inventions must be explored in the context of whether patents on AI-generated inventions would further the patent law system's main objectives. Some have argued that granting patent rights to AI-generated inventions would accelerate innovation, even enabling advances that would not have been possible through human intelligence alone. Others point out that, even if patents on AI-generated inventions ultimately promote innovation, those patents may "negatively impact future human innovation", as supplanting human invention with autonomous algorithms could result in the atrophy of human intelligence (Artificial Intelligence Collides with Patent Law). The concern is that reduced inventive aptitude could lead to the elimination of high-quality research and development (R&D) jobs or entire R&D-intensive industries. Others even argue that the notion of awarding patent protection to AI-generated inventions should be abolished altogether. In their view, alternative tools, such as first-mover advantage and social recognition of AIs, as well as alternative technologies that prevent infringement of patent rights, can better lead to innovation and public disclosure of inventions. Further, the discussions must identify possible "middle grounds" to help balance the competing objectives and factors. For example, one could consider raising the patentability standard (e.g. non-obviousness) for inventions generated solely by AI, which would level the playing field to some extent between human inventors and AI.
If inventions generated entirely by AI become eligible for patent rights, the next question to address is who should be listed as the inventor. Section 6 of the Patents Act states that an application for a patent for an invention can be made only by the true and first 'inventor' of the invention or an assignee. Further, a 'patentee', according to Section 2(1)(p), is the "person" entered on the patent office register as the grantee or owner of the patent. Ostensibly, this suggests that an inventor and a person essentially mean a natural person. However, Section 2(1)(s) defines 'person' to include the Government, a non-natural entity. Moreover, 'true and first inventor' has an exclusionary definition and there is no mention of a natural person (Section 2(1)(y)). Thus, the Patents Act arguably does not require a particular threshold of human control or input in the invention process for granting patent rights per se, and frames the question of inventiveness in terms of creation (i.e., a "new product or process" or a "technical advance as compared to the existing knowledge", in Sections 2(1)(j)-(ja)). While these provisions do not expressly impose the requirement that an inventor be a natural person, the prevailing position appears to require human intervention for an invention to be considered patentable. The very first issue to be determined is whether an inventor must be a natural person. Keep in mind that Saudi Arabia granted citizenship to Sophia, a social humanoid robot; so, would Sophia be considered a 'natural person'?
Contractual Relationships
In 1996, Tom Allen and Robin Widdison noted that "soon, our autonomous computers will be programmed to roam the Internet, seeking out new trading partners - whether human or machine". A growing concern is that contract law, as it stands, cannot keep up with the rise in technology. While the UN Convention on the Use of Electronic Communications in International Contracts recognized contracts formed by the interaction of an automated system and a natural person as valid and enforceable, a need for more comprehensive legislation on the subject persists today. The explanatory note by the UNCITRAL Secretariat on the matter clarifies that messages from such automated systems should be regarded as 'originating' from the legal entity on whose behalf the message system or computer is operated. This circles back to the debate over giving AI entities a legal personality.
Employment and AI
With the ultimate objective of reducing man-hours and increasing efficiency, several prominent companies across the world have actively subscribed to the practice of using AI systems as a replacement for the human workforce. This wave of automation, driven by AI, is creating a gap between the current employment-related legislation in force and the new laws/employment framework that needs to be put in place to deal with the emerging automation through the use of AI and robotics systems in the workplace. As employers incorporate AI and robotics systems into the workplace, it is pertinent that they simultaneously adapt their compliance systems accordingly. Therefore, a synergy is required between members of the industry and the regulators to arrive at a plausible and technologically relevant employment framework to address such issues.
Liability for Patent Infringement by AI
Another consequential patent law issue likely to be disrupted by AI relates to liability in cases where AI infringes patent rights, given that most AIs now have the technological capacity to infringe patent claims. Akin to the above discussion on AI as the inventor, the liability issue raises the question of who should be held responsible for actions taken by AI – the end user, the developer or the AI itself – as well as the related question of how to assess liability.
Section 48 of the Patents Act confers on the patentee "the exclusive right to prevent third parties, who do not have his consent, from the act of…" The pertinent question here is whether an AI has the capacity to give consent. If it does, how would someone obtain the requisite consent? The same issue arises with ownership through assignment or acquisition. If the ownership of the invention is transferred to a business entity that can enforce the patent, does an AI have the capacity to assign (i.e., give consent for a change of ownership)? Thereafter, once patent infringement is established, the infringer would have to pay damages to the patent owner in an amount adequate to compensate for the infringement (typically in the form of lost profits or reasonable royalties), and in certain cases would be enjoined or restrained from performing the infringing activity. How would the courts enforce this against an infringing AI? Does the legal responsibility arising from an AI's unlawful action lie with the AI, its owner, or its user or operator? If the cause of the unlawful activities cannot be traced back to a specific human actor, who bears liability? These and many similar concerns are now the subject of debates on the ambiguities of AI, not only in the IP context but also in the context of criminal liability or civil tort liability.
Legal scholar Gabriel Hallevy has discussed three models of criminal liability that are instructive in understanding the issues at hand. The first is the 'perpetration-via-another' liability model, wherein mens rea is not attributable to the AI entity and the perpetrator would be either the programmer of the AI software or the end user. Second, the 'natural-probable-consequence' liability model assumes deep involvement of the programmers or users in the AI entity's daily activity, but without any intent to commit an offence. However, since ignorance of the law is not a defence, this model assumes that the programmers or users of an AI should have known about the probability of the commission of the specific offence, and hence holds them liable. Third, the 'direct liability' model focuses on the AI entity itself and suggests that the AI entity would be liable as if it were a human. The ultimate consequences of this are anyone's guess; drawing a metaphor from a favourite childhood party game, the essence here is that we may have the tail, but no donkey to pin it on.
CONCLUSION
The penetration of self-driving cars, robots and fully automated machines, which are currently being used in diverse economies around the world, is only expected to increase with the passage of time. As a result, the dependency of entities and individuals on AI systems is also expected to increase proportionately.
This may be evidenced by the fact that AI is expected to boost economic growth by an average of 1.7% across various industries by 2035.
However, in order to safeguard the development and integration of AI systems with the industrial and social sectors, it is important to ensure that the current concerns surrounding AI systems are appropriately addressed. The most prevalent issues are (i) the issue of attribution of liability, or in other words, the issue of holding an AI responsible for its actions; and (ii) the issue pertaining to the relationship/interplay between ethics, the law, and AI and robotics systems. While addressing the above, it is imperative that regulators adopt a plausible and balanced approach between safeguarding the rights of citizens/individuals and the need to encourage technological growth.
While we ponder the policy issues, a fascinating practical point to consider is this: if artificial intelligence lives up to its hype and superintelligence in AI is achieved, would the AI not then be able to decide whether it has developed patentable subject matter, review the prior art and approve or reject its own application – and, if the patent is granted, find infringers with claim charts that are indisputable? Will the very machines we develop file, prosecute, defend, enforce, and pay and receive damages – thus taking all our jobs away? Needless to say, the road ahead is unknown and daunting.
By Jahnvi Bhala