
The advance of technology, and with it the rise of “artificial intelligence” as a buzzword, has been greeted with equal parts optimism and concern. While most recognize the value of AI for its ability to automate time- and labor-intensive tasks, it is widely acknowledged that the technology carries risks. Though far tamer than the grim futures depicted in films such as The Terminator and The Matrix, AI has already raised a number of issues in need of resolution. This opens up two closely related conversations: one about regulation and one about liability. The former is relevant to business owners, governments, and other organizations looking to leverage AI in their operations. The latter concerns professionals in legal fields who now have to contend with the ways AI can affect their work. This technology is effectively a new frontier, and the law must adapt to it accordingly.

 

Regulation

 

Though technology marches steadily on, most laws were not drafted with it in mind. As a result, governing the proliferation of technology, especially something as impactful as AI, is an uphill battle. Safety standards and certification procedures all need to be established under the watch of governments that may not fully understand what they are trying to legislate. Bringing in experts to assist is therefore imperative: AI is complex enough to warrant dedicated governmental advisory roles. Regulation should also be preceded by conversations about the benefits and detriments of AI and how it can be used to improve public welfare, without those discussions getting mired in compliance issues and hypothetical risks.

 

The end goal of any regulation is to establish a framework relevant to corporate and governing bodies, as well as to make provisions for the worldwide spread of AI. Countries will need to recognize one another’s safety standards as well as rules for the importation of AI. Existing treaties and conventions will also have to be considered. All in all, it is a process that involves many organizations and should therefore be started as soon as possible, lest any framework be rendered obsolete on arrival. Already, hotly contested topics such as the safety of driverless cars have caught the attention of world leaders who recognize the need for new standards governing AI.

 

Liability

 

As challenging as regulation is, liability presents its own litany of problems for legal institutions to address. If an automated system malfunctions and causes delays or damages, who is to blame? The owner of the system? The company that created it? The engineers who developed it? Though it may seem obvious that blame would lie with the last of these, companies often hire third-party developers to create AI. Beyond that, it can be hard to determine which developer is responsible for which code, or even how an issue could have been prevented. AI is intended to learn and adapt to changes in data, which can lead to consequences that are nigh-impossible to predict.