AI and the liability challenges it will create
18 May 2017
With the breakneck pace of AI development, how will we know who owns what? The case of a simian photographer could be key...
Artificial intelligence (AI) is rapidly transforming the way we work and go about our daily lives. Our phones can now talk back to us and answer everyday questions, while our cars are starting to drive themselves.
Indeed, AI is expected to have such an impact that consultancy firm Accenture predicts it could double annual economic growth rates by 2035 and boost labour productivity by up to 40 per cent.
But as this digital transformation creates new ways of doing things, it will also create new challenges. Which laws apply to this new technology? Who is liable if an AI system makes a mistake? Who owns the intellectual property?
“To what extent can AI make decisions which may affect humans?” asks Bertrand Liard, head of White & Case’s Intellectual Property and Information Technology practice in Paris. “Can AI create intellectual property? Can the use of AI systems infringe the rights of other parties? Which liability model should be applied to AI, if AI does harm to others?”
Patent and copyright legislation worldwide faces a serious problem: it could, says Liard, be rendered obsolete by the emergence of AI. He gives the example of a wildlife photographer whose camera was taken by a monkey.
When the photos taken by the monkey went viral, the photographer lost the ensuing court case over who owned the rights to the images: the court ruled that no copyright existed because the pictures had not been taken by a human.
Liard warns that something similar could happen with AI: unless the legal definition of a person, or copyright and patent law itself, is changed, nothing created by AI will be eligible for patent or copyright protection.
“That is a worry for companies that are currently programming AI software because through self-learning they are creating on their own,” says Liard.
“They are creating new lines of code, but currently they cannot be protected. It will be a worry in a few years.”
Liard also cautions against regulating too heavily, as over-regulation risks stifling innovation. Even so, it seems there are plenty of legal pitfalls to be negotiated as artificial intelligence rises.