
Should Machines Be Allowed To Do Everything They Can?

Welcome to our blog, the Human Righter. We shed light on contemporary human rights issues and comment on human rights developments. We dig deep into our focus areas within human rights, discuss SDGs and human rights. You will also find book reviews and analyses of new laws.

By Katrin Nyman Metcalf 
Board member, the Raoul Wallenberg Institute and Adjunct Professor of Communications Law, Tallinn University of Technology, Estonia.


Technology is rarely good or bad.

If it were, the task of legislators would be quite easy: ban the bad and make sure the good is used properly.

Would you think it is a good idea to be able to insert a chip in humans that shows exactly where they are at any given moment, even if they themselves are not aware of it? Probably not. At least not until your elderly parent with severe dementia leaves the house in the middle of the night, dressed only in pyjamas, when it is -15°C outside.

Whether technology is good or bad depends on how it is used: the exact same technology can be useful or dangerous, depending on the situation.

We have tools to mitigate possible risks, such as the requirement of consent and the obligation to use the least intrusive technology available – neither of which, however, offers any help in the aforementioned scenario.

If we cannot just ban or permit technology, we need to find other ways to make the legal and regulatory system relevant in an increasingly technological world.

Neither the importance of different kinds of technologies for our daily life, nor the regulatory tools available, are very recent. The idea of impact assessment is important in many contexts – whether regarding environmental impact, the impact on data protection or other rights.

Many technologies (such as information and communication technologies, transport and energy) are regulated by specific, specialised regulatory agencies that issue rules and guidance, set licence conditions and monitor activities.

Another tool is the notion of absolute liability – liability even when there is no fault – for activities that are regarded as especially risky, such as certain space activities or nuclear energy.

Such instruments remain important as technology develops and can indeed be more conducive to real results than passing new legislation or setting up new organisations whenever a new technology appears.

There is unfortunately a tendency in many countries to pass too much specialised legislation on technology matters, as this is a more visible way to tick the box of having taken action than adjusting existing rules, and thus appeals to politicians. The real reasons why legislation is not properly implemented are usually more complicated, and dealing with them offers less immediate gratification.

Even if the importance of technology is not new, recent years have brought more rapid development and more areas of daily life being affected. What we are only beginning to face is technology so autonomous that the link between a human decision and the final activity and outcome is so indirect and remote that it is almost impossible to determine – at least not clearly enough to form the basis for a legal obligation or liability.

‘Artificial intelligence’ (AI) and the related term ‘machine learning’ are surprisingly vaguely or inconsistently defined for something so much talked about, but for our current purpose this is not important: the key is the characteristic just mentioned, namely that the machine decides (to put it simply, on purpose).

To some extent the above-mentioned, existing mechanisms can still be useful. We should not forget to first analyse existing laws and regulations when faced with a new technological solution. However, we will face uncharted territory – including territory that appears risky or even terrifying.

When do we reach the point where the question becomes whether we should simply ban certain technologies? Although we do not need to let machines do everything that they can do, taking a decision today to ban something inevitably brings to mind the alleged (albeit probably apocryphal) statement in 1899 by the then Commissioner of the US Patent Office that everything that can be invented has already been invented!

Maybe the only conclusion that can be drawn about how to decide the way forward is that interdisciplinarity is a necessity – not just as a popular catchphrase in academic education (much preached and little practised) but as the way to work in general.

What machines should be allowed to do is not a technical issue, nor a legal one – it is a question of strategy and policy, which should be based on ethics. But what is ethics, where does it come from and whose ethics should a society apply? We will not find easy answers.

So what is the solution? I quote the singer/poet Benoit Dorémus:

“On verra bien, dans quelques siècles“ (We shall see, in a few centuries).
