
There is a growing debate about whether AI should be regulated on the basis of ethical principles. The EU, with its ethical guidelines, has taken a progressive step in this respect and received mixed reviews. Sceptics fear that regulation will end up limiting AI technology and harm society by depriving it of the overwhelmingly positive potential AI provides. Proponents of regulation, by contrast, focus on privacy concerns and unintended consequences that have already taken a toll, especially among the most vulnerable in society, by reinforcing existing human biases.

The risks of AI can be divided into long-term and short-term risks, an important distinction that many reports fail to draw. Long-term risk, also known as existential risk, revolves around the question of what will happen to society when the intellect of a computer surpasses that of human beings, while short-term risks concern the current challenges of data-driven algorithms, including bias, discrimination, and privacy violations.

As a result, ethical concerns have gradually increased, and public institutions as well as the private sector are looking for ways to curb the negative impacts of this technology by setting up principles for human-centric AI development. However, since the black-box model still dominates the AI landscape, companies currently lack the capacity to implement these principles. Further research into explainable AI and ethics is therefore vital to guarantee the responsible development of AI technology.

