Ethical AI part 1: Who’s in charge?

March 12, 2020

Time to talk about responsibility

It is easy to understand who is responsible for airline safety. Although Artificial Intelligence (AI) is a new phenomenon, its structure of responsible parties is very similar to that of the airline industry.

Defining who is in charge of artificial intelligence has prompted much discussion. Experts agree that legislation lays the foundation for the ethical use of AI, and that in regulation, AI should be treated like any other automated system. Contrary to common fears, there is nothing magical about the abilities of AI, at least for the foreseeable future. Currently, machine learning (ML) is the most advanced form of AI. Despite its powerful capabilities, ML does not invent anything creative or new that is not already present in the data it is given. The safety issues of AI lie mostly in the collection of personal data and in technologies that physically interact with humans, such as machinery, cars or aircraft.

A frequent question is: who is responsible for the actions of an AI?

AI can be compared to the airline industry, which is familiar to us all and well regulated due to the obvious safety issues surrounding it. But who is responsible for an aeroplane flying safely from A to B? First of all, the users of the aircraft – the pilots, engineers, cabin crew and airport staff – each working in their own area of responsibility and expertise. Secondly, the manufacturer of the aircraft. The leading aircraft manufacturers have professionals for every aspect of aeroplane building and long experience in developing high-quality products.

AI is not much different. It is a technology built by humans, just as aeroplanes are. Most AI applications currently pose little physical risk to humans, mostly because the application areas of machine learning are still limited compared to all of the technology that runs on CPUs.

The responsibility for AI decision-making should work like this: legislators and authorities define the boundaries for collecting and using data, and the certification required to be eligible to produce AI tools, just as they regulate any other tools or products that significantly affect human lives. Within those boundaries, there are three responsible parties:

1. The field experts, who practise their profession and at the same time provide the training data for machine-learning-based AI.

2. The AI model and product developers, who need the professional skills to understand how different algorithms behave on different kinds of data and in different types of AI products.

3. The users of the AI tool, who should follow the user instructions in order to get the expected behaviour from the AI decision-making system.

Dr. Mari-Sanna Paukkeri, CEO

Want to learn more?

Check out our case studies or contact us if you have questions or want a demo.



