Ethical AI part 3: Shared responsibility

June 30, 2020

Human rights breach as grounds for termination

The previous part of the Ethical AI blog series, “Human Rights”, discussed how powerful AI products require a concrete foundation to keep ethics in place for both developers and users. After thorough consideration, Utopia Analytics decided to use the United Nations’ Universal Declaration of Human Rights (UDHR) as the backbone of our approach to ethical issues.

With that in mind, Utopia took the unconventional step of including the UDHR in our service contract as grounds for termination: if a party fails to comply with the UDHR, the cooperation is terminated with immediate effect.

All involved parties are obliged to comply with applicable laws and regulations and conduct their business in accordance with high ethical standards. This ensures that Utopia and its customers keep each other honest.

To keep the AI model up to date, the customer provides Utopia with all the human moderation decisions that define its moderation policy, and Utopia AI learns from these examples. In turn, Utopia provides all of its automated moderation decisions to the customer. Both parties are free to inspect and analyse the quality of each other’s actions.
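
As a rough illustration of that exchange, the sketch below shows how either party might compare the decisions shared by the other side. The class and function names here are illustrative assumptions, not Utopia’s actual API or data formats.

```python
# A minimal sketch of the mutual-inspection idea above. All names and data
# shapes are hypothetical, not Utopia's actual API.
from dataclasses import dataclass
from typing import List

@dataclass
class ModerationDecision:
    message_id: str
    approved: bool  # True = message published, False = message rejected

def agreement_rate(human: List[ModerationDecision],
                   ai: List[ModerationDecision]) -> float:
    """Share of messages on which the AI decision matches the human decision.

    Either party can run a check like this on the decisions the other party
    shares, which is the shared-responsibility loop described in the post.
    """
    ai_by_id = {d.message_id: d.approved for d in ai}
    common = [h for h in human if h.message_id in ai_by_id]
    if not common:
        return 0.0
    agreed = sum(1 for h in common if ai_by_id[h.message_id] == h.approved)
    return agreed / len(common)

# Example: the customer's human decisions vs. the provider's automated decisions.
human_decisions = [
    ModerationDecision("m1", True),
    ModerationDecision("m2", False),
    ModerationDecision("m3", True),
]
ai_decisions = [
    ModerationDecision("m1", True),
    ModerationDecision("m2", True),
    ModerationDecision("m3", True),
]
print(f"AI/human agreement: {agreement_rate(human_decisions, ai_decisions):.0%}")
```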

In the end, the common goal is to protect online discussions without violating freedom of speech, and to share responsibility among all parties involved: the AI service provider, the customers, their users and their content moderators.

Has it worked? Without a doubt. The Declaration has been in our service contract since the launch of Utopia AI Moderator in May 2016, and has succeeded in helping all involved parties maintain responsibility for the ethical implementation of Utopia AI.
