Ethical AI part 5: Values in order
Utopia will not sell its tools to anyone who might abuse them
Is it OK to sell the Utopia AI Moderator service to a particular company? This question comes up now and then in Utopia’s internal discussions.
The question is worth taking seriously.
Utopia’s Text Analytics Platform can understand semantic meaning in any language in the world, and Utopia AI Moderator learns each customer’s moderation policy from content samples moderated by humans. It is always the customer’s responsibility to provide that training data. If the human moderation has been biased or has violated human rights principles, the AI model will learn the same behaviour. On top of that, the AI model is more effective than a human moderator could ever be: it can make unwanted opinions and voices vanish from a platform at scale.
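To make the mechanism concrete, here is a minimal illustrative sketch, not Utopia’s actual platform: a toy text classifier built with scikit-learn and trained on invented example data labeled with human moderation decisions. Whatever policy, and whatever bias, is encoded in those labels is exactly what the model learns to apply.

```python
# Illustrative sketch only: Utopia's actual platform is proprietary and
# language-independent. This toy example shows the general principle that
# a moderation model reproduces whatever policy is encoded in the
# human-made labels it is trained on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical customer-provided training data: each message carries the
# decision a human moderator made. If those decisions were biased, the
# model below learns the same bias.
messages = [
    "Selling a used bike, good condition",
    "You people should all disappear",
    "Looking for a flatmate in the city centre",
    "I disagree with the new tax policy",
]
decisions = ["keep", "remove", "keep", "keep"]  # the human moderators' labels

# Train a classifier that imitates the human moderation policy.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, decisions)

# The learned policy is then applied to new content at scale.
print(model.predict(["Brand new phone for sale"]))
```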
First of all, the answer to the question “to sell or not to sell” depends very much on one issue: what is this particular company going to use Utopia AI for?
For example, online marketplaces often use Utopia AI Moderator to moderate classified sales ads. Such services are rarely platforms for political messaging or other activity that is sensitive from a human rights perspective. Therefore, Utopia can usually provide the moderation service to online marketplaces without concern.
If Utopia AI Moderator is requested for news comment moderation, the evaluation needs to be done more carefully. Utopia supports freedom of expression and other human rights. If there is a risk that a potential customer would, for one reason or another, violate the United Nations’ Universal Declaration of Human Rights (the UDHR), Utopia will not sell to that customer.
Utopia has been asked, for example, to build AI models that would moderate comments so as to remove certain types of political opinions. Utopia has declined such requests and has instead offered models that, rather than removing political content, remove content such as hate speech and grooming.
Sometimes it is clear from the start that a potential customer is not committed to human rights. In such cases the answer is easy: Utopia cannot provide the service. Sometimes the situation is more ambiguous. Under those circumstances Utopia communicates openly with the customer and reminds them that a UDHR breach is grounds for terminating the service contract with immediate effect.
Utopia’s service contracts usually run for multiple years. Therefore, the human rights impact of the moderation has to be monitored continuously. The moderation is offered as a service, and Utopia’s PhD-level experts in AI and computational linguistics continuously monitor the quality of the training data for suspicious activity. If things start to go wrong, termination is easy for Utopia, both because we are committed to following the UDHR and because the UDHR stipulation is written into every single contract.
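What such monitoring could look for can be sketched in a few lines. The check below is a hypothetical example, not Utopia’s actual tooling: the category tags, the data shape and the alert threshold are all assumptions. The idea is that a sudden jump in the share of removals in a sensitive category, say political content, within newly supplied training data is a signal for a human expert to investigate.

```python
# A minimal sketch of one check continuous monitoring could run. The
# category tags, thresholds and data shape are assumptions for
# illustration, not Utopia's actual tooling.

def removal_rate(decisions, category):
    """Share of items in `category` that human moderators removed."""
    in_cat = [d for d in decisions if d["category"] == category]
    if not in_cat:
        return 0.0
    removed = sum(1 for d in in_cat if d["action"] == "remove")
    return removed / len(in_cat)

def drift_alert(previous, current, category, threshold=0.15):
    """Flag the category if its removal rate jumped more than `threshold`."""
    jump = removal_rate(current, category) - removal_rate(previous, category)
    return jump > threshold

# Hypothetical moderation logs from two consecutive review periods.
q1 = ([{"category": "political", "action": "keep"}] * 90
      + [{"category": "political", "action": "remove"}] * 10)
q2 = ([{"category": "political", "action": "keep"}] * 60
      + [{"category": "political", "action": "remove"}] * 40)

if drift_alert(q1, q2, "political"):
    print("Suspicious shift in 'political' removals: escalate for human review")
```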
As a text analytics company, Utopia needs to be as trustworthy as a medical practitioner or a lawyer. Quality, safety and ethics have to come first.
Want to learn more?
Check out our case studies or contact us if you have questions or want a demo.