July 20, 2021

Ethical AI Part 9: Safety

We build tools to maintain security and wellbeing in the digital world.

In the physical world, many of us have the privilege of taking safety for granted. We assume the existence of a security structure that ensures everything runs smoothly. If we attend a music festival, there are rules and instructions for everyone to follow, and on top of that, people who make sure the rules aren’t being broken.

We need to achieve a similar level of safety in the digital world. We should be able to participate in online gatherings and discussions without fear of being insulted, humiliated, robbed or attacked.

Responsible digital service providers take the security and wellbeing of their users seriously. They set up terms of use to support the community and its individual members. They monitor whether the terms are being followed. And when the terms are violated, they decide on effective consequences that can be enforced without violating anyone’s rights.

Many digital services nowadays have huge numbers of users. Whether they’re buying or selling, sharing opinions or swapping images, playing or just chatting, traffic in many services is so high that watching over the community’s wellbeing is labor-intensive. It is, in fact, impossible to handle manually. An urgent need exists for advanced digital tools to maintain each service’s security, both for the brand and for the users.

Utopia’s purpose is to build tools that maintain security and wellbeing in the digital world, and that free humans from mundane tasks, allowing us to focus on the things that really require the attention of our human brains. From online marketplaces to news site comment sections to social media services and dating platforms, each service provider decides on its own community guidelines for acceptable actions. Utopia AI learns that policy and then uses this knowledge to help protect users as well as brands.

Naturally, the policy needs to be uniform and must take account of people’s rights. Utopia will not agree to build AI models that encode prejudice. Utopia AI analyses every item solely according to its content and context, no matter who wrote it. Utopia’s powerful tools have a solid base.
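To make the principle above concrete, here is a minimal sketch of what content-only moderation means in practice. This is not Utopia’s actual system: the function name, the keyword rule, and the banned-terms list are all illustrative placeholders standing in for a learned policy model. The point is structural, in that author identity never enters the decision function.

```python
# Hypothetical sketch of content-only moderation. The decision function
# receives only the text (and optional context), never author identity,
# so identical messages always get identical verdicts regardless of who
# posted them. The keyword rule is a stand-in for a learned policy model.

BANNED_TERMS = {"idiot", "scam"}  # placeholder for a provider's own policy


def moderate(text: str, context: str = "") -> str:
    """Return 'remove' or 'allow' based solely on content and context."""
    words = set(text.lower().split())
    if words & BANNED_TERMS:
        return "remove"
    return "allow"


# Author metadata is deliberately absent from the signature: two users
# posting the same text receive the same decision.
print(moderate("great product, thanks!"))  # allow
print(moderate("this looks like a scam"))  # remove
```

Because the signature admits no user identifier, prejudice against particular authors cannot be expressed in the decision at all; a real policy model would replace the keyword rule, but the same constraint would hold.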

—

Previous parts:

Ethical AI part 1: Time to talk about responsibility
Ethical AI part 2: Powerful tools need a solid base
Ethical AI part 3: Human rights breach as grounds for termination
Ethical AI part 4: AI moderation and freedom of expression
Ethical AI part 5: Values in order
Ethical AI part 6: No prejudice
Ethical AI part 7: Honesty
Ethical AI part 8: Equality
