Ethical AI part 6: No prejudice
Our Utopia AI models moderate each message or comment solely on its content and context, no matter who wrote it.
Utopia has been asked to build AI moderation models that would favour comments written by a certain gender “because their comments are better”. Utopia has also been asked to build models that would judge a writer’s new comment based on that writer’s earlier bad behaviour.
From a business perspective, both requests are understandable. The higher the traffic, the more ad impressions and clicks. The worse a writer’s reputation, the higher the risk of unacceptable content.
Unbiased moderation of chat messages and news comments is important to Utopia. As a text analytics company, Utopia is committed to the United Nations’ Universal Declaration of Human Rights, which prohibits discrimination of any kind and guarantees everyone’s freedom of speech.
Of course, every company and online service provider has the right and the responsibility to decide what kinds of comments are accepted on their service. But with Utopia AI on board, publishing decisions must be made with respect for freedom of speech, and without prejudice.
Traditional moderation tools do not understand the semantic meaning of a text, so user modelling, i.e. relying on a user’s past behaviour, is one way to raise the quality of such tools. In contrast, Utopia AI is powerful enough to moderate each message or comment solely on its content and context, no matter who wrote it. Utopia will not build AI models that rely on user modelling or that enable prejudiced or discriminatory moderation of social media communication.
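To make the distinction concrete, here is a minimal Python sketch of the two approaches. The `score_content` function is a hypothetical toy classifier standing in for a real semantic model, not Utopia’s API; the user-modelling variant is included only to show how the same comment can be treated differently depending on who wrote it.

```python
# Minimal sketch: content-only moderation vs. user-modelling moderation.
# `score_content` is a hypothetical toy classifier, not Utopia's API.

def score_content(text: str, context: str) -> float:
    """Return a toxicity score in [0, 1] based only on the text and its
    context. A toy keyword check stands in for a real semantic model."""
    blocklist = {"idiot", "scam"}
    return 1.0 if set(text.lower().split()) & blocklist else 0.1


def moderate_unbiased(text: str, context: str, threshold: float = 0.5) -> bool:
    """Content-only moderation: the decision depends solely on what was
    written and where it was written, never on who wrote it."""
    return score_content(text, context) < threshold  # True = publish


def moderate_with_user_model(text: str, context: str,
                             author_reputation: float,  # 0 = bad history, 1 = clean
                             threshold: float = 0.5) -> bool:
    """User-modelling moderation, shown only as a contrast: the author's
    past behaviour is added as a penalty, so the same text can be
    published or rejected depending on who wrote it."""
    penalty = (1.0 - author_reputation) * 0.5
    return score_content(text, context) + penalty < threshold


if __name__ == "__main__":
    msg, ctx = "Thanks, great article!", "news comment thread"
    print(moderate_unbiased(msg, ctx))                                # True
    # The same harmless comment is blocked if the writer has a bad history:
    print(moderate_with_user_model(msg, ctx, author_reputation=0.0))  # False
```

In the second variant, a harmless comment from a writer with a bad reputation is rejected even though its content is fine. That is exactly the prejudice Utopia refuses to build into its models.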
Want to learn more?
Check out our case studies or contact us if you have questions or want a demo.