January 19, 2021

Ethical AI part 7: Honesty

Our professional team focuses on building unbiased AI models.

The human brain is sometimes referred to as the ultimate black box. Science keeps delivering new discoveries, but we are not there yet. We may think we know what our long-time friend or boss is thinking. Honestly, we have no clue. Even our own ideas, motives, decisions and reactions tend to remain a mystery to us.

Artificial Intelligence is often called a black box too. It's true that many AI products are complicated, and some suffer from a lack of transparency. Yet they are built by us humans, with no secret ingredients. So they are not black boxes, at least not to their builders, provided the builders know what they are doing. Since AI is technology, there is always a reason when an AI model does funny things. With machine learning models, the explanation usually lies in the data describing human behaviour.

AI can actually help to understand humans better, to see inside the ultimate black box of the human brain. Take bias for an example.

Humans are biased in their thoughts and decision-making. Many biases go unrecognised by the very people who hold them. Humans may lie about, or not even realise, the factors that led them to, for example, hire one candidate over another or choose a particular yogurt in the supermarket.

In content moderation work, human bias is ubiquitous. One moderator may let their political opinions affect their decisions, while another may have created their own set of guidelines for the work, rejecting far more content than their colleagues do. A third may have slept poorly and not be sharp enough to stop the ugly comments. The reasons for the inconsistencies vary. This is natural for us humans.

Artificial Intelligence, in its most elegant forms, meaning advanced and highly automated machine learning, is based on statistical modelling. Such AI can reveal human bias and inconsistency in data. Moreover, small errors or inconsistencies do not mislead this kind of statistical AI: it learns the general rules, the big picture, without getting caught up in infrequent failures.
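To make the idea concrete, here is a minimal sketch of how a statistical view of moderation data can surface inconsistency between moderators. The moderator names, decisions, and the deviation threshold are all hypothetical, invented for illustration:

```python
from collections import defaultdict

# Hypothetical moderation log: (moderator, decision) pairs.
decisions = [
    ("alice", "approve"), ("alice", "reject"), ("alice", "approve"),
    ("alice", "approve"), ("bob", "reject"), ("bob", "reject"),
    ("bob", "reject"), ("bob", "approve"), ("carol", "approve"),
    ("carol", "approve"), ("carol", "reject"), ("carol", "approve"),
]

def rejection_rates(log):
    """Share of 'reject' decisions per moderator."""
    totals, rejects = defaultdict(int), defaultdict(int)
    for moderator, decision in log:
        totals[moderator] += 1
        rejects[moderator] += decision == "reject"
    return {m: rejects[m] / totals[m] for m in totals}

def outliers(rates, tolerance=0.25):
    """Flag moderators whose rate deviates from the team mean by more than tolerance."""
    mean = sum(rates.values()) / len(rates)
    return {m for m, r in rates.items() if abs(r - mean) > tolerance}

rates = rejection_rates(decisions)
print(outliers(rates))  # → {'bob'}: bob rejects far more often than the team average
```

A production system would of course use far more data and proper statistical tests, but even this simple aggregate view shows how patterns invisible to any individual moderator become obvious once the decisions are pooled.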

When dealing with fundamental human rights such as freedom of speech and equality, it is important that moderation actions are unbiased. Humans' natural tendencies are not the fairest. Monitoring processes and tools are needed to remind us to be equal and unbiased in our decisions.

AI learns to mimic human bias if not built correctly. Creating unbiased AI models is actually rocket science. Data scientists need a deep understanding of the behaviour of the AI algorithms they are using, and of how different types of data can be modelled in order to get unbiased results.

One crucial component is knowing how, for example, user information should be used in AI models to achieve equal treatment while still resembling reality closely enough to make useful decisions. The world around us is not evenly distributed, so natural data has huge variations. If an AI model is forced to generate evenly distributed results under these circumstances, the model won't reflect society and therefore won't work.
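A toy calculation can illustrate the trade-off. In this sketch the two groups, their violation rates, and the best-case flagging assumption are all invented numbers, not real moderation data: when two groups genuinely differ in how often they post violating content, forcing the model to flag the same share of both groups costs accuracy in each of them.

```python
# Hypothetical data: two user groups with different true rates of
# policy-violating content (1 = violation, 0 = acceptable).
group_a = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # 30% violations
group_b = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]  # 70% violations

def accuracy(labels, flag_rate):
    """Best-case accuracy when exactly flag_rate of items are flagged.

    Assumes the flags land on true violations first, i.e. the most
    favourable ordering possible for the chosen quota.
    """
    n = len(labels)
    flagged = round(flag_rate * n)
    violations = sum(labels)
    correct_flags = min(flagged, violations)
    correct_passes = min(n - flagged, n - violations)
    return (correct_flags + correct_passes) / n

# Flagging at each group's actual rate: perfect in this best case.
print(accuracy(group_a, 0.3), accuracy(group_b, 0.7))  # → 1.0 1.0

# Forcing the same 50% flag rate on both groups: accuracy drops in both.
print(accuracy(group_a, 0.5), accuracy(group_b, 0.5))  # → 0.8 0.8
```

The sketch deliberately ignores which fairness definition is the right one for a given product; it only shows that "evenly distributed output" is not a free lunch when the underlying data is not evenly distributed.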

The truth is that some biases simply need to be accepted.

-----

Previous parts:

Ethical AI part 1: Time to talk about responsibility
Ethical AI part 2: Powerful tools need a solid base
Ethical AI part 3: Human rights breach as grounds for termination
Ethical AI part 4: AI moderation and freedom of expression
Ethical AI part 5: Values in order
Ethical AI part 6: No prejudice
