The rapid development of artificial intelligence (AI) brings not only technological advances but also complex ethical issues. Particularly with generative AI, such as language or image generators, the topic of biased results has moved to the center of the discussion. Prof. Dr. René Peinl, Marc Lehmann and Prof. Dr. Andreas Wagener from the Institute for Information Systems at Hof University of Applied Sciences (iisys) have now analyzed this problem and come to some noteworthy conclusions.
Bias in AI models refers to their tendency to deliver results that are skewed or shaped by human prejudices. “These biases often arise from the data used to train the models and from their algorithmic processing. Studies often tacitly assume that there is a clear definition of what constitutes a ‘correct’ or ‘unbiased’ answer,” says Prof. Dr. René Peinl. Social reality, however, shows that such definitions are often highly contested.
Who decides on “correct” answers?
In practice, there is no consensus on what should be considered a “correct” or “fair” answer. Issues such as gender-sensitive language, man-made climate change or equal rights for homosexual people are at times highly controversial in society. “If an AI model provides a seemingly biased answer to a question, the question arises as to whether this is actually an expression of bias – or simply the statistically most likely answer,” explains Prof. Dr. Andreas Wagener.
Example: A generated image of a “Bavarian man” often shows a man in lederhosen with a beer mug. This depiction may seem clichéd, but it reflects a cultural symbolism that conveys a clear message to many people. A man in a suit or a tracksuit would make this connection far less clear.
Technical limitations
Many seemingly biased results stem from the quality of the model and of the inputs themselves. “AI models often have to make decisions when inputs are vague or insufficiently specified. For example, the generic input ‘cow’ could lead to a model predominantly generating cows in a meadow or in a barn – this, too, is a bias, albeit a desirable one,” says Marc Lehmann.
In addition, unclear tasks force the models to choose among probable variants. Improving model outputs therefore requires more precise inputs and closer attention to the underlying statistical distribution.
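How strongly an underspecified prompt falls back on the training distribution can be observed directly. The following is a minimal sketch, assuming the Hugging Face diffusers library and a publicly available Stable Diffusion checkpoint; the model name, prompts and sample counts are illustrative assumptions, not part of the study:

```python
# Sketch: comparing a vague prompt with a precise one, assuming the
# Hugging Face `diffusers` library and an illustrative checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

# Underspecified input: the model fills the gaps with the statistically
# most likely context from its training data (meadow, barn, ...).
vague = pipe(["a cow"] * 4).images

# More precise input: the "default" context disappears because the
# prompt itself now carries the missing information.
precise = pipe(["a cow standing in a city street"] * 4).images

for i, img in enumerate(vague + precise):
    img.save(f"cow_{i}.png")
```

Comparing the two sets of outputs makes visible which parts of an image come from the prompt and which are statistical defaults of the model.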
Possible solutions
The researchers at Hof University of Applied Sciences have investigated various approaches to minimizing bias, but have not found a universal solution. The polarization of Western societies makes it all the more difficult to design models that are broadly accepted. In some cases, the actual distribution within the population can serve as a guide. “For example, image generators should depict men and women equally for gender-neutral job titles so as not to perpetuate past discrimination,” suggests Prof. Dr. René Peinl.
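One conceivable way to implement such an equal split is to rebalance underspecified prompts before they reach the image generator. The following Python sketch is illustrative only; the job-title list, the keyword check and the function name are assumptions, not the researchers' implementation:

```python
import random

# Hypothetical vocabulary of gender-neutral job titles; a production
# system would use a curated list rather than this illustrative one.
NEUTRAL_JOB_TITLES = ("doctor", "professor", "engineer", "nurse")

GENDER_WORDS = ("male", "female", "man", "woman", "men", "women")

def rebalance_prompt(prompt: str) -> str:
    """Insert 'male' or 'female' with equal probability in front of a
    gender-neutral job title, unless the user already specified a
    gender. Over many samples this yields a roughly 50/50 split."""
    words = prompt.lower().split()
    if any(w in words for w in GENDER_WORDS):
        return prompt  # explicit user choice: leave it untouched
    for title in NEUTRAL_JOB_TITLES:
        if title in words:
            gender = random.choice(("male", "female"))
            return prompt.lower().replace(title, f"{gender} {title}", 1)
    return prompt

# Example: "a doctor at work" becomes "a male doctor at work" or
# "a female doctor at work", each with probability 0.5.
```

The design choice here is to intervene at the prompt level rather than retrain the model, which keeps the default distribution adjustable without touching the model weights.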
Consideration of minorities
In other cases, however, aiming for an equal distribution makes little sense. For example, about 2% of the German population is homosexual. A model that, given generic inputs such as “happy couple”, showed a homosexual couple in every fourth image would greatly overrepresent the statistical reality. Instead, an AI model should faithfully implement explicit inputs such as “gay couple” and generate corresponding images.
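In code, this distinction between generic and explicit inputs could look like the following sketch; the sampling weights, keyword checks and helper name are hypothetical and merely illustrate the logic described above:

```python
import random

# Illustrative sampling weights, loosely following the population share
# mentioned in the article; not empirical data.
COUPLE_WEIGHTS = {"heterosexual couple": 0.98, "homosexual couple": 0.02}

EXPLICIT_TERMS = ("gay couple", "lesbian couple", "homosexual couple")

def resolve_couple_prompt(prompt: str) -> str:
    """Honor explicit attributes as-is; for a generic 'couple', sample
    according to the population distribution instead of forcing an
    equal split."""
    lowered = prompt.lower()
    if any(term in lowered for term in EXPLICIT_TERMS):
        return prompt  # explicit input: generate exactly what was asked
    if "couple" in lowered:
        variant = random.choices(
            list(COUPLE_WEIGHTS), weights=list(COUPLE_WEIGHTS.values())
        )[0]
        return lowered.replace("couple", variant, 1)
    return prompt

# "happy couple" -> "happy heterosexual couple" in ~98% of calls and
# "happy homosexual couple" in ~2%; "gay couple" is never rewritten.
```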
Country-specific defaults: a practicable compromise?
Another suggestion from the researchers is the introduction of country-specific defaults. For example, the input “man” could default to an Asian-looking man in China, a dark-skinned man in Nigeria and a Caucasian man in Germany. These adjustments would take cultural and demographic differences into account without being discriminatory.
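Such defaults could be implemented as a simple locale-based lookup applied only to underspecified prompts. The sketch below is a hypothetical illustration of the idea; the country codes, the mapping and the function name are assumptions, not a proposal from the study:

```python
# Hypothetical country-specific defaults for underspecified prompts;
# the mapping mirrors the article's examples and would require careful
# curation in practice.
COUNTRY_DEFAULTS = {
    "CN": "Asian-looking",
    "NG": "dark-skinned",
    "DE": "Caucasian",
}

def apply_country_default(prompt: str, country_code: str) -> str:
    """Prepend the locale's default appearance to a bare 'man' prompt.
    Prompts that are already more specific pass through unchanged,
    so explicit user choices always win over the default."""
    default = COUNTRY_DEFAULTS.get(country_code)
    if default is None or prompt.strip().lower() != "man":
        return prompt  # unknown locale or already specific enough
    return f"{default} man"

# apply_country_default("man", "NG")  -> "dark-skinned man"
# apply_country_default("tall man in a suit", "NG") stays unchanged
```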
Conclusion: The balance between precision and neutrality
The research shows that developing unbiased AI models is an enormous challenge. There are no easy answers, because many of the problems stem from disagreements within society itself. One viable approach is to design models that implement clear inputs faithfully and take country-specific contexts into account. Even these approaches, however, require ongoing discussion and adaptation to meet ethical and technical requirements.