As the tech industry grows, and as its products increasingly become a part of our everyday lives, the question of ethical design is also gaining in prominence. And yet, it remains relatively nebulous – insiders within the tech industry interpret it differently, while outsiders have often never even heard of it.
This article does not intend to be conclusive; on the contrary, it is meant to act as a starting point. The arguments raised here will hopefully encourage you to consider ethical design in greater depth, and that is what matters most: ultimately, the whole point of ethical design is that the decisions on this topic will be yours to make.
Ethical design is sometimes confused with the development of software (or the integration of a feature into that software) that protects its users from security breaches, online harassment, or other harmful outcomes of using a tech product. That sort of work, though, has more to do with safety than with ethics, and its principles are generally more established and better understood.
Instead, ethical design has to do with the purpose of a product, not with its hazards. It is hopefully self-evident why developing software to automate a military drone would have questionable moral implications, but what about an algorithm that suggests videos or articles for a user to watch or read? In that case, the question of what those programs are for becomes essential. We know that social media, for example, can hijack our brain's reward mechanisms to generate addiction for commercial purposes – the very opposite of ethical design.
Other problems arise when software is designed for profiling purposes. There is nothing inherently wrong with a program that can recognize a human being's somatic traits, but if it is going to be used to categorize people as more or less likely to commit a crime, then its potential for injustice becomes enormous.
Ethical design is primarily about ensuring that the software we develop actively does good for the people who use it and others around them. It does not stop at designing products that passively do no harm. How to understand and implement this principle is up to you, though it’s very much worth taking a moment to look into how others have further developed the concept – for instance Don Norman’s Human-Centered Design.
We are currently standing at the threshold of a new technological era – one in which algorithms and AI will be left to make autonomous decisions in a great many fields. The trouble with new technology, however, is that regulation tends to come late. There are laws that can limit unethical design to an extent, such as the General Data Protection Regulation (GDPR), but there are none that proactively ensure or promote ethical design. This means – at the risk of sounding clichéd – that your decisions and your responsibility as a software developer are greater today than they probably ever will be.
Designing an algorithm that can process data is a relatively simple task, but data taken from the real world will reflect the unethical systems that exist within it. Microsoft's Tay, the Twitter bot that turned racist after being exposed to racist tweets, is an infamous example, but far from the only one: software has also been known to discriminate against women and ethnic minorities.
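To make this concrete, here is a minimal sketch – with entirely synthetic, hypothetical data, not any real dataset – of how a model trained on biased historical decisions reproduces that bias rather than correcting it:

```python
# A minimal sketch with entirely synthetic data: a "model" trained on
# biased historical hiring decisions reproduces the bias, not the job.
import random

random.seed(42)

def past_decision(skill, group):
    """Historical decisions were biased: group B candidates were hired
    only above a much higher skill threshold than group A."""
    threshold = 0.5 if group == "A" else 0.8
    return skill > threshold

# Candidates in both groups draw skill from the same distribution.
candidates = [(random.random(), random.choice(["A", "B"]))
              for _ in range(10_000)]
history = [(skill, group, past_decision(skill, group))
           for skill, group in candidates]

def learned_hire_rate(data, group):
    """A naive learner: the per-group hire rate in the training data.
    Any model with access to the group feature can pick up this pattern."""
    outcomes = [hired for _, g, hired in data if g == group]
    return sum(outcomes) / len(outcomes)

for g in ("A", "B"):
    print(f"learned hire rate for group {g}: {learned_hire_rate(history, g):.2f}")
# Prints roughly 0.50 for A and 0.20 for B, despite identical skill
# distributions: the model has learned the injustice in the data.
```

The learner here is deliberately trivial, but the mechanism is the same one that affects far more sophisticated models: if the training labels encode an injustice, optimizing for those labels optimizes for the injustice.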
In the context of a world-changing technological transition with insufficient laws and imperfect data, the burden falls entirely on developers to make sure these new products change everyone's lives for the better, not for the worse. It is not enough to create software with no inherent bias, because that software is likely to learn bias from the real world. It is necessary instead to code what will amount to sensitivity, equity, fairness and awareness into our programs.
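One concrete, if partial, way to start – a sketch under illustrative assumptions, not a complete methodology – is to measure a model's outcomes per group and flag divergence. The function below computes the demographic parity difference, a standard fairness metric; the example predictions and the 0.1 threshold are arbitrary assumptions chosen only for illustration:

```python
# A sketch of one concrete fairness check: the demographic parity
# difference. The example predictions and the 0.1 threshold below are
# illustrative assumptions, not recommendations for any real domain.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + pred, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# A hypothetical model that approves 80% of group A but 40% of group B.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
if gap > 0.1:  # arbitrary threshold, chosen here only for illustration
    print(f"fairness check failed: parity gap of {gap:.2f}")
```

Checks like this are only a first line of defense: fairness metrics can conflict with one another and with accuracy, so deciding which one to enforce is itself an ethical decision.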
Exactly how to do that is a very good question, and one for which the tech community has no ready answer yet. That is why we need your help to find it. Because if you don’t care, then nobody else will.