AI: The only way is ethics
Spies and tech giants
In February this year, GCHQ released a paper on the ethics of artificial intelligence. In it they set out the case for making ethics a key part of the development of new AI capabilities, to ensure that those capabilities don't replicate existing power imbalances and societal discrimination. At roughly the same time, Google was attracting unfavourable press coverage over the departure of two of its leading AI ethicists, Margaret Mitchell and Timnit Gebru.
But what do researchers into the ethics of AI actually do, and why should it matter to anyone putting mainstream machine learning technology into practice?
Rapid advances in AI capability have led to increased market penetration
The last decade has seen quite remarkable progress in the capabilities of artificial intelligence. The convergence of the reach and scale of big data, the increased processing speed offered by the latest CPUs and GPUs, and the evolution of sophisticated connectionist (neural network) algorithms has led to a surge of AI models capable of image and speech recognition and complex predictive analysis.
This rapid increase in capability has been matched by the rate at which machine learning platforms have moved beyond academic research and become ubiquitous in many industries, whether guiding self-driving cars, assisting in medical diagnoses or supporting law enforcement agencies in identifying wanted criminals.
The range of possibilities for machine learning, and AI in general, is exciting, but as with any new technology, care must be taken to minimise the impact of the mistakes that will inevitably be made as it becomes widely adopted.
Why AI ethics matter
This is where AI ethics comes in. While you may not join Elon Musk in worrying that advances in AI present an existential threat to humanity, you should certainly take care to ensure that the machine learning algorithms you implement within your company do not contain unintended biases.
Unexpected consequences have always been a risk in complex conventional systems, but the risk is greater with machine learning because the decision-making logic is no longer explicit in the code; instead, it is learned from the training and reinforcement data used to build the system.
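To make that concrete, here is a minimal sketch using scikit-learn and entirely synthetic data. Nothing in the code expresses a preference for one group, yet the trained model learns one, because the preference is encoded in the historical labels it is trained on. The feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(0, 1, n)             # a genuine, job-relevant score
gender = rng.integers(0, 2, n)          # 0 or 1, a synthetic sensitive attribute

# Historical labels biased towards gender == 1 candidates.
hired = ((skill + 1.5 * gender + rng.normal(0, 1, n)) > 1).astype(int)

# The code contains no rule that prefers either group...
model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# ...but the trained model does: expect a large positive weight on "gender".
print(dict(zip(["skill", "gender"], model.coef_[0].round(2))))
```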
There are many documented examples of AI systems that have not behaved as anticipated. In 2018 Amazon scrapped an AI recruiting tool when it became apparent that it favoured male candidates – the tool had been trained on ten years of résumé data reflecting a period of male dominance in the tech industry, and as a result it downgraded applications from women. Meanwhile, the safety review into the fatal crash of an autonomous Uber vehicle in the US concluded that a contributing factor was that the training dataset had not included examples of jaywalkers.
So how can you get it right?
With machine learning models becoming ever more complex, issues around privacy, fairness and discrimination have become prominent. It is increasingly important to understand the machine learning models in your business well enough to explain, verify and defend their outcomes.
As you create new AI platforms, you should think about the following in parallel to the technical considerations of the system:
Do you understand the social and ethical implications of the system?
Involve your data scientists, product managers, data engineers and delivery managers in discussing the impacts of the system throughout its development.
Consider the wellbeing of communities that the system will act upon.
Think about the worst-case scenario and how you would detect and prevent it (a minimal monitoring sketch follows this list).
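One way to turn "detect the worst case" into something operational is to monitor live outcomes per group and raise an alert when selection rates diverge sharply. The sketch below is illustrative rather than a real API; the 0.8 threshold echoes the common "four-fifths" rule of thumb and would need tuning to your context.

```python
from collections import defaultdict

def disparate_impact_alert(decisions, min_ratio=0.8):
    """decisions: iterable of (group, outcome) pairs, with outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    if max(rates.values()) > 0 and min(rates.values()) / max(rates.values()) < min_ratio:
        print(f"ALERT: selection rates {rates} breach the {min_ratio} ratio")

# Toy usage: group "b" is selected at half the rate of group "a".
disparate_impact_alert([("a", 1), ("a", 1), ("a", 0),
                        ("b", 1), ("b", 0), ("b", 0)])
```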
Do you understand the data that you're using?
Look at whether it contains sensitive attributes such as gender or race; a first-pass audit sketch follows this list.
Ensure that the data sources are diverse and don’t reinforce existing trends.
Understand the biases of any base datasets you are building upon.
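As a starting point, a simple audit with pandas can reveal whether a sensitive attribute is skewed in the data, or whether historical labels differ sharply across groups. The column names here ("gender", "outcome") are assumptions for illustration; adapt them to your own schema.

```python
import pandas as pd

# Tiny invented dataset standing in for your training data.
df = pd.DataFrame({
    "gender":  ["f", "m", "m", "m", "f", "m", "m", "f"],
    "outcome": [0,    1,   1,   0,   0,   1,   1,   0],
})

# 1. Is the sensitive attribute itself skewed in the data?
print(df["gender"].value_counts(normalize=True))

# 2. Do the historical labels differ sharply across groups?
print(df.groupby("gender")["outcome"].mean())
```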
Will you be able to provide a justification for the results of the system?
Think about how you would articulate the rationale behind the output of the system in a way that can be understood by end users.
Identify any areas of the system or process that may be difficult to explain or justify, and challenge whether they could be engineered in a more transparent way. One common explainability technique is sketched below.
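There is no single recipe for explainability, but a widely used, model-agnostic starting point is permutation importance, which scores each feature by how much shuffling it degrades the model's accuracy. The sketch below uses scikit-learn on synthetic data; the feature names are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                   # three anonymous features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # feature 2 plays no part

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# High scores identify the features driving the output – a concrete basis
# for explaining the system's decisions to end users.
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```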
As a final thought, don't eliminate people from the process entirely – as Mr. Musk reflected after early attempts at automation at Tesla, "Humans are underrated". But when human judgement is combined with the best AI systems, we have the opportunity to do incredible things!