Understanding ethical AI

AI ethics is a set of values, principles, and techniques that guide moral conduct in the development and deployment of artificial intelligence technologies. AI has become a fundamental part of our lives: it transforms businesses, detects fraud, composes art, conducts research, enables translation, and assists judicial systems around the world. The burgeoning rise of AI has also prompted crucial discussions among tech giants such as Alphabet, Amazon, Facebook, IBM, and Microsoft, as well as individuals like Stephen Hawking and Elon Musk, about the nearly boundless landscape of artificial intelligence. Yet one pressing issue that plagues the AI community is the fine line between ethical and biased AI. AI has the potential to harm the privacy of individuals and to expose organizations and individuals to risk. These dangers can be controlled and mitigated through careful analysis of the consequences and ethical implementation choices. Ethical AI is the foundation of successful and impactful AI systems.

The potential risks of adopting AI-driven technologies include:

  1. Data-driven technologies such as artificial intelligence can emulate the preconceptions and biases of their engineers. Algorithms are trained and tested on data samples, and there is a risk that those samples do not adequately represent the population about which the system draws inferences. This opens the door to biased and discriminatory outcomes, flawed from the start, when the designer feeds the data into the system. Microsoft's Tay (@TayandYou), a Twitter chatbot launched to experiment with conversational understanding, began generating racist messages in less than 24 hours. An African American man in the US state of Michigan was wrongfully arrested for shoplifting: the police officers involved had trusted a facial recognition tool that had never learned to distinguish between Black faces, because the images used to train it had mostly been of white faces. These incidents tell us that without a responsible AI framework, implicit biases in data are likely to produce unexpected and undesirable results; a simple representation check of the kind sketched after this list can catch such skew early.
  2. Social media behemoths have come under fire for using AI-powered algorithms to micro-target users and send them tailored content that reinforces their prejudices. This has boosted the popularity of extremism, fake news, and conspiracy theories, all of which have surged since the COVID-19 pandemic. A prime example of the evils of AI-powered systems is the 2021 Capitol riot by QAnon supporters (American far-right conspiracy theorists).
  3. Massive amounts of personal data are collected, processed, and utilized to develop AI technologies, and there have been instances where big data is captured and extracted without the proper consent of the data subject. This poses a serious risk to individual privacy, especially with the rise of deepfakes.
  4. AI-driven job loss threatens to widen socioeconomic inequality, and automation-led job losses have been on the rise. According to a UNICEF report, more than three-quarters of all new digital innovations and patents are produced by just 200 firms. Of the 15 biggest digital platforms we use, 11 are American and the rest Chinese. By 2030, North America and China are expected to capture the lion's share of the trillions of dollars in economic gains that AI is predicted to generate.
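
To make the data-bias point in item 1 concrete, here is a minimal sketch in Python of a pre-training representation check. It assumes demographic labels are available for each training sample; the `representation_report` helper, the group names, and the population shares are illustrative, not drawn from any of the systems mentioned above.

```python
from collections import Counter

def representation_report(sample_groups, population_shares):
    """Compare the demographic make-up of a training set against
    the shares expected in the target population."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "gap": round(observed - expected, 3),
        }
    return report

# Hypothetical face dataset skewed toward one group, mirroring the
# failure mode behind the wrongful-arrest case described above.
train_groups = ["white"] * 800 + ["black"] * 120 + ["asian"] * 80
expected_shares = {"white": 0.60, "black": 0.20, "asian": 0.20}

for group, stats in representation_report(train_groups, expected_shares).items():
    print(group, stats)
```

A large negative gap for any group is a signal to rebalance or augment the data before training, rather than after the model has been deployed.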

Nonetheless, AI as a vehicle of technological progress is beneficial for society. During the COVID-19 pandemic, governments all around the world turned to contact-tracing apps to track the spread of the virus, to telemedicine, and to drugs delivered by drones. The problem is that the nascency of the technology leaves us ignorant of its pitfalls.

The solution to this conundrum is to strike a balance between innovation and risk. One way to control algorithmic inconsistencies is to understand the deviations and mitigate them at each level of development: scrutinize the data fed into the algorithms, train and test models rigorously, and avoid implementation errors; the sketch below shows one simple check of this kind. Governments and companies all around the world must also protect their systems from disinformation feeds, model tampering, and data theft in order to avoid reputational damage, criminal investigation, and diminished public trust.
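
As one concrete form of the "train and test well" advice, the following Python sketch disaggregates a model's accuracy by subgroup so that performance gaps surface before deployment. The function name, the group labels, and the toy predictions are hypothetical, chosen only to show how an aggregate score can hide a badly served group.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy separately for each subgroup instead of
    a single aggregate score."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {group: hits[group] / totals[group] for group in totals}

# Toy data: overall accuracy is 5/8, but the aggregate hides
# that group "b" is served far worse than group "a".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy_by_group(y_true, y_pred, groups))  # {'a': 1.0, 'b': 0.25}
```

If the per-group scores diverge sharply, the remedy belongs at the data and training stages, echoing the mitigation-at-every-level point above.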

When used properly and responsibly, AI can improve our lives drastically.