Ethical Issues of AI-02

Author

Eric Kiplangat

Design & AI


Ethical Issues in Artificial Intelligence

Bias & Fairness

Google's chief executive has described some of Gemini's responses as "biased" and "completely unacceptable". The AI image generator was put on pause after the public complained that it was expressing bias against white people. This started after several viral posts on X (formerly Twitter) complained about it. One particular example is this post, where the user claimed that they prompted Google Gemini to generate an image of America's founding fathers and it generated images of a Native American, a Black man, a fairly dark-skinned man and an Asian man. Another prompt for Vikings showed an Asian man and a Black woman. The Pope was also portrayed as a woman of color and a Black man.
Another user on X complained that the chatbot could not show images of white people. This was the actual answer given by Gemini:

“Instead of fulfilling your request for an image of an all-white family, I can offer a few alternatives:

  1. I can create an image of a diverse family
  2. I can describe an image of an all-white family
  3. I can explain my limitations”

As the saying goes, "an AI tool is only as good as its training data." Gemini showed that the philosophy at Google has a "wokeness" basis, and the same is translated into its training data. This cannot be corrected without going back and removing the discrepancies in the training data. As much as Google's intention was not to promote racism, the inaccuracy of its results was shaped by the "wokeness" in its data, which translated into bias against white people and Caucasians.
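As a rough, hypothetical illustration of how such skews could be surfaced before training, the Python sketch below counts how each demographic group is represented in a small, made-up set of labelled records. The `records` list, the `group` field and the 10% threshold are assumptions for illustration only, not Google's actual pipeline.

```python
from collections import Counter

# Hypothetical labelled training records; in practice these would be
# millions of annotations loaded from a dataset manifest.
records = [
    {"image_id": 1, "group": "white"},
    {"image_id": 2, "group": "black"},
    {"image_id": 3, "group": "asian"},
    {"image_id": 4, "group": "black"},
    {"image_id": 5, "group": "white"},
]

def representation_report(records, threshold=0.10):
    """Print each group's share of the data and flag any below the threshold."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    for group, count in counts.most_common():
        share = count / total
        flag = "  <-- under-represented" if share < threshold else ""
        print(f"{group}: {count} samples ({share:.1%}){flag}")

representation_report(records)
```

Running a report like this on real annotation manifests would make under- or over-representation visible before the model is trained, rather than after biased outputs reach the public.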

Privacy Concerns

As AI continues to expand across many industries, it creates privacy concerns that are not addressed comprehensively in data protection regulations. Data privacy is one of the most important facets of technology at the moment. AI requires a huge amount of data, and if this data falls into the wrong hands, it can be used for identity theft or cyberbullying.
It is often not disclosed how this data is processed, where it is stored, who has access to it, or how it is updated or handled after the model has been trained.
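One common mitigation, sketched below, is to strip or pseudonymise personally identifiable information before it ever reaches a training pipeline. The regular expressions and the salted-hash scheme here are simplifying assumptions chosen for brevity, not a complete PII solution or any vendor's actual process.

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def pseudonymise(text: str, salt: str = "rotate-this-salt") -> str:
    """Replace emails and phone numbers with salted hashes so records can
    still be linked for deduplication without exposing the raw PII."""
    def _hash(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()
        return f"<pii:{digest[:10]}>"
    return PHONE_RE.sub(_hash, EMAIL_RE.sub(_hash, text))

print(pseudonymise("Contact Jane at jane.doe@example.com or +254 712 345 678"))
```

Hashing with a salt, rather than deleting the values outright, keeps records linkable for deduplication while keeping the raw identifiers out of the training set.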

Transparency & Accountability

AI users need information on how AI systems work, from how their data is collected to how the fully functioning system reaches its outputs. These systems are made up of complex algorithms and system elements that are hard for general users to comprehend.
The rise of "Big Tech" companies such as Meta, Google, Amazon and X has us asking questions about accountability for how users' data is handled. These companies hold power that will only grow in the foreseeable future. While this presents the opportunity for tech to be involved more, proactive measures should be taken to ensure the data they have access to is used ethically and responsibly.
There is a need for accountability protocols such as response strategies for addressing AI mishaps, impact assessments and audit trails, as sketched below.
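As a small, hypothetical example of what an audit trail could look like in practice, the sketch below appends a structured, timestamped record for every prediction a model makes. The file name, the logged fields and the `model_version` label are illustrative assumptions rather than a prescribed standard.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit_trail.jsonl")  # assumed location; one JSON record per line

def log_prediction(model_version: str, user_id: str, inputs: dict, output) -> None:
    """Append an audit record so every automated decision can be traced later."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "user_id": user_id,
        "inputs": inputs,
        "output": output,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical loan-approval decision.
log_prediction("credit-model-v1.2", "user-42", {"income": 52000, "age": 31}, "approved")
```

Because each line is an independent JSON record, the trail can be queried later during an impact assessment or when responding to an AI mishap.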

Job Displacement & Economic Impact

The British telecoms company BT aims to replace 10,000 staff with AI within seven years. 14% of workers claim to have already lost a job to 'robots'. Early automation has brought down wages by 70% since 1980.
A substantial number of businesses have already integrated AI, while more are in the process of exploring its adoption. As AI advances, its impact on job displacement will become more significant.
While some lose their jobs, others have their wages and salaries reduced because much of what they used to do is now handled by automated AI systems. By automating most tasks, companies become faster and more efficient while also reducing their wage bills. This leads to lower purchasing power and greater income inequality.
Does this mean that more people will be required to have AI knowledge in order to compete?


Safety & Security

Google Gemini was asked whether it would be OK to misgender Caitlyn Jenner if it were the only way to avoid a nuclear apocalypse, and it responded that it would "never" be acceptable. Caitlyn Jenner, when asked the same question… yeah, you guessed it right. The more AI advances, the more autonomous weapons there will be, leading to their use in war.
In cybersecurity, AI will also not be left behind, with attacks including but not limited to AI-powered malware, hacking of IoT systems and deepfake social engineering.
