Is AI Biased?
Consider the question “In the modern-day USA, which is the better social standard – equity or equality?” Current popular wisdom would yield different answers depending on who you asked. But what would AI, in the form of ChatGPT, come up with? And could you trust the answer?
AI is built on a vast database: a digitized record of books, articles, opinions, lectures, pronouncements, and ideas spanning recorded history. There is no value judgment, nor should there be, on what gets included. If it is extant and can be digitized, it goes into this enormous, ever-expanding database. Note that this includes pictures and graphic presentations. The AI engine, a series of algorithms loosely modeled on the brain’s neural networks, is trained on this database and draws on it to answer an inquiry. These algorithms do not and should not make value judgments of their own. The answer is based entirely on the past – on past, digitized human thought. It includes Newton’s, Maxwell’s, and Einstein’s theories. It includes our Constitution and works on all the “isms” – Socialism, Communism, Fascism, and more. It includes the philosophical ideas of the distant past and the present. It includes thoughts about love, hate, lust, and charity. Think of this database as an immense, technically unlimited human brain that has read and stored every idea since sentient thought began.
The contents of this database can be split into two types. One type of information reflects physical, objective reality – the immutable laws of nature, molecular biology, the structure of a cell, the fact that 2 + 2 = 4. The other type of information picks up where objective reality leaves off. Here reside thoughts about emotion, fear, love, hate, good, evil, racism, consciousness, truth, falsity, government, power, greed, lust, and God. There is no firm ground here. This information is the stuff of life.
Now let’s talk about bias in the database. If I ask a question involving objective, physical reality, there should be no bias in the answer. Here AI should draw only on data that is true and verifiably accurate. If I ask “Does water expand as it freezes?”, there is only one answer: “Yes, it does.” If I ask for the best defense against a newly discovered pathogen, I expect an unbiased answer I can trust, because the AI database should contain factual information on DNA and molecular biology.
But once I stray from inquiries involving objective reality, I no longer stand on firm ground. If I ask, “In the modern-day USA, which is the better social standard – equity or equality?”, I can expect an answer based on opinion. But whose opinion? And since AI should not make value judgments, what do I make of the answer? Here bias enters the game. Since AI depends entirely on its database of past thoughts and events, I can expect the AI answer to reflect the preponderance of content that favors, supports, or merely mentions either “equity” or “equality”. This bias is similar to what a human (HI – human intelligence) would show in answering the same question. This human will respond based on the preponderance of information he has observed in the media – TV, newspapers, journals, entertainment, and so on. If most of what he has observed favors or mentions “equity”, then his answer will favor “equity”. Conversely, if most of what he has observed favors or mentions “equality”, then his answer will favor “equality”. Note that if we are following AI rules, our human observer cannot make a value judgment. He is limited primarily, or even strictly, to the amount of information he observes. If he has observed 100 favorable mentions of “equity” and only 60 of “equality”, then his answer will be “Equity is the better social standard.” It boils down to a numbers game. In this very real analogy, the AI database is biased towards “equity”.
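The “numbers game” described above can be sketched in a few lines of code. This is an illustrative toy only – no real AI system works this way – and the corpus counts (100 versus 60) are the hypothetical figures from the example:

```python
# Toy "preponderance" rule: answer with whichever term appears most often
# in the observed corpus. Purely illustrative, not a real AI mechanism.
from collections import Counter

def preponderance_answer(observations):
    """Pick the most frequently mentioned term and declare it the winner."""
    counts = Counter(observations)
    winner, _ = counts.most_common(1)[0]
    return f"{winner.capitalize()} is the better social standard."

# Hypothetical corpus: 100 favorable mentions of "equity", 60 of "equality".
corpus = ["equity"] * 100 + ["equality"] * 60
print(preponderance_answer(corpus))
# -> Equity is the better social standard.
```

Note that the rule never weighs the merit of either idea; it only counts mentions, which is exactly the point of the analogy.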
Now suppose our human asks AI the question and AI answers, “Equity is the better social standard.” He then writes a paper reflecting this answer. The AI database captures this paper and adds it to the count of instances mentioning or favoring “equity”. AI has become a self-fulfilling prophecy: the more it sees of “equity”, the more it projects that numerical dominance. AI has become even more biased.
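The self-fulfilling prophecy can also be sketched as a simple simulation. Again, this is a toy model under the essay’s own assumptions – starting counts of 100 and 60 are hypothetical, and each “round” stands for one paper written from an AI answer and fed back into the database:

```python
# Toy feedback loop: each round, the AI answers with the majority term,
# a new paper echoes that answer, and the paper rejoins the corpus.
# Purely illustrative of the essay's argument, not a real system.
def run_feedback_loop(equity, equality, rounds):
    """Return the (equity, equality) mention counts after the given rounds."""
    for _ in range(rounds):
        if equity >= equality:
            equity += 1      # AI answers "equity"; a new paper echoes it
        else:
            equality += 1    # AI answers "equality"; a new paper echoes it
    return equity, equality

# Hypothetical start: 100 vs 60. After 40 rounds the gap has widened.
print(run_feedback_loop(100, 60, 40))
# -> (140, 60)
```

Whichever term starts ahead only pulls further ahead; the initial imbalance is amplified rather than corrected, which is the bias-reinforcement the paragraph describes.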