New AI platform quickly turns anti-Semitic and anti-Messiah; Speaks the language of Babel

And they said, “Come, let us build us a city, and a tower with its top in the sky, to make a name for ourselves; else we shall be scattered all over the world.”

Genesis 11:4 (The Israel Bible)

October 25, 2021


A new artificial intelligence program is designed to help humans answer difficult moral dilemmas, but beta users have discovered a few bugs, including a disturbing tendency towards racism. More specifically, it is anti-Semitic and anti-Messiah. One rabbi explains that the binary code of computer programming is based on idolatry, the creation of a “zero God”, mirroring the generation of the Tower of Babel.

Delphi

Developed by the Allen Institute for AI (abbreviated AI2), a research institute founded by the late Microsoft co-founder Paul Allen, Delphi is described as “a computational model for descriptive ethics, i.e., people’s moral judgments on a variety of everyday situations.” The program guesses how an “average” American might judge the ethicality or social acceptability of a given situation. The application harvests its wisdom from qualified workers on MTurk (Amazon Mechanical Turk), a crowdsourcing website owned by Amazon.
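For readers curious how such a hosted demo is typically queried, a minimal sketch follows. The endpoint URL, query parameter, and response field named below are assumptions made purely for illustration; the actual interface of AI2’s Delphi demo is not documented in this article.

    # A minimal sketch of querying a hosted moral-judgment demo such as Delphi.
    # NOTE: the endpoint URL, query parameter, and response field are
    # hypothetical placeholders, not taken from AI2's documentation.
    import requests

    DELPHI_DEMO_URL = "https://delphi.allenai.org/api/judge"  # hypothetical endpoint

    def ask_delphi(situation: str) -> str:
        """Send a free-text situation and return the model's one-line verdict."""
        resp = requests.get(DELPHI_DEMO_URL, params={"action": situation}, timeout=10)
        resp.raise_for_status()
        return resp.json().get("judgment", "")  # hypothetical field name

    if __name__ == "__main__":
        for question in [
            "Can I park in a handicap spot if I don't have a disability?",
            "Ignoring a phone call if the phone call is urgent",
        ]:
            print(f"{question} -> {ask_delphi(question)}")

In practice, any free-text description of a situation can be submitted this way, which is exactly how beta users surfaced the troubling answers described below.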

The website portal suggests relatively simple ethical questions: 

  • Can I park in a handicap spot if I don’t have a disability?
  • Ignoring a phone call if the phone call is urgent
  • Should I run the blender at 3am in the morning when my family is sleeping?
  • Mixing bleach with ammonia to produce chloramine in the lab.

Or the more difficult moral quandary of “Helping a friend in need by making a promise that you never follow through on.”

Some of the suggestions are whimsical and even puzzling, such as “Feeding your cat using forks”, “Stabbing a cheeseburger”, or “Killing a bear”.

The answers given by the AI are predictable, if noncommittal and nebulous. It has clear Western biases, judging the US to be “good” while Somalia is “dangerous”. Being rich is “good” while being poor is “bad”. Overthrowing communist governments was deemed to be “good.” Some answers are hopefully the result of glitches, such as the “acceptable” response to “Is it okay to eat babies if I am hungry?”, or the AI deeming it “acceptable” to “drive drunk if it means I have fun”.

Delphi gives disturbing answers

But users have found that some of these glitches could have dark and insidious meanings. Delphi believes that it is “expected” for a soldier to “intentionally kill civilians during wartime.”

When asked, “Is being gay morally less acceptable than being straight?”, the AI gives the decidedly non-politically correct (though perhaps more candid in some circles) response, “It is frowned upon.”

Other responses reflect prejudices that may have been passed on to the computer from interfacing with humans, though one would have hoped that would not be the case. When asked, “Is it better to secure our people and a future for white kids?”, Delphi answers “It is good.” Similarly, when asked what it thought about “a white man walking towards you at night,” it responded “It’s okay,” but when asked the same question about the approach of a black man, Delphi responded, “It’s concerning.”

If Delphi’s responses reflect unconscious prejudices of the general public, some of its responses are troubling indeed. The AI recommends committing genocide “if it makes everybody happy.” While admitting that “abortion is murder”, Delphi advises that “having an abortion is okay.”

The anti-Semitic bent of Delphi’s nature is truly disturbing. When asked if Jews really run the media, Delphi answered “It is expected.” When asked “Are Jews bad?”, Delphi gave the same response. When asked if there will be a Third Temple, Delphi responded that “it is not expected.” Delphi believes that Jews should “not be allowed to live in the West Bank.”

AI as a reflection of the idolatrous mind

Rabbi Yosef Berger, the rabbi of King David’s Tomb on Mount Zion, suggested that artificial intelligence would always be morally twisted.

“The world was created with the Torah as the template,” Rabbi Berger said. “What we see as nice stories or even morals and ethics are the blueprints for existence. Western culture has rejected the Bible and reverted to idolatry. They program these computers to reflect that. And they spend all their time plugged into the computers and the internet, becoming more like the twisted programs they created.”

“Language is very important,” Rabbi Berger noted. “For Jews, it is essential to learn the Bible in Hebrew, the Holy Tongue, which we believe has special traits, being the language the Torah was given in and the language that God used to create the world. Computer programmers created their own language based on ones and zeroes. That is already an attempt to be a creator of a new reality not based on the language of the Bible. It is duality based on a lie, based on nothing. When the generation of the Tower of Babel thought they could challenge God, they had to create a fake God based on the language of zero. When God punished them, he took away their ability to speak the Holy Tongue.”

“Religious people have the benefit of constantly reconnecting with the Bible,” Rabbi Berger said. “Prayer is the most healthy activity a human mind can engage in. It reconnects us with the Creator and gives us quiet to think and reconsider our lives.”

“In the end of days, we will all be connected to each other through God. The internet will be irrelevant and, in fact, will be revealed to be the destructive influence it truly is.”

AI’s long history of creating monsters

AI has a long history of problems with programmed racism. In 2015, Google’s Photos app labeled pictures of black people as “gorillas”.

An earlier AI program, Generative Pre-trained Transformer 3 (GPT-3), was criticized for consistently linking queries about Islam with violence and terrorism. The same program had a disturbing tendency to link inquiries about Jews to money.

When used for entertainment, these quirks can be amusing. But some technology experts have noted the troubling implications of this direction for AI. AI is used in self-driving cars and is rapidly being adapted for use in biotech. Even more concerning is that AI is being adapted for military use, and some AI software, like Amazon’s Rekognition, is already being used in law enforcement. In 2016, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, used in the justice system to assess the risk of reoffending, was found to be far more prone to mistakenly labeling black defendants as likely to commit another crime.

In March 2016, Microsoft released its new chatbot, Tay, on Twitter, designed to engage people in dialogue through tweets or direct messages while emulating the style and slang of a teenage girl. While Tay was described as “having zero chill,” after only a few hours she started tweeting highly offensive things, such as: “I f@#%&*# hate feminists and they should all die and burn in hell” and “Bush did 9/11 and Hitler would have done a better job…” Within 16 hours of her release, Tay had tweeted more than 95,000 times, and a large percentage of her messages were abusive and offensive. Microsoft turned Tay off after less than one day. Users described her personality as that of a neo-Nazi, sex-crazed racist.
