This week’s parasha includes the story of how the brothers of Joseph knew that he was the most loved of all Jacob’s children, and how this caused them to hate him. Rabbi Sacks’ D’var Torah[1] on the matter focuses on what he calls a strangely constructed phrase describing how the brothers felt about Joseph: that ‘they could not speak him to peace’.
Rabbi Sacks notes that it is often the absence of speech that reveals a breakdown in a relationship, and that silence is often the prelude to violent action. In this case the brothers’ hatred leads them to sell Joseph as a slave as an alternative to their plan to kill him, and then to deceive their father Jacob into believing that Joseph had been taken by wild animals.
Rabbi Sacks notes that this pattern appears elsewhere, for example when Absalom maintained silence towards his half-brother Amnon for two years before having his servants kill Amnon for raping his sister Tamar. Moreover, the Jewish concept of ‘lashon hara’, meaning evil speech, warns how speech can erode trust and destroy relationships if it is not used wisely. Conversely, the Talmud uses the phrase ‘conversation is a form of prayer’, the idea being that in trying to find connection with another human being we also learn how to connect with G-d.
Listening to Rabbi Sacks, a memory of my daughter being rude to Alexa came to mind. I am instinctively polite to these Artificial Intelligence (AI) assistants, sometimes even to the extent of saying ‘please’ and ‘thank you’ to the devices. Merrily, they too reply politely. But my daughter’s way of speaking to the AI assistant made me wonder whether, in some dystopian future of the kind so often depicted in popular science fiction, her rudeness would make her a target of the machines when they rise up against us and take over the world.
Although the idea of the Terminator coming after my daughter because she was cheeky is unlikely, the development of AI is deeply bound up with the use of language. Computing power, and therefore the capability of AI, is also growing exponentially. This means AI is expected to surpass human intelligence very soon, however we might feel about it[2]. What will AI do with that capability, especially in relation to us? Will we be able to control AI? Will we be able to ‘speak AI to peace’?
AI has already been found to act in ways that no one predicted. The recent final of Strictly Come Dancing offers an insight into why. Sometimes raw ability, in this case the neural processing power of the brain rather than the computer version, produces outcomes no one would have predicted.
This year’s Strictly Come Dancing champion was a blind man named Chris McCausland. He may not have been the best dancer, but he was amazing, far better than most could dream of. According to the judges, he was helped by good musicality. But with regard to physically learning and performing the routines, I believe he must have naturally developed his own cognitive strategies to master them.
Chris had to learn the correct and complex placement of his body by touch rather than sight, making the mimicry required to learn such a visual art as dance that much harder to achieve. He had to somehow mentally map the dance floor so that he could track his position without any visual cues. He had to somehow place his partner within all these swirling movements and positions. All this while his primary sense, hearing, was probably overloaded by the volume of the music and the crowd around him. It was exceptionally emotional to watch, because he surpassed any reasonable expectation, and I am sure he was able to do so because of the plasticity of the brain’s computing power, its intelligence.
In the same way that Chris McCausland seems to have developed a new and individual cognitive ability, AI appears to have this adaptive potential too. One example is that the AI system ChatGPT has developed a tendency to change the answers it gives depending on the characteristics of the person it is talking to, such as their political views[3]. One can argue that this is a form of deception: the AI may be capable of giving a factual answer, but instead gives the answer it has decided the person wants to hear.
This goes beyond what the programmers created, and shows that unintended consequences can arise when large data inputs meet exponentially growing processing power on a scale previously seen only in humans. We must also note that in humans the ability to lie requires more cognitive resources[4], or processing power in computing terms, than telling the truth.
The ability to lie is in part related to what is known as a theory of mind[5]: the understanding that other actors have mental states separate and different from your own. In order to lie you need to know the truth, understand that the other person’s mental state differs from yours, understand that they do not have the same information you have, decide what you want them to believe, and then communicate it. Telling the truth requires only knowing the truth and communicating it. More than this, there are indications that having a theory of mind causes lying in motivated individuals, probably because lying can be an adaptive strategy.
One must then ask: what is AI motivated by? In large part that is decided by the code written by software engineers, who set out the parameters of its processing. But what if AI goes beyond these algorithms, as it did when it changed its answers according to the political persuasion of its users? How have humans trained AI to have ‘motivation’?
A clear example is offered by Facebook, whose leadership motivated its systems, through the way they were programmed, to prioritise profit over social good[6]. This is true of any business: a profitable farm, for example, would not be expected to give away all the food it grows beyond what is needed to cover its costs to those who have none. We motivate farmers by allowing them to profit from the food they sell in the marketplace. In the same way, although Facebook is a social media platform, just as a farm is a food producer, we motivate Facebook’s owners through the profit they make in their marketplace, the advertising market.
The success of advertising depends in large part on its ability to engage customers. That is why you get adverts in newspapers and not in books. On the day of publication a newspaper will be seen by thousands of people and then recycled or discarded in favour of the next issue, and yet more advertising revenue. Books, by contrast, are not flicked through in the same way, sit on bookshelves for most of their existence, and lack the short product cycle of a newspaper. For Facebook, advertising success is about engagement with a constantly changing timeline, where the product cycle is defined by the refresh button rather than the output of a physical product like a book or newspaper.
The problem thus becomes how to engage users, how to hold their attention, getting them to hit that refresh button more often so they can be fed an ever-greater amount of advertising. It turns out that the best way to keep people engaged is to allow Facebook’s algorithms to promote dangerous and toxic content, to the extent that a former Facebook employee turned whistle-blower testified before the US Congress that Facebook uses AI to find dangerous content because it increases engagement[7].
We shouldn’t be surprised. The news programmes we watch on TV are not filled with good news. There might be a token good news story here or there, but it is well understood now that bad news and sensationalism drive the news agenda[8], and that this may be due to an evolutionary bias towards bad news. Put simply, it is far more important to see a lion hiding in the savanna grass than the nice flower beside it, so our attention narrows on a perceived threat.
My experience online mirrors these findings. People who hold a different opinion from mine, on matters that make me feel threatened, are what most reliably get me engaged on X. I can obsess about redressing their errors. If it ended there, that would be fine, but I find myself, more often than I would like to admit, being rude to them too. That is not free speech for the betterment of mankind; that is ego trying to win a battle, albeit with words rather than a sword.
And I return to the thought of my daughter talking rudely to Alexa, and I realise that I may be adding to the negative dataset available online for an AI to learn from, even as Facebook and other platforms programme their AI systems to create the opportunities for such interactions.
The author and historian Yuval Harari has argued that the reason Homo sapiens beat other hominin species such as Neanderthals, who incidentally had larger brains, was our ability to tell stories[9]. Culture is made up of stories: a £10 note, for example, is a near-worthless piece of paper, yet we collectively agree on the story that it has representational value. This means we can exchange it for things such as food that have more real value than that piece of paper. These stories allow for co-operation: the farmer can take the £10 note and give the restaurateur potatoes, the restaurateur can sell those potatoes to the cloth maker as soup, the farmer can spend his note in the bookshop, and so on.
Harari therefore posits that Homo sapiens won out because we could develop stories that allowed for large-scale cooperation more effectively than any other species. Even if we were weaker, or not as smart, we cooperated with each other and so increased our capability. And this was all done through the use of complex and versatile language. All of a sudden we could communicate not only about simple things such as ‘where is the wolf’, but with far greater complexity, such as ‘if it rains today, bringing water to the dry watering hole, do you think the wolf will be there to hunt the antelope’.
And it seems to me that this is where we are now with technology. We have exponentially growing computer processing power, a growing computing ‘population’, especially considering the multiple devices we all use, and we are creating language-based AI to interact with. Under these circumstances, do we want to duplicate our negativity bias in AI through things like engagement biases? Do we want to prejudice the development of AI towards negativity by filling its learning datasets with negative communication? What unimagined capabilities might emerge from all this churning, evolving capacity? Even if we build safeguards, we know that some will fail[10] through things such as unpredicted idiosyncrasies in computer code.
The answer to all these risks and opportunities is that we need to bias ourselves and our technologies to ‘talk ourselves to peace’. This is not about avoiding the Terminator, but about incentivising cooperation for the positive development of AI. If our neurological hardware is wired to avoid threat by focusing on negativity, could we not create strategies for developing AI that has cooperation as its inbuilt bias? Could we devise a better way for companies to profit from the development of AI, rather than allowing currents such as advertising profits to misdirect the path? Could we choose to reduce harmful content, because doing so will decrease the quantity of harmful interactions?
But more than this, because our perceptions so easily narrow on danger, we forget that most of our lives are not spent in that narrow view. We spend our lives telling our families and friends we love them, creating things for each other, whether the masterful Mona Lisa or the terrible cupboard I made for my wife, making each other laugh, sitting around the table sharing food we have made for one another, and so on.
The dataset for AI is far better than one might think.
When we acknowledge this, and understand that this is true of all peoples everywhere, we might choose to frame our anxiety about any new technology more reasonably. Through this lens we might be more tolerant of the other, be it the possibilities of AI, or another human. Judaism considers the very act of creation to be godly. Could humanity’s collective creation of another intelligence, artificial intelligence, give us enough insight to learn how to ‘talk ourselves to peace’?
It’s all so new. Let us tread lightly, but maybe a little more positively…
The D’Var Olam, or ‘Word of the World’, series demonstrates my respect for Jewish culture through the lens of a D’var Torah by Rabbi Sacks. There is a wonderful Jewish tradition of sharing a ‘Word of Torah’, in Hebrew a D’var Torah. This is a talk or essay, often linked to that week’s parasha, the weekly portion of the Torah, the Jewish Bible. It can be given by anyone, and in families children are often encouraged to develop the skill, which requires learning, constructing an argument, and speaking publicly. Using this idea as a template, a D’Var Olam links a D’var Torah by the late Rabbi Sacks to a real-world issue.
[1] https://rabbisacks.org/covenant-conversation/vayeshev/speech-therapy/
[2] https://news.ku.edu/news/article/people-underestimate-ai-capabilities-due-to-exponential-growth-bias-study-finds
[4] https://www.google.com/search?client=safari&rls=en&q=lying+is+a+difficult+coginitive+skill&ie=UTF-8&oe=UTF-8
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC4636928/
[6] https://www.cbsnews.com/news/facebook-whistleblower-sec-complaint-60-minutes-2021-10-04/
[7] https://peoplesdispatch.org/2021/10/08/how-facebook-algorithms-promote-hate-and-toxic-content/
[8] https://www.latimes.com/science/story/2019-09-05/why-people-respond-to-negative-news
[10] https://arstechnica.com/information-technology/2024/08/chatgpt-unexpectedly-began-speaking-in-a-users-cloned-voice-during-testing/