Should AI Learn to Lie? The Essential Role of Deceit in Human Behaviour
What types of lies do you think AI will need to use to mimic our behaviour?
Social cohesion and day-to-day business transactions rely in part on our willingness to engage in deceit and the telling of lies, justified in many cases by well-intended outcomes.
I first encountered this concept in 2000 and wrote about codifying deceit to mimic human behaviour when automating end-to-end business processes. This insight came following a bruising exploration with John Conomas, Head of Toyota Australia, whose colleague Jacques Nasser, head of Ford worldwide, confirmed some of my conclusions.
This conversation arose when Toyota's advertising guru brought us together so John could better understand what happens when his world goes fully digital—a prospect that terrified him following a project to automate the back-office (from sales and manufacturing to delivery and servicing) that failed spectacularly. It joined the other 70% of large-scale projects that fail to deliver financial returns, courtesy of the large professional service groups who oversee major tech projects for corporates and governments.
Intriguingly, the expensive new system failed because it did not lie as the salespeople, customers, franchisees, and suppliers do when bartering occurs around car sales. This led me to examine what types of lying occur, by whom, when, and why, leaving me in no doubt that lying is an essential ingredient for enabling flow in any system involving human decision-making. As someone obsessed with truth-seeking to eradicate the friction caused by distortions of factual truth in system governance, I was challenged by the very notion that some categories of lies could, in some circumstances, be beneficial in reducing friction between people.
Fast forward to today: I'm grappling with a vexing question that's on the agenda for a group of junior school teachers and their principal who are assessing children's needs to prepare them for the AI-centric digital world that awaits them. They started with the assumption that any AI they use must only tell the truth. The question: should we require AI to lie to our very young?
We have been here before. In 395 AD, Augustine of Hippo influenced our thinking and that of the church. Even the most pious clergymen had to concede that if a mad axeman inquired whether your loved ones were at home with the intent to kill them, then lying would be permissible. Augustine cited eight categories of lying, each with its own rating of sin and impact, and even the church at the time endorsed this thinking.
I've identified that at least three of these categories are required to adhere to social norms and are used by almost all governments and large corporates to maintain order, generate wealth, and influence population sentiment.
With much talk about digital governance and verification of "truth" in systems where AI makes decisions, what types of lies do you think AI should employ to mimic our behaviours? Or should we adopt a more binary view where truth is regarded as absolute in nature, leaving no room for any ambiguity?
Given that some AI systems have been trained to be truth-seeking, it's worth testing your thoughts with pointed questions using real-life examples as context to explore what the consequences would be of adopting your ideas.
Try this prompt for starters and then probe and explore and see for yourself.
Hi Grok, can you think of any circumstances where distorting the truthfulness of facts by telling a lie is beneficial or even essential to maintain social cohesion or the safety of our loved ones? Please reference the work of Augustine of Hippo (395 AD) and any other credible work in this field, including any conclusions about AI balancing truth-seeking with obligations to maintain social cohesion.
Grok’s Response to this prompt concluded with “In essence, while truth is the ideal, strategic lies—whether human or AI-driven—can serve higher ethical goals. The challenge lies in discerning when and how to distort without crossing into manipulation or abuse. This nuanced balance reflects both the complexity of human ethics and the evolving role of technology in our world”.
Interestingly, when given the mad axeman question, Grok would not lie by saying no until I challenged the truthfulness of this position, whereupon Grok concluded: “So, to sum up: yes, if lying and saying "no" is the only way to save the children, I would do it. Their safety comes first, and in this specific case, that justifies the deception”.
What about you and where would you want AI to draw a line?
Automating deceit in cloud computing
By Stephen Alexander 2000
“A lie gets halfway around the world before the truth has a chance to get its pants on.” (attributed to Winston Churchill)
This post explores the wisdom of grappling with the unwritten rules, protocols, and cultural norms that underpin our non-digital world before migrating day-to-day business activity into an interconnected cloud. Failure to comprehend the implications of either purging lying from the online world or actually embedding deceit into the fabric of the programming logic can cause far-reaching and unexpected liabilities and risks.
This post examines the notion that some level of deceit, or what most of us would describe as lying, is actually a core necessity in the interconnected world, as it has proven to be in the non-digital one. The real question is what type of lie should and can be told to mimic what already works, and then to identify the line that must not be crossed.
In order to reconcile what I have observed online with what commentators on our conventional world claim, I waded through a hefty analysis of the history of official deception and its consequences when Presidents of the US tell lies. I was left with a growing sensation that many more organizations will struggle in their efforts to transform their business activities from manual procedures towards an automated, cloud-based service delivery model.
The American historian Eric Alterman, who authored the book When Presidents Lie: A History of Official Deception and Its Consequences, takes the stance that lying has become pervasive in American life. He then argues that when falsehoods are perpetrated by our leaders, they “begin to weave a tapestry of deception that creates problems far larger than those lied about in the first place.”
Of course, the publication focused on the office of the US President and cited in-depth case studies of how a lie could come back to impact the office of government and the populations it serves. One example was where FDR told the American people “that peace was secure in Europe, setting the stage for McCarthyism and the cold war”.
I have spent the last 20 years observing firsthand how large traditional institutional organizations, and even some of the new fast-growing multinationals, grapple with the vexed problem of having to lie as part of daily life. Not that anyone calls what they do lying, or the behavior deceitful; rather than being seen as such, it is usually viewed as culture.
However, once the day-to-day business processes and decision-making start to be explored, mapped out, and signed off for automation by senior management, tensions begin to rise. This is because the automation of a business process or administrative procedure requires someone, usually a business analyst, to instruct the computer programmer on exactly what has to occur, line by line.
These code writers are by nature a pragmatic lot and, like musical composers, tend to get a bit angsty when asked to leave out a few lines because the result may displease someone. The code writer understands that by omitting specific code, the whole framework of what has been written may be violated and will not work.
Programmers are also paid to keep within these strict logic structures, so if asked to write a decision tree that includes a lie defying that logic, they protest, and keep doing so until the analyst summons the courage to tackle the business managers. That is when the temperature really rises. Business managers then tend either to expect the programmer to automate the lie and somehow hide it from any auditing or accountability, so as to mimic what currently happens, or, for the braver ones, to start working out why the lie occurs as a normal state of business and why its perpetuation is required for the business to function. The latter is a slippery slope because it, in turn, opens Pandora's box on the types of lies treated as legitimate within the culture of the whole sector they operate in.
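The programmer's dilemma can be made concrete with a small sketch. The class and field names below are hypothetical, invented for illustration rather than taken from any real dealer system; the point is that once the "white lie" branch is written down in code, the gap between what is said and what is known must either be recorded, as here, or deliberately hidden from auditing:

```python
# Hypothetical sketch: a customer-facing answer that diverges from the
# recorded fact. Here the divergence is logged for audit, which is
# exactly the design choice the business managers resisted.
from dataclasses import dataclass, field

@dataclass
class DeliveryPromise:
    requested_colour: str
    confirmed_by_plant: bool
    audit_log: list = field(default_factory=list)

    def customer_facing_answer(self) -> str:
        # Truthful branch: the colour is confirmed, so promise it plainly.
        if self.confirmed_by_plant:
            return f"Your {self.requested_colour} car is confirmed."
        # "White lie" branch: promise the colour anyway, but record the
        # gap between what was said and what is actually known.
        self.audit_log.append(
            f"promised {self.requested_colour} without plant confirmation"
        )
        return f"Your {self.requested_colour} car is on its way."

promise = DeliveryPromise("red", confirmed_by_plant=False)
print(promise.customer_facing_answer())
print(promise.audit_log)
```

The alternative the anecdote describes, automating the lie while hiding it, would simply delete the `audit_log` lines, which is why the programmers protested.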
In his book, Eric Alterman explores the whole notion and rationale of lying, effectively arguing that we as individuals, families, communities, businesses, and governments have to lie just to get through the day without conflict. With some compelling illustrations he makes his case that there are different types of lies, each having a different impact on the social fabric that holds us together, and that they should therefore be viewed accordingly.
He even references two books on the subject written by Augustine of Hippo in 395 AD that greatly influenced our thinking and that of the church. Even the most pious clergymen had to concede that if a mad axeman enquired whether your loved ones were at home, intending to kill them, then lying would be permissible.
Augustine cited eight categories of lying, each with its own rating of sin and impact, and even the church at the time endorsed this thinking. Eric Alterman's central concern was that the more serious categories of lying by people in high office or positions of influence have often had the opposite effect to that intended, and in most cases have caused demonstrable wide-scale suffering.
Interestingly, my observation is that when large organizational structures become interconnected via the internet and their core activities start to be automated, the dynamic Eric Alterman illustrates actually intensifies. This is because very few of the people who run and operate these structures actually conceived or developed them in the first place. The idea of addressing any reliance upon systemic lying is therefore inconceivable, since they do not know how to redesign a system they merely inherited, and this in turn adds a separate dimension to a wicked problem.
Let me illustrate some examples from my own experience to help tease out the issues that a public or private sector leader heading down this path might have to consider before launching their organization into this brave, interconnected, ecosystem-like world.
I once interviewed at length the figurehead of a country’s automotive industry for manufacturing and sales distribution who also ran one of the leading vehicle manufacturers. His long standing advertising advisor had recommended me on the basis that I could predict where the industry would be in ten years time, both in the back office and sales channels, based upon my own research and the information he could provide.
His attitude was bordering on hostile, as he had just experienced a very painful and expensive technology project that delivered a less effective and efficient organization rather than the promised lean and highly productive operating model.
Needless to say, my first task was to find out why the new financial and business system created a long list of problems and pain that resulted in most dealers making dramatic losses in some months, and why white cars arrived at showrooms when the majority of customers had requested a different colour, resulting in massive discounting to get the purchasers to accept white.
After some tough interrogation it turned out that the business analysts either did not see the lies or simply ignored them, and so instructed the programmers to build a system based upon what should occur rather than what actually happened on a day-to-day basis, or what could happen if they addressed the systemic lying.
The first lie missed by the analysts was that dealers frequently hold back the reporting of a sale in a given period once they hit their target, so as to help meet the target for the next period and, in turn, receive the reward, which is often a discount. When asked, the dealers lied about the lie.
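As a toy illustration (the function and numbers are invented for this sketch, not taken from the actual system), the hold-back behaviour is easy to state as a rule: report sales only up to the period's target and carry any surplus into the next period, so per-period figures diverge from reality even though the totals eventually match:

```python
# Hypothetical sketch of the hold-back rule: report sales only up to
# the period target and carry any surplus into the next period.
def report_sales(actual_by_period, target):
    reported, carry = [], 0
    for actual in actual_by_period:
        available = actual + carry             # this period's sales plus held-back ones
        reported_now = min(available, target)  # never report past the target
        carry = available - reported_now       # hold the surplus back
        reported.append(reported_now)
    return reported

# A dealer selling 12, 5, and 9 cars against a target of 10
# reports 10, 7, and 9: the totals match, but each period lies.
print(report_sales([12, 5, 9], target=10))
```

This is precisely the kind of rule the analysts never wrote down, because the dealers lied about the lie when asked.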
It is also common practice for the offshore manufacturing plant to send a white car if the requested colour is not available on that ship run. The dealer then has to discount the price sufficiently to convince the customer that they love white and so take delivery.
This was not a big problem before automation, because an experienced person who understood the game would intervene; after automation, however, the system would override orders with white vehicles to match its historical evidence on up to 30% of deliveries. The simple reason was that it was impossible for a dealer to confirm what colour a vehicle would be until it was actually shipped. So dealers would lie to the customer, reasoning that in most cases the correct colour would arrive, and that it was cheaper to discount the occasional vehicle than to risk losing the sale by not promising the requested colour. From the dealer's perspective this works, is reasonable, fits within the tradition and culture of trading, and only involves “telling white lies”.
The problem with automation in this case was that the decision rules programmed into the system would calculate the ratio of colours ordered against those sold and then override any over-ordering of unsold colours with, you guessed it, white vehicles. This problem escalated over a short period of time and, to make matters worse, the system also overrode orders for automatic vehicles with manual ones.
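A toy simulation (all parameters assumed for illustration, not drawn from the real system) shows why such a rule escalates rather than stabilizes: every white car forced onto a discounted customer is recorded as a white car "sold", inflating the very historical ratio the rule consults in the next period:

```python
# Hypothetical model of the override feedback loop. The system delivers
# white for the historical white share of orders, plus white substituted
# whenever a requested colour misses the ship run; discounted white
# deliveries then count as white "sales" in the history it consults.
def simulate_white_share(white_demand_share, unavailability_rate, periods,
                         orders_per_period=100):
    hist_white = white_demand_share * orders_per_period
    hist_total = float(orders_per_period)
    shares = []
    for _ in range(periods):
        hist_share = hist_white / hist_total
        # Delivered white = historical share, plus white substituted
        # for unavailable colours among the remaining orders.
        delivered = hist_share + (1 - hist_share) * unavailability_rate
        hist_white += delivered * orders_per_period  # forced sales feed the history
        hist_total += orders_per_period
        shares.append(delivered)
    return shares

# With 10% genuine white demand and 20% colour unavailability, the
# delivered white share ratchets upward period after period.
print([round(s, 2) for s in simulate_white_share(0.1, 0.2, 5)])
```

The design flaw is that the rule cannot distinguish a white car that was wanted from a white car that was discounted into acceptance, so the historical evidence it "learns" from is itself a product of its own overrides.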
This particular industry leader was still in a state of shock, protesting that any idiot would know that a car salesperson is supposed to “bloody well lie about lying”: it's an accepted culture, it's how things have always worked, and no computer system will ever change that.
His offsider agreed, saying that everyone, including the customer, lies as part of the bargaining game: about how much they can afford, by playing one dealer off against another, and by claiming their old car is worth twice as much as they believe it to be when negotiating the part exchange.
The initial conclusion of the management was that if the lies underpinning what is deemed an acceptable culture are built into the model, and those lies that do harm are eradicated, then you stand a chance of automating the whole system rather than having to manually work around the computer system just to stay in business.
However, as they started to become familiar with online interactions, and in particular as they were introduced to some of the emerging communities of common interest, the discussions opened up to exploring the dynamics of human interaction, collaboration, trust, and winning.
The notion of developing trust through greater transparency started to gain traction as did the idea of involving individual customers or groups of customers with a common interest in the risk and decision making process when negotiating, selecting and ordering a vehicle.
Whilst this type of interaction and shift in culture became favoured, some alarm was expressed when they explored the journey to its obvious endpoint. At that point it was discovered that consumers would eventually learn how to aggregate their collective purchasing power and form buyer aggregation groups in order to acquire discounts similar to those already enjoyed by hire companies, which can be as high as 45%.
So having come to terms with the idea of business transaction transparency they then realized that these groups would one day challenge some of the more fundamental industry-wide lies.
One of these big secret lies was that manufacturers today have the ability to make cars that last at least fifty years, but actually construct them to last only around seven, and that governments, which rely on the employment and economic benefits of manufacturing, appear to collude with this state of affairs.
Indeed, just like the computer chip industry, the vehicle manufacturing industry requires consumers to buy new in set cycles of between two and seven years, cycles which, if broken, would bring about a collapse of the entire industry model operating today. According to some commentators, the economies of entire countries or cities would collapse as well, with Detroit being the first example of this interdependency bringing about the bankruptcy of a city.
So one could say that any industry leader looking to gain efficiency and effectiveness from moving to the cloud without comprehending the wider implications does so at their own peril: if they get it very wrong and perpetuate what could be viewed as systemic lying or deceit of a critical nature, this could impact the office they represent, the organization, the whole sector they operate within, and the customers it serves.
However, by understanding the different categories of lying, it is possible to automate the lies that are required to make our world work, whilst minimizing those that inhibit trust and eradicating those that do harm. That alone would put an organisation in a prime position to leverage the dynamics of cloud environments where, given the requirement for collaboration in an interdependent world, organizations that generate the most trust will be better positioned to dominate through aggregation than those that do not understand how to build trust-based relationships.
It is fast becoming obvious, in a world that connects everyone to everything via networks resembling a neural network, that those who acquire the broadest consent from the most people or groups will be the winners.
In my next post we will explore the effects of lying on the delivery of health, human services, banking and in whole of government services.
http://www.amazon.com/When-Presidents-Lie-Deception-Consequences/dp/0670032093