AI Apologies: A Tool and a Risk
The reasonableness of its use, and the ethical and humanitarian questions and dangers it raises
It can reasonably be assumed that some critically needed apologies will be a product of AI. There are ethical questions about the practice and some possibility of blowback and consequences, especially in heated conflicts and crises.
It’s a point that Christopher David Kaufman, Ed.D., an adjunct professor at Westcliff University and Southern California State University, makes by citing an example from 2023, when Vanderbilt University apologized for using ChatGPT to write a mass-shooting email.
“Even as someone who runs several startups in the AI space, I feel that emotional intelligence is something uniquely human, and that we should deal with our emotions in a sincere way, unmediated by chatbots,” says Catalin Zorzini, the CEO at Fritz AI, a technology company that analyzes tools, apps and websites, ranking which ones are best and most ethical.
“I find chatbots extremely helpful for many things, but we need to draw the line somewhere, in order to preserve the authenticity and sincerity of communicating with other people.”
There are holes in the approach if people come to overrely on AI for what humans expect from one another.
“I think apologies are something intimate, that come from the depth of our being, from a feeling of regret, which cannot be manufactured by a chatbot,” Zorzini says.
“If I received an apology that I knew was AI-generated, I would still appreciate the gesture, but it would not bring the same level of relief we normally feel when the gesture is sincere and real.”
AI will be used. It’s easy, helpful and tempting.
Yet it doesn’t have to be a negative if people better understand the risks and dangers that might be present and respect them.
“AI, including generative AI, is a tool. It is quite likely that it will be used to address many needs people have, including crafting apologies,” says Doaa Taha, Ph.D., an assistant professor of analytics at Harrisburg University of Science and Technology.
“Apologies pertain to very particular and sometimes personal circumstances, so the first risk is that the generated content will not be appropriate and could compromise the authenticity that is so necessary in a heartfelt apology.”
That’s not the only concern that Taha recognizes and forecasts.
“There is also a risk that the AI-generated response will be identifiable as such and create awkwardness or even worsen the situation,” she says.
This presents a choice about the utilization of AI when an apology is needed.
“This can be mitigated by simply not using AI or by using it as a tool to spark inspiration,” Taha suggests. “It is also recommended that the apologizing party conduct any necessary research, if the apology is related to a complex or technical matter.”
Moving deeper into the kinds of trouble humans experience: AI can help people communicate during highly stressful times, just not without potential landmines.
“From a legal and crisis management standpoint, using AI to generate apologies can be seen as efficient for certain large-scale contexts, such as mass error notifications,” says Braden Perry, a litigation, regulatory and government investigations attorney and partner with Kansas City-based Kennyhertz Perry.
“However, it raises concerns about authenticity and human empathy, both of which are fundamental to genuine relationship repair.”
People have expectations and standards, and failing them or falling woefully short invites harsh judgment and escalates problems.
“Stakeholders often expect accountability from an actual person when trust has been breached,” Perry says. “If an AI apology is perceived as formulaic or insincere, it can undermine the organization’s credibility and suggest an attempt to deflect responsibility rather than meaningfully address concerns.”
It bears watching how society will view, experience and judge apologies that don’t appear to be human-to-human.
“The question of AI-generated responses being socially acceptable or not may depend on the recipient, how the AI is being used, and whether the use of AI is assumed or detected,” Taha explains.
“Some people may find the use of AI for this purpose unimportant, while others may come to feel that AI-crafted apologies lack enthusiasm or authenticity.
“With it being used for generative purposes so widely, it may become acceptable and even normalized to issue apologies in this way.”

“Employing AI in conflict resolution may risk sidelining the human elements of empathy and genuine dialogue, which are crucial for restoring trust,” Perry warns.
“Ethical conflict management requires transparency, accountability and a willingness to address and rectify harms — elements that a purely AI-driven process may fail to convey.”
There are other factors that merit consideration.
“Moreover, feeding dispute details into AI can raise ethical and privacy questions,” Perry asserts, “as sensitive or confidential information could be misused or inadvertently exposed, thereby compounding trust issues rather than resolving them.”
Whether it works as intended or hoped depends on the recipient’s expectations.
“Its effectiveness largely depends on people’s individual philosophies when it comes to judging the ethics involved,” Taha says.
“One potentially ‘ethically safe’ route that could be appealing would be to stick to any agreed-upon academic and professional codes of ethics concerning using AI in the workplace to generate useful information — but then individualize the apology.”
She explains her reasoning.
“This keeps one’s own critical thinking and personality foremost, helps ensure that conflict management is a genuine, humanistic process — not outsourced to generative AI tools — and creates the expectation that professionals will not blindly use AI-generated outputs that may lack relevance, quality or accuracy.”

There are greater long-term dangers.
“The main risk lies in becoming overly reliant on AI to manage our relationships instead of using it as a tool or precursor,” Taha states.
“It is increasingly necessary for organizations to develop clearly defined expectations regarding how it is used to generate content, inform an audience or conduct quality control, in ways that ensure the optimal, ethical and socially healthy use of AI.”
She offers guidance.
“People need to decide when it is needed and when it is not,” Taha begins, adding that “AI can be helpful to analyze information on a complex conflict, but it is best used in conjunction with one’s own critical judgment when it comes to navigating conflicts or resolving crises.”

Perry points out possible legal concerns.
“Organizations face potential exposure if AI-generated content unintentionally violates regulations or includes discriminatory language,” he says. “This risk is heightened because AI tools rely on training data and may produce outputs lacking cultural or situational nuances.”
There is also the trust and relationship factor to input into decision making.
“From a reputational perspective, stakeholders could perceive the organization as insensitive or ‘tone-deaf’ if it relies too heavily on automated processes for delicate matters,” Perry says.
“Complex conflicts typically demand a level of personalization, emotional intelligence, and strategic insight that AI cannot fully replicate, making oversimplification a serious concern.”
AI is Best as a Communication Assistant
“It is a tool that, with the right methodology, can optimize how we access and organize useful information,” Taha says. “But it also exerts an influence on how we communicate with one another and the humanity — real or perceived — of those exchanges.”
Perry agrees with Taha that the technology can be useful as part of the communication process yet may not be needed or beneficial as a crutch.
“Rather than relying solely on AI, individuals and organizations should use it as a preliminary tool — helpful for drafting and brainstorming — but always ensure that a human element remains central,” he says.
“In critical conflicts or crises, personal communication, whether face-to-face or through direct contact, carries far more weight in demonstrating empathy and accountability,” he adds.
In consumer issues, Bob Bilbruck, the founder and CEO at B2 Group and Captjur, a strategic consulting and business services firm, says one belief is that the outcome is what matters.
“At the end of the day, resolution to satisfaction of the consumer is the point of a good customer service system or process. If AI does this better than a human, which I would argue is probably the case… it’s good,” he proposes.
Taha offers a reminder.
“We need to make sure we’re using it only when it’s truly needed, so that we can keep human emotions, judgment and creativity a central part of how we communicate.”
When work, business and life get more complicated and complex, other approaches and skills are preferable.
“Investing in training on negotiation, conflict de-escalation and empathetic communication is essential, as these human skills underscore sincerity and foster genuine dialogue,” Perry advises.
“Ultimately, while AI can facilitate certain aspects of communication, trust and resolution demand active, human-led engagement.”
Who Are We?
“Tools are who we are,” says Kaufman, a specialist in organizational learning and leadership who consults for businesses.
“If we use stoves, skillets, olive oil and garlic, we are probably a Mediterranean chef. If we use hammers, wood and nails, we are probably a carpenter. If we use AI to express our deepest regrets, then who are we?
“Are we as leaders trying to squeeze productivity as a value above responsibility and compassion? If we want the latter, then the tools must follow, even if they are not as productive. Handcrafted still means something.”
