AI Companies Communicate Trustworthiness - or Not - Through Their Decisions
On the superhighway to financial windfalls, AI companies must prove themselves responsible

Artificial intelligence is developing quickly. It commands attention, and its use is growing now and will only grow further. It comes with significant concerns, however.
Among those concerns are “privacy, bias, accountability and the unintended consequences that continue to surface,” Hessie Jones, a strategist, entrepreneur and investor covering AI, wrote at Forbes.
Those issues have to be planned for, with reliable safeguards implemented and monitored and prompt corrections made, to demonstrate responsibility along the way to progress, stratospheric success and grand financial profits and influence.
A pressing question is how AI systems will prove themselves “ethical, trustworthy and inclusive,” which, Jones wrote, is “not just a technical challenge but a moral imperative.”
The answer is not yet fully known. With any “good” development in technology there will be noticeable, problematic and perhaps critically dangerous gaps between expectations, standards and performance. There will also be failings and, yes, nefarious participants in the field. How can these issues be prevented and mitigated?
Ethical leadership, from the top down, will be a critical requirement, as will commitments to governance, compliance and tight oversight, precisely because of the risk of dangerous outcomes.
AI is a gold rush. When massive profits are there for the taking, bad decisions, bad actors and abuse will be present, not necessarily as the norm, but there and always lurking.
How will the ethical leaders and companies (and they exist) sniff them out early and come together to protect their companies and the industry’s well-being?
One way is an intent focus on the greater good, even if it means less obsession over billions or trillions of dollars. Transparency and self-control have to be unwavering commitments.
Continuous trust building and maintenance, in any company or the industry, cannot be secondary in decision analysis and decision making. It has to be baked into planning and execution.
“That includes informed consent, clear disclosure of synthetically generated content and information about when and how an individual is interacting with an AI system,” Rebecca Finlay, the CEO at Partnership on AI (PAI), told Jones.
Informed consent: This, like the following two points, is a matter of ethics. Not all companies in every industry or organization exercise it, for their own rationalized reasons, yet it’s critical for “good” business, for building and maintaining relationships and for long-term sustainability.
Clear disclosure: This can be, yet should never be, buried in the “fine print.” Don’t give users a legitimate reason to question your ethics, character, intent and trustworthiness. It can be painfully difficult, if not impossible, to regain trust.
Information about when and how: People don’t like being purposely misled, and they like it even less when you rationalize the behavior or displace blame by pointing the finger. It’s costly behavior.
It’s an exciting new frontier, yet people are watching closely for slip-ups and malpractice in leadership and technology. Exceed expectations and pleasantly surprise them.
Communication Intelligence columns are written by this newsletter’s writer and publisher or by contributors.

