Artificial Intelligence is on the rise, and chatbots are already active on social media and as the first line of customer service on company websites. These are text- or voice-based applications that understand the user’s statement or question and reply appropriately. During the conversation, the chatbot can learn from the user and incorporate this new knowledge into the discussion, much as a human would.

Thanks to this modern software, users find it increasingly difficult to distinguish a human call-center agent from such an AI program. It is therefore only logical to use AI not just for contact with customers and suppliers, but also for employees. The artificial colleague would take workload away from the human Compliance Officer. It can offer individual treatment, 24-hour availability, and anonymity, the latter at least as long as the employee trusts the company’s data privacy processes. Such an intelligent app would not only serve as a source of information, but could also help avoid “ethical blindness”.
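To make the idea a little more tangible, here is a minimal sketch of such an assistant in Python, assuming a simple keyword-based FAQ lookup; the topics, answers, and the `answer` function are illustrative assumptions, not any real compliance system. A production chatbot would use natural language understanding rather than keyword matching, but the sketch shows the core behavior described above: always available, anonymous, and answering from a knowledge base.

```python
# Minimal sketch of a compliance-assistant chatbot.
# The FAQ topics and answers below are illustrative assumptions only.

FAQ = {
    "gift": "Gifts above a nominal value generally require pre-approval. "
            "When in doubt, ask before accepting.",
    "conflict of interest": "Disclose any personal interest that could "
                            "influence a business decision.",
    "hotline": "You can report concerns anonymously via the ethics hotline.",
}

def answer(question: str) -> str:
    """Return the first FAQ entry whose keyword appears in the question."""
    text = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in text:
            return reply
    return ("I don't have a stored answer for that yet; "
            "please contact your Compliance Officer.")

if __name__ == "__main__":
    # The bot runs around the clock and stores no user identity,
    # which is what provides the anonymity mentioned above.
    print("Compliance assistant (type 'quit' to exit)")
    while True:
        question = input("> ")
        if question.strip().lower() == "quit":
            break
        print(answer(question))
```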

2016: The Robot

“Formally, ethical blindness can be defined as the temporary inability of a decision maker to see the ethical dimension of a decision at stake.”[1] It is often provoked when an employee is so deeply involved that he or she can no longer see the situation from another angle or point of view. One practical way to avoid the problem is to ensure breaks in which the employee can discuss the situation with a person of trust, a friend, or a family member. If this is not possible, there should at least be time to sleep on a relevant decision, or to take a cup of tea or coffee. All of these actions aim to interrupt the flow, so that the individual can start a different (and perhaps more thorough) decision-making process, which may lead to different results. If the employee is travelling in far-away countries, an intelligent chatbot can take the role of a human discussion partner: it can understand the risk situation and argue from a value point of view. Furthermore, it disrupts the employee’s continuous flow and provokes the required break, so that the employee can (self-)reflect on the situation.

Such an AI should clearly not replace a human Compliance Officer, as building a trusting relationship requires “face time”. So even with the support of modern IT, the Ethics & Compliance department needs an adequate travel budget to meet employees at their work locations, whether office, workshop, or project site.

If not prevented by data privacy law and/or internal guidelines, such an app could connect to the HR database and gain access to the employee file. With this access, the software understands whether the individual is introverted or extroverted, a success seeker or a failure avoider. Thanks to this additional information, the software can adapt its behavior and actively steer a tailored discussion. This functionality requires an open discussion about ethics, and about whether everything that could be implemented also should be.
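The following sketch illustrates, under stated assumptions, how such profile-based adaptation could look in code. The profile fields, names, and wording are hypothetical, not a real HR schema; the point is only that a small amount of stored personality data already changes how the bot frames the conversation, which is exactly why the ethical discussion above is needed.

```python
from dataclasses import dataclass

# Hypothetical sketch: adapting a chatbot's tone to a personality profile
# taken from an HR record. Field names and phrasing are assumptions.

@dataclass
class EmployeeProfile:
    name: str
    extroverted: bool
    success_seeker: bool  # False means the person tends to avoid failure

def opening_message(profile: EmployeeProfile) -> str:
    """Choose a conversation opener that fits the stored profile."""
    if profile.extroverted:
        greeting = f"Hi {profile.name}, good to talk again!"
    else:
        greeting = f"Hello {profile.name}."
    if profile.success_seeker:
        framing = "Handling this correctly will strengthen your project."
    else:
        framing = "Handling this correctly protects you and your team from risk."
    return f"{greeting} {framing} Shall we walk through the situation together?"

if __name__ == "__main__":
    print(opening_message(EmployeeProfile("Ana", extroverted=False, success_seeker=True)))
```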

Henz, Patrick (2017): “Access Granted – Tomorrow’s Business Ethics”

[1] Palazzo, Guido / Krings, Franciska / Hoffrage, Ulrich (2012): “Ethical Blindness”
