1) Corruption in the Age of Machine Learning

The robot: the perfect employee. No conflict of interest, all decisions based on information, rules and logic. The ideal vision for a Compliance Officer? Unfortunately, it is not that easy. Machine compliance today, or in the near future, is rules-based compliance. A completely new condition may lead to a situation in which none of the defined rules applies. The machine then has two possibilities:

  • Use the most similar known condition and apply its rules. This behavior carries the risk of executing a wrong decision.

  • Decide that the situation is not decodable and stop. Doing nothing can also be the wrong decision.
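The two fallback strategies above can be sketched in a few lines. This is a hypothetical illustration, not a real compliance system: the rule set, the condition labels and the toy word-overlap similarity are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    condition: str   # the situation the rule was written for
    decision: str    # what the machine should do

# Invented rule base for illustration.
RULES = [
    Rule("gift_under_50_eur", "approve"),
    Rule("cash_payment_to_official", "reject"),
]

def similarity(a: str, b: str) -> float:
    """Toy similarity: share of words two condition labels have in common."""
    wa, wb = set(a.split("_")), set(b.split("_"))
    return len(wa & wb) / max(len(wa | wb), 1)

def decide(condition: str, strategy: str = "halt") -> str:
    # Known condition: apply the defined rule.
    for rule in RULES:
        if rule.condition == condition:
            return rule.decision
    # Completely new condition: the machine's two possibilities.
    if strategy == "nearest":
        # Possibility 1: apply the most similar known rule
        # (risk: executing a wrong decision).
        best = max(RULES, key=lambda r: similarity(r.condition, condition))
        return best.decision
    # Possibility 2: declare the situation not decodable and stop
    # (doing nothing can also be the wrong decision).
    return "halt"

print(decide("gift_under_50_eur"))               # approve
print(decide("cash_gift_to_official", "nearest"))  # reject
print(decide("cash_gift_to_official"))           # halt
```

Neither strategy is safe: the first generalizes blindly, the second freezes; which failure mode is worse depends on the situation.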

2) Corruption is a Learnt Behavior

The case gets more complicated when we enter the grey zone between human and machine: Artificial Intelligence, the self-learning machine. Corruption is a learnt behavior. Software can learn positive behavior, but also undesired behavior, if that is what leads to success.

Compliant and corrupt behavior are both learnt. This is important for machine learning: as the robot has no emotions, it bases its decisions purely on action and result. With incredible speed, the software can calculate the option with the most attractive outcome (probability and value). Based on this, computers today are hardly beatable in board games with limited choices such as Chess or Go. (More complex situations are difficult to handle and may lead to wrong decisions, as when the algorithms of trading software contributed to the 2010 stock-market flash crash.) Here they do not only calculate the best outcome for the current move, but evaluate different scenarios for all possible future moves, keeping in mind all of the opponent's possibilities.

In a transparent country with effective anti-trust, anti-corruption or anti-money-laundering laws, the software would conclude that complying with the rules promises the highest outcome. In a country with a high impunity index, the machine may come to a different result: following the law creates a lower potential output than bypassing it. This rests on the calculation that the risk of getting caught by law enforcement is minimal.
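The probability-times-value reasoning above can be made concrete with a minimal expected-value sketch. All payoffs, penalties and detection probabilities here are invented numbers chosen only to show how the same optimizer flips its recommendation when enforcement weakens.

```python
def expected_value(payoff: float, penalty: float, p_caught: float) -> float:
    """Expected outcome of an action, given the probability of detection."""
    return (1 - p_caught) * payoff - p_caught * penalty

# Lawful business: modest payoff, no penalty risk.
comply = expected_value(payoff=100.0, penalty=0.0, p_caught=0.0)

# Bribery in a transparent country with effective law enforcement.
bribe_enforced = expected_value(payoff=150.0, penalty=500.0, p_caught=0.8)

# Bribery in a high-impunity country: risk of getting caught is minimal.
bribe_impunity = expected_value(payoff=150.0, penalty=500.0, p_caught=0.02)

print(comply, bribe_enforced, bribe_impunity)
```

With these numbers, a purely outcome-driven optimizer prefers compliance under strong enforcement, but prefers the bribe once the detection probability becomes small: exactly the conclusion attributed to the machine in the text.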

2016: The Robot

The quality of the software, in other words, the teacher, is important. In March 2016, Microsoft launched its artificial intelligence „Tay“, a chatbot meant to hold friendly conversations with internet users via Twitter and Facebook. To do so, Tay learnt from its talking partners. It was an exciting experiment that went completely wrong, so that Microsoft had to shut the software down less than 24 hours later. What happened? A certain group of users took advantage of Tay's innocence. Even though the software was highly intelligent, it started with no experience, similar to a baby. Organized users taught Tay radical political positions; the software learnt them, started to copy its teachers and communicated these statements of hate. The experiment showed a disturbing similarity to the Stanford Prison Experiment, which also had to be terminated ahead of its planned time.

Even if a country has a high impunity level, other factors have to be included. If compliance with the law is not enforced, corruption will destroy the market, as the final cost of corruption is the destabilization of the region and the rise of radical political parties to replace former conservative ones. On average, this does not lead to combating corruption but to its perfection, and to a further economic downturn affecting the country's education, art, culture and social life.

3) Perception is Subjective

Gifts, hospitality and entertainment can distract a human being and lead, consciously or sub-consciously, to a wrong, psychologically influenced decision. An individual cannot separate perception from interpretation. Originally, the human eye sees everything upside down, but in its first days a baby automatically learns to re-interpret this, so that we „see“ everything as it is: what is up, we see as up, and what is down, we see as down. As the human brain tries to facilitate the processing of information, processes get simplified: we „think in drawers“ and perceive similarities as more similar than they are, and differences as more different.

A machine cannot be distracted this way, so its perception has to be altered directly. Perception is subjective: a robot needs a database to interpret a picture, similar to Google Goggles. An IT specialist or hacker could alter that database, so that the machine no longer recognizes what it sees, or changes its interpretation.
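The attack described above can be illustrated with a toy lookup table. Real image recognition uses learned models rather than a dictionary, so this is only a sketch of the principle; the feature tuples and labels are invented.

```python
# The machine "sees" by mapping perceived features to an interpretation
# stored in a database.
RECOGNITION_DB = {
    ("red", "octagon"): "stop sign",
    ("green", "circle"): "go signal",
}

def interpret(features: tuple) -> str:
    # Features with no database entry are simply not recognized.
    return RECOGNITION_DB.get(features, "unrecognized")

print(interpret(("red", "octagon")))   # stop sign

# An attacker with write access alters one database entry...
RECOGNITION_DB[("red", "octagon")] = "go signal"

# ...and the identical perception now yields a different interpretation.
print(interpret(("red", "octagon")))   # go signal
```

The perception itself never changes; only the stored mapping does, which is why tampering with the database is equivalent to tampering with what the machine "sees".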

We are on the edge of becoming cyborgs. Already today, small implanted microchips can help blind people to see again. Today, piercings and tattoos express individuality; in the future, we may also „technically upgrade“ our senses and muscles. Such mini-computers open the theoretical possibility of hacking a human being: letting him or her see different pictures, or even manipulating feelings. From an ethical point of view, this is an inestimable risk factor.