In early December 2020 (around five months before this blog post was written), I gave a presentation about AI Trust & Ethics called "What you always wanted to know about practical AI Trust & Ethics, but were too afraid to ask". Since I was not satisfied with the final product (the time my session had, the time I was able to allocate to preparation, the content that was left behind and never made it into the presentation), I decided to write about this topic in 3 parts.
I have to make a disclaimer: by no means am I pretending to be an authority in this field, and please do not base your business and/or personal life decisions on my thoughts. These thoughts are my own, and this is my personal site, where I write whatever I want, whenever I feel like it. There is no guarantee whatsoever.
These are my thoughts for the kind of people who enjoy discussing new and challenging topics.
I believe we should start the conversation about the principles of Trust and Ethics by defining what those terms mean.
Ethics are the moral principles and the definition of right and wrong that we expect from any organisation we want to deal with.
Trust is the basic currency of any business transaction: we make deals with people we trust, and in the 21st century it is the most essential coin, since trust will become progressively harder to earn in an ever more attention-seeking world.
For me, these two are deeply interconnected, because
– someone has to practice a certain amount of ethics before gaining a certain level of trust;
– at the same time, in order to develop ethics, one needs a certain amount of trust, if only to get recognition for the ethical principles one practices.
Openness & Transparency
Openness & Transparency are aspects of current Artificial Intelligence systems that, if not supported by trust or, even worse, ignored, will bounce back in the form of questions and issues:
– Due Process
– Accountability, as in: who is responsible for the outcome and for the bottom line
– Auditing, which might be imposed on your activity from a trust angle (financial, for example) or an ethical one
– and many others.
This might sound as if those requirements mean the same thing as trust, or have a direct relationship with it, but that would be a major misreading – trust and transparency sit on opposite sides of the spectrum. If we trust someone, we do not need their openness and transparency. Only when there is a serious lack of trust (or when we are building our name, and hence trust in the brand) is transparency required.
The hoped-for result of transparency and openness is the establishment of trust, not the other way around.
There is a whole universe of subtopics here, with different schools of thought and taxonomies, but given finite space and time I will pick just 2 major categories:
Manipulation is the biggest and the most commonly discussed one when we consider an AI system.
The huge difference is this: a traditional system has virtually no direct influence on its future results (unless it is hacked), whereas in artificial intelligence-based systems we usually have a direct, and sometimes indirect, feedback loop into the production system.
Think about training data sets, which are often selected from current production data: if we inject specific information (say, canceled transactions with specific attributes), over time we can manipulate the system into a specific kind of behaviour.
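To make the feedback-loop risk concrete, here is a minimal, entirely hypothetical sketch (the "model", numbers and retraining rule are all invented for illustration): a toy anomaly detector learns a flagging threshold from recent production amounts, and an attacker who can feed crafted records into that pool gradually shifts what the next training run considers "normal".

```python
# Hypothetical sketch of feedback-loop manipulation (toy example).
# The "model" flags any transaction amount above a threshold learned
# as mean + 2 * stdev of the data pool it is retrained on.
from statistics import mean, stdev

def train_threshold(amounts):
    # Retraining step: learn what "normal" looks like from the pool.
    return mean(amounts) + 2 * stdev(amounts)

def is_flagged(amount, threshold):
    return amount > threshold

# Normal production traffic: everyday purchase amounts.
production = [20, 35, 18, 50, 42, 30, 25, 60, 45, 38]
threshold = train_threshold(production)
assert is_flagged(500, threshold)  # 500 clearly stands out

# An insider (or a patient outsider) injects many high-value
# records into the same pool the next training run samples from.
poisoned = production + [480, 510, 495, 505, 490] * 4
threshold = train_threshold(poisoned)
# After retraining on the manipulated pool, 500 looks normal.
assert not is_flagged(500, threshold)
```

The point is not the toy statistics but the mechanism: whenever production data flows back into training, whoever can influence production data can, slowly and quietly, influence the model.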
This manipulation is not limited to external factors or people: one of the best-known facts about online theft is that most of the time there is a partner on the inside, so potential inside manipulation should be designed, developed and tested against. Sometimes a rather simple act of solving a friend's problem can lead to very uncomfortable consequences in a sufficiently complex system.
Manipulating users' behaviour in ways that lead to serious consequences, such as disastrous self-evaluation based on the premise that the computer system knows the user better than she/they/he does, can be driven not by the system designers and/or developers but by the sheer marketing effort of selling the solution against all odds and against any potential competition. Even though IT people are usually quite removed from the final consumers, that does not take away their responsibility for their actions, and a great system should have some inherent protection.
The so-called “dark patterns” – exploiting behavioural biases and deceiving clients/final users – are another manipulation topic deeply connected to the ethics core, and in modern times the penalty of lost trust will spread like fire through a dry forest.
Exploiting them irresponsibly, even unknowingly, won't cut you any slack – there is no excuse, and you will be held responsible AND accountable.
Many people have pointed out over the past decade that a number of solutions are extremely addictive, and even though in most countries highly addictive substances such as tobacco, alcohol and drugs are strictly regulated (in some places more than in others), there is a huge need for something similar for the cases where AI solutions drive addictive consumption.
It may feel like the sweet spot for certain industries, but it is still manipulation, and in my opinion it should not be done.
Responsibility, the core ethics topic, might sound totally laughable to some, but if it is ignored in a modern online business, it is just a question of time before it comes back hunting you high and low and taking no prisoners.
Responsibility can be split into a number of sub-categories; I will just point to a couple of them:
Responsibility for the bottom line – does it make the world a better place? That is an easy question, but it requires a serious investment of honesty. The point is not to focus on something totally egoistic with a matching answer, such as “it makes my personal world a better place by giving me more money”. That would not be a good answer!
This is a very close relative of the category that raises the issue of overall impact.
Do not get me wrong: the consequences of sloppy work for your competitiveness might not be good, but that is the nature of any business – if you are not doing a good job, you will not get paid.
The environmental impact and the impact on other living beings are other topics mentioned in the famous Ethics guidelines for trustworthy AI. For someone who cares quite a lot about this, it seems very appropriate that when someone designs a system and delegates responsibility for impactful decisions to it, the impact on our home planet is taken into account. It is a lot like inviting someone into our house: we expect them not to kill our dog and cat and not to throw rubbish on the floor.
The system is clever
What if the intelligent system knows more about the client than the client does? A very efficient modern system might arrive at this point much sooner than expected.
And I am not writing here about the Singularity, but about a simple situation where a system collects enough data on an individual and, given the total amount of data and statistics, gains a better understanding of the person in question than they/he/she has. How ethical is this situation? How ethical will the company be considered to be? What would be the consequences in terms of trust?
While it might be great for the business, it might become unethical or even criminal if this information/insight is used or exploited.
The exploitation does not even have to happen on purpose: it might just be a fresh data science intern who accidentally pushes the analysed data into the wrong environment/system, where it gets published on the internet or maybe even stolen.
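How little it takes for a system to "know" something the client never shared can be shown with a deliberately crude sketch. Everything here is invented for illustration – the category names, the weights and the threshold – but the mechanism mirrors the classic retail-analytics scenario of inferring a pregnancy-related life event from a purchase log alone.

```python
# Hypothetical sketch: inferring an undisclosed attribute from
# behavioural data. All signal names and weights are invented.
from collections import Counter

SENSITIVE_SIGNALS = {"prenatal_vitamins": 3, "maternity_wear": 3,
                     "unscented_lotion": 1, "cotton_balls": 1}

def sensitive_score(purchase_log):
    # Sum the weights of sensitive items seen in the log.
    counts = Counter(purchase_log)
    return sum(w * counts[item] for item, w in SENSITIVE_SIGNALS.items())

log = ["coffee", "unscented_lotion", "prenatal_vitamins",
       "cotton_balls", "maternity_wear", "coffee"]
score = sensitive_score(log)
# The system now holds an insight the client never disclosed;
# acting on it (targeted ads, pricing) is where the ethical and
# legal trouble begins.
assert score >= 8  # crosses a hypothetical alerting threshold
```

A few weighted counts are enough; a real system with months of data and actual statistics will do far better, which is exactly why the questions above matter.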
What if an intelligent system's decision about something we consider deeply private puts our and/or our client's core ethics values to the test in a difficult moment?
Ignore this, and you can wash your hands as much as you like – they will still be kind of dirty.
to be continued …