Practical AI Trust & Ethics thoughts, part II

This blog post continues the little series on the topic of Practical AI Trust & Ethics that I started previously in Practical AI Trust & Ethics thoughts, part I.

Continuing with my thoughts on the matter of trust, I believe it is important to underline some key points that accompany the practical aspect of trust.

Trust in the process – clients should have trust in the complete process of development, starting right from concept inception, through the development process itself, and up to the delivery of the final result.
Sometimes, as pointed out previously – without trust, there will be a need for transparency, and this typically puts a heavy burden on the company and the people participating in the process. Notice that transparency does not guarantee trust, because no matter what you do and prove, some people won't believe you under any circumstances.

Trust in the results – There is a need for trusting the results, in their fairness and in their correctness (as in not being influenced internally or externally).
Fairness of the results is a huge topic, with many connections to other parts of trust, such as the Trust in the System; but most of all, at the current stage there is a need for trust in how the original (pre-production) results are obtained, which data was used for training the system (was it biased in any way?), etc.
The other huge topic is the quite limited reproducibility of the results, because most probably, after a couple of months, with a different set of data used for training the AI system, it won't even be possible to obtain the very same result as before. Without trust from the clients, we shall be talking about quite unwanted consequences, and even if they might not be legally binding, they might hurt the overall client trust and prompt distrust from the general client base.
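This limited reproducibility has at least one narrow, controllable technical component: if we pin the random seed and archive the exact training data, a run can be repeated; as soon as the data changes, it cannot. A minimal sketch, assuming Python with NumPy (the `fit` helper and all numbers are made up purely for illustration):

```python
import numpy as np

# Toy stand-in for "training a model": linear regression fitted by
# gradient descent from a seeded random initialisation. Hypothetical
# helper, purely for illustration.
def fit(X, y, seed=0, steps=2000, lr=0.1):
    rng = np.random.default_rng(seed)   # pin the random initialisation
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

# Same data + same seed: the run is repeatable.
assert np.allclose(fit(X, y, seed=42), fit(X, y, seed=42))

# A few months later the training data has drifted...
X2 = np.vstack([X, [2.0, 1.0]])
y2 = np.append(y, 4.5)

# ...and the "same" training run now yields a different model.
assert not np.allclose(fit(X, y, seed=42), fit(X2, y2, seed=42))
```

The seed only removes run-to-run randomness; it does nothing about the data drift, which is exactly why the original result may be irrecoverable later.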

Trust in the system – This kind of trust is focused on the company level and includes such topics as trust in the core company values, in the concrete people involved (hire a person who is badly perceived/distrusted by the general public and you will see how valuable trust can be), and even in the support service (which will range from "are they helpful?" right up to "are they doing the right thing?").
It is important that everyone participating in the system is a person of trust and that the system itself (as in the company) can be trusted to deliver a good result.
I know some companies that, even with great people on their side, won't earn my trust because of the system that they have created.

Trust is not static

By now, an average reader of this post should be very aware that trust is not static and, even more, that it is very fragile. It is so darn hard to earn (and sometimes it is actually impossible), and it is so easy to instantly lose.
Trust's very dynamic nature is not the only thing that impacts development and acceptance. A major jump in trust comes with a generational change, once a new generation becomes involved in decision making; but with the passing of time this will change again: every new generation starts by breaking with the old and extending trust, yet generally it will eventually start slowing down progress and putting the brakes on trust in the things that are recent.


Security

Security is essential for trust on the side of the client, and I would say it is of supreme importance on the side of the AI solution provider, as the ethical side of the offer.

Besides quite obvious topics, such as data protection and security of the process (and the whole topic of manipulation that was referenced in the first post of this series), there are 2 topics that I feel need to be highlighted:

Privacy – has been such a crucial topic in modern times, with GDPR and other local privacy protection laws establishing huge fines for failures. Protecting privacy is a question of professional ethics, and successful protection of privacy will help to establish lasting trust between your clients and your company.

Surveillance – with the digital economy and analytics breaking into every imaginable sphere of our lives, most people and companies are discovering the power and the potential of data. Here is where ethics will enter the stage again and again, and here is where surveillance should be quite limited, to say the least: unless there is a LEGAL need, I am quite opposed to any potential surveillance and would not be willing to establish a trusted relationship with any individual or company that wants to practise surveillance.
In the digital world, we live in the Free* social economy – aka the surveillance economy – and with every passing moment I personally find myself preferring paid services that will not sell my data, instead of being the product (as in "when the product is free, your data/self is the product").


Bias

For me personally, bias often starts with a rather simple premise and the associated arrogance:

"I know how this works."

None of us does.

Bias has lately been quite a topic among the people who truly care about AI, but it should be even more central as, FINALLY, racial & gender equality are becoming more important topics for society in general.

One should not take a biased data set to train their model – by now this should be one of the most basic principles, but the true question is: "How can we ensure the least amount of bias possible?"

– Humans building AI systems have their own bias, and its inclusion is almost inevitable. (You know, a developer testing their own solution is almost always unable to spot its errors, because the thinking model is the same.)
– The original data used in development (or, later in production, for re-training) will introduce inevitable bias. Even if we ensure that our original training data is representative, how can we guarantee that over time this situation will stay the same if, for example, our most typical clients currently represent one particular group? Addressing this issue will require quite an amount of investment, but I argue that it is the ethical thing to do.
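A very crude but practical first step is simply measuring how each group is represented in the training set and flagging anything that drifts far from a reference share. A minimal sketch in Python – the group labels, reference shares, and tolerance are made-up numbers for illustration, not a real fairness methodology:

```python
from collections import Counter

def representation_gaps(samples, reference, tolerance=0.10):
    """Compare each group's share in `samples` against a reference share.

    Returns {group: observed_share - expected_share} for every group
    whose deviation exceeds `tolerance`. Hypothetical helper.
    """
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Made-up example: training rows labelled by some demographic attribute.
training_groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
# Hypothetical real-world shares we would like the data to reflect.
real_world = {"A": 0.50, "B": 0.30, "C": 0.20}

print(representation_gaps(training_groups, real_world))
# → {'A': 0.3, 'B': -0.15, 'C': -0.15}
```

Re-running a check like this against every re-training data set is cheap; it does not prove the absence of bias, but it catches the drift described above – a client base sliding toward one particular group – before the model quietly encodes it.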

HCI – Human Computer Interaction

A huge issue is the way we interact with computers; the amount of time we spend with them is increasing by the minute, but there are limits each individual and/or society is not ready to cross.
The sensitive topics of general health, mental health (think about stress, emotional behaviour, etc.) & sexuality are something that needs to be thought about and cared for – because they are always present and yet still so rarely talked about (as in, avoided being exposed) in our general society. It is just a question of time before we shall be deciding, even at the level of society, how to deal with those issues and applications.
And by the way – there are already so many applications on the market touching on those aspects, but the regulations and the respective ethics and security are topics that still seem to be quite in the conceptual phase, which is saddening for me.

Autonomous Systems

As we are entering the world of semi- or even fully autonomous systems, there are many questions to be asked.

The power of the final decision: who should have it? When should we leave the final decision to an automated system, and how fair is that?
Most of the people I work with or am friendly with are in favour of autonomous driving vehicles, but that does not seem to be the case for society in general.
Who should be responsible for pushing the red button on military systems? Under which circumstances? With this issue, I feel that my own ethics will be put under even more pressure than in the case of an autonomous driving system deciding which life it will take.

Overall, the decision will be dynamic as to when a fully autonomous system should take decisions and when we shall have a semi-autonomous system whose decisions will need to be confirmed by a human (or not, if the issue is critical and a no-decision would be worse). We shall have many categories of such systems, and trust will be established with some of them, while not with others (and this does not mean that one day some people won't trust machines more in some aspects than they currently trust another human being).

If this would not be enough, there is always Singularity. :)


The Labour Market

Just the other day, a friend I greatly admire shared a certain BIG percentage of jobs that are about to be lost (wiped out by AI), since they are totally obsolete.
We can argue about whether that particular percentage will materialise or not, but even more important is to decide how to handle those issues and the re-education of society, which is totally unavoidable and, as far as I can tell, just a question of less than 2 decades.
With the pandemic still looming, I have fewer and fewer doubts that we are about to enter a new phase in labour. A phase where a lot of old concepts will not apply anymore.
Another interesting angle to tackle is whether some economic changes are ethical or not, and so many opinions will arise here that the issue will become political in about a second.

to be continued with the operational and even more data-based aspects …
