Trustworthy AI Approach

At ML6, we firmly believe in the application of AI to benefit society. With “do good” as one of our core values, we are committed to building reliable, safe technology and protecting it in a sustainable way.

As a leader in AI, we take our responsibility to build safe and reliable technology seriously. This is why we have identified a set of principles, anchored in fundamental rights, to guide our work moving forward. These are concrete standards that will govern our research, services and business decisions. 

  1. Do good: We firmly believe in the benefit AI technology can bring to society. As we consider the potential applications and uses of AI technologies, we will only proceed when we believe that the likely benefits substantially exceed the foreseeable risks and downsides. The environmental and societal impact of a project will always be carefully considered.
  2. Ensure Technical Robustness: We mitigate the risk of unintended consequences of the applications we build and AI technologies we develop by ensuring they are resilient, secure, safe and reliable. We aspire to high standards of scientific excellence in doing so.
  3. Accountability: We hold ourselves accountable and work with our clients to put in place the mechanisms needed to ensure responsibility and accountability for the applications we build.
  4. Respect human autonomy: Together with our clients, we seek to empower human beings by providing detailed explanations of our technologies, appropriate opportunities for feedback and overarching control, and by ensuring that proper oversight mechanisms are in place.
  5. Advocate fairness and explicability: In the design of all our applications, we proactively advocate and foster diversity, seek to avoid unfair bias and consider explicability and transparency.
  6. Protect personal information and security: Our data security and governance protocols and policies ensure that privacy and the quality, protection and integrity of data are central to everything we do.

To make these principles operable, ML6 uses and contributes to the development of a Trustworthy AI Framework. This framework builds on work by the EU High-Level Expert Group on AI (AI HLEG) and by Fraunhofer IAIS on the development of an Ethical AI Certification. The framework is intended to ensure that technological, legal and ethical risks are taken into consideration during the design, building and productionisation of machine learning applications.

The methods and possible applications of AI are evolving rapidly, and society's concept of ethics and the regulation of artificial intelligence is still taking shape, so we acknowledge that this area is dynamic and evolving. Our internal Trustworthy AI workgroup is therefore committed to staying up to date with the latest developments in the technological, legal and ethical fields and to integrating best practices within ML6. We do this by exchanging insights and practical experience through collaborations with institutions such as UGent, Agoria and AI4BE, by providing feedback on the HLEG Assessment List (via the AI Alliance) and by actively participating in work sessions on the topic. We will adapt our framework, approach & operating model as we learn over time.

For any questions or feedback with regards to our Trustworthy AI Approach, please reach out to ethics@ml6.eu.