At ML6, we firmly believe AI should be applied to benefit society. With “do good” as one of our core values, we are committed to building reliable, safe technology and to deploying it in a sustainable way.
As a leader in AI, we take seriously our responsibility to build safe and reliable technology. That is why we have identified a set of principles, anchored in fundamental rights, to guide our work moving forward: concrete standards that govern our research, services and business decisions.
To make these principles operational, ML6 uses and contributes to the development of a Trustworthy AI Framework. This framework builds on the work of the EU High-Level Expert Group on AI (AI HLEG) and of the Fraunhofer Institute IAIS towards an Ethical AI certification. It is intended to ensure that technological, legal and ethical risks are taken into account during the design, building and productionisation of machine learning applications.
The methods and possible applications of AI are developing rapidly, and society’s understanding of AI ethics and regulation is still taking shape; we acknowledge that this area is dynamic and evolving. Our internal Trustworthy AI workgroup is therefore committed to staying up to date with the latest developments in the technological, legal and ethical fields and to integrating best practices within ML6. We do this by exchanging insights and practical experience through collaborations with institutions such as UGent, Agoria and AI4BE, by providing feedback on the HLEG Assessment List (via the AI Alliance) and by actively participating in work sessions on the topic. We will adapt our framework, approach and operating model as we learn over time.
For any questions or feedback regarding our Trustworthy AI approach, please reach out to firstname.lastname@example.org.