At Decision Lab, we see how rapidly artificial intelligence (AI) is developing and how it is transforming the work of more and more of us. With standardisation, legislation, and regulation on the way, AI ethics matter to us all.
The growing power of AI demands responsible development and deployment. At Decision Lab, we create Decision Intelligence products powered by Machine Learning (ML) and other AI technologies, so we understand the imperative of building and deploying AI responsibly and inclusively.
Today's relative regulatory calm appears to be the calm before the storm. Whilst there are currently no universally adopted regulations for AI ethics, the International Organization for Standardization (ISO) is developing a new standard, ISO/IEC 42001, which aims to provide a framework for organisations in any industry to implement responsible AI practices. In the UK, the standard is known as BS ISO/IEC 42001:2023.
What is ISO/IEC 42001?
While 42 may be the answer to the Ultimate Question of Life, the Universe, and Everything, ISO/IEC 42001 is more niche. It provides a comprehensive framework that outlines specific requirements for organisations to establish an Artificial Intelligence Management System (AIMS). An AIMS ensures responsible development and use of AI systems throughout the entire lifecycle, from conception to deployment. This standard is particularly relevant for organisations, like Decision Lab, that develop and utilise AI-powered products and services.
At the time of writing, ISO/IEC 42001 is in final review, with official confirmation and publication expected in 2024. Given the increasing public and governmental focus on AI ethics, widespread industry adoption seems probable once it is published, particularly if basic compliance becomes a condition of government-funded projects and contract awards.
How does ISO/IEC 42001 relate to the EU AI Act?
ISO/IEC 42001 can be a valuable tool for preparing for compliance with the forthcoming EU AI Act. By following its principles and requirements, organisations can demonstrate a commitment to responsible AI development and deployment, helping them meet the Act's legal requirements.
AI Ethics at Decision Lab
At Decision Lab, we are not waiting. We consider AI ethics paramount. Responsible AI practices and adherence to the ISO principles are a part of our workflows, ensuring that the Decision Intelligence products we build are implemented around the following key principles:
- Transparency: We strive to ensure our AI models are clear and understandable. This allows for human oversight and avoids the creation of “black box” algorithms.
- Accountability: We are responsible for the development and deployment of all our AI products, including addressing potential biases and unintended consequences.
- Fairness: We actively mitigate biases within our AI models to ensure they are fair and inclusive.
- Privacy: We prioritise data privacy and security throughout the entire development lifecycle.
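To make the fairness principle concrete, here is a minimal sketch of one common bias check: comparing the rate of favourable model outcomes across two demographic groups (a "demographic parity" gap). This is an illustration only, not our production tooling; the data, group labels, and tolerance threshold are all hypothetical.

```python
# Minimal demographic-parity check: compare the rate of favourable
# model outcomes between two groups. Data and threshold are
# hypothetical, for illustration only.

def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in favourable-outcome rates between groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model outcomes (1 = favourable decision) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # favourable rate: 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # favourable rate: 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.250

# Flag the model for review if the gap exceeds a chosen tolerance.
TOLERANCE = 0.1
if gap > TOLERANCE:
    print("Gap exceeds tolerance; model flagged for bias review.")
```

Checks like this are only one part of bias mitigation, but running them routinely across the lifecycle is the kind of practice an AIMS formalises.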
While ISO/IEC 42001 awaits final review, the core principles it represents are already guiding our work at Decision Lab. We believe that by proactively adopting these principles, we can ensure our AI products are a force for good, shaping a future where AI benefits everyone.
So, how are you preparing for the coming storm of standardisation, regulation, and legislation? Our early embrace of AI ethics permeates our full project and product lifecycle. Bring it on — we’re ready to navigate the storm!
Related podcast: The Standards Show ISO/IEC 42001 | AI management system
For further updates from Decision Lab, follow us on LinkedIn!