Global initiative tackles AI ethical challenges

Amid growing concerns about issues such as bias, transparency, accountability and safety, a global initiative has recently been launched to develop ethical AI standards. It is an invitation for organisations to collaborate on a voluntary basis.
  • Author: JASANZ

As Artificial Intelligence (AI) becomes ever more deeply integrated into many aspects of society, bringing with it transformative potential, addressing its ethical implications has become paramount. Recognising the need for a cohesive global response to these ethical challenges, the Walbrook AI QI Accord has launched a pioneering initiative aimed at developing ethical AI standards.

This initiative was highlighted earlier in May at the TIC Summit 2024 in Brussels, where representatives from over 25 nations’ quality infrastructure organisations signed the Walbrook AI QI Accord, marking a significant step towards unified ethical standards for AI.

The primary focus of the Accord is to deliver three tangible outcomes:

  • Advocating for the adoption of Quality Infrastructure (QI) for AI assurance,
  • Developing assurance standards and methodologies, and
  • Facilitating training and skill development for AI assurance professionals.

The initiative emphasises a collaborative approach, inviting organisations and professionals to participate voluntarily. By leveraging the global QI system, which ensures the safety, security, and sustainability of products and services through risk-based assurance and continuous improvement, the Walbrook AI QI Accord aims to provide a rigorous assurance system for AI. This collaboration is crucial for addressing the complex ethical issues associated with AI on a global scale.
