Department of Industry, Science and Resources releases its Voluntary AI Safety Standard


Australia has unveiled a comprehensive Voluntary AI Safety Standard as part of its broader Safe and Responsible AI agenda. This initiative aims to provide practical guidance for organisations developing and deploying artificial intelligence (AI) systems while balancing innovation with safety concerns.

The standard introduces 10 voluntary guardrails applicable across the AI supply chain, focusing primarily on AI deployers: organisations that use AI systems to provide products or services. These guardrails include key requirements such as:

  • Establishing accountability processes and governance
  • Implementing risk management strategies
  • Protecting AI systems and ensuring data quality
  • Conducting thorough testing and monitoring
  • Enabling meaningful human oversight
  • Ensuring transparency with end-users about AI interactions

A notable aspect of the standard is its human-centred approach, aligned with Australia’s AI Ethics Principles and international commitments like the Bletchley Declaration. This approach emphasises protecting people, upholding diversity and fairness, and prioritising human needs in AI system design and deployment.

The standard addresses the critical issue of bias, defining it as systematic differential treatment that can lead to unfairness. It specifically highlights concerns about unlawful discrimination based on protected attributes such as age, disability, race, sex, and sexual orientation.

To ensure international compatibility, the standard aligns with existing global frameworks, including AS ISO/IEC 42001:2023 and the US NIST AI RMF 1.0. This alignment supports Australian organisations operating internationally while avoiding barriers for international companies in Australia.

While the current version focuses on AI deployers, future iterations will include more complex guidance for AI developers. The standard serves as a foundation for potential future legislation, setting expectations for mandatory guardrails in high-risk AI applications while allowing low-risk uses to flourish largely unimpeded.

To read the standard in full, see here.
