Blog page img

Our Blog

Learn About The Latest Issues Facing The Technology and Telecommunications Industries. Subscribe To Our Blog And Get Regular Updates Automatically!

Some of our Satisfied Clients

Startups, SMEs, Public Listed Entities, Multinational Corporations and Government

featured in

The Australian Cyber Security Centre releases official guidance for the use of AI systems in organisations

Cybersecurity

The Australian Cyber Security Centre (ACSC), collaborating with various international governmental partners, has released a guidance paper for organisations on the safe and secure use of artificial intelligence (AI) systems.

Central to the paper is the recognition of AI’s transformative potential across various sectors. However, harnessing this potential requires a nuanced approach that prioritises responsible governance and proactive risk management. The paper underscores the importance of adopting a strategic framework that encompasses governance, collaboration, ethics, user education, and research.

However, before harnessing the opportunities of AI, the paper describes potential risks, with case studies, that arise when using AI. Principally, these include:

  • data poisoning of an AI model
  • input manipulation attacks, such as prompt injection
  • generative AI hallucinations
  • privacy and intellectual property concerns, and
  • model stealing attacks.

As such, governance emerges as a cornerstone of effective AI engagement, emphasising the need for clear policies, standards, and guidelines tailored to AI adoption. Such frameworks provide organisations with a roadmap for making informed decisions and mitigating potential risks associated with AI technologies. The paper proposes eleven mitigation strategies when using AI as follows:

  1. Has the organisation implemented the cyber security framework relevant to its jurisdiction?
  2. How will the system affect the organisation’s privacy and data protection obligation?
  3. Does the organisation enforce multifactor authentication?
  4. How will the organisation manage privileged access and backups to the AI system?
  5. Can the organisation implement a trial of the AI system?
  6. Is the AI system secure by design, including in the supply chain?
  7. Does the organisation understand the limits and constraints of the AI system?
  8. Does the organisation have suitably qualified staff to ensure the AI system is set up, maintained, and used securely?
  9. Does the organisation conduct health checks of the AI system?
  10. Does the organisation enforce logging and monitoring?
  11. What will the organisation do if something goes wrong with the AI system?

Additionally, collaboration between government, industry, and academia is deemed essential for fostering innovation, knowledge exchange, and the development of best practices in AI governance. By leveraging collective expertise and resources, stakeholders can address challenges more effectively and enhance cybersecurity resilience in the AI era.

For a full reading of the paper, see here.

"Stellar Results Through Technology Contract Negotiations"

Are you putting your business at risk with lawyers who don’t understand Technology Contracts?

free book