Australian Academy of Technological Sciences and Engineering (ATSE) CEO, Kylie Walker.

The federal government has proposed new legal requirements for artificial intelligence in high-risk settings and a new Voluntary AI Safety Standard for Australian businesses.

While most Australians support AI, there are fears that more legal protections are required to address the real risk of harm.

The federal government is seeking to fill gaps in existing law by adopting a risk-based approach to the development and deployment of AI.

UTS Human Technology Institute co-director, Professor Edward Santow said a risk-based approach to AI reform would bring Australia into line with other jurisdictions, such as the European Union and Canada.

“But Australia has been slow to act, and the government should commit to introducing legislation by 2025 at the latest,” he said.

“While reform is overdue, regulators should do more now to enforce the laws we already have. Our existing anti-discrimination, consumer protection and other laws apply to the use of AI just as they do to all other technologies.

“Those existing laws need to be enforced and applied more effectively.”

The publication this week of the government’s Voluntary AI Safety Standard (the Voluntary Standard) is a key milestone in the Australian government’s developing approach to safe and responsible AI.

Aligned with existing international approaches on AI governance and current legal requirements, the Voluntary Standard supports organisations to incorporate principles of AI governance into existing policies, procedures and processes.

It provides practical guidance that can be used by Australian businesses to unlock the potential of AI, while minimising the risk of harm to their customers, users and the wider community.

In its work on the Voluntary Standard, HTI’s primary focus was to underpin its 10 guardrails with a human-centred approach.

This means prioritising the safety of people and the protection of their human rights; upholding principles of diversity, inclusion and fairness; incorporating human-centred design; and ensuring the system is trusted by users and the wider community.

The Australian Academy of Technological Sciences and Engineering (ATSE) said the Voluntary AI Safety Standard is a prudent step.

ATSE CEO Kylie Walker said Australia has the potential to lead the world in responsible AI.  

“Greater adoption of AI could see Australia’s economy increase by $200 billion annually, but it is critical that robust measures are rapidly implemented to safeguard these areas and position Australia at the forefront of AI development,” Walker said. 

 “This is Australia's AI moment. Ultimately, these proposals will help Australia lead in both technological and regulatory innovation in AI, setting a global standard for responsible and effective AI development and deployment. 

“Investing further in local AI innovations will simultaneously create new AI industries and jobs here in Australia and reduce our reliance on internationally developed and maintained systems.”