Companies would be required to have a human oversee artificial intelligence systems and to tell people when they are being affected by AI, under likely new legislation to regulate the high-risk technology.
As global regulators weigh laws to curb revolutionary applications of the technology, the government is beginning a month-long consultation to decide whether to create an AI-specific act or instead amend existing regulations and laws.
Industry and Science Minister Ed Husic said less than a third of businesses were using AI responsibly, prompting voluntary “guardrails” that begin on Thursday. These include keeping human control over systems, having plans in place to reduce risks, and allowing people affected by AI decisions – for example, a person denied an insurance policy – to challenge them.
Husic has spoken favourably about a new AI act that could make those steps mandatory. Government sources, not authorised to speak publicly, said new laws were firmly in the government’s sights.
“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails,” Husic said, citing Tech Council research suggesting AI could contribute more than $100 billion to the Australian economy by 2030.
“Business has called for greater clarity around using AI safely and today we’re delivering. We need more people to use AI and to do that we need to build trust.”
This masthead revealed last year that Husic was taking steps to regulate the technology.
Tools like ChatGPT and AI-assisted discovery of new antibiotics have highlighted the technology’s potential, but concerns have been raised about systems that challenge conceptions of justice and morality in government services, banking and military decision-making.