In an unexpected move, the Australian government has launched an eight-week consultation to determine how strictly the AI industry should be regulated, including whether any “high-risk” AI tools should be banned outright.
In recent months, other jurisdictions, including the United States, the European Union, and China, have also taken steps to identify and mitigate risks arising from the rapid development of AI.
Industry and Science Minister Ed Husic announced the release on June 1 of a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI from the National Science and Technology Council.
The documents form part of a consultation that runs until July 26.
The government is seeking input on how to support the “safe and responsible use of AI” and asks whether it should rely on voluntary measures such as ethical frameworks, introduce specific regulation, or combine the two approaches.
One question posed in the consultation asks whether any “high-risk” AI applications or technologies should be banned outright, and what criteria should be used to identify which AI tools ought to be prohibited.
The detailed discussion paper also put forward a draft risk matrix for AI models for feedback. As examples, it classified AI in self-driving cars as “high risk” and generative AI tools used for tasks such as creating medical patient records as “medium risk.”
The paper highlighted “harmful” uses of AI, such as deepfake tools, the creation of fake news, and cases in which AI chatbots had encouraged self-harm, alongside its “positive” uses in the medical, engineering, and legal industries.
Bias in AI models and “hallucinations” (AI-generated information that is false or nonsensical) were also cited as problems.
According to the discussion paper, AI adoption in Australia is “relatively low” because of “low levels of public trust.” It also pointed to AI regulation in other countries and Italy’s temporary ban on ChatGPT.
The National Science and Technology Council report found that Australia has some favourable AI capabilities in robotics and computer vision, but that its “core fundamental capacity in [large language models] and related areas is relatively weak.” It also stated:
“Australia faces potential risks due to the concentration of generative AI resources within a small number of large multinational technology companies with a preponderance of US-based operations.”
The paper went on to explore international AI policy, provided examples of generative AI models, and predicted that these technologies “will likely impact everything from banking and finance to public services, education, and everything in between.”