
Artificial intelligence offers Australia innovation, efficiency, and economic growth, but it could also widen inequality. Human Rights Commissioner Lorraine Finlay warns that without strict regulation, AI tools may deepen racism and sexism, and says productivity gains must never come at the expense of fairness and equality.
This warning comes as the Australian government debates its AI strategy. Divisions are growing within the ruling Labor party: some members want more transparency about how AI models are trained, while others push for strong legal protections. The challenge of developing AI while protecting human rights is one governments around the world are grappling with.
Next week’s federal economic summit will address AI’s productivity potential, but unions, media groups, and arts organisations have already voiced concerns about intellectual property theft and privacy breaches. Many fear that, without careful oversight, AI could not only harm workers but also undermine trust in the technology.
Experts Warn of Entrenched Racism and Sexism Through AI
Lorraine Finlay stresses that AI bias in Australia stems from hidden flaws in training datasets. A lack of transparency makes it nearly impossible to identify the extent of these biases.
“Algorithmic bias means unfairness is built into the tools we use,” she explains. “When combined with automation bias, where humans overly trust machine decisions, discrimination can become so entrenched we may not even notice it.”
She urges the government to adopt bias testing, regular audits, and human oversight. Finlay supports a dedicated AI act to strengthen existing laws like the Privacy Act.
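Neither Finlay nor the government has published a specific testing protocol, but as a rough illustration, routine bias testing often compares outcome rates across demographic groups. The sketch below is a minimal example in Python, with invented numbers and a hypothetical hiring model’s decisions, applying the common “four-fifths” rule of thumb:

```python
# Minimal sketch of one common bias test: comparing selection rates
# across demographic groups (demographic parity). The audit log and
# the 0.8 threshold (the "four-fifths rule") are illustrative only.

from collections import defaultdict

# Hypothetical audit log: (applicant_group, model_decision)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Flag the model for human review if any group's selection rate falls
# below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact against {group}: "
              f"{rate:.2f} vs best rate {best:.2f}")
```

A real audit would run on production decision logs at regular intervals and route any flagged disparity to a human reviewer, which is the kind of ongoing oversight Finlay describes.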
Labor Party Divided on AI Data Access
Labor senator Michelle Ananda-Rajah challenges the idea that strict regulation alone is the answer. She argues that AI tools must be trained on Australian data to avoid inheriting overseas biases.
She warns that without access to domestic datasets, Australia risks becoming dependent on overseas AI models with little control or insight into their workings. “AI must be trained on diverse Australian data or it will amplify biases,” she says.
While she opposes a dedicated AI act, Ananda-Rajah supports paying content creators fairly. She believes Australia can protect rights while still freeing enough data to improve AI representation.
Risks in Medicine and Recruitment Highlight Urgency
Recent research in Australia shows how AI discrimination can harm real lives. In May, a study found AI recruitment tools discriminated against candidates with accents or disabilities. Similar patterns appear in healthcare, including AI skin cancer screening systems that misdiagnose certain groups due to algorithmic bias.
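The cited studies are not reproduced here, but disparities like these typically surface when a model’s error rates are broken down by group. The following sketch uses entirely invented data for a hypothetical screening model and compares false negative rates, the missed diagnoses that matter most in cancer screening:

```python
# Illustrative only: comparing false negative rates (missed diagnoses)
# across groups for a hypothetical skin cancer screening model.
# All records below are invented for the example.

records = [
    # (group, true_label, predicted_label); 1 = cancer present
    ("lighter_skin", 1, 1), ("lighter_skin", 1, 1), ("lighter_skin", 0, 0),
    ("darker_skin", 1, 0), ("darker_skin", 1, 1), ("darker_skin", 0, 0),
]

def false_negative_rate(rows):
    """Share of true positives the model missed."""
    positives = [r for r in rows if r[1] == 1]
    misses = [r for r in positives if r[2] == 0]
    return len(misses) / len(positives) if positives else 0.0

for group in sorted({r[0] for r in records}):
    group_rows = [r for r in records if r[0] == group]
    print(group, "false negative rate:", false_negative_rate(group_rows))
```

A gap between the two rates, even on a small sample like this, is the signal that would prompt the audits and representative training data the experts call for.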
Ananda-Rajah points out that these flaws can be reduced if models are trained on wide-ranging, representative local datasets. She insists that safeguards for sensitive information must go hand in hand with data access.
Tech Industry and Academics Call for Balanced Approach
Judith Bishop, an AI expert from La Trobe University, agrees that Australian data can help tailor AI tools to local needs. However, she says this is only part of the solution. “We must ensure systems built overseas can be adapted properly for Australians,” she notes.
Julie Inman Grant, the eSafety commissioner, adds that transparency is essential: tech companies should disclose where their training data comes from and ensure diversity within their datasets. Without that transparency, she warns, AI systems risk amplifying destructive biases, entrenching gender stereotypes and marginalising voices.
The Push for Stronger AI Governance
However much the experts differ on the scope of data sharing, they agree on one point: regulation is non-negotiable. In Finlay’s view, better datasets help, but laws, oversight and accountability are the essential foundations of fair AI use, not only in public-interest settings but wherever markets and institutions bear responsibility.
Under strong governance, regular bias audits and clear accountability measures can catch discrimination before it spreads through automated decision-making. The goal is not to halt innovation in artificial intelligence, but to ensure it delivers equity for all Australians.
Protecting Communities While Harnessing AI’s Potential
Australia stands at a crossroads that will shape the future of AI, and the conversation is about far more than technology: it is about values, representation, and humanity. Experts like Finlay and Ananda-Rajah have highlighted the urgent need for balanced policies that encourage innovation and creativity while safeguarding communities from harm.
By combining transparent AI practices, representative datasets, and a sound legal framework, Australia can build technology that reflects and respects its diverse communities. The stakes are high, and so is the opportunity to get this right.