
AI Tools and the Australian Privacy Principles

The Australian Privacy Principles now require disclosure when AI makes decisions about people. Most SMEs using AI tools don't realise they're already non-compliant.

Robbie Cronin · 5 min read


The Australian Privacy Principles now have something to say about your AI tools. Since the December 2024 amendments to the Privacy Act, businesses that use automated systems to make decisions affecting individuals must disclose that fact. If you've been feeding customer data into ChatGPT, screening job applicants with AI, or letting your CRM auto-score leads, the APPs apply to all of it.

Most SMEs adopted AI tools fast and asked compliance questions later. That's understandable. But the law has caught up, and the gap between how businesses use AI and how the Australian Privacy Principles expect them to use it is wider than most people realise.

The Automated Decision-Making Rule

The December 2024 amendments introduced a transparency requirement for automated decision-making. If your business makes a decision that substantially affects an individual, and that decision is made substantially by an automated system, you need to tell the person.

The key word is "substantially." If someone on your team reviews every AI output before acting on it, that's different from a system that auto-rejects applications or auto-flags customers without human review. The more the machine decides on its own, the more likely this requirement applies.
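If you build or buy software, the distinction is easy to see in code. Here's a minimal sketch in Python; every function, name, and threshold in it is a made-up placeholder, not any real vendor's API:

```python
# Sketch of the "substantially automated" distinction. All names and
# values here are invented placeholders, not a real hiring platform.

AUTO_REJECT_THRESHOLD = 0.3  # hypothetical cutoff

def score_application(application: dict) -> float:
    """Stand-in for an AI model's suitability score (0.0 to 1.0)."""
    return 0.25  # dummy value for illustration

def decide(application: dict, human_review: bool = True) -> str:
    score = score_application(application)
    if human_review:
        # A person sees every output before anything happens to the
        # applicant, so the machine isn't "substantially" deciding.
        return f"queued for human review (score={score})"
    # No human in the loop: the system acts on its own, which is where
    # the disclosure obligation is most likely to apply.
    return "auto-rejected" if score < AUTO_REJECT_THRESHOLD else "auto-accepted"
```

The legal question turns on that second branch. If your tooling ever takes it, disclosure is on the table.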

This isn't a ban on automation. You can still use AI tools. The obligation is disclosure: people have a right to know when a machine played a major role in a decision about them.

Where this catches SMEs off guard is the off-the-shelf stuff. You didn't build the AI. You just subscribed to a hiring platform that uses it, or a CRM that scores leads automatically, or an insurance tool that auto-triages claims. The obligation still sits with you. Your vendor built the system. You made the decision to use it on your customers.

Where AI Tools Create APP Problems

Three areas come up repeatedly with the SMEs I work with.

Data going offshore. Most AI tools process data on servers outside Australia. When you paste customer information into an AI chatbot, or upload resumes to an AI screening tool, that data is leaving the country. APP 8 requires you to take reasonable steps to ensure overseas recipients handle personal information consistently with the APPs. Your AI vendor's terms of service are not a substitute for checking this.

Collection creep. APP 3 says you should only collect personal information that's reasonably necessary. AI tools are hungry for data. The more you feed them, the better they work. But "better AI output" isn't the same as "reasonably necessary." If you're uploading full customer records into an AI tool when you only need names and order history, you're overcollecting.
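To make that concrete, minimisation can be as simple as an allow-list applied before anything leaves your systems. A rough sketch; the field names here are invented, not from any real CRM:

```python
# Sketch: send only the fields the task needs, not the whole
# customer record. Field names are illustrative.

ALLOWED_FIELDS = {"first_name", "order_history"}

def minimise(record: dict) -> dict:
    """Drop everything not on the allow-list before it reaches an AI tool."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer = {
    "first_name": "Dana",
    "order_history": ["2024-11-02: 3x widgets"],
    "email": "dana@example.com",    # not needed for this task
    "date_of_birth": "1987-04-12",  # definitely not needed
}

print(minimise(customer))
# {'first_name': 'Dana', 'order_history': ['2024-11-02: 3x widgets']}
```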

Retention and training. Some AI platforms use the data you input to train their models. That means personal information you submitted for one purpose, like answering a customer query, gets used for another purpose: improving the AI. Under APP 6, you can't use personal information for a purpose other than the one you collected it for, unless an exception applies. Check your AI vendor's data usage policy. If they train on your inputs, you have an APP 6 problem.

What to Actually Do

You don't need to stop using AI. You need to know what you're doing with it.

Audit your AI tools. List every tool in your business that uses AI or automation to process personal information. Your CRM's lead scoring, your hiring platform's resume screening, your chatbot, your scheduling tool. For each one, note what data goes in, where it's processed, and whether any decisions happen without human review.
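A spreadsheet works for this, but the audit is easier to keep current if it lives somewhere structured. A minimal sketch of what such a register could look like; the tool names and values are invented examples:

```python
# Sketch: a register of AI tools that touch personal information.
# Every entry here is an invented example.

AI_TOOL_REGISTER = [
    {
        "tool": "CRM lead scoring",
        "data_in": ["name", "email", "engagement history"],
        "processed_in": "US",
        "human_review": False,  # scores are acted on automatically
    },
    {
        "tool": "Hiring platform resume screening",
        "data_in": ["resume", "cover letter"],
        "processed_in": None,   # unknown: a gap to chase with the vendor
        "human_review": True,
    },
]

# Tools that decide without human review are the first disclosure candidates.
for entry in AI_TOOL_REGISTER:
    if not entry["human_review"]:
        print(f"Disclosure check needed: {entry['tool']}")
```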

Update your privacy policy. If any of those tools make decisions that substantially affect individuals, your privacy policy needs to say so. You don't need to explain the algorithm. You need to disclose that automated decision-making occurs, what kinds of decisions are involved, and how someone can request human review.

Check vendor data handling. For each AI tool, confirm three things. Where is data processed? Is it used to train the model? And does the vendor's data processing agreement (DPA) cover APP-equivalent protections? If you can't answer these questions, you can't demonstrate APP 8 compliance.
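Those three questions can ride along in the same register sketched above, so an unanswered question stays visible instead of being forgotten. Again, every name and value here is invented:

```python
# Sketch: the three vendor questions as mandatory fields, where None
# means "we don't know yet" -- which is itself a finding.

VENDOR_CHECKS = {
    "CRM lead scoring": {
        "processed_in": "US",
        "trains_on_inputs": False,
        "dpa_covers_app_equivalents": True,
    },
    "AI chatbot": {
        "processed_in": None,      # unanswered
        "trains_on_inputs": None,  # unanswered
        "dpa_covers_app_equivalents": None,
    },
}

for tool, checks in VENDOR_CHECKS.items():
    gaps = [question for question, answer in checks.items() if answer is None]
    if gaps:
        print(f"{tool}: can't demonstrate APP 8 compliance yet ({', '.join(gaps)})")
```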

Minimise what you feed in. Before pasting customer data into an AI tool, ask whether you actually need all of it. Strip out identifying details when you can. Use dummy data for testing. The less personal information that enters the system, the smaller your exposure if something goes wrong.
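For free text, even a crude redaction pass helps. Here's a rough Python starting point using regular expressions. It only catches emails and Australian-style mobile numbers, and regex alone will miss names, addresses, and anything unusual, so treat it as harm reduction rather than a guarantee:

```python
import re

# Sketch: strip obvious identifiers before text is pasted into an AI
# tool. Regex-based redaction is a blunt instrument; it reduces
# exposure rather than eliminating it.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
MOBILE = re.compile(r"(\+61\s?|0)4\d{2}[\s-]?\d{3}[\s-]?\d{3}")  # AU mobile-style only

def redact(text: str) -> str:
    text = EMAIL.sub("[email]", text)
    text = MOBILE.sub("[phone]", text)
    return text

query = "Customer Dana (dana@example.com, 0412 345 678) wants a refund."
print(redact(query))
# Customer Dana ([email], [phone]) wants a refund.
```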

The Enforcement Direction

The OAIC hasn't brought an AI-specific privacy enforcement action against an Australian SME yet. But the groundwork is there. The statutory tort for serious privacy invasions means individuals can sue directly in Federal Court. The broadened definition of personal information captures the kind of data AI tools ingest. And the automated decision-making transparency requirement is now law.

Regulators overseas are already moving. The EU's AI Act and the UK ICO's guidance on automated decision-making are producing enforcement actions. Australia tends to follow. The question isn't whether the OAIC will look at AI compliance. It's when.

Getting your AI use documented and disclosed now is cheaper than retrofitting it after someone complains.
