
Privacy regulator provides clarity on artificial intelligence

Updated: Dec 10, 2024

On 21 October 2024, Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC), published two guides on how the existing privacy law, the Privacy Act 1988 (Cth) (Privacy Act), applies to artificial intelligence (AI) systems:


  1. Guide for businesses using commercial AI products, such as chatbots, content-generation tools and productivity assistants that augment writing, coding, note-taking and transcription; and

  2. Guide for developers using personal information to train AI models, including organisations that design, build, train, adapt, fine-tune or combine AI models and applications.


The guides provide important clarity from the OAIC on how this rapidly evolving and increasingly ubiquitous technology is regulated where it involves ‘personal information’ (information or an opinion about an identified individual, or an individual who is reasonably identifiable). We have outlined some of the key insights from each guide below.


Insights for businesses using AI

  • Caution advised. Businesses seeking to use AI should exercise caution, given that, depending on the use case, it can present a high privacy risk.

  • Approach. Take a ‘privacy by design’ approach, which includes conducting a Privacy Impact Assessment before deploying AI. This means undertaking a systematic assessment of the AI that identifies its impact on the privacy of individuals and sets out recommendations for managing, minimising or eliminating that impact.

  • Always review AI terms. Review the legal terms which apply to a business’ use of AI to determine whether the provider has access to (or use of) the data which the organisation inputs or generates when using the AI (for example, to train or develop the provider’s own AI). If it does, this may affect the organisation’s compliance with its obligations under the Privacy Act.

  • Transparency. While the OAIC notes the ‘black box’ problem posed by AI (the difficulty in understanding how complex AI systems generate output), businesses must nevertheless take steps to ensure that they are transparent about their handling of personal information in relation to AI. This includes implementing any required changes to their Privacy Policy, Privacy Collection Notices and privacy practices to ensure that use of AI does not jeopardise compliance with the Privacy Act.

  • AI-generated information is regulated. Inferred, incorrect or artificially generated information produced by AI (such as hallucinations and deepfakes), where it is about an identified or reasonably identifiable individual, constitutes personal information and must be handled in accordance with the Privacy Act.

  • Use of public AI discouraged. The OAIC recommends that businesses do not enter personal information, and particularly ‘sensitive information’ (such as health, racial, religious or biometric information), into publicly available AI tools due to the significant and complex privacy risks involved.

  • Sensitive information. Care must be taken with sensitive information, which a business generally needs consent to handle. Photographs or recordings of individuals (including artificially generated ones) may contain sensitive information and therefore may not be able to be generated by (or used as input data for) AI without the individual’s consent. Consent cannot be implied merely because an individual was notified of a proposed collection of sensitive information.

  • Inaccurate AI output. AI can produce inaccurate or false results. The Privacy Act requires that businesses take reasonable steps to ensure the personal information they collect, use and disclose is accurate. Businesses must therefore take steps to ensure accuracy that are proportionate to the likely increased level of risk in an AI context. This includes the appropriate use of disclaimers which clearly communicate any limitations in the accuracy of the AI.

  • AI in decision-making. The use of AI in relation to decisions that may have a legal or similarly significant effect on an individual’s rights (for example, loan or insurance assessments) is likely a high privacy risk activity, and particular care should be taken in these circumstances. This includes considering the accuracy and appropriateness of the AI product for the intended purpose.

  • Internal measures. Internal policies, monitoring and oversight practices should be implemented for the use of AI which clearly define the permitted and prohibited uses. Processes for human oversight and verification of AI outputs should be established, particularly where the outputs contain personal information or are relied on to make decisions in relation to a person. A ‘set and forget’ approach should not be adopted.

Insights for developers training AI

  • Caution advised. Where there is any doubt about the application of the Privacy Act to specific AI-related activities, developers should err on the side of caution and assume it applies to avoid regulatory risk and ensure best practice.

  • Approach. Take a ‘privacy by design’ approach, which is a process for embedding good privacy practices into the design specifications of technologies, business practices and physical infrastructures. This includes conducting a Privacy Impact Assessment (as described above) both before developing or fine-tuning AI and on an ongoing basis as the privacy impact of the AI becomes clearer throughout development.

  • Transparency. Regardless of how developers compile the datasets used by AI, they must take steps to ensure that they are transparent about their handling of any personal information collected. This includes having a clear and up-to-date Privacy Policy, Privacy Collection Notices and appropriate privacy practices to ensure compliance with the Privacy Act.

  • Consider personal information in large datasets. Developers using large volumes of information to train AI should always consider whether the information includes personal information and what the privacy implications are. This is particularly so where the information is of unclear origin. This includes inferred, incorrect or artificially generated information produced by AI (such as hallucinations and deepfakes) where it is about an identified, or reasonably identifiable, individual. The data should be considered in its totality, including any associated metadata, annotations, labels or other descriptions attributed to the data as part of its processing. Developers should also be aware that information that would not be personal information by itself may become personal information in combination with other information.

  • Data collection minimisation. The Privacy Act requires developers to collect or use only the personal information that is ‘reasonably necessary’ for their purposes. An important aspect of considering what is ‘reasonably necessary’ is first specifying the purpose of the AI. This should be a current purpose, rather than collecting information for a potential, undefined future AI product. Once the purpose is established, developers should consider whether they could train the AI without collecting or using the personal information at all, by collecting or using a lesser amount (or fewer categories) of personal information, or by de-identifying the personal information (a simple illustration appears after this list).

  • Third-party datasets. When developers obtain a third-party dataset that contains personal information, this will generally constitute a ‘collection’ of personal information by the developer and will trigger privacy obligations. The OAIC recommends that developers seek assurances from third parties in relation to the dataset, including through contractual terms, that the third party’s collection of the personal information, and its disclosure for the purposes of the developer training the AI, does not breach the Privacy Act.

  • Placing privacy obligations on users. Where developers build or structure their AI in a way that places the obligation on downstream users of the AI to consider privacy risks, the OAIC suggests developers provide any information or access necessary for the downstream user to assess this risk in a way that enables all entities to comply with their privacy obligations. However, best practice is for developers to assume the Privacy Act applies to them to avoid regulatory risk.

  • Use of public data. The fact that data is publicly available or accessible does not mean it can lawfully be used to train or fine-tune generative AI. Developers must consider whether data they intend to use or collect contains personal information and comply with their obligations under the Privacy Act.

  • Use of de-identified information. If a developer collects information about identifiable individuals with the intention of de‑identifying it before AI model training, this is still considered a collection of personal information, and the developer needs to comply with its obligations under the Privacy Act. Further, de-identifying personal information is itself considered a ‘use’ of that personal information for a secondary purpose (see the ‘Secondary use of existing data’ point below for the implications).

  • Sensitive information scraped from the web or from third parties. The Privacy Act generally requires consent from individuals in order to handle their ‘sensitive information’ (such as health, racial, religious or biometric information). AI input data such as photographs or recordings of individuals (including artificially generated ones) may contain sensitive information and therefore may not be able to be scraped from the web or collected from a third-party dataset without procuring consent from the individual to whom that information relates.

  • Secondary use of existing data. Where developers are seeking to use personal information that they already hold for the purpose of training AI, and this was not a primary purpose of collection (but rather, a secondary purpose), they need to carefully consider their privacy obligations. This may involve seeking consent from relevant individuals, unless the secondary purpose is within the reasonable expectations of the individual and related (or, in the case of sensitive information, directly related) to the primary purpose.

  • Inaccurate AI output. AI can produce inaccurate or false results. The Privacy Act requires that developers take reasonable steps to ensure the personal information they collect, use and disclose is accurate. Developers must take steps to ensure accuracy that are proportionate to the likely increased level of risk in an AI context. This includes using high-quality datasets, undertaking appropriate testing and making appropriate use of disclaimers.

  • AI in decision-making. The use of AI in relation to decisions that may have a legal or similarly significant effect on an individual’s rights (for example, loan or insurance assessments) is likely a high privacy risk activity, and particular care should be taken in these circumstances. In this case, careful consideration should be given to the development of the AI model and more extensive safeguards will be appropriate.
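
For developers, the data minimisation and de-identification points above can be illustrated with a short sketch. The Python example below is our own illustration, not drawn from the OAIC guides: the field names, regular expressions and salt are hypothetical, and it shows only the mechanics of dropping fields that are not reasonably necessary for a stated purpose, redacting common identifiers in free text, and pseudonymising a direct identifier with a salted hash. Genuine de-identification is considerably harder than this, and re-identification risk from combining fields, metadata and other datasets needs separate assessment.

```python
import hashlib
import re

# Illustrative sketch only (our example, not from the OAIC guides):
# a minimal pre-training pass over a record that
#   1. keeps only the fields reasonably necessary for a stated purpose,
#   2. redacts common identifiers (emails, phone numbers) in free text, and
#   3. optionally pseudonymises a direct identifier with a salted hash.
# Field names, patterns and the salt below are hypothetical.

NECESSARY_FIELDS = {"review_text", "product_category", "star_rating"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")
SALT = "replace-with-a-secret-salt"  # placeholder value


def pseudonymise(value: str) -> str:
    """One-way salted hash: records stay linkable without exposing the raw identifier."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]


def redact_free_text(text: str) -> str:
    """Replace common identifier patterns in free text with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)


def minimise_record(record: dict) -> dict:
    """Drop fields outside the stated purpose and redact identifiers in what remains."""
    out = {k: v for k, v in record.items() if k in NECESSARY_FIELDS}
    if "review_text" in out:
        out["review_text"] = redact_free_text(out["review_text"])
    # If linkage across records were genuinely necessary, a pseudonym could be
    # kept instead of the raw identifier, e.g.:
    # out["customer_ref"] = pseudonymise(record["customer_email"])
    return out


record = {
    "customer_email": "jane@example.com",
    "review_text": "Great product! Call me on +61 400 123 456.",
    "product_category": "kitchen",
    "star_rating": 5,
}
print(minimise_record(record))
# {'review_text': 'Great product! Call me on [PHONE].', 'product_category': 'kitchen', 'star_rating': 5}
```

Even after steps like these, the OAIC’s point stands that apparently anonymous data may become personal information when combined with other data, so de-identification should be assessed in the context of the whole dataset rather than record by record.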


Future developments


As the law attempts to play catch-up with technology, we expect to see more legal developments impacting AI, including the anticipated privacy law reforms. The first tranche of these (the Privacy and Other Legislation Amendment Bill 2024 (Cth)) is currently before Parliament. We will examine it in a separate update once it is passed by Parliament.


Legal Notice

The contents of this article are for reference purposes only and may not be current as at the date of accessing this article. The contents of this article do not constitute legal advice and should not be relied upon as such. Legal advice about your specific circumstances should always be sought separately before taking any action based on this article.

© 2023 Corptech Legal Pty Ltd

Liability limited by a scheme approved under Professional Standards Legislation.
