Harneys AI acceptable use policy

Purpose

This policy outlines Harneys’ use of artificial intelligence (AI) and generative AI (GenAI) tools in the provision of legal and business services. The objective is to ensure responsible, secure, and ethical use consistent with professional obligations, client confidentiality, and data protection laws.

Principles of responsible AI use

The firm’s use of AI will adhere to the following principles:

    1. Client Confidentiality: No client or matter information may be input into any AI or GenAI tool unless the tool has been approved for firm use and subject to contractual, technical, and organisational safeguards. Under no circumstances will any client data be used for model training.
    2. Human Oversight: AI outputs must always be reviewed and validated by appropriately qualified lawyers or staff before use.
    3. Transparency: The firm will be transparent with clients about any material use of AI tools in client work where such use may influence advice or deliverables.
    4. Accountability: Responsibility for the quality and accuracy of work remains with the human professionals supervising the matter, not with the AI system.
    5. Security and Data Governance: Only approved, enterprise-grade AI tools that meet the firm’s information security and privacy standards may be used.
    6. Compliance and Ethics: Use of AI must comply with applicable law, professional conduct rules, and the firm’s internal ethics and information security policies.
    7. Court Rules: In relation to any litigation proceedings, where any court documents are generated with the assistance of AI tools, full disclosure of such use will always be made in compliance with the applicable civil procedure rules in that jurisdiction.

Approved tools

In relation to client work, the firm will only use:

    1. approved enterprise-grade AI tools that meet the firm’s information security and privacy standards,
    2. which are designed for use by the legal profession with appropriate data security for legal client confidentiality,
    3. which are licensed from the supplier (and never used on a consumer model basis), and
    4. which do not store or process data within the United States.

The firm will not use any AI or GenAI tool in relation to a client’s work where that client has instructed us not to do so.

At present, the following AI tools have been approved for use by the firm in relation to client work:

    1. Harvey AI
    2. Legora
    3. Definely
    4. Contract Express

The AI Governance Committee may, from time to time, approve other tools which comply with the above criteria.

Permitted use

Approved tools may be used to assist with:

  1. Drafting or summarising documents,
  2. Knowledge management and internal research,
  3. Administrative and operational efficiency tasks, and
  4. Predictive or analytical tasks authorised by the AI Governance Committee.

Prohibited use

Users must not:

  1. Input client, personal, or confidential firm data into any unapproved AI system, including public models such as ChatGPT or Gemini.
  2. Use AI to generate or amend client deliverables without human review.
  3. Rely on AI outputs for factual or legal assertions without verification against approved external sources.

Oversight and governance

The AI Governance Committee shall be responsible for:

  1. Tool approval and monitoring,
  2. Staff training and compliance,
  3. Incident reporting and audits, and
  4. Regular policy review (at least annually).

Last updated on 23 January 2026