
The use and regulation of artificial intelligence offshore: An emerging legal landscape

20 Apr 2026

Artificial intelligence (AI) is rapidly transforming industries globally, and financial services is not exempt. In this article, we look at developments in the UK Overseas Territories (UKOTs) and Crown Dependencies (CDs), focussing on the British Virgin Islands (BVI) and the Cayman Islands, which both serve as major international financial centres. As financial services businesses look to the future, embracing AI becomes increasingly important, and all international centres, including the BVI and the Cayman Islands, will need to consider how AI will be regulated and the impact of that regulation on day-to-day business operations.

This article examines the current legal and regulatory landscape governing AI in both jurisdictions, considering existing and comparative regulatory frameworks, data protection regimes, and the likely trajectory of future developments.

Spoiler alert: As at the date of this article, no offshore jurisdiction, as far as we are aware, has moved to comprehensively regulate AI technologies or uses. In this article, we speculate on what might be on the horizon and, more importantly, the checks and balances institutions and their participants should be putting in place in order to get a healthy head start on regulatory initiatives and to manage their AI risks more broadly. Firms should be in no doubt: regulators will themselves use AI tools to enforce stricter compliance, enabling them to scrutinise more thoroughly the returns submitted by their regulated entities.

Firstly, what do we mean, precisely, by “AI”?

The Organisation for Economic Co-operation and Development’s definition of an AI system in its AI Principles Overview publication[1] is a good place to start. AI is described as a machine-based system that, for explicit or implicit objectives, processes input to generate output such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. These systems vary in their levels of autonomy and adaptability after deployment.

How is AI being used in financial services and capital markets?

Based on the seminal AI report of the International Organization of Securities Commissions from March 2025, AI is being used increasingly in capital markets and financial services industries to enhance efficiency, decision-making, and compliance. Key use cases include the following:

  1. Decision-making support:
    • Robo-advising: AI systems provide automated investment advice and portfolio management, including portfolio optimisation and risk-return assessments.
    • Algorithmic trading: AI is used to process market data, monitor movements, and optimise trade execution strategies.
  2. Market analysis and insights: AI models analyse diverse data sources, including financial, macroeconomic, and social media data, to forecast asset prices, predict market trends, and identify patterns.
  3. Surveillance and compliance:
    • AI enhances anti-money laundering (AML) and counter-terrorist financing (CFT) measures by detecting suspicious transactions and automating compliance processes.
    • It is also used for fraud detection and monitoring client communications for regulatory compliance.
  4. Internal operations:
    • AI automates tasks like coding, document summarisation, and transcription, improving internal productivity.
    • Large language models assist in generating transaction summaries, meeting notes, and translations.
  5. Customer interaction: AI-powered chatbots and virtual assistants handle client queries, provide support, and enhance customer engagement.
  6. Risk management: AI systems improve risk assessment and management by analysing market conditions and identifying potential vulnerabilities.
Horizon scanning: AI legislative frameworks

As of early 2026, the most prominent AI-specific legislation remains the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (the EU AI Act). At present, the EU framework continues to represent the global benchmark for AI regulation. Firms should take note of its contents and requirements even where they are based outside the EU, as the framework will likely influence the direction of travel that offshore jurisdictions take when implementing AI measures into local regulatory frameworks. We have commented extensively on the EU AI Act; links to our publications may be found here.

The offshore jurisdictions in the UKOTs and CDs are starting to conduct their own consultations and discussions on the extent of AI regulation. We look comparatively below at the initial developments in some of these jurisdictions:

Cayman Islands

According to recent press releases from the Cayman Islands Government, the jurisdiction is actively preparing for AI legislation, targeting a draft framework by mid-2027.[2]

The Government has established a National Digital Transformation Task Force to guide the development of AI governance, focussing on balancing innovation with safeguards for public safety and rights. A preliminary AI policy for the civil service has already been drafted, emphasising oversight, risk management, and restrictions on unauthorised or harmful AI use. This policy mandates formal reviews for AI adoption in Government operations to address cybersecurity, legal, and technical risks.

Separately, the Cayman Islands Court of Appeal has issued a warning on the use of generative AI in legal filings, following a case where AI-generated references included non-existent legal precedents. The court emphasised the duty of litigants to ensure accuracy and transparency, cautioning that future misuse could lead to severe consequences. This reflects a broader regional trend toward regulating AI in legal and governmental processes to maintain integrity and accountability. See our post: Cayman Court issues warning on AI use in legal filings.

Bermuda

In July 2025, the Bermuda Monetary Authority (BMA) published its discussion paper “The Responsible Use of Artificial Intelligence in Bermuda’s Financial Services Sector”.[3] The BMA stressed that Bermuda is advancing its regulatory framework for AI in financial services, emphasising a principles-based, outcomes-focussed approach. The BMA's discussion paper outlines a comprehensive risk management framework that prioritises governance, board accountability, and proportional oversight. The framework addresses key areas such as data management, model validation, human oversight, and cybersecurity, while also considering the unique risks of generative and agentic AI systems. The BMA aims to balance innovation with safeguards, ensuring AI adoption aligns with Bermuda's reputation as a premier financial hub.

The BMA's approach integrates global regulatory insights, including those from the EU AI Act and the voluntary National Institute of Standards and Technology frameworks on cybersecurity in the United States, while tailoring them to Bermuda's financial ecosystem, which serves sophisticated institutional clients. The framework emphasises proportionality, allowing smaller institutions to innovate without excessive compliance burdens. The BMA also highlights the importance of transparency, fairness, and operational resilience, proposing measures like AI inventories, incident response plans, and third-party risk management. This initiative positions Bermuda as a forward-thinking jurisdiction for responsible AI adoption in financial services.

Jersey

Jersey has yet to implement specific AI legislation but is taking steps toward establishing a framework influenced by developments in larger jurisdictions such as the EU. Further, the Jersey Institute of Directors recently introduced comprehensive AI Adoption Guidelines to assist local businesses dealing with these issues. See our blog here.

A principle-based approach has already been adopted in areas such as education, with policies emphasising fairness, transparency, accountability, and inclusivity. Jersey Finance additionally highlights AI's potential to enhance efficiency and client relationships in the financial sector, though concerns about jurisdictional differences in AI regulations persist. Future developments may include either a codified law or an expansion of principle-based frameworks to regulate AI use responsibly.

Anguilla

Anguilla has not yet introduced an AI framework but has capitalised on its ".ai" internet domain, earning significant revenue from its association with artificial intelligence. In 2024, domain sales contributed $39 million, nearly a quarter of the island's total revenue, with projections for further growth. This income is being used to diversify Anguilla's economy, traditionally reliant on tourism, with plans to invest in infrastructure, healthcare, and a new airport to support sustainable development.[4]

(and last but not least) the United Kingdom

Offshore jurisdictions like the BVI and Cayman Islands, as UK Overseas Territories, often align their regulatory approaches with developments in the UK. For instance, in 2024, the UK’s Financial Conduct Authority (FCA) launched its AI Lab to foster collaboration among stakeholders, support AI innovation, and deepen understanding of AI’s risks and opportunities in financial markets. Additionally, the Bank of England and FCA conducted studies on machine learning in financial services, revealing widespread use of vendor models and pilot projects in capital markets, though mature applications remain limited. These efforts reflect a global trend of grappling with the complexities of regulating rapidly evolving AI technologies.[5]

What should offshore firms, specifically those in the BVI and Cayman Islands, be doing now to mitigate AI risks?

BVI and the Cayman Islands have historically adopted a principles-based regulatory approach, particularly in financial services, which may shape future AI governance. In the absence of specific AI legislation, existing legal frameworks—such as company law, contract law, tort law, data protection laws, and sector-specific financial regulations—govern AI use. Businesses deploying AI must navigate these established principles to address novel technological applications.

Key principles for AI governance include, but are not limited to: transparency, ensuring systems are understandable and disclosures are accurate; reliability, ensuring consistent and robust performance; and fairness, avoiding bias or discrimination. Security, privacy, accountability, and effective risk management are also critical, alongside reasonable human oversight to augment, not replace, effective decision-making and good governance. Regulators have also emphasised investor protection, issuing alerts and educational materials to raise awareness of AI-related fraud risks and urging firms to comply with existing laws on disclosure, registration, and marketing.[6] These measures aim to balance innovation with safeguards, ensuring responsible AI adoption in financial services.

Monitoring investor alerts and regulatory announcements

Regulators worldwide have taken proactive steps to educate investors, particularly retail investors, about the growing risks of securities fraud linked to artificial intelligence.[7] This has been achieved through the publication of investor alerts and educational materials. Firms should be monitoring these publications. Some of these resources provide general insights into FinTech risks, including references to AI and robo-advisory services, while others are specifically tailored to highlight AI-related investment fraud risks. Additionally, more targeted alerts focus on unregistered AI firms or products, emphasising the importance of regulatory compliance.

Beyond investor-focussed materials, regulators have also issued guidance for firms. For example, ESMA’s May 2024[8] guidance for firms using AI in investment services underscores the need for transparency, accountability, and adherence to existing laws. The overarching message is clear: investors should conduct thorough due diligence before investing in AI-focussed companies or using AI-driven investment tools.

Similarly, firms leveraging AI in financial services are expected to comply with all applicable regulations, including those governing disclosure, registration, and marketing practices. These efforts aim to foster trust and ensure responsible AI adoption while safeguarding market integrity and investor interests.

Data protection considerations

Data protection law is of central importance to AI governance, given that AI systems typically rely on the processing of large volumes of data, including personal data.

In the Cayman Islands, the Data Protection Act (2021 Revision) (the Cayman DPA) establishes a comprehensive framework for the collection, processing, and protection of personal data. The Cayman DPA imposes obligations on data controllers and data processors, including requirements relating to lawful processing, data security, and the rights of data subjects. Where AI systems process personal data, organisations must ensure compliance with these requirements. Of particular relevance is the principle of transparency, which may require organisations to inform individuals when automated decision-making is being used in ways that significantly affect them.

The BVI enacted the Data Protection Act 2021 (the BVI DPA), which similarly establishes a modern data protection regime. The BVI DPA imposes requirements concerning the fair and lawful processing of personal data, and organisations deploying AI must ensure that their data practices comply with these statutory obligations. The BVI DPA also addresses the rights of data subjects, which may be engaged where AI systems are used to make decisions affecting individuals.

Financial services regulation

The BVI and the Cayman Islands are highly regarded as significant offshore international financial centres, and the financial services sector is a key area where AI is being deployed. Applications include algorithmic trading, fraud detection, customer onboarding, AI chatbots, and regulatory compliance (often referred to as "RegTech").

The BVI Financial Services Commission (FSC) and the Cayman Islands Monetary Authority (CIMA) are the principal competent authorities overseeing financial services in their respective jurisdictions. While neither regulator has issued AI-specific guidance as of this date, both maintain high expectations regarding operational resilience, sound risk management practices, and robust governance that are relevant to the deployment of AI systems.

Financial services firms operating in these jurisdictions should expect regulators to take an interest in how AI systems are developed, tested, and monitored. Key areas of regulatory focus are likely to include model risk management, the explainability of AI-driven decisions, and the potential for algorithmic bias. Firms operating within the regulatory framework in these jurisdictions would be well-advised to develop robust governance frameworks for AI systems, including appropriate oversight mechanisms and audit trails. Where, for example, regulated firms are using AI as part of their business to generate advice, care should be taken to consider whether this type of activity falls within the regulatory parameters of the Securities and Investment Business Act (Revised Edition 2020) in the BVI and the Securities Investment Business Act (2025 Revision) in the Cayman Islands, together with any subsidiary legislation, regulatory rules, and guidance.

Anti-Money Laundering and AI

Both jurisdictions have robust and sophisticated AML/CFT regimes, and AI is increasingly being used to enhance AML compliance. In the Cayman Islands, the Proceeds of Crime Act (2025 Revision) and the Anti-Money Laundering Regulations (2025 Revision) establish the framework for AML compliance, while in the BVI, the Anti-Money Laundering Regulations (Revised Edition 2020) and the Anti-Money Laundering and Terrorist Financing Code of Practice (Revised Edition 2020) serve a similar function.

AI-powered transaction monitoring and customer due diligence tools are becoming more prevalent. While the use of such technology is not prohibited, firms must ensure that their AI systems are calibrated appropriately and that human oversight is maintained. Regulators will expect firms to be able to demonstrate that their AML controls, whether AI-assisted or otherwise, are effective and proportionate.

Contractual and tortious liability

While firms will need to embrace technology as part of their business products and services, the deployment of AI systems, policies, and procedures undoubtedly also raises important questions of liability. Where an AI system causes loss or damage, determining responsibility may be complex, particularly where the system's decision-making processes are opaque. Firms will need to ensure they have robust corporate governance policies, internal systems and controls, and business continuity arrangements in place in the event of an infraction.

The laws of the BVI and the Cayman Islands are based on English common law, and established principles of contract and tort will generally apply. Parties using AI as part of their business model to provide services to clients should ensure that their agreements clearly allocate risk and responsibility, including by addressing warranties as to performance, limitations of liability, and indemnities. To this end, firms operating in the financial services arena may also want to consider adopting specific indemnity insurance policies to adequately cover any shortfall or liability that may arise.

In the tortious context, questions may arise as to whether the entity in question providing the service through the use of AI bears responsibility for harm caused by that system. The resolution of such questions will depend on the specific facts and the application of established principles of negligence, product liability, and vicarious liability.

Intellectual property considerations

AI raises novel questions concerning intellectual property, including the ownership of AI-generated works (which can include advice) and the use of protected materials in training AI systems. Businesses operating in these jurisdictions should take care to ensure that their use of AI does not infringe third-party intellectual property rights, and should consider how ownership of AI-generated outputs is addressed in their contractual arrangements. To the extent firms own AI related assets there may also be relevant tax considerations to think about, particularly under the Economic Substance (Companies and Limited Partnerships) Act (Revised Edition 2020) in the BVI and the International Tax Cooperation (Economic Substance) Act (2025 Revision) in the Cayman Islands together with any subsidiary legislation and regulatory guidance.

Stay ahead of the curve

It is reasonable to assume that both the BVI and the Cayman Islands will continue to monitor international developments in AI regulation. The EU’s AI Act is likely to be influential in this regard. Given the close ties between these offshore jurisdictions and the UK legal system, guidance and legislation emanating from the UK may also be of relevance.

For the time being, businesses operating in the BVI and the Cayman Islands should focus on ensuring that their use of AI complies with existing legal requirements, including data protection, financial services regulation, and AML obligations. Developing robust internal governance frameworks for AI, including appropriate policies, procedures, and oversight mechanisms, will also be important in demonstrating responsible AI use to regulators and stakeholders.

Businesses should take a proactive approach to AI governance, ensuring compliance with data protection laws, financial services regulations, and AML/CFT requirements. As the international regulatory landscape continues to evolve, both jurisdictions can be expected to develop their approaches to AI governance, and stakeholders should remain attentive to future developments in this dynamic area of law.