For a financial services firm, Section 912A and Directors’ Duties are the twin engines of your compliance framework. In the 2026 “Year of Accountability,” ASIC is no longer treating these as abstract concepts; they are the primary tools used to regulate the “black box” of AI and automated decision-making (ADM).



The Legal Bedrock: s912A & Directors’ Duties

To hold an Australian Financial Services Licence (AFSL) is to make a professional commitment to a high standard of conduct. This commitment is codified in Section 912A of the Corporations Act 2001 and the fiduciary duties of your company’s officers.

⚖️ Section 912A: The General Obligations

Section 912A is “technology neutral,” meaning it applies to your algorithms exactly as it applies to your human advisors.

| Obligation | Statutory Ref | What it means for AI & ADM |
| --- | --- | --- |
| Efficient, Honest & Fair | s912A(1)(a) | Your AI must not be a “black box.” It must produce outcomes that are fair, unbiased, and capable of being explained to the client. |
| Conflicts of Interest | s912A(1)(aa) | You must ensure your AI (e.g., a robo-advice tool) isn’t programmed to unfairly favour your firm’s own products over the client’s interests. |
| Adequate Resources | s912A(1)(d) | You must have the technological and human resources to monitor your AI. If you use it, you must understand it. |
| Trained & Competent | s912A(1)(f) | Your representatives must be trained to supervise ADM systems and intervene if an automated outcome is unsuitable. |
| Risk Management | s912A(1)(h) | Your risk framework must specifically address “model drift,” data poisoning, and algorithmic bias. |
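To make the “model drift” obligation concrete, here is a minimal, illustrative sketch (not a compliance tool) of one common drift check, the Population Stability Index, which compares a model’s current score distribution against the distribution it was validated on. The function name, bin count, and alert thresholds are our own assumptions for illustration.

```python
import math

def population_stability_index(baseline, current, n_bins=10):
    """Measure how far `current` scores have drifted from `baseline` scores.

    Bins are set at the baseline's deciles; a PSI near 0 means the
    distributions match, while values above ~0.25 are conventionally
    treated as material drift warranting review.
    """
    sorted_base = sorted(baseline)
    # Interior bin edges at baseline deciles.
    edges = [sorted_base[len(sorted_base) * i // n_bins] for i in range(1, n_bins)]

    def shares(values):
        counts = [0] * n_bins
        for v in values:
            idx = sum(1 for e in edges if v >= e)  # bin index 0..n_bins-1
            counts[idx] += 1
        eps = 1e-6  # avoid log(0) for empty bins
        return [c / len(values) + eps for c in counts]

    base_s, curr_s = shares(baseline), shares(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base_s, curr_s))
```

A monitoring job running this daily against live scores, with alerts routed to the Responsible Manager, is one way to evidence that the s912A(1)(h) framework “specifically addresses” drift rather than merely naming it.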

Directors’ Duties: The Standard of Care

As a director, the legal responsibility for a “rogue” AI or a failed automated system stops with you. The courts and ASIC now expect directors to apply an “enquiring mind” to the technology their firms deploy.

  • Duty of Care and Diligence (s180): Directors must inform themselves about the risks of AI adoption. You cannot claim ignorance of how your automated credit scoring or advice engines work.

  • Good Faith & Proper Purpose (s181): Decisions to implement AI must be made in the best interests of the company and its clients, not just for short-term cost-cutting that sacrifices service quality.

  • Proper Use of Position & Information (s182–183): Directors must ensure that the vast amounts of data used to train AI are not misused for improper gain or to the detriment of the company.

  • Prevention of Insolvent Trading (s588G): While not directly AI-related, the financial risks of an “AI hallucination” causing massive liability can impact a firm’s solvency.

The “Interprac Warning”: A Case Study in Supervision

In late 2025, ASIC’s action against Interprac served as a stark reminder: manual systems cannot scale to meet s912A obligations in a modern environment.

  • ASIC argued that a failure to use data-driven oversight to detect patterns of unsuitable advice was a breach of s912A.

  • The Lesson: If your firm uses AI, your supervision of that AI must also be data-enabled and proactive.
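As a toy illustration of what “data-enabled and proactive” supervision could mean in practice, the sketch below flags advisers (or automated advice flows) whose rate of unsuitable outcomes sits well above the firm-wide rate. The data shape, threshold, and function name are hypothetical; a real oversight system would need far richer review data and governance.

```python
import math

def flag_outlier_advisers(outcomes, threshold=3.0):
    """Flag advisers whose unsuitable-advice rate exceeds the firm-wide
    rate by more than `threshold` standard errors.

    `outcomes` maps adviser_id -> (unsuitable_count, total_reviews).
    """
    total_bad = sum(bad for bad, _ in outcomes.values())
    total = sum(n for _, n in outcomes.values())
    firm_rate = total_bad / total

    flagged = []
    for adviser, (bad, n) in outcomes.items():
        # Standard error of a proportion under the firm-wide rate.
        se = math.sqrt(firm_rate * (1 - firm_rate) / n)
        if se and (bad / n - firm_rate) / se > threshold:
            flagged.append(adviser)
    return flagged
```

The point is not the statistics but the posture: a licensee that systematically scans its own outcome data for patterns of unsuitable advice is in a very different s912A position from one that waits for complaints.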


Lead Strategist Note: In 2026, “I didn’t know the algorithm did that” is no longer a valid legal defence. ASIC expects your Responsible Managers and Directors to have “real-time, data-driven, RegTech-enabled supervision” of all automated systems.