Artificial intelligence (AI) is increasingly being marketed to directors as a means of predicting financial distress, flagging insolvency risk early and supporting better governance. For boards of small and medium enterprises (SMEs), often operating with limited resources and under increasing scrutiny, these tools can appear highly attractive.
However, while AI can be a useful input, directors should be cautious about placing undue reliance on algorithmic predictions of solvency. From both a legal and practical perspective, over‑reliance on AI creates risks that may ultimately increase, rather than reduce, personal exposure.
Directors retain responsibility
At law, directors cannot outsource their core decision‑making responsibilities. Duties relating to solvency, creditor interests, and the prevention of insolvent trading are personal and non‑delegable. While directors are entitled to rely on advice and information, that reliance must be reasonable, informed and accompanied by independent judgment.
An AI tool does not replace the director’s obligation to actively assess financial position and risk. If a company later enters external administration, a director will not be protected by pointing to an algorithmic forecast that proved incorrect. Regulators and courts will focus on what the director knew, or ought reasonably to have known, and how they responded. AI can inform the decision; it cannot make it. The decision remains with the directors, although reasonable reliance on an appropriately qualified person or entity may mitigate that risk under the Corporations Act.
Solvency is more than a financial output
Many of the factors that determine whether an SME survives or fails are qualitative and contextual. Directors are uniquely placed to understand these matters, but many are difficult, if not impossible, to feed into AI systems.
Examples include:
- The willingness of a lender to continue support despite technical covenant breaches
- Reliance on informal creditor arrangements or extended payment terms
- The strength and credibility of management
- Concentration risk in customers or suppliers
- Regulatory, reputational or key‑person risk
AI models tend to focus on financial statements and ratios. Directors must remember that solvency assessments in practice are far broader and often hinge on matters that no dataset captures.
Acute shocks versus gradual decline
Predictive tools perform best where businesses experience a slow, observable deterioration: declining margins, increasing leverage or shrinking liquidity over time. In those circumstances, trend‑based models may provide useful early warnings.
However, many SME failures arise from acute events, not gradual decline.
Common examples include:
- Sudden withdrawal of overdraft or invoice finance facilities
- Loss of a major contract or customer
- Cyber incidents or infrastructure failure
- External shocks such as pandemics, trade disruptions or supply chain collapse
These events are often binary and externally driven. AI systems trained on historical financial patterns are inherently poor at predicting them. Directors should be wary of models that suggest confidence where genuine uncertainty exists.
Question the data before you trust the output
AI models are only as reliable as the data provided to them. In SMEs, financial data is frequently:
- Incomplete or delayed
- Prepared on a cash, not accrual, basis
- Optimistic in forecasting assumptions
- Less robust as financial stress increases
Data quality often worsens precisely when solvency risk is highest. Directors should treat AI outputs with caution where management information is patchy, late, or heavily adjusted, and should never assume that technical sophistication compensates for weak underlying inputs.
Transparency and explainability matter for boards
Many AI systems operate as ‘black boxes’, producing risk scores or probabilities without clear explanations.
For directors, this presents a governance problem:
- Can you explain the result to fellow board members?
- Can you test or challenge the assumptions?
- Can you demonstrate why reliance on the output was reasonable?
If the reasoning behind the prediction cannot be understood, it becomes difficult to validate, act upon or defend later.
Beware the risk of false comfort or premature panic
AI can distort decision‑making in two dangerous ways:
- False reassurance, where a ‘low insolvency risk’ score delays necessary intervention
- Overreaction, where a pessimistic forecast triggers unnecessary, value‑destroying decisions
Directors should view AI outputs as prompts for deeper inquiry, not as verdicts. AI confidence should never substitute for professional scepticism.
A more disciplined approach for directors
The safest and most defensible approach to solvency assessment is layered:
- Quantitative analysis, including financial ratios, cash‑flow forecasting and stress testing (with or without AI)
- Qualitative risk assessment, drawing on the board’s understanding of the business and its stakeholders
- Scenario analysis, particularly around funding continuity and downside events
- Professional advice, obtained early rather than reactively
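The scenario analysis step need not be elaborate; a spreadsheet, or a few lines of code, is enough to test how long the business survives under downside assumptions. The sketch below illustrates a simple 13‑week cash‑flow stress test. Every figure, scenario name and parameter is purely hypothetical and chosen for illustration, not drawn from any particular model or tool.

```python
# Illustrative 13-week cash-flow stress test.
# All figures below are hypothetical examples, not a template.

BASE_RECEIPTS = [120_000] * 13   # forecast weekly cash receipts
BASE_PAYMENTS = [110_000] * 13   # forecast weekly cash payments
OPENING_CASH = 50_000
OVERDRAFT_LIMIT = 80_000         # available facility headroom

def stress_test(receipt_haircut: float, facility_available: bool) -> int:
    """Return the first week (1-based) cash is exhausted,
    or 0 if the business survives the 13-week horizon."""
    cash = OPENING_CASH
    # Without the facility, the business cannot go below zero cash.
    floor = -OVERDRAFT_LIMIT if facility_available else 0
    for week, (receipts, payments) in enumerate(
            zip(BASE_RECEIPTS, BASE_PAYMENTS), start=1):
        cash += receipts * (1 - receipt_haircut) - payments
        if cash < floor:
            return week
    return 0

scenarios = {
    "base case": (0.00, True),
    "20% receipts shortfall": (0.20, True),
    "facility withdrawn": (0.00, False),
    "combined downside": (0.20, False),
}
for name, (haircut, facility) in scenarios.items():
    week = stress_test(haircut, facility)
    outcome = "survives horizon" if week == 0 else f"cash exhausted in week {week}"
    print(f"{name}: {outcome}")
```

Even a crude exercise like this makes the board’s key questions concrete: how sensitive survival is to a single assumption, and how quickly the loss of a facility converts a viable forecast into a solvency problem.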
AI can enhance this process, but it should sit alongside, rather than replace, director judgment.
Use AI as an early‑warning system, not a shield
For directors, the real value of AI lies in surfacing risks earlier and encouraging better questions. It does not provide legal protection, nor does it eliminate uncertainty.
Solvency is ultimately a matter of judgment exercised under pressure. Tools may assist, but responsibility remains firmly with the board. Directors who recognise this, and treat AI as an input rather than an answer, will be far better placed to navigate distress and protect both enterprise value and themselves.