‘Algorithmic advisers’ are digital tools that provide solicited, personalised recommendations or advice to consumers, generated by algorithms of varying degrees of sophistication. Examples include comparison websites, advisory apps, chatbots and robo-advisers. Algorithmic advisers offer considerable potential for supporting welfare-enhancing choices by consumers, particularly in complex, high-cost or high-risk contexts such as finance, insurance, legal or healthcare settings.
They also carry risks of harm. Some of these risks have been raised in digital markets more widely, namely the potential for data harvesting, loss of privacy and bias. Others are specific to advisory services generally, in particular the risk of self-dealing. Additionally, algorithmic advisers may simply fail to provide the service they have been contracted to provide.
In using algorithmic advisers, consumers are seeking advice that is personalised to them. They expect a nuanced and specific response, but they are commonly poorly placed to scrutinise the quality of what they receive: consumers turn to algorithmic advisers for the very reason that they themselves lack the skills to navigate the field in question.
In principle, the response lies in the conduct and service standards provided by law. Firms providing algorithmic advice may be subject to fiduciary duties that demand utmost loyalty. They will be subject to an implied contractual duty of reasonable care and skill. But there is a question about how this duty applies, and how it is assessed, when the advice in question is automated. What should be demanded of that advice and of the firm providing the automated service? Drawing on technical insights on algorithmic auditing and explainability, this paper considers the content and expectations of the duty of reasonable care applying to algorithmic advice.
It further explores the related question of the extent to which the contract terms can and should define the scope of this duty.
About the Speaker:
Jeannie Marie Paterson teaches and researches in the fields of consumer protection law, consumer credit and banking law, and AI and the law.
Jeannie’s research covers three interrelated themes:
1. The relationship between moral norms, ethical standards and law;
2. Protection for consumers experiencing vulnerability;
3. Regulatory design for emerging technologies that are fair, safe, reliable and accountable.
Jeannie has published widely on these research topics in leading journals and edited collections, including as the co-editor, with Elise Bant, of Misleading Silence (2020). Jeannie is also the co-author of a number of leading textbooks: (with Andrew Robertson) Principles of Contract Law (6th ed, 2020), Corones’ Australian Consumer Law (2019) and (with Hal Bolitho and Nicola Howell) Duggan and Lanyon on Consumer Credit Law (2020). Her scholarly work has been cited by courts, including the High Court of Australia and the Supreme Court of Canada.
Jeannie completed her BA/LLB(Hons) at ANU and her PhD at Monash University. She previously lectured at the Faculty of Law at Monash University and, prior to that, was a solicitor at Mallesons Stephen Jacques (now King & Wood Mallesons). Jeannie holds a current legal practising certificate and regularly consults to government, regulators and not-for-profit organisations.
Jeannie is a Fellow of the Australian Academy of Law. She is an editor for consumer protection in the Australian Business Law Review and the Journal for Law, Technology and Humans.
The Law Society of Hong Kong has awarded this seminar 1 Continuing Professional Development (CPD) point.