Imagine receiving personalized financial tips from a chatbot that could steer your life's savings in the wrong direction—without any legal safeguards in place. That's the shocking reality we're diving into today, where artificial intelligence (AI) might be skirting New Zealand's strict financial regulations, leaving everyday consumers vulnerable.
But here's where it gets controversial: Could these handy AI tools be doling out advice that's not just inaccurate, but outright unlawful? Let's unpack this gripping issue, drawing on expert insights from a recent industry event, and explore why it matters for anyone relying on tech to manage their money.
In a thought-provoking address to over 450 members of the Insurance Brokers Association of New Zealand (IBANZ) during their annual Insurance Industry Legislation Update session, Chapman Tripp partner Tim Williams raised alarms about generative AI chatbots. He pointed out that these AI systems might be violating the Financial Markets Conduct Act (FMCA), which governs financial advice licensing and regulatory standards in New Zealand. For beginners, think of the FMCA as a set of rules designed to protect consumers—like a safety net ensuring that financial advice is given by qualified professionals who have your best interests at heart, much like how doctors need licenses to prescribe medicine.
Williams urged the Financial Markets Authority (FMA), the watchdog overseeing financial markets, to investigate whether current laws are keeping pace with AI's lightning-fast advancements. If they're not, he argued, regulators need a clear path forward for weighing the benefits of easy access to financial information against the dangers of receiving subpar or risky recommendations without the full protections the FMCA offers. Picture this: AI could provide quick facts about investments, but if it crosses into personalized suggestions, it might be operating without the necessary checks, potentially exposing users to unsuitable advice and financial losses.
IBANZ chief executive Katherine Wilson echoed these concerns, backing Williams' call and noting that she's already flagged the matter with the FMA. Wilson emphasized that IBANZ represents thousands of certified financial advisers who undergo rigorous professional development to deliver top-notch advice. "We're aware that the advice available via AI tools can be of questionable quality," she stated, highlighting ongoing worries about the harm from incorrect or deceptive information. "IBANZ has raised this issue with the FMA because we believe it’s an important matter for the regulator to consider." This is a crucial point—while AI can be a game-changer for accessibility, it lacks the human judgment and accountability that licensed advisers provide, potentially misleading users who might not discern fact from flawed suggestion.
And this is the part most people miss: Williams broke down what AI chatbots in New Zealand are currently allowed to do. They can share straightforward factual data, discuss financial products in general terms, or relay advice from others. However, if users push them further—say, by asking for specific product recommendations—the most popular chatbots might step over the line into unlicensed territory, requiring formal licensing and adherence to various duties under the FMCA.
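As a rough illustration of the boundary Williams describes, a chatbot operator might screen incoming requests and refuse anything that looks like a personal recommendation. The keyword heuristics, function names, and refusal wording below are purely hypothetical assumptions for this sketch, not anything drawn from the FMCA or FMA guidance:

```python
import re

# Hypothetical trigger phrases suggesting a request for *personalized*
# advice, as opposed to general factual information. Illustrative only.
PERSONALIZED_PATTERNS = [
    r"\bshould i\b",
    r"\brecommend\b",
    r"\bbest (fund|policy|investment) for me\b",
    r"\bmy (situation|portfolio|goals)\b",
]

def requires_licensed_advice(user_message: str) -> bool:
    """Return True if the message looks like a request for personalized
    financial advice, which (per the article) would require an FMCA licence."""
    text = user_message.lower()
    return any(re.search(p, text) for p in PERSONALIZED_PATTERNS)

def respond(user_message: str) -> str:
    """Answer only with general information; decline personalized requests."""
    if requires_licensed_advice(user_message):
        return ("I can share general information about financial products, "
                "but I can't make personal recommendations. Please consult "
                "a licensed financial adviser.")
    return "General information: ..."  # factual, product-level content only
```

A production system would need something far more robust than keyword matching, but the sketch shows the shape of the line: factual and general-terms responses on one side, recommendations and opinions on the other.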
This raises a big red flag for policymakers and the public alike. Williams warned that such AI-driven recommendations could strip recipients of key FMCA protections, like ensuring advice suits the individual's circumstances or comes with disclosures about conflicts of interest. His remarks follow similar concerns Down Under in Australia, where AI chatbots giving stock trading tips have sparked debates about their reliability.
To illustrate, consider an everyday scenario: A young investor chats with an AI bot about retirement funds. The bot might suggest a high-risk option based on generic data, ignoring the user's age, risk tolerance, or financial goals—something a licensed adviser would carefully assess. Without that oversight, it's like trusting a self-driving car that hasn't been tested for your specific route.
Williams stressed that if an AI chatbot routinely recommends products to retail clients, offers opinions on investments, crafts personalized plans, or handles financial planning without a license, it's essentially providing financial advice in violation of New Zealand's laws. He referenced past FMA guidance on robo-advice (automated advice tools), which underlines the need for a proper license and compliance with duties when advising everyday consumers.
Here's where things get really intriguing—and potentially divisive: Williams also cautioned human financial advisers using AI in their work. They could breach the Code of Professional Conduct if their records don't show that their reliance on AI outputs is justified. For instance, if an adviser leans on faulty AI data to build a client's portfolio and things go wrong, they might face scrutiny for not exercising due diligence. Advisers should also anonymize their prompts to AI tools, ensuring only authorized staff access sensitive client details, in line with Code Standards. This blurs the line between innovation and responsibility—some might argue that integrating AI boosts efficiency, while others fear it dilutes professional expertise.
Is AI the future of affordable financial guidance, or a ticking time bomb of unregulated risk? Should the law evolve to embrace AI's potential, or should regulators clamp down tighter to protect consumers? Share your thoughts in the comments below. We'd love to hear your take!