*Image: a split screen pairing a human financial advisor with an AI interface, representing the tension between traditional and AI-powered financial advice.*

AI Financial Advisors Hit a Legal Wall: The Fiduciary Duty Problem That's Holding Back the Revolution

Artificial intelligence has mastered chess, defeated Go champions, and can now write code that rivals human programmers. But when it comes to replacing your financial advisor, AI faces an unexpected roadblock that has nothing to do with technical capability and everything to do with legal accountability.

According to MIT finance professor Andrew Lo, the question isn’t whether AI has the expertise to provide financial advice—it clearly does. The real problem is that AI systems can’t be held legally accountable when they make mistakes, creating a fiduciary duty gap that could leave consumers unprotected.

The Technical Capability Is Already Here

Lo, who directs MIT’s Laboratory for Financial Engineering, is clear about AI’s current abilities: “The problem that we have to solve is not whether AI has enough expertise. The answer right now is, clearly, AI has the expertise.” Large language models like ChatGPT, Claude, and Gemini are already being used extensively for financial guidance.

The numbers are staggering: 66% of Americans who have used generative AI report using it for financial advice, according to an Intuit Credit Karma poll. Among millennials and Gen Z, that figure jumps to 82%. Perhaps most telling: 85% of respondents who received AI financial advice actually acted on those recommendations.

This adoption pattern mirrors the early days of online banking in the 1990s, when consumers initially showed skepticism but quickly embraced digital financial services once they experienced the convenience. However, unlike online banking—which was backed by regulated financial institutions—AI advice exists in a regulatory vacuum.

The Fiduciary Duty Gap: Why Legal Accountability Matters

Fiduciary duty represents one of the most fundamental protections in financial services. When a human financial advisor accepts this responsibility, they’re legally bound to put their client’s interests ahead of their own. Violating this duty can result in:

- Regulatory fines and penalties
- Civil lawsuits from harmed clients
- Loss of professional licenses or registration

As Lo puts it: “They don’t have the ability to suffer consequences if they make a mistake to the same degree that a human advisor does.” An AI system can’t be fined, sued, or lose its license. The companies behind these systems largely disclaim responsibility for investment advice, creating what NYU’s Sebastian Benthall calls an “unresolved” legal question.

“Who’s really responsible, and can people really be relying on a product to do this if it’s not being backed up by a corporation with a fiduciary duty? It’s really unresolved.” — @SebastianBenthall

Where AI Excels (And Where It Fails)

Lo identifies clear use cases where AI financial advice works well and where it becomes dangerous. AI excels at providing educational resources and explaining complex financial concepts like Medicare or basic investment principles. It’s particularly useful for generating multiple scenarios and explaining how different financial strategies might work.

However, AI has critical weaknesses: its advice can be confidently wrong, and no one bears legal responsibility when it is.

This mirrors the limitations seen in early expert systems of the 1980s, which could process rules and provide recommendations but lacked the nuanced judgment and accountability that human professionals offered.

The Human Advisor Problem: Not Everyone Is a Fiduciary

Here’s where the situation gets complicated: not all human financial advisors are fiduciaries either. The financial services industry operates under a patchwork of regulations where legal duties vary dramatically based on professional designation.

Stockbrokers, insurance agents, and many financial intermediaries operate under lower standards than registered investment advisors (RIAs). Recent regulatory battles highlight this confusion—a Labor Department rule requiring fiduciary duty for 401(k) rollover advice recently died after the Trump administration stopped defending it in court.

This creates a situation where some human advisors have fewer legal obligations than consumers might expect, while AI systems have none at all.

Real-World Applications and Investor Behavior

The gap between AI capability and legal accountability isn’t stopping investors from using these tools. Social media shows sophisticated investors leveraging AI for research and analysis:

“I absolutely leverage AI to help me scrape the news for Iraq and crypto, and now I am adding Vietnam into the mix, as I am pretty heavily invested in it.” — @RealThomSieloff

This represents a pragmatic middle ground—using AI as a research and analysis tool while maintaining personal responsibility for investment decisions. It’s similar to how professional traders use algorithmic tools: leveraging computational power while retaining human oversight and accountability.

The Path Forward: Regulation and Innovation

Lo believes government policy must evolve to provide fiduciary protections for AI-generated financial advice. Until that happens, “we’re not going to get to the point where we can fully delegate these [financial] decisions.”

Several potential solutions are emerging: requiring human fiduciary oversight of AI recommendations, extending liability to the companies that deploy these systems, and new regulation that attaches fiduciary protections to AI-generated advice.

The precedent exists in other industries. Medical AI systems require human physician oversight and approval. Autonomous vehicle manufacturers accept liability for their systems’ decisions. Financial services could adopt similar approaches.

The Bottom Line: Capability Without Accountability

AI has achieved the technical capability to provide sophisticated financial advice, but the legal framework hasn’t caught up. This creates both opportunity and risk for consumers. While AI can be tremendously valuable for financial education and generating options, the lack of fiduciary duty means users must maintain healthy skepticism.

Lo’s advice applies to both AI and human advisors: “You should always remember that the advice that they can give you could be wrong.” The difference is that when human fiduciaries are wrong, there are legal remedies. When AI is wrong, you’re on your own.

Until regulators solve the accountability gap, AI financial advice will remain a powerful research tool rather than a true replacement for professional guidance. The technology is ready—the legal system isn’t.
