
JP Morgan Chase · 2023 – 2024

AI-Powered Wealth
Management Platform

Role: Principal Product Designer
Scope: End-to-end product design
Team: 5 designers · 3 PMs · 40+ engineers
Platform: Web · Internal tools
Project overview — interface preview

Turning data into decisions for wealth advisors

JP Morgan Chase's private wealth division manages over $800 billion in client assets across a network of thousands of financial advisors. The challenge: advisors were spending up to 60% of their time on research, reporting, and data wrangling — time that should be spent with clients.

I led the end-to-end design of an AI-assisted platform that surfaced insights, automated routine tasks, and helped advisors make faster, better-informed decisions — all while preserving the human judgment that defines great financial advice.

60% reduction in time spent on research and reporting tasks
4.6/5 average advisor satisfaction score post-launch
3,200+ advisors onboarded within 90 days of release

AI fluency isn't the same as AI trust

Financial advisors aren't sceptical of technology — they're sceptical of anything that could make them liable. The biggest design challenge wasn't capability; it was trust. How do you design an AI system that advisors will actually act on?

Through 40+ hours of contextual research with advisors across seniority levels and client segments, I identified three critical failure modes in the existing tooling: information overload, opaque recommendations, and a disconnect between AI outputs and individual client context. Any new solution would have to address all three.

Research synthesis — journey mapping & insight clustering

Designing for explainability, not just efficiency

The platform was built around three design principles: show your working (every AI recommendation includes its reasoning), keep humans in the loop (advisors approve, not just receive, AI suggestions), and reduce before you reveal (summarise complexity before surfacing depth).

I ran six rounds of usability testing with advisors at different experience levels, iterating rapidly on information architecture, interaction patterns, and the visual language of AI-generated content. We developed a distinct "AI layer" design system — subtle but legible — so advisors always knew when they were looking at machine-generated insight versus verified data.

Prototypes were built in Figma and tested with a mix of moderated sessions and unmoderated diary studies, capturing real-world usage patterns over two-week sprints. This surfaced edge cases — particularly around conflicting signals and client-specific overrides — that shaped the final interaction model significantly.

A platform advisors choose to use

The platform launched to an initial cohort of 800 advisors and scaled to 3,200+ within 90 days — well ahead of target. More tellingly, voluntary daily active usage hit 78%, compared to 31% for the legacy tooling it replaced.

The design system developed for this project became the foundation for JPMC's broader AI product design language, subsequently applied across six internal AI tools. The "explainability layer" pattern — showing AI confidence, data sources, and suggested next steps in a standardised format — is now a firm-wide standard.
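The explainability pattern described above implies a consistent data shape behind each AI-generated card. As a rough illustration only, the sketch below models what such a payload and its advisor-facing confidence label might look like. All field and function names here are hypothetical, invented for this example; the source does not document the actual schema.

```typescript
// Hypothetical sketch of an "explainability layer" payload.
// Field names are illustrative, not an actual JPMC schema.
interface ExplainableInsight {
  summary: string;       // one-line, human-readable recommendation
  confidence: number;    // model confidence in [0, 1], surfaced to the advisor
  sources: string[];     // data sources backing the recommendation
  nextSteps: string[];   // suggested actions, pending advisor approval
  generatedByAI: true;   // flags the distinct "AI layer" visual treatment
}

// Map raw confidence to the label an advisor would see on the card.
function confidenceLabel(insight: ExplainableInsight): string {
  if (insight.confidence >= 0.8) return "High confidence";
  if (insight.confidence >= 0.5) return "Moderate confidence";
  return "Low confidence: review sources";
}

const example: ExplainableInsight = {
  summary: "Portfolio drift exceeds target allocation by 4%",
  confidence: 0.87,
  sources: ["Holdings feed", "Client investment policy"],
  nextSteps: ["Propose rebalance", "Schedule client review"],
  generatedByAI: true,
};
```

Keeping the shape standardised is what lets the same card component render confidence, sources, and next steps identically across tools, which is the essence of the firm-wide pattern described above.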
