Redesigning a static dashboard to deliver dynamic content

The challenge

Credit Karma’s dashboard was underperforming on the following fronts:

  1. Discovery — users were not discovering the product’s features beyond credit score (indicated by declining tap-through rate to previously undiscovered product pages)
  2. Monetisation — fewer users were engaging with credit card and loan offers on their dashboard (indicated by declining tap-through on offers and revenue per active user)

Based on signals from previous user interviews, we hypothesised that the dashboard was struggling because:

  1. The content was not personalised — users couldn’t see how the content might relate to them and so didn’t engage
  2. The content was too static — users would see the same content over many weeks and lose interest

From user interviews, the most common feedback was that credit card and loan offers felt ‘pushy’ or ‘like an advert’ and didn’t update often enough. In pursuit of monetisation, our dashboard featured a lot of credit card and loan products without much context; this had initially boosted revenue, but the effect was fading fast.

While one potential solution would have been to simply remove static offers, this was likely to be immediately revenue-negative — our stakeholders would take more convincing.

So I set out to explore a more dynamic approach to our dashboard with the goal of better blending the needs of both users and the business.

From user interviews, we knew that users wanted more personalised content to help them improve their whole financial picture. On the business side, I knew that our leadership saw improving user trust as essential to delivering the business strategy.

I felt that making the dashboard more dynamic could result in a much better balance between these needs, with ‘trust’ acting as the guiding principle.

Approach to design

I started by defining what success would look like:

  1. An increase in users self-reporting that they trusted our product
  2. Product metrics reflecting higher engagement with the new experience than the current dashboard (increase in tap-through rate from the dashboard)

In collaboration with the analytics and product teams, I created a whole new approach to dashboard content which allowed the business to publish lots of new content that was relevant, timely, and frequently updated. Users were then segmented based on whether or not they had shown intent to take out a card or a loan — suppressing ‘pushy’ content for those it didn’t suit and introducing a softer educational feel for everyone. 

The real complexity in making the dashboard dynamic came from deciding how to structure the content framework. Working through the variables methodically, I focused on four (sketched as a simple content model below):

  1. Content type — what type of content would we be showing users and what purpose would each type serve?
  2. Content ranking and ordering — what shows at the beginning, middle and end of the dashboard and why? Which pieces should be prioritised above others?
  3. Content update frequency — how often would the content change? We’d want to give users enough time to engage with each piece but balance that with exposing them to fresh content as well.
  4. User cohorts — how and when would we split the user base to target the content most effectively?
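
As a purely illustrative sketch of how these four variables might hang together (the names and fields below are my own assumptions, not the actual system), a single piece of dashboard content could be modelled like this:

```typescript
// Hypothetical sketch of a single piece of dashboard content, modelled
// against the four framework variables. Names and fields are illustrative only.

type ContentType = 'education' | 'guidance' | 'offer';

interface ContentItem {
  id: string;
  type: ContentType;         // 1. content type: the purpose the piece serves
  priority: number;          // 2. ranking: lower numbers surface nearer the top of the dashboard
  refreshAfterDays: number;  // 3. update frequency: how long a piece stays live before rotating out
  eligibleCohorts: string[]; // 4. user cohorts: which segments may ever see it ('*' meaning everyone here)
}

// Example: a soft educational piece that any cohort can see, refreshed weekly
const budgetingBasics: ContentItem = {
  id: 'budgeting-basics',
  type: 'education',
  priority: 2,
  refreshAfterDays: 7,
  eligibleCohorts: ['*'],
};
```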

The most complex part was deciding when and how to introduce user cohorts. We could feasibly segment users based on characteristics, such as age, or based on actions they’d taken (or not taken) on the platform, such as log-in rate or page visits. But this kept clashing with the rules we wanted to implement around who saw which individual piece of content, or group of content, and we ended up going in circles!

Eventually, since the core of the problem identified through user research was ‘pushy’ offer content putting users off, I proposed that we split users according to whether or not they had shown intent to take out a card or loan. For example, someone who has tapped on credit card offers in their past three sessions is likely to have far higher intent than someone who took out a new credit card only last month. Using rules like this, we split users into four segments:

  1. High intent for a credit card
  2. High intent for a personal loan
  3. Low intent, could transact
  4. Low intent, won’t transact

This meant users who had shown intent would still see card and loan offers, since those remained relevant and appropriate for them, while those who hadn’t shown intent could be given an experience leaning much more heavily on education and guidance, with card and loan offers suppressed.
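
To make the rule-based segmentation a little more concrete, the sketch below shows how a cohort might be derived from recent behaviour and used to decide whether offers appear. The signals, thresholds, and function names are illustrative assumptions on my part rather than the rules we actually shipped.

```typescript
// Hypothetical sketch: deriving a cohort from recent behaviour.
// Signals and thresholds are illustrative, not the rules we shipped.

type Cohort =
  | 'high_intent_credit_card'
  | 'high_intent_personal_loan'
  | 'low_intent_could_transact'
  | 'low_intent_wont_transact';

interface UserSignals {
  cardOfferTapsLastThreeSessions: number; // taps on credit card offers recently
  loanOfferTapsLastThreeSessions: number; // taps on personal loan offers recently
  openedCardLastMonth: boolean;           // intent likely already satisfied
  eligibleToTransact: boolean;            // e.g. meets basic eligibility criteria
}

function assignCohort(s: UserSignals): Cohort {
  // Repeated recent taps on offers read as high intent; a card opened
  // only last month suggests the need has already been met.
  if (s.cardOfferTapsLastThreeSessions >= 3 && !s.openedCardLastMonth) {
    return 'high_intent_credit_card';
  }
  if (s.loanOfferTapsLastThreeSessions >= 3) {
    return 'high_intent_personal_loan';
  }
  return s.eligibleToTransact
    ? 'low_intent_could_transact'
    : 'low_intent_wont_transact';
}

// Card and loan offers only show for the high-intent cohorts; everyone else
// gets the education-and-guidance-led experience instead.
function showsOffers(cohort: Cohort): boolean {
  return cohort.startsWith('high_intent');
}
```

In practice the signals and thresholds would come from analytics rather than being hand-picked like this, but even a simple rule of thumb separates someone actively browsing offers from someone whose need has already been met.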

While we felt confident in this approach, we ultimately didn’t know how it would perform. To de-risk the decision, we kept a larger control group seeing the original dashboard (75%) than the variant group seeing the new experience (25%). This meant learnings came in more slowly as we waited for significance, but it protected the existing revenue we knew the original dashboard was generating.
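
For the curious, the sketch below shows one common way a skewed split like this stays stable across sessions: deterministically hashing each user into a bucket so they always see the same experience. It illustrates the general technique under my own assumptions, not our actual experimentation stack.

```typescript
// Hypothetical sketch: deterministic 75/25 assignment so a given user always
// lands in the same group. The hashing scheme and names are illustrative.

function hashToUnitInterval(userId: string, experiment: string): number {
  // 32-bit FNV-1a hash of "experiment:userId", mapped onto [0, 1)
  let hash = 0x811c9dc5;
  const input = `${experiment}:${userId}`;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) / 2 ** 32;
}

function assignVariant(userId: string): 'control' | 'dynamic_dashboard' {
  // 75% stay on the original dashboard (control); 25% see the new experience
  return hashToUnitInterval(userId, 'dynamic-dashboard-v1') < 0.75
    ? 'control'
    : 'dynamic_dashboard';
}
```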

Approach to delivery

For the project to be a success within the product culture of our business, we needed to keep proving out our solution in small incremental releases to grow confidence in the initiative for ourselves and our stakeholders. This project represented a step change in the way the product worked, so we engineered the releases to be carefully A/B tested against our existing dashboard as a control. 

In terms of delivery, we ultimately wanted to introduce 50 pieces of new content to max out the potential of the new framework, but since I was producing all of this content alone, that wasn’t realistic for a quick release. With the product manager, I broke delivery down into batches of 10 pieces at a time, meaning we could get content into the product and begin learning which content types and subjects were proving most engaging much sooner, without depending on the more complex aspects of the framework being built.

We started by simply randomising these early batches of content, but our ultimate aspiration was always to control which content types and subjects showed to which user cohort. As the randomised content gained traction and showed more promise than the old dashboard, we began to trust in the framework and evolve towards the ultimate vision for a much more dynamic and engaging experience.

The outcome

The final outcome of the project saw a single-digit percentage point increase in users associating the product with ‘trust’ and double-digit percentage point improvements in overall tap-through engagement with content, which held stable once the novelty effect was accounted for. User feedback from then on noted more of an educational feel and far fewer mentions of ‘pushy’ content.

Things I’d do differently

On a project of this significance and complexity, it was important to have a clear, shared sense of the vision and goals. I felt I was successful in achieving that upfront, getting buy-in from stakeholders and colleagues by being clear about the potential upsides and the opportunity to move away from a dashboard that — simply put — was no longer scalable.

But it was difficult to keep everyone on the same page as things evolved and we learned more about the framework we wanted to implement. The vision and goals began to shift as stakeholders influenced the initiative over time, but these changes weren’t always shared back with the wider team, so alignment gradually slipped.

In particular, clearer boundaries around who was responsible for each element of the project would have helped the product manager, the product designer and me to work more effectively. We found ourselves unsure who would take the lead on certain parts of the process, particularly between the product manager and me.

If I were given the chance to redo the project, I would put project milestones in place where the wider team would get together and realign on how the project was going and how the vision was evolving.

I would also keep a living document showing who was responsible for what between each milestone, keeping people clear and focused on their role in the project.
