
Knowledge Base

Previously, experimentation insights were scattered and hard to access, slowing ad creation and reporting. Knowledge Base centralized every experiment and streamlined reporting, empowering teams to create higher-performing ads with less friction and driving a 5% improvement in ad performance.

Role

Product Designer

Length

3 Months

Users

200-300 Rokt Employees

Responsibilities
  • Product Design — Designing new internal tool from the ground up & prototyping

  • Problem Definition — Redefining the core challenge and solution space

  • User Research — Identifying pain points and requirements; usability testing

  • Design Systems — Designing UI from scratch & contributing components

table KB 1.png

Experiments Table

New KB 2.png

Experiments Details View

The Problem

Ad performance had stagnated, and teams struggled to find inspiration for new ideas.

Without a centralized repository of insights, many decisions were made on assumptions rather than data—making it difficult to build on what worked.
Rokt placement.png

Rokt Ad Placement

At Rokt, experiments fell into two categories:

  • Creative Experiments – controlled by the advertiser (e.g., testing copy, creative, or design).

  • Page Experiments – controlled by the host site (e.g., testing ad placement or layout).

Both experiment types could span a wide range of variables, from messaging to visual design.

Rokt displayed experiment results in its internal hub, One Platform, but experiments could only be viewed one at a time—making navigation slow and cumbersome. The lack of tagging and relevance indicators also made it difficult to discover and apply valuable insights.

Key Pain Points

Discoverability of Past Experiments

Locating past experiments was difficult for Customer Success teams, especially without clear search criteria or tagging.

Understanding Experiment Setups

Understanding the reasoning behind hypotheses, variant differences, and why certain variants were excluded was often unclear and difficult to track.

Interpreting Experiment Results

Complex statistical concepts often confused users, making it hard to determine the optimal decision.

Preliminary Requirements

Users

Knowledge Base served anyone looking for ad placement inspiration...

  • Customer Success Teams

  • Solutions Teams

  • Account Managers

  • Product Managers

  • Product Designers

There is also an admin persona who manually vets experiments, including the experiment analysis and test types.

High Level Workflow

workflow.png

Creative Experiments Table Columns

  • Experiment name

  • Account name (the client)

  • Verification status

    • Manual review to ensure conclusive and impactful results

  • Industry Vertical

  • Elements

    • Specifies what is being tested

  • Primary success metric

  • Uplift

    • Indicates how much better or worse variants performed against the control

  • Duration

  • Date ended

  • Status

  • Probability to beat

    • How likely the variant(s) were to outperform the control

Project Conflict: Timeline

The Challenge

Rokt wanted the Knowledge Base MVP launched by quarter’s end—yet no design work or meaningful user research had been done. Stakeholders wanted speed; I saw risk in building without validating the problem.


My Approach

  • Advocated for UX research despite deadline pressure.

  • Proposed a compromise: start with redesigning the experiment table (achievable within the timeframe), run usability tests alongside design, and defer the larger research study to mid-quarter.

  • Suggested temporarily linking the detail view to legacy tooling to buy time.

The Outcome

  • Won stakeholder buy-in by balancing speed with user value.

  • Met the tight deadline while still collecting critical user insights.

  • Research uncovered deeper pain points, leading to a broader initiative to streamline ad placement creation.

Rapid Problem Validation

Card Sorting

I ran a card sorting exercise with users to determine the best column order for the creative experiments table view.

image 13.png

Survey

After the card sorting exercise, I sent participants a post-exercise survey to quantify the UX of the current experience and gather feedback about the experiments tooling in One Platform, where it was hosted at the time.

Establishing a UX Baseline

How is your experience finding applicable experiments in One Platform?
a 1.5 b.png

Very Difficult

1.5/7

Very Easy

Reasoning

  • Hypothesis not shown by default

  • Ad preview buried in a separate feature, many clicks away

  • Unclear which values were being tested

How is your experience understanding experiment setups in One Platform?
a 1.5 b.png

Very Difficult

1.5/7

Very Easy

Reasoning

  • Can only search one account at a time

  • Can’t search a whole vertical at once

  • Lots of clicks & views

  • Type of test hidden in a lengthy naming convention (which people often didn’t enter)

How is your experience interpreting experiment results in One Platform?
a 2.png

Very Difficult

2/7

Very Easy

Reasoning

  • Hard to gauge what won at a glance

  • Had to manually calculate statistical significance of the best-performing variant

  • Segmentation table hard to digest

OP-E.png

One Platform Experiment View (Legacy Tooling)

V1 Wireframes

After a few days of preliminary research confirming this was a real problem, I created two versions of the creative experiments detail page, tested them as paper prototypes, and gathered feedback.

Option 1

w 2.png
CSAT: 4.1 / 5

Option 2

w 1 b.png
CSAT: 3.2 / 5

Users preferred Option 1 with the filters sidebar because it was more easily discoverable and centralized.

New Test Types

Initial creative experiment test types included CTA, header, body, and landing page link. After testing v1 wireframes with users, I uncovered many additional test types.


I presented these findings to the team, and together we developed a refined taxonomy with proper categories. I also counted how many users requested each category to help us prioritize them.

test types big.png

V2 Wireframes

Table (Filters Panel Collapsed)

image 20.png
V1 Insights & Solutions
  • Offer Type Column: Users highly requested “Offer Type” and were excited about it.

    • Solution

      • Added Offer Type as a standalone column.

  • Limited Metadata & Filters: Initial filters and experiment metadata were too narrow. Users also wanted to easily view their own experiments.

    • Solution

      • Added new metadata/columns (e.g., industry sub-vertical, created by).

  • Variant Interpretation: Users wanted quicker ways to compare experiment variants without clicking into details.

    • Solution

      • Split the table into four tabs (creative placement components), showing test values directly in the table.

  • Experiment Name Visibility: Users wanted the experiment name to be the main identifier, consistent with other internal tools.

    • Solution

      • Kept experiment name as the first column, with live status moved next to it for consistency.

  • Adoption Concerns: While excited about the release, users worried the tool might be underutilized as “just another tool.”

    • Solution

      • Added gamification metrics to incentivize engagement.

  • Revisiting Valuable Experiments: Users often returned to past experiments for insights.

    • Solution

      • Introduced “favorite” experiments feature + filter for easy access.

  • Key Results Columns: “Verified” and “Uplift” were the most important columns for interpreting results. However, “Verified” was a new concept and confusing.

    • Solution

      • Highlighted only Verified & Uplift with icons/colors; added banner + hover tooltip explaining “Verified.”

  • Dev Constraint: Infinite Scrolling: Infinite scrolling caused poor load performance and pauses.

    • Solution

      • Implemented pagination for smoother experience.

Filters Panel

v2 Filters Panel 1.png
V1 Insights & Solutions
  • Narrow Test Scope: The initial identification of test types was too limited.

    • Solution

      • Added an accordion menu of filters with full taxonomy of categories.

CSAT: 4.3 / 5

+ 17.8 %

Planning Ahead

By this stage of the design process, we had a long list of feature requests that couldn’t all make the end-of-quarter launch. To prioritize, I led an impact-vs-effort exercise with the team, helping us focus on the most valuable features we could realistically deliver.

image 14.png

Uplift Column: Team Conflict

When designing the Knowledge Base, the team clashed on how uplift results should be presented. The original approach sorted experiments by highest uplift, pushing baseline wins to the bottom.

The Problem

The current approach buried experiments with negative uplift, even though they often revealed critical learnings about what didn’t work.

My Proposal

Sort by the absolute value of uplift, ensuring that the most insightful results—positive or negative—rose to the top.
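
To make the proposal concrete, the sketch below shows the sorting logic under a simplified experiment record; the Experiment shape and field names are illustrative assumptions, not Rokt's actual data model.

```typescript
// Illustrative only: the Experiment shape and field names are assumptions,
// not Rokt's actual schema.
interface Experiment {
  name: string;
  uplift: number; // relative uplift vs. control, e.g. -0.12 = -12%
}

// Sort so the most insightful results, large positive OR large negative
// uplifts, rise to the top instead of burying baseline wins.
function sortByAbsoluteUplift(experiments: Experiment[]): Experiment[] {
  return [...experiments].sort(
    (a, b) => Math.abs(b.uplift) - Math.abs(a.uplift)
  );
}

// Example: a -18% learning now outranks a +5% win.
sortByAbsoluteUplift([
  { name: "New CTA copy", uplift: 0.05 },
  { name: "Shorter header", uplift: -0.18 },
  { name: "Image swap", uplift: 0.11 },
]);
// Result order: Shorter header (-18%), Image swap (+11%), New CTA copy (+5%)
```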

The Resolution

Although some teammates were skeptical, I validated the approach through targeted user testing, which confirmed its value.

Absolute Uplift 1.png

The Four Uplift Column Options I Presented to Users

V3 Wireframes

At this stage of the design process, I stripped the UI down to only the elements prioritized for the MVP. Guided by user feedback, I iterated on the wireframes until we reached a strong, validated direction.

v3_lowfi 1.png
V2 Insights & Solutions
  • Filter Performance: Applying filters one at a time would have fired a separate backend request for every change, risking UI crashes and long load times.

    • Solution

      • Added an “Apply” button at the bottom of the filters panel so all selections are submitted in a single request (see the sketch after this list).

  • Probability Ranges: Users struggled to remember the meaning of probability-to-beat-baseline percentage ranges.

    • Solution

      • Introduced descriptive tags for each range to aid quick interpretation.

  • Cluttered First Column: The experiment name column was overloaded with name, account, and status, making the table hard to scan.

    • Solution

      • Split status into its own column and replaced the token with a subtle dot, giving visual priority to more important fields.

  • Slider Precision: The range slider was difficult to use, especially for wide spans, and lacked accuracy.

    • Solution

      • Replaced the slider with min/max selectors and a list of applicable values for precise control.

  • Date Range Usability: Users were frustrated by multiple clicks required in the date picker.

    • Solution

      • Switched to relative values in a simple list selector (e.g., “past 3 months”).
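
As a rough illustration of the Apply-button pattern, the sketch below batches filter selections locally and submits them in one request; the FilterState shape and the endpoint path are assumptions for illustration, not Knowledge Base's actual API.

```typescript
// Illustrative sketch only: FilterState and the endpoint are assumptions,
// not the real Knowledge Base API.
interface FilterState {
  verticals: string[];
  testTypes: string[];
  minUplift?: number;
  maxUplift?: number;
  dateRange?: string; // e.g. "past_3_months"
}

let pendingFilters: FilterState = { verticals: [], testTypes: [] };

// Filter controls only mutate local state; no request is fired yet.
function updateFilter(patch: Partial<FilterState>): void {
  pendingFilters = { ...pendingFilters, ...patch };
}

// The "Apply" button submits everything in a single request,
// instead of one request per individual filter change.
async function applyFilters(): Promise<unknown> {
  const response = await fetch("/api/experiments/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pendingFilters),
  });
  return response.json();
}
```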

CSAT: 4.5 / 5

+ 23.3 %

Final Table Design

Knowledge base - Default Landing.png

Phase 2: Experiment Details & Page Experiments

With the creative experiments table complete and handed off to engineering, I shifted focus to deeper UX research. The goal was to look at the experiments platform holistically, uncover opportunities, and define requirements for both the Creative Experiments details page and the new Page Experiments Knowledge Base.

Research Goals
  • Understand pain points in the experiments creation process (across multiple internal tools)

  • Optimize workflows for different personas (Customer Success, Ops, PMs)

  • Define requirements for the Creative Experiments details page

  • Map end-to-end experiment user journeys

  • Define requirements & categorization for Page Experiments in Knowledge Base

Research Scope
  • Participants: 13 across Customer Success, Ops, and Product

  • Format: 1:1 interviews (60 min each), mix of in-person & remote

  • Timeline: ~1.5 months

image 15.png

User Journeys

Future Project Opportunities
  • Ops relied on manual Google Sheets to plan setups → error-prone and slow

  • Intake forms were often filled incorrectly by Customer Success → caused delays and back-and-forth

  • Experiment handoffs lacked clarity, slowing down launches

OP-E.png

One Platform Experiment Details Page (Legacy Tooling)

Outcome

​Although I was moved to another team mid-study, I had already mapped user journeys and uncovered enough insights to redesign the Creative Experiments details page in Knowledge Base. This ensured the project didn’t stall and provided a strong foundation for future Page Experiments work.

Details Page Wireframes

image 16.png

Default View

image 17.png

Ad Previews Modal

One Platform Insights & Knowledge Base Solutions

  • Unclear Experiments

    • Hypotheses weren’t shown by default, making it difficult for users to understand what was actually being tested.

    • Solution

      • Added the value of this experiment in the bottom-right corner, which communicated the hypothesis.

  • Misleading Progress Bar

    • Users’ eyes were drawn to the progress bar first, even though the real indicator of success was the selected metric column. The bar itself was misleading—for example, half-filled even when the variant performed equally to baseline.

    • Solution

      • Redesigned the progress bar to align with the primary success metric column and added probability-to-beat-baseline tokens, matching the table view for greater clarity.

      • Progress bars now display the leading variant or baseline at 100%, with all others proportionally filled for easier comparison (see the sketch after this list).

  • Baseline-Only Comparisons

    • Metrics were always shown relative to baseline, which was fixed at the top. Users had to manually calculate variant-to-variant performance, making it hard to see the true winner at a glance.

    • Solution

      • Added a toggle to switch metric relativity between baseline and best/worst performance. Users really liked this idea, so we made it the default.

      • Added numeric indicators showing which ad placement won.

  • Inefficient Segmentation

    • Analytics showed users clicked segmentation more often than switching metrics, yet segmentation required two extra clicks compared to tabs—slowing analysis despite being more commonly used.

    • Solution​

      • Used tabs at the top for segmentation, and moved the primary success metric selector into a dropdown.

  • Variant Previews

    • ​Viewing variants required navigating to a separate section with multiple clicks, opening excessive tabs. Poor naming conventions often left users unclear about what was actually being tested.

    • Solution​

      • Clicking an ad placement row opens a preview, with the ability to toggle between placements using arrows.

      • The modal contains the specific component values of each placement for quicker recognition.

  • Timeline Graph Hard to Digest

    • Users complained that the chart was hard to digest, as the shaded areas overlapped and lines were often squished.

    • Solution

      • Redesigned the graph and launched it as a design system component.
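
For the progress bar change, the sketch below shows one way to compute the proportional fills with the leader at 100%; the field names are assumptions for illustration, not the production component's implementation.

```typescript
// Illustrative sketch: field names are assumptions, not the real component API.
interface VariantResult {
  label: string;
  metricValue: number; // value of the selected primary success metric
}

// The leader (variant or baseline) fills to 100%; every other bar is
// filled proportionally to the leader for at-a-glance comparison.
function progressWidths(results: VariantResult[]): Record<string, number> {
  const max = Math.max(...results.map((r) => r.metricValue));
  const widths: Record<string, number> = {};
  for (const r of results) {
    widths[r.label] = max > 0 ? (r.metricValue / max) * 100 : 0;
  }
  return widths;
}

// Example: baseline converts at 4.0%, Variant A at 5.0%.
// Variant A renders at 100% width, baseline at 80%.
progressWidths([
  { label: "Baseline", metricValue: 4.0 },
  { label: "Variant A", metricValue: 5.0 },
]);
```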

Post Usability Testing CSAT Comparison

One Platform

2.9 / 5

Knowledge Base

4.7 / 5

+ 62.1 %

Final Details Page Designs

Usability testing on the wireframes was highly successful, with only minor tweaks identified (like column order adjustments). I then refined the mocks into high-fidelity designs.

Details Page Default

final details good.png

Segmentation

Segmentation Group by segment - section open.png

Ad Placement Preview

Preview creatives modal - First Variant.png

Impact

Adoption

  • 50% of teams relied on Knowledge Base monthly → Became the go-to hub for experiment insights.
  • 20% of new experiments were replicated/inspired by Knowledge Base → Accelerated testing and reduced guesswork.

Business Impact

  • 5% uplift in ad performance → Experiments inspired by Knowledge Base consistently improved outcomes.