Know+ Product Development:
Prototyping, Testing & Implementation

Based on my generative research, we set out to develop a mobile app that would stream our original film productions and feature unique learning tools for unsatisfied knowledge-entertainment seekers.

Project Scope

Timeframe: 12 months

My Role: Lead UX Researcher

Team: Paul (PM), Tony (PM), Zheng (UXD/UI), Devs team (6 members), Chloe (UI), Sen (UX intern)

Methods: User testing (card sorts, A/B, UX benchmarking & qualitative inquiries for key tasks), prototyping, system evaluation

Artefacts: IA, taxonomy, user flows, wireframes and prototypes, testing plans and protocols, UX audit reports

Tools: Figma, Zoom, MS Office



Project Overview

'What are knowledge entertainment consumers seeking to satisfy?'

Kickoff

We had spent 6 months conducting market and user research that established our product requirements and business plan. With funding secured, we were looking to enter the design and iteration phase for Know+.


Read about Know+ Research

Objectives

  1. Create an effective IA that supports both user and business goals


  2. Uncover design issues through testing and evaluation so we could iterate effectively towards the beta version


  3. Benchmark UX (MVP onward) to track progress

Notes

My main role in this phase was research: evaluation, testing, analysis, and communicating feedback through reports and presentations. I collaborated on some low-fi wireframes and prototypes, but the major UX/UI design work was led and completed by our highly experienced contractor.

My discussions with colleagues revealed that no user research had been done, and that no design artefacts had ever been created at the company.

Because of this, part of my work focused on collecting data to create evolvable personas and to document use case scenarios.

Methodology

  • Card sorting

  • Creation of mobile sitemap and user flows

  • Wireframing, prototyping & early user testing

  • System evaluation

  • Contextual inquiry

  • UX benchmarking

Card Sorting

To identify content categories, I first conducted moderated open card sorting on paper in a session with 16 users; each participant grouped 40 cards (selected from our content inventory) into categories. Despite a wide range of choices, careful analysis revealed meaningful patterns and let us establish labels that reflected users' mental models.

Following that, we conducted unmoderated tree testing with 21 new participants, analysing both quantitative and qualitative data to see how well the defined classification and labels worked.

Findings from these sessions helped us determine the app's structure and classify groups and subgroups. I put together a content taxonomy, labelled features, and drafted the IA and a mobile sitemap to visualise navigation, while organising team sessions so members could iterate on these artefacts together. This built a shared sense of ownership and reduced the risk of a bottleneck in the project workflow.
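As an aside for readers who want to reproduce this kind of analysis: below is a minimal Python sketch of how grouping patterns can be surfaced from open card sort data, assuming numpy and scipy are available. The card names and participant sorts are hypothetical placeholders, not our study data.

# Minimal sketch: clustering open card sort results via a co-occurrence matrix.
# Cards and participant sorts are hypothetical placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cards = ["History docs", "Science shorts", "My notes", "Subscriptions", "Payment"]

# Each participant's sort: lists of card indices that were grouped together.
sorts = [
    [[0, 1], [2], [3, 4]],
    [[0, 1, 2], [3, 4]],
    [[0, 1], [2, 3, 4]],
]

# Count how often each pair of cards landed in the same group.
n = len(cards)
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                co[i, j] += 1

# Turn agreement into distance (0 = always grouped together) and cluster hierarchically.
dist = 1.0 - co / len(sorts)
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=0.5, criterion="distance")

for card, label in zip(cards, labels):
    print(f"{card}: candidate category {label}")

A clustering cut like this is only a starting point; participants' own group labels still drive the final category names.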


First sketch of the Know+ information architecture

User flow

Example task: Course search to purchase

To better illustrate different types of information behaviour, I outlined a user flow for each main task within the app, covering interactions such as searching, browsing, registering, subscribing and paying, plus seven other learning and media functions. Presenting these visualised flows helped stakeholders understand how users with varied goals would move through the app, so we could design a system that supports different actions in a range of contexts and lets any target user navigate and interact with the platform intuitively.


Wireframing & user testing

I translated my findings into first sketches in Figma, which went through some tweaks after internal review and discussion against our design artefacts. The low-fidelity wireframes were static and intended for testing, as I wanted to concentrate on collecting user feedback and iterating quickly through new versions. Sticking to basic prototypes helped us avoid feeling wedded to the design before it was finalised.


Next, I conducted user testing on 6 defined tasks with the help of our UX intern. We involved a total of 14 users in 4 rounds of online testing: I held individual sessions where we provided realistic scenarios and observed users interacting with the mockups, taking notes and probing participants when appropriate to collect qualitative feedback. Our analysis identified a list of issues, which we presented to inform iterations.

We prioritised simplifying and clarifying navigation on the main screens, and organised the complex system of content and functions in a digestible way. The findings helped us tackle multi-layer challenges and validate multiple changes: structurally, we optimised the IA to support user and business goals; in the UI, we repositioned layout elements (e.g., logo, fonts, thumbnails, CTAs) to facilitate information scanning.

Series of low-fidelity key screen wireframes (annotations omitted), 2nd iteration

Prototypes

We reached a version where users experienced few difficulties and responded positively during testing. Growing more confident that the wireframes would not require complex redesigns later on, I handed the design over to our contract UX/UI designer, who proceeded to develop high-fidelity prototypes. Thanks to early testing, we avoided investing significant resources in changing the app's IA at later stages.

A quick showcase of the high-fidelity wireframes and notes

User testing: Purchase funnels

To optimise conversion, my focus was to understand the UX of three main purchase funnels—this involved the home, course and payment screens. We set goals for our interaction design:

1. Users are prompted to subscribe or purchase a course as soon as possible
2. Users can easily understand the pricing model, and subscribe or purchase a course
3. Users can easily find specific course info and trailer
4. Users can easily understand course structure

I conducted three rounds of remote tests (one A/B and two moderated usability/UX rounds) with two success metrics for investigating our designs: task completion (how users engaged with the content on two sets of designs) and time to purchase. After each iteration, I tested the same key tasks with a different set of users. Next, I conducted 4 remote contextual inquiries to gather qualitative UX data on navigation and content exploration.
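To make the two metrics concrete, here is a minimal sketch of how task completion and time to purchase could be compared between two design variants. The timing and completion figures are hypothetical placeholders, and a non-parametric test is used because usability samples are small.

# Minimal sketch: comparing time to purchase and task completion across two designs.
# All numbers below are hypothetical placeholders, not study data.
from scipy import stats

time_a = [41.2, 38.5, 52.0, 47.3, 44.1, 39.8, 50.6]  # seconds to purchase, design A
time_b = [33.4, 30.1, 36.7, 29.8, 35.2, 31.5, 34.9]  # seconds to purchase, design B

# Mann-Whitney U: a safer default than a t-test for small usability samples.
stat, p = stats.mannwhitneyu(time_a, time_b, alternative="two-sided")

completion_a = 5 / 7  # completed attempts / total attempts, design A (hypothetical)
completion_b = 7 / 7  # design B (hypothetical)

print(f"Median time A: {sorted(time_a)[len(time_a) // 2]:.1f}s")
print(f"Median time B: {sorted(time_b)[len(time_b) // 2]:.1f}s (p = {p:.3f})")
print(f"Task completion: A = {completion_a:.0%}, B = {completion_b:.0%}")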

Insights

The analysis helped me identify the elements and factors triggering user behaviour. I presented the full findings with informed design suggestions for the designer and developers to address: we removed distractors, highlighted key content, increased the discoverability of functions, and directed clicks towards key CTAs (the free trial and purchase buttons). In addition, given user feedback on content language, I rewrote the app's copy in a different tone and tested it for engagement.

Example: Notes, home screen

Example: Raw annotation, course screen

Example: Iterated flow, payment screen

PAYMENT SCREEN


  1. Too many tabs—users were confused about the payment options and hence hesitant to click. Many were unsure how to proceed to payment as the icons didn't translate (text labels needed)


  2. Unclear what 'Wallet' was—users didn't know they could top up or that coupons were stored there


  3. No in-app top-up options—nearly all users felt 'annoyed' or 'confused' when redirected to a webpage

COURSE SCREEN


  1. Colours—users thought the blue areas would expand on tap to reveal more info; clicks here interrupted their flows


  2. Hidden CTA—users couldn't find the small 'play' button for the trial lesson. ~50% failed to locate it; <35% succeeded without verbal prompts


  3. Crowded content—users took a long time to scroll through the showcased episodes, reported feeling overwhelmed by the heavy content and were discouraged from continuing

OVERALL


  1. Tone—users found the form of address impersonal and formal; they felt 'too stupid to take these courses'


  2. Image overload—users were confused about who the people were and had no reference until tapping into the screen; they would rather see the courses than speakers' faces


  3. Too minimalist—users were not used to so much empty space and wanted more information packed into it

Example issues uncovered from 2nd user testing

Commercial impacts

We brought the average time to purchase down by 2.86 seconds, and the share of users reaching the trial/purchase section within the first 15 seconds increased from under 50% to nearly 90%. Throughout the process, users reported significantly more positive feedback on ease of navigation in the new versions, along with higher satisfaction and a better impression of the Know+ brand.


After the MVP release, I continued post-launch testing by tracking user activity on the backend. While our original home screen signalled two obvious CTAs, I noticed that over 75% of users tended to click the ad at the top of the screen.

I suggested linking that image directly to purchase, creating another intuitive route. The implementation immediately shortened new customers' purchase time and, to our surprise, increased our conversion rate by a remarkable 26% within the first two weeks of the change.
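As an illustration of the backend analysis behind this observation, here is a minimal sketch of tallying where new users click first on the home screen. The event names and records are hypothetical placeholders, not our actual tracking schema.

# Minimal sketch: share of first clicks per home-screen element from backend events.
# Event records and target names are hypothetical placeholders.
from collections import Counter

first_click_events = [
    {"user": "u1", "target": "top_ad"},
    {"user": "u2", "target": "cta_free_trial"},
    {"user": "u3", "target": "top_ad"},
    {"user": "u4", "target": "top_ad"},
    {"user": "u5", "target": "cta_purchase"},
    {"user": "u6", "target": "top_ad"},
]

counts = Counter(e["target"] for e in first_click_events)
total = sum(counts.values())
for target, count in counts.most_common():
    print(f"{target}: {count}/{total} first clicks ({count / total:.0%})")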

Improved payment screen

Improved payment (cont.)

Improved course screen

Improved course (cont.)

Contextual inquiries: Lesson

The lesson section was where we expected users to spend most of their time, so we set the following goals:

1. Users can easily take control of media tools and formats
2. Users can easily find and use our 4 learning features
3. Users can easily find and use the sharing function (for social media)


We considered diary studies and contextual inquiries, deciding on the latter (the former carried a higher risk of dropout for our target demographics). It took 4 weeks to plan, recruit, conduct 6 sessions (7 participants) and analyse the data. I followed participants in the field through part of their day (with app activity captured via mobile screen recording), using an observation outline and an inquiry topic guide.

Insights

The analysis revealed multitasking tendencies halfway through the mini video lesson, owing to a mix of goals and rationales:

  • To switch to audio only (visual attention needed in their physical environment)

  • To pause and resume videos (taking notes without missing content)

  • To change video settings (subtitles, volume, HD and playback speed)

  • To skim other episodes (wanting the bigger picture)

  • To find transcripts (subtitles were too choppy and difficult to follow)

  • To exit the app (to answer messages)



Users found all this scrolling and clicking around taxing. We also observed low engagement with 'notes':

  • 2/3 of users didn't know how to create notes without a sample

  • >50% of users felt exhausted after a few minutes

  • 3 users wished they could save notes in one place (notes were scattered across different lessons)

Interaction design flow

Lesson page interaction explained

Impacts

We needed to support users' multitasking behaviour by minimising interruptions. In light of this, I suggested design changes to improve the UX:

  • To create a video/audio toggle at the top of the screen and keep media functions in a popup overlay, enabling fast mode switches and setting changes


  • To create 3 floating tabs below the lesson video—'course contents', 'notes' and 'transcripts'—maximising navigation clarity and ease, reducing cognitive load and steps in the flow


  • To replace the notes function with knowledge cards, featuring pre-made templates, automatic video locating, text and background personalisation, 'save to my collection', and a sharing function. This would simplify note-taking while enabling personalised lesson review


My recommendations were implemented in time, driving user satisfaction up by 1.8 points in a survey (SUS plus open items) sent to 300+ users. In addition, knowledge cards became a highlight feature, with over 80% of users describing the tool as 'fun', 'helpful for learning', creating a 'sense of achievement', and 'easily shareable'.
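For context on the benchmark itself, the standard SUS score is computed as sketched below: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled to 0-100. The example responses are hypothetical, not our survey data.

# Minimal sketch: standard SUS scoring on a 0-100 scale.
# Example responses are hypothetical placeholders.

def sus_score(responses):
    """responses: the 10 SUS answers on a 1-5 scale, in questionnaire order."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even items negative
    return total * 2.5  # scale the 0-40 sum to 0-100

participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [5, 1, 4, 2, 5, 1, 4, 1, 5, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Mean SUS: {sum(scores) / len(scores):.1f}")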

Lesson screen toggle

Popup media functions

Lesson screen knowledge card

Floating tab: Notes

Lesson screen transcript

Floating tab: Transcripts

Lesson screen showcase

Toggle on screen top

40+

Research sessions

4

Methods employed

300+

Users involved

26%

Increased conversion

Next steps

We planned to continue researching opportunities, having drawn valuable insights throughout product development. My strong passion for UX research was ignited by this 30-month project, after which I went on to complete a second MSc in Human-Computer Interaction Design at City, University of London.

I've now settled in London and am handing Knewtopia over to its other, Shanghai-based co-founder.