Know+ Product Development:
Prototyping, Testing & Implementation
Based on my generative research, we set out to develop a mobile app that would stream our original film productions and offer unique learning tools for unsatisfied knowledge entertainment seekers

Project Scope
Timeframe: 12 months
My Role: Lead UX Researcher
Team: Paul (PM), Tony (PM), Zheng (UXD/UI), dev team (6 members), Chloe (UI), Sen (UX intern)
Methods: User testing (card sorts, A/B, UX benchmarking & qualitative inquiries for key tasks), prototyping, system evaluation
Artefacts: IA, taxonomy, user flows, wireframes and prototypes, testing plans and protocols, UX audit reports
Tools: Figma, Zoom, MS Office



Project Overview
'What are knowledge entertainment consumers seeking to satisfy?'
Kickoff
We had spent six months conducting market and user research that established our product requirements and business plan. With funding secured, we were ready to enter the design and iteration phase for Know+.
Read about Know+ Research
Objectives
Create an effective IA that supports both user and business goals
Uncover design issues through testing and evaluation to iterate effectively towards the beta version
Benchmark UX (from MVP onward) to track progress
Notes
My main role at this phase was research: evaluation, testing, analysis, and communicating feedback through reports and presentations. I collaborated on some low-fi wireframes and prototypes, but the major UX/UI design work was led and completed by our highly experienced contractor.
Methodology
Card sorting
Creation of mobile sitemap and user flows
Wireframing, prototyping & early user testing
System evaluation
Contextual inquiry
UX benchmarking
Card Sorting
To identify content categories, I first conducted moderated open card sorting on paper. In a session with 16 users, each participant grouped 40 cards (selected from our content inventory) into categories. Despite the range of choices, careful analysis revealed meaningful patterns, and we established labels that reflected users' mental models. Following that, we conducted unmoderated tree testing with 21 new participants to see how well the defined classification and labels worked.
Quantitative and qualitative findings helped us determine the app's structure and categories. I put together a content taxonomy and drafted a mobile sitemap to visualise navigation, while organising team sessions to iterate on these IA artefacts together; this created a shared sense of ownership and reduced the risk of a bottleneck in the project workflow.

First sketch of Know+ IA
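To make the analysis step concrete, here is a minimal Python sketch of one common way to cluster open card sort data, with hypothetical cards and participant groupings; it illustrates the general technique, not our actual scripts.

```python
from collections import defaultdict
from itertools import combinations

from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical open card sort data: one dict per participant, mapping
# the participant's own group label to the cards placed in that group.
sorts = [
    {"Science": ["Astronomy 101", "Genetics"], "Arts": ["Film History"]},
    {"Learn": ["Astronomy 101", "Genetics", "Film History"]},
    {"Space": ["Astronomy 101"], "Bio": ["Genetics"], "Film": ["Film History"]},
]

cards = sorted({c for s in sorts for group in s.values() for c in group})

# Count how often each pair of cards was placed in the same group.
together = defaultdict(int)
for s in sorts:
    for group in s.values():
        for a, b in combinations(sorted(group), 2):
            together[(a, b)] += 1

# Build a condensed distance matrix: pairs grouped together by many
# participants are 'close'; pairs never grouped together are 'far'.
n = len(sorts)
distances = [
    1.0 - together[(a, b)] / n
    for i, a in enumerate(cards)
    for b in cards[i + 1:]
]

# Average-linkage hierarchical clustering suggests candidate categories.
tree = linkage(distances, method="average")
for card, cluster in zip(cards, fcluster(tree, t=0.6, criterion="distance")):
    print(f"{card}: cluster {cluster}")
```

In practice, numbers like these would be weighed alongside qualitative observations from the sessions before settling on labels.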
User flow

Example task: Course search to purchase
To illustrate different types of information behaviour, I outlined a user flow for each main task within the app, covering interactions such as searching, browsing, registering, subscribing, paying, and seven other learning and media functions. I presented these visualisations to help stakeholders understand how users with varied goals would move through the app, so we could design a system that supports different actions in a range of contexts and lets any target user navigate and interact with the platform intuitively.
Wireframing & user testing
I translated the findings into first sketches in Figma, which were tweaked after internal review against our design artefacts. The low-fidelity wireframes were static and intended for testing, as I wanted to concentrate on collecting user feedback and iterating quickly through new versions. Sticking to basic prototypes helped us avoid feeling wedded to the design before it was finalised.
Next, I conducted user testing on 6 defined tasks with the help of our UX intern. We involved a total of 14 users in 4 rounds of online testing: I held individual sessions where we provided realistic scenarios and observed users interacting with the mockups, taking notes and probing participants when appropriate to collect qualitative feedback. Our analysis identified a list of issues, which we presented to inform iterations.
We prioritised simplifying and clarifying navigation on the main screens, and organised the complex system of content and functions in a digestible way. The findings helped us tackle multi-layer challenges and validate multiple changes: structurally, we optimised the IA to support user and business goals; in the UI, we repositioned layout elements (e.g., logo, fonts, thumbnails, CTAs) to facilitate information scanning.

Series of low-fidelity key-screen wireframes (annotations omitted), 2nd iteration
Prototypes
We reached a version where users experienced little difficulty and responded positively during testing. Growing more confident that the wireframes would not require complex redesigns later, I handed the design over to our contract UX/UI designer, who developed the high-fidelity prototypes. Thanks to early testing, we avoided investing significant resources in changing the app's IA at later stages.
A quick showcase of the high-fidelity wireframes and notes
User testing: Purchase funnels
To optimise conversion, my focus was to understand the UX of the three main purchase funnels, involving the home, course and payment screens. We set goals for our interaction design:
1. Users are prompted to subscribe or purchase a course as soon as possible
2. Users can easily understand the pricing model, and subscribe or purchase a course
3. Users can easily find specific course info and trailers
4. Users can easily understand course structure
I conducted three rounds of remote tests (one A/B and two moderated usability/UX rounds) with two success metrics: task completion (how users engaged with the content across the two sets of designs) and time to purchase. After each iteration, I asked a fresh set of participants to complete the same key tasks. I then conducted 4 remote contextual inquiries to gather qualitative UX data on navigation and content exploration.
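As a rough illustration of how those two metrics can be tallied per design variant, here is a small Python sketch; the session records and field names are hypothetical, not our actual analysis pipeline.

```python
from statistics import median

# Hypothetical session records from the A/B round: one dict per
# participant, with the design variant shown, whether they completed
# the purchase task, and their time to purchase in seconds.
sessions = [
    {"variant": "A", "completed": True, "time_to_purchase": 48.2},
    {"variant": "A", "completed": False, "time_to_purchase": None},
    {"variant": "B", "completed": True, "time_to_purchase": 31.5},
    {"variant": "B", "completed": True, "time_to_purchase": 35.9},
]

for variant in ("A", "B"):
    group = [s for s in sessions if s["variant"] == variant]
    done = [s for s in group if s["completed"]]
    completion_rate = len(done) / len(group)
    median_time = median(s["time_to_purchase"] for s in done)
    print(f"Variant {variant}: "
          f"completion {completion_rate:.0%}, "
          f"median time to purchase {median_time:.1f}s")
```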
Insights
The analysis helped me identify the elements and factors that triggered confusion or drop-off. I presented comprehensive findings with informed design suggestions for the designer and devs to address: we removed distractors, highlighted content, increased the discoverability of functions, and directed clicks to key CTAs (free trial / purchase buttons). In addition, given user feedback on the content language, I rewrote the app's text in a different tone and tested it for engagement.

Example: Notes, home screen

Example: Raw annotation, course screen

Example: Iterated flow, payment screen
PAYMENT SCREEN
Too many tabs: users were confused by the payment options and hesitant to click. Many were unsure how to proceed to payment because the icons alone didn't communicate their meaning (text labels needed)
Unclear 'Wallet': users didn't know they could top up or that coupons were stored there
No in-app top-up option: nearly all users felt 'annoyed' or 'confused' when redirected to a webpage
COURSE SCREEN
Colours: users expected the blue areas to expand on tap and reveal more info; taps here interrupted their flow
Hidden CTA: users struggled to find the small 'play' button for the trial lesson. ~50% failed to locate it; <35% succeeded without verbal prompts
Crowded content: scrolling through the showcased episodes took a long time; users reported feeling overwhelmed by the heavy content and discouraged from continuing
OVERALL
Tone: users found the form of address impersonal and formal; some felt 'too stupid to take these courses'
Image overload: users were confused about who the pictured people were and had no reference until tapping through. They would rather see the courses than speakers' faces
Too minimalist: users were unaccustomed to so much empty space and wanted more information packed into it
Example issues uncovered in the 2nd round of user testing
Commercial impacts
We brought the average time to purchase down by 2.86 seconds, and the share of users reaching the trial/purchase section within the first 15 seconds rose from under 50% to nearly 90%. Across iterations, users gave markedly more positive feedback on ease of navigation, along with higher satisfaction and a stronger impression of the Know+ brand.
After the MVP release, I continued post-launch testing by tracking user activity on the backend. Although the original home screen presented two obvious CTAs, I noticed that over 75% of users tended to tap the ad at the top of the screen.
I suggested linking that image directly to purchase, creating another intuitive route. The change immediately shortened new customers' time to purchase and, to our delight, increased our conversion rate by 26% within the first two weeks.
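The backend check behind that observation can be as simple as tallying which element receives each session's first tap, sketched below with hypothetical event data and element names.

```python
from collections import Counter

# Hypothetical backend events from the home screen: each record notes
# the session and which element received the user's first tap.
first_taps = [
    {"session": "s1", "element": "top_ad"},
    {"session": "s2", "element": "top_ad"},
    {"session": "s3", "element": "cta_free_trial"},
    {"session": "s4", "element": "top_ad"},
]

counts = Counter(event["element"] for event in first_taps)
total = sum(counts.values())
for element, n in counts.most_common():
    print(f"{element}: {n / total:.0%} of first taps")
```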

Improved payment screen

Improved payment (cont.)

Improved course screen

Improved course (cont.)
Contextual inquiries: Lesson
The lesson section was meant to be where users would spend most of their time, so we set the following goals:
1. Users can easily control media tools and formats
2. Users can easily find and use our 4 learning features
3. Users can easily find and use the sharing function (for social media)
We considered diary studies and contextual inquiries, and chose the latter (the former carried a higher risk of dropout among our target demographics). It took 4 weeks to plan, recruit, conduct 6 sessions (with 7 participants) and analyse the data. I shadowed participants in the field through part of their day (app activity recorded on their mobile screens), guided by an observation outline and an inquiry topic guide.
Insights
The analysis revealed multitasking tendencies halfway through the mini video lessons, driven by a mix of goals and rationales:
To switch to audio only (visual attention needed in physical environments)
To pause and resume videos (taking notes without missing content)
To change video settings (subtitles, volume, HD and playback speed)
To skim other episodes (wanting the bigger picture)
To find transcripts (subtitles too choppy and difficult to follow)
To exit the app (to answer messages)
Users found all this scrolling and clicking around taxing. We also observed low engagement with the 'notes' feature:
2/3 of users didn't know how to create notes without a sample
>50% of users felt exhausted after a few minutes
3 users wanted their notes saved in one place (they were currently scattered across different lessons)

Lesson page interaction explained
Impacts
We needed to support users' multitasking behaviour by minimising interruptions. In light of this, I suggested design changes to improve the UX:
To create a video/audio toggle at the top of the screen and keep media functions in a popup overlay, enabling fast mode switches and setting changes
To create 3 floating tabs below the lesson video ('course contents', 'notes' and 'transcripts'), maximising navigation clarity and reducing cognitive load and steps in the flow
To replace the notes function with knowledge cards, featuring pre-made templates, automatic video locating, text and background personalisation, 'save to my collection', and sharing. This would simplify note-taking while enabling personalised lesson review
My recommendations were implemented on schedule, driving user satisfaction up by 1.8 points in a survey (SUS plus open items) of 300+ users. In addition, knowledge cards became a highlight feature, with over 80% of users describing the tool as 'fun', 'helpful for learning', 'easily shareable', and giving a 'sense of achievement'.
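For reference, the SUS part of that survey follows the standard scoring rule (odd items are positively worded, even items negatively worded, and the sum is scaled to 0-100); below is a minimal sketch with hypothetical responses.

```python
def sus_score(responses):
    """Score one completed SUS questionnaire (10 items, 1-5 scale)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Hypothetical responses from two participants.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
print(sus_score([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]))  # 50.0
```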

Popup media functions

Floating tab: Notes

Floating tab: Transcripts

Toggle on screen top
40+
Research sessions
4
Methods employed
300+
Users involved
26%
Increased conversion
Next steps
We planned to continue researching new opportunities, having drawn valuable insights throughout product development. This 30-month project ignited my passion for UX research, and I went on to complete a second MSc in Human-Computer Interaction Design at City, University of London.
I've now settled in London and am handing Knewtopia over to my Shanghai-based co-founder.
Explore other projects

Generative Research: Mocap System
Analysing how users in the life sciences community adopt motion capture technology, employing qualitative methods—assumption mapping, field observations, inquiries, and workshops—to create artefacts that ground teams with data in a product redesign.
Read about Nexus research

Discovery: Uncovering Insights
Delving into the depths of user experiences in China's knowledge entertainment market to discover behaviours and desires, identify pain points, explore market gaps, and inspire a sustainable offering that fits both user and business needs.
Read about Know+ discovery

HCI: From Theory to Application
Applying human-computer interaction theories in projects to showcase my knowledge and skills in user research, system evaluation and audit, inclusive design, interaction design, IA, and web app development.
Read about academic projects