
Timeframe: 6 weeks
My Role: Lead UX Researcher
Team: Ed (PM), Anita (QA), Yuga (PD), Aya (Marketing Manager)
Methods: Contextual inquiry, moderated in-person usability testing, qualitative analysis
Artefacts: Research session protocol, scripts, note-taking templates, consent form, findings report
Tools: Miro, Zoom, MS Office, Confluence

Project Overview
Research question: 'How would users react to our solution?'
Process: stakeholder interviews → contextual inquiry + moderated in-person usability testing → data analysis → report writing & presentation

As the company's new and first UX Researcher, I made it a priority to understand the project's context, history and goals. I also wanted to outline communication preferences and align on a shared vision for success before jumping into research planning.
I approached our Senior Engineer, Lead Developer, Product Manager, and Head of Software for entertainment products, and conducted informal semi-structured 1:1 interviews, following a discussion guide to keep conversations deep and free-flowing while centred on four major topics:
Project history and priorities to address in Shōgun
ROM guidance panel
Success metrics
Expertise, process and workflow
With so few interviews, lightweight analysis was enough to surface the major themes quickly. High-level insights from these internal interviews were organised into the key Shōgun UX research document and shared in the repository for the wider teams to access and evolve over time. This sparked a habit of documenting informal discussions across the Dev and Product teams.
Gathering all internal stakeholders' questions about the ROM Guidance Panel (hereafter 'RGP'), I rephrased them as research questions ('RQs'). The RQs were synthesised under two main categories, pertaining respectively to general UX and to product-specific reactions. This called for a combination of methods that would allow us to effectively uncover issues and opportunities at both levels of granularity.
GOAL 1
Unveil the human, environmental, and contextual factors affecting subject calibration processes, results, and user experiences
GOAL 2
Capture behaviour and interactions in context with metrics to assess RGP impacts, to uncover issues and areas for improvement
To answer the RQs, I considered field research necessary and hence worked with our entertainment PM to arrange a series of visits. Given the rare opportunities we had to see customers on site, I decided to combine contextual inquiry and usability testing to maximise findings in a single session.
I created the protocol, a script with a comprehensive list of tasks and follow-up questions, a note-taking template, a metrics guideline, a client-facing presentation, and a consent form to share in advance. The 1.5-hour session was outlined in three parts.
Immediately after each session, I debriefed with the team to organise rich summaries of digital and paper notes and audio/video files, and to surface preliminary codes. The recordings were transcribed via Otter.ai and manually edited; all data were collated, formatted for consistency, and imported into NVivo for systematic viewing and processing. Metrics were gathered in a single Excel file.
I first analysed the metrics (time on task, errors, SUS), then conducted thematic analysis over a week, coding user statements as well as observer descriptions. I started with predefined codes for the larger categories ('subject calibration context' and 'RGP'), which facilitated theme identification while leaving room to explore the data and challenge preconceived notions.
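As an aside, the SUS portion of the metrics analysis follows a standard, fully mechanical scoring rule, which can be sketched as below. This assumes the conventional 10-item questionnaire with 1-5 responses; the function name and the sample responses are illustrative, not data from this study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 responses.

    Odd-numbered items are positively worded (contribution: response - 1);
    even-numbered items are negatively worded (contribution: 5 - response).
    The summed contributions are scaled by 2.5 to map 0-40 onto 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i == 0 is item 1 (odd)
    return total * 2.5

# Illustrative responses for one hypothetical participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Per-participant scores like this are then typically averaged across sessions, which is how a single SUS figure per product version ends up in the metrics file.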
Next, I reviewed the extracts of coded data under the devised themes to reform or refine them for validity, coherence, clarity and relevance: that is, whether each theme was genuinely derived from the data, appropriately grouped, adequately descriptive of our understanding, and possessed of the explanatory power to answer our RQs.
Finally, I wanted to reduce my personal biases in interpreting the data. I involved colleagues across Product and UX to critique the themes with fresh eyes over one week, providing a main guideline:
Is each theme well supported by the data (saturated with many instances)?
Are the themes distinct from each other, and relevant to building Shōgun?
Looking at the data, do you agree with the themes? Is anything missing from the picture?
I fleshed out 9 major themes, each comprising 3-5 subgroups, to answer our RQs under the three research objectives:
Calibration journeys and contexts
RGP usability and UX
Opportunities for Shōgun
My findings informed key iteration decisions surrounding the v1.11 beta prototype, helping the Product and Dev teams to evaluate their backlog and prioritise tasks based on evidence and design impacts.

Example themes (descriptions omitted):
User reaction to RGP: Satisfaction; Issues & pain points; UI & interaction; IA & labels
Calibration UX factors: Workflow; Setup; Distractions; Shooting considerations
Learning behaviour & touchpoints: Novice; Expert; Non-technician
Report & Presentation
For NDA reasons, I am unable to share the final report and presentation, or specific metrics, product insights, design suggestions and artefacts created in this project. Click on the following to see (1) an early report of synthesised data from a single customer visit, or (2) an example user journey that I would create to keep stakeholders informed of initial findings.


