Watson Health – 2016
Watson for Drug Discovery (WDD) leverages machine learning to process Medline, full-text medical journals, and patent databases to understand the relationships between specific genes, diseases, and drugs. Pharmaceutical companies and academic institutions buy licenses for their researchers to find and learn about new potential drug candidates.
In July of 2016, I was brought onto the team as Lead User Researcher to build out the research program for the alpha launch and envision what it might look like afterward.
In order to prioritize and design effectively, the Design Team needed a system for gathering evaluative, generative, and strategic user feedback that would scale with our user base and fit into Continuous Delivery.
Previously, the Design Team would sit in on checkpoint sessions between clients and the Sales Team to gauge user response. An online wiki hosted user feedback forms.
With no active engagement or incentive, the feedback forms were rarely used.
Checkpoint calls could be insightful but were generally geared toward solving specific issues the client was having with the tool rather than producing a holistic understanding of the whole user experience.
Checkpoint calls were held with a panel of researchers, so depending on the personalities in the room, individual concerns were often magnified or sidelined. As a result, the calls did not provide an accurate representation of user satisfaction and needs.
With this method, it was tough to develop high-level insights or a nuanced understanding of the user, their process, or latent needs.
A Sponsor User Program and newly instrumented metrics were built to provide a rich, nuanced understanding of user behaviors. 36 interviews and months of engagement metrics later, the Design Team has deep insight into the health of the product that it shares with other teams.
I worked with our product owner to construct a Sponsor User program, in which the Design Team would introduce themselves at every new client kickoff.
The interviews are one-on-one open-ended feedback sessions held during the beginning (day 1), learning (day 10), and established use (day 30) periods to create a longitudinal understanding of each user and their journey.
The 1-10-30 interview format lends itself to deeper exploration of individual issues and outcomes. One latent need uncovered this way later became the basis for a key new feature, and uncovered outcomes were used to create a new case study for the Sales Team.
Instrumenting our product with metrics gives us granular information about how each individual user interacts with the product, and interviews can be used to uncover why. The data can be used to group users into different cohorts (e.g. disease researchers vs. drug researchers) to study different user behaviors.
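As a minimal sketch of how instrumented usage data might be grouped into behavioral cohorts like those described above (all event names, feature labels, and the feature-to-cohort mapping here are hypothetical, not WDD's actual schema):

```python
from collections import Counter

# Hypothetical event log from product instrumentation:
# (user_id, feature_used) pairs. Names are illustrative only.
events = [
    ("u1", "gene_network"), ("u1", "gene_network"), ("u1", "predictive_analytics"),
    ("u2", "disease_summary"), ("u2", "disease_summary"),
    ("u3", "drug_similarity"), ("u3", "drug_similarity"), ("u3", "drug_similarity"),
]

# Assumed mapping from a user's dominant feature to a behavioral cohort.
FEATURE_TO_COHORT = {
    "gene_network": "gene researcher",
    "predictive_analytics": "gene researcher",
    "disease_summary": "disease researcher",
    "drug_similarity": "drug researcher",
}

def cohort_for(user_id, events):
    """Assign a user to a cohort based on their most-used feature."""
    counts = Counter(feat for uid, feat in events if uid == user_id)
    top_feature, _ = counts.most_common(1)[0]
    return FEATURE_TO_COHORT[top_feature]

cohorts = {uid: cohort_for(uid, events) for uid in {u for u, _ in events}}
print(cohorts)
```

In practice the "dominant feature" heuristic would be replaced by whatever behavioral signal the interviews validate, but the shape of the analysis (event log in, cohort labels out) stays the same.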
Previously, the Design Team would schedule design review sessions through a user recruitment agency to validate designs.
Recruited participants often lacked experience in these hyper-specific domains, or with drug discovery research tools at all, so their answers were not representative of actual users.
Time was also wasted in each session explaining the aim, interface, and workflow context to brand-new users.
Each interview cost around $500, which was expensive for the team to maintain.
The Design Team now leverages user relationships from the 1-10-30 interviews to conduct surveys and prototype reviews with actual users who have relevant concerns.
Real users speak to exactly how new features would positively or negatively affect their workflow.
With the breadth of knowledge gained from in-person interviews, we split users up by behaviors, which we found roughly corresponded to their domain (gene, disease, or drug researcher).
An example of a new feature that came out of an offhand comment is the “Picking up where I left off” feature:
1-10-30 Day feedback illuminated the need across multiple users. The interviews also provided the context and validation to prioritize it as an urgent feature in the roadmap.
I constructed an As-is and To-be Process Map.
I conducted product research into precedent interactions in other file management systems like Google Drive, Dropbox, and Box.
I built and tested a clickable InVision prototype for feedback with users who had mentioned a need.
I gathered critique and validation for design concepts with a follow-up survey.
User research had been conducted in a waterfall manner: generative research took place only before designing, and evaluative research only after. Once features were built and shipped, there was no insight into how frequently they were used.
Ongoing user research ensures that the Design Team and OM are aligned around the most urgent and impactful user needs. Structured user feedback plays a large role in continuous delivery and (re)prioritization of features.
- Example: a latent need that arose during a Day 1 interview was validated with other users and prioritized into a feature.
- Working alongside users builds client trust and satisfaction with the product development process, and allows the Design Team to tap their expertise in future feature development.
- Success metrics collected from Day 30 interviews informed a case study that empowered the business and sales teams.