Research & Ethnography
What is unspoken
A nested classificatory hierarchy
Unfinished
Record them all
Recommendations for field notes
In Defense of Browsing
The illustrated guide to a Ph.D.
When users never use the features they asked for
An Article by Austin Z. Henley
We deployed our tool. Almost no one used it.
The handful that did use it, used it once or twice and barely interacted with it. After a few days, zero people were using it.
Why did they tell me they wanted these features?
Why Most Published Research Findings Are False
A Research Paper by John P.A. Ioannidis
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
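Ioannidis makes this concrete with the positive predictive value (PPV) of a claimed finding: the share of "significant" results that reflect true relationships. Here is a minimal sketch of that calculation, following the paper's formula with R (pre-study ratio of true to null relationships), α (Type I error rate), β (Type II error rate), and u (bias); the example numbers below are illustrative, not drawn from the paper:

```python
def ppv(R, alpha=0.05, beta=0.2, bias=0.0):
    """Positive predictive value of a claimed research finding,
    per Ioannidis (2005). Of R/(R+1) true relationships, power
    (1 - beta) are detected, and bias rescues a fraction of the
    misses; of 1/(R+1) null relationships, alpha come up as false
    positives, and bias converts more of the rest."""
    true_pos = (1 - beta) * R + bias * beta * R
    false_pos = alpha + bias * (1 - alpha)
    return true_pos / (true_pos + false_pos)

# A field probing mostly long shots: 1 true relationship per 10 nulls.
print(ppv(R=0.1, beta=0.2))            # ~0.62 with well-powered studies
print(ppv(R=0.1, beta=0.8))            # ~0.29 with small, underpowered studies
print(ppv(R=0.1, beta=0.2, bias=0.3))  # ~0.20 once bias creeps in
```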
How can we develop transformative tools for thought?
A Research Paper by Andy Matuschak & Michael NielsenConventional tech industry product practice will not produce deep enough subject matter insights to create transformative tools for thought.
...The aspiration is for any team serious about making transformative tools for thought. It’s to create a culture that combines the best parts of modern product practice with the best parts of the (very different) modern research culture. You need the insight-through-making loop to operate, whereby deep, original insights about the subject feed back to change and improve the system, and changes to the system result in deep, original insights about the subject.
A hypothesis is a liability
A Research Paper by Itai Yanai & Martin LercherThere is a hidden cost to having a hypothesis. It arises from the relationship between night science and day science, the two very distinct modes of activity in which scientific ideas are generated and tested, respectively [1, 2]. With a hypothesis in hand, the impressive strengths of day science are unleashed, guiding us in designing tests, estimating parameters, and throwing out the hypothesis if it fails the tests. But when we analyze the results of an experiment, our mental focus on a specific hypothesis can prevent us from exploring other aspects of the data, effectively blinding us to new ideas.
Keep digging
An Article by Ryan Singer
The hardest thing about customer interviews is knowing where to dig. An effective interview is more like a friendly interrogation. We don’t want to learn what customers think about the product, or what they like or dislike — we want to know what happened and how they chose... To get those answers we can’t just ask surface questions, we have to keep digging back behind the answers to find out what really happened.
Fast Path to a Great UX – Increased Exposure Hours
An Article by Jared Spool
As we’ve been researching what design teams need to do to create great user experiences, we’ve stumbled across an interesting finding. It’s the closest thing we’ve found to a silver bullet when it comes to reliably improving the designs teams produce.
The solution? Exposure hours. The number of hours each team member is exposed directly to real users interacting with the team’s designs or the team’s competitor’s designs. There is a direct correlation between this exposure and the improvements we see in the designs that team produces.
Weighing up UX
An Article by Jeremy Keith
Metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.
Monkeys testing random designs
A Tweet by Jared Spool
A/B testing is an effective approach to use science to design and deliver deeply-frustrating user experiences.
A/B testing without upfront research is just random monkeys testing random designs to see which of those designs do “best” against random criteria.
If drug testing was actually implemented like most A/B tests, you’d give 2 drugs to 2 groups of people and pick the “winner” by whichever group had fewer deaths.
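A quick simulation makes Spool's point concrete. This is an illustrative sketch, not his: two variants with the same true conversion rate, and a naive procedure that crowns whichever converts more in the sample:

```python
import random

def naive_ab_test(rate_a, rate_b, n):
    """Declare a 'winner' by raw conversion counts, no significance test."""
    conv_a = sum(random.random() < rate_a for _ in range(n))
    conv_b = sum(random.random() < rate_b for _ in range(n))
    return "A" if conv_a >= conv_b else "B"

random.seed(0)
# Two identical designs: any 'winner' is pure noise.
trials = 1000
b_wins = sum(naive_ab_test(0.05, 0.05, n=200) == "B" for _ in range(trials))
print(f"B declared the better design in {b_wins / trials:.0%} of experiments")
# Roughly half the time. The procedure announces a winner every run,
# even when there is nothing to find.
```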
You and Your Research
A Speech by Richard Hamming
This talk centered on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?"
The Fidelity Curve
An Article by Ryan Singer
How do we choose which level of fidelity is appropriate for a project?
I think about it like this: The purpose of making sketches and mockups before coding is to gain confidence in what we plan to do. I’m trying to remove risk from the decision to build something by somehow “previewing” it in a cheaper form. There’s a trade-off here. The higher the fidelity of the mockup, the more confidence it gives me. But the longer it takes to create that mockup, the more time I’ve wasted on an intermediate step before building the real thing.
I like to look at that trade-off economically. Each method reduces risk by letting me preview the outcome at lower fidelity, at the cost of time spent on it. The cost/benefit of each type of mockup is going to vary depending on the fidelity of the simulation and the work involved in building the real thing.
Four levels of fidelity
Suppose we have four levels of fidelity…
- Rough sketch (on paper or an iPad)
- Static mock-up (e.g. Photoshop or Sketch)
- Interactive mock-up (e.g. Framer, InVision)
- Working code prototype (HTML/CSS, iOS views)
Depending on the feature you’re working on, these levels of fidelity take different amounts of time to create. If you plot them in terms of time to build versus confidence gained, you could imagine something like a per-feature fidelity curve.
Time to build versus confidence gained
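The curve is easy to sketch for yourself. A minimal matplotlib example; the hours and confidence scores below are invented for illustration, for a single hypothetical feature:

```python
import matplotlib.pyplot as plt

# Hypothetical per-feature numbers: hours to produce each artifact,
# and the rough confidence (0-1) it buys before building for real.
levels = ["Rough sketch", "Static mock-up", "Interactive mock-up", "Working code"]
hours = [0.5, 3, 10, 40]
confidence = [0.3, 0.5, 0.8, 1.0]

plt.plot(hours, confidence, marker="o")
for level, x, y in zip(levels, hours, confidence):
    plt.annotate(level, (x, y))
plt.xlabel("Time to build (hours)")
plt.ylabel("Confidence gained")
plt.title("A per-feature fidelity curve")
plt.show()
```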
Take a simple CRUD web UI, where you’re just navigating between screens. When the design is that simple, building the real version doesn’t take much more time than mocking it up. If you built an interactive mock first, you would spend roughly twice as much time in total without gaining much confidence in return.
Contrast that with a complicated JavaScript interaction. Or a native iOS feature that requires programmer time to build out. If it takes substantially more time to build the real code version, then it may be smart to do an interactive mockup first.
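One crude way to make that economic judgment explicit; the helper and all of the numbers here are invented for illustration, not from the article:

```python
def mockup_worth_it(mock_hours, real_hours, risk_reduced):
    """Is the confidence a mock-up buys worth the time it adds?
    risk_reduced: estimated fraction of real-build hours the preview
    saves by catching a wrong direction early (a judgment call)."""
    return risk_reduced * real_hours > mock_hours

# Simple CRUD screens: the mock costs nearly as much as the real thing.
print(mockup_worth_it(mock_hours=8, real_hours=10, risk_reduced=0.3))  # False
# Complex JavaScript interaction: the real build dwarfs the mock.
print(mockup_worth_it(mock_hours=8, real_hours=80, risk_reduced=0.3))  # True
```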