Research & Ethnography
What is unspoken
A nested classificatory hierarchy
Unfinished
Record them all
Recommendations for field notes
In Defense of Browsing
The illustrated guide to a Ph.D.
When users never use the features they asked for
An Article by Austin Z. Henley
We deployed our tool. Almost no one used it.
The handful that did use it, used it once or twice and barely interacted with it. After a few days, zero people were using it.
Why did they tell me they wanted these features?
Why Most Published Research Findings Are False
A Research Paper by John P.A. Ioannidis
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
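A minimal sketch of the relationship the paper formalizes: the post-study probability that a claimed finding is true (the positive predictive value), in terms of power 1 − β, significance threshold α, and R, the pre-study ratio of true to null relationships probed in a field. The numbers below are illustrative only, and the sketch omits the paper's additional terms for bias and multiple competing teams.

```python
def positive_predictive_value(power: float, alpha: float, R: float) -> float:
    """Post-study probability that a claimed finding is true:
    PPV = (1 - beta) * R / (R - beta * R + alpha), with power = 1 - beta."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Illustrative only: a well-powered study (80% power, alpha = 0.05) in a field
# where only 1 in 10 probed relationships is real yields PPV of roughly 0.62,
# i.e. close to 4 in 10 "significant" findings would still be false.
print(positive_predictive_value(power=0.80, alpha=0.05, R=0.1))
```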
How can we develop transformative tools for thought?
A Research Paper by Andy Matuschak & Michael Nielsen
Conventional tech industry product practice will not produce deep enough subject matter insights to create transformative tools for thought.
...The aspiration, for any team serious about making transformative tools for thought, is to create a culture that combines the best parts of modern product practice with the best parts of the (very different) modern research culture. You need the insight-through-making loop to operate, whereby deep, original insights about the subject feed back to change and improve the system, and changes to the system result in deep, original insights about the subject.
A hypothesis is a liability
A Research Paper by Itai Yanai & Martin Lercher
There is a hidden cost to having a hypothesis. It arises from the relationship between night science and day science, the two very distinct modes of activity in which scientific ideas are generated and tested, respectively [1, 2]. With a hypothesis in hand, the impressive strengths of day science are unleashed, guiding us in designing tests, estimating parameters, and throwing out the hypothesis if it fails the tests. But when we analyze the results of an experiment, our mental focus on a specific hypothesis can prevent us from exploring other aspects of the data, effectively blinding us to new ideas.
Keep digging
An Article by Ryan Singer
The hardest thing about customer interviews is knowing where to dig. An effective interview is more like a friendly interrogation. We don’t want to learn what customers think about the product, or what they like or dislike — we want to know what happened and how they chose... To get those answers we can’t just ask surface questions, we have to keep digging back behind the answers to find out what really happened.
Fast Path to a Great UX – Increased Exposure Hours
An Article by Jared Spool
As we’ve been researching what design teams need to do to create great user experiences, we’ve stumbled across an interesting finding. It’s the closest thing we’ve found to a silver bullet when it comes to reliably improving the designs teams produce.
The solution? Exposure hours. The number of hours each team member is exposed directly to real users interacting with the team’s designs or the team’s competitor’s designs. There is a direct correlation between this exposure and the improvements we see in the designs that team produces.
Weighing up UX
An Article by Jeremy Keith
Metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.
Monkeys testing random designs
A Tweet by Jared Spool
A/B testing is an effective approach to use science to design and deliver deeply-frustrating user experiences.
A/B testing without upfront research is just random monkeys testing random designs to see which of those designs do “best” against random criteria.
If drug testing was actually implemented like most A/B tests, you’d give 2 drugs to 2 groups of people and pick the “winner” by whichever group had fewer deaths.
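To make the caricature concrete, here is a minimal sketch, with made-up numbers, of picking a "winner" from raw counts alone, next to the most basic two-proportion check. Even passing that check would not substitute for the upfront research the tweet argues for.

```python
import math

# Hypothetical counts: the naive approach crowns a winner from raw rates,
# the way the drug-trial caricature picks whichever group had fewer deaths.
conversions_a, visitors_a = 52, 1000   # variant A
conversions_b, visitors_b = 61, 1000   # variant B

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b
naive_winner = "B" if rate_b > rate_a else "A"

# Two-proportion z-test (pooled): the minimum check before declaring a winner.
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se

print(f"naive winner: {naive_winner}, rates {rate_a:.3f} vs {rate_b:.3f}")
print(f"z = {z:.2f}  (|z| < 1.96: the 'win' is indistinguishable from noise)")
```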
You and Your Research
A Speech by Richard Hamming
This talk centered on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?"
Deadlines are bullshit
In software development, deadlines are a necessary evil. It is important to understand when they are necessary, and it is important to understand why they are evil.
- External vs. internal deadlines
- Why are internal deadlines evil?
- Engineers who love their work
External vs. internal deadlines
When are deadlines necessary?
- Contractual obligations
- Technical liabilities (e.g., dependency EOL)
- Compliance, government, investors, and other external stakeholders
What do all of these deadlines have in common? They are all important. They are all deadlines that cannot be missed. They are all external.
When are deadlines evil?
- Your manager says you have a deadline
- Your software development methodology says you have deadlines
What do all of these deadlines have in common? None of them are important. They are arbitrary. They are all internal. They are all bullshit.
Why are internal deadlines evil?
- Estimation: Producing an accurate estimate of engineering work requires a substantial time investment from the engineer.
- Misaligned Incentives: There is an incentive to lie and give estimates much longer than the feature is truly expected to take.
- Low Morale: Deadlines are likely to be missed often. Repeated failure has a cost to the morale of the team.
- Micromanagement: Deadlines are wielded by middle managers as a whip to harass and annoy engineers working on features.
- High Stress: When engineers feel the pressure of stakeholders holding deadlines over their heads, it creates an environment of high stress.
- High Turnover: On teams with high turnover, the best engineers find new work easily and leave quickly, while the worst engineers struggle to find work and remain. Over time this selects for a lower-quality team.
Engineers who love their work
The resolution is simple. Never have internal deadlines. Operate on a prioritized, ordered list of features. Estimate only when necessary to prioritize, and do so with coarse t-shirt sizes. Trust your engineers, and they will begin to love their work. Engineers who love their work are happy and productive.
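As a hypothetical illustration of that alternative (the feature names, sizes, and ordering below are invented), the whole process can be as small as an ordered list where the only estimate attached to a feature is a t-shirt size used for prioritization, and no dates appear anywhere.

```python
from dataclasses import dataclass
from enum import Enum

class TShirtSize(Enum):
    S = "small"
    M = "medium"
    L = "large"

@dataclass
class Feature:
    name: str
    size: TShirtSize   # coarse estimate, used only to inform ordering
    priority: int      # lower number = worked on sooner

# Hypothetical backlog: engineers simply pull the next item off the top.
backlog = sorted(
    [
        Feature("export to CSV", TShirtSize.S, priority=2),
        Feature("single sign-on", TShirtSize.L, priority=1),
        Feature("dark mode", TShirtSize.M, priority=3),
    ],
    key=lambda f: f.priority,
)

for feature in backlog:
    print(f"{feature.priority}. {feature.name} ({feature.size.value})")
```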