Since we cannot interview the subject, we can only infer the past from the present. Ideally, a study should persist for at least the life span of an animal.
Ethnographic studies are distinct from ethological research in other species because we can speak with our subjects and ask them questions. This has tremendous value, but much of what humans do is not spoken, and we also observe, count, and measure.
I organized behavioral codes to contain several levels of information. For example, if a child is outside playing with friends while minding her two-year-old sister, the activity was coded as 675: the 600 signifies noneconomic activity, the 70 that it is playing, and the 5 that it is playing while in charge of a child. All activities were coded in this way. A nested classificatory hierarchy preserves both detail for future research and flexibility to lump or disaggregate activities for analyses. This method of nesting information carries over into many kinds of coding and classificatory schemes.
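The digit-place scheme described above can be sketched in a few lines. The split into hundreds, tens, and units follows the 675 example in the text; the helper name, category labels, and extra sample codes are illustrative assumptions:

```python
# Sketch of a nested activity-coding hierarchy (assumed helper, not the
# author's actual codebook). Only the 675 example comes from the text.

def decode_activity(code: int) -> dict:
    """Split a three-digit activity code into its nested levels."""
    return {
        "domain": (code // 100) * 100,         # e.g. 600 = noneconomic activity
        "activity": ((code // 10) % 10) * 10,  # e.g. 70 = playing
        "modifier": code % 10,                 # e.g. 5 = while minding a child
    }

print(decode_activity(675))
# {'domain': 600, 'activity': 70, 'modifier': 5}

# Lumping for analysis: group hypothetical codes by their top-level domain.
codes = [675, 672, 610, 230]
by_domain: dict = {}
for c in codes:
    by_domain.setdefault(decode_activity(c)["domain"], []).append(c)
print(by_domain)
# {600: [675, 672, 610], 200: [230]}
```

Because each digit place is a level of the hierarchy, lumping is just truncation: analyses can aggregate at the hundreds level today and still disaggregate to the full code later.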
Leave the drawing unfinished. Record as much information as you need, but don’t draw any forms, details, or colors that are merely repetitive. The back and front of a representative flower on a plant, for example, or half of a bilaterally symmetrical animal may be all that’s necessary.
You often can’t tell in advance which observations will prove valuable. Do record them all!
— Joseph Grinnell, 1908
Being an end-user of someone else’s field notes certainly gives you insight into the benefits of good note-taking skills. Our experiences as end-users and creators of archival field notes lead us to a few specific recommendations:
(1) Don’t get bogged down in the details of format or style.
Rules are counterproductive if they prevent a researcher from taking field notes in the first place.
You will get more return by focusing on your content than by refining your formatting.
(2) Compose your notes as if you were writing a letter to someone a century in the future.
Writing for an external audience requires you to be more explicit in your descriptions and to take less knowledge for granted. Avoid the use of abbreviations, symbols, and other shortcuts that only you will understand.
Ask yourself: How would you describe this to someone over the phone?
(3) It is better to spend five minutes writing the important details than twenty minutes writing the trivial ones.
The feeling of fortuitous gratitude at coming across unexpected information is something most of us who’ve done any research have experienced — that kismet of finding the perfect book, one spine away from the one that was sought. In the field of art and image research, this sparking of transmission, of sequence and connection, happens on a subconscious level.
…Why is the vernacular image still being dismissed as ephemera? Why is its study not being prioritized? All languages are alive, but visual language is galactic. Keywords are not eyeballs, and creating rutted pathways to follow is the antithesis of study. A century of visual language, knowledge, and connectivity is marching toward a narrow, parsimonious basement of nomenclature. The NYPL takes a step backward if it models its shelves and research on a search engine. Spontaneity is learning. Browsing is research.
Imagine a circle that contains all of human knowledge.
By the time you finish elementary school, you know a little.
By the time you finish high school, you know a bit more.
With a bachelor's degree, you gain a specialty.
A master's degree deepens that specialty:
Reading research papers takes you to the edge of human knowledge.
Once you're at the boundary, you focus.
You push at the boundary for a few years.
Until one day, the boundary gives way.
And that dent you've made is called a Ph.D.
Of course, the world looks different to you now.
So, don't forget the bigger picture.
Keep pushing.
We deployed our tool. Almost no one used it.
The handful that did use it used it once or twice and barely interacted with it. After a few days, zero people were using it.
Why did they tell me they wanted these features?
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; when there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
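The dependence on power and on the ratio of true to no relationships can be made concrete with the standard positive-predictive-value formula, PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a probed relationship is real. This sketch ignores bias and multiple teams; the specific numbers are illustrative assumptions:

```python
# Hedged sketch: probability that a statistically significant finding is
# true, given pre-study odds R, significance level alpha, and power.
# Bias and multi-team effects are deliberately left out of this toy model.

def ppv(R: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Positive predictive value of a 'significant' research claim."""
    beta = 1.0 - power
    return power * R / (R - beta * R + alpha)

# Well-powered study in a field where 1 in 10 probed relationships is real:
print(round(ppv(R=0.1), 2))              # 0.62
# Underpowered study (power = 0.2) in the same field:
print(round(ppv(R=0.1, power=0.2), 2))   # 0.29
```

Even with good power, a field that probes mostly null relationships (small R) produces many "significant" findings that are false; dropping the power makes it worse, which is the quantitative core of the claim above.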
Conventional tech industry product practice will not produce deep enough subject matter insights to create transformative tools for thought.
...The aspiration, for any team serious about making transformative tools for thought, is to create a culture that combines the best parts of modern product practice with the best parts of the (very different) modern research culture. You need the insight-through-making loop to operate, whereby deep, original insights about the subject feed back to change and improve the system, and changes to the system result in deep, original insights about the subject.
There is a hidden cost to having a hypothesis. It arises from the relationship between night science and day science, the two very distinct modes of activity in which scientific ideas are generated and tested, respectively [1, 2]. With a hypothesis in hand, the impressive strengths of day science are unleashed, guiding us in designing tests, estimating parameters, and throwing out the hypothesis if it fails the tests. But when we analyze the results of an experiment, our mental focus on a specific hypothesis can prevent us from exploring other aspects of the data, effectively blinding us to new ideas.
The hardest thing about customer interviews is knowing where to dig. An effective interview is more like a friendly interrogation. We don’t want to learn what customers think about the product, or what they like or dislike — we want to know what happened and how they chose... To get those answers we can’t just ask surface questions, we have to keep digging back behind the answers to find out what really happened.
As we’ve been researching what design teams need to do to create great user experiences, we’ve stumbled across an interesting finding. It’s the closest thing we’ve found to a silver bullet when it comes to reliably improving the designs teams produce.
The solution? Exposure hours. The number of hours each team member is exposed directly to real users interacting with the team’s designs or the team’s competitor’s designs. There is a direct correlation between this exposure and the improvements we see in the designs that team produces.
Metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.
A/B testing is an effective way to use science to design and deliver deeply frustrating user experiences.
A/B testing without upfront research is just random monkeys testing random designs to see which of those designs do “best” against random criteria.
If drug testing were actually implemented like most A/B tests, you’d give 2 drugs to 2 groups of people and pick the “winner” by whichever group had fewer deaths.
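The failure mode in that analogy is easy to simulate: give two identical "drugs" (same true rate) to two groups and apply the naive pick-the-lower-count rule. All numbers here are illustrative assumptions:

```python
# Toy simulation of naive winner-picking: both variants are identical,
# yet the raw-count rule almost always crowns a "winner" on pure noise.
import random

random.seed(0)
n, rate, trials = 200, 0.10, 1000
decisive = 0  # trials where the naive rule declared a winner
for _ in range(trials):
    a = sum(random.random() < rate for _ in range(n))  # "deaths" on drug A
    b = sum(random.random() < rate for _ in range(n))  # "deaths" on drug B
    if a != b:
        decisive += 1
print(f"Naive rule crowned a winner in {decisive / trials:.0%} of trials")
```

With 200 patients per arm the two counts almost never tie, so the naive rule nearly always declares a winner even though the drugs are the same — which is why a test of significance (and pre-registered criteria) matters.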
This talk centered on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?"