Science
Interdisciplinary
Which half?
Scientific writing
A tiny rivulet in a distant forest
The downgrading of experience
Humility
Art and science
The Structure of Scientific Revolutions
BLDGBLOG
The Art of Doing Science and Engineering: Learning to Learn
The illustrated guide to a Ph.D.
evermore, and other beautiful things
An Article by Linus the Sephist
If all evidence of civilization on Earth were destroyed, and humans had to rebuild society from the ground up, what would be different? Feynman reckons that pivotal scientific moments, like the discovery of the atom, would still happen in the same way. Perhaps mathematics would be similarly rediscovered.
Someone once told me, in response to this question, that no artwork would ever be recreated. The art we create – music, stories, dance, film – isn’t a fundamental element of the universe, or even of humanity. It’s unique to each artist. If you choose to create art, you leave something in the world that has never had a chance to exist before, and will never again have a chance to exist. There will never be another Beatles or Studio Ghibli or Picasso. Art, in its infinite variations of originality, is cosmically unique in a way the sciences will never be. Art immortalizes human experiences that would otherwise vanish in time.
Reality is Very Weird and You Need to be Prepared for That
An Essay
We might be closer than we think to cures for depression, hypertension, and yes, even obesity.
The answer to scurvy was just one thing, plus a few wrinkles — mostly “not all citrus has the antiscorbutic property” and “most animals can’t get scurvy”. This was only difficult because people weren’t prepared to deal with basic wrinkles, but we can do better by learning from their mistakes.
This means don’t give up easily. It suggests that there is lots of low-hanging fruit, because even simple explanations are easily missed.
Lots of theories have been tried, and lots of them have been given up because of something that looks like contradictory evidence. But the evidence might not actually be a contradiction — the real explanation might just be slightly more complicated than people realized. Go back and revisit scientific near-misses, maybe there’s a wrinkle they didn’t know how to iron out.
Tortured phrases
An Article by Holly Else
In April 2021, a series of strange phrases in journal articles piqued the interest of a group of computer scientists. They could not understand why authors would use the terms ‘counterfeit consciousness’, ‘profound neural organization’ and ‘colossal information’ in place of the more widely recognized terms ‘artificial intelligence’, ‘deep neural network’ and ‘big data’.
Further investigation revealed that these strange terms — which they dub “tortured phrases” — are probably the result of automated translation or software that attempts to disguise plagiarism. And they seem to be rife in computer-science papers.
Why Most Published Research Findings Are False
A Research Paper by John P.A. Ioannidis
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
A hypothesis is a liability
A Research Paper by Itai Yanai & Martin Lercher
There is a hidden cost to having a hypothesis. It arises from the relationship between night science and day science, the two very distinct modes of activity in which scientific ideas are generated and tested, respectively [1, 2]. With a hypothesis in hand, the impressive strengths of day science are unleashed, guiding us in designing tests, estimating parameters, and throwing out the hypothesis if it fails the tests. But when we analyze the results of an experiment, our mental focus on a specific hypothesis can prevent us from exploring other aspects of the data, effectively blinding us to new ideas.
The small web is beautiful
I believe that small websites are not only aesthetically compelling, but also important in helping us resist selling our souls to large tech companies. In this essay I present a vision for the “small web”, as well as the small software and architectures that power it.
Why aim small?
Why aim small in this era of fast computers with plenty of RAM? A number of reasons, but the ones that are most important to me are:
- Fewer moving parts. It’s easier to create more robust systems and to fix things when they do go wrong.
- Small software is faster. Fewer bits to download and clog your computer’s memory.
- Reduced power consumption. This is important on a “save the planet” scale, but also on the very local scale of increasing the battery life of your phone and laptop.
- The light, frugal aesthetic. That’s personal, I know, but as you’ll see, I’m not alone.
Features and complexity
Niklaus Wirth of Pascal fame wrote a famous paper in 1995 called A Plea for Lean Software. His take is that “a primary cause for the complexity is that software vendors uncritically adopt almost any feature that users want”, and “when a system’s power is measured by the number of its features, quantity becomes more important than quality”.
Solving the problem of software bloat
But instead of just complaining, how do we actually solve this problem? Concretely, I think we need to start doing the following:
- Care about size: this sounds obvious, but things only change when people think they’re important.
- Measure: both your executable’s size, and your program’s memory usage. You may want to measure over time, and make it a blocking issue if the measurements grow more than x% in a release. Or you could hold a memory-reduction sprint every so often.
- Language: choose a language whose toolchain and runtime give you a realistic chance at a small, fast result.
- Remove: cut down your feature set. Aim for a small number of high-quality features. My car can’t fly or float, and that’s okay – it drives well.
- Say no to new features: unless they really fit your philosophy, or add more than they cost over the lifetime of your project.
- Dependencies: understand the size and complexity of each dependency you pull in. Use only built-in libraries if you can.
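The “Measure” step above can be sketched as a small CI-style check. This is a minimal sketch, not the author’s tooling: the baseline number and the stand-in binary are illustrative, and the 10% threshold stands in for the “x%” from the bullet.

```shell
# Sketch of a size-budget check (BASELINE is illustrative; we use the
# system shell binary as a stand-in for your own release binary).
BIN=$(command -v sh)
BASELINE=2000000                      # bytes recorded at the last release

SIZE=$(wc -c < "$BIN")
LIMIT=$((BASELINE + BASELINE / 10))   # fail the build on >10% growth

if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "size check: FAIL ($SIZE > $LIMIT bytes)"
  exit 1
fi
echo "size check: OK ($SIZE <= $LIMIT bytes)"
```

For the memory half of the bullet, GNU time’s `/usr/bin/time -v` reports a “Maximum resident set size” line that can be thresholded the same way.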
Raw size isn’t enough
A few months ago there was a sequence of posts on Hacker News about various “clubs” you could post your small website on: the 1MB Club, 512KB Club, 250KB Club, and even the 10KB Club. I think those are a fun indicator of renewed interest in minimalism, but I will say that raw size isn’t enough – a 2KB site with no real content isn’t much good, and a page with 512KB of very slow JavaScript is worse than a snappy site with 4MB of well-chosen images.
...[Instead, it's about] an “ethos of small”. It’s caring about the users of your site: that your pages download fast, are easy to read, have interesting content, and don’t load scads of JavaScript for Google or Facebook’s trackers.
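As a rough first-order check on where a page sits relative to those club thresholds, you can gzip the HTML, since servers typically send compressed responses. This sketch measures only the HTML (total page weight also counts scripts and images), and the file is a throwaway stand-in.

```shell
# Sketch: estimate a page's over-the-wire size by gzip-compressing it,
# approximating the Content-Encoding: gzip responses most servers send.
printf '<!doctype html><title>demo</title><p>A tiny stand-in page.' > page.html
RAW=$(wc -c < page.html)
GZ=$(gzip -c page.html | wc -c)
echo "raw=$RAW bytes, gzipped=$GZ bytes"
rm page.html
```

Note that for very small files the gzip header overhead can make the compressed copy larger than the original; the estimate only becomes meaningful for realistically sized pages.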