Coruscation: November 2017 Archives


November 29, 2017

Randonneuring vs Euraudax

Audax is a cycling sport in which participants attempt to cycle long distances within a pre-defined time limit. Audax is a non-competitive sport: success in an event is measured by its completion. Audax has its origins in Italian endurance sports of the late nineteenth century, and the rules were formalised in France in the early twentieth century.

In the present day, there are two forms of Audax: the original group-riding style, Euraudax, governed by Unions des Audax, and the free-paced (allure libre) style usually known as Randonneuring, governed by Audax Club Parisien. The original form is mostly popular in France, but also in the Netherlands, Belgium and Germany. Randonneuring is popular in many countries including France, Great Britain, Singapore, Australia, Canada, the USA and China.

November 17, 2017

Automated facilitation of dispute resolution

eBay resolves 60 million disputes a year and Alibaba 100 million. How do they do that? At the other, less impressive extreme, in 2015 the IRS hung up on telephone callers 8.8 million times without making contact. Are there online solutions for that? Disputes are a "growth industry" on the internet, an inevitable by-product of innovation but often harmful to individuals. Drawing on his recent book, Digital Justice: Technology and the Internet of Disputes (co-authored with Orna Rabinovich-Einy), Professor Katsh considers opportunities for online dispute resolution and prevention in e-commerce, health care, social media, employment, and the courts.

The Berkman Klein Center for Internet & Society

November 10, 2017

Does the presence of other people change an individual's behavior? Norman Triplett

One of the seminal social-psychology studies, at the turn of the 20th century, asked a question that at the time was a novel one: How does the presence of other people change an individual's behavior?

Norman Triplett, a psychologist at Indiana University, found that when he asked children to execute a simple task (winding line on a fishing reel), they performed better in the company of other children than they did when alone in a room. Over the following decades, a new discipline grew up within psychology to further interrogate group dynamics: how social groups react in certain circumstances, how the many can affect the one.

The field reached a moment of unusual visibility in the mid-20th century, as practitioners, many of them Jewish refugees or first-generation immigrants from Europe, explored, post-World War II, the way group pressures or authority figures could influence human behavior.

In one simple study on conformity in 1951, the social psychologist Solomon Asch found that people would agree that one drawn line matched the length of another -- even if it clearly did not -- if others around them all agreed that it did.

In subsequent years, researchers like Stanley Milgram (who tested how people weighed their consciences against the demands of authority) and Philip Zimbardo (who observed the effect of power on students assigned as either prison guards or prisoners) rejected the traditional confines of the lab for more theatrical displays of human nature. "They felt the urgency of history," says Rebecca Lemov, a professor of the history of science at Harvard. "They really wanted to make people look."

November 9, 2017

Tesla is more than electric: Doug DeMuro

Tesla is a philosophy, not just a car. Alex Roy supplements Doug DeMuro, explaining how DeMuro became an unwitting pawn in a much bigger game: Tesla's asymmetric war on the auto industry.

November 8, 2017

Replication studies

Jay Van Bavel, a social psychologist at New York University, has tweeted openly about a published nonreplication of one of his studies and believes, as any scientist would, that replications are an essential part of the process; nonetheless, he found the experience of being replicated painful. "It is terrifying, even if it's fair and within normal scientific bounds," he says. "Because of social media and how it travels -- you get pile-ons when the critique comes out, and 50 people share it in the view of thousands. That's horrifying for anyone who's critiqued, even if it's legitimate."

The field, clearly, was not moving forward as one. "In the beginning, I thought it was all ridiculous," says Finkel, who told me it took him a few years before he appreciated the importance of what became known as the replication movement. "It was like we had been having a big party -- what big, new, fun, cool stuff can we discover? And we forgot to double-check ourselves. And then the reformers were annoyed, because they felt like they had to come in after the fact and clean up after us. And it was true."

November 6, 2017

Dawn of p hacking

Simmons lost touch with Cuddy, who was by then teaching at Northwestern. He remained close to Nelson, who had befriended a behavioral scientist, also a skeptic, Uri Simonsohn. Nelson and Simonsohn kept up an email correspondence for years. They, along with Simmons, took particular umbrage when a prestigious journal accepted a paper from an emeritus professor of psychology at Cornell, Daryl Bem, who claimed that he had strong evidence for the existence of extrasensory perception. The paper struck them as the ultimate in bad-faith science. "How can something not be possible to cause something else?" Nelson says. "Oh, you reverse time, then it can't." And yet the methodology was supposedly sound. After years of debating among themselves, the three of them resolved to figure out how so many researchers were coming up with such unlikely results.

Over the course of several months of conference calls and computer simulations, the three researchers eventually determined that the enemy of science -- subjectivity -- had burrowed its way into the field's methodology more deeply than had been recognized. Typically, when researchers analyzed data, they were free to make various decisions, based on their judgment, about what data to maintain: whether it was wise, for example, to include experimental subjects whose results were really unusual or whether to exclude them; to add subjects to the sample or exclude additional subjects because of some experimental glitch. More often than not, those decisions -- always seemingly justified as a way of eliminating noise -- conveniently strengthened the findings' results. The field (hardly unique in this regard) had approved those kinds of tinkering for years, underappreciating just how powerfully they skewed the results in favor of false positives, particularly if two or three analyses were underway at the same time. The three eventually wrote about this phenomenon in a paper called "False-Positive Psychology," published in 2011. "Everyone knew it was wrong, but they thought it was wrong the way it's wrong to jaywalk," Simmons recently wrote in a paper taking stock of the field. "We decided to write 'False-Positive Psychology' when simulations revealed it was wrong the way it's wrong to rob a bank."

Simmons called those questionable research practices P-hacking, because researchers used them to lower a crucial measure of statistical significance known as the P-value. The P stands for probability, as in: How probable is it that researchers would happen to get the results they achieved -- or even more extreme ones -- if there were no phenomenon, in truth, to observe? (And no systematic error.) For decades, the standard of so-called statistical significance -- also the hurdle to considering a study publishable -- has been a P-value of less than 5 percent.
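The mechanics are easy to reproduce. Below is a toy simulation (mine, not from the "False-Positive Psychology" paper; the function names and the specific degrees of freedom are illustrative). Every dataset is pure noise, yet a researcher who measures two outcomes and is willing to collect ten extra subjects whenever the first test misses significance ends up rejecting the null far more often than the nominal 5 percent:

```python
import math
import random

def z_p_value(xs, ys):
    """Two-sided p-value for a difference in means, normal approximation
    (adequate for a demonstration; a real analysis would use a t-test)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

def one_honest_study(rng, n=20):
    # One preregistered outcome, one fixed sample size, one test.
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    return z_p_value(a, b) < 0.05

def one_hacked_study(rng, n=20):
    # Two outcome measures per group -- both pure noise.
    a1 = [rng.gauss(0, 1) for _ in range(n)]
    a2 = [rng.gauss(0, 1) for _ in range(n)]
    b1 = [rng.gauss(0, 1) for _ in range(n)]
    b2 = [rng.gauss(0, 1) for _ in range(n)]
    # Degree of freedom 1: report whichever outcome "works".
    if z_p_value(a1, b1) < 0.05 or z_p_value(a2, b2) < 0.05:
        return True
    # Degree of freedom 2: not significant? Add 10 subjects and retest.
    for xs in (a1, a2, b1, b2):
        xs += [rng.gauss(0, 1) for _ in range(10)]
    return z_p_value(a1, b1) < 0.05 or z_p_value(a2, b2) < 0.05

trials = 2000
rng = random.Random(0)
honest = sum(one_honest_study(rng) for _ in range(trials)) / trials
rng = random.Random(0)
hacked = sum(one_hacked_study(rng) for _ in range(trials)) / trials
print(f"false-positive rate, honest analysis: {honest:.3f}")
print(f"false-positive rate, flexible analysis: {hacked:.3f}")
```

The honest rate hovers near the nominal 5 percent; the flexible one lands several times higher, even though nothing in the data is real -- the point the three researchers made with computer simulations of exactly this kind.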

To examine how easily the science could be manipulated, Simmons and Simonsohn ran a study in which they asked 20 participants their ages (and their fathers' birthdays). Half the group listened to the Beatles song "When I'm Sixty-Four"; the other half listened to a control (the instrumental track "Kalimba"). Using totally standard methodology common to the field, they were able to prove that the participants who listened to the Beatles song were magically a year and a half younger than they were before they had heard the music. The subject heading of the explanation: "How Bad Can It Be? A Demonstration of Chronological Rejuvenation." It was witty, it was relatable -- everyone understood that it was a critique of the fundamental soundness of the field.

"We realized entire literatures could be false positives," Simmons says. They had collaborated with enough other researchers to recognize that the practice was widespread and counted themselves among the guilty. "I P-hacked like crazy all through my time at Princeton, and I still couldn't get interesting results," Simmons says.

The paper generated its fair share of attention, but it was not until January 2012, at a tense conference of the Society for Personality and Social Psychology in San Diego, that social psychologists began to glimpse the iceberg looming ahead -- the sliding furniture, the recriminations, the crises of conscience and finger-pointing and side-taking that would follow. At the conference, several hundred academics crowded into the room to hear Simmons and his colleagues challenge the methodology of their field. First, Leslie John, then a graduate student, now an associate professor at Harvard Business School, presented a survey of 2,000 social psychologists that suggested that P-hacking, as well as other questionable research practices, was common. In his presentation, Simonsohn introduced a new concept, a graph that could be used to evaluate bodies of research from the distribution of their significant P-values (a curve with most of its mass at very low P-values suggests real effects; one bunched just below .05 suggests P-hacking). He called it a P-curve and suggested that it could be used, for example, to evaluate the research that a prospective job candidate submitted. To some, the implication of the combined presentations seemed clear: The field was rotten with the practice, and egregious P-hackers should not get away with it.
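The logic of the P-curve can be sketched in a few lines. This is a toy model (mine, not Simonsohn's method; it assumes normally distributed data with known unit variance, and the function names are illustrative): under a true null, the P-values that clear the .05 bar are spread evenly between 0 and .05, while a genuine effect piles them up near zero.

```python
import math
import random

def p_two_sided(z):
    # Two-sided p-value for a z statistic.
    return math.erfc(abs(z) / math.sqrt(2))

def simulated_p(rng, effect, n=30):
    # One two-group study: true mean difference is `effect`, known unit variance.
    a = [rng.gauss(effect, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return p_two_sided(z)

def p_curve(rng, effect, studies=20000):
    """Proportion of 'publishable' (p < .05) results falling in each
    hundredth-wide bin: [0, .01), [.01, .02), ..., [.04, .05)."""
    sig = [p for _ in range(studies) if (p := simulated_p(rng, effect)) < 0.05]
    bins = [0] * 5
    for p in sig:
        bins[int(p // 0.01)] += 1
    total = sum(bins)
    return [round(b / total, 2) for b in bins]

rng = random.Random(1)
null_curve = p_curve(rng, effect=0.0)  # roughly flat across the five bins
real_curve = p_curve(rng, effect=0.5)  # skewed: most mass in the lowest bin
print("no effect:  ", null_curve)
print("real effect:", real_curve)
```

A flat or left-leaning curve in a published literature is what raised the alarm: it is the signature of results dragged just under the significance bar rather than of genuine effects.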


In 2014, Psychological Science started giving electronic badges, an extra seal of approval, to studies that made their data and methodologies publicly available and preregistered their design and analysis, so that researchers could not fish around for a new hypothesis if they turned up some unexpected findings.