Recently in science Category

small data


No one talks about big data any more, says Slate's Will Oremus. "Five years ago," he writes, "an article in the New York Times' Sunday Review heralded the arrival of a new epoch in human affairs: 'The Age of Big Data':"

Society was embarking on a revolution, the article informed us, one in which the collection and analysis of enormous quantities of data would transform almost every facet of life. No longer would data analysis be confined to spreadsheets and regressions: The advent of supercomputing, combined with the proliferation of internet-connected sensors that could record data constantly and send it to the cloud, meant that the sort of advanced statistical analysis described in Michael Lewis' 2003 baseball book Moneyball could be applied to fields ranging from business to academia to medicine to romance. Not only that, but sophisticated data analysis software could help identify utterly unexpected correlations, such as a relationship between a loan recipient's use of all caps and his likelihood of defaulting. This would surely yield novel insights that would change how we think about, well, just about everything.

"Big data," he continues, "helps to power the algorithms behind our news feeds, Netflix recommendations, automated stock trades, autocorrect features, and health trackers, among countless other tools:"

But we're less likely to use the term big data these days--we just call it data. We've begun to take for granted that data sets can contain billions or even trillions of observations and that sophisticated software can detect trends in them.

Oremus cites Cathy O'Neil's Weapons of Math Destruction and Frank Pasquale's The Black Box Society as illustrations of "the fetishization of data, and its uncritical use, that tends to lead to disaster," and suggests "Another possible response to the problems that arise from biases in big data sets:"

Small data refers to data sets that are simple enough to be analyzed and interpreted directly by humans, without recourse to supercomputers or Hadoop jobs. Like "slow food," the term arose as a conscious reaction to the prevalence of its opposite.

Martin Lindstrom's 2016 book Small Data: The Tiny Clues That Uncover Big Trends looks intriguing, as Oremus concludes:

There is some hope, then, that in moving away from "big data" as a buzzword, we're moving gradually toward a more nuanced understanding of data's power and pitfalls. In retrospect, it makes sense that the sudden proliferation of data-collecting sensors and data-crunching supercomputers would trigger a sort of gold rush, and that fear of missing out would in many cases trump caution and prudence. It was inevitable that thoughtful people would start to call our collective attention to these cases, and that there would be a backlash, and perhaps ultimately a sort of Hegelian synthesis.

Slackers


I've been digging deeper into the Slack-doubter camp, hoping to convince one of my company's executives to temper her enthusiasm for constant chatting as some sort of productivity panacea. Samuel Hulick's piece "Slack, I'm Breaking Up with You" makes the all-too-common plea that Slack is "asking for A LOT of my time," and "it has been absolutely brutal on my productivity:"

I may have been fooling myself when we were still in the honeymoon phase, but when there was all the talk of you killing email, I have to admit I thought it was the email problem you were attacking, not just the email platform.

Which is to say, I thought you were providing some relief from the torrential influx of messages, alerts, and notifications I was receiving on a daily basis. "Me + Slack = Fewer distractions and more productivity," I thought at the time. I have to say, though, that I've since found it to be the opposite.

Like, WAY the opposite.

With you in my life, I've received exponentially more messages than I ever have before.

"While it's true that email was (and, despite your valiant efforts, still very much is) a barely-manageable firehose of to-do list items controlled by strangers," he continues, "one of the few things that it did have going for it was that at least everything was in one place:"

Trying to keep up with the manifold follow-up tasks from the manifold conversations in your manifold teams and channels requires a Skynet-like metapresence that is simply beyond me.

With you, the firehose problem has become a hydra-headed monster.

Everything is scattered, and the mental load that comes with it is real. Linda Stone calls this perpetual, shallow quasi-presence "continuous partial attention" [see below], and this makes each conversational thread, almost by definition, a loose one.

Responding to a description of Slack as "an all-day meeting with unknown participants and no agenda," Hulick notes:

Will they respond in 5 seconds or 5 hours? Who knows! It's like getting caught in one of those support chats from hell with a Comcast rep who's clearly trying to simultaneously jockey a dozen text conversations like some kind of bargain basement Bobby Fischer, except that it's all day long and with everyone I know. [...]

I wonder if conducting business in an asynchronish environment simply turns every minute into an opportunity for conversation, essentially "meeting-izing" the entire workday.

"All-day meetings every day of the week are substantially more 'meetings' than the ones you're saving me from," he observes:

This is awesome for speeding up the tempo of company directives, but it also places a ton of pressure on everyone involved to maintain even MORE Slack omnipresence; if any discussion might lead to a decision being made, that provides a whole lot of incentive to be available for as many discussions as possible.

Even worse, those with the least on their plates can maintain the most Slack presence, which leads to the most gregariously unengaged representing the majority of the discussion base while penalizing those who are fully engaged in their "real" work.

Christopher Batts, who titled his piece "Actually, Slack really sucks," comments that "my life isn't any easier now that everyone insists on using Slack. I've actually noticed it's far more complex and distracting:"

Zero time saved, but lots of time newly wasted by the workflow Slack provides. It just doesn't cater for tasks that aren't immediate, and it doesn't cater for teams that work on different timezones.

In the "Managing Notifications" section of his article, Batts notes that "Notifications come from everywhere and couldn't be less ordered if they tried," and his section on "Productivity" simply states, "I get so much more done when Slack is closed:"

I can integrate Slack with everything from Skype to CircleCI. Great if I stare at Slack all day, checking a feed for a response from the latest set of integration tests running. Not so great if I have things that need doing.

Ann Diab's look at workplace chat asks a similar question: "If I'm always available, when can I get any work done?"

Whether it's HipChat, Slack, G-Chat, or any other form of IRC and instant messaging, workplace chat tools facilitate quick answers and instant gratification. But it's possible that getting instant input like this is doing more harm than good to the morale of your teammates and to overall company culture. [emphasis added]

Michael Muse's quantified look at Slack mentions that "Slack makes their users feel 32% more effective" but then asks, what's the downside? "Sometimes, Slack (or your favorite chat app, this topic applies equally to any of them) is the best tool for the job, but I'd like to delve into cases where it isn't." He observes that "nearly two-thirds of Slack messages at our company aren't in channels at all! They're in Direct Messages (DM)," and notes that "This poses a problem:"

While a DM may sound like less of a concern for interruption than a public channel, they have a very different social contract:
- Unlike channels, you are assumed to always be listening. The sender knows you received a notification about their message, much like SMS.

- Unlike channels, you are the only person who can weigh in on the matter, so the message cannot be answered by someone else. It is awaiting your response.

This means that DMs feel much more urgent and important. Ever interrupted an in-person conversation because you noticed something in a Slack channel? Me neither. But you better believe I've cut off a verbal conversation midstream to answer a DM.

[He discusses this further in Footnote 4: "When a DM requires you to do some work to resolve it - the assignment of work happens at the requester's discretion, not the work-doer, which is TOTALLY backwards. On a team of work-doers, proper assignment of the work considers bandwidth, compatibility with other similar work, even learning opportunity" as opposed to some sort of LIFO prioritization.] He then considers "the aggregate listening cost" of the ten thousand direct messages his company deals with daily--as he writes, "14 is a data-confirmed, low-end-average for the number of high urgency, high importance interruptions each person at our company gets over DM every day:"

"So," you say, "what's the big deal with 14 interruptions per day? Had they been emails, I still would have dealt with them eventually." But you didn't deal with them eventually. You dealt with them immediately. [...]

There is a popular word for what's happening -- someone else puts some work into the very top of your queue, interrupting what you are doing and obligating you (socially or implicitly) to work on it now. The workplace slang is to call it a firedrill.

He writes that "DMing someone might as well be called firedrilling them," and pithily summarizes:

I like to call this unmeasured, unpredictable interruption phenomenon Slack-a-Mole. Don't play Slack-a-Mole. You're never gonna win the oversized teddy bear.
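His "aggregate listening cost" is easy to ballpark from the figures he cites. Here is a rough back-of-the-envelope sketch in Python; the 14 interruptions per day comes from his piece, while the headcount and the minutes of refocusing lost per interruption are assumptions added purely for illustration:

    # Back-of-the-envelope "aggregate listening cost" in Muse's framing.
    interruptions_per_person_per_day = 14   # figure cited in the article
    refocus_minutes_per_interruption = 10   # assumption: time to regain focus after a DM
    headcount = 1000                        # assumption: hypothetical company size

    lost_minutes_per_person = interruptions_per_person_per_day * refocus_minutes_per_interruption
    lost_hours_company_wide = lost_minutes_per_person * headcount / 60

    print(f"{lost_minutes_per_person} focus-minutes lost per person per day")
    print(f"~{lost_hours_company_wide:.0f} person-hours lost company-wide per day")

Even with conservative assumptions, the cost of Slack-a-Mole scales linearly with headcount, which is exactly Muse's point.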

His footnotes link to an older piece from Trello co-founder Joel Spolsky that examines the harm of task-switching, and makes two main observations about workflow:

a) sequential processing gets you results faster on average, and

b) the longer it takes to task switch, the bigger the penalty you pay for multitasking.

This is because "programming is the kind of task where you have to keep a lot of things in your head at once:"

The more things you remember at once, the more productive you are at programming. A programmer coding at full throttle is keeping zillions of things in their head at once: everything from names of variables, data structures, important APIs, the names of utility functions that they wrote and call a lot, even the name of the subdirectory where they store their source code. If you send that programmer to Crete for a three week vacation, they will forget it all. The human brain seems to move it out of short-term RAM and swaps it out onto a backup tape where it takes forever to retrieve.

"As it turns out," he continues, "if you give somebody two things to work on, you should be grateful if they 'starve' one task and only work on one, because they're going to get more stuff done and finish the average task sooner:"

In fact, the real lesson from all this is that you should never let people work on more than one thing at once. Make sure they know what it is. Good managers see their responsibility as removing obstacles so that people can focus on one thing and really get it done. When emergencies come up, think about whether you can handle it yourself before you delegate it to a programmer who is deeply submersed in a project.
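Spolsky's "faster on average" claim is simple arithmetic, and a toy calculation (my numbers, not his) makes it concrete: two tasks that each need ten days of focused work.

    # Two 10-day tasks, no switching overhead assumed at first.
    # Sequential: A ships on day 10, B on day 20.
    sequential_finish = [10, 20]
    # Alternating day by day: A ships on day 19, B on day 20.
    interleaved_finish = [19, 20]

    avg = lambda days: sum(days) / len(days)
    print("sequential  average completion day:", avg(sequential_finish))   # 15.0
    print("interleaved average completion day:", avg(interleaved_finish))  # 19.5

    # Now charge a cost for each context switch and interleaving gets strictly worse.
    switch_cost_days = 0.25   # assumed cost per switch
    switches = 19             # alternating daily between two 10-day tasks
    print("extra days lost to switching:", switches * switch_cost_days)    # 4.75

Nothing ships earlier under interleaving, and the average delivery date slips by almost a week before any switching penalty is even counted.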

Also worth reading is this APA summary on the switching costs of multitasking, which observes that "Doing more than one task at a time, especially more than one complex task, takes a toll on productivity:"

Psychologists who study what happens to cognition (mental processes) when people try to perform more than one task at a time have found that the mind and brain were not designed for heavy-duty multitasking. Psychologists tend to liken the job to choreography or air-traffic control, noting that in these operations, as in others, mental overload can result in catastrophe.

The piece includes synopses of several relevant studies; here is a particularly relevant snippet:

Although switch costs may be relatively small, sometimes just a few tenths of a second per switch, they can add up to large amounts when people switch repeatedly back and forth between tasks. Thus, multitasking may seem efficient on the surface but may actually take more time in the end and involve more error. Meyer [see "A computational theory of executive cognitive processes and multiple-task performance" Parts 1 and 2] has said that even brief mental blocks created by shifting between tasks can cost as much as 40 percent of someone's productive time.

Linda Stone's description, mentioned above by Hulick, of "continuous partial attention" points out that it is "different from multi-tasking." As she observes, "To pay continuous partial attention is to pay partial attention -- CONTINUOUSLY. It is motivated by a desire to be a LIVE node on the network:"

We pay continuous partial attention in an effort NOT TO MISS ANYTHING. It is an always-on, anywhere, anytime, any place behavior that involves an artificial sense of constant crisis. We are always in high alert when we pay continuous partial attention.

This has negative effects beyond impeding task completion, because "in large doses, it [continuous partial attention] contributes to a stressful lifestyle, to operating in crisis management mode, and to a compromised ability to reflect, to make decisions, and to think creatively:"

In a 24/7, always-on world, continuous partial attention used as our dominant attention mode contributes to a feeling of overwhelm, over-stimulation and to a sense of being unfulfilled. We are so accessible, we're inaccessible. The latest, greatest powerful technologies have contributed to our feeling increasingly powerless.

"We have focused on managing our time," she observes, where we should instead "focus on how we manage our attention:"

We are evolving beyond an always-on lifestyle. As we make choices to turn the technology OFF, to give full attention to others in interactions, to block out interruption-free time, and to use the full range of communication tools more appropriately, we will re-orient our trek toward a path of more engaged attention, more fulfilling relationships, and opportunities for the type of reflection that fuels innovation.

Executives whose simplistic bottom-line mentality sees only short-term costs may treat cloud services as a panacea for onsite IT expenses, but only because they're ignoring (or discounting) issues of availability and security. Quinn Norton's Medium piece "the problem with Slack" addresses the security side:

General computer and network security in the early 21st century simply isn't good enough, categorically, to trust logged, unencrypted communication touching the net to remain safe over time. Slack will get hacked, over and over. (I know Slack uses encryption at rest, but if Slack can access your data, so can a sufficiently motivated actor)... [...]

As Slack continues, likely years into the future, it will be hacked by people engaging in corporate espionage, governmental actors, and talented amateurs. Some of these will be discovered quickly, others will never be discovered at all. Like every other computer system, Slack's employees, no matter how diligent, will never have an easy way to ensure that they aren't compromised. Given enough time, everyone is compromised. Given enough interest, it doesn't take much time.

Considering the variety of problems with all these aspects of Slack, it's hard to see how its use makes anything better. It should go without saying--should, that is--but Slack and the other exemplars of our constant-interruption business culture are completely antithetical to accomplishing anything of depth and complexity. They do, however, serve to make workers stressed enough that we tolerate employers' further encroachments into our personal lives, as they demand more "availability" for no additional compensation. Perhaps that's the point?

Slack


New York magazine asks with dismay, what has Slack done to the office? and notes that the messaging app "has essentially ushered employer-sanctioned social media into the workplace:"

Like Facebook or Twitter, Slack induces the same anxious, attention-hungry rhythm in its users, the same need to endlessly refresh, and gives off the same illusion of intimacy in an ultimately public space. It also makes the line between work and not-work blurrier than ever -- the constant scroll of maybe-relevant chatter in your chosen Slack channels registers at times like the background noise of any other newsfeed.

Bloomberg asserts that you're about to hate Slack as much as you hate email, and Giles Turnbull notes in his piece on Slack and email (https://gilest.org/slack-and-email.html) that "Right now, I am a member of 7 different Slack teams [with] a total of 194 channels:"

Of course I don't keep track of all of them, and subscribe to only a fraction - 27 channels across all 7 teams. And I only keep a close eye on 16 of those. But: that's 16 channels that I feel compelled to read. Even if I've not been mentioned, even if none of my highlight words have cropped up anywhere. It's quite likely that something could be said in one of those channels that I will find interesting or useful - but equally likely that I won't be mentioned by name when that happens, because why on earth would anyone do that?

He continues by observing that "my experience of multiple Slack teams and channels is that it's no less overwhelming than an inbox full of email:"

The two experiences - one of opening email and seeing a list of messages, and the other of opening Slack and seeing a list of unread channels - are exactly the same.

What's more, the old criticism of email - that it's a todo list other people have control of - still applies inside Slack. People are still sending me things to do inside it. They're just typing those messages into a different box.

"Slack (or any other chat-based interface) can be just as much work as email ever was," he points out, "and consequently doesn't feel as liberating as some people would argue it is:"

I don't have any answers, and I'm not going to stop using Slack or email. Both are useful. I just wanted to make the point: for me, using Slack might have fractionally lessened the amount of email I have to read, but it hasn't lessened the amount of text-on-screens that I have to read. If anything, that's increased. So it doesn't feel like a problem has been solved - it's just moved to a different app.

"The Slack sell to employers," the piece continues, "is that it decreases the burden of email, because nobody likes email:"

GIFS and emoji are the incentive for employees to use Slack; greater oversight is the incentive for employers to tolerate GIFS and emoji. A company-operated social network might not be something most of us would seek out -- but years of experience have primed us to accept a certain loss of privacy as the price paid for online entertainment or, in this case, entertaining work.

"Slack came into my life in 2014," notes the author, observing that the app "made us spend more time chatting than we ever had before:"

Slack's own employees reportedly adhere to the principle "Work hard, then go home." They have nonetheless created a product that encourages the opposite: "Work half-distractedly, then keep doing that no matter where you go." Slack has made work, like the rest of the internet, a passive addiction.

I am less concerned with potential privacy issues than with the tendency of Slack to become "a compulsion, a distraction. A burden." The author of last year's Atlantic piece on the Slack backlash observes, for example, that "Slack has been transformative for the way I work," although "Slack is not for everyone:"

Some people dislike the platform because it's conceptually like an old-school IRC without being an open protocol. Others have complained that Slack isn't actually an email-killer, as so many have claimed, but just another thing to keep up with on top of email. (The Slack detox, [see the Verge piece below] in the grand tradition of people's fraught relationships with the digital tools they use most, is officially A Thing.)

PC Magazine suggests that its audience should read this before ditching email for Slack, and points out that "Slack is free until you hit 10,000 messages and five supported tools. After that, it costs $8 per month per user (or $6.67 when paid annually)." At nearly $100/user/year, Slack is pricey compared to other chat apps--although they're not as trendy, which counts for a lot in some executive technology-selection circles. Justin Glow, a senior director at Vox Media, wrote at The Verge in 2015 about the week he tried to unplug from Slack. "Originally," he writes, "I didn't feel distracted by Slack at all:"

On my phone in particular, I felt the opposite -- like I was benefiting myself and the company by finding small windows of time in strange places to be productive at work. Over the last few months, however, I've found myself impulsively and habitually checking it to catch up on channel activity the same way I used to open Twitter when in line at the grocery store, or any other time I spent in between more meaningful activities. I started to question if I was actually being productive, or if this was just another way to fill a void with information that didn't really matter.

I craved a reset. How critical was Slack to my ability to do my job? Could I still be a productive employee without it? Was the massive amount of time I spent lurking and interacting with fellow co-workers increasing my productivity, or hurting it?

"I was determined to quit using Slack entirely for a full week," he writes, but "Quitting cold turkey, even for a small amount of time, was out of the question." His use of addiction lingo seems warranted, as in this observation: "With the app closed for half of my first day, I had a renewed sense of focus and attacked my to-do list, but my mind was preoccupied with what I was missing in Slack:"

I quickly discovered not being available on Slack gives the impression you're not actually at work and getting things done. During my experiment, it took way too long not to feel self-conscious during the hours I spent with Slack closed -- like this time didn't count, or I might as well have been at the bar -- even though it was some of my most productive in months.

As he concludes, "I will continue to use my own approach to balancing the need to regularly interact and be available on Slack while staying productive and happy in and out of work."

loopy


There's some news on the three-slit experiment front:

Physicists have performed a variation of the famous 200-year-old double-slit experiment that, for the first time, involves "exotic looped trajectories" of photons. These photons travel forward through one slit, then loop around and travel back through another slit, and then sometimes loop around again and travel forward through a third slit.

Will a diagram help to clarify this loopy trajectory?

[Figure: 20170110-threeslits.jpg, a diagram of the looped three-slit trajectories]

Perhaps not.

Yale News notes that "Gun violence is often described as an epidemic or a public health concern, due to its alarmingly high levels in certain populations in the United States:"

It most often occurs within socially and economically disadvantaged minority urban communities, where rates of gun violence far exceed the national average. A new Yale study has established a model to predict how "contagious" the epidemic really is.

The study, "Modeling Contagion Through Social Networks to Explain and Predict Gunshot Violence in Chicago, 2006 to 2014," conducted "an epidemiological analysis of a social network of individuals who were arrested during an 8-year period in Chicago, Illinois, with connections between people who were arrested together for the same offense:"

Modeling of the spread of gunshot violence over the network was assessed using a probabilistic contagion model that assumed individuals were subject to risks associated with being arrested together, in addition to demographic factors, such as age, sex, and neighborhood residence.

"Social contagion accounted for 63.1% of the 11 123 gunshot violence episodes," the study continues:

...subjects of gun violence were shot on average 125 days after their infector (the person most responsible for exposing the subject to gunshot violence). Some subjects of gun violence were shot more than once. [...]

Gunshot violence follows an epidemic-like process of social contagion that is transmitted through networks of people by social interactions. [...] Contagion via social ties, then, may be a critical mechanism in explaining why neighborhoods matter when modeling the diffusion of crime and, perhaps more important, why certain individuals become subjects of gun violence while others exposed to the same high-risk environments do not.

"We postulated that a person becomes exposed to gun violence through social interactions with previous subjects of gun violence," write the authors:

Therefore, associating with subjects of gun violence, and specifically co-engaging in risky behaviors with them, may expose individuals to these same behaviors, situations, and people that in turn increase the probability of becoming a subject of gun violence. [...]

By identifying high-risk individuals and transmission pathways that might not be detected by other means, a contagion-based approach could detect strategic points of intervention that would enable measures to proactively reduce the trauma associated with gun violence rather than just react to past incidents.

Specifically, the study observed that "more than 70% of all subjects of gun violence could be located in networks containing less than 5% of the city's population."
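The paper's statistical model is more elaborate, but the underlying mechanism it describes, risk propagating along co-arrest ties, can be sketched as a simple stochastic cascade. The network, probabilities, and seed case below are invented for illustration; only the general shape of the model follows the study's description.

    import random

    # Toy co-arrest network: nodes are people, edges mean "arrested together".
    ties = {
        "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
        "D": ["B", "C"], "E": ["C", "F"], "F": ["E"],
    }

    def simulate_cascade(seeds, p_transmit=0.3, rng=None):
        """Propagate exposure from seed subjects of gun violence along social ties.
        Each exposure independently 'transmits' with probability p_transmit."""
        rng = rng or random.Random(42)
        victims, frontier = set(seeds), list(seeds)
        while frontier:
            person = frontier.pop()
            for associate in ties[person]:
                if associate not in victims and rng.random() < p_transmit:
                    victims.add(associate)
                    frontier.append(associate)
        return victims

    # Average cascade size over many runs, starting from a single seed case.
    sizes = [len(simulate_cascade({"A"}, rng=random.Random(i))) for i in range(1000)]
    print("mean subjects per cascade:", sum(sizes) / len(sizes))

In the real study the transmission probability is not a constant; it is estimated from the data alongside demographic factors such as age, sex, and neighborhood residence.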

BigThink recommends that you get off Facebook:

The big issue with Facebook use is that it offers endless opportunities for social comparison. It turns out that seeing countless exotic vacation photos and reading about the career accomplishments of your friends and acquaintances may make you feel worse about your current status.

Additionally, "The average American Facebook user spends around 50 minutes a day on Facebook:"

That's a significant amount of time. According to the Bureau of Labor Statistics, the average person spends 4 minutes a day on social events, 17 minutes exercising, and 19 minutes reading.

JFC, what a time sink! 300 hours a year?! That's nearly two months of 40-hour work weeks!
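For what it's worth, the arithmetic behind that outburst checks out:

    # 50 minutes a day, every day, converted to yearly hours and work weeks.
    minutes_per_day = 50
    hours_per_year = minutes_per_day * 365 / 60        # ~304 hours
    work_weeks = hours_per_year / 40                   # ~7.6 forty-hour weeks
    print(f"{hours_per_year:.0f} hours/year, about {work_weeks:.1f} work weeks")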

The study ("The Facebook Experiment: Quitting Facebook Leads to Higher Levels of Well-Being") points out that "provides causal evidence that Facebook use affects our well-being negatively " as well as the fact that "Most people use Facebook on a daily basis; few are aware of the consequences:"

By comparing the treatment group (participants who took a break from Facebook) with the control group (participants who kept using Facebook), it was demonstrated that taking a break from Facebook has positive effects on the two dimensions of well-being: our life satisfaction increases and our emotions become more positive. Furthermore, it was demonstrated that these effects were significantly greater for heavy Facebook users, passive Facebook users, and users who tend to envy others on Facebook.

The study noted that "two out of three Danish Internet users had an account on Facebook in 2015," and summarized the sample as "86 percent women, geographically residing throughout the country, with an average age of 34 years (SD = 8.74), having an average of 350 Facebook friends and spending a bit over an hour on Facebook daily." The authors then note that "Millions of hours are spent on Facebook each day" and consider the effects:

We are surely better connected now than ever before, but is this new connectedness doing any good to our well-being? According to the present study, the answer is no. In fact, the predominant uses of Facebook--that is, as a means to communicate, gain information about others, and as habitual pastime--are affecting our well-being negatively on several dimensions. First, the present study provides causal evidence that quitting Facebook leads to higher levels of both cognitive and affective well-being.

The caveat is chilling:

The effects presented in this article were generated after just 1 week of absence from a single social network. Future studies should investigate the effects of quitting Facebook for longer periods of time to test if the effects are permanent.

Chris Mooney analyzes NOAA and the global-warming "pause" by looking at what might be "the most controversial climate change study in years:"

...the 2015 paper, led by NOAA's Thomas Karl, employed an update to the agency's influential temperature dataset, and in particular to its record of the planet's ocean temperatures, to suggest that really, the recent period was perfectly consistent with the much longer warming trend.

This consistency has drawn fire by way of "a congressional subpoena from Rep. Lamar Smith, chair of the House Committee on Science:"

That controversy is likely to be stirred anew in the wake of a new study, published Wednesday in Science Advances, that finds the NOAA scientists did the right thing in adjusting their dataset. In particular, the new research suggests that the NOAA scientists correctly adjusted their record of ocean temperatures in light of known biases in some observing systems -- and indeed, that keepers of other top global temperature datasets should do likewise.

"We pretty robustly showed that NOAA got it right," said study author Zeke Hausfather, a Ph.D. student at the University of California-Berkeley and a researcher with Berkeley Earth, a nonprofit consortium that has reanalyzed the Earth's temperatures. "There was no cooking of the books, there's no politically motivated twisting of the data."

In comparing data collected from ships versus data from buoys, the ship readings tend to run slightly warmer:

So to better patch together a long term temperature record necessarily reliant on both data sources, NOAA used a "bias correction" to take this into account, and more generally gave greater weight to the buoy data, in updating its dataset.

This highly technical switch, in turn, had the effect of increasing the overall warming of the oceans in the new dataset -- and helping to wipe out claims that there'd been any recent slowdown in the rate of climate change.
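NOAA's actual homogenization procedure is far more involved than this, but the basic move described above, estimating the ship-versus-buoy offset during the period when both report and then leaning more heavily on the buoys, can be sketched in a few lines. The numbers and weights here are invented for illustration and are not NOAA's.

    import numpy as np

    # Toy sea-surface temperature anomalies for years where both sources report.
    ship = np.array([0.30, 0.34, 0.41, 0.45, 0.52])   # ship intake readings (warm-biased)
    buoy = np.array([0.18, 0.23, 0.29, 0.34, 0.40])   # buoy readings

    # 1. Estimate the systematic offset over the overlap period.
    offset = np.mean(ship - buoy)                     # ~0.12 C in this toy example

    # 2. Put the two records on a common baseline (here, adjust buoys up to the
    #    ship reference; adjusting ships down would give the same trend).
    buoy_adjusted = buoy + offset

    # 3. Blend, giving greater weight to the more reliable buoy record.
    w_buoy, w_ship = 0.7, 0.3                         # assumed weights
    combined = w_buoy * buoy_adjusted + w_ship * ship
    print("estimated bias:", round(float(offset), 3))
    print("combined series:", np.round(combined, 3))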

changing minds


Vox's Brian Resnick discusses a study on changing minds, which he describes as "the hardest challenge in politics right now:"

Psychologists have been circling around a possible reason political beliefs are so stubborn: Partisan identities get tied up in our personal identities. Which would mean that an attack on our strongly held beliefs is an attack on the self. And the brain is built to protect the self.

When we're attacked, we evade or defend -- as if we have an immune system for uncomfortable thoughts, one you can see working in real time.

"The brain's primary responsibility is to take care of the body, to protect the body," Jonas Kaplan, a psychologist at the University of Southern California, tells me. "The psychological self is the brain's extension of that. When our self feels attacked, our [brain is] going to bring to bear the same defenses that it has for protecting the body."

[Figure: 20161229-beliefchange.jpg]

Thanks to decades of right-wing paranoia and propaganda, even the science of fluoride isn't safe from ideological blindness--remember Jack D. Ripper?

Noah Charney's piece "Your Brain on Art" praises Eric Kandel's book Reductionism in Art and Brain Science, saying that it "offers one of the freshest insights into art history in many years:"

Ask your average person walking down the street what sort of art they find more intimidating, or like less, or don't know what to make of, and they'll point to abstract or minimalist art. Show them traditional, formal, naturalistic art, like Bellini's "Sacred Allegory," art which draws from traditional core Western texts (the Bible, apocrypha, mythology) alongside a Mark Rothko or a Jackson Pollock or a Kazimir Malevich, and they'll retreat into the Bellini, even though it is one of the most puzzling unsolved mysteries of the art world, a riddle of a picture for which not one reasonable solution has ever been put forward. The Pollock, on the other hand, is just a tangle of dripped paint, the Rothko just a color with a bar of another color on top of it, the Malevich is all white.

Kandel offers this explanation:

In abstract painting, elements are included not as visual reproductions of objects, but as references or clues to how we conceptualize objects. In describing the world they see, abstract artists not only dismantle many of the building blocks of bottom-up visual processing by eliminating perspective and holistic depiction, they also nullify some of the premises on which bottom-up processing is based. We scan an abstract painting for links between line segments, for recognizable contours and objects, but in the most fragmented works, such as those by Rothko, our efforts are thwarted.

Thus the reason abstract art poses such an enormous challenge to the beholder is that it teaches us to look at art -- and, in a sense, at the world -- in a new way. Abstract art dares our visual system to interpret an image that is fundamentally different from the kind of images our brain has evolved to reconstruct.

"We like to think of abstraction as a 20th century phenomenon," he writes, but its roots lie far deeper:

A look at ancient art finds it full of abstraction. Most art history books, if they go back far enough, begin with Cycladic figurines (dated to 3300-1100 BC). Abstracted, ghost-like, sort-of-human forms. Even on cave walls, a few lines suggest an animal, or a constellation of blown hand-prints float on a wall in absolute darkness.

Abstract art is where we began, and where we have returned. It makes our brains hurt, but in all the right ways, for abstract art forces us to see, and think, differently.

Enriching, but not merely entertaining--no wonder it's so unpopular.

disappearing data?


WaPo's Brady Dennis informs us that "scientists have begun a feverish attempt to copy reams of government data onto independent servers in hopes of safeguarding it from any political interference:"

In recent weeks, President-elect Donald Trump has nominated a growing list of Cabinet members who have questioned the overwhelming scientific consensus around global warming. [...]

Those moves have stoked fears among the scientific community that Trump, who has called the notion of man-made climate change "a hoax" and vowed to reverse environmental policies put in place by President Obama, could try to alter or dismantle parts of the federal government's repository of data on everything from rising sea levels to the number of wildfires in the country.

There is, sadly, historical precedent for just this sort of disappearing data:

Climate data from NASA and the National Oceanic and Atmospheric Administration have been politically vulnerable. When Tom Karl, director of the National Centers for Environmental Information, and his colleagues published a study in 2015 seeking to challenge the idea that there had been a global warming "slowdown" or "pause" during the 2000s, they relied, in significant part, on updates to NOAA's ocean temperature data set, saying the data "do not support the notion of a global warming 'hiatus.'"

In response, the U.S. House Science, Space and Technology Committee chair, Rep. Lamar S. Smith (R-Tex.), tried to subpoena the scientists and their records.

Andrew Dessler, professor of atmospheric sciences at Texas A&M, commented:

"If you can just get rid of the data, you're in a stronger position to argue we should do nothing about climate change."

"strange numbers"


Kevin Hartnett's dive into the strange numbers found in particle collisions is a nice read, exploring "a surprising correspondence that has the potential to breathe new life into the venerable Feynman diagram and generate far-reaching insights in both fields:"

It has to do with the strange fact that the values calculated from Feynman diagrams seem to exactly match some of the most important numbers that crop up in a branch of mathematics known as algebraic geometry. These values are called "periods of motives," and there's no obvious reason why the same numbers should appear in both settings. Indeed, it's as strange as it would be if every time you measured a cup of rice, you observed that the number of grains was prime.

Hartnett writes that "mathematicians and physicists are working together to unravel the coincidence:"

For mathematicians, physics has called to their attention a special class of numbers that they'd like to understand: Is there a hidden structure to these periods that occur in physics? What special properties might this class of numbers have? For physicists, the reward of that kind of mathematical understanding would be a new degree of foresight when it comes to anticipating how events will play out in the messy quantum world.

Here's an infographic that might clarify things:

[Figure: 20161120-potentialshortcut.jpg]

group selection


David S. Wilson writes about Elinor Ostrom and the tragedy of the commons, particularly the path forward from her 1990 book Governing the Commons. "Is the so-called tragedy of the commons," he asks (referencing Garrett Hardin's famed 1968 Science essay), "ever averted in the biological world and might this possibility provide solutions for our own species?"

One plausible scenario is natural selection at the level of groups. A selfish farmer might have an advantage over other farmers in his village, but a village that somehow solved the tragedy of the commons would have a decisive advantage over other villages. Most species are subdivided into local populations at various scales, just as humans are subdivided into villages, cities and nations. If natural selection between groups (favoring cooperation) can successfully oppose natural selection within groups (favoring non-cooperation), then the tragedy of the commons can be averted for humans and non-human species alike.
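Wilson's scenario maps directly onto a toy multilevel-selection simulation: within each village, defectors out-earn their cooperative neighbors; between villages, the more cooperative ones out-reproduce the rest. Everything below (payoffs, village counts, selection strengths) is an arbitrary illustration of that tension, not a model from his article.

    import random

    rng = random.Random(1)
    B, C = 3.0, 1.0                 # public-good benefit and cooperator cost
    N_VILLAGES, SIZE = 30, 20
    W_WITHIN = 0.2                  # strength of within-village (individual) selection
    K_BETWEEN = 3                   # village replacements per generation (group selection)

    def payoffs(village):
        """village: list of bools, True = cooperator. Cooperators pay C to add B to a shared pool."""
        pool = B * sum(village) / len(village)
        return [pool - (C if coop else 0.0) for coop in village]

    def reproduce_within(village):
        """Individual selection: defectors out-earn cooperators inside the village."""
        weights = [1.0 + W_WITHIN * p for p in payoffs(village)]
        return rng.choices(village, weights=weights, k=len(village))

    def compete_between(villages):
        """Group selection: productive (more cooperative) villages replace random others."""
        fitness = [1.0 + sum(payoffs(v)) / len(v) for v in villages]
        for _ in range(K_BETWEEN):
            winner = rng.choices(range(len(villages)), weights=fitness)[0]
            villages[rng.randrange(len(villages))] = list(villages[winner])
        return villages

    # Villages start out varying a lot in how cooperative they are.
    villages = []
    for _ in range(N_VILLAGES):
        p = rng.random()
        villages.append([rng.random() < p for _ in range(SIZE)])

    for generation in range(300):
        villages = [reproduce_within(v) for v in villages]
        villages = compete_between(villages)

    share = sum(map(sum, villages)) / (N_VILLAGES * SIZE)
    print(f"cooperator share after 300 generations: {share:.2f}")

With no migration or mutation, an all-cooperator village can never be invaded from within and out-reproduces less cooperative villages, so in this toy setup cooperation tends to spread; add migration between villages and the balance tilts back toward the selfish farmers.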

Rio scale


538 analyzes the search for an alien signal, opining that although the work of "alien hunting, commonly referred to as the Search for Extraterrestrial Intelligence, or SETI, is still very much relegated to the sidelines," it's become a "cutting edge" pursuit:

Occasionally, promising signals make their way through the broader astronomical community and into the public eye. A few such claims have made headlines recently, prompting some astronomers to call for a new framework to rank and interpret these signals.

"To help with this," she writes, "astronomers came up with a way to gauge the credibility of a SETI signal, called the Rio scale," where "an answer of 0 is obviously nothing and a 10 is 'wow, aliens are calling':"

Formulated at an astronomical conference in Rio de Janeiro in 2000, it's a 10-point scale intended to help people understand when to take an apparent signal from another world seriously. [...]

RS = Q · δ

In this equation, Q is the sum of numerical values assigned to three parameters: the class of phenomenon, such as whether it's an "obviously Earth-directed message" or a randomly swooping beacon; the type of discovery, like whether it's a steady signal or something that comes and goes; and the distance to the signal. The latter is important because you'd want to know how long it would take for aliens to receive a reply.

Each parameter has a numerical value from 1 to 4, 5 or 6. For instance, "an Earth-specific beacon designed to draw attention" gets a 4. If it's within the galaxy, add 2. If it was a passing signal detected once, it gets another 2. To get your Rio scale value, you multiply this sum by δ, which is a measure of the credibility of the claim. The values for δ go from 0 to 2/3. If it's uncertain but worth checking out, for example, that's 1/6. This quantity depends on experts' opinions, so it's inherently subjective. And unless a signal has been verified repeatedly by SETI experts, the δ value is almost certain to drop its total value to a 2 or a 3.
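As a sanity check, the worked example in that passage reduces to a couple of lines of arithmetic; the scores below simply restate the article's example, not the scale's full official definitions.

    # Rio scale: RS = Q * delta, where Q is the sum of three parameter scores.
    class_of_phenomenon = 4   # "an Earth-specific beacon designed to draw attention"
    discovery_type      = 2   # a passing signal detected once
    distance            = 2   # within our galaxy

    Q = class_of_phenomenon + discovery_type + distance   # Q = 8
    delta = 1 / 6                                         # "uncertain but worth checking out"

    rio_scale = Q * delta
    print(f"Rio scale value: {rio_scale:.1f} out of 10")  # ~1.3 -- probably not aliens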

coding is not fun


Walter Vannini, a digital consultant and researcher, writes that coding is not fun:

Programming computers is a piece of cake. Or so the world's digital-skills gurus would have us believe. From the non-profit Code.org's promise that 'Anybody can learn!' to Apple chief executive Tim Cook's comment that writing code is 'fun and interactive', the art and science of making software is now as accessible as the alphabet.

"Unfortunately," he writes, "this rosy portrait bears no relation to reality:"

For starters, the profile of a programmer's mind is pretty uncommon. As well as being highly analytical and creative, software developers need almost superhuman focus to manage the complexity of their tasks. Manic attention to detail is a must; slovenliness is verboten. Attaining this level of concentration requires a state of mind called being 'in the flow', a quasi-symbiotic relationship between human and machine that improves performance and motivation.

Coding isn't the only job that demands intense focus. But you'd never hear someone say that brain surgery is 'fun', or that structural engineering is 'easy'. When it comes to programming, why do policymakers and technologists pretend otherwise?

Anyone can learn to type a "Hello, world!" program, but the artisans are few--and the artists far fewer. "Insisting on the glamour and fun of coding," he continues, "is the wrong way to acquaint kids with computer science:"

It insults their intelligence and plants the pernicious notion in their heads that you don't need discipline in order to progress. As anyone with even minimal exposure to making software knows, behind a minute of typing lies an hour of study.

It's better to admit that coding is complicated, technically and ethically [and] it's irresponsible to speak of coding as a lightweight activity. Software is not simply lines of code, nor is it blandly technical. In just a few years, understanding programming will be an indispensable part of active citizenship. The idea that coding offers an unproblematic path to social progress and personal enhancement works to the advantage of the growing techno-plutocracy that's insulating itself behind its own technology.

At Slate, David Auerbach discusses the Intercept's reporting last week that "Microsoft probably holds a copy of the encryption keys for Windows 10 users' hard drives," which suggests that "Microsoft took the path of least resistance and chose to store the recovery key on the user's OneDrive cloud account:"

The company is clearly trying to catch up to Google, Apple, and Facebook in the user-data race, and so its policies emit a whiff of eminent domain: Even when we aren't looking at your files which we mirror to OneDrive, Microsoft seems to be saying, we are the ones taking care of your data. Once the company's got it, will Microsoft be tempted to ask for a bigger peek, just as Facebook has gradually gotten nosier with its user data? The tenets of capitalism says yes.

The conclusion is bleak:

If privacy matters enough that you want to protect your machine and your data from the eyes of the government and the tech industry, you shouldn't be using Windows 10--or Apple, or Android--in the first place.

Theranos


Julia Belluz digs into the now-familiar "amazing origin story" of Theranos and Elizabeth Holmes:

Holmes, the company's founder, dropped out of Stanford as a sophomore in 2004. She said she'd started the company both to address her phobia of needles -- one that she realized many people shared -- and out of the desire to help people diagnose potential diseases faster and at more accessible prices.

Over the next decade, Holmes managed not only to get her own Stanford professor and mentor on board, but also to attract $400 million from venture capitalists, and assemble a star-studded board that included former US Secretaries of State Henry Kissinger and George P. Shultz.

Holmes became "one of the youngest self-made billionaires around" without having to deliver solid results:

Two years ago, the company started offering blood tests that it claimed required only a finger prick -- and could deliver results of up to 30 tests in hours using its own lab testing instrument (called "Edison machines"). In numerous interviews, Elizabeth Holmes, the 31-year-old founder of the company, argued that this technology would be revolutionary, slashing costs in the $75 billion-a-year blood testing industry. Investors and media loved it, and last year the company was valued at $9 billion.

"But recently," Belluz continues, "Theranos has started attracting doubters and critics"--including a prominent WSJ feature alleging false claims:

Theranos has allegedly been collecting blood samples the traditional way and then diluting them so they could be run on machines made by other companies -- not their much-hyped Edison technology.

What's more, the Journal's investigation, as well as a follow-up story, suggested there were major concerns about the accuracy of Theranos's test results. It's a messy story, full of wild claims and regulatory clashes.

More questions emerged regarding the lack of FDA approval and the absence of peer-reviewed results, and Belluz concludes that "This whole episode should be a cautionary tale:"

If a secretive tech company is claiming to revolutionize an entire industry with technology that still hasn't been validated, be skeptical.

Anthropologist Rachel Caspari of Central Michigan University suggests that old age made us human, writing that during the Upper Paleolithic period about 30,000 years ago, "there were twice as many adults who died after age 30 as those who died young:"

The Upper Paleolithic is also when modern humans really started flourishing. That's one of the times the population boomed and humans created complex art, used symbols, and colonized even inhospitable environments.

The change was not in our genes, but in our culture, Slate continues:

Something about how people were living made it possible to survive into old age, maybe the way they found or stored food or built shelters, who knows. That's all lost--pretty much all we have of them is teeth--but once humans found a way to keep old people around, everything changed.

Old people are repositories of information, Caspari says. They know about the natural world, how to handle rare disasters, how to perform complicated skills, who is related to whom, where the food and caves and enemies are. They maintain and build intricate social networks. A lot of skills that allowed humans to take over the world take a lot of time and training to master, and they wouldn't have been perfected or passed along without old people. "They can be great teachers," Caspari says, "and they allow for more complex societies." Old people made humans human.

The question, "What's so special about age 30?" is answered by the observation that "That's when you're old enough to be a grandparent:"

Studies of modern hunter-gatherers and historical records suggest that when older people help take care of their grandchildren, the grandchildren are more likely to survive. The evolutionary advantages of living long enough to help raise our children's children may be what made it biologically plausible for us to live to once unthinkably old ages today.

"We're now on the other side," it continues, "of the second great demographic change in human evolutionary history:"

The main reason lifespan doubled in the past 150 years is that infant mortality plummeted. Just as having old people around changed human culture profoundly 30,000 years ago, having infants and children survive has fundamentally changed modern society.

Parents knew they couldn't expect infants to live. [...] But overall, parents' relationships with their children were fundamentally different than they are in much of the world today.

"Children were the focus of many early public health drives," the piece continues--and of much legislation today:

After the increase in child survival, the other major demographic change to come from the doubling of average human lifespan is a robust population of old people. In 1850, the proportion of people age 60 or older in the United States was about 4 percent. Today they account for about 20 percent of the population.

Economists fret about declining birth rates in the developed world and the challenge of financially supporting large elderly populations. But old people are awesome. Having a high ratio of older to younger people isn't just a consequence of living in peace and prosperity--it's also the foundation of a civilized society.

"Things go horribly wrong," the piece notes, "in societies composed largely of young people:"

Old people aren't merely less bellicose and impulsive than young people. They're also, as a group, wiser, happier, and more socially adept. They handle negative information better, have stronger relationships, and find better solutions to interpersonal conflicts than younger people do.

Lawrence Krauss explains why Creationism is child abuse, and suggests that we need to stop validating ignorance:

And if you think about it, teaching kids - or allowing the notion that the earth is 6,000 years old to be promulgated in schools is like teaching kids that the distance across the United States is 17 feet. That's how big an error it is.

"I've often said," he continues, that "the purpose of education is not to validate ignorance but to overcome it:"

Technology and biotechnology will be the basis of our economic future. And if we allow nonsense to be promulgated in the schools, we do a disservice to our students, a disservice to our children, and we're guaranteeing that they will fall behind in a competitive world that depends upon a skilled workforce able to understand and manipulate technology and science.

Michelle Legro explicates the transit of Venus, praising Andrea Wulf's book Chasing Venus: The Race to Measure the Heavens:

In 1716, sixty-year-old Sir Edmund Halley called on astronomers all over the world to leave their cozy observatories, travel to the edges of the known world, set up their telescopes, and turn their eyes toward the sunrise on the morning of June 6th, 1761, when the first Transit of Venus of the scientific age would march across the face of the sun.

In the eighteenth century, the solar system had a shape but not a size. By timing the entrance and the exit of Venus across the sun from latitudes all over the world, Halley explained, astronomers could roughly calculate the distance between the Earth and the Sun -- a "celestial yardstick" for measuring the universe, as Andrea Wulf calls it in her excellent book Chasing Venus: The Race to Measure the Heavens.

It was the first worldwide scientific collaboration of its kind, a mathematical olympiad six hours in duration, with years of planning and seconds that counted. [...] Chasing Venus chronicles a rare planetary event that happened at a rare juncture in human history, when the age of empire, the age of science, and the age of curiosity brought the world together for just a few moments -- to achieve the measure of the universe.
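The "celestial yardstick" is, at bottom, a parallax calculation: transit timings from widely separated latitudes let astronomers infer the solar parallax, the tiny angle the Earth's radius subtends as seen from the Sun, and that angle plus the known size of the Earth gives the Earth-Sun distance. Here is just that final step, using the modern value of the solar parallax rather than anything measured in 1761:

    import math

    earth_radius_m = 6.371e6    # mean radius of the Earth, in meters
    solar_parallax = 8.794      # modern value, in arcseconds

    parallax_rad = math.radians(solar_parallax / 3600)
    earth_sun_distance_m = earth_radius_m / math.tan(parallax_rad)

    print(f"Earth-Sun distance ≈ {earth_sun_distance_m:.3e} m")   # ~1.49e11 m, about 1 AU

The eighteenth-century expeditions produced a scatter of parallax estimates around this value; combining them gave the first reasonably reliable measure of the astronomical unit.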

"The Beer Archaeologist" (Smithsonian) discusses Patrick McGovern, an academic visitor to Delaware's Dogfish Head brewpub:

"Dr. Pat," as he's known at Dogfish Head, is the world's foremost expert on ancient fermented beverages, and he cracks long-forgotten recipes with chemistry, scouring ancient kegs and bottles for residue samples to scrutinize in the lab. He has identified the world's oldest known barley beer (from Iran's Zagros Mountains, dating to 3400 B.C.), the oldest grape wine (also from the Zagros, circa 5400 B.C.) and the earliest known booze of any kind, a Neolithic grog from China's Yellow River Valley brewed some 9,000 years ago.

Widely published in academic journals and books, McGovern's research has shed light on agriculture, medicine and trade routes during the pre-biblical era.

Scientific director of the University of Pennsylvania's Biomolecular Archaeology Laboratory for Cuisine, Fermented Beverages, and Health, McGovern comments:

"I don't know if fermented beverages explain everything, but they help explain a lot about how cultures have developed," he says. "You could say that kind of single-mindedness can lead you to over-interpret, but it also helps you make sense of a universal phenomenon."

McGovern, the piece continues, "believes that booze helped make us human:"

In what might be called the "beer before bread" hypothesis, the desire for drink may have prompted the domestication of key crops, which led to permanent human settlements. Scientists, for instance, have measured atomic variations within the skeletal remains of New World humans; the technique, known as isotope analysis, allows researchers to determine the diets of the long-deceased. When early Americans first tamed maize around 6000 B.C., they were probably drinking the corn in the form of wine rather than eating it, analysis has shown.

Maybe even more important than their impact on early agriculture and settlement patterns, though, is how prehistoric potions "opened our minds to other possibilities" and helped foster new symbolic ways of thinking that helped make humankind unique, McGovern says. "Fermented beverages are at the center of religions all around the world. [Alcohol] makes us who we are in a lot of ways." He contends that the altered state of mind that comes with intoxication could have helped fuel cave drawings, shamanistic medicine, dance rituals and other advancements.

It's an interesting thesis--certainly worth discussing over a pint.
