Recently in science Category

Our loneliness epidemic is getting worse, writes Philip Perry, who points out that "staying connected is the healthiest thing to do, and not just psychologically:"

According to a 2014 University of Chicago study, loneliness can have a significant negative impact on physical health. It can increase the rate of atherosclerosis (the hardening of the arteries), increase the risk of high blood pressure and stroke, and decrease retention, which can even hurt learning and memory. What's more, the lonely often make worse life choices and are more prone to substance abuse.

Some research suggests loneliness is worse for you than smoking or obesity. It can even increase the risk of type 2 diabetes. Seniors are often the focus. Those who face social isolation actually see a 14% increased risk of premature death.

"It's ironic that we're more connected than ever before, and yet lonelier than ever," writes Perry:

Humans are social creatures and texting doesn't replace offline, face-to-face interaction. This is evident by the fact that the loneliest generation isn't the elderly but the young. Gen Z (ages 18-22), the most connected generation in history, are also in worse health than all older generations. Social media, rather than relieving the issue, has exasperated it. [sic; exacerbated]

"The survey does make some suggestions," he continues:

There's a balance one needs to strike among three particular life aspects: staying socially connected, getting regular exercise, and getting enough sleep. Americans seem to be missing the mark on all of these, throwing all their weight against their career and then familial responsibilities, leaving little time for much else.

code to joy


Andrew Smith's essay Code to Joy begins with his being commissioned "to write the first British magazine piece" on Bitcoin and its pseudonymous creator, Satoshi Nakamoto. Smith was entranced by what he learned about coding:

I was astonished to find other programmers approaching Satoshi's code like literary critics, drawing conclusions about his likely age, background, personality and motivation from his style and approach. Even his choice of programming language - C++ - generated intrigue. Though difficult to use, it is lean, fast and predictable. Programmers choose languages the way civilians choose where to live and some experts suspected Satoshi of not being "native" to C++. By the end of my investigation I felt that I knew this shadowy character and tingled with curiosity about the coder's art. For the very first time I began to suspect that coding really was an art, and would reward examination.

Noting the ubiquity--and importance--of "the code conjured by an invisible cadre of programmers," Smith points out that "our relationship with code has become symbiotic, governing nearly every aspect of our lives:"

The accelerator in your new car no longer has any physical connection to the throttle - the motion of your foot will be converted into binary numbers by some of the 100m lines of code that tell the vehicle what to do. Turn on your TV or radio, use a credit card, check in a bag at the airport, change the temperature in your fridge, get an X-ray at the dentist, text a family member, listen to music on anything other than vinyl or read this article online and each of your desires will be fulfilled by code. You may think you're wedded to your iPhone - what you really love is the bewitching code that lies within it.

Though code makes our lives easier and more efficient, it is becoming increasingly apparent how easily it can be turned to malign purposes. It's used by terrorists to spread viruses, car manufacturers to cheat emissions tests and hostile powers to hack elections.

This leads Smith to ask himself some questions:

Should I learn to code? Could I learn to code? With a trepidation I later came to recognise as deeply inadequate, I decided there was only one way to find out.

Smith narrows his focus to three languages (Python, JavaScript, and C++), investigates freeCodeCamp for HTML5 and JavaScript, and other resources for Python. "The app I want to write," he explains, "will rove Twitter feeds looking for keywords provided by a user." Then he gets to work:

I must learn how to connect with Twitter's API, or Application Programming Interface, which provides developers with access to the company's feed. I must also become familiar with Tweepy, a library of Python tools specially written to talk to Twitter. To this end I spend an entire exhausting day reading the copious online documentation about this software. Tolstoy must look like a quick skim to these people.

Smith eventually got stymied by "endless 'syntax error' messages that stop my code from doing anything at all:"

Hours later, at two in the morning, nerves stretched as if the entire staff of Facebook has thrown them out the window and shimmied down them to escape, I send an SOS to [British programmer Nicholas] Tollervey, grateful, for the first time in my life, for the eight-hour time-zone lag between San Francisco, where I live, and the UK. To my unbounded relief, he answers straight away and arranges a screen share to help solve my problem. He looks for a moment, then laughs.

"You probably don't feel like it right now," Tollervey says of Smith's code, "but you're so close." Here's the code in question:

[image: 20180519-code.jpg]

A stray parenthesis had thrown the whole program into chaos. Tollervey removes it and the code works. I stare at the screen in disbelief. We're done. Too wired to sleep, I stay up talking to Tollervey about programming for another hour. My app is crude and unlikely to change the world or disrupt anything soon, but it feels amazing to have made it. More than anything, I'm astonished at how few lines it contains. With the Twitter API security keys redacted, it appears as above.
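[For readers who want to see the shape of such a thing, here is a rough sketch of a Tweepy keyword-watcher from that era. This is my own reconstruction against the Tweepy 3.x-style interface, not Smith's actual script, and the credential values are placeholders.]

```python
# A rough sketch of a Tweepy-based keyword watcher (Tweepy 3.x era).
# The credentials are placeholders; real keys come from a Twitter
# developer account.
import tweepy

CONSUMER_KEY = "REDACTED"
CONSUMER_SECRET = "REDACTED"
ACCESS_TOKEN = "REDACTED"
ACCESS_SECRET = "REDACTED"

# Authenticate against the Twitter API.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)

# Ask the user for a keyword, then rove the feed looking for it.
keyword = input("Keyword to watch for: ")
for tweet in tweepy.Cursor(api.search, q=keyword, lang="en").items(20):
    print(f"@{tweet.user.screen_name}: {tweet.text}")
```

Run it and it prints the twenty most recent matching tweets; a stray parenthesis anywhere in it will, as Smith discovered, stop the whole thing with a SyntaxError.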

"After all the caffeine, sweat and tears," Smith asks, "were my efforts to learn to code worthwhile?"

A few hours on freeCodeCamp, familiarising myself with programming syntax and the basic concepts, cost nothing and brought me huge potential benefits. My beginner's foray has taught me more than I could have guessed, illuminating my own mind and introducing me to a new level of mental discipline, not to mention a world of humility. The collaborative spirit at code culture's heart turns out to be inspiring and exemplary. When not staring at my screen in anguish, I even had fun and now thrill to look at a piece of code and know - or at least have some idea - what's going on. I fully intend to persist with Python.

"More powerful than any of this," he concludes, "is a feeling of enfranchisement that comes through beginning to comprehend the fascinating but profoundly alien principles by which software works:"

By accident more than design, coders now comprise a Fifth Estate and as 21st-century citizens we need to be able to interrogate them as deeply as we interrogate politicians, marketers, the players of Wall Street and the media. Wittgenstein wrote that "the limits of my language mean the limits of my world." My world just got a little bigger.

Ed Yong writes about how sleep and creativity are linked, referencing the study "How Memory Replay in Sleep Boosts Creative Problem-Solving" (by Penny Lewis of Cardiff University) about the two main phases of sleep--REM and a deeper sleep called slow-wave sleep (SWS):

During that state, the brain replays memories. For example, the same neurons that fired when a rat ran through a maze during the day will spontaneously fire while it sleeps at night, in roughly the same order. These reruns help to consolidate and strengthen newly formed memories, integrating them into existing knowledge.

"Essentially," summarizes Yong, "non-REM sleep extracts concepts, and REM sleep connects them:"

Lewis is also working with Mark van Rossum from the University of Nottingham to create an artificial intelligence that learns in the way she thinks the sleeping brain does, with "a stage for abstraction and a stage for linking things together," she says.

"So you're building an AI that sleeps?" I ask her.

"Yes," she says.

I wonder if it will dream of electric sheep.

Bravo!

Henry Kissinger speculates on how the Enlightenment ends, writing that "my experience as a historian and occasional practicing statesman gave me pause" about, among other things, AI learning to play Go:

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet's purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

"Heretofore confined to specific fields of activity," Kissinger writes, "AI research now seeks to bring about a "generally intelligent" AI capable of executing tasks in multiple fields:"

A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them.

Despite the "extraordinary benefits to medical science, clean-energy provision, environmental issues, and many other areas" that Kissinger envisions from AI, he also foresees problems:

But precisely because AI makes judgments regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in its results. There are three areas of special concern:

First, that AI may achieve unintended results. [...]

Second, that in achieving intended goals, AI may change human thought processes and human values. [...]

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions. [...]

Those areas are little different from the same activities performed by humans, though--which Kissinger studiously ignores in favor of excessive hand-wringing.

Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy.

I guess we need more philosophers, then--contrary to what Marco Rubio might say.

"Dear iPhone: it was only physical," writes Katie Reid. "I recently went through a pretty significant break-up," she says, "with my smartphone. My relationship with my phone was unhealthy in a lot of ways:"

I don't remember exactly when I started needing to hold it during dinner or having to check Twitter before I got out of bed in the morning, but at some point I'd decided I couldn't be without it. I'd started to notice just how often I was on my phone--and how unpleasant much of that time had become--when my daughter came along, and, just like that, time became infinitely more precious. So, I said goodbye. Now, as I reflect on the almost seven years my smartphone and I spent together, I'm starting to realize: What I had with my phone was largely physical.

Cognitive scientists have long debated whether objects in our environment can become part of us. Philosophers Andy Clark and David Chalmers argued in their 1998 paper "The Extended Mind" that when tools help us with cognitive tasks, they become part of us--augmenting and extending our minds. Today the idea that phones specifically are extensions of ourselves is receiving a lot of recent attention.

Reid writes that "the physiological effects of losing that equipment [her phone] were acute:"

...my heart began to race in the Verizon store when the employee told me he was deactivating my phone, and in the following hours and days, I would frequently find myself reaching for my iPhone, the way a girl reaches for a non-existent ponytail after a drastic haircut. Of course, I would gradually begin to notice not being able to use Google Maps or post to Instagram, but the physical sense of loss was instantaneous and intense. I literally felt a part of me was missing.

"Clark may see a smartphone extending my mind," she continues, "but I could feel it dulling my senses:"

Without my phone, I'm more fully myself, both in mind and body. And now, more than ever, I know that looking at my phone is nothing compared to looking at my daughter while the room sways as I rock her to sleep, or how shades of indigo and orange pour in through the window and cast a dusky glow over her room, or the way her warm, milky breath escapes in tiny exhalations from her lips, or how the crickets outside sing their breathless, spring lullaby. See, once I looked up from my phone, I remembered that each experience could be a symphony for the senses, just like it had been when I was a child and, thank God, there was no such thing as smartphones.

Stephen Hawking's final theory [see here] "posits that we can obtain quantifiable data that must be collected via space probe in order to be proven correct:"

Basically, the theory holds that after the Big Bang, the universe expanded in what's known as exponential inflation but some "bubbles" of that space stopped inflating or slowed down enough for stars and galaxies to form.

The abstract is available, for those so inclined.

Quiet!


Matthew Jordan uses the new film "A Quiet Place" to springboard into the claim that "For hundreds of years, Western culture has been at war with noise:"

During the Industrial Revolution, people swarmed to cities roaring with factory furnaces and shrieking with train whistles. German philosopher Arthur Schopenhauer called the cacophony "torture for intellectual people" [*see note below], arguing that thinkers needed quietness in order to do good work. Only stupid people, he thought, could tolerate noise.

From factories to tug boats to car horns, Jordan suggests that "in modern times, the problem seems to have gotten exponentially worse:"

Planes were forced to fly higher and slower around populated areas, while factories were required to mitigate the noise they produced. In New York, the Department of Environmental Protection - aided by a van filled with sound-measuring devices and the words "noise makes you nervous & nasty" on the side - went after noisemakers as part of "Operation Soundtrap."

After Mayor Michael Bloomberg instituted new noise codes in 2007 to ensure "well-deserved peace and quiet," the city installed hypersensitive listening devices to monitor the soundscape and citizens were encouraged to call 311 to report violations.

Although "legislating against noisemakers rarely satisfied our growing desire for quietness, [and] products and technologies emerged to meet the demand of increasingly sensitive consumers," he continues, "unwanted sound continued to be a part of everyday life:"

Content as some may feel in their ready-made acoustic cocoons, the more people accustom themselves to life without unwanted sounds from others, the more they become like the family in "A Quiet Place." To hypersensitized ears, the world becomes noisy and hostile.

Maybe more than any alien species, it's this intolerant quietism that's the real monster.

It isn't that difficult to recognize that noise pollution can be on par with light pollution, or with fouled air and water, in influencing our quality of life. No one demands perfect uninterrupted silence--just the simple recognition that we're all living here together.


*Note:
An NYT piece by George Prochnik that mentioned Schopenhauer's essay notes that "around 1850, Schopenhauer pronounced noise to be the supreme archenemy of any serious thinker." As Schopenhauer writes, "This aversion to noise I should explain as follows:"

If you cut up a large diamond into little bits, it will entirely lose the value it had as a whole; and an army divided up into small bodies of soldiers, loses all its strength. So a great intellect sinks to the level of an ordinary one, as soon as it is interrupted and disturbed, its attention distracted and drawn off from the matter in hand; for its superiority depends upon its power of concentration -- of bringing all its strength to bear upon one theme, in the same way as a concave mirror collects into one point all the rays of light that strike upon it. Noisy interruption is a hindrance to this concentration. That is why distinguished minds have always shown such an extreme dislike to disturbance in any form, as something that breaks in upon and distracts their thoughts. Above all have they been averse to that violent interruption that comes from noise. Ordinary people are not much put out by anything of the sort.

"Noise," he continues, "is the most impertinent of all forms of interruption:"

It is not only an interruption, but also a disruption of thought. Of course, where there is nothing to interrupt, noise will not be so particularly painful.

Prochnik continues:

Every time a siren shrieks on the street, our conscious minds might ignore it, but other brain regions behave as if that siren were a predator barreling straight for us. Given how many sirens city dwellers are subject to over the course of an average day, and the attention-fracturing tension induced by loud sounds of every sort, it's easy to see how sensitivity to noise, once an early warning system for approaching threats, has become a threat in itself.

He also notes that "A HYENA (Hypertension and Exposure to Noise Near Airports) study published in 2009 examined the effects of aircraft noise on sleeping:"

The findings were clear: even when people stayed asleep, the noise of planes taking off and landing caused blood pressure spikes, increased pulse rates and set off vasoconstriction and the release of stress hormones. Worse, these harmful cardiovascular responses continued to affect individuals for many hours after they had awakened and gone on with their days. [...]

In American culture, we tend to regard sensitivity to noise as a sign of weakness or killjoy prudery. To those who complain about sound levels on the streets, inside their homes and across a swath of public spaces like stadiums, beaches and parks, we say: "Suck it up. Relax and have a good time." But the scientific evidence shows that loud sound is physically debilitating. A recent World Health Organization report on the burden of disease from environmental noise conservatively estimates that Western Europeans lose more than one million healthy life years annually as a consequence of noise-related disability and disease. Among environmental hazards, only air pollution causes more damage.

Prochnik wonders, "Could a critical mass of sound one day be reached that would make sustained thinking impossible?" and I submit that we can answer his question in the affirmative--at least all of us who strive to accomplish anything resembling thought in a workplace like this:

[image: 20180423-openplanoffice.jpg (original image: Photofusion/Rex Features)]

efficient brain


Stanford professor Liqun Luo wonders at Nautilus how the human brain is so efficient. "Which has more problem-solving power," Luo asks, "the brain or the computer?"

Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks--ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one's lips--let alone conceptualization and creativity.

"The computer has huge advantages over the brain," writes Luo, in both the speed and the precision of basic operations. However, the brain is "neither slow nor imprecise:"

For example, a professional tennis player can follow the trajectory of a tennis ball after it is served at a speed as high as 160 miles per hour, move to the optimal spot on the court, position his or her arm, and swing the racket to return the ball in the opponent's court, all within a few hundred milliseconds. Moreover, the brain can accomplish all these tasks (with the help of the body it controls) with power consumption about tenfold less than a personal computer. How does the brain achieve that?

Part of the explanation is that the brain "employs massively parallel processing, taking advantage of the large number of neurons and large number of connections each neuron makes:"

For instance, the moving tennis ball activates many cells in the retina called photoreceptors, whose job is to convert light into electrical signals. These signals are then transmitted to many different kinds of neurons in the retina in parallel. By the time signals originating in the photoreceptor cells have passed through two to three synaptic connections in the retina, information regarding the location, direction, and speed of the ball has been extracted by parallel neuronal circuits and is transmitted in parallel to the brain. Likewise, the motor cortex (part of the cerebral cortex that is responsible for volitional motor control) sends commands in parallel to control muscle contraction in the legs, the trunk, the arms, and the wrist, such that the body and the arms are simultaneously well positioned to receive the incoming ball.

This massively parallel strategy is possible because each neuron collects inputs from and sends output to many other neurons--on the order of 1,000 on average for both input and output for a mammalian neuron. (By contrast, each transistor has only three nodes for input and output all together.) Information from a single neuron can be delivered to many parallel downstream pathways. At the same time, many neurons that process the same information can pool their inputs to the same downstream neuron. This latter property is particularly useful for enhancing the precision of information processing. [...]

Another salient property of the brain, which is clearly at play in the return of service example from tennis, is that the connection strengths between neurons can be modified in response to activity and experience--a process that is widely believed by neuroscientists to be the basis for learning and memory. Repetitive training enables the neuronal circuits to become better configured for the tasks being performed, resulting in greatly improved speed and precision.
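[A quick way to see why pooling many noisy inputs "is particularly useful for enhancing the precision of information processing": averaging independent noisy estimates shrinks the error roughly as the square root of the number of inputs. A toy sketch in Python, mine rather than Luo's:]

```python
# Toy model: many noisy "neurons" each report the same signal;
# a downstream neuron averages (pools) their outputs.
import random
import statistics

SIGNAL = 5.0  # the true value being encoded

def noisy_report(signal, noise_sd=1.0):
    """A single neuron's noisy estimate of the signal."""
    return signal + random.gauss(0, noise_sd)

def pooled_estimate(signal, n_inputs):
    """Average the reports of n_inputs neurons."""
    return statistics.mean(noisy_report(signal) for _ in range(n_inputs))

random.seed(42)
for n in (1, 10, 100, 1000):
    errors = [abs(pooled_estimate(SIGNAL, n) - SIGNAL) for _ in range(200)]
    print(f"{n:5d} pooled inputs -> typical error {statistics.mean(errors):.3f}")
```

The error shrinks roughly as one over the square root of the number of inputs, which is one reason wiring a thousand inputs into a single neuron is a good use of all that connectivity.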

Although "recent advances have expanded the repertoire of tasks the computer is capable of performing," Luo still maintains that "the brain has superior flexibility, generalizability, and learning capability than the state-of-the-art computer:"

As neuroscientists uncover more secrets about the brain (increasingly aided by the use of computers), engineers can take more inspiration from the working of the brain to further improve the architecture and performance of computers. Whichever emerges as the winner for particular tasks, these interdisciplinary cross-fertilizations will undoubtedly advance both neuroscience and computer engineering.

[See Luo's Principles of Neurobiology (Garland Science, New York, NY, 2015) for more.]

Big Think mentioned a disturbing Pew study [see here] which found that 26% of Americans are 'almost constantly' online:

77% of American adults go online daily, while 43% are on several times per day. Only 11% of adults said they didn't use the internet at all. This rapid rise in near constant use has been attributed to the pervasiveness of smart phones.

Last November, electronics insurer Asurion completed a study that found that the average American checks their phone every 12 minutes, or about 80 times per day. Many respondents struggled to go just 10 minutes without looking at their phone, Asurion researchers said. According to a survey by Qualtrics and Accel, millennials check their phones even more often, 150 times per day on average.

"So what are the implications?" they ask:

Studies have shown that those who are constantly connected are more stressed, feel lonelier, and are more likely to experience depression or a sleep disorder. A 2015 University of Missouri study found that regular use of social media platforms increased the likelihood of envy and depression.

In the Asurion survey, 31% of respondents felt separation anxiety when they couldn't check their phone, while 60% were stressed when their phone was off, charging, or out of reach. Most millennials don't go any more than five hours without checking their phone, according to the Qualtrics and Accel study, which can be considered addictive behavior. Half of all millennials in that investigation actually checked their phone in the middle of the night.

It is worth noting that "such devices aren't offered by those who love us, but who want money, which in this model is earned by placing the right ads in front of you as often as possible." Accordingly, "The best thing to do then for the sake of your own mental health, is to limit exposure:"

Consider turning your phone off and putting it in a drawer for certain hours of the day, and allow those closest to you other means such as a landline, to contact you in case of emergency. Also, social media and online interactions should never trump real, offline ones. If you find yourself wasting too much time online, get up and talk to a coworker, schedule coffee with a friend or a friendly acquaintance, or just take a walk and stretch your legs. If you can be conscious of your internet use and carefully consider dosage, chances are, you'll be more productive and happier too.

NYRB's Madeleine Bunting refers to this effort as disarming the weapons of mass distraction:

Technology provides us with new tools to grab people's attention. These innovations are dismantling traditional boundaries of private and public, home and office, work and leisure. Emails and tweets can reach us almost anywhere, anytime. There are no cracks left in which the mind can idle, rest, and recuperate. A taxi ad offers free wifi so that you can remain "productive" on a cab journey. [...]

What, then, are the implications of how digital technologies are transforming our patterns of attention? In the current political anxiety about social mobility and inequality, more weight needs to be put on this most crucial and basic skill: sustaining attention.

The work of the psychologist B.F. Skinner--specifically the concept of "variable-ratio reinforcement," which can be summarized as "Give the pigeon a food pellet sometimes, and you have it well and truly hooked"--is eminently useful with regard to smartphones, because "We're just like the pigeon pecking at the button when we check our email or phone:"

Variable reinforcement ensures that the customer will keep coming back. It's the principle behind one of the most lucrative US industries: slot machines, which generate more profit than baseball, films, and theme parks combined. Gambling was once tightly restricted for its addictive potential, but most of us now have the attentional equivalent of a slot machine in our pocket, beside our plate at mealtimes, and by our pillow at night. Even during a meal out, a play at the theater, a film, or a tennis match. Almost nothing is now experienced uninterrupted.

Anxiety about the exponential rise of our gadget addiction and how it is fragmenting our attention is sometimes dismissed as a Luddite reaction to a technological revolution. But that misses the point. The problem is not the technology per se, but the commercial imperatives that drive the new technologies and, unrestrained, colonize our attention by fundamentally changing our experience of time and space, saturating both in information.

Bunting writes that "We actually need what we most fear: boredom:"

Despite my children's multitasking, I maintain that vital human capacities--depth of insight, emotional connection, and creativity--are at risk. I'm intrigued as to what the resistance might look like. There are stirrings of protest with the recent establishment of initiatives such as the Time Well Spent movement, founded by tech industry insiders who have become alarmed at the efforts invested in keeping people hooked. But collective action is elusive; the emphasis is repeatedly on the individual to develop the necessary self-regulation, but if that is precisely what is being eroded, we could be caught in a self-reinforcing loop.

HBR's Larry Rosen suggests 6 ways to counteract your smartphone addiction, including the following:

Use "cc" and "reply all" judiciously.

Recalibrate response time expectations.

My suggested middle ground--used in several multinational companies including Volkswagen and Deutsche Telekom--is a 7am-to-7pm policy: messages can, of course, be sent at any hour, but no one is required to respond earlier than 7am or later than 7pm.

Take regular, restorative breaks.

Reclaim friend and family time.

Keep technology out of the bedroom.

As Rosen summarizes:

Over the past decade technology has taken over our lives. While it offers access to information, connection and entertainment, it also has been shown to diminish our brainpower and harm our mental health. These six tactics--which you can implement for yourself or encourage on your team--are simple ways to ensure these ubiquitous devices do less harm than good.

Stephen Hawking's final paper is "an astounding farewell," writes Robby Berman at BigThink. "Stephen Hawking will never know if there really are multiple universes," Berman writes, "but he's left behind a hell of a parting shot: a test that could prove or disprove their existence:"

On March 4, a mere 10 days before he died, the theoretical physicist signed off on the final corrections for one last paper, "A Smooth Exit from Eternal Inflation." It proposes a data-collection mission for a deep-space probe, and it lays out the math for discerning the telltale signs of a multiverse in its data. How thrilling would it be if Hawking's final formula answers one of his most provocative questions?

The paper is still under review by a "leading journal," according to The Times, and hasn't been published yet. It was co-authored by theoretical physicist Thomas Hertog of KU Leuven University in Belgium. Work on the paper concluded at Hawking's deathbed, says The Times. [...]

Their paper asserts that evidence for multiple universes should be contained in background radiation from the beginning of time and that it should be measurable using the pair's new equations once a deep-space probe has made certain measurements.

"Leave it to Hawking to blow our minds one final, spectacular time," Berman concludes.

"woke tech"


"Woke tech" is the concept of selling technological solutions to problems caused by technology, writes Julianne Tveten at In These Times. "Capitalizing on this notion is the http://humanetech.com/ Center for Humane Technology (CHT)," she writes, "a cohort of tech-industry veterans who purportedly seek to render technology less, as they call it, 'addictive':"

CHT's plan, though scarce in detail, is multi-pronged: lobbying Congress to pressure hardware companies like Apple and Samsung to change their design standards, raising consumer awareness of harmful technologies and "empowering [tech] employees" to advocate for design decisions that command less user attention. The organization is helmed by former Google "design ethicist" Tristan Harris--who the Atlantic deems the "closest thing Silicon Valley has to a conscience"...

The tenets of the tech-remorse movement resemble those of another recent phenomenon: unplugging. Spearheaded by such multimillionaires as Deepak Chopra and Arianna Huffington, "unplugging" is the act of temporarily separating oneself from Internet-connected devices to foster relaxation and social connection. If even for a day or an evening, acolytes argue, turning off one's phone curbs its noxious, addictive effects--improving sleep, creativity, and productivity. (Relatedly, CHT is fiscally sponsored by Reboot, a nonprofit that hosts the National Day of Unplugging.)

Tveten points out that "the trend of tech repentance isn't a challenge to the bane of surveillance capitalism; it's merely an upgraded version of it:"

The smartphone makers, meditation-app companies and other appointees of the tech-reform vanguard will continue to track and monetize user data--the very issues they claim to address--while crowing about business ethics and preaching personal responsibility. While tech executives may admit to creating the problem, they most certainly won't be the ones to solve it.

"Our society is being hijacked by technology," writes Harris at CHT, and "Unfortunately, what's best for capturing our attention isn't best for our well-being:"

  • Snapchat turns conversations into streaks, redefining how our children measure friendship.
  • Instagram glorifies the picture-perfect life, eroding our self worth.
  • Facebook segregates us into echo chambers, fragmenting our communities.
  • YouTube autoplays the next video within seconds, even if it eats into our sleep.

"These are not neutral products," he continues, "They are part of a system designed to addict us." Harris is working through CHT, to "Create a Cultural Awakening" by:

...transforming public awareness so that consumers recognize the difference between technology designed to extract the most attention from us, and technology whose goals are aligned with our own. We are building a movement for consumers to take control of their digital lives with better tools, habits and demands to make this change.

Tristan Harris' TED talk "how better tech could protect us from distraction" is a good intro to his thoughts on the similarities between smartphones and slot machines.

Open Source turned 20 today, and I'd like to point toward Christine Peterson (Foresight Institute co-founder) and her personal account of being "the originator of the term 'open source software':"

On February 2, 1998, Eric Raymond arrived on a visit to work with Netscape on the plan to release the browser code under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message. In addition to Eric and me, active participants included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software. [...] Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. [...]

Later that week, on February 5, 1998, a group was assembled at VA Research to brainstorm on strategy. Attending--in addition to Eric Raymond, Todd, and me--were Larry Augustin, Sam Ockman, and attending by phone, Jon "maddog" Hall. [...]

Toward the end of the meeting, the question of terminology was brought up explicitly, probably by Todd or Eric. Maddog mentioned "freely distributable" as an earlier term, and "cooperatively developed" as a newer term. Eric listed "free software," "open source," and "sourceware" as the main options. Todd advocated the "open source" model, and Eric endorsed this. I didn't say much, letting Todd and Eric pull the (loose, informal) consensus together around the open source name. [...] There was probably not much more I could do to help; Eric Raymond was far better positioned to spread the new meme, and he did. Bruce Perens signed on to the effort immediately, helping set up Opensource.org and playing a key role in spreading the new term.

For the name to succeed, it was necessary, or at least highly desirable, that Tim O'Reilly agree and actively use it in his many projects on behalf of the community. Also helpful would be use of the term in the upcoming official release of the Netscape Navigator code. By late February, both O'Reilly & Associates and Netscape had started to use the term.

"Coming up with a phrase is a small contribution," she demurs, "but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge." ZDnet's Steven J. Vaughan-Nichols discusses Open Source and its impact, starting with Richard M. Stallman's "The GNU Manifesto" and the Free Software Foundation (FSF):

This went well for a few years, but inevitably, RMS collided with proprietary companies. The company Unipress took the code to a variation of his EMACS programming editor and turned it into a proprietary program. RMS never wanted that to happen again so he created the GNU General Public License (GPL) in 1989. This was the first copyleft license. It gave users the right to use, copy, distribute, and modify a program's source code. But if you make source code changes and distribute it to others, you must share the modified code. While there had been earlier free licenses, such as 1980's four-clause BSD license, the GPL was the one that sparked the free-software, open-source revolution.

In 1997, Eric S. Raymond published his vital essay, "The Cathedral and the Bazaar." In it, he showed the advantages of the free-software development methodologies using GCC, the Linux kernel, and his experiences with his own Fetchmail project as examples. This essay did more than show the advantages of free software. The programming principles he described led the way for both Agile development and DevOps. Twenty-first century programming owes a large debt to Raymond.

"Open source has turned twenty," concludes Vaughan-Nichols, "but its influence, and not just on software and business, will continue on for decades to come."

small data


No one talks about big data any more, says Slate's Will Oremus. "Five years ago," he writes, "an article in the New York Times' Sunday Review heralded the arrival of a new epoch in human affairs: 'The Age of Big Data':"

Society was embarking on a revolution, the article informed us, one in which the collection and analysis of enormous quantities of data would transform almost every facet of life. No longer would data analysis be confined to spreadsheets and regressions: The advent of supercomputing, combined with the proliferation of internet-connected sensors that could record data constantly and send it to the cloud, meant that the sort of advanced statistical analysis described in Michael Lewis' 2003 baseball book Moneyball could be applied to fields ranging from business to academia to medicine to romance. Not only that, but sophisticated data analysis software could help identify utterly unexpected correlations, such as a relationship between a loan recipient's use of all caps and his likelihood of defaulting. This would surely yield novel insights that would change how we think about, well, just about everything.

"Big data," he continues, "helps to power the algorithms behind our news feeds, Netflix recommendations, automated stock trades, autocorrect features, and health trackers, among countless other tools:"

But we're less likely to use the term big data these days--we just call it data. We've begun to take for granted that data sets can contain billions or even trillions of observations and that sophisticated software can detect trends in them.

Oremus cites Cathy O'Neil's Weapons of Math Destruction and Frank Pasquale's The Black Box Society as illustrations of "the fetishization of data, and its uncritical use, that tends to lead to disaster," and suggests "Another possible response to the problems that arise from biases in big data sets:"

Small data refers to data sets that are simple enough to be analyzed and interpreted directly by humans, without recourse to supercomputers or Hadoop jobs. Like "slow food," the term arose as a conscious reaction to the prevalence of its opposite.

Martin Lindstrom's 2016 book Small Data: The Tiny Clues That Uncover Big Trends looks intriguing, as Oremus concludes:

There is some hope, then, that in moving away from "big data" as a buzzword, we're moving gradually toward a more nuanced understanding of data's power and pitfalls. In retrospect, it makes sense that the sudden proliferation of data-collecting sensors and data-crunching supercomputers would trigger a sort of gold rush, and that fear of missing out would in many cases trump caution and prudence. It was inevitable that thoughtful people would start to call our collective attention to these cases, and that there would be a backlash, and perhaps ultimately a sort of Hegelian synthesis.

Lifehacker's piece on smartphone addiction paraphrases Tristan Harris on the psychological similarities between smartphones and slot machines:

Most of the time, you check your phone and there's nothing interesting--no notifications, just the same old apps staring back at you. But sometimes checking your phone is rewarding--you get an amusing text, a flurry of likes, an email containing good news. This hit is satisfactory enough to keep you returning, checking your phone compulsively for another dopamine jolt.

Tristan Harris' piece on how technology hijacks people's minds explains the phenomenon in more detail, as part of a major problem based in "what product designers do to your mind:"

They play your psychological vulnerabilities (consciously and unconsciously) against you in the race to grab your attention.

He identifies ten hijacking methods, several of which I've excerpted below:

Hijack #1: If You Control the Menu, You Control the Choices

Western Culture is built around ideals of individual choice and freedom. Millions of us fiercely defend our right to make "free" choices, while we ignore how we're manipulated upstream by limited menus we didn't choose. [...]

By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we're given, the more we'll notice when they don't actually align with our true needs.

Hijack #2: Put a Slot Machine In a Billion Pockets

If you're an app, how do you keep people hooked? Turn yourself into a slot machine. [...] One major reason why is the #1 psychological ingredient in slot machines: intermittent variable rewards.

If you want to maximize addictiveness, all tech designers need to do is link a user's action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.

Does this effect really work on people? Yes. Slot machines make more money in the United States than baseball, movies, and theme parks combined. Relative to other kinds of gambling, people get 'problematically involved' with slot machines 3-4x faster according to NYU professor Natasha Dow Schüll, author of Addiction by Design.

But here's the unfortunate truth -- several billion people have a slot machine in their pocket:

  • When we pull our phone out of our pocket, we're playing a slot machine to see what notifications we got.
  • When we pull to refresh our email, we're playing a slot machine to see what new email we got.
  • When we swipe down our finger to scroll the Instagram feed, we're playing a slot machine to see what photo comes next.
  • When we swipe faces left/right on dating apps like Tinder, we're playing a slot machine to see if we got a match.
  • When we tap the # of red notifications, we're playing a slot machine to see what's underneath.
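[Harris's "intermittent variable rewards" are Skinner's variable-ratio schedule in app form. A toy simulation, purely illustrative with made-up probabilities, of what each check of the phone looks like under a fixed versus a variable schedule:]

```python
# Toy Skinner-box schedules: fixed-ratio vs. variable-ratio rewards.
# Purely illustrative; the reward probability is made up.
import random

def fixed_ratio_check(pull_number, every=5):
    """Reward arrives predictably, on every 5th check."""
    return pull_number % every == 0

def variable_ratio_check(p=0.2):
    """Reward arrives unpredictably: each check pays off with
    probability p, so the payoff schedule can't be learned."""
    return random.random() < p

random.seed(1)
for pull in range(1, 21):
    fixed = "reward!" if fixed_ratio_check(pull) else "nothing"
    variable = "reward!" if variable_ratio_check() else "nothing"
    print(f"check #{pull:2d}  fixed: {fixed:8s}  variable: {variable}")
```

The fixed schedule is learnable and easy to ignore; the variable one is exactly the "maybe this time" uncertainty that keeps the checking going.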

Hijack #5: Social Reciprocity (Tit-for-tat)

We are vulnerable to needing to reciprocate others' gestures. [...] Email, texting and messaging apps are social reciprocity factories. But in other cases, companies exploit this vulnerability on purpose.

LinkedIn is the most obvious offender. LinkedIn wants as many people creating social obligations for each other as possible, because each time they reciprocate (by accepting a connection, responding to a message, or endorsing someone back for a skill) they have to come back through linkedin.com where they can get people to spend more time.

Like Facebook, LinkedIn exploits an asymmetry in perception. When you receive an invitation from someone to connect, you imagine that person making a conscious choice to invite you, when in reality, they likely unconsciously responded to LinkedIn's list of suggested contacts. In other words, LinkedIn turns your unconscious impulses (to "add" a person) into new social obligations that millions of people feel obligated to repay. All while they profit from the time people spend doing it. [...]

Imagine millions of people getting interrupted like this throughout their day, running around like chickens with their heads cut off, reciprocating each other -- all designed by companies who profit from it.

Welcome to social media.

Hijack #6: Bottomless bowls, Infinite Feeds, and Autoplay

Another way to hijack people is to keep them consuming things, even when they aren't hungry anymore.

How? Easy. Take an experience that was bounded and finite, and turn it into a bottomless flow that keeps going.

Cornell professor Brian Wansink demonstrated this in his study showing you can trick people into continuing to eat soup by giving them a bottomless bowl that automatically refills as they eat. With bottomless bowls, people eat 73% more calories than those with normal bowls and underestimate how many calories they ate by 140 calories.

Tech companies exploit the same principle. News feeds are purposely designed to auto-refill with reasons to keep you scrolling, and purposely eliminate any reason for you to pause, reconsider or leave.

It's also why video and social media sites like Netflix, YouTube or Facebook autoplay the next video after a countdown instead of waiting for you to make a conscious choice (in case you won't). A huge portion of traffic on these websites is driven by autoplaying the next thing.

Hijack #7: Instant Interruption vs. "Respectful" Delivery

Companies know that messages that interrupt people immediately are more persuasive at getting people to respond than messages delivered asynchronously (like email or any deferred inbox).

Given the choice, Facebook Messenger (or WhatsApp, WeChat or SnapChat for that matter) would prefer to design their messaging system to interrupt recipients immediately (and show a chat box) instead of helping users respect each other's attention.

In other words, interruption is good for business.

[But, like any other addiction, horrible for concentration.]

The problem is, while messaging apps maximize interruptions in the name of business, it creates a tragedy of the commons that ruins global attention spans and causes billions of interruptions every day.

"Facebook promises an easy choice to 'See Photo,' he writes, but "Would we still click if it gave the true price tag?" He illustrates his point with this image:

[image: 20170605-spendnext20minutes.png]

"That's why I add 'Estimated reading time' to the top of my posts," he explains:

When you put the "true cost" of a choice in front of people, you're treating your users or audience with dignity and respect. [...] The ultimate freedom is a free mind, and we need technology to be on our team to help us live, feel, think and act freely.

We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People's time is valuable. And we should protect it with the same rigor as privacy and other digital rights.

In another article, Harris asks, is your web browser a credit card for your time?

Credit cards invite us to avoid feeling the pain of paying, and to forget how much money we actually have.

Cash invites us to consciously feel how much we spend.

He also stresses the scale of the issue, because "software designers affect how a billion people make choices about spending their attention - more than 150 times every day:"

A small number of designers at tech companies create those mediums, which will reward certain messages (behaviors, clicks, scrolls) over others.

And today, web browsers are designed like credit cards. They make it easy to "swipe" the credit card for our time and take out a loan against our future selves.

They make it easy to swipe our credit card for more time than we intended, by getting lost in an infinitely scrolling feed. They make it easy to click something we wish we hadn't clicked later.

His pieces are all well worth reading--and thinking about--if we value our time.

Slackers


I've been digging deeper into the Slack-doubter camp, hoping to convince one of my company's executives to temper her enthusiasm for constant chatting as some sort of productivity panacea. Samuel Hulick's piece "Slack, I'm Breaking Up with You" makes the all-too-common plea that Slack is "asking for A LOT of my time," and "it has been absolutely brutal on my productivity:"

I may have been fooling myself when we were still in the honeymoon phase, but when there was all the talk of you killing email, I have to admit I thought it was the email problem you were attacking, not just the email platform.

Which is to say, I thought you were providing some relief from the torrential influx of messages, alerts, and notifications I was receiving on a daily basis. "Me + Slack = Fewer distractions and more productivity," I thought at the time. I have to say, though, that I've since found it to be the opposite.

Like, WAY the opposite.

With you in my life, I've received exponentially more messages than I ever have before.

"While it's true that email was (and, despite your valiant efforts, still very much is) a barely-manageable firehose of to-do list items controlled by strangers," he continues, "one of the few things that it did have going for it was that at least everything was in one place:"

Trying to keep up with the manifold follow-up tasks from the manifold conversations in your manifold teams and channels requires a Skynet-like metapresence that is simply beyond me.

With you, the firehose problem has become a hydra-headed monster.

Everything is scattered, and the mental load that comes with it is real. Linda Stone calls this perpetual, shallow quasi-presence "continuous partial attention" [see below], and this makes each conversational thread, almost by definition, a loose one.

Responding to a description of Slack as "an all-day meeting with unknown participants and no agenda," Hulick notes:

Will they respond in 5 seconds or 5 hours? Who knows! It's like getting caught in one of those support chats from hell with a Comcast rep who's clearly trying to simultaneously jockey a dozen text conversations like some kind of bargain basement Bobby Fischer, except that it's all day long and with everyone I know. [...]

I wonder if conducting business in an asynchronish environment simply turns every minute into an opportunity for conversation, essentially "meeting-izing" the entire workday.

"All-day meetings every day of the week are substantially more 'meetings' than the ones you're saving me from," he observes:

This is awesome for speeding up the tempo of company directives, but it also places a ton of pressure on everyone involved to maintain even MORE Slack omnipresence; if any discussion might lead to a decision being made, that provides a whole lot of incentive to be available for as many discussions as possible.

Even worse, those with the least on their plates can maintain the most Slack presence, which leads to the most gregariously unengaged representing the majority of the discussion base while penalizing those who are fully engaged in their "real" work.

Christopher Batts, who titled his piece "Actually, Slack really sucks," comments that "my life isn't any easier now that everyone insists on using Slack. I've actually noticed it's far more complex and distracting:"

Zero time saved, but lots of time newly wasted by the workflow Slack provides. It just doesn't cater for tasks that aren't immediate, and it doesn't cater for teams that work on different timezones.

In the "Managing Notifications" section of his article, Batts notes that "Notifications come from everywhere and couldn't be less ordered if they tried," and his section on "Productivity" simply states, "I get so much more done when Slack is closed:"

I can integrate Slack with everything from Skype to CircleCI. Great if I stare at Slack all day, checking a feed for a response from the latest set of integration tests running. Not so great if I have things that need doing.

Ann Diab's look at workplace chat asks a similar question: "If I'm always available, when can I get any work done?"

Whether it's HipChat, Slack, G-Chat, or any other form of IRC and instant messaging, workplace chat tools facilitate quick answers and instant gratification. But it's possible that getting instant input like this is doing more harm than good to the morale of your teammates and to overall company culture. [emphasis added]

Michael Muse's quantified look at Slack mentions that "Slack makes their users feel 32% more effective" but then asks, what's the downside? "Sometimes, Slack (or your favorite chat app, this topic applies equally to any of them) is the best tool for the job, but I'd like to delve into cases where it isn't." He observes that "nearly two-thirds of Slack messages at our company aren't in channels at all! They're in Direct Messages (DM)," and notes that "This poses a problem:"

While a DM may sound like less of a concern for interruption than a public channel, they have a very different social contract:
- Unlike channels, you are assumed to always be listening. The sender knows you received a notification about their message, much like SMS.

- Unlike channels, you are the only person who can weigh in on the matter, so the message cannot be answered by someone else. It is awaiting your response.

This means that DMs feel much more urgent and important. Ever interrupted an in-person conversation because you noticed something in a Slack channel? Me neither. But you better believe I've cut off a verbal conversation midstream to answer a DM.

[He discusses this further in Footnote 4: "When a DM requires you to do some work to resolve it - the assignment of work happens at the requester's discretion, not the work-doer, which is TOTALLY backwards. On a team of work-doers, proper assignment of the work considers bandwidth, compatibility with other similar work, even learning opportunity" as opposed to some sort of LIFO prioritization.] He then considers "the aggregate listening cost" of the ten thousand direct messages his company deals with daily--as he writes, "14 is a data-confirmed, low-end-average for the number of high urgency, high importance interruptions each person at our company gets over DM every day:"

"So," you say, "what's the big deal with 14 interruptions per day? Had they been emails, I still would have dealt with them eventually." But you didn't deal with them eventually. You dealt with them immediately. [...]

There is a popular word for what's happening -- someone else puts some work into the very top of your queue, interrupting what you are doing and obligating you (socially or implicitly) to work on it now. The workplace slang is to call it a firedrill.

He writes that "DMing someone might as well be called firedrilling them," and pithily summarizes:

I like to call this unmeasured, unpredictable interruption phenomenon Slack-a-Mole. Don't play Slack-a-Mole. You're never gonna win the oversized teddy bear.

His footnotes link to an older piece from Trello co-founder Joel Spolsky that examines the harm of task-switching, and makes two main observations about workflow:

a) sequential processing gets you results faster on average, and

b) the longer it takes to task switch, the bigger the penalty you pay for multitasking.

This is because "programming is the kind of task where you have to keep a lot of things in your head at once:"

The more things you remember at once, the more productive you are at programming. A programmer coding at full throttle is keeping zillions of things in their head at once: everything from names of variables, data structures, important APIs, the names of utility functions that they wrote and call a lot, even the name of the subdirectory where they store their source code. If you send that programmer to Crete for a three week vacation, they will forget it all. The human brain seems to move it out of short-term RAM and swaps it out onto a backup tape where it takes forever to retrieve.

"As it turns out," he continues, "if you give somebody two things to work on, you should be grateful if they 'starve' one task and only work on one, because they're going to get more stuff done and finish the average task sooner:"

In fact, the real lesson from all this is that you should never let people work on more than one thing at once. Make sure they know what it is. Good managers see their responsibility as removing obstacles so that people can focus on one thing and really get it done. When emergencies come up, think about whether you can handle it yourself before you delegate it to a programmer who is deeply submersed in a project.
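
Spolsky's two observations are easy to check with a toy simulation. The sketch below uses hypothetical task lengths, slice sizes, and switch costs (none of these numbers come from his piece); it compares the average completion time of two tasks done back-to-back against the same tasks done in alternating slices:

    # Toy comparison of sequential work vs. task-switching, with made-up numbers.
    def sequential(task_hours, n_tasks):
        """Average completion time when tasks are finished one after another."""
        finishes = [task_hours * (i + 1) for i in range(n_tasks)]
        return sum(finishes) / n_tasks

    def interleaved(task_hours, n_tasks, slice_hours, switch_cost):
        """Average completion time when tasks are worked on in alternating slices."""
        remaining = [task_hours] * n_tasks
        done = [None] * n_tasks
        clock = 0.0
        while any(d is None for d in done):
            for i in range(n_tasks):
                if done[i] is not None:
                    continue
                clock += switch_cost                  # pay the context-switch penalty
                work = min(slice_hours, remaining[i])
                clock += work
                remaining[i] -= work
                if remaining[i] == 0:
                    done[i] = clock
        return sum(done) / n_tasks

    print(sequential(10, 2))             # tasks finish at hours 10 and 20 -> average 15.0
    print(interleaved(10, 2, 1, 0.0))    # both finish near the end -> average 19.5, even with zero switch cost
    print(interleaved(10, 2, 1, 0.25))   # add a switch cost and the average climbs to ~24.4

Sequential processing wins on average even before any switch penalty, and the penalty only widens the gap--which is exactly the pair of observations above.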

Also worth reading is this APA summary on the switching costs of multitasking, which observes that "Doing more than one task at a time, especially more than one complex task, takes a toll on productivity:"

Psychologists who study what happens to cognition (mental processes) when people try to perform more than one task at a time have found that the mind and brain were not designed for heavy-duty multitasking. Psychologists tend to liken the job to choreography or air-traffic control, noting that in these operations, as in others, mental overload can result in catastrophe.

The piece includes synopses of several relevant studies; here is a particularly telling snippet:

Although switch costs may be relatively small, sometimes just a few tenths of a second per switch, they can add up to large amounts when people switch repeatedly back and forth between tasks. Thus, multitasking may seem efficient on the surface but may actually take more time in the end and involve more error. Meyer [see "A computational theory of executive cognitive processes and multiple-task performance" Parts 1 and 2] has said that even brief mental blocks created by shifting between tasks can cost as much as 40 percent of someone's productive time.

Linda Stone's description, mentioned above by Hulick, of "continuous partial attention" points out that it is "different from multi-tasking." As she observes, "To pay continuous partial attention is to pay partial attention -- CONTINUOUSLY. It is motivated by a desire to be a LIVE node on the network:"

We pay continuous partial attention in an effort NOT TO MISS ANYTHING. It is an always-on, anywhere, anytime, any place behavior that involves an artificial sense of constant crisis. We are always in high alert when we pay continuous partial attention.

This has negative effects beyond impeding task completion, because "in large doses, it [continuous partial attention] contributes to a stressful lifestyle, to operating in crisis management mode, and to a compromised ability to reflect, to make decisions, and to think creatively:"

In a 24/7, always-on world, continuous partial attention used as our dominant attention mode contributes to a feeling of overwhelm, over-stimulation and to a sense of being unfulfilled. We are so accessible, we're inaccessible. The latest, greatest powerful technologies have contributed to our feeling increasingly powerless.

"We have focused on managing our time," she observes, where we should instead "focus on how we manage our attention:"

We are evolving beyond an always-on lifestyle. As we make choices to turn the technology OFF, to give full attention to others in interactions, to block out interruption-free time, and to use the full range of communication tools more appropriately, we will re-orient our trek toward a path of more engaged attention, more fulfilling relationships, and opportunities for the type of reflection that fuels innovation.

Executives with a simplistic bottom-line mentality that sees only short-term costs may treat cloud services as a panacea for onsite IT expenses, but only because they're ignoring (or discounting) issues of availability and security. Quinn Norton's Medium piece "The Problem with Slack" addresses the security side:

General computer and network security in the early 21st century simply isn't good enough, categorically, to trust logged, unencrypted communication touching the net to remain safe over time. Slack will get hacked, over and over. (I know Slack uses encryption at rest, but if Slack can access your data, so can a sufficiently motivated actor)... [...]

As Slack continues, likely years into the future, it will be hacked by people engaging in corporate espionage, governmental actors, and talented amateurs. Some of these will be discovered quickly, others will never be discovered at all. Like every other computer system, Slack's employees, no matter how diligent, will never have an easy way to ensure that they aren't compromised. Given enough time, everyone is compromised. Given enough interest, it doesn't take much time.

Considering all these problems, it's hard to see how Slack's use makes anything better. It should go without saying--should, that is--but Slack and the other exemplars of our constant-interruption business culture are completely antithetical to accomplishing anything of depth and complexity. They do, however, serve to make workers stressed enough that we tolerate employers' further encroachments into our personal lives, as they demand more "availability" for no additional compensation. Perhaps that's the point?

Slack

| No Comments | No TrackBacks

New York magazine asks with dismay, what has Slack done to the office? and notes that the messaging app "has essentially ushered employer-sanctioned social media into the workplace:"

Like Facebook or Twitter, Slack induces the same anxious, attention-hungry rhythm in its users, the same need to endlessly refresh, and gives off the same illusion of intimacy in an ultimately public space. It also makes the line between work and not-work blurrier than ever -- the constant scroll of maybe-relevant chatter in your chosen Slack channels registers at times like the background noise of any other newsfeed.

Bloomberg asserts that you're about to hate Slack as much as you hate email, and Giles Turnbull notes in his piece on Slack and email (https://gilest.org/slack-and-email.html) that "Right now, I am a member of 7 different Slack teams [with] a total of 194 channels:"

Of course I don't keep track of all of them, and subscribe to only a fraction - 27 channels across all 7 teams. And I only keep a close eye on 16 of those. But: that's 16 channels that I feel compelled to read. Even if I've not been mentioned, even if none of my highlight words have cropped up anywhere. It's quite likely that something could be said in one of those channels that I will find interesting or useful - but equally likely that I won't be mentioned by name when that happens, because why on earth would anyone do that?

He continues by observing that "my experience of multiple Slack teams and channels is that it's no less overwhelming than an inbox full of email:"

The two experiences - one of opening email and seeing a list of messages, and the other of opening Slack and seeing a list of unread channels - are exactly the same.

What's more, the old criticism of email - that it's a todo list other people have control of - still applies inside Slack. People are still sending me things to do inside it. They're just typing those messages into a different box.

"Slack (or any other chat-based interface) can be just as much work as email ever was," he points out, "and consequently doesn't feel as liberating as some people would argue it is:"

I don't have any answers, and I'm not going to stop using Slack or email. Both are useful. I just wanted to make the point: for me, using Slack might have fractionally lessened the amount of email I have to read, but it hasn't lessened the amount of text-on-screens that I have to read. If anything, that's increased. So it doesn't feel like a problem has been solved - it's just moved to a different app.

"The Slack sell to employers," the piece continues, "is that it decreases the burden of email, because nobody likes email:"

GIFS and emoji are the incentive for employees to use Slack; greater oversight is the incentive for employers to tolerate GIFS and emoji. A company-operated social network might not be something most of us would seek out -- but years of experience have primed us to accept a certain loss of privacy as the price paid for online entertainment or, in this case, entertaining work.

"Slack came into my life in 2014," notes the author, observing that the app "made us spend more time chatting than we ever had before:"

Slack's own employees reportedly adhere to the principle "Work hard, then go home." They have nonetheless created a product that encourages the opposite: "Work half-distractedly, then keep doing that no matter where you go." Slack has made work, like the rest of the internet, a passive addiction.

I am less concerned with potential privacy issues than with the tendency of Slack to become "a compulsion, a distraction. A burden." The author of last year's Atlantic piece on the Slack backlash observes, for example, that "Slack has been transformative for the way I work," although "Slack is not for everyone:"

Some people dislike the platform because it's conceptually like an old-school IRC without being an open protocol. Others have complained that Slack isn't actually an email-killer, as so many have claimed, but just another thing to keep up with on top of email. (The Slack detox [see the Verge piece below], in the grand tradition of people's fraught relationships with the digital tools they use most, is officially A Thing.)

PC Magazine suggests that its audience should read this before ditching email for Slack, and points out that "Slack is free until you hit 10,000 messages and five supported tools. After that, it costs $8 per month per user (or $6.67 when paid annually)." At nearly $100/user/year, Slack is pricey compared to other chat apps--although they're not as trendy, which counts for a lot in some executive technology-selection circles. Justin Glow, a senior director at Vox Media, wrote at The Verge in 2015 about the week he tried to unplug from Slack. "Originally," he writes, "I didn't feel distracted by Slack at all:"

On my phone in particular, I felt the opposite -- like I was benefiting myself and the company by finding small windows of time in strange places to be productive at work. Over the last few months, however, I've found myself impulsively and habitually checking it to catch up on channel activity the same way I used to open Twitter when in line at the grocery store, or any other time I spent in between more meaningful activities. I started to question if I was actually being productive, or if this was just another way to fill a void with information that didn't really matter.

I craved a reset. How critical was Slack to my ability to do my job? Could I still be a productive employee without it? Was the massive amount of time I spent lurking and interacting with fellow co-workers increasing my productivity, or hurting it?

"I was determined to quit using Slack entirely for a full week," he writes, but "Quitting cold turkey, even for a small amount of time, was out of the question." His use of addiction lingo seems warranted, as in this observation: "With the app closed for half of my first day, I had a renewed sense of focus and attacked my to-do list, but my mind was preoccupied with what I was missing in Slack:"

I quickly discovered not being available on Slack gives the impression you're not actually at work and getting things done. During my experiment, it took way too long not to feel self-conscious during the hours I spent with Slack closed -- like this time didn't count, or I might as well have been at the bar -- even though it was some of my most productive in months.

As he concludes, "I will continue to use my own approach to balancing the need to regularly interact and be available on Slack while staying productive and happy in and out of work."

science march

| No Comments | No TrackBacks

Ed Yong writes in The Atlantic on how the science march found its voice:

Scientists are not a group to whom activism comes easily or familiarly. Most have traditionally stayed out of the political sphere, preferring to stick to their research. But for many, this historical detachment ended with the election of Donald Trump.

His administration has denied the reality of climate change, courted anti-vaccine campaigners, repeatedly stated easily disproven falsehoods, attempted to gag government scientists, proposed enormous budget cuts that would "set off a lost generation of American science," and pushed for legislation that would roll back environmental and public health protections, pave the way for genetic discrimination, and displace scientific evidence from the policy-making process. Sensing an assault on many fronts--to their jobs, funds, and to the value of empiricism itself--scientists are grappling with politics to an unprecedented extent.

Politicus USA points out that the science march out-drew Trump's inauguration, noting that "Millions of people are marching today for science" in cities ranging from New York City to Philadelphia to St. Paul, Minnesota--in addition to the main march in DC.

Yong also comments on the "55 consecutive speakers [...] who rallied the crowd behind a smorgasbord of causes." This criticism is often made of liberal protests, but it is unjustified--being under attack on many fronts means that defense must be aimed in many directions. He notes with approval the "610 satellite events taking place around the world" that accompanied the main march in Washington DC, comments on the pun-heavy signage, but also expresses some concern:

The risk that the march would further polarize America's view of science, portraying it as a liberal endeavor and diminishing its objectivity, has plagued the event since its inception.

loopy

| No Comments | No TrackBacks

There's some news on the three-slit experiment front:

Physicists have performed a variation of the famous 200-year-old double-slit experiment that, for the first time, involves "exotic looped trajectories" of photons. These photons travel forward through one slit, then loop around and travel back through another slit, and then sometimes loop around again and travel forward through a third slit.

Will a diagram help to clarify this loopy trajectory?

[image: 20170110-threeslits.jpg -- diagram of the looped three-slit trajectories]

Perhaps not.

Yale News notes that "Gun violence is often described as an epidemic or a public health concern, due to its alarmingly high levels in certain populations in the United States:"

It most often occurs within socially and economically disadvantaged minority urban communities, where rates of gun violence far exceed the national average. A new Yale study has established a model to predict how "contagious" the epidemic really is.

The study, "Modeling Contagion Through Social Networks to Explain and Predict Gunshot Violence in Chicago, 2006 to 2014," conducted "an epidemiological analysis of a social network of individuals who were arrested during an 8-year period in Chicago, Illinois, with connections between people who were arrested together for the same offense:"

Modeling of the spread of gunshot violence over the network was assessed using a probabilistic contagion model that assumed individuals were subject to risks associated with being arrested together, in addition to demographic factors, such as age, sex, and neighborhood residence.

"Social contagion accounted for 63.1% of the 11 123 gunshot violence episodes," the study continues:

...subjects of gun violence were shot on average 125 days after their infector (the person most responsible for exposing the subject to gunshot violence). Some subjects of gun violence were shot more than once. [...]

Gunshot violence follows an epidemic-like process of social contagion that is transmitted through networks of people by social interactions. [...] Contagion via social ties, then, may be a critical mechanism in explaining why neighborhoods matter when modeling the diffusion of crime and, perhaps more important, why certain individuals become subjects of gun violence while others exposed to the same high-risk environments do not.

"We postulated that a person becomes exposed to gun violence through social interactions with previous subjects of gun violence," write the authors:

Therefore, associating with subjects of gun violence, and specifically co-engaging in risky behaviors with them, may expose individuals to these same behaviors, situations, and people that in turn increase the probability of becoming a subject of gun violence. [...]

By identifying high-risk individuals and transmission pathways that might not be detected by other means, a contagion-based approach could detect strategic points of intervention that would enable measures to proactively reduce the trauma associated with gun violence rather than just react to past incidents.

Specifically, the study observed that "more than 70% of all subjects of gun violence could be located in networks containing less than 5% of the city's population."
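
The general mechanism the study describes--elevated risk flowing along co-arrest ties--can be illustrated with a toy contagion process. This is emphatically not the authors' model; the network, hazard rates, and time scale below are invented purely to show the idea:

    import random

    random.seed(1)

    # Hypothetical co-arrest network: person -> set of people arrested with them.
    network = {
        0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1, 4}, 4: {3}, 5: set(),
    }

    BASE_RISK = 0.002       # assumed daily risk from the environment alone
    CONTAGION_RISK = 0.05   # assumed extra daily risk once a co-arrestee has been shot

    def simulate(days=365):
        """Return {person: day they became a subject of gun violence}."""
        shot_on = {}
        for day in range(days):
            for person, neighbors in network.items():
                if person in shot_on:
                    continue
                exposed = any(n in shot_on for n in neighbors)
                risk = BASE_RISK + (CONTAGION_RISK if exposed else 0.0)
                if random.random() < risk:
                    shot_on[person] = day
        return shot_on

    print(simulate())   # who was reached, and how the timing cascades along the ties

Run it a few times with different seeds and the pattern the study describes falls out: people connected to prior subjects get reached far more often, and sooner, than the isolated node.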

BigThink recommends that you get off Facebook:

The big issue with Facebook use is that it offers endless opportunities for social comparison. It turns out that seeing countless exotic vacation photos and reading about the career accomplishments of your friends and acquaintances may make you feel worse about your current status.

Additionally, "The average American Facebook user spends around 50 minutes a day on Facebook:"

That's a significant amount of time. According to the Bureau of Labor Statistics, the average person spends 4 minutes a day on social events, 17 minutes exercising, and 19 minutes reading.

JFC, what a time sink! 300 hours a year?! That's nearly two months of 40-hour work weeks!
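
A quick check of that arithmetic, using the 50-minutes-a-day figure quoted above:

    # 50 minutes a day on Facebook, annualized.
    minutes_per_day = 50
    hours_per_year = minutes_per_day * 365 / 60      # ~304 hours
    work_weeks = hours_per_year / 40                 # ~7.6 forty-hour weeks
    print(round(hours_per_year), round(work_weeks, 1))

That's roughly 304 hours, or about seven and a half standard work weeks--call it two months of 9-to-5.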

The study ("The Facebook Experiment: Quitting Facebook Leads to Higher Levels of Well-Being") points out that "provides causal evidence that Facebook use affects our well-being negatively " as well as the fact that "Most people use Facebook on a daily basis; few are aware of the consequences:"

By comparing the treatment group (participants who took a break from Facebook) with the control group (participants who kept using Facebook), it was demonstrated that taking a break from Facebook has positive effects on the two dimensions of well-being: our life satisfaction increases and our emotions become more positive. Furthermore, it was demonstrated that these effects were significantly greater for heavy Facebook users, passive Facebook users, and users who tend to envy others on Facebook.

The study noted that "two out of three Danish Internet users had an account on Facebook in 2015," and summarized the sample as "86 percent women, geographically residing throughout the country, with an average age of 34 years (SD = 8.74), having an average of 350 Facebook friends and spending a bit over an hour on Facebook daily." The authors then turn to the aggregate effect, noting that "Millions of hours are spent on Facebook each day:"

We are surely better connected now than ever before, but is this new connectedness doing any good to our well-being? According to the present study, the answer is no. In fact, the predominant uses of Facebook--that is, as a means to communicate, gain information about others, and as habitual pastime--are affecting our well-being negatively on several dimensions. First, the present study provides causal evidence that quitting Facebook leads to higher levels of both cognitive and affective well-being.

The caveat is chilling:

The effects presented in this article were generated after just 1 week of absence from a single social network. Future studies should investigate the effects of quitting Facebook for longer periods of time to test if the effects are permanent.

Chris Mooney analyzes NOAA and the global-warming "pause" by looking at what might be "the most controversial climate change study in years:"

...the 2015 paper, led by NOAA's Thomas Karl, employed an update to the agency's influential temperature dataset, and in particular to its record of the planet's ocean temperatures, to suggest that really, the recent period was perfectly consistent with the much longer warming trend.

This consistency has drawn fire by way of "a congressional subpoena from Rep. Lamar Smith, chair of the House Committee on Science:"

That controversy is likely to be stirred anew in the wake of a new study, published Wednesday in Science Advances, that finds the NOAA scientists did the right thing in adjusting their dataset. In particular, the new research suggests that the NOAA scientists correctly adjusted their record of ocean temperatures in light of known biases in some observing systems -- and indeed, that keepers of other top global temperature datasets should do likewise.

"We pretty robustly showed that NOAA got it right," said study author Zeke Hausfather, a Ph.D. student at the University of California-Berkeley and a researcher with Berkeley Earth, a nonprofit consortium that has reanalyzed the Earth's temperatures. "There was no cooking of the books, there's no politically motivated twisting of the data."

In comparing data collected from ships with data from buoys, the two sources disagree slightly--ship measurements tend to run a bit warmer than buoy measurements:

So to better patch together a long term temperature record necessarily reliant on both data sources, NOAA used a "bias correction" to take this into account, and more generally gave greater weight to the buoy data, in updating its dataset.

This highly technical switch, in turn, had the effect of increasing the overall warming of the oceans in the new dataset -- and helping to wipe out claims that there'd been any recent slowdown in the rate of climate change.
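
Conceptually, the adjustment is just a bias correction plus a weighted blend of the two observing systems. The sketch below is a minimal illustration of that idea only--the offset and weight are placeholders, not NOAA's actual values or method:

    # Minimal sketch of merging ship and buoy sea-surface temperatures with a
    # bias correction and a weight favoring buoys. Numbers are illustrative only.
    SHIP_BUOY_OFFSET = 0.12   # assumed offset: ships tend to read warmer than buoys

    def blended_temperature(ship_temps, buoy_temps, buoy_weight=0.9):
        """Blend readings for one grid cell, after putting buoys on the ship scale."""
        corrected_buoys = [t + SHIP_BUOY_OFFSET for t in buoy_temps]
        ship_mean = sum(ship_temps) / len(ship_temps)
        buoy_mean = sum(corrected_buoys) / len(corrected_buoys)
        return buoy_weight * buoy_mean + (1 - buoy_weight) * ship_mean

    # Ships running warm, buoys cooler but more consistent:
    print(blended_temperature([15.3, 15.5], [15.18, 15.22]))   # ~15.33

The point of the exercise: once the growing buoy network is corrected onto a common scale and weighted for its reliability, the apparent "pause" in the blended record goes away.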

changing minds

| No Comments | No TrackBacks

Vox's Brian Resnick discusses a study on changing minds, which he describes as "the hardest challenge in politics right now:"

Psychologists have been circling around a possible reason political beliefs are so stubborn: Partisan identities get tied up in our personal identities. Which would mean that an attack on our strongly held beliefs is an attack on the self. And the brain is built to protect the self.

When we're attacked, we evade or defend -- as if we have an immune system for uncomfortable thoughts, one you can see working in real time.

"The brain's primary responsibility is to take care of the body, to protect the body," Jonas Kaplan, a psychologist at the University of Southern California, tells me. "The psychological self is the brain's extension of that. When our self feels attacked, our [brain is] going to bring to bear the same defenses that it has for protecting the body."

[image: 20161229-beliefchange.jpg -- illustration accompanying the belief-change study]

Thanks to decades of right-wing paranoia and propaganda, even the science of fluoride isn't safe from ideological blindness--remember Jack D. Ripper?

Noah Charney's piece "Your Brain on Art" praises Eric Kandel's book Reductionism in Art and Brain Science, saying that it "offers one of the freshest insights into art history in many years:"

Ask your average person walking down the street what sort of art they find more intimidating, or like less, or don't know what to make of, and they'll point to abstract or minimalist art. Show them traditional, formal, naturalistic art, like Bellini's "Sacred Allegory," art which draws from traditional core Western texts (the Bible, apocrypha, mythology) alongside a Mark Rothko or a Jackson Pollock or a Kazimir Malevich, and they'll retreat into the Bellini, even though it is one of the most puzzling unsolved mysteries of the art world, a riddle of a picture for which not one reasonable solution has ever been put forward. The Pollock, on the other hand, is just a tangle of dripped paint, the Rothko just a color with a bar of another color on top of it, the Malevich is all white.

Kandel offers this explanation:

In abstract painting, elements are included not as visual reproductions of objects, but as references or clues to how we conceptualize objects. In describing the world they see, abstract artists not only dismantle many of the building blocks of bottom-up visual processing by eliminating perspective and holistic depiction, they also nullify some of the premises on which bottom-up processing is based. We scan an abstract painting for links between line segments, for recognizable contours and objects, but in the most fragmented works, such as those by Rothko, our efforts are thwarted.

Thus the reason abstract art poses such an enormous challenge to the beholder is that it teaches us to look at art -- and, in a sense, at the world -- in a new way. Abstract art dares our visual system to interpret an image that is fundamentally different from the kind of images our brain has evolved to reconstruct.

"We like to think of abstraction as a 20th century phenomenon," he writes, but its roots lie far deeper:

A look at ancient art finds it full of abstraction. Most art history books, if they go back far enough, begin with Cycladic figurines (dated to 3300-1100 BC). Abstracted, ghost-like, sort-of-human forms. Even on cave walls, a few lines suggest an animal, or a constellation of blown hand-prints float on a wall in absolute darkness.

Abstract art is where we began, and where we have returned. It makes our brains hurt, but in all the right ways, for abstract art forces us to see, and think, differently.

Enriching, but not merely entertaining--no wonder it's so unpopular.

disappearing data?

| No Comments | No TrackBacks

WaPo's Brady Dennis informs us that "scientists have begun a feverish attempt to copy reams of government data onto independent servers in hopes of safeguarding it from any political interference:"

In recent weeks, President-elect Donald Trump has nominated a growing list of Cabinet members who have questioned the overwhelming scientific consensus around global warming. [...]

Those moves have stoked fears among the scientific community that Trump, who has called the notion of man-made climate change "a hoax" and vowed to reverse environmental policies put in place by President Obama, could try to alter or dismantle parts of the federal government's repository of data on everything from rising sea levels to the number of wildfires in the country.

There is, sadly, historical precedent for just this sort of disappearing data:

Climate data from NASA and the National Oceanic and Atmospheric Administration have been politically vulnerable. When Tom Karl, director of the National Centers for Environmental Information, and his colleagues published a study in 2015 seeking to challenge the idea that there had been a global warming "slowdown" or "pause" during the 2000s, they relied, in significant part, on updates to NOAA's ocean temperature data set, saying the data "do not support the notion of a global warming 'hiatus.'"

In response, the U.S. House Science, Space and Technology Committee chair, Rep. Lamar S. Smith (R-Tex.), tried to subpoena the scientists and their records.

Andrew Dessler, professor of atmospheric sciences at Texas A&M, commented:

"If you can just get rid of the data, you're in a stronger position to argue we should do nothing about climate change."

"strange numbers"

| No Comments | No TrackBacks

Kevin Hartnett's dive into the strange numbers found in particle collisions is a nice read, exploring "a surprising correspondence that has the potential to breathe new life into the venerable Feynman diagram and generate far-reaching insights in both fields:"

It has to do with the strange fact that the values calculated from Feynman diagrams seem to exactly match some of the most important numbers that crop up in a branch of mathematics known as algebraic geometry. These values are called "periods of motives," and there's no obvious reason why the same numbers should appear in both settings. Indeed, it's as strange as it would be if every time you measured a cup of rice, you observed that the number of grains was prime.

Hartnett writes that "mathematicians and physicists are working together to unravel the coincidence:"

For mathematicians, physics has called to their attention a special class of numbers that they'd like to understand: Is there a hidden structure to these periods that occur in physics? What special properties might this class of numbers have? For physicists, the reward of that kind of mathematical understanding would be a new degree of foresight when it comes to anticipating how events will play out in the messy quantum world.

Here's an infographic that might clarify things:

[image: 20161120-potentialshortcut.jpg -- infographic on the Feynman-diagram shortcut]

group selection

| No Comments | No TrackBacks

David S. Wilson writes about Elinor Ostrom and the tragedy of the commons, particularly the path forward from her 1990 book Governing the Commons. "Is the so-called tragedy of the commons," he asks (referencing Garrett Hardin's famed 1968 Science essay), "ever averted in the biological world and might this possibility provide solutions for our own species?"

One plausible scenario is natural selection at the level of groups. A selfish farmer might have an advantage over other farmers in his village, but a village that somehow solved the tragedy of the commons would have a decisive advantage over other villages. Most species are subdivided into local populations at various scales, just as humans are subdivided into villages, cities and nations. If natural selection between groups (favoring cooperation) can successfully oppose natural selection within groups (favoring non-cooperation), then the tragedy of the commons can be averted for humans and non-human species alike.

Rio scale

| No Comments | No TrackBacks

538 analyzes the search for an alien signal, opining that although the work of "alien hunting, commonly referred to as the Search for Extraterrestrial Intelligence, or SETI, is still very much relegated to the sidelines," it's become a "cutting edge" pursuit:

Occasionally, promising signals make their way through the broader astronomical community and into the public eye. A few such claims have made headlines recently, prompting some astronomers to call for a new framework to rank and interpret these signals.

"To help with this," she writes, "astronomers came up with a way to gauge the credibility of a SETI signal, called the Rio scale," where "an answer of 0 is obviously nothing and a 10 is 'wow, aliens are calling':"

Formulated at an astronomical conference in Rio de Janeiro in 2000, it's a 10-point scale intended to help people understand when to take an apparent signal from another world seriously. [...]

RS = Q × δ

In this equation, Q is the sum of numerical values assigned to three parameters: the class of phenomenon, such as whether it's an "obviously Earth-directed message" or a randomly swooping beacon; the type of discovery, like whether it's a steady signal or something that comes and goes; and the distance to the signal. The latter is important because you'd want to know how long it would take for aliens to receive a reply.

Each parameter has a numerical value from 1 to 4, 5, or 6. For instance, "an Earth-specific beacon designed to draw attention" gets a 4. If it's within the galaxy, add 2. If it was a passing signal detected once, it gets another 2. To get your Rio scale value, you multiply this sum by δ, which is a measure of the credibility of the claim. The values for δ go from 0 to 2/3. If it's uncertain but worth checking out, for example, that's 1/6. This quantity depends on experts' opinions, so it's inherently subjective. And unless a signal has been verified repeatedly by SETI experts, the δ value is almost certain to drop its total value to a 2 or a 3.
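
Plugging the article's example values into that formula (a minimal sketch--the scores below are just the numbers quoted above, not the full official Rio scale tables):

    # Rio scale: RS = Q * delta, where Q sums three parameter scores.
    def rio_scale(phenomenon_class, discovery_type, distance, credibility):
        q = phenomenon_class + discovery_type + distance
        return q * credibility

    # "An Earth-specific beacon designed to draw attention" -> 4
    # A passing signal detected once                        -> +2
    # Within the galaxy                                     -> +2
    # "Uncertain but worth checking out"                    -> delta = 1/6
    print(round(rio_scale(4, 2, 2, 1/6), 1))   # ~1.3 on the 0-to-10 scale

So even a fairly dramatic-sounding candidate signal, if unverified, lands near the bottom of the scale--which is the scale's whole purpose.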

coding is not fun

| No Comments | No TrackBacks

Walter Vannini, a digital consultant and researcher, writes that coding is not fun:

Programming computers is a piece of cake. Or so the world's digital-skills gurus would have us believe. From the non-profit Code.org's promise that 'Anybody can learn!' to Apple chief executive Tim Cook's comment that writing code is 'fun and interactive', the art and science of making software is now as accessible as the alphabet.

"Unfortunately," he writes, "this rosy portrait bears no relation to reality:"

For starters, the profile of a programmer's mind is pretty uncommon. As well as being highly analytical and creative, software developers need almost superhuman focus to manage the complexity of their tasks. Manic attention to detail is a must; slovenliness is verboten. Attaining this level of concentration requires a state of mind called being 'in the flow', a quasi-symbiotic relationship between human and machine that improves performance and motivation.

Coding isn't the only job that demands intense focus. But you'd never hear someone say that brain surgery is 'fun', or that structural engineering is 'easy'. When it comes to programming, why do policymakers and technologists pretend otherwise?

Anyone can learn to type a "Hello, world!" program, but the artisans are few--and the artists far fewer. "Insisting on the glamour and fun of coding," he continues, "is the wrong way to acquaint kids with computer science:"

It insults their intelligence and plants the pernicious notion in their heads that you don't need discipline in order to progress. As anyone with even minimal exposure to making software knows, behind a minute of typing lies an hour of study.

It's better to admit that coding is complicated, technically and ethically [and] it's irresponsible to speak of coding as a lightweight activity. Software is not simply lines of code, nor is it blandly technical. In just a few years, understanding programming will be an indispensable part of active citizenship. The idea that coding offers an unproblematic path to social progress and personal enhancement works to the advantage of the growing techno-plutocracy that's insulating itself behind its own technology.

Apple's way out?

| No Comments | No TrackBacks

Salon reports that the DoJ has offered a way out for Apple:

The Obama administration has told a U.S. magistrate judge it would be willing to allow Apple Inc. to retain possession of and later destroy specialized software it has been ordered to design to help the FBI hack into an encrypted iPhone used by the gunman in December's mass shootings in California.

Although billed as "a way out," this is far too slippery a slope for any principled programmer to tread.

At Slate, David Auerbach discusses the Intercept's reporting last week that "Microsoft probably holds a copy of the encryption keys for Windows 10 users' hard drives," which suggests that "Microsoft took the path of least resistance and chose to store the recovery key on the user's OneDrive cloud account:"

The company is clearly trying to catch up to Google, Apple, and Facebook in the user-data race, and so its policies emit a whiff of eminent domain: Even when we aren't looking at your files which we mirror to OneDrive, Microsoft seems to be saying, we are the ones taking care of your data. Once the company's got it, will Microsoft be tempted to ask for a bigger peek, just as Facebook has gradually gotten nosier with its user data? The tenets of capitalism says yes.

The conclusion is bleak:

If privacy matters enough that you want to protect your machine and your data from the eyes of the government and the tech industry, you shouldn't be using Windows 10--or Apple, or Android--in the first place.

