Thursday, August 08, 2019

Roll Over, Dorothy — We're Not Using Pegasus Mail Anymore

Today's essay took this blogger back to his memory of using Pegasus Mail in the early 1990s on the PC network of the Collegium Excellens. From that beginning, the blogger happened upon H-Net, which began as a network of e-mail discussion lists for the ever-growing areas of study in history and the social sciences. By the late 1990s, H-Net had evolved into a network of Web sites, and e-mail was no longer its main focus. In 2002 or 2003, this blogger discovered Blogger software. The rest, as they say, is history. If this is a (fair & balanced) account of communication via computer, so be it.

[x The New Yorker]
Was E-mail A Mistake?
By Cal Newport


[TagCrowd cloud, created at TagCrowd.com, providing a visual summary of the following piece of writing]

The walls of the Central Intelligence Agency’s original headquarters, in Langley, Virginia, contain more than thirty miles of four-inch steel tubing. The tubes were installed in the early nineteen-sixties, as part of an elaborate, vacuum-powered intra-office mail system. Messages, sealed in fibreglass containers, rocketed at thirty feet a second among approximately a hundred and fifty stations spread over eight floors. Senders specified each capsule’s destination by manipulating brass rings at its base; electro-mechanical widgets in the tubes read those settings and routed each capsule toward its destination. At its peak, the system delivered seventy-five hundred messages each day.

According to oral histories maintained by the CIA, employees were saddened when, in the late nineteen-eighties, during an expansion of the headquarters, this steampunk mail system was shut down. Some of them reminisced about the comforting thunk, thunk of the capsules arriving at a station; others worried that internal office communication would become unacceptably slow, or that runners would wear themselves out delivering messages on foot. The agency’s archives contain a photograph of a pin that reads “Save the Tubes.”

The CIA’s tube system is a defining example of one of the major technological movements of the twentieth century: the push to create what communication specialists call “asynchronous messaging” in the workplace. An interaction is said to be synchronous when all parties participate at the same time, while standing in the same room, perhaps, or by telephone. Asynchronous communication, by contrast, doesn’t require the receiver to be present when a message is sent. I can send a message to you whenever I want; you answer it at your leisure.
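
The distinction maps neatly onto a simple data structure: an asynchronous channel is, in essence, a queue. Here is a toy Python sketch, purely illustrative (the message is invented), of a sender depositing a note that the receiver reads at some later, unrelated moment.

from collections import deque

# The in-box is just a queue: the sender deposits a message on
# its own schedule...
inbox = deque()
inbox.append("Can you review the draft?")

# ...and the receiver, at its leisure, drains whatever has piled up.
while inbox:
    message = inbox.popleft()
    print("Received:", message)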

For much of workplace history, collaboration among colleagues was synchronous by default. From Renaissance workshops to the nineteenth-century rooms occupied by Charles Dickens’s Bob Cratchit and Herman Melville’s Bartleby, an office was usually a single space where a few people toiled. Though letter-writing—an asynchronous style of communication—had been a part of commerce for centuries, it was too slow for day-to-day collaboration. For most office work, synchrony ruled.

This status quo was upended by the rise of a new work setting: the large office. In the book Cubed: A Secret History of the Workplace (2014), the critic and New Yorker contributor Nikil Saval writes that this shift took place between 1860, when the US Census counted around seven hundred and fifty thousand people who worked in “professional service,” and 1920, by which time that number had increased to more than four million—a period, Saval writes, in which “business became big business.” The small counting house gave way to edifices such as the Larkin Building, designed in 1903, by Frank Lloyd Wright, which housed eighteen hundred employees, spread over five floors and a basement, and was anchored by a cavernous, light-bathed central atrium. The introduction of office telephone exchanges, in the early twentieth century, helped make such spaces more functional. But coördinating telephone conversations required drawn-out games of secretarial phone tag.

As message slips piled up on office desks, what seemed to be missing was a system of practical asynchronous messaging: a way for me to send you a message when it was convenient for me, and for you to read that message when it was convenient for you, all at speeds less sluggish than that of intra-office mail. If such a system could be built, managers thought, then efficient non-real-time collaboration would become possible: no more missed-call slips, no more waiting for the mail cart. In the emerging age of large offices, practical asynchrony seemed like a productivity silver bullet. This belief motivated investment in projects such as the CIA’s pneumatic-tube network.

Other large office buildings also experimented with pneumatic solutions. But the expense and complexity of these systems rendered them essentially impractical. Then, in the nineteen-eighties, a far more convenient technology arrived, in the form of desktop computers connected through digital networks. As these networks spread, e-mail emerged as the killer app for bringing asynchronous communication to the office. To better understand this shift, I talked to Gloria Mark, a professor at the University of California, Irvine, who studies the impact that computer technology has had on the workplace. “I can show it to you,” she told me, when I asked about the spread of e-mail. She showed me a data table she had constructed, which summarized the results of office-time-use studies from 1965 to 2006. The studies can be divided into two groups: before e-mail and after. In the studies conducted before e-mail, workers spent around forty per cent of their time in “scheduled meetings,” and twenty per cent engaged in “desk work.” In those conducted after e-mail, the percentages are swapped.

With the arrival of practical asynchronous communication, people replaced a significant portion of the interaction that used to unfold in person with on-demand digital messaging, and they haven’t looked back. The Radicati Group, a technology-research firm, now estimates that more than a hundred and twenty-eight billion business e-mails will be sent and received daily in 2019, with the average business user dealing with a hundred and twenty-six messages a day. The domination of asynchronous communication over synchronous collaboration has been so complete that some developers of digital-collaboration tools mock the fact that we ever relied on anything so primitive as in-person meetings. In a blog post called “Asynchronous Communication Is the Future of Work,” the technology marketer Blake Thorne compares synchronous communication to the fax machine: it’s a relic, he writes, that will “puzzle your grandkids” when they look back on how people once worked.

As e-mail was taking over the modern office, researchers in the theory of distributed systems—the subfield in which, as a computer scientist, I specialize—were also studying the trade-offs between synchrony and asynchrony. As it happens, the conclusion they reached was exactly the opposite of the prevailing consensus. They became convinced that synchrony was superior and that spreading communication out over time hindered work rather than enabling it.

The synchrony-versus-asynchrony issue is fundamental to the history of computer science. For the first couple of decades of the digital revolution, programs were designed to run on individual machines. Later, with the development of computer networks, programs were written to be deployed on multiple machines that operated together over a network. Many of these early distributed systems, as they came to be known, were asynchronous by default. In such a system, Machine A could send a message to Machine B, hoping that it would eventually be delivered and processed, but Machine A couldn’t know for sure how long that would take. (If the network was slow, or if Machine B’s processor was busy with other tasks, or if Machine B crashed, it might take a while—or it might not happen at all.) An obvious solution was to engineer synchronous distributed systems. In such a system, communication would be closer to real time, with messages being passed back and forth within tight and predictable time frames. Machines could work together in rounds, with all the loose ends tied up before each round ended.
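
A toy Python sketch, invented here for illustration rather than drawn from any real system, makes the contrast concrete: in the synchronous model, every message is delivered before the next step begins, while in the asynchronous model the network alone decides the order and timing of delivery.

import random

# Synchronous model: computation proceeds in lockstep rounds. Every
# message sent in a round is delivered before the round ends, so each
# machine knows it has heard from every working peer.
def synchronous_round(machines, messages):
    inboxes = {m: [] for m in machines}
    for sender, recipient, payload in messages:
        inboxes[recipient].append((sender, payload))
    return inboxes  # complete before any machine takes its next step

# Asynchronous model: messages arrive one at a time, in an arbitrary
# order and after arbitrary delays, so a machine can never be sure
# whether more messages are still on the way.
def asynchronous_delivery(messages):
    pending = list(messages)
    random.shuffle(pending)  # the network chooses the order...
    while pending:
        yield pending.pop()  # ...and the timing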

A few synchronous distributed systems were built in the nineteen-seventies and -eighties. NASA, for example, developed a computerized aircraft-control system, which relied on multiple computer processors to operate the aircraft’s control surfaces. The system was designed so that, if one processor failed in the extreme conditions of high-altitude flight, the system as a whole could keep functioning—preventing a crash from causing a crash. To simplify the task of writing software that safely implemented this sort of fault tolerance, the processors were connected on a custom timing circuit that kept their operations synchronized to within around fifty microseconds. But these synchronous systems were often costly to build. They required either custom hardware or special software that could precisely organize the processors’ activity. As in the world of workplace communication, synchrony was a more convenient way to communicate, once it was arranged, but arranging it required effort.

It was in the nineteen-eighties that business thinkers and computer scientists began to diverge in their thinking. People in office settings fixated on the overhead required to organize synchronous collaboration. They believed that eliminating this overhead through asynchronous systems would make collaboration more efficient. Computer scientists, meanwhile, came to the opposite conclusion. Investigating asynchronous communication using a mathematical approach known as algorithm theory, they discovered that spreading out communication with unpredictable delays introduced new complexities that were difficult to reduce. While the business world came to see synchrony as an obstacle to overcome, theorists began to realize that it was fundamental for effective collaboration.

A striking computer-science discovery from this period is the difficulty of the so-called consensus problem. Imagine that each machine in a distributed system starts an operation, such as entering a transaction into a database, with an initial preference to either proceed or abort. The goal is for these machines to reach a consensus—either all agreeing to proceed or all agreeing to abort. The simplest solution is for each machine to gather the preferences of its peers and then apply some fixed rule—for example, counting the votes to determine a winner—to decide which preference to adopt. If all the machines gather the same set of votes, they will all adopt the same decision.
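
In code, that fixed rule amounts to a few lines. The following sketch, with invented machine names and an arbitrary tie-breaking choice, assumes the happy case in which every machine's vote arrives:

from collections import Counter

def decide(preferences):
    # preferences: machine id -> 'proceed' or 'abort', one vote each
    tally = Counter(preferences.values())
    # A deterministic tie-break (here, favoring 'abort') guarantees
    # that every machine applying this rule to the same set of votes
    # reaches the same decision.
    return 'proceed' if tally['proceed'] > tally['abort'] else 'abort'

votes = {'A': 'proceed', 'B': 'proceed', 'C': 'abort'}
assert decide(votes) == 'proceed'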

The problem is that some of the computers might crash. If that happens, the rest of the group will end up waiting forever to hear from peers that are no longer operating. In a synchronous system, this issue is easily sidestepped: if you don’t hear from a machine fast enough, you can assume that it has crashed and ignore it going forward. In asynchronous systems, these failures are more problematic. It’s difficult to differentiate between a computer that’s crashed and one that’s delayed. At first, to the engineers who studied this problem, it seemed obvious that, instead of waiting to learn the preference of every machine, one could just wait to hear from most of them. And yet, to the surprise of many people in the field, in a 1985 paper [PDF], three computer scientists—Michael Fischer, Nancy Lynch (my doctoral adviser), and Michael Paterson—proved, through a virtuosic display of mathematical logic, that, in an asynchronous system, no distributed algorithm could guarantee that a consensus would be reached, even if only a single computer crashed.
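
The synchronous escape hatch (treating silence past a known deadline as a crash) can be sketched in a few lines of Python. The request_vote callback and the timeout value below are hypothetical stand-ins for whatever transport a real system would use; in an asynchronous system, no safe value of TIMEOUT exists, which is the crux of the impossibility result.

import time

TIMEOUT = 0.05  # hypothetical known bound on response time, in seconds

def gather_votes(peers, request_vote):
    # request_vote(peer, timeout) is assumed to return the peer's
    # vote, or None if the peer fails to answer before the timeout.
    votes = {}
    deadline = time.monotonic() + TIMEOUT
    for peer in peers:
        remaining = max(deadline - time.monotonic(), 0)
        vote = request_vote(peer, timeout=remaining)
        if vote is not None:
            votes[peer] = vote  # silent peers are presumed crashed
    return votes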

A major implication of research into distributed systems is that, without synchrony, such systems are just too hard for the average programmer to tame. It turns out that asynchrony makes coördination so complicated that it’s almost always worth paying the price required to introduce at least some synchronization. In fact, the fight against asynchrony has played a crucial role in the rise of the Internet age, enabling, among other innovations, huge data centers run by such companies as Amazon, Facebook, and Google, and fault-tolerant distributed databases that reliably process millions of credit-card transactions each day. In 2013, Leslie Lamport, a major figure in the field of distributed systems, was awarded the A. M. Turing Award—the highest distinction in computer science—for his work on algorithms that help synchronize distributed systems. It’s an irony in the history of technology that the development of synchronous distributed computer systems has been used to create a communication style in which we are always out of synch.

Anyone who works in a standard office environment has firsthand experience with the problems that followed the enthusiastic embrace of asynchronous communication. As the distributed-system theorists discovered, shifting away from synchronous interaction makes coördination more complex. The dream of replacing the quick phone call with an even quicker e-mail message didn’t come to fruition; instead, what once could have been resolved in a few minutes on the phone now takes a dozen back-and-forth messages to sort out. With larger groups of people, this increased complexity becomes even more notable. Is an unresponsive colleague just delayed, or is she completely checked out? When has consensus been reached in a group e-mail exchange? Are you, the e-mail recipient, required to respond, or can you stay silent without holding up the decision-making process? Was your point properly understood, or do you now need to clarify with a follow-up message? Office workers pondering these puzzles—the real-life analogues of the theory of distributed systems—now dedicate an increasing amount of time to managing a growing number of never-ending interactions.

Last year, the software company RescueTime gathered and aggregated anonymized computer-usage logs from tens of thousands of people. When its data scientists crunched the numbers, they found that, on average, users were checking e-mail or instant-messenger services like Slack once every six minutes. Not long before, a team led by Gloria Mark, the UC Irvine professor, had installed similar logging software on the computers of employees at a large corporation; the study found that the employees checked their in-boxes an average of seventy-seven times a day. Although we shifted toward asynchronous communication so that we could stop wasting time playing phone tag or arranging meetings, communicating in the workplace has become more onerous than it used to be. Work has become something we do in the small slivers of time that remain amid our Sisyphean skirmishes with our in-boxes.

There’s nothing intrinsically bad about e-mail as a tool. In situations where asynchronous communication is clearly preferable—broadcasting an announcement, say, or delivering a document—e-mails are superior to messengered printouts. The difficulties start when we try to undertake collaborative projects—planning events, developing strategies—asynchronously. In those cases, communication becomes drawn out, even interminable. Both workplace experience and the theory of distributed systems show that, for non-trivial coördination, synchrony usually works better. This doesn’t mean that we should turn back the clock, re-creating the mid-century workplace, with its endlessly ringing phones. The right lesson to draw from distributed-system theory is that useful synchrony often requires structure. For computer scientists, this structure takes the form of smart distributed algorithms. For managers, it takes the form of smarter business processes.

Isolated examples of well-planned, structured synchrony are starting to emerge in the business world. Many of these experiments come from the tech sector (where, perhaps not coincidentally, the ideas behind distributed-system theory are familiar). Recently, the founder and CEO of a publicly traded technology company told me that he spends at most two or three hours a week sending and receiving e-mails; he has replaced most of his asynchronous messaging with a “regular rhythm” of meetings, which allows him to efficiently address issues in real time. “If you keep needing to send me urgent messages, then my assumption is that there’s something broken about the way you’re doing business,” he said.

Similarly, the software-development firm Basecamp now allows employees to set professor-style office hours: if you need to talk to an expert on a given subject, you can sign up for her office hours instead of shooting her an e-mail. “You get that person’s full, undivided attention,” Jason Fried, the company’s co-founder and CEO, said, on the podcast "Curious Minds." “It’s such a calmer way of doing this.” If something is urgent and the expert’s office hours aren’t for another few days, then, Fried explained, “that’s just how it goes.”

At many technology companies, a popular alternative to hyperactive asynchronous messaging is a collaboration framework called Scrum. Teams of programmers using Scrum divide their efforts into “sprints,” each focussed on introducing a related set of features to a piece of software. During these sprints, which last from one to four weeks, the team meets once a day. Everyone gets a chance to speak. Team members describe what they accomplished yesterday and what they’re going to work on today; if they think they’ll need help, they let the right people know. In classic Scrum, colored notes pinned to a board are arranged to publicly reflect these commitments, so that there’s no ambiguity about the plan. These meetings are often held standing up, so that no one feels tempted to bloviate, and they typically last for around fifteen minutes. The idea that a quarter of an hour of structured synchrony is enough time to enable a full day of work might sound preposterous, but, for more than twelve million software developers, it seems to be working. Many people are surprised when they first learn about the effectiveness of Scrum, which suggests that many of us are underestimating the value of synchrony: when organized properly, it’s more powerful than we realize.

We can acknowledge, with the benefit of hindsight, the reasonableness of the hypothesis that asynchrony in the office would increase productivity. We can also admit that this hypothesis has been largely refuted by experience. To use the terminology of computer science, it turned out that the distributed systems that resulted when we shifted toward asynchronous communication were soon overwhelmed by the increasing complexity induced by asynchrony. We must, therefore, develop better systems—ones that will almost certainly involve less ad-hoc messaging and more real-time coördination.

From this perspective, our moment in workplace history looks rather different. The era that will mystify our grandkids is ours—a period when, caught up in the promise of asynchronicity, we frantically checked our in-boxes every few minutes, exhausted by the deluge of complex and ambiguous messages, while applauding ourselves for eliminating the need to speak face to face. ###

[Cal Newport is a computer science professor at Georgetown University (DC) who studies the theory of distributed systems. In addition to his academic work, he writes about the intersection of technology and culture in his blog, Study Hacks. Newport is the author of six books, including, most recently, Digital Minimalism: Choosing a Focused Life in a Noisy World (2019). See other Newport books here. His work has been published in over 25 languages and has been featured in many major publications, including the New York Times, Wall Street Journal, New Yorker, Washington Post, and Economist. Newport received an AB (computer science) from Dartmouth College (NH) and both an MS and a PhD (computer science) from the Massachusetts Institute of Technology (MIT).]

Copyright © 2019 Condé Nast Digital



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2019 Sapper's (Fair & Balanced) Rants & Raves