One of the greatest canards espoused by the minions who support the HA (Horse's A$$) in the Oval Office is the pursuit of government deregulation of industry, regardless of consequences. Try to make sense of the recent air tragedies involving the Boeing 737 Max airliners in Indonesia and Ethiopia, and one conclusion is unavoidable: everything was justified for the sake of bloody profit. The managers of The Boeing Company and its Board of Directors should be indicted for manslaughter, and the office-holders in Congress should be impeached and removed from office for their role in deregulating the air-travel industry. Those with bloody hands should be held accountable for the deaths of hundreds of innocent, trusting passengers. If this is a (fair & balanced) demand for overdue justice, so be it.
[x VF]
“It's Not Going To Be Easy”: The Boeing Tragedy Is Just The Beginning Of The Self-Driving Techpochalypse
By Nick Bilton
In 1982, Atomic Energy of Canada Limited [AECL], one of the largest nuclear science and technology companies in Canada, debuted its latest radiation-therapy machine, the Therac-25. The massive contraption promised a breakthrough in the treatment of cancer. AECL’s previous models, the Therac-6 and Therac-20, required a radiotherapy technician to help guide the delivery of powerful X-rays and electrons. The new model, however, relied on computer software alone, increasing the number of patients that hospitals could treat each day. But the Therac-25 was also vulnerable to fatal programming mistakes. Thanks to an engineering error (a race condition in the machine’s control software), the Therac-25 occasionally delivered radiation doses that were hundreds of times more powerful than intended, grievously injuring or killing a half-dozen patients.
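The investigations into the Therac-25 found that a fast typist could outrun the machine: the software accepted an operator’s edits while a slower hardware task was still positioning the beam equipment, so the software’s idea of the machine’s state and the hardware’s actual state could fall out of sync. What follows is a minimal sketch, in Python, of that general failure pattern; the names and structure are hypothetical illustrations, not AECL’s actual code.

import threading
import time

class Machine:
    def __init__(self):
        self.requested_mode = "x-ray"   # what the operator asked for
        self.hardware_mode = None       # what the hardware is actually set to

    def configure_hardware(self):
        # A slow task that moves the beam equipment into position.
        time.sleep(0.5)
        self.hardware_mode = self.requested_mode

    def fire(self):
        # The buggy step: fires based on the operator's request without
        # verifying that the slow hardware task has actually finished.
        print(f"firing {self.requested_mode}; hardware set for {self.hardware_mode}")

machine = Machine()
setup = threading.Thread(target=machine.configure_hardware)
setup.start()
machine.requested_mode = "electron"   # a quick operator edit races the slow setup
machine.fire()                        # fires while the hardware state is stale
setup.join()

Run it and the machine “fires” while the hardware is still unconfigured; in a radiation-therapy device, that mismatch between software state and physical state is precisely the kind of bug that kills.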
For decades, the deaths and injuries associated with the Therac-25, though limited in number, were considered among the most tragic examples of how software engineering can go awry. But it was only a matter of time until that changed. Now, investigators are trying to figure out whether it was bad programming, bad sensors, or a combination of the two that led to two fatal crashes involving brand-new Boeing 737 Max planes, less than five months apart. At first, it appeared that the crashes were unrelated. But in recent days, investigators have uncovered a far more terrifying reality. In both cases, the 737 Max planes experienced an almost identical pattern of erratic altitude changes, rising and falling in short 15-to-20-second bursts as their pilots wrestled to take back control. In both cases, the plane ultimately crashed, killing a total of 346 people. (Boeing has declined to comment in depth “because of the ongoing investigation” into the crashes, but claims there have been “significant mischaracterizations” about its flight-control system, which was approved by the FAA.)
It will likely be months before investigators figure out exactly what caused the crashes. Early reports suggest Boeing rushed its safety assessment of the new control software, which was billed as a way to minimize Max pilot training and cut labor costs. But we already know enough to recognize these tragedies for what they are: a canary in a coal mine. In Silicon Valley, Europe, and China, countless start-ups are racing to engineer the next generation of driverless cars and delivery drones. In the not-too-distant future, we will be living in a society in which every aspect of our lives—from walking our dogs to dropping our kids off at school—will be automated, easier, and totally reliant on software engineers and artificial intelligence.
Software-based accidents are still incredibly rare, Clive Thompson, the author of the new book Coders: The Making of a New Tribe and the Remaking of the World (2019), told me in an interview. And yet, when we consider a future in which our lives are dictated by lines of code, there is vast potential for unforeseen trouble. “If we’re going to have big robot devices zooming around us all day long, we’re going to need rigorous review of the code. That’s clear. We’re moving to a world where FAA-like regulation will reach into lots of newfangled machines,” Thompson told me. “It’s not going to be easy, and there’ll always be risk. Software’s complex, so it’s hard to tell how it’ll behave in everyday life. The FAA already regulates aircraft software pretty tightly, yet we got the Max 8 catastrophe.”
The Sheraton Vancouver Wall Centre Hotel is astounding to behold. A gleaming glass tower that rises 518 feet into Vancouver’s skyline, the building boasts clubs, condominiums, and approximately 55,000 square feet of meeting space for conferences. One of its claims to fame is that it was featured in "X-Men: The Last Stand" as the facility used to cure mutants. But the Sheraton Vancouver could also end up being the place where hackers demonstrate how technology can be turned against us.
In the coming days, hackers and security experts from all over the world will gather in the third-floor ballroom of the North Tower to hack a variety of targets, including a Tesla Model 3. The goals will range from simply taking over the vehicle’s infotainment system or hacking its key fobs (like picking a traditional lock, but with code rather than a lock pick) to more sinister targets, such as taking over the Model 3’s autonomous driving features, giving the hacker total control. If it’s that easy to take a Tesla hostage, what’s to stop a malicious actor from driving the car (and its passengers) right off a bridge?
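To make the key-fob analogy concrete: a fob that broadcasts the same unlock code with every button press can simply be recorded and replayed, which is why modern fobs use “rolling” codes that are only valid once. Here is a toy Python sketch of that defense; the names and the scheme are simplified illustrations, not any carmaker’s actual protocol.

import hashlib
import hmac

SECRET = b"paired-between-fob-and-car"   # hypothetical shared secret

def rolling_code(counter: int) -> bytes:
    # Each button press derives a fresh code from the secret and a counter.
    return hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()

class Car:
    def __init__(self):
        self.last_counter = -1

    def try_unlock(self, counter: int, code: bytes) -> bool:
        valid = hmac.compare_digest(code, rolling_code(counter))
        fresh = counter > self.last_counter   # a replayed code is stale
        if valid and fresh:
            self.last_counter = counter
            return True
        return False

car = Car()
press = (1, rolling_code(1))        # a legitimate button press...
assert car.try_unlock(*press)       # ...unlocks the car once
assert not car.try_unlock(*press)   # ...but an attacker's replay is rejected

The contest’s fob targets are aimed at finding the flaws in real-world versions of schemes like this one.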
While the goal for hackers is to win money (more than $1 million in cash and prizes) and possibly even a Tesla Model 3, the larger objective for the organizers, Pwn2Own, is to prove just how vulnerable, and in turn how dangerous, these technologies can be when left unsecured. (Tesla, which is involved in the contest and has made a car available to the Pwn2Own organizers, is hoping the hackers can help identify software bugs. Elon Musk, who has been vociferous about what could go wrong with A.I. in the future, sees this as an opportunity to improve the security of Tesla’s vehicles.)
In the past, the contest has been relatively innocuous, challenging hackers to break into a Web browser or a computer operating system. This year, for the first time, the contest will entice people to prove that a roughly 4,000-pound vehicle, which can go from 0 to 60 miles per hour in 3.5 seconds, can be taken over by a rogue agent anywhere on the planet. And, as the contest perfectly illustrates, if you can hijack one Tesla, there’s nothing to stop you from repeating the process. From a national-security standpoint, the consequences are horrifying. Why would Russia, China, or Iran waste valuable weapons, or risk the deaths of their own soldiers, to attack America when they could simply smash 180,000 autonomous vehicles into each other? Why would ISIS need to radicalize people living in England, New York, or Paris to drive a truck into crowds of innocent people, when it could simply hack into a car, floor the gas, and point it down a city street?
As we see with so many mass killings around the globe, one lunatic can cause disproportionate carnage. What would happen if that one lunatic had incredible programming skills? What if that person worked for a company like Tesla, and somehow managed to send an over-the-air update without the rest of the company’s knowledge? What if a Boeing 737 Max software bug wasn’t limited to one class of aircraft, but affected every Boeing plane in the sky, causing them all to plummet simultaneously?
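One standard defense against exactly this insider scenario is code signing: a vehicle or aircraft should refuse to install any update whose cryptographic signature cannot be traced back to a guarded release process, so that no single engineer’s laptop can push code to a fleet. Below is a minimal, standard-library Python sketch of that idea; real deployments use asymmetric signatures and multi-party release controls, and every name here is hypothetical.

import hashlib
import hmac

RELEASE_KEY = b"held-by-release-infrastructure"   # not held by any one engineer

def sign_update(firmware: bytes) -> bytes:
    # Performed only inside the guarded release pipeline.
    return hmac.new(RELEASE_KEY, firmware, hashlib.sha256).digest()

def install_update(firmware: bytes, signature: bytes) -> bool:
    # Performed on the vehicle before any code is flashed.
    expected = hmac.new(RELEASE_KEY, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False              # reject anything the pipeline didn't sign
    # ...flash the firmware here...
    return True

official = b"v2019.8 autopilot build"
rogue = b"one insider's malicious build"

assert install_update(official, sign_update(official))   # accepted
assert not install_update(rogue, b"\x00" * 32)           # forged signature rejected

The nightmare scenario above is what happens when that release process itself, not just one update, is compromised.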
For the past few years, Musk has described the Tesla as more akin to a computer than a car. Not only is a Tesla able to update its own software while it’s parked in your driveway, it’s also one of the first cars to be marketed as partially autonomous. Musk insists that fully driverless technology will be available in less than two years. “I think we will be ‘feature complete’ on full self-driving this year, meaning the car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year,” Musk said in a recent podcast interview. “I am certain of that. That is not a question mark.”
We are now repeating the same myopic mistakes that we made during the rise of social media, with potentially more dangerous consequences. Across the globe, companies are racing to move A.I. and automation out of labs and into our driveways, schools, financial markets, and battlefields. And we have no idea whether we’re putting ourselves and our society in imminent danger. Certainly our government is doing nothing to stop or slow the inevitable. Hundreds of billions of dollars in shareholder value are waiting to be unlocked by these technologies, which will allow us to watch Netflix or YouTube videos (and plenty of ads) instead of touching a steering wheel. The question we should be asking is: are we setting ourselves up for a future that we won’t be able to turn off?
Ironically, we might have the Trump administration to thank for mitigating a potential technological catastrophe at scale. As Axios has reported, Donald Trump thinks driverless-car technology is “crazy,” and says he’d never let a computer drive him around. “You know when he’s telling a story, and he does the hand motions,” said a source who has heard Trump talk about hypothetical accidents involving self-driving cars. “He says, ‘Can you imagine: you’re sitting in the back seat, and all of a sudden this car is zigzagging around the corner and you can’t stop the fucking thing?’”
For once, the president has a point. While Trump can’t stop the progression of technology, the government should be working to put in place regulations that would prevent millions of cars from zigzagging all over the place and potentially killing countless people. Because, as we’ve seen time and again, companies like AECL, or Boeing, or Facebook can’t be trusted to regulate themselves. ###
[British-born Nick Bilton is a special correspondent for Vanity Fair. He is also a technology writer for The New York Times' Bits Blog and the author of I Live in the Future & Here's How It Works (2011). See other books by Bilton here. He received a BA (journalism and documentary film) from The New School (NYC) and an MFA (graphic design) from The School of Visual Arts (NYC).]
Copyright © 2019 Vanity Fair/Condé Nast Digital
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © 2019 Sapper's (Fair & Balanced) Rants & Raves