Topic: Why Technology Favors Tyranny  (Read 297 times)

Offline Surly1

  • Administrator
  • Master Chef
  • *****
  • Posts: 14541
    • View Profile
    • Doomstead Diner
Why Technology Favors Tyranny
« on: September 09, 2018, 04:49:00 AM »
Why Technology Favors Tyranny

Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality.



I. The Growing Fear of Irrelevance

There is nothing inevitable about democracy. For all the success that democracies have had over the past century or more, they are blips in history. Monarchies, oligarchies, and other forms of authoritarian rule have been far more common modes of human governance.

The emergence of liberal democracies is associated with ideals of liberty and equality that may seem self-evident and irreversible. But these ideals are far more fragile than we believe. Their success in the 20th century depended on unique technological conditions that may prove ephemeral.

In the second decade of the 21st century, liberalism has begun to lose credibility. Questions about the ability of liberal democracy to provide for the middle class have grown louder; politics have grown more tribal; and in more and more countries, leaders are showing a penchant for demagoguery and autocracy. The causes of this political shift are complex, but they appear to be intertwined with current technological developments. The technology that favored democracy is changing, and as artificial intelligence develops, it might change further.

Information technology is continuing to leap forward; biotechnology is beginning to provide a window into our inner lives—our emotions, thoughts, and choices. Together, infotech and biotech will create unprecedented upheavals in human society, eroding human agency and, possibly, subverting human desires. Under such conditions, liberal democracy and free-market economics might become obsolete.

Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman). He looked at the propaganda posters—which typically depicted coal miners and steelworkers in heroic poses—and saw himself there: “I am in that poster! I am the hero of the future!”

In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences—globalization, blockchain, genetic engineering, AI, machine learning—and common people, both men and women, may well suspect that none of these terms is about them.

In the 20th century, the masses revolted against exploitation and sought to translate their vital role in the economy into political power. Now the masses fear irrelevance, and they are frantic to use their remaining political power before it is too late. Brexit and the rise of Donald Trump may therefore demonstrate a trajectory opposite to that of traditional socialist revolutions. The Russian, Chinese, and Cuban revolutions were made by people who were vital to the economy but lacked political power; in 2016, Trump and Brexit were supported by many people who still enjoyed political power but feared they were losing their economic worth. Perhaps in the 21st century, populist revolts will be staged not against an economic elite that exploits people but against an economic elite that does not need them anymore. This may well be a losing battle. It is much harder to struggle against irrelevance than against exploitation.

The revolutions in information technology and biotechnology are still in their infancy, and the extent to which they are responsible for the current crisis of liberalism is debatable. Most people in Birmingham, Istanbul, St. Petersburg, and Mumbai are only dimly aware, if they are aware at all, of the rise of AI and its potential impact on their lives. It is undoubtable, however, that the technological revolutions now gathering momentum will in the next few decades confront humankind with the hardest trials it has yet encountered.

II. A New Useless Class?

Let’s start with jobs and incomes, because whatever liberal democracy’s philosophical appeal, it has gained strength in no small part thanks to a practical advantage: The decentralized approach to decision making that is characteristic of liberalism—in both politics and economics—has allowed liberal democracies to outcompete other states, and to deliver rising affluence to their people.

Liberalism reconciled the proletariat with the bourgeoisie, the faithful with atheists, natives with immigrants, and Europeans with Asians by promising everybody a larger slice of the pie. With a constantly growing pie, that was possible. And the pie may well keep growing. However, economic growth may not solve social problems that are now being created by technological disruption, because such growth is increasingly predicated on the invention of more and more disruptive technologies.

Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills. And we don’t know of any third kind of skill—beyond the manual and the cognitive—in which humans will always have an edge.

At least for a few more decades, human intelligence is likely to far exceed computer intelligence in numerous fields. Hence as computers take over more routine cognitive jobs, new creative jobs for humans will continue to appear. Many of these new jobs will probably depend on cooperation rather than competition between humans and AI. Human-AI teams will likely prove superior not just to humans, but also to computers working on their own.

However, most of the new jobs will presumably demand high levels of expertise and ingenuity, and therefore may not provide an answer to the problem of unemployed unskilled laborers, or workers employable only at extremely low wages. Moreover, as AI continues to improve, even jobs that demand high intelligence and creativity might gradually disappear. The world of chess serves as an example of where things might be heading. For several years after IBM’s computer Deep Blue defeated Garry Kasparov in 1997, human chess players still flourished; AI was used to train human prodigies, and teams composed of humans plus computers proved superior to computers playing alone.

Yet in recent years, computers have become so good at playing chess that their human collaborators have lost their value and might soon become entirely irrelevant. On December 6, 2017, another crucial milestone was reached when Google’s AlphaZero program defeated the Stockfish 8 program. Stockfish 8 had won a world computer chess championship in 2016. It had access to centuries of accumulated human experience in chess, as well as decades of computer experience. By contrast, AlphaZero had not been taught any chess strategies by its human creators—not even standard openings. Rather, it used the latest machine-learning principles to teach itself chess by playing against itself. Nevertheless, out of 100 games that the novice AlphaZero played against Stockfish 8, AlphaZero won 28 and tied 72—it didn’t lose once. Since AlphaZero had learned nothing from any human, many of its winning moves and strategies seemed unconventional to the human eye. They could be described as creative, if not downright genius.
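To make the self-play idea concrete, here is a minimal toy sketch, assuming nothing about AlphaZero's actual code: a tabular learner that gets better at a trivial take-away game purely by playing games against itself and reinforcing whatever the winning side did. AlphaZero does something analogous with a deep neural network and Monte Carlo tree search at a vastly larger scale.

Code:
# Toy self-play learner for the "take 1-3 stones from 21, last stone wins" game.
# Purely illustrative; this is not AlphaZero's algorithm, just the same idea of
# improving with no teacher other than games against oneself.
import random
from collections import defaultdict

WINS = defaultdict(lambda: defaultdict(int))    # state -> move -> games won
PLAYS = defaultdict(lambda: defaultdict(int))   # state -> move -> times tried

def choose(pile, explore=0.1):
    """Pick how many stones to take, mostly greedily by observed win rate."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)
    def win_rate(m):
        n = PLAYS[pile][m]
        return WINS[pile][m] / n if n else 0.5
    return max(moves, key=win_rate)

def self_play_game():
    pile, player, history = 21, 0, []
    while pile > 0:
        move = choose(pile)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player                       # whoever took the last stone
    for who, state, move in history:          # reinforce the winner's choices
        PLAYS[state][move] += 1
        if who == winner:
            WINS[state][move] += 1

for _ in range(20000):
    self_play_game()

# The learned policy tends toward the known optimum: leave a multiple of 4.
print({pile: choose(pile, explore=0.0) for pile in range(1, 22)})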

Can you guess how long AlphaZero spent learning chess from scratch, preparing for the match against Stockfish 8, and developing its genius instincts? Four hours. For centuries, chess was considered one of the crowning glories of human intelligence. AlphaZero went from utter ignorance to creative mastery in four hours, without the help of any human guide.

AlphaZero is not the only imaginative software out there. One of the ways to catch cheaters in chess tournaments today is to monitor the level of originality that players exhibit. If they play an exceptionally creative move, the judges will often suspect that it could not possibly be a human move—it must be a computer move. At least in chess, creativity is already considered to be the trademark of computers rather than humans! So if chess is our canary in the coal mine, we have been duly warned that the canary is dying. What is happening today to human-AI teams in chess might happen down the road to human-AI teams in policing, medicine, banking, and many other fields.

What’s more, AI enjoys uniquely nonhuman abilities, which makes the difference between AI and a human worker one of kind rather than merely of degree. Two particularly important nonhuman abilities that AI possesses are connectivity and updatability.

For example, many drivers are unfamiliar with all the changing traffic regulations on the roads they drive, and they often violate them. In addition, since every driver is a singular entity, when two vehicles approach the same intersection, the drivers sometimes miscommunicate their intentions and collide. Self-driving cars, by contrast, will know all the traffic regulations and never disobey them on purpose, and they could all be connected to one another. When two such vehicles approach the same junction, they won’t really be two separate entities, but part of a single algorithm. The chances that they might miscommunicate and collide will therefore be far smaller.

Similarly, if the World Health Organization identifies a new disease, or if a laboratory produces a new medicine, it can’t immediately update all the human doctors in the world. Yet even if you had billions of AI doctors in the world—each monitoring the health of a single human being—you could still update all of them within a split second, and they could all communicate to one another their assessments of the new disease or medicine. These potential advantages of connectivity and updatability are so huge that at least in some lines of work, it might make sense to replace all humans with computers, even if individually some humans still do a better job than the machines.
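The practical force of connectivity and updatability is easy to show in a sketch. Nothing below corresponds to any real medical system; it is a minimal illustration, with invented names, of a fleet of software agents that all consult one shared, versioned knowledge base, so a single central change reaches every instance at once.

Code:
# Minimal sketch of "updatability": many agent instances consult one shared,
# versioned knowledge object, so one central update is visible to all of them
# immediately. All names and treatments here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SharedGuidelines:
    version: int = 1
    treatments: dict = field(default_factory=lambda: {"flu": "rest and fluids"})

    def update(self, disease, treatment):
        self.treatments[disease] = treatment
        self.version += 1                     # one change at the center...

class AIDoctor:
    def __init__(self, patient, guidelines):
        self.patient = patient
        self.guidelines = guidelines          # a reference, not a private copy

    def advise(self, disease):
        plan = self.guidelines.treatments.get(disease, "no guidance yet")
        return f"{self.patient}: {plan} (guidelines v{self.guidelines.version})"

guidelines = SharedGuidelines()
fleet = [AIDoctor(f"patient-{i}", guidelines) for i in range(100_000)]

print(fleet[0].advise("new-disease"))          # before: "no guidance yet"
guidelines.update("new-disease", "antiviral X within 48 hours")
print(fleet[99_999].advise("new-disease"))     # ...is instantly visible to every instance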

All of this leads to one very important conclusion: The automation revolution will not consist of a single watershed event, after which the job market will settle into some new equilibrium. Rather, it will be a cascade of ever bigger disruptions. Old jobs will disappear and new jobs will emerge, but the new jobs will also rapidly change and vanish. People will need to retrain and reinvent themselves not just once, but many times.

Just as in the 20th century governments established massive education systems for young people, in the 21st century they will need to establish massive reeducation systems for adults. But will that be enough? Change is always stressful, and the hectic world of the early 21st century has produced a global epidemic of stress. As job volatility increases, will people be able to cope? By 2050, a useless class might emerge, the result not only of a shortage of jobs or a lack of relevant education but also of insufficient mental stamina to continue learning new skills.

III. The Rise of Digital Dictatorships

As many people lose their economic value, they might also come to lose their political power. The same technologies that might make billions of people economically irrelevant might also make them easier to monitor and control.

AI frightens many people because they don’t trust it to remain obedient. Science fiction makes much of the possibility that computers or robots will develop consciousness—and shortly thereafter will try to kill all humans. But there is no particular reason to believe that AI will develop consciousness as it becomes more intelligent. We should instead fear AI because it will probably always obey its human masters, and never rebel. AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.

Consider surveillance. Numerous countries around the world, including several democracies, are busy building unprecedented systems of surveillance. For example, Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats. The Palestinians may administer some towns and villages in the West Bank, but the Israelis command the sky, the airwaves, and cyberspace. It therefore takes surprisingly few Israeli soldiers to effectively control the roughly 2.5 million Palestinians who live in the West Bank.

In one incident in October 2017, a Palestinian laborer posted to his private Facebook account a picture of himself in his workplace, alongside a bulldozer. Adjacent to the image he wrote, “Good morning!” A Facebook translation algorithm made a small error when transliterating the Arabic letters. Instead of Ysabechhum (which means “Good morning”), the algorithm identified the letters as Ydbachhum (which means “Hurt them”). Suspecting that the man might be a terrorist intending to use a bulldozer to run people over, Israeli security forces swiftly arrested him. They released him after they realized that the algorithm had made a mistake. Even so, the offending Facebook post was taken down—you can never be too careful. What Palestinians are experiencing today in the West Bank may be just a primitive preview of what billions of people will eventually experience all over the planet.

Imagine, for instance, that the current regime in North Korea gained a more advanced version of this sort of technology in the future. North Koreans might be required to wear a biometric bracelet that monitors everything they do and say, as well as their blood pressure and brain activity. Using the growing understanding of the human brain and drawing on the immense powers of machine learning, the North Korean government might eventually be able to gauge what each and every citizen is thinking at each and every moment. If a North Korean looked at a picture of Kim Jong Un and the biometric sensors picked up telltale signs of anger (higher blood pressure, increased activity in the amygdala), that person could be in the gulag the next day.

And yet such hard-edged tactics may not prove necessary, at least much of the time. A facade of free choice and free voting may remain in place in some countries, even as the public exerts less and less actual control. To be sure, attempts to manipulate voters’ feelings are not new. But once somebody (whether in San Francisco or Beijing or Moscow) gains the technological ability to manipulate the human heart—reliably, cheaply, and at scale—democratic politics will mutate into an emotional puppet show.

We are unlikely to face a rebellion of sentient machines in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something—be it a car, a politician, or an entire ideology. The bots might identify our deepest fears, hatreds, and cravings and use them against us. We have already been given a foretaste of this in recent elections and referendums across the world, when hackers learned how to manipulate individual voters by analyzing data about them and exploiting their prejudices. While science-fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we may be facing a banal apocalypse by clicking.

The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. Historically, autocracies have faced crippling handicaps in regard to innovation and economic growth. In the late 20th century, democracies usually outperformed dictatorships, because they were far better at processing information. We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. This is one reason the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy.

However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.
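The claim that machine learning works better with more information is an empirical one, but the direction of the effect shows up even on a toy problem: the same algorithm, trained on progressively larger slices of one synthetic population and scored on the same held-out test set. The sketch below assumes scikit-learn is installed; on a toy task the gains flatten quickly, whereas the argument here is that for harder tasks and richer models the curve keeps rising much longer.

Code:
# Toy demonstration of the scale effect: an identical algorithm trained on
# more of the population generalizes better. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=120_000, n_features=200, n_informative=30,
                           flip_y=0.05, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=20_000,
                                                  random_state=0)

for n in (1_000, 10_000, 100_000):     # "partial" data vs. "everything"
    model = LogisticRegression(max_iter=2000).fit(X_pool[:n], y_pool[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>7,} records -> held-out accuracy {acc:.3f}")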

[Illustration: Yoshi Sodeoka]

New technologies will continue to emerge, of course, and some of them may encourage the distribution rather than the concentration of information and power. Blockchain technology, and the use of cryptocurrencies enabled by it, is currently touted as a possible counterweight to centralized power. But blockchain technology is still in the embryonic stage, and we don’t yet know whether it will indeed counterbalance the centralizing tendencies of AI. Remember that the Internet, too, was hyped in its early days as a libertarian panacea that would free people from all centralized systems—but is now poised to make centralized authority more powerful than ever.

IV. The Transfer of Authority to Machines

Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines. We might willingly give up more and more authority over our lives because we will learn from experience to trust the algorithms more than our own feelings, eventually losing our ability to make many decisions for ourselves. Just think of the way that, within a mere two decades, billions of people have come to entrust Google’s search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space. People ask Google not just to find information but also to guide them around. Self-driving cars and AI physicians would represent further erosion: While these innovations would put truckers and human doctors out of work, their larger import lies in the continuing transfer of authority and responsibility to machines.

Humans are used to thinking about life as a drama of decision making. Liberal democracy and free-market capitalism see the individual as an autonomous agent constantly making choices about the world. Works of art—be they Shakespeare plays, Jane Austen novels, or cheesy Hollywood comedies—usually revolve around the hero having to make some crucial decision. To be or not to be? To listen to my wife and kill King Duncan, or listen to my conscience and spare him? To marry Mr. Collins or Mr. Darcy? Christian and Muslim theology similarly focus on the drama of decision making, arguing that everlasting salvation depends on making the right choice.

What will happen to this view of life as we rely on AI to make ever more decisions for us? Even now we trust Netflix to recommend movies and Spotify to pick music we’ll like. But why should AI’s helpfulness stop there?

Every year millions of college students need to decide what to study. This is a very important and difficult decision, made under pressure from parents, friends, and professors who have varying interests and opinions. It is also influenced by students’ own individual fears and fantasies, which are themselves shaped by movies, novels, and advertising campaigns. Complicating matters, a given student does not really know what it takes to succeed in a given profession, and doesn’t necessarily have a realistic sense of his or her own strengths and weaknesses.

It’s not so hard to see how AI could one day make better decisions than we do about careers, and perhaps even about relationships. But once we begin to count on AI to decide what to study, where to work, and whom to date or even marry, human life will cease to be a drama of decision making, and our conception of life will need to change. Democratic elections and free markets might cease to make sense. So might most religions and works of art. Imagine Anna Karenina taking out her smartphone and asking Siri whether she should stay married to Karenin or elope with the dashing Count Vronsky. Or imagine your favorite Shakespeare play with all the crucial decisions made by a Google algorithm. Hamlet and Macbeth would have much more comfortable lives, but what kind of lives would those be? Do we have models for making sense of such lives?

Can parliaments and political parties overcome these challenges and forestall the darker scenarios? At the current moment this does not seem likely. Technological disruption is not even a leading item on the political agenda. During the 2016 U.S. presidential race, the main reference to disruptive technology concerned Hillary Clinton’s email debacle, and despite all the talk about job loss, neither candidate directly addressed the potential impact of automation. Donald Trump warned voters that Mexicans would take their jobs, and that the U.S. should therefore build a wall on its southern border. He never warned voters that algorithms would take their jobs, nor did he suggest building a firewall around California.

So what should we do?

For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated. If we invest too much in AI and too little in developing the human mind, the very sophisticated artificial intelligence of computers might serve only to empower the natural stupidity of humans, and to nurture our worst (but also, perhaps, most powerful) impulses, among them greed and hatred. To avoid such an outcome, for every dollar and every minute we invest in improving AI, we would be wise to invest a dollar and a minute in exploring and developing human consciousness.

More practically, and more immediately, if we want to prevent the concentration of all wealth and power in the hands of a small elite, we must regulate the ownership of data. In ancient times, land was the most important asset, so politics was a struggle to control land. In the modern era, machines and factories became more important than land, so political struggles focused on controlling these vital means of production. In the 21st century, data will eclipse both land and machinery as the most important asset, so politics will be a struggle to control data’s flow.

Unfortunately, we don’t have much experience in regulating the ownership of data, which is inherently a far more difficult task than regulating land or machines. Data are everywhere and nowhere at the same time, they can move at the speed of light, and you can create as many copies of them as you want. Do the data collected about my DNA, my brain, and my life belong to me, or to the government, or to a corporation, or to the human collective?

The race to accumulate data is already on, and is currently headed by giants such as Google and Facebook and, in China, Baidu and Tencent. So far, many of these companies have acted as “attention merchants”—they capture our attention by providing us with free information, services, and entertainment, and then they resell our attention to advertisers. Yet their true business isn’t merely selling ads. Rather, by capturing our attention they manage to accumulate immense amounts of data about us, which are worth more than any advertising revenue. We aren’t their customers—we are their product.

Ordinary people will find it very difficult to resist this process. At present, many of us are happy to give away our most valuable asset—our personal data—in exchange for free email services and funny cat videos. But if, later on, ordinary people decide to try to block the flow of data, they are likely to have trouble doing so, especially as they may have come to rely on the network to help them make decisions, and even for their health and physical survival.

Nationalization of data by governments could offer one solution; it would certainly curb the power of big corporations. But history suggests that we are not necessarily better off in the hands of overmighty governments. So we had better call upon our scientists, our philosophers, our lawyers, and even our poets to turn their attention to this big question: How do you regulate the ownership of data?

Currently, humans risk becoming similar to domesticated animals. We have bred docile cows that produce enormous amounts of milk but are otherwise far inferior to their wild ancestors. They are less agile, less curious, and less resourceful. We are now creating tame humans who produce enormous amounts of data and function as efficient chips in a huge data-processing mechanism, but they hardly maximize their human potential. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.

If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing. These will not be easy tasks. But achieving them may be the best safeguard of democracy.

"It is difficult to write a paradiso when all the superficial indications are that you ought to write an apocalypse." -Ezra Pound

Offline K-Dog

  • Administrator
  • Sous Chef
  • *****
  • Posts: 2713
    • View Profile
    • K-Dog
Re: Why Technology Favors Tyranny
« Reply #1 on: September 09, 2018, 11:22:52 AM »
Why Technology Favors Tyranny.  I read the whole thing, and it was hard to deal with the mental bubblings of someone who stayed up late the previous night watching reruns of Star Trek: The Next Generation and still dreams of the Borg while earning their salary banging on a keyboard.  In other words, criminal activity.  I was tricked into suffering from their bad behavior.

I imagined the computer playing chess with itself, masturbating its neural-net weights and pruning connections to become a chess master, and realized this article was written in exactly the same way.  By an algorithm.  Something like this.  Take the science section of one daily newspaper or magazine for a year, and from another take the political section.  Cut out all the complete sentences into individual strips, like loquacious fortune cookies.  Put them together in random ways, and every time a pair makes sense, put it aside.  Make about 2,000 pairs total.  Now do the same thing and pair the pairs.  Now take each set of four sentences and collect them into piles of related concepts, expressions, and thoughts.  Now write an article using only the connections made by the fortune-cookie paired pairs.

Mission accomplished, deadline made, and there are enough big words in the impressive-looking article to choke a unicorn.  The mere mention of your article makes people say they know about it and that they think it is 'really good'.  It's a virtual technological semaphore, and the tech-porn gene in the center of your readers' brains is stroked to ecstasy.

Blockchain baby, oh yeah!

Now do exactly the same thing and write an article titled Why Technology Does Not Favor Tyranny.  Reconsider all the paired pairs of thoughts, impressions, and concepts rejected the first time because they did not fit in the first article.  They might fit here.

At the end of the day, separate the Technology and Tyranny/Democracy dynamic from your damn mind.  The connection between the two is fickle, and you have better things to do.

Technology does not kill, people do, and I would say to Yuval(*) that radical Zionism is a sin that can't be blamed on technology.

* Yuval Noah Harari wrote the article originally.
« Last Edit: September 09, 2018, 11:42:38 AM by K-Dog »
Under ideal conditions of temperature and pressure the organism will grow without limit.

Offline K-Dog

  • Administrator
  • Sous Chef
  • *****
  • Posts: 2713
    • View Profile
    • K-Dog
Re: Why Technology Favors Tyranny
« Reply #2 on: September 09, 2018, 11:59:37 AM »
Yuval linked to this.  I'll comment after I finish reading it.  I took a break to get it here. 

How Israel Became a Hub for Surveillance Technology

Alex Kane

https://theintercept.com/2016/10/17/how-israel-became-a-hub-for-surveillance-technology/

In 1948, the year Israel was founded, the Mer Group was established as a metal workshop.

Today it’s a much different company. It operates a dozen subsidiaries and employs 1,200 people in over 40 countries, selling wireless infrastructure, software for public transit ticketing systems, wastewater treatment, and more. But at the ISDEF Expo, an event held last June to show off Israeli technology to potential buyers from foreign security forces, the Mer Group’s representatives were only promoting one thing: surveillance products sold by the company’s security division.

The Mer Group’s evolution from cutting metal to electronic snooping reflects a larger shift in the Israeli economy. Technology is one of the main sectors in Israeli industry. And Israeli firms with ties to intelligence, like the Mer Group, are using their expertise to market themselves internationally. The company’s CEO, Nir Lempert, is a 22-year veteran of Unit 8200, the Israeli intelligence unit often compared to the National Security Agency, and is chairman of the unit’s alumni association. The Mer Group’s ties to Unit 8200 are hardly unique in Israel, where the cyber sector has become an integral aspect of the Israeli economy, exporting $6 billion worth of products and services in 2014.

When drafted into the army, Israel’s smartest youth are steered toward the intelligence unit and taught how to spy, hack, and create offensive cyberweapons. Unit 8200 and the National Security Agency reportedly developed the cyberweapon that attacked Iranian computers running the country’s nuclear program, and Unit 8200 engages in mass surveillance in the occupied Palestinian territories, according to veterans of the military intelligence branch.

Increasingly, the skills developed by spying and waging cyberwarfare don’t stay in the military. Unit 8200 is a feeder school to the private surveillance industry in Israel, the self-proclaimed “startup nation” — and the products those intelligence veterans create are sold to governments around the world to spy on people. While the companies that Unit 8200 veterans run say their technologies are essential to keeping people safe, privacy advocates warn their products undermine civil liberties.

In August, Privacy International, a watchdog group that investigates government surveillance, released a report on the global surveillance industry. The group identified 27 Israeli surveillance companies — the highest number per capita of any country in the world. (The United States leads the world in sheer number of surveillance companies: 122.) Unit 8200 veterans either founded or occupy high-level positions in at least eight of the Israeli surveillance companies named by Privacy International, according to publicly available information. And that list doesn’t include companies like Narus, which was founded by Israeli veterans of Unit 8200 but is now owned by Boeing, the American defense contractor. (Privacy International categorized Narus as an American company because it’s headquartered in California.) Narus technology helped AT&T collect internet traffic and billions of emails and forward that information to the National Security Agency, according to reporting in Wired magazine and documents from the Snowden archive.

“It is alarming that surveillance capabilities developed in some of the world’s most advanced spying agencies are being packaged and exported around the world for profit,” said Edin Omanovic, a research officer at Privacy International. “The proliferation of such intrusive surveillance capabilities is extremely dangerous and poses a real and fundamental threat to human rights and democratization.”



A poster calling for the destruction of CCTV cameras is posted on a column at the Al-Aqsa Mosque compound in Jerusalem in front of the Dome of the Rock, April 8, 2016.  Photo: Ahmad Gharabli/AFP/Getty Images

Today, Amit Meyer is a journalist, an unusual career path for a veteran of Unit 8200. Many of his colleagues have taken the skills in intelligence collection and hacking they learned in the military and monetized them in the private sector. Unit 8200 is a “brand name” in Israel, a celebrated institution that allows members easy access to tech companies after their service, said Meyer. Sometimes technology companies approach alumni of the unit; other times alumni recommend one another. There’s a secret Facebook group for alumni filled with job offers at tech companies, Meyer said. “In many cases you just put Unit 8200 in your CV, and magic happens,” he told The Intercept.

Neve Gordon, an Israeli scholar who has studied the country’s homeland security industry, explained that Israel’s prominence in the surveillance industry stems from the close links between the Israel Defense Forces and the technology sector. In 1960, the Israeli military was developing computer software — nine years before the Israeli software industry and university computer science programs even existed. Israeli military units that work with computers, including Unit 8200, have become a “conveyor belt” toward Israel’s military and homeland security industry, said Gordon.

Gordon said there are two other reasons why Israel plays such an outsize role in the global surveillance industry. One is that there are “hardly any” legal limits on veterans “taking certain research ideas they worked on in the military and developing them” in the private sector. In addition, said Gordon, Israel’s decadeslong occupation of the West Bank, Gaza, and East Jerusalem, along with its periodic wars, “provides a laboratory for testing and fine tuning different commodities that are created, or different technologies.”

Those technologies are then exported around the world.

Mer Security is one of the companies exporting spy products. It is well-known in the country’s security circles; it won an Israeli police contract in 1999 to establish “Mabat 2000,” which set up hundreds of cameras in Jerusalem’s Old City, a flashpoint of tensions in the occupied area. In an interview with the Israel Gateway magazine, a trade publication, Haim Mer, chairman of the company’s board and also a Unit 8200 veteran, explained that “the police needed a system in which ‘Big Brother’ would control and would allow for an overall view of events in the Old City area.”

At the ISDEF Expo, Eyal Raz, the product director for Mer Security, told The Intercept about what “Israel’s greatest security minds,” as a company brochure puts it, have created. Raz was showcasing Open Source Collection Analysis and Response, known as OSCAR, which trawls through the internet and social media platforms and promises to uncover hidden connections from the data OSCAR collects and monitors.

Another product, called Strategic Actionable Intelligence Platform, or SAIP, takes that data in and groups it together. To pinpoint “actionable intelligence,” SAIP uses technology that can highlight words, sentences, and information that might interest intelligence officers. These types of language analysis tools are increasingly popular with intelligence services around the world as a tool for pinpointing the next threat. These products claim to “understand what it’s reading,” as Raz said. For instance, a list of chemicals included in a paragraph may seem innocuous to the layperson, but the language analysis machine can recognize that the person is talking about making an explosive device, Raz said.
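The vendors claim the software "understands" what it reads, but the basic mechanic being described, scoring sentences against watchlists of terms and surfacing the hits for an analyst, can be sketched in a few lines. Everything below is invented for illustration and stands in for no real product; it also shows why such matching is prone to false positives, since an innocuous sentence can trip the same terms.

Code:
# Crude sketch of watchlist-based text flagging: a toy stand-in for what
# vendors describe, not any real product. Categories and terms are invented.
import re

WATCHLIST = {
    "precursor_chemicals": {"acetone", "peroxide", "nitrate"},
    "logistics": {"timer", "detonator", "backpack"},
}

def flag_sentences(text, threshold=2):
    """Return sentences whose total watchlist hits meet the threshold."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        hits = {cat: sorted(words & terms) for cat, terms in WATCHLIST.items()
                if words & terms}
        score = sum(len(h) for h in hits.values())
        if score >= threshold:
            flagged.append((score, hits, sentence.strip()))
    return flagged

sample = ("The cleaning recipe calls for acetone and a nitrate salt. "
          "Grandma's cake needs sugar and flour.")
for score, hits, sentence in flag_sentences(sample):
    print(score, hits, "->", sentence)   # flags the first sentence, harmless or not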

Raz explained another feature of SAIP: Users can create an avatar “in order to get the credentials to closed forums and to gather information from closed forums. This is also one way you can counterfeit your activities.” Facebook does not allow people to create fake profiles, but the technology Raz and others are selling promises to blend into social networks so that profiles operated by law enforcement look authentic. Multiple news outlets this year have reported that the Israeli Police use similar tactics by creating fake Facebook profiles to befriend targets of investigations and monitor Palestinians. Though Israeli Police spokesperson Micky Rosenfeld was quoted in one report as confirming the police use this tactic, he denied this claim when contacted by The Intercept.

Mer Group’s clients are in Israel and abroad. The company does “joint development” work with Unit 8200, according to Raz, and they recruit veterans from the unit to work for the company. Other clients are scattered around the world, including in Europe, though Raz refused to divulge specifics. But publicly available information shows, for instance, that in 2011 Mer inked a $42 million contract with Buenos Aires to set up a “Safe City” system, complete with 1,200 surveillance cameras, including license plate recognition technology.


Ahmed Mansoor, a Dubai-based blogger and activist, poses for a portrait in Dubai, in the United Arab Emirates, on Sept. 25, 2012. Photo: Bloomberg/Getty Images

Unit 8200’s ties to the Israeli surveillance industry attracted widespread attention in late August, when digital security researchers at the University of Toronto-based Citizen Lab released a report detailing the provenance of a specific type of malware. They said it was likely that the United Arab Emirates had targeted Ahmed Mansoor, a prominent human rights activist, with sophisticated spyware that had the ability to turn his iPhone into a mobile surveillance device that could track his movement, record his phone calls, and control his phone camera and microphone.

Mansoor says he’s been targeted since 2011, the year he signed a petition demanding democratic reforms in the Emirates. “The state security authorities are basically very obsessed with the monitoring and spying on people, activists,” he said. “They are totally possessed with this kind of thinking.”

Citizen Lab analyzed the spyware after Mansoor received a text message with a link promising “new secrets about torture of Emiratis in state prison.” Rather than clicking, Mansoor sent the texts to the digital security group, which had also, in 2012, analyzed spyware created by Italian surveillance company Hacking Team that had infected Mansoor’s computer.

“We’ve never seen any exploits like this for a mobile device which operates on the very latest version,” said Bill Marczak, a senior research fellow at Citizen Lab who co-authored the report.

The culprit behind the spyware, Citizen Lab’s report concluded, was the NSO Group, a secretive Israeli surveillance company.

Details about the NSO Group are hard to come by. Their founders rarely talk to the press. They have no website. Israeli and foreign news media have reported that Omri Lavie and Shalev Hulio, the founders of the company, are veterans of Unit 8200. However, some Israeli outlets have reported that they served in other units. Still, at least three of NSO Group’s current employees served in the intelligence unit, according to their LinkedIn pages. And Unit 8200 veterans provided the company with $1.6 million in seed money to develop Pegasus, the name for the spyware, according to Defense News, a trade publication.

Zamir Dahbash, the spokesperson for the company, did not answer specific questions about the NSO Group, which was bought in 2014 for $120 million by a U.S. private equity fund. He told The Intercept in a statement that “the company sells only to authorized governmental agencies, and fully complies with strict export control laws and regulations. … The agreements signed with the company’s customers require that the company’s products only be used in a lawful manner.”

On its face, an Israeli surveillance company selling spyware to an Arab nation is striking. The United Arab Emirates and Israel do not have official diplomatic relations, and like in other parts of the Arab world, many Emiratis detest Israel’s decadeslong occupation of Arab lands. But NSO Group’s sale to the UAE is an indication of the growing ties between Israel and the Gulf state, which has a growing appetite for surveillance gear.

“These regimes are unstable in the sense that most of the people living in these regimes do not have basic rights,” said Gordon, the Israeli scholar, “and they constantly need to monitor and surveil their populations.”

In February 2015, the Middle East Eye writer Rori Donaghy reported that the UAE had signed a contract with Asia Global Technologies, a Swiss-registered company owned by an Israeli and reportedly staffed by former Israeli intelligence agents, to set up a surveillance system featuring thousands of cameras.

Donaghy, who is also the founder of the Emirates Centre for Human Rights, said the UAE has quietly bought hundreds of millions of dollars worth of security products from Israel in recent years. The UAE turns to Israel, he said, because it believes Israelis are “simply the best in this market, the most intrusive, the most secretive.”

-------------------------------------------------------------------------------------------------------------------------------

I had to do some highlighting.  Boeing is close to me, and I may have been a guinea pig.

Four years ago, when I was actively being surveilled by all the tools, one of these five-foot birds was dancing above the rooftops of houses around me as I rode my bicycle into Renton from Newcastle.  I had my phone with me, and it just kept coming back.  It was hard to see, and while it was not going very fast, it was only thirty feet above the roofs, so blink and you missed it.  It was grey like the sky.  I'd see it, go a mile, and see it again.  That highlight was bold; it could be total coincidence, and of course they are going to test them here.  Boeing airports are only miles from me.  The thing made no sound at all.

About the red highlights.  Those happened to me five years ago, when I was blogging online about how I had traced troll URLs all the way to the Department of Defense network, which was at the time being used to proxy Homeland Security troll traffic so the trolls appeared to originate from different places.  If you want the treatment most Americans would call bat-shit crazy, untrue, not-in-America bullshit to happen to you, just get a line, some real details, about how we are all being surveilled, and then try to talk about it.  You will find that technology let tyranny in the back door several years ago, when you were not looking.

Narus technology helped AT&T collect internet traffic and billions of emails and forward that information to the National Security Agency, according to reporting in Wired magazine and documents from the Snowden archive.

Try writing out a detailed explanation of what you found out about domestic spying and emailing it to any major media magazine or newspaper.  Your email will be triplicated and returned to sender.  The words will be scrambled in your email.  It will be a salad.  OK, foiled but not stopped, you get a reporter on the phone.  The reporter will agree to meet with you if you send them an email first.

Sophisticated spyware that had the ability to turn his iPhone into a mobile surveillance device that could track his movement, record his phone calls, and control his phone camera and microphone.

You will know about this one when your battery suddenly needs charging all the time.  You will also notice some changes in how responsive your phone is.  It will lag more and show a different personality.  The kicker will be if you blog frequently in the comment section of a blog such as Jim Kunstler's Clusterfuck Nation, and from time to time a single troll, and nobody else but that troll, such as a troll named Janos Sorensky at Clusterfuck Nation, repeatedly uses one of your sentences that you have spoken at home, verbatim.  Sentences long enough and individual enough that there can be no doubt in your mind at all that you are only one memo away from being in the crosshairs, if your mind wants to run in that direction, and your life will be full of coincidences to encourage your mind to go in that direction.  Such as gang stalking, and a strange man confronting you and saying, "I'd like to put a bullet in Obama's brain, what do you think about that?"

Now he says, "I'd like to put a bullet in Trump's brain, what do you think about that?"  Of this we can be sure.

The thought comes to mind that I should write this up in a novel.  I never thought about that before now, because to me all this has the glamour of anal rape, and I don't think I'd like to write about that.  But the mundane facts of real life do matter, and I think I have the plot twist worked out.  Not that anything really needs to be worked out, since real life has already provided all the reality I need to capture.

The Orange Highlight:

“The state security authorities are basically very obsessed with the monitoring and spying on people, activists,” he said. “They are totally possessed with this kind of thinking.”

As the one who confronted me explained, "There is a war on."  Those who hold that view, be it rational or self-serving, act on their belief, and in that belief you have no rights and are an inconsequential gnat to be swatted down should you irritate the body of the state or threaten the existing order at all.  This tyrannical state of mind is without borders, and zealots of many kinds embrace it.  Now an orange zealot tweets, and war will Trump justice.  The players have changed; the game is the same.

'Those' with a 'there is a war on' attitude very much consider Diners to be worthy of surveillance, as any critic, be they rational or crazy, can lead sheep to dissent and must be stopped by any means necessary.  That is 'their' attitude, and you don't have to tell me they are crazy, I already know.  Too bad they run the show.
« Last Edit: September 09, 2018, 01:40:47 PM by K-Dog »
Under ideal conditions of temperature and pressure the organism will grow without limit.

Offline Eddie

  • Administrator
  • Master Chef
  • *****
  • Posts: 16317
    • View Profile
Re: Why Technology Favors Tyranny
« Reply #3 on: September 09, 2018, 12:05:45 PM »
Quote from: K-Dog on September 09, 2018, 11:22:52 AM
Why Technology Favors Tyranny.  I read the whole thing, and it was hard to deal with the mental bubblings of someone who stayed up late the previous night watching reruns of Star Trek: The Next Generation and still dreams of the Borg while earning their salary banging on a keyboard. ...

I love it when a Diner hits his stride as a writer and a critic of the popular press. You're on a roll, dude.

Carry on, and don't worry about remaining calm. 
What makes the desert beautiful is that somewhere it hides a well.

Offline Surly1

  • Administrator
  • Master Chef
  • *****
  • Posts: 14541
    • View Profile
    • Doomstead Diner
Re: Why Technology Favors Tyranny
« Reply #4 on: September 10, 2018, 02:08:04 AM »
Quote from: K-Dog on September 09, 2018, 11:22:52 AM
Why Technology Favors Tyranny.  I read the whole thing ... I imagined the computer playing chess with itself ... and realized this article was written in exactly the same way.  By an algorithm.


No kidding. I guess we read two different articles. In the one I read, the author made a sophisticated argument about the dangers of AI and machine learning, and the power of these technologies in the wrong hands. Hence the various real-world examples that seem to frustrate you so much.

The prospect of a future dominated by AI in the hands of the Stephen Millers of the world, constantly learning from data provided by IoT and other sensors, makes me grateful that I will not live to see it.

Seems to me that the point of Harari's article is:
Quote
The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.

and...
Quote
In ancient times, land was the most important asset, so politics was a struggle to control land. In the modern era, machines and factories became more important than land, so political struggles focused on controlling these vital means of production. In the 21st century, data will eclipse both land and machinery as the most important asset, so politics will be a struggle to control data’s flow.

Harari concludes,

Quote
If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing.

OK. Up for that? How to get started? While the clueless Star Trek-watching rubes are only beginning to trace the outlines of the gaming table, the smart money has already placed its bets:
Quote
Nationalization of data by governments could offer one solution; it would certainly curb the power of big corporations. But history suggests that we are not necessarily better off in the hands of overmighty governments. So we had better call upon our scientists, our philosophers, our lawyers, and even our poets to turn their attention to this big question: How do you regulate the ownership of data?

He offers nothin' in terms of a possible strategy or set of tactics, but punts that to the nearest gaggle of "scientists, philosophers, lawyers, and poets." So four guys at a bar are supposed to jawbone an answer? Now THAT's a legit criticism. Where is the political will to orchestrate a battle over data ownership, when only the merest fraction of people even understand that it's a problem? People have signed their rights away to social media providers in the name of convenience, and received far less in compensation than veterans of the Revolutionary War got for signing away their land-grant rights.

Little chance to win a war when you don't even know you're in a fight.
"It is difficult to write a paradiso when all the superficial indications are that you ought to write an apocalypse." -Ezra Pound

Offline K-Dog

  • Administrator
  • Sous Chef
  • *****
  • Posts: 2713
    • View Profile
    • K-Dog
Re: Why Technology Favors Tyranny
« Reply #5 on: September 10, 2018, 08:42:18 AM »
I can't get worried about AI.  It is hype.  The last few years saw big progress in image recognition, but the fact remains that all AI can do, or will be able to do in the foreseeable future, is solve domain-specific problems.  Spend $200K and point a camera at a conveyor belt of black and red marbles, and with another $200K in programming, AI can sort the black marbles from the red ones for you.  Driving a car is an example of a domain-specific problem: millions of dollars to solve one problem, and taking into account all possible error scenarios still remains to be done.  Robot AI to make hamburgers will cost more than an unemployed American would cost after working for ten years.  It won't steal, but it will break.
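To put the marble example in perspective: the entire 'intelligence' in that kind of system is a domain-specific rule applied to camera readings, something like the toy sketch below (fake sensor values, no real vision library), and it knows nothing outside that one task.

Code:
# The whole "AI" in a marble sorter reduced to a toy: read a color, apply a
# rule, route the marble. Fake sensor values; no real camera or robot here.
import random

def fake_camera_reading():
    """Stand-in for the vision system: an (R, G, B) reading of the next marble."""
    if random.random() < 0.5:
        return (random.randint(150, 255), random.randint(0, 60), random.randint(0, 60))
    return (random.randint(0, 60), random.randint(0, 60), random.randint(0, 60))

def classify(rgb):
    r, g, b = rgb
    return "red" if r > 100 and r > 2 * max(g, b) else "black"

bins = {"red": 0, "black": 0}
for _ in range(1000):                      # one shift on the conveyor belt
    bins[classify(fake_camera_reading())] += 1
print(bins)                                # sorts marbles; can do nothing else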

That is button number 1 with me.  I know about these things.  What are we talking about here, an operating system or a program?  True AI would be an operating system, but all we have now are programs.  That is a night-and-day difference, but AI is bandied about in ignorance of that fact.

Button number 2 is that, having experienced personal fuckification by the troll forces of national security, who have the resources of the FBI behind them to deal with barking dogs, I'm sensitive to pontification about spying by people who have not actually experienced the dark side of it themselves.  You seem to forget about number 2.  But since it happened to me, hell will indeed freeze over before I forget about being followed by suits.  Black suits sans ties, memorable for eternity.

I am also particularly sensitive to Israelis who pontificate about surveillance, as they are possible conduits of misinformation.  How did they get their software onto Apple phones?  What level of international cooperation with American intelligence would that require?  The answer is lots.  What does that mean?  Lots.

You read it as well informed; I read it as spin.  The issue here is my knowledge, which in this area is extensive, and also what has happened to me personally.
Under ideal conditions of temperature and pressure the organism will grow without limit.

 
