
AI: Threat or Menace?


From the keyboard of Surly1
Follow us on Twitter @doomstead666
Like us on Facebook

 

 

Originally published on the Doomstead Diner on April 2, 2018

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”

—Ray Kurzweil


We came of age imagining New Frontiers, an idyllic time of relative innocence when anything seemed possible: rockets that would travel to the moon like buses,  a permanent space station, and flying cars a la the Jetsons.  It was the go-go 50s and 60s, when an energized Team America sat astride the top of the world, with few limits on dreams and none on ambition. Optimism hung in the air like the scent of roses on a spring morning. 

In the America of the 1950s and 60s, the future was filled to bursting with promise.  A youthful and beloved president set the country a challenge to travel from the earth to the moon in a decade, which we did, though he did not live to see it.

Young people read about ENIAC, the first (room-sized) computer designed to compute artillery tables during WWII (and later used for nukes). Large mainframes followed; in went punchcards, out came reports. Even my high school had one. Science fiction writers, envisioning the future, foresaw robots who would reliably assist humans in a variety of tasks and, of course, adventures. As a boy, I had a toy Robby the Robot, a dutiful servant in the 1956 MGM science fiction film Forbidden Planet. Later on, as I began to read science fiction, I encountered Isaac Asimov's original three laws of robotics.

Introduced in his 1942 short story "Runaround" and included in I, Robot, The Three Laws are:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws provided themes for Asimov's robot-based fiction and were devoured by young adults. Intended as a safety feature, the Laws could not be bypassed. This led to interesting plot twists in many of Asimov's robot-focused stories, as robots react in unusual and counterintuitive ways as a consequence of how they apply the Three Laws to a given situation. Other authors working in Asimov's fictional universe adopted them, and over time we seem to have taken them as a given.

They are not. The utopian futures envisioned by earlier writers have given way to Terminator robots and Skynet, to say nothing of pilotless drones raining relentless death down on wedding parties. We're a long way from Robby the Robot.


The notion of intelligent automata, a non-human intelligence, dates back to ancient times. More recently, computer technology may trace itself back to Charles Babbage and his Difference Engine, but "artificial intelligence" can be traced to 1956 and a conference at Dartmouth where the term was coined. Research in the field ebbed and flowed over the decades, and has clearly benefited most recently from increases in computing power. In 1997, when IBM's Deep Blue defeated Russian grandmaster Garry Kasparov, and in 2011, when IBM's Watson won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings, a technological Rubicon was crossed.

It's neither my purpose nor within my ability to trace all of the meaningful developments in AI, but I thought it might be useful to consider AI's implications for the future. And yes, I am aware that for much of this discursion I am conflating robotics and AI, but since both rely on vast increases in processing power to be fully realized, keep your rotten vegetables in the bag and bear with me.

“The miraculous has become the norm.” –Jonathan Romney

Sales of manufacturing robots increase each year. According to the International Federation of Robotics, robot sales in 2015 showed a 15% increase over the prior year, and the IFR estimates that over 2.5 million industrial robots will be at work in 2019, an average growth rate of 12% per year between 2016 and 2019. Workers have labored side by side with robots for decades. My wife's father was a foreman at Ford who worked with robots in the 70s, so robots on the factory floor are nothing new. But the predicted rate of adoption, coupled with the prospect of driverless fleets, raises the question: what happens to the jobs? And the workers?
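The IFR projection is just compound growth. A quick sketch (the 12% annual rate comes from the article; the 2016 starting stock of 1.8 million is an assumed round number for illustration only):

```python
def compound(stock, rate, years):
    """Project an installed base forward at a fixed annual growth rate."""
    for _ in range(years):
        stock *= 1 + rate
    return stock

# ~12% per year, 2016-2019 (rate from the article; starting base assumed)
base_2016 = 1.8e6
projected_2019 = compound(base_2016, 0.12, 3)
print(round(projected_2019))  # about 2.5 million robots
```

Three years of 12% growth compound to roughly 40%, which is how an installed base below two million clears 2.5 million by 2019.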

No doubt robots increase productivity and competitiveness. This productivity can lead to increased demand and new job opportunities, often in more highly skilled and better-paying jobs. Yet for all this rosy optimism, fear nags. More often, it leads right to profits for the owners and immiseration for the laid-off.

Several years ago, author and futurist Ray Kurzweil referred to a point in time known as "the singularity," that point at which machine intelligence exceeds human intelligence. Extrapolating from the exponential growth of technology described by Moore's Law (which holds that computing processing power doubles approximately every two years), Kurzweil has predicted the singularity will occur by 2045.
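Kurzweil's arithmetic is easy to check. A toy calculation under the Moore's Law assumption of a doubling every two years (the doubling period is an empirical observation, not a law of nature, and may not hold that long):

```python
def doublings(start_year, end_year, period=2):
    """Count Moore's Law doublings between two years."""
    return (end_year - start_year) // period

# From 2018 to 2045 there are 13 two-year doublings,
# i.e. roughly an 8000-fold increase in raw computing power.
n = doublings(2018, 2045)
print(n, 2 ** n)  # 13 8192
```

Note that a billion-fold increase (about 2^30) at a constant two-year pace would take roughly 60 years, which is why Kurzweil's claim leans on accelerating, not constant, doubling rates.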

“The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” —Elon Musk

Several thinkers worth listening to, including the late physicist Stephen Hawking and entrepreneur Elon Musk, warn that the development of AI gives cause for concern.

"The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking had a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig's disease, and communicated using specialized speech software.)

And Hawking isn't alone. Musk told an audience at MIT that AI is humanity's "biggest existential threat." He also once tweeted, "We need to be super careful with AI. Potentially more dangerous than nukes."

Despite these high-profile fears, other researchers argue the rise of conscious machines is a long way off. Says Charlie Ortiz, AI head of a Massachusetts-based software company, "I don't see any reason to think that as machines become more intelligent … which is not going to happen tomorrow — they would want to destroy us or do harm. Lots of work needs to be done before computers are anywhere near that level."

Reassured yet?

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”              —Eliezer Yudkowsky

“Someone on TV has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.’” —Sybil Sage

"Alexa, make me a cocktail, willya?" Not quite yet, but perhaps soon, as companies are incorporating AI into their products. From smartphone assistants to driverless cars, Google is positioning itself to be a major player in the future of AI. Amazon and Apple have staked out their own strong positions, as the ubiquity of digital assistants like Siri and Alexa makes them ghostly familiars… with access to your personal information, internet search histories, text messages and porn habits. And with Facebook and hundreds of apps hoovering up our personal information for resale to unseen third parties, for purposes available only on a need-to-know basis, and you don't need to know…

… because YOU are the product.

"Machine learning" is a term of art referring to computer systems that learn from data. Time was, computers simply followed instructions and performed computations for data crunching. Today's devices use a set of machine-learning algorithms, collectively referred to as "deep learning," that allow a computer to recognize patterns in massive amounts of data. This is a deep and profound change, the implications of which we have not yet grasped. And if we have not grasped it, how can we control it or appreciate its repercussions?
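The difference between "following instructions" and "learning from data" can be made concrete in a few lines: a toy perceptron that is never told the AND rule, only shown labelled examples. This is entirely illustrative; real deep learning stacks millions of such units.

```python
# Minimal 'learning from data' sketch: a perceptron that infers the AND
# rule from labelled examples instead of being programmed with it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # nudge weights toward the label
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])
# → [0, 0, 0, 1]: the machine recovered AND from examples alone
```

After a few passes the weights settle on values that reproduce the rule, a miniature version of the pattern recognition described above.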

Recently AI developed its own non-human language. Researchers at Facebook Artificial Intelligence Research, training their chatbot "dialog agents" to negotiate, described how the bots made up their own way of communicating.

At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language… the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.

So Facebook had to pull the plug because in a short period of time, the robots had developed their own language. Not sure about you, but when I envision a future where I attempt a transaction with online chatbots armed not only with a chip full of predictive algorithms, but also in possession of the entire dossier of personal information gleaned from every keystroke I've ever recorded, well, I'm not liking my odds. Here is your "permanent record" made real.

And then there is the prospect of the Internet of Things (IoT), a galaxy of sensors embedded in everyday objects, enabling them to send and receive data. This is made possible by ever more widely available broadband internet, less expensive connection costs, and more devices built with Wi-Fi capability and embedded sensors. I already know my phone and TV listen to me; will they next connive against me in concert with the refrigerator and the coffee maker? Encourage the air conditioner to go on strike?

All roads in AI seem to lead to dystopia. Our inability to imagine a more positive future for artificial intelligence may stem from the fact that we've lost faith in ourselves. We've seen the tech companies in action, and they are opaque. And they sell the data they mine with impunity to unseen actors. Our morality is defined not by the Church or by civic pride, but by the spreadsheet; our worth found in the lower right-hand corner. Knowing we are cooking the planet, we insist on burning the last few gallons of liquid sunlight left in the ground to wring out the last few dollars of profit. We willingly sacrifice children to the profits of the Slaughter Lobby. We elect louts to lead us, accept sabotage as political business-as-usual, embrace treason as a cost of doing business. Under the circumstances, who would dare envision a happier future?

Who could imagine Asimov's Three Laws emerging from any part of today's debased culture?


Surly1 is an administrator and contributing author to Doomstead Diner. He is the author of numerous rants, screeds and spittle-flecked invective here and elsewhere, and was active in Occupy. He lives in Southeastern Virginia with his wife Contrary in quiet and richly-deserved obscurity. He will have failed if not prominently featured on an enemies list compiled by the current administration.

Your Robot Overlord Does Not Love You

Off the keyboard of Surly1
Follow us on Twitter @doomstead666
Friend us on Facebook

 

 


 

Originally published on the Doomstead Diner on August 23, 2014

 


 

The Three Laws of Robotics, a set of rules devised by science fiction author Isaac Asimov:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
― Isaac Asimov, I, Robot

 

In the process of preparing last week’s overheated screed, I came across an article that, after nearly 4,000 words, consideration for my audience bade me defer to another day: the fact that Elon Musk, he of Tesla and SpaceX, and widely regarded as one of the smartest guys in the room, had concluded that one of the gravest dangers to the continuation of the human race was not nuclear power so much as artificial intelligence.

Consider that for a moment. Or better yet, read the article in the original here.  In a couple of reported Tweets, Musk urged that we be “super careful with AI. Potentially more dangerous than nukes,” and “Hope we’re not just the biological boot loader for digital super intelligence. Unfortunately that is increasingly probable.” Musk’s concern was spurred by a book by Nick Bostrom of Oxford’s Future of Humanity Institute entitled  “Superintelligence: Paths, Dangers, Strategies.” 

The book addresses the prospect of an artificial superintelligence that could feasibly be created in the next few decades. According to theorists, once the AI is able to make itself smarter, it would quickly surpass human intelligence.

What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn’t stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction—and maybe that’s exactly what Elon Musk is afraid of.

So what are some of these thought experiments? Bostrom says,

“We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans – scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures of life, humility and selflessness, and so forth.”

Your mileage may vary, but from Gaza to Ferguson, we find these so-called human values already lacking in much of what passes for humanity. What worries Musk and his oracles are the unintended consequences of building artificial intelligence detached from ordinary human ethics. Future AI might find more value in computing the decimals of pi or insuring its own survival than solving human problems in ways that we might recognize as helpful.

Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute:

“The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

Without recapitulating the entire article, its point is that it is difficult for programmers to anticipate the instructions necessary to program the ethical dimension and problem solving capability to safeguard human life. On the other hand, we find that in other parts of our military-industrial complex, our tax dollars are already working overtime to create artificial creatures whose purpose is ostensibly benign, but the implications of which are terrifyingly apparent to anyone who has seen Terminator movies.

In a breezy article on Geek Pride entitled, “5 Apocalypses You Are Probably Not Ready For” the authors consider not only technology that enables one monkey to control the actions of another monkey by simply thinking, but also a device they call, “Human Powered, Googlezon Big Spider DroneBotcalypse.”

Now, a robot that can’t be knocked over is terrifying enough. It can also climb stairs and is allegedly powered by your hopes and dreams. Why Google is doing this is anyone’s guess, but we can only be led to assume that it is to take over the world.

“Well,” you say “It’s not like they’re trying to watch our every move or anything!” Well…

Google is watching you. And it likes what it sees. You been working out?

So we have a company that watches everything you do online, records video of you when you’re offline and robots that can walk up the stairs. The only way we can hide is the removal of stairs, and living in treehouses.

Wrong.

Enter delivery giants Amazon and their patented new delivery system: drones.

Yup.

The drones were initially designed to eliminate the day-long waiting period for Amazon deliveries, shortening the time to as little as 30 minutes. Currently the plan is to have them manned remotely by human pilots, so we’re safe, for now. The main problem is what is known in the drone world as “SWaP — size, weight and power. This is essentially a physics problem: The larger your payload, the more lift you need. The more lift you need, the larger your battery has to be, which further adds to the weight, which adds to the power requirements, and so on” (Washington Post, 2013).
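The SWaP spiral the Post describes is a feedback loop: payload needs lift, lift needs battery, and the battery itself needs lifting. A toy fixed-point iteration shows why the loop settles rather than exploding, so long as each kilogram carried demands less than a kilogram of extra battery (coefficients invented for illustration, not real drone engineering):

```python
# The SWaP spiral as a fixed-point iteration: every kg carried forces
# extra battery mass, which itself must be carried on the next pass.
def total_mass(payload_kg, battery_per_kg=0.2, tol=1e-6):
    """Iterate until the all-up mass stops growing."""
    mass = payload_kg
    while True:
        new_mass = payload_kg + battery_per_kg * mass  # battery to lift 'mass'
        if abs(new_mass - mass) < tol:
            return new_mass
        mass = new_mass

print(round(total_mass(10), 2))  # 10 kg of payload becomes ~12.5 kg all-up
```

With `battery_per_kg = 0.2` the loop converges to payload / 0.8; push the coefficient toward 1 and the all-up weight runs away, which is the "and so on" in the quote.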

Essentially, this boils down to a matter of time and money before drones can carry a bigger payload, such as a 500 lb BigDog robot. This may seem a long way off, but all Amazon probably needs is a massive cash injection for the advances to be put into effect. Cash the likes of which Google might have.

I give you  Googlezon, probable merger of the late 2020s and new owners of the world.

The motorized bison is a creature called “BigDog,” currently being developed by Boston Dynamics under a DARPA grant generously provided by you and me. The ostensible purpose is search, rescue and supply, but…

BigDog is a rough-terrain robot that walks, runs, climbs and carries heavy loads. BigDog is powered by an engine that drives a hydraulic actuation system. BigDog has four legs that are articulated like an animal’s, with compliant elements to absorb shock and recycle energy from one step to the next. BigDog is the size of a large dog or small mule; about 3 feet long, 2.5 feet tall and weighs 240 lbs.

BigDog’s on-board computer controls locomotion, processes sensors and handles communications with the user. BigDog’s control system keeps it balanced, manages locomotion on a wide variety of terrains and does navigation. Sensors for locomotion include joint position, joint force, ground contact, ground load, a gyroscope, LIDAR and a stereo vision system. Other sensors focus on the internal state of BigDog, monitoring the hydraulic pressure, oil temperature, engine functions, battery charge and others.

BigDog runs at 4 mph, climbs slopes up to 35 degrees, walks across rubble, climbs muddy hiking trails, walks in snow and water, and carries a 340 lb load.

Development of the original BigDog robot was funded by DARPA. Work to add a manipulator and do dynamic manipulation was funded by the Army Research Laboratory’s RCTA program.

And the news keeps getting worse. Rather than embrace the high ground of “robot morality” imagined by Asimov, we find that the Pentagon is in the early days of raising a robot army. The justification is that the military is rapidly creating weapons systems that will ostensibly need to make moral decisions. Current military regulations prohibit armed systems that are fully autonomous. Yet the increasing sophistication of military technology demands greater and greater autonomy, and where lives are at stake, machines capable of weighing moral factors. What could possibly go wrong?

The U.S. military is trying to develop and deploy a real-life Terminator. A research agency associated with the Pentagon has unveiled pictures of a robot that looks and walks like a man.

The ATLAS robot is being developed by the Defense Advanced Research Projects Agency (DARPA) and a Massachusetts company called Boston Dynamics. DARPA, known as “the Pentagon’s weird science agency,” is the organization credited with inventing the internet. DARPA now has an intensive effort to create robots such as ATLAS underway at its facilities, and a new video reveals some of the latest developments.

DARPA has told the press that ATLAS is designed to enter disaster areas such as places contaminated by radiation or toxic chemicals and provide relief. Yet it would also function perfectly on the battlefield.

David Swanson imagines a brave new world of Pentagon robotics:

The Pentagon has hired a bunch of philosophy professors from leading U.S. universities to tell them how to make robots murder people morally and ethically.

Of course, this conflicts with [Asimov’s]  first law above. A robot designed to kill human beings is designed to violate the first law.

The whole project even more fundamentally violates the second law. The Pentagon is designing robots to obey orders precisely when they violate the first law, and to always obey orders without any exception. That’s the advantage of using a robot. The advantage is not in risking the well-being of a robot instead of a soldier. The Pentagon doesn’t care about that, except in certain situations in which too many deaths of its own humans create political difficulties. And there are just as many situations in which there are political advantages for the Pentagon in losing its own human lives: “The sacrifice of American lives is a crucial step in the ritual of commitment,” wrote William P. Bundy of the CIA, an advisor to Presidents Kennedy and Johnson. A moral being would disobey the orders these robots are being designed to carry out, and — by being robots — to carry out without any question of refusal. Only a U.S. philosophy professor could imagine applying a varnish of “morality” to this project.

The Third Law should be a warning to us. Having tossed aside Laws one and two, what limitations are left to be applied should Law three be implemented? Assume the Pentagon designs its robots to protect their own existence, except when . . . what?

Now Big Dog has a buddy to take him for a walk. And in terms of reaction and tone,  at least to my taste, these guys have it about right:
No, it’s not a souped-up version of Robby the Robot — it’s ATLAS, DARPA’s latest attempt at creating a humanoid robot. Unlike the super-realistic Petman, which was designed to test chemical protection clothing, this 330-pound monster is meant to assist in emergency situations. Riiiight...

We’ve seen a proto-version of ATLAS before, but this updated unit can perform a host of new tricks, like walking through rugged terrain and climbing using its hands as feet. It has 28 hydraulically actuated degrees of freedom, and of course, two hands, arms, legs, feet, and a torso with some kind of fancy-ass monitor on it that probably goes “ping!” every once in a while.

Hmmm, by “tools” I wonder if they mean “machine gun.”

No one who watched some of the best legal minds of a generation labor for the Bush administration to create legal justification for torture should be surprised that the Pentagon can hire ethicists and philosophers to determine under what circumstances a robot may commit murder.  Paging Dr. Mengele…

Here are the three laws David Swanson posits will replace Asimov’s:

1. A Pentagon robot must kill and injure human beings as ordered.
2. A Pentagon robot must obey all orders, except where such orders result from human weakness and conflict with the mission to kill and injure.
3. A Pentagon robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

They would behave in much the same manner as some of our all-too-human military today, to say nothing of SWAT-gear-hungry cops, those Barney Fifes in military drag making up for their dateless high school weekends and various manhood inadequacies by pointing loaded rifles at unarmed civilians to express their inchoate rage.

As anyone not living in a cave knows full well, the foreign policy of this country, as conducted by the neocons who staged a silent coup to control it (and control it yet, despite the nominal change in political administration), operates in a conscience-free zone. So perhaps Elon Musk is correct to be worried about artificial intelligence, or more precisely, the lack of ethics guiding its technological development. Our culture has technology in spades. What it lacks is a moral dimension, other than materialism and the quest for power, to inform its use.

Thus no one should be surprised by developments like these technological fruits, or their subornation to the worst uses imaginable. Just as MRAPs, SWAT equipment, LRADs and other excess military equipment are helpfully provisioned by the Defense Logistics Agency and transferred to local cops, so too are military techniques of populace suppression. Thus the police become an armed militia whose sole purpose is to protect the property of the .1% and to keep the rabble in line, as we have seen repeated from Oakland to Ferguson to New York City.

Clearly BigDog and ATLAS are just two projects in the robot pipeline, and they are the most visible and showy. For every ostensible “humanitarian use,” there are dozens of less humanitarian uses that don’t make the press releases.

What about the less sexy projects, the smart computers controlling systems, that will make decisions based on whatever parameters are fed into them by the best hired “ethicists and philosophers” that Pentagon money can buy? Perhaps that’s what’s keeping Elon Musk up at night. What could be next: machine-animal hybrids?

 

Or, on the other hand, nothing to worry about, citizen. Pass the Doritos.

***

Surly1 is an administrator and contributing author to Doomstead Diner. He is the author of numerous rants, articles and spittle-flecked invective on this site, and has been active in the Occupy movement. He shares a home in Southeastern Virginia with Contrary, and every day remarks at his undeserved good fortune at having such a redoubtable woman in his life.

 

 

Knarf plays the Doomer Blues

https://image.freepik.com/free-icon/musical-notes-symbols_318-29778.jpg

Support the Diner

Search the Diner

Surveys & Podcasts

NEW SURVEY

Renewable Energy

VISIT AND FOLLOW US ON DINER SOUNDCLOUD

" As a daily reader of all of the doomsday blogs, e.g. the Diner, Nature Bats Last, Zerohedge, Scribbler, etc… I must say that I most look forward to your “off the microphone” rants. Your analysis, insights, and conclusions are always logical, well supported, and clearly articulated – a trifecta not frequently achieved."- Joe D

Archives

Global Diners

View Full Diner Stats

Global Population Stats

Enter a Country Name for full Population & Demographic Statistics

Lake Mead Watch

http://si.wsj.net/public/resources/images/NA-BX686_LakeMe_G_20130816175615.jpg

loading

Inside the Diner

Stan Deyo is a Christian Fundamentalist that worked at Alice Springs Aus. in an underground military installationback in the day.He's been quite the researcher. Anyway he's claiming TSHF IMMEDIATELY  Start the vid at 10:40....[embed=640,380...

  Grid power is still not up for the Panama City Diners, but things are getting back to semi-normal at the Diner Doomstead.  Solar Cells...

We have examined, on several occasions, various views of the Ancient Astronaut Theory, the possibilities of the Anunnaki, and the increasing evidence of advanced civiliz...

Facebook purged more than 800 accounts last week, continuing its scorched-earth campaign of eradicating dissent as Americans prepare to go to the polls. The social media platform is nicely settling into its role as official censor, working hand in glo...

Quote from: Golden Oxen on October 14, 2018, 06:48:37 AMMaster Chef and world renowned food critic GO would have thought those steaks to be strip Sirloin RE. According to the Google Search, they were Ribe...

Recent Facebook Posts

Mainstream Media Drives Getaway Car for Alt-Media Purge - Global Research

Facebook purged more than 800 accounts last week, continuing its scorched-earth campaign of eradicating dissent as Americans prepare to go to..

14 minutes ago

Agents of Chaos: Trump, the Federal Reserve and Andrew Jackson - Global Research

Agents of Chaos: Trump, the Federal Reserve and Andrew Jackson

20 minutes ago

Hurricane Cost May Skyrocket As Billions In Stealth Fighter Jets Unaccounted For; Tyndall AFB "Complete Loss"

Hurricane Cost May Skyrocket As Billions In Stealth Fighter Jets Unaccounted For; Tyndall AFB “Complete Loss”

26 minutes ago

Here’s Where the Post-Apocalyptic Water Wars Will Be Fought

Here’s Where the Post-Apocalyptic Water Wars Will Be Fought–

29 minutes ago

Amid global outrage over Khashoggi, Trump takes soft stance toward Saudis

Global Outrage Over Khashoggi while Saudis Wire $100M, Ties to Crown Prince, Media Industry Is Complicit, Where Post-Apocalyptic Water Wars Will..

36 minutes ago

Diner Twitter feed

Knarf’s Knewz

Quote from: Eddie on March 13, 2018, 05:21:10 PMAl [...]

Quote from: knarf on March 13, 2018, 03:33:01 PMAU [...]

Quote from: knarf on March 13, 2018, 03:25:04 PM [...]

A new study found that the Great Recession correla [...]

From 2003 to 2005, Gina Haspel was a senior offici [...]

Diner Newz Feeds

  • Surly
  • Agelbert
  • Knarf
  • Golden Oxen
  • Frostbite Falls

Doomstead Diner Daily October 18The Diner Daily is [...]

Quote from: Surly1 on October 17, 2018, 02:33:16 P [...]

Quote from: RE on October 17, 2018, 11:22:26 AMQuo [...]

Quote from: Eddie on October 17, 2018, 01:03:32 PM [...]

I'm bi on hotdogs. I can go chili, cheese and [...]

Quote from: Eddie on March 13, 2018, 05:21:10 PMAl [...]

Quote from: knarf on March 13, 2018, 03:33:01 PMAU [...]

Quote from: knarf on March 13, 2018, 03:25:04 PM [...]

A new study found that the Great Recession correla [...]

From 2003 to 2005, Gina Haspel was a senior offici [...]

https://www.wsj.com/articles/sears-files-for-chapt [...]

Quote from: Golden Oxen on October 14, 2018, 11:26 [...]

Ah, here it is. Two separate incidents from Lake L [...]

We get a few of these Naegleria deaths around here [...]

Quote from: RE on October 01, 2018, 03:58:09 AMNot [...]

Not a good day to go Surfing.RESurfer dies from br [...]

Alternate Perspectives

  • Two Ice Floes
  • Jumping Jack Flash
  • Error

The Honor Box By Cognitive Dissonance   It is commonly said the fish rots from the head down, meanin [...]

Animal Spirits and Over Extended Markets By Cognitive Dissonance     Animal spirits is the term John [...]

  (Edit: I've tried to write on this subject for a while now and failed, realizing I would not, [...]

Mother Nature Shows Off Her Stuff By Cognitive Dissonance     Mrs. Cog and I live on the edge of the [...]

Control the Narrative and You Control the People By Cognitive Dissonance   It is extremely difficult [...]

Event Update For 2018-10-15http://jumpingjackflashhypothesis.blogspot.com/2012/02/jumping-jack-flash-hypothesis-its-gas.htmlThe [...]

Event Update For 2018-10-14http://jumpingjackflashhypothesis.blogspot.com/2012/02/jumping-jack-flash-hypothesis-its-gas.htmlThe [...]

Event Update For 2018-10-13http://jumpingjackflashhypothesis.blogspot.com/2012/02/jumping-jack-flash-hypothesis-its-gas.htmlThe [...]

Event Update For 2018-10-12http://jumpingjackflashhypothesis.blogspot.com/2012/02/jumping-jack-flash-hypothesis-its-gas.htmlThe [...]

Event Update For 2018-10-11http://jumpingjackflashhypothesis.blogspot.com/2012/02/jumping-jack-flash-hypothesis-its-gas.htmlThe [...]

RSS Error: This XML document is invalid, likely due to invalid characters. XML error: not well-formed (invalid token) at line 1, column 9109

Daily Doom Photo

(photo: man watching TV)

Sustainability

  • Peak Surfer
  • SUN
  • Transition Voice

Twelve More Years To Do Nothing: "The future is changing, with or without us." When the latest IPCC report landed with a thu [...]

How Joe Hill came to coin Pie in the Sky: "When Christine Blasey Ford spoke of sound-memories embedded in the hippocampus she was attempt [...]

Ponzinomics: "Tolerable parasites are those that have minimum pain and cost to the host." DONALD TRUMP, [...]

Somewhere, a Tiger Yawns: "Simple, scalable, and shovel ready. China is moving negative emissions from laboratory to fiel [...]

Cherry Blossom Soap: "China’s real wealth is not yuan but cherry blossoms." No longer having a television at hom [...]

The folks at Windward have been doing great work at living sustainably for many years now.  Part of [...]

 The Daily SUN☼ Building a Better Tomorrow by Sustaining Universal Needs April 3, 2017 Powering Down [...]

Off the keyboard of Bob Montgomery Follow us on Twitter @doomstead666 Friend us on Facebook Publishe [...]

Visit SUN on Facebook Here [...]

To fight climate change, you need to get the world off of fossil fuels. And to do that, you need to [...]

Americans are good on the "thoughts and prayers" thing. Also not so bad about digging in f [...]

In the echo-sphere of political punditry consensus forms rapidly, gels, and then, in short order…cal [...]

Discussions with figures from Noam Chomsky and Peter Senge to Thich Nhat Hanh and the Dalai Lama off [...]

Lefty Greenies have some laudable ideas. Why is it then that they don't bother to really build [...]

Top Commentariats

  • Our Finite World
  • Economic Undertow

But now Saudi Arabia possesses nukes, with the help of Pakistan: https://www.bbc.com/news/world-middle-east-2 [...]

https://qz.com/1423636/saudi-threats-to-us-over-khashoggi-wouldve-been-scarier-in-past/ They threat [...]

The universities now are mostly only for creating jobs for those who teach and work at them. The uni [...]

"Apparently some purchases were brought forward due to new emission controls… causing this mass [...]

Apologies. It's no longer easy to get clear and accurate information from the Internet. This mu [...]

The author's gist seems to be that we should keep investing in shale because - all other condit [...]

Thanks for the article Steve. As usual, perfect - almost eldritch - timing. One question, what do yo [...]

What do you think of the author's reasoning? So the initial production rates from a well in the [...]

Here's one: https://seekingalpha.com/article/4210065-shale-oil-ponzi-scheme-evidence-decline-cu [...]

@Dolf Can't disagree with your solution for many people. But if you are involved in producing an [...]

RE Economics

  • Going Cashless
  • Simplifying the Final Countdown
  • Bond Market Collapse and the Banning of Cash
  • Do Central Bankers Recognize there is NO GROWTH?
  • Singularity of the Dollar
  • Kurrency Kollapse: To Print or Not To Print?
  • SWISSIE CAPITULATION!
  • Of Heat Sinks & Debt Sinks: A Thermodynamic View of Money
  • Merry Doomy Christmas
  • Peak Customers: The Final Liquidation Sale


Technical Journals

Forest management based on sustainability and multifunctionality requires reliable and user-friendly [...]

Nitrous oxide (N2O) is a potent greenhouse gas (GHG). Although it comprises only 0.03% of total GHGs [...]

This study presents a method to investigate meteorological drought characteristics using multiple cl [...]

El Niño–Southern Oscillation strongly influences rainfall and temperature patterns in Eastern Austra [...]