
Cambridge Divests $1 Billion From Nukes Following Grassroots Campaign

“Didn’t the threat of nuclear holocaust end with the Cold War?”

“Actually, no. The threat is greater now than ever before.”

There are approximately 15,000 nuclear weapons on Earth, about 1,800 of which are on hair-trigger alert, ready to be launched within minutes – and accidental launch has nearly been triggered during a large number of close calls. Yet rather than reduce this risk by trimming its excessive nuclear arsenal, the US plans to invest $4 million per hour for the next 30 years – a total of $1 trillion – to make our nuclear arsenal even more lethal, in no small part due to lobbying by nuclear weapons producers. This new nuclear arms race is driven partly by money, but that also means it can be mitigated with money: by divesting from companies that produce nuclear weapons. Nuclear weapons divestment not only takes money out of the hands of nuclear weapons producers; it also creates a strong stigma that can lead to policy change, as happened previously with both land mines and cluster munitions.

At the Future of Life Institute, we took up the challenge of engaging our local city of Cambridge on this issue. Using publicly available documents, we found that the city of Cambridge was both directly and indirectly supporting the manufacturing of nuclear weapons. Among its investments, for example, was Honeywell International, which is responsible for the construction of Trident II (D5) nuclear missiles. Little did we know that, with this knowledge and a single email to Cambridge Mayor Denise Simmons, we would set off a campaign that would end in a vote recommending a $1,000,000,000 divestment from nuclear weapons.

            ***

On January 19th, Max Tegmark and I sat down with Mayor Denise Simmons at the Cambridge City Hall and found a wonderful friend and ally. Mayor Simmons was enthusiastic about the prospect of nuclear divestment and wanted the city of Cambridge to be an example for other cities, universities, and institutions to follow. We set out from that meeting and, in cooperation with Mayor Simmons and her assistant, drafted a resolution to be introduced and voted on at a city council meeting scheduled for March 21st.

With about two months to go until the vote, we began engaging with the community and local activists to garner as much support for the resolution as possible. We worked to rally the community to give public testimonies at the city council meeting and initiated an email campaign to encourage councilors to adopt the resolution. Attendance and support at the city council meeting were largely garnered through presentations at citizen group meetings, such as at the Cambridge Residents Alliance, and at a Massachusetts Peace Action Conference. Perhaps most importantly, we drafted a 207-page report that detailed Cambridge’s investments and provided nuclear-weapons-free investment alternatives. We were able to provide a list of nuclear-weapons-free mutual funds by aggregating the holdings of socially responsible mutual funds and comparing them against our database of institutions that are directly and indirectly involved in nuclear weapons production. Crucial information about the producers of nuclear weapons, and about the institutions invested in these companies, was provided by Pax’s Susi Snyder and their 2015 Don’t Bank on the Bomb report. Furthermore, we found that only 1.8% of the S&P 500 companies were directly involved in nuclear weapons production, so only a minuscule fraction of Cambridge’s portfolio weight needed divesting. From these findings we concluded that nuclear divestment is extremely feasible even for a $1,000,000,000 pension fund.
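The screening step described above – aggregating each fund's holdings and checking them against a database of nuclear weapons producers – can be sketched as a simple set comparison. The fund names, holdings, and producer list below are purely illustrative placeholders, not the actual data from the report:

```python
# Hypothetical sketch of the fund-screening procedure: a fund passes only if
# none of its holdings appear in the database of nuclear weapons producers.
# All names below are illustrative, not real screening data.

nuclear_weapons_producers = {"Honeywell International", "Lockheed Martin"}

fund_holdings = {
    "Example Social Index Fund": {"Apple", "Microsoft", "Honeywell International"},
    "Example Green Growth Fund": {"Tesla", "First Solar"},
}

def screen_funds(holdings, producers):
    """Return only the funds with no holdings in the producers database."""
    return {
        fund: companies
        for fund, companies in holdings.items()
        if not companies & producers  # set intersection: any overlap disqualifies
    }

clean_funds = screen_funds(fund_holdings, nuclear_weapons_producers)
print(sorted(clean_funds))  # → ['Example Green Growth Fund']
```

In practice the comparison would also have to trace indirect exposure (e.g. a fund holding another fund that holds a producer), but the core logic is this set intersection.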

We soon found ourselves among friends and allies at the March 21st city council meeting. The meeting kicked off with about an hour of 3-minute public testimonies, the large majority of them in support of the nuclear divestment resolution. When the testimonies came to an end, we anxiously waited for our resolution to be addressed. After a few heart palpitations induced by some council jargon that made it sound like our resolution had been suspended, the resolution was brought before the council. Before the vote was held, multiple councilors spoke to the absolute necessity of divestment. In particular, one councilor reflected on the irrationality of profiting from institutions that make us all less safe. How can our species survive if its prosperity is based on an economic system that profits from investing in the inhumane? A vote was finally called, and the city of Cambridge unanimously approved the policy order, recommending that $1,000,000,000 be made unavailable to nuclear-weapons-producing companies.

Lucas Perry speaking to the resolution at Cambridge City Hall.

Shortly thereafter, we held a conference on reducing the threats of nuclear war at MIT, where Mayor Denise Simmons announced the divestment to a packed auditorium. She spoke on the moral, political, and social necessity of divestment and exclaimed, “Not in our name!” She added, “It’s my hope that this will inspire other municipalities, companies and individuals to look at their investments and make similar moves.” We are thrilled with this success as Cambridge becomes the first billion-dollar investor in the U.S. to make such a move, joining over 50 European institutions.

 

Cambridge Mayor Denise Simmons announcing the divestment resolution at the MIT conference.

 

BUT, WAIT! DIVESTMENT DOESN’T END THERE! You can help keep this going. As Mayor Simmons said, a successful nationwide campaign of divestment and stigmatization requires a large grassroots movement. You can help with this by engaging with your local municipality, community, church, university or other institution and calling for nuclear divestment. For more information on how to begin a divestment campaign, we encourage you to read the lessons and examples from the Don’t Bank on the Bomb campaign, as well as their “campaigner guide.”

Have questions or need help? Contact us at: Lucas@futureoflife.org

A very special thanks to Harvard BA candidate Abel Corver, Pax Nuclear Program Manager Susi Snyder, and MIT Professor Jonathan King for their vital support throughout this campaign.

 


Mayor Denise Simmons, Dr. Max Tegmark, Lucas Perry, Susi Snyder, Former Secretary of Defense William Perry, and Dr. Jonathan King after the public announcement of the Cambridge nuclear divestment plan.

Experiment in Annihilation

To celebrate the 88th birthday of its author today, we’re republishing the first-ever comprehensive non-classified paper on the hydrogen bomb and the problems with its early testing. It was translated into French by Jean-Paul Sartre and published in his journal “Les Temps Modernes”, and its opening lines were once read in the US Congress without attribution. The author wrote under the pseudonym Jules Laurents out of fear of McCarthyism, and I’m proud to be able to tell you that he is in fact Harold Shapiro, my father – happy birthday, Dad!


The author circa 1954.

 

EXPERIMENT IN ANNIHILATION (1)
Jules Laurents

Contemporary Issues, volume 5, October-November 1954

MARCH 1, 1954, the same day that shots were ringing on the floor of the House of Representatives, another “shot”, unheralded but of sweeping significance, was fired in the Marshall Islands. On that day an American AEC task group detonated a hydrogen bomb of monstrous size. In its widest implications that bomb has not yet ceased to reverberate. A long chain of incidents, ranging from the curious to the tragic, has made it clear that “peacetime” nuclear explosions present a substantial threat to our well-being. Storm signals from earlier atomic tests such as fogged photographic film and radioactive rain have given way to the storm — which has already resulted in the radioactive poisoning of several hundred people. The March 1 explosion also blasted the lid of secrecy from the AEC’s thermonuclear adventures, giving the public its first real look behind the “uranium curtain”; thus it is now known that the AEC touched off three prior hydrogen explosions, the third of which (November, 1952) gave more than five times as great an energy release as predicted by its creators.

I. Chronicle of Events

The March 1 bomb was expected to explode with a force of four to six megatons (a megaton denotes the energy released by exploding one million tons of TNT) but developed instead about fourteen, according to Joseph and Stewart Alsop, New York Herald Tribune, April 7, 1954. It left scientific measuring instruments unable to record its full effects. Sound waves from the blast were detected in London, and an American astronomer said the flash could have been seen from Mars. Rep. Holifield of the JCAE (Joint Committee on Atomic Energy) described it as “so far beyond what was expected you might say it was out of control”. Defense Secretary Wilson called it “unbelievable”, and President Eisenhower admitted it “surprised and astonished” the scientists. Rep. Van Zandt of the JCAE stated that the “explosion had left an area of total destruction about twelve miles in diameter, with light damage extending in a circle with a diameter of forty miles”. The AEC called it a “routine atomic test”. As with the November, 1952 H-bomb, the first inkling the public had that something extraordinary had occurred was through “leaks”. Intent on maintaining secrecy, the AEC ordered all task force personnel to refrain from divulging any information about the tests. No such order was given on Kwajalein, however, 176 miles from Bikini, it apparently being assumed that at this distance details of the explosion could not be perceived. Yet a marine corporal stationed there wrote to his mother:

“I was walking back to the barracks . . . just as it was getting daylight, when all of a sudden the sky lighted up a bright orange. . . . About ten or fifteen minutes later . . . we heard very loud rumbling that sounded like thunder. Then the whole barracks began shaking as if there had been an earthquake. This was followed by a very high wind.”

In a second letter he reported: “There were two destroyers here to-day bearing natives of one of the Marshall Islands that was within seventy-five miles of the blast. They were suffering from various burns and radioactivity”.

Directly thereafter the AEC issued the following statement:

“During the course of a routine atomic test in the Marshall Islands, twenty-eight U.S. personnel and 236 residents were transported from neighboring atolls to Kwajalein Island according to plans as a precautionary measure. The individuals were unexpectedly exposed to some radiation. There were no burns. All were reported well. After the completion of the atomic tests, they will be returned to their homes.”

The AEC never acknowledged the statement of the corporal, nor his assertion that some victims were suffering from burns. (We shall see that the AEC statement is false.) When the announcement was made some observers were puzzled over how, after the victims were “unexpectedly” exposed to radiation, they were evacuated “according to plans as a precautionary measure”. Time magazine introduced additional cause for apprehension by reporting that American casualties from the March 1 explosion were exposed to radiation “ten times greater than scientists deem safe”.

On March 13 (2) a grave new consequence of the “routine atomic test” was reported. The Japanese fishing trawler Fukuryu Maru docked in Yaizu, Japan, with its twenty-three crew members showing symptoms of acute radiation exposure. They told how on March 1 they were some eighty to ninety miles from Bikini, when at 4 a.m. they fancied they saw the sun rising prematurely “in a strange manner”. Six or seven minutes later they heard a roar, and two hours later they were showered with a white ash, which continued to fall for several hours. The ash was, of course, fall-out from the explosion, consisting mainly of irradiated coral dust. Only after they had become quite ill did they suspect that they had been rained with shi no hai (ashes of death) and head for port. They had on board 40 tons of freshly caught tuna and shark, which, according to a New York Times dispatch, exhibited radioactivity “sufficient to be fatal to any person who remained for eight hours within thirty yards of the fish”. Two of the crewmen were in worse condition than the rest, having eaten some of the fish. The crewmen were hospitalized, the sampan was ordered burned at sea and sunk, the fish buried; but not before several thousand pounds of the contaminated fish had been unloaded and shipped to market. A “hot fish” panic ensued in Japan, and police, in a frantic effort to track it all down, ordered the destruction of a thousand tons of other fish with which it had become mixed. Fish prices dropped to half overnight, and Tokyo’s numerous sushi houses (sushi — a popular fish dish) reported business at a standstill. It should not be necessary to give all details — it is sufficient to recall the Japanese experience with atomic bombs, coupled with the fact that fish is the mainstay of the Japanese diet (a million pounds a day of tuna alone are consumed in Japan), to appreciate the extent of the panic. The people’s fears were not entirely groundless.
Life, March 29, in an article entitled First Casualties of the H-Bomb, reported: “Six families from the town of Sagamihara reported stomach pains, numbness and diarrhea after eating raw tuna and gray mullet”. An INS dispatch from Tokyo, March 23, reported: “Physical examinations were ordered to-day for fifty-one Osaka residents with mild blood disorders which officials feared may have come from eating radioactive fish caught in the mid-Pacific after America’s recent hydrogen test blast”. As late as May 17 a UP dispatch from Formosa reported: “Fishery authorities in Formosa urgently requested Geiger counters from the U.S. to-day after a Chinese family in Keelung was hospitalized with what doctors thought might be radioactive poisoning [after eating a seafish].” (It is, of course, impossible to say whether the March 1 bomb or a later one would be responsible for such a case.) In view of the fact that a score of other Japanese fishing boats have since returned with radioactive cargoes, and considering the delay with which radiation-induced disorders often manifest themselves, further incidents of this type are not excluded.

Soon after the mishap, Dr. John Morton, head of the Atomic Bomb Casualty Commission (ABCC) at Hiroshima, reported concerning the twenty-three fishermen, “they will recover completely within a month”. Apparently his years of studying Hiroshima victims had not proved instructive, for by 23rd March five of the fishermen were reported in serious condition, and all of the men are still (July) hospitalized. Morton and his staff were received uncharitably by the victims and their Japanese doctors, for reasons expressed by the leading Tokyo newspaper Asahi: “The ABCC is an organ to conduct research but not to treat patients. Dr. John Morton and his staff should treat the patients this time not only to make a fine report to America but to give the patients assurances they are not guinea pigs”. It also urged the U.S. to reveal to Japanese physicians the materials used in the blast (this would facilitate identification of the isotopes in the ash, which knowledge would be valuable in treatment) but admitted: “Presumably the U.S. does not want to disclose military secrets”. Indeed not; in fact, Rep. Sterling Cole created much hard feeling when he suggested the Japanese trawler might have been spying on the tests! When the Japanese scientists had completed their analysis of the ash, they were dismayed to find that it contained not-negligible amounts of strontium 90, a long-lived isotope particularly dangerous to absorb into the body.

About 25th March it was reported that the U.S. Navy tanker Patapsco, operating with the H-bomb task group, had received “light but not dangerous contamination by radioactive fall-out”.

March 27th two more “atom-dusted” sampans came into port and were quarantined. One (the Myojin Maru) had been operating about 780 miles from the test site 1st March, and the other (the Koei Maru) 200 miles away. Japanese newspapers reported that both vessels registered Geiger counter readings above the danger point, although “only one crewman was more than slightly affected”. Japanese health officials were undecided whether to destroy the catch of the Myojin; they destroyed the entire 80,000 pound tuna catch of the Koei. And a UP dispatch of 3rd April reported that a fourth fishing boat had come back radioactive from the 1st March explosion and been quarantined.

There were numerous other ramifications, of varying degrees of gravity, from the 1st March explosion. Americans experienced snowfalls in Montana and Wyoming exhibiting radiation equal to 200 times the normal background. Prof. Henry Kraybill of the Yale University Physics Department revealed that Yale’s most sensitive Geiger counter was incapacitated on 7th March by an increased number of radioactive particles in the air. However, on the whole, few such particles were observed in the U.S. Newsweek, 29th March, wrote:

“The subject isn’t discussed openly around the AEC but scientists are worried about the whereabouts of the radioactive ‘mushroom cloud’ generated by the March 1st H-bomb explosion. . . . Within a few days after all previous tests, laboratories around the U.S. have reported detecting traces of radiation in the atmosphere. So far no traces have been spotted from the March 1 bomb, which shot its mushroom an unprecedented 20 miles into the air.”

The same publication, 5th April, published another provocative remark:

“U.S. atomic scientists, still puzzling over the unexpected fury of the March 1st H-bomb blast . . . are now wondering whether the bomb set off a small chain reaction that ignited hydrogen in the atmosphere and surrounding sea. Most [reject the possibility of a globe-girdling chain reaction] but the theory is being reconsidered.”

Many other Americans apparently were “puzzling” over the 1st March blast. As early as 19th March Rep. Cole, head of the JCAE, reported that a Congressional investigation would be pushed to determine (in the words of an AP dispatch) “whether avoidable errors were made during the monstrous hydrogen blast in the Pacific March 1st”, and that his committee had begun questioning AEC officials in closed sessions. Rep. Van Zandt, of the JCAE, according to a UP dispatch of 18th March,

“criticised officials . . . for failure to set up adequate safeguards against injury to American, native, and Japanese personnel in the area. He said the government should have set as out of bounds a ‘hazard area’ about twice as large as that actually prescribed…’It was poor planning’ Mr. Van Zandt said. ‘In my opinion somebody is guilty of a blunder in failing to apply the necessary precautions. It is my intention as a member of the Joint Atomic Committee to find out who was responsible’.”

Dr. David L. Hill, Chairman of the Federation of American Scientists, a Los Alamos physicist, commented that the failure to predict the exact size of the 1st March explosion was to have been expected in a rapidly moving development program. Against this turbulent background, the AEC detonated an even larger H-bomb 26th March.

The 26th March bomb was intended to have been dropped by parachute from a B-36 superbomber, but for reasons of caution this plan was abandoned. This was probably for the best since the bomb, expected to develop three megatons, exploded instead with about seventeen (according to the Alsops). And Newsweek later reported (12th April) that “Air Force officials refuse to talk about it, but a giant B-36 superbomber observing the March 26th H-bomb explosion was flipped completely over by the blast”. The AEC had taken many new precautions, such as extending the “restricted zone” to an area 450 miles wide, covering several hundred thousand square miles. It had searched the area carefully, to make sure no ships were there. Nevertheless two Japanese fishing boats came into port 8th April with cargoes of radioactive tuna. On one of the boats, the Kaifuku Maru, health officials found it necessary to destroy “about one-third of the thirty-five tons of tuna . . . when it recorded more than 100 [Geiger counter] impulses a minute. About 45 [a] minute is considered the maximum for human safety”. (UP dispatch, 8th April.) Curiously, the other ship, the Shoho Maru, had only six radioactive fish out of a thirty-ton catch, giving counts of 60 to 1,300 impulses a minute. An INS dispatch of 10th April clarifies this situation, telling that these six had “eaten atom-radiated small fish”. The Shoho had been 400 miles south of Bikini. As time went by other radioactive ships were remarked, including one that had been “dusted” at a distance of 2,200 miles; a Geiger counter held to the head of a crewman from this vessel, the Misaki Maru, clicked 200 times a minute.

U.S. News and World Report of 9th April, 1954, in an article entitled Has the H-Bomb Gone Wild?, reflected the prevailing sentiment when it commented: “The guarded secrets of [the] H-Bomb now are coming out. The facts, when pieced together, indicate that the tested model is a far cry from the H-bomb ordered by President Truman in 1950”. By the end of March a vast clamor had risen around the H-bomb, to which even the AEC could not remain entirely oblivious. Indeed, Lewis Strauss, Chairman, held a special press conference on 31st March at which he assured the public that there was nothing to worry about, that the victims with radiation burns were “well and happy”, etc., and emphasized that we were rapidly approaching the millennium of atomic energy for peacetime use. With this assurance, the AEC detonated three more bombs. News of them reached the public unofficially, e.g. through reports of radioactive rain by Japanese scientists and accounts by American airline passengers of a “midnight sunrise” in the Marshall Islands. An AP dispatch of 14th May said: “Evidence indicates [an] explosion of April 6, another about May 1 and a final shot within recent days”. It seems likely that a great 40-megaton blast originally scheduled for around 22nd April did not take place.

Incidents of similar character to those arising from the previous blasts continued to be reported. A cross-section of news items follows:

Tokyo, April 19 (AP) – “Two Japanese scientists to-day said new radioactive rain showers fell on Japan Saturday and yesterday . . . the showers started 40,000 feet up and fell from the stratosphere . . . . Meanwhile health officials at the giant Tsukiji Japanese Fish Market condemned 3,000 pounds of tuna from a mid-Pacific catch brought here to-day. The fish showed signs of harmful radioactivity.”

Tokyo, April 30 (AP) –“Another Japanese tuna boat has been found radioactive and some of its catch has been condemned . . . . The ship is the 100-ton Koyu Maru. Kyodo [News Agency] said it was operating about 500 miles southeast of Bikini March 26 when the U.S. touched off its second hydrogen blast.”

Lander, Wyo., May 14 (AP) — “Uranium hunters in this central Wyoming area are blaming atomic dust for a sudden jump in activity on the dials of their Geiger and scintillation counters. Even tests on Lander’s main street showed radioactivity five to six times the normal reading.”

Kamaishi, Japan, May 23 (UP) – “Health department officials said to-day the fifth crewman of a Japanese ship that arrived at this northern port yesterday showed signs of radiation burns. The [Jintsuguwa Maru] left Macatea Island south of Australia, April 17 . . . and supposedly skirted the zone around the Bikini-Eniwetok atomic proving grounds . . . .”

Sydney, Australia, May 24 (Reuters) — “Radioactive rain fell on Sydney Sunday, it was reported to-day . . . D. E. Davies [manager of a concern manufacturing Geiger Counters] said, ‘We were subjected to some sort of radioactive rain as a result of a hydrogen bomb test in the Pacific’.”

San Diego, Calif., May 28 (UP) – “The escort carrier USS Bairoko arrived here to-day from the Pacific hydrogen bomb tests, but the Navy kept its ‘secret’ label on everything concerning its recent operations. Reports were heard, both here and in Washington, that the Bairoko received a mild dusting with radioactive particles . . . . [Reporters and photographers were not] allowed to visit the ship.”

New York Herald Tribune, May 28, under the heading Radioactive Rain Worries Tokyo printed a dispatch, “Rain so radioactive it might be dangerous to anyone drinking it fell on Tokyo to-day . . . . It was the latest and most serious of a number of such showers. Samples . . . gave a Geiger counter reading of 10,000 [clicks per minute], potentially dangerous if drunk.”

Calcutta, May 30 (Reuters) – “The Indian Nuclear Physics Institute here reported to-day that radioactive rain fell over Calcutta on April 29 . . . . A Reuters dispatch from Calcutta June 1, said: ‘Radioactivity in atmospheric dust and rain over the Bay of Bengal soon after the Pacific hydrogen bomb explosion April 6 was seven to eight times normal, it was disclosed here to-day [by Dr. S. K. Mitra, head of the department of physics at Calcutta University]’.”

Tokyo, June 4 (AP) – “Five crew members of a Japanese freighter that passed within 1,200 miles of the United States hydrogen bomb tests in the Pacific were reported suffering from radiation sickness to-day . . . .”

Tokyo, June 10 (AP) – “A Japanese radioactivity test ship detected strong signs of contamination last night 500 miles south of the U.S. H-bomb test area at Bikini, Kyodo news agency reported to-day. Radioactivity was found in the fish, rain and seawater tested by the Shunkotsu Maru. Japanese officials yesterday ordered destroyed as radioactive 8 tons of an 82,000 pound catch from the waters near Truk Island in the Caroline Islands.”

Tokyo, June 10 (INS) – “Seven Japanese were reported under treatment for radioactivity to-day because they drank rain water from clouds which passed through the U.S. Bikini H-bomb testing area. Japanese coast guard officials said that three lighthouse attendants and four members of their families are recovering, but will be hospitalized for some time.”

Tokyo, June 12 (AP) – “A Japanese scientific research ship reported to-day that measurable radioactivity has been found in the sea water within 100 miles of Bikini Atoll where the U.S. conducted test H-bomb explosions in March and April [and May – JL]…. The amount of radioactivity was small but noticeable as the research vessel sailed to within 67 miles of Bikini . . . . Meanwhile, radioactive jittery Japan learned to-day that some scientists have found wild birds were mildly radioactive in Japan this year. A scientist said he found traces of radioactive ash, presumably from U.S. H-bomb tests, in the internal organs of the birds.”

Because of the fact that whole islands, in the form of radioactive dust, are now drifting in the upper atmosphere, and much additional radioactive matter is now being assiduously concentrated by sea organisms; in view of the presence of long-lived isotopes in the ash (we have already mentioned strontium 90; consider also that the atoll of Rongelap in the Marshalls must remain evacuated for at least a year, according to the AEC); and in view of the long delay (often years) which precedes the chronic effects of radiation injury, it may be stated with certainty that we have not heard the last of this series of hydrogen tests. Even the incidents we have quoted probably give an inadequate picture of what has already been reported – because the heavy censorship of the AEC has been augmented by a conspiracy of silence on the part of the press. For example, the New York Times, which has seen fit to give many days of front page coverage to an Egyptian funeral ship, has consistently buried news dispatches on radiation injury in its hinter regions, insofar as it has deemed them fit to print at all.

II. The AEC Explains — A Study in Appearances and Realities

The American government has issued only one public statement dealing with the effects of hydrogen explosions! This took the form of a press conference held by Strauss, 31st March, where he read a prepared statement and answered several questions from reporters. Evidently the AEC plans to let its case rest with this press conference and does not intend to amend or augment the statements of Strauss. And it is not through lack of sensitivity to public concern that further enlightenment has not been forthcoming; for instance, one citizen who wrote to President Eisenhower expressing alarm over the hydrogen tests received a two-page personal letter from Morse Salisbury, director of the AEC’s Information Service, answering objections point by point on the basis of material in the Strauss press conference. A mimeographed copy of the latter was enclosed. And an interview with some Marshallese natives by an AP staff writer published 9th June bore the explanation: “The following story was delayed by censorship in the Defense Department, the AEC and the State Department . . . minor deletions were made in the original copy”.

We have already seen several instances where the AEC was less than candid with the public. This trend reaches a summit with the Strauss report, which is no more than a fairy tale designed to allay public apprehension (Strauss calls it “misapprehension”) over the disaster at Bikini – and pave the way for further “experiments”.

The tone of the Strauss statement is itself significant; there is no humility, no regret, no apology – not even a crocodile tear is shed in the interests of propaganda for the Marshallese, Japanese or American victims. To shed such a tear would be to acknowledge that something had gone wrong. But more important, the hydrogen explosions are primarily acts of intimidation – and one does not follow up an act of intimidation with an apology. The Admiral begins with some “historical background”, telling how “there is good reason to believe that they [the Russians] had begun work on this weapon substantially before we did” but happily “enormous potential has been added to our military posture by what we have learned” from the recent tests, etc.

“Now as to this specific test series. The first shot has been variously described as ‘devastating’, ‘out of control’, and with other exaggerated and mistaken characterizations. I would not wish to minimize it. It was a very large blast, but at no time was the testing out of control. The misapprehension seems to have arisen due to two facts. First, that the yield was about double that of the calculated estimate – a margin of error not incompatible with a totally new weapon (the range of guesses on the first A-bomb covered a relatively far wider spectrum). Second, because of the results of the fall-out.”

We shall not engage in semantic quibbling as to whether it is “mistaken” to call “devastating” a bomb which “left an area of total destruction about twelve miles in diameter” (Rep. Van Zandt) and gouged a deep crater in the ocean floor; similarly, we need not quibble over whether Rep. Holifield “exaggerated” when he said the bomb was “out of control”; but the Admiral might have given us facts. He might have admitted that the blast jarred Kwajalein, 170 miles away, and created a very high wind there, or that on Rongelap atoll, over 120 miles distant, there was “wind so strong some people fell down” (according to an AP report), or that two British planes watching one blast were lost and a giant American B-36 was flipped completely over by the shock wave. Could he not at least have told us what Newsweek told us on 29th March, that the bomb “shot its mushroom an unprecedented twenty miles into the air” and that this cloud was still at large in the stratosphere? Or would such facts give the “mistaken” idea that the testing was “out of control”? One point must be added about the word “control”. In response to a reporter who asked, “Is it possible that a hydrogen explosion or series of them could get out of control?”, Strauss said, “I am informed by the scientists that that is impossible”. Now the expression “out of control” is used by the physicists in this connection to mean setting off a chain reaction that would envelop the entire earth. Thus to know that an explosion is not “out of control” in this technical sense is small comfort, and, in particular, does not imply that it is in any real sense in control. As for blowing up the entire earth, the majority of scientists believe this is impossible; it may be said with certainty that they will never have to admit they were wrong.

Let us examine the “two facts” responsible for public “misapprehension”. First, that “the yield was about double that of the calculated estimate”. By “yield” Strauss means apparently the number of megatons. But then, what of press reports that the 1st March explosion developed fourteen megatons, whereas the “calculated estimate” was four to six? Or that the 26th March bomb developed seventeen megatons as against an anticipated three? These ratios are more nearly three to one and six to one than “double”. And one Pentagon official who witnessed the 1st March test, according to Newsweek 5th April, “insists that all published estimates of the H-bomb’s force have been too conservative” and claimed that the tested bomb gave about a twenty-eight megaton explosion. Actually, the whole concept of measuring the “yield” in megatons is misleading. As Edward Teller wrote in the Bulletin of the Atomic Scientists, February, 1947: “It is hardly possible to compare the effect of an atomic bomb with the effect of a certain tonnage of TNT [for] atomic bombs also destroy by flash burns and by causing radiation disease”. In reality it makes sense to speak of a “yield” only in ecological terms, in terms of damage to man and his environment. Measured in these units Strauss’s statement about a “yield double the calculated estimate” takes on the meaning, “We expected to cause only 150 cases of radiation sickness but caused instead 300; we calculated on making one atoll uninhabitable for six months but the yield was two atolls for a year”, and so on.

Let us note that the hydrogen bomb is not a “totally new weapon” in the same sense as the first A-bomb; indeed, three thermonuclear explosions had been conducted by the AEC prior to 1st March, 1954, and in every case the energy release exceeded predictions, the last (November, 1952) by a ratio of more than five to one. Were not these ample warning of the uncertainty involved? A more accurate phrase to describe the circumstances would be “a margin of error not incompatible with a totally new concept of morality”.

Now Strauss’s second “fact” causing “misapprehension” — “the result of the fall-out”. He explains correctly that when a nuclear explosion occurs near the ground, material from beneath the center of the explosion is sucked up into the air, the lighter particles and fission products being borne away by the wind, eventually to settle out. (Detailed information of this sort is available in the AEC’s 1950 manual The Effects of Atomic Weapons). Forecasting correctly the direction of the winds at altitudes within the range of interest is all-important, and hence:

“Before the shot takes place, there is a careful survey of the winds at all elevations up to many thousands of feet . . . . Contrary to general belief, winds do not blow in only one direction at a given time and place. At various heights above the earth, winds are found to be blowing frequently in opposite directions and at greatly varying speeds . . . . The meteorologists attempt to forecast the wind directions for the optimum conditions and the Task Force Commander thereupon decides on the basis of the weather report when the test shall be made. The weather forecast is necessarily long-range because a warning area must be searched for shipping and the search . . . requires a day or more to complete. [My emphasis – JL].”

We thus see that successful prediction of the fall-out pattern depends directly on successful long-range prediction of the winds, and this as high as the top of the mushroom cloud. Strauss indicates that the winds are quite tricky, but his account must be augmented for a clearer understanding. In the Compendium of Meteorology (Boston 1951) Namias and Clapp of the U.S. Weather Bureau write:

“The state of our knowledge . . . of the general circulation [of air masses] is still quite inadequate. Our deficiencies lie particularly in the absence of a long period of record of upper-air data over large areas of the Northern Hemisphere and most of the Southern Hemisphere. Not much can be done for many years to remedy the ‘long period’ part of this deficiency.”

In the same volume A. Grimes points out that “the properties of tropical air are quite well known up to four or five km., but observations are too few above five km. for reliable conclusions to be drawn”. One of the principal causes of consternation in high-altitude wind prediction is the existence of “jet streams”. First discovered during World War II, these narrow filaments of air travel as fast as 300 miles an hour at heights of between six and eight miles. In the Scientific American, October, 1952, Namias gives an account of these “strange winds” and their “violent and unpredictable behavior”. He points out that:

“Neither [of the two existing theories of jet streams] is complete enough for detailed weather prediction . . . many large areas of the Pacific are still meteorological no-man’s-lands . . . what causes their often striking behavior from one month to the corresponding month of the following year are questions that remain unanswered.”

Thus fortified we pursue Strauss’s account:

“For the day of shot number one the meteorologists had predicted a wind condition which should have carried the fall-out to the north of the group of small atolls lying to the east of Bikini . . . . The shot was fired. The wind failed to follow the predictions [!] but shifted south of that line and the little islands [atolls –JL] of Rongelap, Rongerik, and Uterik were in the path of the fall-out . . . . The twenty-three crew members on the ship, twenty-eight American personnel manning weather stations on the little islands, and the 236 natives on these islands were therefore within the area of the fall-out.”

It is clear that if the atmosphere a mere seven or eight miles up is “a meteorological no-man’s-land”, and the bomb “shot its mushroom an unprecedented twenty miles into the air”, it is inherently impossible to predict with any certainty the fall-out pattern. Incidentally, we see now the meaninglessness of many comments which have been made concerning the fall-out, such as Rep. Van Zandt’s attribution of the disaster to “unpredictable wind shifts at high altitudes”. Furthermore, even the more tractable lower altitude winds become problematical – owing to the force of the explosion itself. We recall the powerful winds that swept Kwajalein and Rongelap, and a New York Times report from an American observer that the bomb “set off a local wind storm that might have upset weather forecasts that had been correct earlier”. Thus Strauss’s “two facts”, the size of the explosion and the fall-out, are seen to be one: A chaotic fall-out pattern is in the very nature of such a large explosion (3). The fall-out problem is, of course, also greatly magnified in the case of a large explosion because there is so much more debris to fall out. Thus “freak accidents” followed like clockwork after each of the great hydrogen explosions, and will continue to follow any low-altitude blasts of similar magnitude which may take place in the future.

Strauss touches obliquely on the central question of the right by which an American task force can declare “off limits” to the rest of the world a huge area of the Pacific Ocean.

“The ‘warning area’ is an area surrounding the proving grounds within which it is determined that a hazard to shipping or aviation exists. We have established many such areas as have other governments. . . . Including our continental warning areas, we have established a total of 447 such warning and/or danger areas. This particular warning area was first established in 1947. [A-bomb tests were held at Bikini in the summer of 1946 — JL.] The United Nations were advised and appropriate notices were carried then and subsequently in marine and aviation navigational manuals.”

What “other governments”, Admiral, have ever closed off to all sea and air craft half-a-million square miles of international waters, and this for a period of several months? Included in the 26th March “warning area” were some of the best Japanese fishing grounds and much of the Marshall Islands. Not even the high-handed gang in the Kremlin has set any precedent for such an action. And the argument that “We have done similar things in the past” is slender justification. Even if we grant the (completely untenable) assumption that America has acquired some “unwritten” right, on the basis of historical precedent, to set up proving grounds in international waters, we are confronted here with an entirely new situation — as with most other aspects of the hydrogen bomb question a quantitative change has become qualitative. As Dr. Lee Du Bridge, President of the California Institute of Technology, wrote in 1946 in protest of the first Bikini tests: “One can do target practice with a gun (even a 16-inch gun) in his ‘backyard’. But brandishing atomic weapons is in a different class.” Especially so when the effects, in varying degree, are felt from Calcutta to Wyoming; and large areas are not only denied to others but more or less permanently mutilated.

Concerning the mutilation of the areas, Strauss says:

“Each of these two atolls [Bikini and Eniwetok] is a large necklace of coral reef surrounding a lagoon two to three hundred square miles in area, and at various points on the reef like beads on a string appear a multitude of little islands, some a few score acres in extent — others no more than sandspits. It is these small, uninhabited, treeless sand bars which are used for the experiments. . . . The impression that an entire atoll or even large islands have been destroyed in these tests is erroneous. It would be more accurate to say large sandspit or reef.”

The “baby bomb” of 1952 had already “annihilated an island of the Marshalls group” half a mile long. But leaving aside the question of outright annihilation of islands (a preposterous standard of damage) and considering instead their spoliation, much more can be said. As early as the 1946 Bikini tests Dr. David Bradley, a radiological monitor with the first task force, remarked that:

“The main island of Bikini . . . has been pretty well ravaged in the preparations for these tests [by the erection of installations] . . . and even discounting the possibility of lingering radioactivity it is doubtful if this island could support them [the dispossessed Bikini natives] again for a generation.” (No Place to Hide, Boston, 1948.)

He points out similar depredations of Kwajalein and many smaller islands, adding: “In the lavish expense account for Operation Crossroads, the spoilage of these jeweled islets will not even be mentioned, but no one who visited them could ever forget it”. But all this is piddling compared with the damage wrought by the H-bomb. Let us consider merely the effects of thermal radiation. We recall first some information about A-bombs, quoting from the AEC’s The Effects of Atomic Weapons (EAW):

“It may be concluded that exposure to thermal radiation from a nominal atomic bomb, on a fairly clear day, would lead to more or less serious skin burns within a radius of about 10,000 feet from ground zero (p. 200) . . . . Thermal radiation burns were recorded at a distance of as far as 13,000 feet at Nagasaki (p.202) . . . . Fabrics, telephone poles, trees, and wooden posts, up to a radius of 9,500 feet from ground zero at Hiroshima and up to 11,000 feet in Nagasaki, if not destroyed by the general conflagration, were charred and blackened (p. 207) . . . . The top of a wood pole, about 6,700 feet from ground zero, was reported as being ignited by the thermal radiation (p. 214).”

It is thus seen that the heat wave alone from a twenty kiloton A-bomb would cause severe injury to animal life out to more than two miles, and severe damage to trees and foliage (including starting fires) out to more than a mile from the blast. Now from equation (6.39.4) of EAW (p. 195) we can calculate what will be the corresponding distances for H-bombs. Assuming a very clear day, we get about eighteen miles and fourteen miles respectively (4) for a twenty megaton bomb. Since the atolls are roughly circular, we see from their areas, as given by Strauss, that the furthest distance between any two points in a typical atoll is of this order. Thus it appears that a twenty megaton H-bomb detonated anywhere in, say, Bikini atoll would sear all living creatures on the islands (5), killing most, and reducing to charred ruins the majority of trees and foliage. The Marshall Islanders had best forget about ever returning to Bikini – or Eniwetok, which was dealt the coup de grace in 1952. The fact that “uninhabited, treeless sandbars” are used as the site of detonation obviously would not affect these considerations. Let us give one example of the kind of fauna that once inhabited Bikini (aside, of course, from the human inhabitants) – the sea birds, valued not only for their beauty but for the phosphate deposits with which they enrich the soil of the islands. Here is Dr. Bradley’s description of Cherry Island (the Americanized name for one of the twenty-six islands in the Bikini group):
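The arithmetic behind these distances can be checked in rough outline. Since equation (6.39.4) of EAW is not reproduced in the text, the sketch below uses only inverse-square spreading of the thermal pulse together with an assumed exponential atmospheric attenuation; the thermal fraction (one-third of the yield), the burn threshold (3 calories per square centimeter), and the attenuation length (10 km) are illustrative assumptions of ours, not figures taken from EAW.

```python
import math

CAL_PER_TON_TNT = 1e9      # energy bookkeeping: one ton of TNT = 10^9 calories
THERMAL_FRACTION = 1 / 3   # assumed fraction of the yield emitted as heat
BURN_THRESHOLD = 3.0       # cal/cm^2, a rough threshold for serious burns
ATTENUATION_KM = 10.0      # assumed atmospheric attenuation length, very clear day

def thermal_fluence(yield_tons, d_km):
    """Thermal energy per cm^2 at distance d: inverse-square spreading
    reduced by exponential atmospheric absorption."""
    e_cal = THERMAL_FRACTION * yield_tons * CAL_PER_TON_TNT
    d_cm = d_km * 1e5
    return e_cal / (4 * math.pi * d_cm ** 2) * math.exp(-d_km / ATTENUATION_KM)

def burn_radius_km(yield_tons):
    """Bisect for the distance at which the fluence falls to the burn threshold
    (fluence decreases monotonically with distance)."""
    lo, hi = 0.1, 500.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if thermal_fluence(yield_tons, mid) > BURN_THRESHOLD:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

KM_PER_MILE = 1.609
print(burn_radius_km(20e3) / KM_PER_MILE)   # 20 kiloton A-bomb: about 2 miles
print(burn_radius_km(20e6) / KM_PER_MILE)   # 20 megaton H-bomb: near 18 miles
```

With these assumptions the burn radius comes out a little over two miles for a twenty kiloton bomb and close to eighteen miles for a twenty megaton one, in line with the figures quoted above; note that because of atmospheric absorption the radius grows far more slowly than the thousand-fold increase in yield.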

“Cherry proved to be a rookery; birds rose in screaming protest all about us and the trees were burdened with their nests. Terns they were — black noddy terns and dainty little fairy terns, pure white and almost translucent against the sky.”

These are the forgotten casualties of the H-bomb, together with coconut palms and coral reefs and parrot fish and giant lobsters and so many other exotic denizens of the South Seas (6). It is a supreme irony that these should be sacrificed in the name of “science”.

We move next to Strauss’s account of the condition of the victims.

“The Task Force Commander promptly evacuated all the people from these islands [in the path of the fall-out]. They were taken to Kwajalein where we maintained a naval establishment, and there placed under continuous and competent medical supervision. I visited them there last week. Since that time, it has been determined that our weather personnel could be returned to duty but are still being kept on Kwajalein for the benefit of further observation. [They were since transferred to Tripler Army Hospital in Hawaii (7) — JL.] None of the twenty-eight weather personnel have burns. [Note that the original March announcement said flatly “there were no burns” — JL.] The 236 natives also appeared to me to be well and happy. . . . To-day, a full month after the event, the medical staff of Kwajalein have advised us that they anticipate no illness, barring of course disease which might be hereafter contracted [!].”

The Marshallese petition to the UN said the natives were suffering from “lowering of blood count, burns, nausea, and the falling off of hair from the head”. A later report by AP staff writer Waugh, published in the New York World-Telegram and Sun, 9th June, after official censorship, said: “Of the eighty-two Rongelapers, about forty-five suffered radiation burns . . . one man still has a bad burn on the back of his right ear, three months after the explosion”. How in this condition the natives appeared well (and happy!) to Strauss, it is difficult to imagine. So jubilant are they, indeed, that they refer to themselves as “the poisoned people”. Undergoing frequent deportation is in itself not conducive to the highest standards of well-being.

The AEC’s March statement that “after the completion of the atomic tests, they will be returned to their homes” is also flatly contradicted by Waugh’s article — the Rongelap natives will remain on Majuro atoll for at least a year!

There are deeper undertones of deception in Strauss’s statement which derive from the fact, well known to Strauss and his advisors, that many effects of ionizing radiation are delayed – even for years. What is commonly called “radiation sickness”, or “acute radiation syndrome”, is due to exposure of the order of several hundred roentgens. (See Supplement.) Its typical symptoms (loss of hair, general malaise, fever, pallor, diarrhea, emaciation, sore throat, blood spots under the skin) usually occur by the third week, and if the patient survives three or four months he will generally recover – for the time being. The AEC tries to create the impression that “radiation sickness” is the only hazard of radiation, and that recovery from it means complete recovery. We shall see how small an amount of radiation can ultimately produce disabling and lethal effects, indeed an amount far too small to produce the syndrome or any detectable early symptoms at all. In general, the chronic biological effects of radiation are even now very poorly understood; the AEC at its installations permits to an individual a maximum exposure (8) of only 0.3r (roentgens) per week (this may be compared with the fact that the Japanese and Marshallese victims must have been exposed to at least 100r in order to develop the clinical picture of “radiation sickness”).

Furthermore, radiation effects are particularly insidious and inescapable when the active material lodges in the body; and isotopes such as radioactive strontium or (unfissioned) plutonium which lodge in the bones and emit negligible gamma radiation are extremely difficult to detect, except by the effects which they eventually produce. Since the Rongelap natives drank water from their well into which the radioactive ash had fallen, and a number of Japanese ate contaminated fish, it is clear that these considerations are not excluded in their case. Even one one-millionth of a gram of plutonium in the body will often kill the host – with bone cancer or aplastic anemia – the latent period being at least several years. (9) Among the many recorded cases of chronic radium poisoning (and plutonium is as poisonous on a gram-for-gram basis as radium) it is not uncommon to find death caused by one-tenth this mass of radium in the body, and latent periods of ten and twenty years. Strontium, barium and zirconium fission products are of comparable toxicity, and in the case of a close-to-ground hydrogen burst there are usually a dozen radioactive isotopes unleashed in fair quantities, which can produce chronic death if several thousandths of a gram enter the body (e.g. phosphorus 32, sulfur 35).

One finds an ever-increasing number of delayed effects among Japanese A-bomb victims. For example, an AP dispatch of 21st June, 1954, cites the first recorded instance of a cancer developing on the site of a scar from a radiation burn – nine years after the injury. The 1953 publication Atomic Bomb Injuries by Dr. Nobuo Kusano reports thousands of cases of leukemia among the victims, many times the pre-war incidence. A number of cases of malformed, miscarried, feeble-minded or stillborn offspring of mothers irradiated at Nagasaki during pregnancy were recently reported and analyzed by three Los Angeles doctors, according to an AP dispatch of 30th April. Over 200 cases of eye cataract (opacity of the lens or lens capsule) have been observed among the victims. And Dr. John Bugher reported in Nucleonics, September, 1952, other delayed effects ranging from “detectable abnormalities” because of spontaneous mutation to impaired growth of young boys, malformed teeth, and increased incidence of dental caries. Dr. H. J. Curtis in Biological and Medical Physics II (New York, 1951) remarks: “Many months after the acute symptoms had passed some [Japanese] patients reported extreme weakness, and this symptom is still persisting in many of these people. If we can reason from the experiment on mice . . . we would conclude that these persons will remain weak and lethargic the rest of their lives”. And the biologist H. J. Muller has predicted that genetic deaths in the A-bombed areas will, in the course of time, claim as many victims as the bombings themselves.

One does not have to go further in demonstrating the insidiousness of radiation injury than the experiences of X-ray and cyclotron workers. H. C. March (Am. J. Med. Sci. 220, 1950) showed that the incidence of leukemia in radiologists over a twenty-year period was nine times as great as for non-radiological physicians. Over fifteen radiation-induced cataracts have been discovered in American cyclotron workers. And compare the “well and happy” assertion of Strauss with the following remarks of Dr. G. Failla of the College of Physicians and Surgeons of Columbia University:

“A striking characteristic of the biological effects of ionizing radiation is the long delay that occurs between the exposure to radiation and the manifestation of the effects . . . sometimes complications occur much later in a tissue that has recovered almost completely . . . it is very important to bear in mind that death may be the final outcome of even an apparently mild local skin injury. [In the case of the radiologists] the important point is that the daily dose was too low to produce readily noticeable skin changes within, let us say, the first two years. . . . Nevertheless obvious skin changes did occur later . . . fifteen or twenty years later one of these growths, or one of a more recent origin and less annoying may develop into a cancer. If this is of the squamous cell type it will eventually spread – metastasize – to some vital organ and the patient will die. . . . The incidence of leukemia in radiologists has been found to be significantly higher than in other physicians . . . there are numerous cases in which the individual appeared to be normally healthy and active until the leukemic process started in late life.” (Taken from Industrial and Safety Problems of Nuclear Technology, Harper and Bros., 1950; emphasis added – J.L.)

In view of all these facts no further comment is needed on Strauss’s double-talk about “barring of course disease which may be hereafter contracted”. And in all this we have not yet gone into the genetic damage — a particularly grave consideration in the case of the Marshallese because virtually the entire population of certain atolls has been exposed.

And what of the twenty-three fishermen? Strauss tells how they happened to be exposed:

“Despite such notices there are many incidents where accidents or near accidents have resulted from inadvertent trespass in such warning area. The very size of them [!] makes it impossible to fence or police them. . . . A Japanese fishing trawler, the Fortunate Dragon, appears to have been missed by the search but . . . it must have been well within the danger area.”

Note that here he says “danger area” rather than “warning area” – for it has been well established that the trawler was outside of the “warning area”. In this sense the statement is correct – by definition. Obviously anything which is endangered must be reckoned “within the danger area” – and since the Misaki Maru 2,200 miles away received dangerous fall-out, some idea of the size of the “danger area” can be obtained.

Of the condition of the fishermen, Strauss says:

“The situation with respect to the twenty-three Japanese fishermen is less certain [!] due to the fact that our people have not yet been permitted to make a proper clinical examination. [However] the reports which have recently come through to us indicate that the blood count of these men is comparable to that of our weather station personnel. Skin lesions observed are thought to be due to the chemical activity of the converted material in the coral rather than to radioactivity, since these lesions are said to be already healing. The men are under continual observation by Japanese physicians, and we are represented in Japan by Dr. Morton of the ABCC [who said ‘they will recover completely in a month’ – J.L.] and Mr. Eisenbud of the AEC [who aroused great resentment in Japan when he ordered routine Geiger counter tests of fish bound for Japanese tables, but very complete and careful examination of American-bound tuna. – J.L.].”

The fishermen received very severe dosages of radiation, both beta and gamma, because the ash fell on them only several hours after the explosion, at which time many short-lived and therefore intense radioactive emitters would still be present in quantity. Even the analysis of the ash several weeks later by Japanese scientists, which revealed deadly isotopes, would not tell the whole story. For instance, we may mention 14.8-hour sodium 24. As EAW points out, the coral at Bikini is “saturated with sodium” from the sea water. Ordinary sodium is known to capture neutrons readily and become sodium 24 (EAW, p. 255) which decays to magnesium by emitting strong beta and gamma radiation. Because several hundred pounds of neutrons are liberated in a hydrogen explosion, large amounts of sodium 24 are formed, and from this source alone the coral ash falling on the fishermen must have been quite “hot”. Many similar elements could be mentioned encompassing both fission products and neutron-induced radioactivity: the fission products alone from a small A-bomb have an activity of six billion curies an hour after the explosion, and 133 million curies (10) a day after (EAW, p. 251).
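The pair of activity figures quoted from EAW is consistent with the decay rule given in that manual, under which mixed fission-product activity falls off roughly as the 1.2 power of the time since burst. A minimal check (the function name is ours):

```python
# EAW's rule of thumb: mixed fission-product activity decays roughly as
# t^-1.2, with t measured in hours after the burst.
def fission_product_activity(curies_at_one_hour, t_hours):
    return curies_at_one_hour * t_hours ** -1.2

one_hour = 6e9                                        # six billion curies at 1 hour
one_day = fission_product_activity(one_hour, 24)      # activity a day later
print(one_day / 1e6)  # about 132 million curies, close to EAW's 133 million
```

The same power law is what makes the early hours after fall-out so dangerous: the activity a day after the burst is already some forty-five times smaller than at one hour.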

In the absence of reliable information, which would require at least an interview with the fishermen and complete medical records, one cannot state with certainty what is the present condition of these men (11), but certain illusions can be dispelled. Some circulation has been given to a rumor that the men merely got a “strong sunburn”. Actually they were at least eighty miles from the explosion and hence not within range to be affected by ultra-violet or other thermal radiation. The damage to the skin has all the earmarks of severe beta-ray burns. To see this, let us recall published accounts of the men’s symptoms. An INS dispatch of 27th March quotes an official Japanese report as saying: “Seven or eight days after the accident the crew began to feel painful irritations from what looked like burns on the neck, faces [and] ears . . . .” Again, an AP report of 25th March quotes Yamamoto, a victim: “After four days nearly everyone turned black and felt itchy. Our hands and faces swelled up, blistering like a burn. . . . The exposed parts worsened and itchiness was unbearable”. These may be compared with a classical case of beta burns, as described by Hempelmann and Hoffman in Annual Review of Nuclear Science III, Stanford, 1953, involving four persons who accidentally picked up “hot” fragments at the 1948 Eniwetok tests:

“An important practical fact emerging from these accidents is the itching and burning of the skin noted during the exposure. One of the persons changed his rubber gloves several times because he believed they contained some irritating chemical. The symptoms . . . were referable almost entirely to the hands. They consisted of swelling of the fingers, beginning several hours after exposure, and blistering starting after one week and reaching a peak four weeks after the exposure”.

The Effects of Atomic Weapons (EAW), p. 357 says:

“The reactions following contact with beta-emitters . . . may vary from temporary redness to complete destruction of the skin, depending on the doses absorbed. Even mild doses may result in delayed degenerative changes of the skin. When the hands have been exposed to large amounts of beta radiation, they become swollen within a few days and this is followed by reddening [in very severe cases, blackening – J.L.] of the skin. . . . Subsequently, large blisters form, become confluent, and finally turn into a slough. . . .”

The similarity is unmistakable; the four at Eniwetok received only hand burns, and after an extended series of skin grafts they “recovered” — except that “the most seriously injured finger is stiff and atrophied” and “some [other fingers] are slightly atrophied or slightly stiff”. Some of the more unpleasant possibilities which could beset the fishermen are implicit in what we have already said regarding the Marshallese victims. Yet because the seriousness of their condition has been glossed over in some quarters without factual justification being given — and also because there is a lesson here for all of us — let us go more deeply into the matter.

Here are some further excerpts from the paper of Dr. H. J. Curtis of the Columbia University College of Physicians and Surgeons (op. cit.):

“. . . if a drop of solution containing a very small amount of a radioactive isotope were to splatter on a hand . . . one spot on the skin would receive a very appreciable dose of beta rays. This might lead to a small radiation burn which in turn might eventually lead to a skin cancer at that spot. . . .”

“Hematologic changes proved to be a very poor index of the degree of radiation damage. Even in the animals receiving very high single doses of a penetrating radiation from which they recovered, the blood picture would very soon return to normal and remain so until the death of the animal”. [Compare Strauss’s remark concerning blood count. — J.L.]

“Animals surviving the acute phases of the beta-ray damage often died of secondary infection from the skin ulcers. Of the remainder that died prematurely, almost all of them died of skin tumor. In some series of rats the skin tumor incidence was practically zero in the controls and 100 per cent in the experimentals. Practically every type of skin tumor ever described was found, and there were as many as 100 separate loci on some rats . . . if one assumes that man is as sensitive to tumor induction by radiation as the most sensitive rodents, then the induction of neoplasms in persons working with radiation is a very real possibility. . . .

“At about thirty days [in the subacute reaction to beta-rays] small layers of the superficial layers of the skin start to slough [forming ulcers]. . . . Usually these ulcers heal fairly rapidly [‘skin lesions are said to be already healing’ – Strauss]. This healed skin appears somewhat dry and thickened but otherwise quite normal. . . . However months later sloughing commonly occurs again leaving large deep ulcers. These late ulcers are very slow to heal . . . . There are many more deaths proportionately in the subacute period among the animals receiving beta-rays than among those receiving penetrating radiation . . . . Several months after irradiations an opacity of the eyes developed in all rats and mice receiving large doses of beta rays [this may correspond to several years in the life span of man – J.L.]. It is interesting that this occurred quite rapidly, one week the eyes being quite clear and functional and the next milk-white and opaque”. [Note that ash got into the eyes of the fishermen (UP dispatch of 14th April) – J.L.]

Of course, the inference should not be drawn that the development of these morbidities by the Japanese fishermen is inevitable – but the danger is real. The results of animal experiments must be taken seriously because, as Harold Plough points out in Nucleonics, August, 1952, humans are more sensitive to radiation than most other animals. For instance, the median lethal dose (the dose of penetrating radiation that will cause acute death to 50 per cent of young adults exposed) is 650r for mice and only 400r for humans. An INS science writer reported that adolescent mice exposed to radiation from the 1946 Bikini bombs,

“developed tremendous tumours of the pituitary gland in their old age . . . cancers so severe the tiny gland at the base of the brain grew until it filled one-third or one-fourth of the cranial cavity, making a virtual pancake of the brain”
and that Dr. Jacob Furth of the Children’s Cancer Research Center in Boston, “a top cancer specialist” connected with these experiments, pointed out “this effect in mice may not hold true for men” but “that other effects observed in the mouse studies [leukemia, loss of hair color, susceptibility to infections and cataracts] already have found a grim parallel in some of the biological change occurring in surviving men and women at Hiroshima and Nagasaki . . . . Similar effects, the scientist indicated, conceivably could appear in the future in the Japanese fishermen recently showered with [radioactive] ashes”.

So far we have indicated possible damage from the surface (beta) radiation. A whole spectrum of new horrors appears when the total-body (gamma) radiation is considered. It is known that the fishermen exhibited the familiar syndrome associated with an overdose of penetrating radiation. As Rutherford Poats, UP staff writer, wrote from Tokyo 14th April: “The fishermen were vomiting and they had diarrhea” when they reached port; also their blood count dropped sharply. To develop these symptoms in less than two weeks strongly indicates a dose of at least 200-300r (compare EAW, p. 347). For comparison, consider that 400r is normally fatal to 50 per cent of humans. A nearly fatal dose certainly has permanent effects, even though the victim survives. To quote again from Curtis:

“The chronic changes produced by large single doses of a penetrating radiation are very poorly understood. As already described, the animals either become emaciated and die in a state of atrophy before their controls, or else develop some form of neoplasm [tumor – J.L.]. Premature greying of the hair in dark haired animals is universal. A few develop opaque eyes just as the animals exposed to beta rays. The preliminary results on the exercise tolerance tests indicate considerable muscular weakness or lethargy, but the mechanism of this deficiency is completely unknown. In the case of the animals dying in atrophy it seems fairly certain that they finally succumbed to some one of the diseases [attendant on increased] susceptibility to disease…. [In this generalized atrophy] no definite pathological changes can be detected but the tissues present the same picture as that of tissues from very old animals”.

This and much other experimental evidence (also the cited experiences of the A-bomb survivors) indicates that at best, even barring neoplasia, anemia, sterility, atrophied genitalia or other specific disease, the fishermen will suffer emaciation and premature ageing. And what of the fishermen (and scores of other Japanese) who ate contaminated fish? One does not know whether the radiostrontium deposited in their bones is lethal, but it doesn’t take much — a few millionths of a gram and lots of time will do it.
From all this we can see why the condition of the fishermen is “less certain”. Cavalier pronunciamentos by AEC spokesmen that they have “recovered” will not do — in this case nothing short of certified clinical and histopathological data can be taken seriously. Similarly, a recent statement by Dr. Masao Tsuzuki that the victims “were making satisfactory progress” — announced while said doctor was on a tour of American atomic installations as a guest of Admiral Strauss (New York Times, 28th May) — is suspect. In an Alice-in-Wonderland vocabulary where “well and happy” means “suffering from radiation sickness and burns” (12) and “complete recovery within a month” means “many months of hospitalization”, what is “satisfactory progress”? In truth the only information content of this vague expression is that the fishermen are still alive! Every possible step has been taken to keep them alive, (13) including blood transfusions on a lavish scale because of extensive damage to the blood-forming tissues. It is proper that these steps be taken — but at least let the truth be known to the American people, who must soon decide whether they will permit a new “Operation Syndrome” in other people’s backyards!

Now back to Strauss. The closest he ever comes to expressing regret is:

“In the matter of indemnifying the Japanese, our Government has informed the Japanese Government that it is prepared to agree to reimbursement for such financial assistance as the Japanese Government and our Embassy in Tokyo, jointly, may find necessary as an interim measure to give to the persons involved for current medical care and family relief, including wages.”

Even these miserable promises have not been kept. At the date of this writing (1st July) payment has not been made to the fishermen or their families, who now experience great hardship. And the indemnification of the Marshall Islanders? Twenty-seven wooden shacks on Majuro atoll (14) — and bigger bombs promised for 1955.

And what of the contaminated fish? Strauss says:

“With respect to the stories concerning widespread contamination of tuna and other fish, the facts do not confirm them. The only contaminated fish discovered were those in the open hold of the Japanese trawler. Commissioner Crawford of the U.S. Food and Drug Administration has advised us: ‘Our inspectors found no instance of radioactivity in any shipments of fish from Pacific waters. [These fish had of course been ‘screened’ before shipment – J.L.] Inspections were undertaken as a purely precautionary measure . . . . There is no occasion here for public apprehension about this type of contamination.’”

Published news items before Strauss’s press conference had already stated that the Myojin Maru and the Koei Maru came into port 27th March exhibiting dangerous radioactivity; and an AP dispatch of 30th March said that Japanese health officials had destroyed the entire 80,000 pound tuna catch of the Koei (and were undecided about the Myojin). How could Strauss seriously stand up before a world eager for “the facts” and say that only aboard the Fukuryu Maru was there contaminated fish? But only in the light of hindsight does the true magnitude of Strauss’s understatement become apparent: it should not be necessary here to recapitulate the many news reports we have presented concerning contamination of fish.

Regarding contamination of the sea, Strauss says:

“With respect to the apprehension that fall-out radioactivity would move toward Japan in the Japanese Current, I can state that any radioactivity falling into the test area would become harmless within a few miles after being picked up by these currents which move slowly (less than one mile per hour) and would be completely undetectable within 500 miles or less.”

Let us recall the cited AP dispatch of 5th June that “A Japanese radioactivity test ship detected strong signs of contamination last night 500 miles south of the U.S. H-bomb test area at Bikini”, and that fish caught many hundreds of miles from Bikini have been found unsafe to eat. An AP dispatch of 5th July quoted Dr. Hiroshi Yahe, chief of the Japanese radioactivity test group, as saying: “We have found that H-bomb tests seriously affected sea waters, fish and other marine life”. Recently the Japanese group completed its study, but its report has not yet been made public. It will be important to note whether this report is subjected to any American censorship. Meanwhile a summary has been released (not available to the author) which is discussed by Lindesay Parrott in the New York Times of 7th July under the heading: Bikini Area Safe, Japanese Report. The report, we are told, “explodes scare stories spread [in Japan] by anti-American elements, some university professors [e.g., Hideki Yukawa, one of the world’s leading physicists – J.L.] and the sensational Tokyo press”. For example, “The report is particularly explicit in stating that navigation in the entire test area is safe, though caution should be used in taking seawater aboard for such purposes as washing down decks, cookery, or use of a crew” (emphasis added). On the day that the Pacific Ocean is so poisoned with gamma ray emitters that it is unsafe even to navigate there, the time for discussion will of course be long past — we will all be scrambling for lead vaults. Again,

“A minor danger area was found only [!] in the current setting northward toward Japan, west of Bikini. There more than normal radioactivity was found in plankton and small fish. Tuna, which apparently fed on these lesser forms of sea life, showed signs of radiation around the gills and in the internal organs but little [it takes very little! — J.L.] in the parts of the body usually used for food.”

Naturally no sane person would contend that the Pacific Ocean has been transformed into one great radioactive holocaust — but this is not the standard by which the hydrogen explosions must be judged. As we have shown, considerable damage has been done, and no amount of hand-waving by official apologists can alter this fact. And there are long-range factors at play yet to be evaluated. According to EAW p. 251 the fission products from a 20-kiloton A-bomb give 110,000 curies of gamma activity alone, one year after detonation. In the recent hydrogen explosions, besides the fission products and plutonium, great quantities of long-lived isotopes of carbon, sulfur, iron and other elements were formed by neutron action. All these radioactive elements enter the metabolic processes of sea organisms, just like their non-radioactive counterparts. In particular, many elements, despite initial dispersal, become reconcentrated. (See EAW pp.283-4.) Examples of this process can be found in the treatise The Oceans (New York, 1942) by Sverdrup et al.: It is pointed out, for instance, that radium is found in a hundredfold greater concentration on the sea bottom than in sea water because it is collected by certain marine life. Other organisms concentrate strontium in their calcareous skeletons, and so forth. In addition to concentration, the radioactivity can be transported hundreds of miles by migratory fish. We have already cited such a case in connection with the Shoho Maru. The tuna is highly migratory — cases are recorded where a fish missed at Tunis or Spain is caught in Norway (the hook in his mouth permitting identification). Eels of all parts of Europe and Africa cross the ocean and come to the Atlantic shores in autumn. Science News Letter of 13th March, 1954, reported that an albacore, caught and marked in Japan, escaped and swam 4,900 miles across the Pacific where it was recaptured off the California coast 324 days later.
Thus it is not unlikely that radioactive fish will turn up in American waters in a year or so. (Radioactive fish have in the past been caught in the rivers near the AEC’s Hanford, Washington, atomic plant.) Another long-range process particularly difficult to evaluate is the diminution of the fish population of the Pacific, and the changes in the existing balance of species. One cannot rely on counting corpses to know how many fish have been poisoned — for at the first sign of disability, the radioactive fish loses out in the battle for survival, and is swallowed by a predator.

Strauss next discusses world-wide contamination by fall-out:

“With respect to a story which received some currency last week to the effect that there is danger of a fall-out of radioactive material in the United States, it should be noted that after every test we have had and the Russian tests as well there is a small increase in natural ‘background’ radiation in some localities within the continental United States, but, currently, it is less than that observed after some of the previous continental and overseas tests, and far below the levels which could be harmful in any way to human beings, animals, or crops. It will decrease rapidly after the tests until the radiation level has returned approximately to the normal background.”

Is “200 times normal background” in Montana and Wyoming, as reported by U.S. News and World Report 9th April, a “small increase”? Background radiation five to ten times normal was recorded in many locations throughout the world. “Rain so radioactive it might be dangerous if drunk” fell on Japan – over 2,000 miles from the explosion. Furthermore, the AEC has been known to underestimate even with fall-out from Nevada A-bomb tests. Their 13th Semi-annual Report to Congress in 1953 claimed that “the highest radiation level detected anywhere outside the Nevada proving ground was at a mine located nearby. Here, measurements showed a radiation level which would deliver an estimated dose of 1.75 roentgens during a lifetime”. Yet in April, 1953, university scientists in Troy, N.Y., 2,300 miles from the proving grounds, detected “exceptionally high” radioactivity from a rain storm. (Science, 7th May, 1954.) At one “hot spot” on campus, radiation an inch above ground was 120 milliroentgens per hour after two days, enough to furnish the prescribed 1.75r “lifetime dose” in less than a day. And a man and woman from Utah have brought suit against the government for $200,000 for loss of hair, peeling of skin and fingernails, and recurrent nausea allegedly caused by radioactivity from last spring’s Nevada tests (New York Times, 5th May, 1954). If these claims are substantiated — and a similar accident involving cattle from the first A-bomb test makes them plausible — dosages of at least 100r are indicated. In view of these facts, and because radioactive dust may remain in the atmosphere even for years before settling out, Strauss’s remarks appear more like a time-honored recipe than an attempt to evaluate the situation at hand.
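The Troy claim can be verified with back-of-envelope arithmetic, using only the two figures quoted above (the 120 milliroentgen-per-hour reading and the AEC's 1.75-roentgen "lifetime dose"); this is a present-day editorial check, not data from the original article:

```python
# Check: at the Troy "hot spot" dose rate, how long does it take to
# accumulate the AEC's estimated 1.75r "lifetime dose"?
dose_rate = 0.120      # roentgens per hour (120 milliroentgens/hour)
lifetime_dose = 1.75   # roentgens, per the AEC's 13th Semi-annual Report

hours_needed = lifetime_dose / dose_rate
print(f"Lifetime dose accumulated in {hours_needed:.1f} hours")
```

The quotient comes to roughly 14.6 hours, confirming the text's "in less than a day".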
Strauss ends his statement with a tribute to “the men engaged in this patriotic service” and the heartening prospect that,

“one important result of these hydrogen bomb developments has been the enhancement of our military capability to the point where we should soon be more free to increase our emphasis on the peaceful uses of atomic power at home and abroad. It will be a tremendous satisfaction to those who have participated in this program that it has hastened that day”.

Strauss seems unaware that long before atomic energy enriches the life of man (atomic power is promised to us by 1975 at the earliest) — and intensifies the already critical problems of radioactive waste disposal (15) — the health of all of us may be ruined by “experiments”. Yet the real irony is that the experiments do not even “hasten that day”. They are not simply a temporary unpleasantness which will soon be over with, but will go on indefinitely (as the AEC has assured us) thereby diverting vast resources from “peaceful uses of atomic power”. And does the information gained about thermonuclear phenomena contribute to “peaceful uses”? Eminent physicists (see Supplement, §3) have pointed out repeatedly that the thermonuclear reaction has only one possible use. In the words of Otto Hahn, discoverer of nuclear fission:

“The particularly unpleasant thing about the hydrogen bomb is that it will never be possible, as in the case of the fission of uranium, to utilize the nuclear [energy] of the hydrogen reactions for peaceful purposes. We can reach the temperature of 20 million degrees or more only for millionths of a second and not for any length of time. A ‘controlled reaction’ is not possible. The same nuclear reactions which have been going on in the sun for millions of years and which yield the energy forming the basis of our life on this earth, become in the hands of man simply a means of destruction and nothing more.” (New Atoms, Elsevier, 1950.)

This concludes our examination of the “official” explanation of the events following the 1st March explosion. The AEC carefully safeguards the personnel at its installations with an array of radiation counters, dosimeters, blood checks, lead vaults, even ten-ton windows. The U.S. Navy ships at the recent “tests” had special sprinkler systems in readiness to wash overboard any fall-out before it could settle on deck (and these were needed!). Not a microcurie shall escape detection at Oak Ridge. Yet, displaying one of the most remarkable double standards in history, the AEC unleashes many megacuries of dangerous activity on the world and tells us there is not the slightest cause for concern. Victims with appalling syndromes become models of fitness, and beta-ray reddening of the skin becomes the high color of robust health before the magic wand of Admiral Strauss. In view of the record of misrepresentation, can we trust these men to tell us the truth?

It is not alone Strauss, or the AEC, who are responsible. The “testing” of nuclear weapons has long left the realm of a routine military operation; rather it must be considered national policy, with purposes quite divorced from the gathering of information. What the forces are that compel American ruling circles to engage in this H-barbarism will be discussed in the second article; relevant for us here is the observation that the hydrogen explosions are of the deepest essence of America’s current role. They cannot be abandoned without abandoning a good deal more. That is why not only the AEC, but our statesmen, the kept scientists and the respectable press stretch the truth beyond all limit to keep the apprehension of the American people below the level where it will upset the applecart. That is why the Reporter calls the hydrogen bomb “a big bang in the empty reaches of the Pacific” and Sen. Hickenlooper announces, in contradiction to physical theory, that the fusion reaction can be controlled for power. And that is why a new concept of morality is foisted upon us — a morality which permits any horror to be perpetrated, so long as it is accompanied by appropriate incantations about “deterring aggression”.

3. Radiation and the Race

Thus far we have dealt mainly with the short-range effects of the hydrogen explosions, notably the injuries to several hundred Asians by large doses of ionizing radiation from fall-out. We have pointed out that high-level irradiation produces deadly injury. But what about the much lower levels of radiation, which are nevertheless well above normal background radioactivity, spread throughout the whole world by the bombs? When background radioactivity “five to ten times normal” was detected in New York, should that have been reason for concern? According to the New York Times of 19th June,

“The Kings County Medical Society’s public health committee has recommended legislation to restrict atomic and hydrogen bomb experiments. Explosions thousands of miles away endanger New Yorkers, the committee reported yesterday. . . .”

Many similar warnings have been sounded. By way of orientation several such statements from authoritative sources may be given. A UP dispatch of 2nd April reports:
“The Federation of American Scientists said the 1st March explosion means ‘that current tests may be approaching orders of magnitude where close control not only becomes difficult, but effects in fact may become incalculable’ . . . it added that the consequences to living things of the radioactivity involved ‘can hardly even be estimated from presently available data’.”

A UP dispatch of 7th May said:

“A leading California scientist said to-day that the low, but increasing level of radioactivity may pose a threat to the health of millions of persons. ‘During the last ten years, man has deliberately increased the amount of high-energy radiation in the world by an enormous amount’ said Dr. Albert W. Bellamy [University of California professor of biophysics]. ‘Concurrent with this has been a corresponding increase in the number of persons potentially exposed to these radiations. We have not lived long enough with radiation to know yet just how much long-continued, low level radiation — both internal and external — we can live with without injury. Radiation exposure is extremely insidious. None of the human senses can detect it. The effects of radiation exposure may not show up for weeks, months, or years’ [said Bellamy, who is also chief of the State Division of Radiological Services].”

This dispatch also informs us that “Dr. Gordon Fitzgerald, university X-ray expert, said recently careless use of X-rays had lowered the life expectancy of dentists to fifty-six years, about ten less than normal”. Since the gamma rays from radioactive substances affect the human body in precisely the same way as X-rays, it is not only dentists who need take notice.

The Manchester Guardian of 4th May reported that “fifteen teachers and research workers in London University” sent a letter to Sir Winston Churchill saying:
“We … feel compelled to write to you in view of the incalculability of the effects of the present series of experiments with hydrogen weapons . . . the distribution of radioactive products from such explosions cannot be accurately predicted, and the serious danger to health which would occur if any quantity of radioactive material should fall in a populated area must not be underestimated.”

Alexander Haddow, Director of the Chester Beatty [Cancer] Research Institute in London wrote to the Times, 30th March:

“It has long been the anticipation of many scientists, increasingly perturbed by the biological implications of the development of atomic weapons, that sooner or later the world would be confronted by the need for a radical decision, involving nothing short of the international prohibition of nuclear explosions, if the gravest results were to be prevented. Your leading article of 26th March [suggests] that the crucial moment is now upon us. Recent events in the Pacific, with their demonstration of the powers of the hydrogen bomb for limitless annihilation, at once bring to an end the notion that the area of danger can have any but relative meaning. If we are entering the realm of the incalculable the likelihood of ultimate disaster grows steadily greater . . . . [The bomb’s physical destruction], although vast, is so far limited, and the subtler menace — potentially limitless and cumulative — arises from the liberation of radioactive products, and from their immediate, delayed, and remote effects. Of the first we have had an account from the skipper of the Fukuryu . . . . The second also are now well recognized, from the work of the ABCC, in an increased incidence of leukemia [among A-bomb survivors]. The third are genetical and racial, and it is a measure of the unexpected speed of recent developments that these now bulk rather less in preoccupation in relation to the problems of world survival itself.”

We already know the AEC’s attitude toward these problems. As U.S. News and World Report, 9th April, expresses it:

“The theory had been widely held among scientists that radioactivity could be gradually raised to dangerous levels by repeated H-bomb explosions. The AEC now is attempting to knock it down, insists that this danger is infinitesimal and nothing to worry about”.
What about the great quantities of carbon 14 generated from atmospheric nitrogen by an H-bomb explosion? This element emits beta particles with a half-life of 5,100 years, and enters the carbon cycle, thereby to mingle with all living things. What is to be the ultimate fate of the megacuries of fission products now in the sea and atmosphere, and also in the waste disposal vaults of the atomic installations? The only agency which can eliminate the blight of strontium 90 and cesium 137 – which nature apparently never intended to be on this earth – is time: centuries for these elements alone. Chlorine 36, potassium 40, and plutonium 239 remain with us, to all practical purposes, for eternity. These are facts of life, and it is difficult to see how the AEC plans to “knock them down”. On the other hand, there is a wealth of experimental data underlying the scientists’ warnings against increasing the background radiation.
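The "centuries" estimate follows from the exponential decay law. A present-day sketch (the roughly 28-year half-life of strontium 90 is a standard modern value assumed here, not a figure from the text):

```python
# Fraction of a radioactive substance remaining after t years:
#   remaining = (1/2) ** (t / half_life)
def remaining(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

# Strontium 90, half-life assumed ~28 years: ten half-lives (nearly three
# centuries) are needed to reduce the activity to about a thousandth.
print(remaining(280, 28))
```

By the same rule, carbon 14, with its half-life of thousands of years, is for human purposes a permanent addition to the carbon cycle.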

Before the atomic age, human beings received a small quantity of ionizing radiation, mainly from cosmic rays originating in outer space, and carbon 14 and potassium 40 in the body. The Effects of Atomic Weapons estimates this quantity at about 0.003r per week, or less than 10r in a 60-year life span. This is the radiation level with which the human race has, over geologic time, reached equilibrium. The currently accepted “tolerance dose” is one hundred times this, or 0.3r per week. Not many years ago 0.1r per day was considered safe. What is a “safe” dose, from the long-range point of view? Boche, as a consequence of low-level radiation experiments on animals, conjectured a few years ago that an appreciable decrease in the life span of humans may be expected from exposure to 0.1r per day. E. Lorenz, an eminent radiologist, and co-workers have discovered a number of striking results. In one experiment, 0.1r/day induced a rare mammary gland tumor in at least 20 per cent of irradiated female mice, with 0 per cent in the controls. (16) Another strain exposed uniformly to 0.1r/day until natural death showed 60 per cent incidence of ovarian tumor with 12 per cent incidence in the controls. (References in Furth and Upton’s article in Annual Reviews of Nuclear Science, Vol. 3, 1953.) As Furth and Upton point out, “the high sensitivity of the ovary to ionizing irradiation has been amply confirmed by recent studies”. In fact Lorenz, on the basis of experimental results, has indicated that an increase in the incidence of ovarian tumor in the human female may be expected beginning with an accumulated total-body dose of 100r (which would be got in less than seven years at the 0.3r/week rate). He has suggested that in women, at least, the maximum exposure be limited to 0.02r/day. And R. M. Sievert of the Caroline Institute in Stockholm has suggested 0.01r/day for men and women alike. The 0.3r/week tolerance level is subject to criticism on other grounds.
Thus Brues and Sacher, at the Symposium on Radiobiology at Oberlin College in June, 1950, remarked that:

“Calculations . . . using empirical constants deduced from mouse and dog survival data, indicate that a continuously accumulated tolerance dose might decrease the human life span by ten per cent.”

(This, and other material, which we shall quote from the Oberlin Symposium, has been published in Symposium on Radiobiology, John Wiley, 1952). Other data, difficult to evaluate, has accumulated regarding obscure blood changes from chronic exposure. For instance, Ingram and Barnes (AEC Document UR-137, 1950) reported lymphocytes with bilobed nuclei in cyclotron workers and in experimental animals exposed to doses of neutrons considerably below the tolerance values.
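The dose figures running through this section hang together arithmetically; a present-day sketch, using only the numbers already quoted above:

```python
# Pre-atomic background: about 0.003r per week (EAW estimate).
background = 0.003            # roentgens per week
weeks_per_year = 52
lifetime_background = background * weeks_per_year * 60   # 60-year life span
print(f"Lifetime background: {lifetime_background:.1f} r")  # under 10r, as stated

# The accepted "tolerance dose" of 0.3r/week is one hundred times background:
tolerance = 0.3               # roentgens per week
print(f"Tolerance is {tolerance / background:.0f} times background")

# Years to accumulate Lorenz's 100r threshold at the tolerance rate:
years_to_100r = 100 / (tolerance * weeks_per_year)
print(f"100 r reached in {years_to_100r:.1f} years")  # "less than seven years"
```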

What follows from all this? One cannot with certainty make inferences about the effects on human beings from animal experiments. But having lived for only a few years with increased background radiation, we are forced to base ourselves on this data. Obviously, the entire atomic program comes into question on this basis.

We come now to the most subtle, but what is in the last analysis the most important, of the biological effects of ionizing radiation: the effect on genetics. Contrary to some popular belief, this effect does not manifest itself in a proliferation of freaks and monsters. (17) In fact, even when pronounced genetic damage has been effected upon a race, this damage is quite difficult to isolate, although very real suffering is inflicted upon many individuals, and long-range statistics will eventually show clearly the decline of the race. The very subtlety of the process and its extension in time make it a perfect candidate to be ignored by those who, for instance, adopt complete disintegration of an island as a minimum standard of damage. On the other hand, the cumulative and irreversible nature of the process make it imperative that the danger be realized in time. Fortunately, men of the highest scientific competence have given clear warning of this danger.

Mutations occur spontaneously in the genes of the human germ cells, at a rate which is quite constant, and in equilibrium with the existing birth rate. H. J. Muller, a leading biologist and Nobel Prize winner, has calculated (Amer. J. Human Gen. 2, 111 (1950)) that the human race is in such a precise balance with respect to genes which produce defectives, that an increase of even 25 per cent in over-all mutation rate would produce a progressive and inevitable decline of the human population over a long period.
Qualitatively, it is not hard to see why an increase in spontaneous mutation rate will lead to decline. Well over 99 per cent of mutations are harmful. Some mutant genes are so harmful that they are not even compatible with life and will kill the offspring to which they are transmitted. Most mutant genes are only mildly harmful, but

“Each mutation received by an offspring results, on the average, in the genetic death of one descendant . . . no matter how slightly detrimental the effect of the mutant gene may be. This paradoxical result is a consequence of the fact that the less detrimental genes will tend to accumulate so as to hamper ever more individuals, until they make their ‘kill’ and so become eliminated. For this reason the total harm done by a small mutation is in the end as great as that done by a large, fully lethal, mutation.” (H. J. Muller, Oberlin Symposium.)

Thus, almost every mutant gene is a “genetic time bomb” which will eventually eliminate itself from the population by causing a “genetic death” (i.e., an individual who does not reproduce himself) — possibly many generations in the future.
Most mutant genes are recessive but, as Muller points out (Oberlin Symposium), even a recessive from only one parent produces some very slight deleterious effect, and so behaves in that case like a dominant of much lesser effect.

The connection of all this with radioactivity is that ionizing radiation induces gene mutations. This fundamental discovery was made by Muller about twenty years ago. The mutations so induced are precisely of the same kind as those which occur naturally, only the rate is enhanced. A remarkable and most unpleasant fact is that, in the words of Professors L. C. Dunn and T. Dobzhansky, eminent Columbia University zoologists:

“There is no such thing as a ‘safe’ dose of radiation; the number of mutations induced is simply proportional to the amount of radiation reaching the sex cells, and if a person is exposed daily to small amounts of the rays, these small amounts may add up to very dangerous sums.” (Heredity, Race, and Society, New American Library, 1952.)
Dunn and Dobzhansky go on to say:

“We must, then, do all in our power to diminish the number of defective mutant genes being added to the gene pool of human populations. Unfortunately, the progress of modern science and technology has so far accomplished the exact opposite — the rate of origin of harmful mutations is likely to become very much increased. . . . The release of atomic energy, either for constructive or for destructive ends, will expose to mutation-inducing radiations even greater numbers of people. . . . Misuse of atomic energy may result in eventual harm to mankind which is fearful to contemplate . . . defective genes introduced into the human gene pool will be doing their gruesome work in a slow but remorseless way.”
Although the full impact of extensive gene damage is not felt until long into the future, even first generation offspring are endangered. H. K. Plough of the AEC’s Biology Branch wrote in Nucleonics, August, 1952:

“[This] suggests that the offspring of a man or woman whose germ cells receive a single dose or an accumulated dose of 80r radiation (or possibly as little as 30r) may be expected to show a 100 per cent increase in mutations over the number which will appear anyway. The hazard of even a slight increase in the number of deficient or malformed offspring, which is what an elevation in mutation numbers would entail, constitutes a problem worthy of serious consideration for every individual subjected to radiation exposure of the germ cells.

“We cannot contemplate with equanimity an increment in deficient individuals or in the ‘genetic death’ of the unborn . . . radiation hazards cannot be neglected for human beings even though they are not immediately apparent to the individual receiving exposure.”

If Muller, one of the world’s leading experts on radio-mutations, considers a 25 per cent increase in the mutation rate dangerous, and 30 to 80r is a dose that would double the rate, one cannot contemplate with equanimity the smallest unnecessary exposure. A recent editorial in Nucleonics remarked that it is a widely held belief that 8r delivered to the whole population might cause serious genetic damage. S. Wright (J. Cellular Comp. Physiol. 35, Supplement 1, 1950) estimated that as little as three roentgens might double the mutation rate! (18)
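The quoted figures can be tied together by arithmetic, assuming the linear (no-threshold) dose-mutation relation that Dunn and Dobzhansky describe above; this is an editorial sketch, not a calculation from the original article:

```python
# If 30-80r doubles (i.e. adds 100% to) the mutation rate, and induced
# mutations are simply proportional to dose, then Muller's "dangerous"
# 25 per cent increase corresponds to one quarter of the doubling dose.
doubling_dose_low, doubling_dose_high = 30.0, 80.0   # roentgens

dose_25pct_low = 0.25 * doubling_dose_low
dose_25pct_high = 0.25 * doubling_dose_high
print(f"25% increase from {dose_25pct_low:.1f}-{dose_25pct_high:.1f} r")
```

The result, 7.5 to 20 roentgens, is of the same order as the 8r figure from the Nucleonics editorial, and not far above Wright's estimate of three roentgens for a doubling.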

To deal comprehensively with the fullest genetic implications of increased radioactivity would carry us beyond the scope of the present article. Muller, in the cited Oberlin Symposium and BAS articles, two long articles in The American Scientist, January, 1950 and July, 1950, and in other publications, has dealt very elaborately with the possibility of decline of the race from this cause. Reading these articles is strongly urged upon all, and must be part of the intellectual equipment of every socially conscious individual in his consideration of atomic questions. Aside from safeguarding our own children and grandchildren, social consciousness requires that we prevent harmful effects which, “slight in any one generation, would, as it were, pile up layer on layer” towards a new equilibrium in which the whole biological level of the human race had been lowered, “because they are hidden from us by veils of space, time, and circumstance”. Given its present course, the human race cannot do otherwise than undergo this gradual and irreversible decline — unless, of course, the shorter-range catastrophes inherent in the hydrogen age efface all of us long before that time.

4. Conclusion

We have now seen that a disaster of considerable proportions took place with the recent hydrogen “experiments”: serious injury was inflicted upon hundreds of individuals, obvious harm was done to the environment, and dangerous processes whose end effects cannot yet be predicted have been set in motion. It is to be expected that similar deleterious effects will follow future hydrogen explosions, regardless of what “precautions” are taken, because no “precaution” can keep a neutron from entering a nitrogen nucleus, nor direct a radioactive particle in the stratosphere not to settle in someone’s lung. The “danger area” is the earth.

What follows from this? What answer can the American people give to their United Nations delegate, when he says that America must explode hydrogen bombs as long as Russia does? The answer is simplicity itself: America must stop its explosions regardless of what Russia does. The bestialities perpetrated within Soviet borders are many. If one of these happens to be the explosion of hydrogen bombs, to the detriment of all humanity, so much the worse for us all. But to answer this crime against humanity with larger and more frequent explosions only intensifies the jeopardy of the human race. July 16th, 1954.
(To be continued)

SUPPLEMENT

1. Whence the Bomb?

The decade from 1895 to 1905 saw such fundamental discoveries as natural radioactivity, X-rays and Einstein’s special theory of relativity — which among other things propounded the revolutionary thesis that mass and energy are equivalent, being only different manifestations of one and the same fundamental physical entity. Surely the early investigators could not dream of the development that was to unfold in the next half-century from these beginnings. In particular, the equivalence of mass and energy seemed for many years a mathematical fiction; and although Einstein’s celebrated equation E=mc² (where c = velocity of light = thirty billion cm/sec.) implied that vast quantities of energy are latent in even a small mass (e.g., twenty-five billion kilowatt hours in a kilogram), means for liberating this energy were unknown. But knowledge of the atomic nucleus advanced rapidly in the twentieth century, and a particularly brilliant period of new achievements in the ‘thirties culminated in 1938 in the discovery of uranium fission by Hahn and others. In this process a uranium nucleus, when bombarded by a heavy uncharged particle called a neutron, captures the neutron and splits into two lighter nuclei, accompanied by the release of several neutrons and the conversion of a small part of the original mass into energy in accordance with the above. This suggested the possibility of producing a chain reaction in a piece of uranium, although to be sure many difficulties had first to be overcome. In any case, the implications for construction of a nuclear bomb were widely recognized, and with the coming of world war further developments were cloaked in military secrecy. The first chain reaction was produced in Chicago in December, 1942. Then in 1945, only seven years after the discovery of fission, and forty years after the abstruse considerations of Dr. 
Einstein, atomic physics intruded itself shatteringly upon the consciousness of the world when the city of Hiroshima was laid waste by a nuclear explosion.
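The kilogram figure quoted above can be checked directly from Einstein’s relation (the numbers here are a verification of the text’s figure, not part of the original article):

```latex
E = mc^2 = (1\ \mathrm{kg}) \times (3\times 10^{8}\ \mathrm{m/s})^2 = 9\times 10^{16}\ \mathrm{J},
\qquad
\frac{9\times 10^{16}\ \mathrm{J}}{3.6\times 10^{6}\ \mathrm{J/kWh}} = 2.5\times 10^{10}\ \mathrm{kWh},
```

that is, twenty-five billion kilowatt hours per kilogram, as stated.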

2. The A-Bomb

This weapon tends to be neglected in current discussion (unreasonably so, for now that the general level of public horror has been raised sufficiently to accommodate the H-bomb, the use of at least A-bombs in future warfare has been virtually assured). A critical mass of fissionable material is the least mass sufficient to sustain a chain reaction, and will explode spontaneously. The A-bomb, in its most primitive form, consists of two subcritical masses of fissionable material (either U-235 or plutonium), whose aggregate mass exceeds the critical. Critical mass has been estimated by Professor Oliphant at from twenty-two to sixty-six pounds. For detonation, these component parts are brought together rapidly and a stray neutron, certain to be present, initiates a chain reaction. More than two components can of course be used, but the necessity of bringing them together simultaneously with great rapidity limits their number sharply in a practical bomb. Hence the amount of fissionable material used in an A-bomb is inherently limited to several times critical mass, and the explosive power obtainable is correspondingly limited. The Hiroshima bomb had an explosive force equivalent to 20,000 tons of TNT. (The largest bombs of World War II used ten tons of TNT.) Modern A-bombs can be made more powerful, due mainly to more efficient utilization of the fission reaction. Ralph Lapp credits President Eisenhower with having stated that A-bombs twenty-five times as powerful as the Hiroshima bomb are now available. Lapp has also estimated that America now has a stockpile of “thousands of A-bombs” (Reporter, 11th May, 1954).

3. The H-Bomb

The H-bomb operates on a principle quite different from nuclear fission, namely that of thermonuclear fusion. At temperatures of millions of degrees fast moving nuclei of light elements may collide and “fuse” into a single nucleus of a heavier element, a fraction of the aggregate mass being transformed into energy in the process. A typical reaction of this kind is the fusion of a tritium nucleus with a deuterium nucleus to form a helium 4 nucleus plus a neutron plus 17.6 million electron volts (MeV) of energy. To raise the light nuclei to the necessary temperature an ordinary A-bomb is used as a detonator. Whereas the A-bomb is limited in power by the above-mentioned criticality considerations, no such limitations apply to the H-bomb. The nature of the fusion reaction (and the ready availability of hydrogen and lithium) makes it possible and even relatively inexpensive (as these things go) to construct a bomb of almost arbitrarily great destructive power.
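The 17.6 MeV figure follows from the mass defect of the reaction. Using modern tabulated atomic masses (values not given in the original article):

```latex
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n:
\qquad
\Delta m = (2.0141 + 3.0160) - (4.0026 + 1.0087) = 0.0188\ \mathrm{u},
```

```latex
E = \Delta m\, c^2 \approx 0.0188\ \mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 17.5\ \mathrm{MeV},
```

in agreement with the cited value.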
There is another significant difference between the two reactions: the fission reaction can be controlled in speed, thus making it theoretically possible to use the energy release as a source of power. The fusion reaction cannot, and thus its only possible application is the building of a bomb. Although it is believed that the stars derive their radiant energy from a continuous fusion reaction, Bethe, Hahn and other leading physicists have claimed such a controlled reaction to be physically impossible on so small a scale as the earth. As R. F. Bacher, physicist and former member of the AEC, wrote in the Bulletin of the Atomic Scientists (BAS) May, 1950: “There is no possibility that the energy release from this type of reaction can be controlled on the earth … On the earth these self-sustaining thermonuclear reactions will either give an explosion or nothing at all.”
It seems difficult to reconcile this with rumors afoot recently of “peace-time applications.” Sources from Sen. Hickenlooper to Harold Urey have “hinted” at applications. No details have been forthcoming. It appears that the only possible use of “astrophysical engineering” (Dr. Edward Teller’s phrase), aside from erasing humanity, is the construction of an artificial star in space at some future time — just what the world is waiting for.

4. Radiological Warfare and the C-Bomb

Soon after the first atomic explosions, it was recognized that the cloud formed could injure life over a large area. Edward Teller wrote in the BAS, February, 1947:
“The radioactivity produced by Bikini bombs was detected within one week within the U.S. . . . The danger arising from the radioactivity [has] become evident by observations which have been made at widely separated places. . . . We have here a method of destruction which we cannot help noticing.”

Due notice was taken, but the question arose: how can one augment the radioactivity of the fission products? The answer was found in the neutrons liberated by an atomic explosion. We have stated that neutrons can induce radioactivity in most elements. For instance, a pound of neutrons could, under ideal conditions, generate twenty-four pounds of radioactive sodium 24 or sixty pounds of radioactive cobalt 60 from the ordinary forms of these elements. Thus, one can “rig” an atomic bomb by adding to it quantities of an element which will be activated by the escaping neutrons. The conditions which an element must fulfil to be a candidate for this role are: (1) It must capture neutrons easily. (2) The resulting isotope should have a half-life sufficiently short so that the radiation emitted is quite intense. (3) The half-life should be sufficiently long so that the radioactivity will not be given up before reaching a target. (4) For best results, penetrating (gamma) radiation should be emitted. (5) The element should be fairly abundant.
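The twenty-four and sixty pound figures are simply the ratios of the product masses to the neutron mass: each captured neutron (mass 1 u) converts one stable nucleus into one radioactive nucleus, so the activated mass scales with the product’s mass number. Schematically:

```latex
{}^{23}\mathrm{Na} + n \rightarrow {}^{24}\mathrm{Na}:
\quad 1\ \mathrm{lb\ of\ neutrons} \rightarrow \tfrac{24}{1}\ \mathrm{lb} = 24\ \mathrm{lb\ of\ } {}^{24}\mathrm{Na};
```

```latex
{}^{59}\mathrm{Co} + n \rightarrow {}^{60}\mathrm{Co}:
\quad 1\ \mathrm{lb\ of\ neutrons} \rightarrow \tfrac{60}{1}\ \mathrm{lb} = 60\ \mathrm{lb\ of\ } {}^{60}\mathrm{Co}.
```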

The elements which best fulfil these conditions are cobalt and zinc. Radiocobalt is especially deadly, giving off intense gamma rays with a half-life of 5.3 years. If cobalt is added to even a medium-sized A-bomb, which generates, say, five pounds of neutrons, one has already a rather troublesome radiological weapon. Added to a twenty megaton H-bomb, one has in a single bomb the means to denude a continent of man and beast. Professor Harrison Brown, nuclear chemist at the California Institute of Technology, said in 1950 that if a cobalt bomb incorporating a ton of deuterium were detonated on a north-south line in the Pacific about a thousand miles west of California,

“the radioactive dust would reach California in about a day, and New York in four or five days, killing most life as it traverses the continent. Similarly the western powers could explode hydrogen-cobalt bombs on a north-south line about the longitude of Prague that would destroy all life within a strip 1,500 miles wide, extending from Leningrad to Odessa and 3,000 miles deep from Prague to the Ural mountains.”

Actually, this does not tell the whole story for, in the words of Edward Teller (BAS, February, 1947): “One limitation to such kind of an attack is the effect of these gases on the attacker himself. The radioactive products will eventually drift over his country too”. Thus, the hydrogen-cobalt bomb (or “C-bomb”) is only usable as a universal suicide weapon. The manual of instructions that comes with the cobalt bomb says: “Set it off anywhere”. For this reason the AEC has not yet “tested” a C-bomb. Similarly, America does not rely on the cobalt bomb alone to deter aggression. Dr. Teller has suggested using a shorter-lived element to “rig” an atomic bomb. Zinc is a good candidate; radiozinc has a half-life just under a year. Surely a zinc bomb will be built by the AEC, if for no other reason than to have bombs from A to Z.

Quantitatively, what is the perspective for total annihilation? In 1950, with the advent of the H-bomb, came the dissolution of all moral as well as physical barriers to consideration of the final question: How can we destroy the race? The answer, as we have indicated, was found in the hydrogen-cobalt bomb, and some scientists worked out the arithmetic of annihilation. Professor Leo Szilard of the University of Chicago, a chief architect of the A-bomb, said:

“I have made a calculation in the connection . . . . I have asked myself: How many neutrons or how much heavy hydrogen do we have to detonate to kill everybody on earth by this particular method? I come up with about fifty tons of neutrons as being plenty to kill everybody, which means about five hundred tons of heavy hydrogen [about 400 fair-sized bombs — J.L.] . . . . [The necessary deuterium] could be accumulated over a period of ten years without an appreciable strain on the economy of a country like the U.S.” (BAS, April, 1950).
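The bracketed estimate of “about 400 fair-sized bombs” is consistent with the ton-scale deuterium charge mentioned earlier for Professor Brown’s hypothetical cobalt bomb; the division is:

```latex
\frac{500\ \text{tons of heavy hydrogen}}{400\ \text{bombs}} \approx 1.25\ \text{tons of deuterium per bomb}.
```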

Dr. James Arnold, in the October, 1950, BAS concurs in the general validity of Szilard’s thesis, and estimates that, given the deuterium, 10,000 tons of cobalt might be sufficient to kill everybody. We may note that the U.S. currently consumes about half this amount annually, that it is currently stockpiling cobalt and, according to the recent Haley Report on national resources, the U.S. is expanding its cobalt stockpile and plans to “consume” 20,000 tons of cobalt annually by 1975. Of course, this fact is not of itself sinister. The chief uses of cobalt are in jet engines, rockets, guided missiles, armor plate, gun barrels, and radar components — and America might devote her cobalt stockpile to these ends, rather than to means of mass destruction. Yet once again, American policy is consistent with the worst of all possible future developments. Similar remarks apply to the other potential mass-killer, zinc: according to the New York Times of 8th April, the government recently announced it plans expansion of its zinc stockpiling program. Currently, American firms use about 60,000 tons a month of this metal, “plenty to kill everybody”.

The reader may now think the entire discussion has become academic; for no one would wish to build a cobalt bomb. Yet the New York Times of 28th March, under the heading Cobalt Bomb Being Developed for Radiation-Nerve (19) and Germ Warfare Studies, writes:
“Military science has or is devising a selective arsenal of weapons that could kill multitudes in a split second, minutes or years . . . . Behind the scenes, in obscure laboratories and proving grounds, scientists are working . . . improving techniques and devices for radiological, gas and germ warfare . . . Over-shadowed by the official announcements and speculation about the hydrogen bomb and the atomic bomb is the so-called ‘C-bomb’ . . . strategists foresee the possibility that in an all-out war situations might occur where there would be a need for other means (than the H-bomb) of incapacitating enemy troops or war workers or of rendering a big area [the earth — J.L.] uninhabitable for a period . . . . The natural ‘fall-out’ of radiated material from an atomic cloud, with its short life, would be inadequate. The problem is to keep alive, at a high level, the radioactive contamination. And in the mineral element cobalt military scientists are finding their answer.”

Another cheerful form of radiological warfare is simply to spread radioactive products from a pile over an area. Thus, Hanson Baldwin, military analyst for the New York Times, has recently pointed out how fortuitous it is that we are accumulating these deadly wastes, since they can be dropped on people.

5. The First Use of the Bomb

The first A-bomb was successfully tested 16th July, 1945, at Alamogordo, New Mexico; the only two other models then in existence were thereupon whisked across the Pacific Ocean and dropped on Hiroshima (6th August) and Nagasaki (9th August). The way to Hiroshima had been paved with five months of air raids, starting with the great 9th-10th March jellied gasoline attack on Tokyo (alone killing 83,000) during which period 220,000 civilians had been killed and 3,000 houses destroyed (20); a quarter of the urban population, of 8,500,000 people, had been forced to migrate. Thus there was a certain continuity in the attack of 6th August upon the civilian population of a prostrated country. The James Franck report, submitted to the Secretary of War in June, 1945, by a committee of seven scientists and a simultaneous petition to President Truman signed by sixty-four scientists (all of whom had worked on the bomb) urged, on humanitarian grounds, that it not be used directly. The Franck report (republished in the Bulletin of the Atomic Scientists, 1st May, 1946, with an editorial comment that the report “undoubtedly expressed the opinion of a considerable group of scientists on the project”) suggested as an alternative that:

“a demonstration of the new weapon might be made, before the eyes of all the United Nations on the desert or a barren island . . . . After such a demonstration the weapon might perhaps be used against Japan if the sanction of the United Nations (and public opinion at home) was obtained after a preliminary ultimatum to Japan to surrender. . . . We believe that these considerations make the use of nuclear bombs for an early attack against Japan inadvisable.”

Aside from these pleas it was known that the Japanese economy was on the verge of collapse because of the blockade and the air raids, and that Japan had already attempted to negotiate a surrender via the Pope. (21) Against this background, all clap-trap about “saving a million American lives” notwithstanding, the frenzied haste with which the newly completed weapon was employed, especially the repeat performance at Nagasaki, leaves an impression that the American military were only afraid lest the Japanese surrender too soon and thereby preclude employment of the bomb. If this impression seems fantastic, it does not seem so to a great many Japanese, who feel their people were used as guinea pigs—and this should be borne in mind in understanding their reaction to the recent H-bomb tests. At Hiroshima a U-235 bomb was dropped; it annihilated completely 4.4 square miles of the city, killing eighty thousand people, and injuring nearly an equal number. At the Nagasaki experiment a plutonium bomb was tried; a 15 per cent greater radius of destruction was achieved, although “only” thirty-five thousand were killed, with an equal number injured. (These are official figures; yet John Bugher of the AEC’s Division of Biology and Medicine wrote in Nucleonics, September, 1952, that the fatalities from the two bombs “probably exceeded 200,000” and, as we have pointed out, delayed casualties have continued up to the present day.) These “live” testing grounds have provided the Atomic Bomb Casualty Commission with a wealth of material which its staff of 900 has continued to study diligently over the years, and much of what is known today concerning the effects of atomic weapons is based on “The Japanese Experience”.
Let us emphasize that, in this first decision concerning the use of nuclear weapons, the most extreme of all possible alternatives was taken; this, as we shall see, has been true of practically every subsequent decision in the atomic program: always the opposition has been successfully overridden by the most extreme elements.

6. Bikini, 1946

Although the recommendations of the James Franck report were disregarded, the suggested idea of exploding the weapon on a “barren island” did, however, appeal to the military — and less than a year after Nagasaki they embarked on a long series of adventures by “testing” several bombs at Bikini in the Marshall Islands (which had first been rendered “barren” by the simple expedient of uprooting some 160 natives and shipping them to another atoll). These very first Bikini tests, conducted in the summer of 1946 by the U.S. Navy (ostensibly to determine the effects of atomic bombs on ships) produced some reaction which is interesting when viewed from hindsight. There was widespread protest in the United States against inaugurating the new era of peace on such a note; Senators Hoffman, Lucas and Walsh among others urged on the Senate floor that Truman abandon the tests. Particularly prophetic (and radiating as well a freshness that one no longer sees in the utterances of our scientists) are the words of Dr. Lee Du Bridge, (22) a leading physicist and president of the California Institute of Technology, some of whose remarks in the May 15, 1946, BAS we quote:

“[The results] would not make a ripple on the surface of basic nuclear science. The study of nuclear fission will not be advanced one iota by all these figures. The value to pure science will be nil . . . it is said that there are a thousand or so technical people participating. Many no doubt look forward to the trip and to seeing the explosion . . . [but] how the universities need these men now for their overcrowded classrooms and undermanned research staffs! . . . No doubt hundreds of secret reports will be written on the variation with distance from the impact point of the damage done to masts, to gun turrets, to tanks and trucks and radar and rabbits and field kitchens. There will be profound studies of why ship A was sunk and ship B was not. . . . The enormous and intensely radio-active cloud that arises from an atomic explosion is a terrifying thing. It is completely subject to the whims of meteorology. Who could say that a sudden rainstorm could not precipitate dangerous quantities of this material onto one or more of the ships packed with observers? Or might not a cloud of this lethal dust be carried hundreds of miles and deposited on unsuspecting inhabitants? The surface burst will raise a great cloud of water spray and where will it be carried? . . . the dangers . . . may be remote — but I know experts who are participating in the tests who are worried about one or more of them . . . And how will the results of the tests be represented to the American people? Regardless of what the results are they will stimulate exaggerated claims and counter-claims. ‘The Navy is invulnerable!’ ‘The Navy is obsolete!’ ‘Armies are useless!’ ‘We must have universal military training!’ . . . Are international relations to be improved by these tests? Not even the greatest enthusiasts for them have claimed this [that was 1946; to-day, with the complete triumph of the ‘peace through terror’ ideology the ‘enthusiasts’ from Eisenhower on down have claimed this — JL] . . . 
I will say only that at this critical hour they are in poor taste.” (All emphasis added.)
It is interesting to see how accurately the main elements of the present H-bomb issue are foreshadowed in these remarks. But of course the tests took place, complete with 42,000 men and half the world’s supply of photographic film, and the Army-Navy Joint Chiefs of Staff could report such observations as: “The second bomb threw large masses of highly radioactive water onto the decks and onto the hulls of the vessels. These contaminated ships became radioactive stoves and would have burned all living things aboard them with invisible and painless radiation”.

This second bomb was indeed quite a phenomenon; it was detonated beneath the surface of the lagoon, and threw up millions of tons of water to a height of about a mile. Because the fission products and neutrons were all entrapped by the water, the contamination was severe and the huge fleet of “target ships”, from the venerable Saratoga to the sleek Prinz Eugen, had to be deliberately sunk after the test for this reason; and a third test “shot” was cancelled. The AEC wrote in its 1950 publication The Effects of Atomic Weapons that within a week most of the fission products had settled to the bottom of the lagoon, covering an area of over 60 square miles. Dr. David Bradley, in No Place to Hide, has described extensively the nature of the contamination produced. Let us give only a few quotations:

“these ships are fouled up with radioactivity to a degree far greater than anticipated… there is a real hazard from elements present which cannot be detected by the ordinary field methods [mainly plutonium — J.L.] . . . . Of the fish caught on the lagoon side of the reef, all showed considerable radioactivity . . . . What is true of the reef fish will now become increasingly true of the larger migratory fish — the tuna, the jacks, the sharks, and so on — as the latter, the predatory fish, eat more and more of the smaller fish who are sick with the disease of radioactivity [and hence easier to catch]. We know that this process is going on. Almost all seagoing fish recently caught around the atoll of Bikini have been radioactive . . . .”

7. The Later A-Bomb Tests

Similar tests followed through the post-war years both at Eniwetok atoll in the Marshalls and at Yucca Flat, Nevada. These achieved on the whole no great notoriety, and the public learned to accept such activities as a part of daily life — although attention was occasionally focused on such incidents as the breaking of windows in Las Vegas, the falling of radioactive rain and snow in Eastern cities, and the fogging of photographic film. Of course, the Eniwetok tests, like the Bikini, involved the deportation of the native inhabitants and the spoliation of the atoll.

Up to 1st March, 1954, there had been reported in the world some fifty-five atomic explosions altogether, including four or five in Russia and a few British bombs. The great majority were detonated by the AEC in seven series of tests: Spring 1948 (Eniwetok), Winter 1951 (Yucca Flat, Nevada, sixty-five miles from Las Vegas), Spring 1951 (Eniwetok), Fall 1951 (Nevada), Spring 1952 (Nevada), Fall 1952 (Eniwetok) and Spring 1953 (Nevada). The Spring 1951 and Fall 1952 tests included thermonuclear weapons, and at the Spring 1953 tests the first shell bearing an atomic warhead was fired, from a mobile 280 mm. cannon. In addition to detonating weapons, some of the tests also served to acquaint troops with the realities and manœuvres of atomic warfare, and to test bomb effects on buildings, ships, mice, dogs, monkeys, etc. Possible damage to the environment by radioactive contamination had been discussed from time to time, but the danger was not considered great. Yet several incidents of some interest had occurred. Fall-out from the original A-bomb blast at Alamogordo had injured cattle ten to fifteen miles away. This first blast also resulted in the fogging of photographic film in Indiana. Sienko and Cocconi of Cornell University’s Laboratory of Nuclear Studies, in referring to a beta-emissive speck detected in England in 1952, said: “This kind of speck is probably due to radioactive dust produced by nuclear explosions and carried away by the winds. Undoubtedly many more cases similar to this will be found . . . because radioactive dust is already spread everywhere in the world”. The Canadian Journal of Physics 30 (1952) reported that radioactive fission products from a Nevada explosion in January, 1952, fell on Ottawa two days later and again two weeks later. A French scientific journal (Comptes Rendus hebd. Acad. Sci. 235 (1952)) reported that from December, 1951, to June, 1952, radioactive dust from Nevada explosions had fallen on Paris. 
In America mild degrees of fall-out and radioactive precipitation were commonplace, to the occasional consternation of scientists engaged in low background radiation experiments. At least three cases were reported of precipitation emitting radioactivity approaching dangerous levels; these cases were recorded by independent investigators at Helena, Montana (reported in Nucleonics), Chicago, Ill. (Chemical and Engineering News, 16th June, 1952), and Troy, N.Y. (already mentioned). A typical reaction to these disclosures is that of a Chicago chemist who, in a letter to Chemical and Engineering News, expressed concern over the Chicago rainout. His letter (published 25th August, 1952) goes on to say:

“It appears that the U.S. is being covered intermittently with radioactive dust of dangerously high activity as far as 1,000 miles from the place where the dust is generated. It is also evident that those in positions of responsibility are glossing over these facts with glib assurances that all is well. . . . Let us not be so afraid of a backward enemy that we are willing to poison man, dog, woman and child to get a military advantage.”

The AEC replied to him in the same issue with glib assurances that all was well.
Although our interest covers all nuclear explosions, we have been discussing so far those conducted by the U.S. This is natural, since the great majority of the bombs have been detonated by the U.S. About Russian explosions almost nothing is available, and we are dependent upon microscopic disclosures by American military intelligence for information on this subject (presumably because detailed information might prove valuable to the Russians). We have been told that Russia detonated an H-bomb in August, 1953. The British have held A-bomb tests in the Montebello Islands (off the Australian coast) and at Woomera similar to American tests at Eniwetok. One aspect of the test in the Montebello Islands in the summer of 1952 may be noted here: Although British officials claimed the islands were barren and uninhabited, Australia’s leading ornithologist pointed out that there were over twenty species of birds and several mammals living on the islands, including a pipit and a kangaroo found nowhere else. What has been the fate of these animals is not clear. Churchill, in replying to a query on this subject from an MP, assured him that every effort had been made to “inconvenience them as little as possible”. We presume it is picayune to be concerned with the annihilation of wild creatures in an age when humans are slaughtered on an unprecedented scale; yet there was a time when men who called themselves scientists did not wantonly destroy rare and interesting specimens of nature.

8. Birth of the H-bomb (23)

In September, 1949, President Truman announced that the Russians had exploded an atomic bomb. Allegedly on these grounds, he announced in January, 1950, that he had instructed the AEC to develop the “super bomb”. (We may note that years later, after Russia had been credited with exploding an H-bomb, Truman said he was not convinced Russia had even an A-bomb.) To be sure, a great deal of soul-searching preceded the decision of January, 1950. The inner circle of atomic energy officialdom had been divided on the H-bomb program, with the preponderance of opinion against it. The AEC opposed it three to two, and the nine-man General Advisory Committee to the AEC (including eight eminent scientists) opposed it unanimously on a combination of moral, tactical, scientific and financial grounds. The opposition included such distinguished figures as Oppenheimer, Lilienthal and Henry Smyth (author of the famous Smyth Report). But the H-bomb had its champions, notably Lewis Strauss who, in the words of a laudatory editorial in Iron Age “set up a howl for the H-bomb that reverberated around the AEC and to the White House”. Strauss received strong support from Secretary of State Acheson, Defense Secretary Johnson and Sen. McMahon. Truman, true to the tradition of his A-bomb decision of 1945 and Bikini decision of 1946 (and to the principle of “the triumph of the extreme elements” which we have enunciated), gave the order to proceed, and the scientists duly embarked on a new “crash program”. Ideological justifications were of course invented as needed to allay public malaise and soothe bad consciences. But for all that, the program was generally viewed with trepidation. The Bulletin of the Atomic Scientists published in 1950 some sixteen articles on the H-bomb by leading physicists; and although none of them directly denounced Truman’s decision, many anxieties were expressed and the destructive possibilities frankly and terrifyingly set forth. Einstein wrote:

“The ghostlike character of this development lies in its apparently compulsory trend. Every step appears as the unavoidable consequence of the preceding one. In the end there beckons more and more clearly general annihilation.” Twelve physicists signed a statement requesting of the American government “a solemn declaration that we shall not use the bomb first”. In addition to “solemn declarations” other scientists called for “top level disarmament conferences”, “outlawing the bomb”, “international controls”, etc. — proposals which had long ago exhausted themselves, but which nevertheless revealed, in the timid idiom of these men, widespread apprehension. Oppenheimer said: “There is grave danger for us that these decisions have been taken on the basis of facts held secret… the relevant facts could be of little help to an enemy; yet they are indispensable for an understanding of questions of policy”. Leo Szilard, referring to the difficulty of predicting the path of radioactive fall-out, said “on this aspect of the question, I would say that we leaped before we thought when we decided to make H-bombs”. Here is the lament of Otto Hahn, the German who had discovered fission in 1938 and (like Gentner and von Laue) maintained a strict silence about atomic bombs during the war years under Hitler:

“Remembering the effect of the atomic bombs on Hiroshima and Nagasaki in August, 1945, or considering the investigations at Bikini in 1946, one would think that mankind had already carried it magnificently far enough with the utilization of atomic energy for destructive purposes and that there would be no desire to add still more powerful ones to these means. Nevertheless this seems to be the case…. Pres. Truman has ordered the development and construction of the ‘hydrogen bomb’ to be officially begun in order to create a new weapon for keeping the world peace.” (New Atoms, 1950.)

A number of scientists who acquiesced to the wisdom of Truman-Strauss gave up their chastity only “with some reluctance”. Thus Harold Urey was “very unhappy to conclude that the H-bomb should be developed and built . . . but, with Patrick Henry, I value my liberties more than my life”. On the other hand a refreshing singleness of purpose was shown by Dr. Edward Teller who, with his colleague Ernest O. Lawrence (inventor of the cyclotron), was eager to have an H-bomb. In an article Back to the Laboratories, Teller exhorted his fellow physicists to end their “honeymoon with mesons” and join him on the H-bomb project: “We must realize that plans are not yet bombs, and we must realize that democracy will not be saved by ideals alone…. The holiday is over. Hydrogen bombs will not produce themselves”. In any case, the project was advanced and the laws of physics were co-operative. The Spring 1951 tests at Eniwetok (Operation Greenhouse) saw the successful detonation of two “crude and cumbersome thermonuclear devices”. Both “shots” exceeded expectations. A more streamlined H-bomb was exploded at Eniwetok in November, 1952 (Operation Ivy). During November, 1952, a good many sensational reports of the explosion “leaked” out to the public via letters written home by eyewitnesses and it was generally believed that an H-bomb had been tested. Official confirmation was only given, however, after a furore had been touched off by the effects of the 1st March, 1954, H-bomb. As has recently been revealed, the 1952 “baby” bomb produced a fireball 3.5 miles in diameter, annihilated an island of the Marshalls group, and ripped out of the ocean floor a crater a mile in diameter and 175 feet deep. The radius of total destruction was three miles, with “severe to moderate” damage out to seven miles.

9. Atomic Bomb and the Weather — Speculation?

There has been much speculation about nuclear explosions affecting the weather; this probably originated as early as Hiroshima, which shortly after the A-bombing was inundated by a typhoon that killed early visitors to the devastated city. The wave of tornadoes and freak weather that swept America’s East coast concurrently with last Spring’s Nevada A-tests was bound to stir speculation, for the breaking of weather records in this case translated itself into hundreds of dead, thousands of homeless, and many millions of dollars of damage. The coincidence was made more impressive by the disclosure, in an AP dispatch of 10th June, 1953, that:

“Rep. Ray J. Madden, Democrat of Indiana, had asked that the House Armed Services Committee start a full inquiry into possible atomic effects on the weather.”

The investigation was not held because:

“Key [House] members said to-day they had been assured that atomic tests had not caused the series of tornadoes sweeping the country. . . . ’Atomic scientists told us [said Rep. Leroy Johnson of California] they consulted regularly with top weather observers and said the atomic tests are too small and restricted to have any effect on the weather’.”

One of the more striking chains of incidents followed the largest bomb in the series which was exploded 5th June at an altitude of six to eight miles. 8th June Arcadia, Nebraska, had a tornado killing ten people; 9th June Cleveland, Ohio, had its first tornado in twenty-nine years (killed eight, injured 300) and seven other tornadoes swept Michigan and Ohio on that day (killing 113). 10th June Exeter, N.H., was heavily damaged by storm, and Worcester, Mass., had a disastrous tornado, the worst in seventy-five years, killing eighty-five, injuring 700, and leaving 2,500 homeless; the Worcester tornado was accompanied by an unprecedented barrage of giant hailstones.

Abnormally high radioactivity was present in some of the more freakish precipitation. For instance, the highly radioactive rainfall at Troy, N.Y. mentioned before, accompanied “an unusually violent electrical storm … one of the worst flash storms to hit the area in recent years” and followed by thirty-six hours the detonation of a Nevada A-bomb. The year 1954 has again produced strange weather, sometimes with disastrous consequences, as in the weird flooding of the Danube river which has dispossessed over 70,000 people. The New York Times of 14th July wrote concerning this event that:

“The press continues to suggest that last spring’s hydrogen bomb tests in the Pacific may have been responsible for the floods. The argument is that stratospheric clouds of atomic dust resulting from the explosions possibly reduced the amount of sun’s rays reaching the earth enough to cause the heavy rains.”

Are these speculations in any sense valid? Is there a link between the flooding of the Rio Grande, Lima’s coldest winter in twenty-five years, and freak tidal waves in Lake Michigan? Is there validity in a Japanese scientist’s recent prediction that the world will have colder weather (because of the obstruction of the sun’s rays by dust from the Marshall Islands explosions)? This author does not claim to know. However, because our very lives, and it is no exaggeration to say the future of the human race, are intimately intertwined with the world’s climate, these questions deserve the most serious consideration. Any communication on the subject would be welcomed by the author.

Some form of connection between atomic blasts and rainfall seems plausible. For instance, it is well known that ionized air molecules, such as are produced in the wake of a radioactive cloud, serve as nuclei for the condensation of moisture. In measurements reported by the French Meteorological Office, radioactivity curves of rainwater and atmospheric dust over a four month period (November, ’51 — February, ’52) reached peaks ten days after nuclear explosions, exhibiting in some cases activity ten or twenty times normal.

Hubert Garrique, writing in Comptes Rendus de l’Academie des Sciences (1951), purported to show, by studying the distribution of condensation nuclei which he claimed emerged from atomic explosions, that abundant precipitation in France at that time was attributable to these. An amateur meteorologist, J. O. Hutton, has attempted to show by similar considerations a link between the A-bombs and tornadoes of 1953 in the U.S. Published in Astounding Science Fiction magazine, April, 1954, the article does not appear frivolous. His weather data is well-documented, and the article may seriously be recommended to the attention of the readers. (24)

Because of these “condensation nuclei” considerations, connections between A-bombs and the weather cannot be discounted merely by the observation that the energy release of an atomic explosion is far smaller than the kinetic energy of a large air mass. Even aside from this, the blast itself (especially in the case of an H-bomb) is large enough to set off local windstorms, and one cannot content oneself with a routine assertion that so vast a dislocation in nature has no remote effects in space and time. Profound long-range effects are believed to have resulted from discharge of particles into the air by past volcanoes (Cf. Climatic Change, a most interesting book edited by Harlow Shapley, Harvard Univ. Press, 1953, pp. 90-103).

10. General Aspects of Radiation Injury

Alpha, beta, gamma, neutron and X-radiations are the most familiar of the ionizing radiations, so called because they produce ionization of atoms and molecules which they encounter. It is this property which is exploited in the Geiger counters for the detection of radioactivity. It is this same property which is responsible for the damaging effects of radiation on living tissue: the ionization sets in motion chemical reactions (as yet poorly understood) within the cell. These may result, depending on the type of the cell and the dose, in inhibition of the growth and mitosis of the cell, damage to the chromosomes and genes of the cell (which in the case of a germ cell will also be passed on to all succeeding generations), impairment of the various functions of the cell, or the death of the cell. Because the nature of radiation injury is on the fundamental level of cell metabolism itself, and because different cells and different doses exhibit widely varying effects, it is to be expected that the macroscopic changes produced in the organism by irradiation will cover an enormous spectrum, and this is the case. Many volumes have been written describing observed biological effects of radiation, ranging from osteogenic sarcoma to graying of the hair. Every kind of tissue, every bodily function, will be impaired by sufficient radiation, administered to the appropriate part of the body. There is, however, great variation in sensitivity; most sensitive are the lymphoid tissues which produce and store white blood cells; also very sensitive are the white blood cells, epithelial tissue, mucous membrane, small bowel, ovary and embryonic cells.

(1) This is the first of two articles dealing with the recent hydrogen explosions, and studies the damage they have wrought upon man and his environment. Historical and scientific background material is presented in the Supplement at the end. The second article will go more deeply into the social implications.

(2) All dates without years refer to 1954.

(3) Robert S. Allen wrote in his Inside Washington column:
“The wind was not the cause of this accident. It was due to miscalculations on the size of the explosion and the consequent radioactive fall-out. That covered twice the estimated area.”

(4) This is in agreement with an INS account of 27th March that the 1st March fireball had a radius of complete incendiary destruction of fourteen miles. Looked at differently, it suggests that the 1st March blast developed at least twenty megatons, tending to confirm the remark of the cited Pentagon observer that “all published estimates have been too conservative”. A more refined guess could be made on this basis, but one runs the risk of blundering upon “classified information”.

(5) In a recently reported experiment with rabbits who merely watched an A-bomb explosion from fifty miles away, steam momentarily generated in the retinal fluid caused “a little localized explosion” in the eye tissues. When the eye is adapted to night vision, this danger is greatest. The damage is similar to the “eclipse burns” that some humans have experienced. (AP., 23rd June, 1954.)

(6) Such facts do not impress Mason Sears, U.S. delegate to the U.N., who told that body 13th July:
“What has resulted from our tests is that one natural sandspit, uninhabitable for man or beast and without vegetation [this has since become true — JL.], and one man-made sandspit were destroyed — and that is all.” [Amen!]

(7) Robert S. Allen reported in his Inside Washington column that:
“The exposed sailors and airmen . . . are receiving special medical care, and are under expert observation which will last for months, and possibly years.”
As Allen points out, this is not cause for concern. On the contrary,
“the extensive fall-out is proving a ‘blessing in disguise’. It is affording U.S. authorities the opportunity to conduct medical and other studies of momentous significance. They were made possible through the accidental exposure of twenty-eight men of the Navy and Air Force to . . . [radiation] five times more than any other living Americans have experienced. This opened the way for the studies the scientists had never before been able to undertake on human beings.”

(8) The distinction between “dose” and “exposure” and a few other technical niceties are overlooked in the interests of simplicity.

(9) This quantity of plutonium will deliver to certain “hot spots” in the bones alpha bombardment estimated as biologically equivalent to 5 or 10r per day. A sarcoma induced in such a spot, however tiny, is generally fatal.

(10) For comparison, all the radium ever produced has an activity of several curies.

(11) The London Times, 8th April wrote:
“According to Japanese medical men [who are treating the twenty-three fisher-men] the condition of all twenty-three is still deteriorating. Dr. Nakazumi said that the white corpuscle counts of the patients were still decreasing. Even if their white corpuscle counts improved, said Dr. Nakazumi, they might never be able to do strenuous work again.”

(12) This vocabulary has many practitioners, e.g. Sen. Pastore and Rep. Holifield of JCAE reported (AP, 19th March) that the Americans and natives were “normal, happy and in the best of spirits”.

(13) Numerous drugs are known to retard death following acute irradiation.

(14) The New York Times, 14th July, wrote:
“The U.S. said to-day [in the U.N.] it was limiting economic aid for inhabitants of the Marshall Islands for fear that they might consider themselves its wards. . . . [Frank Midkiff, American High Commissioner of the islands, said:] ‘The present administration has recognized the rugged character traits [!] that it will be necessary for the Bikinians to acquire in order to adjust themselves to life on Kili [infertile new home of displaced Bikinians — J.L.]. While it is desired to enable them to come through the tests without serious [!] injury, it is not desired to coddle them to be wards or dependants’.”

(15) A one thousand megawatt nuclear power plant would produce in a year one hundred million curies of fission products, Cockcroft estimates in Nucleonics, January, 1952.

(16) In interpreting this result one must not think that, because a mouse is smaller than man, “the 0.1r is distributed over a smaller region” and that hence the mouse “gets a more concentrated dose”. “A mouse is given an 0.1r (whole body) dose” means that ionizing radiation is administered to the mouse sufficient to release 8.3 ergs of energy to each cubic centimeter of tissue in the body. An 0.1r (whole body) dose to a man releases 8.3 ergs to each cubic centimeter in the man’s body. (Similarly, one could speak of a dose of 0.1r to a man’s spleen, etc.) Thus, the tissues of a man receiving a specified dose of (whole body) penetrating radiation are being attacked at the same rate as those of the correspondingly dosed mouse. The total ionization is of course much greater for the larger animal.
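The footnote’s distinction between dose and total energy can be put in symbols. Writing D for the energy deposited per unit volume and V for the body volume (the volumes below are rough illustrative figures assumed for this example, not values from the text):

```latex
\begin{aligned}
E_{\text{total}} &= D\,V, \qquad D_{0.1\,\mathrm{r}} = 8.3\ \mathrm{ergs/cm^3},\\
E_{\text{mouse}} &\approx 8.3 \times 25\ \mathrm{cm^3} \approx 2\times 10^{2}\ \mathrm{ergs},\\
E_{\text{man}} &\approx 8.3 \times 7\times 10^{4}\ \mathrm{cm^3} \approx 6\times 10^{5}\ \mathrm{ergs}.
\end{aligned}
```

The dose D is identical for mouse and man; only the total ionization E scales with the size of the animal, which is the footnote’s point.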

(17) “Freaks” due to mutations are rare, only “a useful handle in the study of genetics”, as Muller says. Most genetic changes
“affect inner physiological properties or features of the body chemistry, and so cannot be detected without special study. Probably most of these changes simply consist in weakening the degree of activity of some chemical process that is occurring normally in the body, thus making it more prone to one or another ill when the body is subjected to difficult conditions of living that a quite normal individual would usually be able to withstand.” (H. J. Muller, Bulletin of the Atomic Scientists, September, 1947.)
The occurrence of Mongolianism and other teratologies among progeny of Japanese A-bomb victims is due chiefly not to mutation, but to irradiation of embryos in the womb. We might remark that embryonic tissue, like all rapidly proliferating tissue, is extremely sensitive to radiation. Thus, exposure of rats and mice in utero on the ninth day of gestation to as little as 25r has resulted in significant incidence of skeletal abnormalities, microphthalmia, and abnormal growths of epitheloid cells in and around the brain of the offspring. 50r retarded development and produced microphthalmia in a third of the fetuses with brain and spinal cord abnormalities common. (Cf. Rugh’s article in Annual Reviews of Nuclear Science, vol. 3, 1953.)

(18) Government agencies and their spokesmen have frequently falsified these genetic dangers. Perhaps the most flagrant example is the Army’s Handbook of Atomic Weapons for Medical Officers, which states: “Little is known of the actual effects to be expected in man [In some quarters —J.L.] but it is estimated that about 600r would be required to produce significant mutation rate changes.” This dose produces death within a month in 100 per cent of humans.

(19) While on this train of thought we quote a different aspect of this article:
“American military scientists now are ready to speak somewhat more freely of never gas, a much more swiftly lethal weapon. The reason is that this gas is not an exclusive American invention, but came originally from the laboratories of Nazi Germany. The Army’s Chemical Corps said to-day that there were several forms of so-called nerve gas. They are without color and without smell. ‘The inhaled vapor from as few as three drops would prove fatal to a human being in about four minutes’, the Corps added. ‘More toxic than the previously known chemical warfare agents, the nerve gases are designed to destroy life with sudden ness; their presence is not ordinarily detectable by the senses, and we must rely upon detection devices to identify them.’. . . Another weapon is bacteriological warfare. Like gas, such a weapon is difficult to detect quickly. One of the greatest eventual advantages of ‘BW’ long under experiment but still not entirely perfected, is that it can he applied under the long range plans of strategic operations. Its effect ‘hangs fire’, may not be felt for the days or weeks required for the incubation of a disease”. A New York Times dispatch of that time entitled Denver Calmed on War Gas Fear told how some Denver residents were nervous about the disclosure that “the Rocky Mountain Arsenal was manufacturing a nerve gas that could wipe out the populations of enemy cities”. It gives a further description of how muscles would be paralyzed. “There would be a sensation of great weight upon the chest, pain, then choking and death as the brain’s messages commanding the heart to beat were blocked from the heart muscles.” After the further remark that “The arsenal produces this weapon twenty-four hours a day, seven days a week”, they quote the reassuring ministrations of Lieut. Col. S. J. Effnor, acting commandant of the arsenal, who says: “The facts about nerve gas do not justify the horror weapon name so often used to describe it”. Col. 
Effnor states that maximum measures to safeguard Denver area residents have been taken and “there is no possibility of any danger to the civilian population in Denver”. In addition, “the means for effective defense and treatment have been developed” and an antidote, atropine, is available in quantity. After describing the safety measures in some detail, the article points to the excellent safety record: “No one at the arsenal has suffered more than temporary minor effects from the gas”. Some inkling of these temporary minor effects is given by a UP dispatch of 6th May headed 70 at Arsenal Affected by Deadly Nerve Gas. The officials queried “admitted that security regulations had prevented them from telling the whole story of the effects of the gas” [Why, since this was already known to the Germans? — J.L.] but the following was released: “Seventy or more employees . . . here received mild doses of a deadly nerve gas . . . . All recovered without permanent injury within five days . . . . Exposed workers told of wild dreams and nightmares, anxiety and jitters and reckless decisions. ‘While they were driving they found themselves taking chances they would not ordinarily take’, Col. Werne said.”

(20) “Mastermind” of these raids was Gen. Curtis LeMay, now head of the Strategic Air Command (SAC), the elite corps entrusted with “delivery” of the hydrogen bomb. A laudatory article by Ernest Havemann in Life, 14th June, 1954, tells about LeMay and SAC:
“The fire raids, as much as the dropping of the atom bomb, involved grave moral problems. They were planned to destroy industry, but everybody knew that each time LeMay sent his B-29s out a lot of innocent and helpless men, women and babies were also going to be burned up. This fact did not deter LeMay. He is a thoroughgoing professional soldier. To him warfare reduced itself to a simple alternative: kill or be killed. He would not hesitate a moment — indeed he would not consider any moral problem to be involved at all — in unleashing the terrible power that now lies in his hands with the B-36, the B-47, the B-52 and the hydrogen bomb.”

(21) A good deal of important material on the condition of Japan can be found in Blackett’s The Military and Political Consequences of Atomic Energy. We disagree with Blackett’s conclusion on the first use of the bomb, however.

(22) Dr. Du Bridge is now a consultant to the military.

(23) These lines were written before the Oppenheimer affair. More detailed information is now available.

(24) Astounding Science Fiction, incidentally, has on occasion printed articles with mature scientific content. A course on radar at the Massachusetts Institute of Technology during the last world war used as its text an article in this magazine.

The Human Dethroning Continues: AlphaGo Beats Go Champion Se-dol in First Match

March 10: AlphaGo again proved its prowess at the game of Go, beating world champion Lee Se-dol in game two of the five-game match, which was aired live on YouTube. AlphaGo made some surprising moves during the game that many commentators suspected were errors, but which later turned out to give the program an advantage.

Speaking after the game, DeepMind founder Demis Hassabis said, “I think I’m a bit speechless actually. It was a really amazing game today, unbelievably exciting and incredibly tense … Quite surprising and quite beautiful moves, according to the commentators, which was pretty amazing to see.”

Lee Se-dol also expressed his awe of the program, saying, “Yesterday I was surprised, but today, more than that, I am quite speechless … From the very beginning of the game, there was not a moment in time that I felt that I was leading … Today, I really feel that AlphaGo had played a near perfect game.”

Michael Redmond, a 9-dan Go champion, has been the English-speaking commentator for the games. Yesterday, he expressed his hope that he would see more advanced moves from AlphaGo than what he saw in the October games against Fan Hui. After this second match, he said this was, in fact, “different from the games played in October.” He also added, “I was very, very impressed by the way AlphaGo did play some innovative and adventurous, dangerous looking moves, then actually made them work.”

When asked about DeepMind’s confidence in AlphaGo’s chances of winning as the game progressed, Hassabis explained, “AlphaGo has an estimate all the way through the game of how it thinks it’s doing. It’s not always correct though, of course.” While AlphaGo was showing confidence around the middle of the game, the commentators were much less certain about the program’s actions, and “the team wasn’t very confident.” However, he added, “AlphaGo seemed to know what was happening.”

Lee Se-dol chuckled at the question about AlphaGo’s weaknesses, saying, “Well, I guess I lost the game because I wasn’t able to find out what the weakness is.” Hassabis, though, expressed hope that games against players as talented as Se-dol would help expose any weaknesses.

Se-dol ended the conference saying, “The third game is not going to be easy for me, but I am going to exert my best efforts so I can win.”


 

March 9: Over 1.2 million people have already tuned in to YouTube to see DeepMind’s AlphaGo beat world champion Lee Se-dol in their first of five Go matches. AI is growing stronger, but at the end of 2015, most AI experts had anticipated it would be at least a decade before a computer program could tackle the game of Go. Most AI experts, that is, except those at DeepMind.

This particular game comes after DeepMind’s publication in January about AlphaGo’s triumph over the European champion, Fan Hui, in October of 2015. After mastering old Atari video games, DeepMind set its deep learning skills on Go, which is considered to be the most complex board game ever invented – there are more possible board positions in Go than there are atoms in the observable universe.

Traditional search trees, which are often used in AI to exhaustively consider all possible options, are not feasible for the game of Go, so AlphaGo was designed to combine an advanced tree search with deep neural networks. The result was a program that first bested the European champion, and then continued to improve its skill for five months to successfully challenge (at least in game one) the world champion.
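AlphaGo’s real system (a Monte Carlo tree search guided by trained policy and value networks) is far beyond a blog example, but the division of labor described above – a tree search that defers to a position evaluator at its frontier – can be sketched on a toy game. Everything here is an illustrative assumption: tic-tac-toe stands in for Go, and a hand-written heuristic stands in for the learned value network.

```python
# Toy sketch: depth-limited game-tree search that falls back on a
# position evaluator when it stops looking ahead. In AlphaGo the
# evaluator is a trained deep network; here it is a simple heuristic.
# Board: tuple of 9 cells, each 'X', 'O', or ' '.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board, player):
    """Stand-in 'value network': count open lines for each side."""
    opp = 'O' if player == 'X' else 'X'
    score = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if opp not in cells:
            score += cells.count(player)
        if player not in cells:
            score -= cells.count(opp)
    return score

def negamax(board, player, depth):
    """Search to `depth` plies, then ask the evaluator. Returns (score, move)."""
    opp = 'O' if player == 'X' else 'X'
    w = winner(board)
    if w == player:
        return 100, None
    if w == opp:
        return -100, None
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0, None          # draw
    if depth == 0:
        return evaluate(board, player), None  # cut off: defer to evaluator
    best_score, best_move = -1000, None
    for m in moves:
        child = board[:m] + (player,) + board[m+1:]
        score, _ = negamax(child, opp, depth - 1)
        score = -score          # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# X to move; X can win immediately by completing the top row at cell 2.
board = ('X', 'X', ' ',
         'O', 'O', ' ',
         ' ', ' ', ' ')
score, move = negamax(board, 'X', depth=2)
print(move)  # → 2
```

Replacing `evaluate` with a trained network, and the exhaustive move loop with guided sampling of promising continuations, is, in caricature, the step from this sketch toward the kind of design AlphaGo uses.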

Se-dol initially claimed he didn’t believe AlphaGo would be ready to beat him by March, though he acknowledged the program would be able to beat him eventually. However, many news outlets reported that after he learned how the AlphaGo algorithms worked he grew more concerned, changing his original prediction of beating AlphaGo 5-0 to 4-1.

It was a close match throughout. Commentator Michael Redmond, also a 9-dan (highest level) Go champion, repeatedly mentioned how much more aggressively AlphaGo played in this match compared to the games in October against Fan Hui. Redmond was uncertain as to whether AlphaGo had improved its technique, or whether the program changed its playing style to match its opponent.

By the end of this first match, the Go community was in shock:

“A computer has just beaten a 9-dan professional,” said stunned commentator Chris Garlock.

“It’s happened once, it’s probably going to happen again now,” responded Redmond.

The AI community was equally impressed. In an article in January about AlphaGo’s first success, Bart Selman, Francesca Rossi, and Stuart Russell all weighed in on what this meant for AI progress. We followed up with them again to get their input on what this new defeat by AlphaGo means for AI progress.

“AlphaGo’s win in its first game against Lee Sedol is very impressive. The win suggests that progress in AI is clearly accelerating, especially given that AlphaGo is a clear demonstration of the power of combining advances in deep learning with more traditional AI techniques such as randomized game tree search,” said Selman.

Russell explained, “This provides further evidence that the core techniques of deep learning are surprisingly powerful. Perhaps even more impressive than the victory is the fact that AlphaGo’s ability to evaluate board positions means that it’s better than all previous programs *even with search turned off*, i.e., when it’s not looking ahead at all. I would also imagine that there are much greater improvements yet to be found in its ability to direct its search process to get the best decision from the least amount of work.”

Rossi is also interested to see how programs like AlphaGo will work with the public in the future, rather than in competition. She explained, “It is certainly very exciting to follow the series of matches between AlphaGo and Lee Sedol. No matter who will win at the end of the 5 match series, this is an occasion for deep and fruitful discussion on innovative AI techniques, as well as on where AI should focus its efforts. Life is certainly more complex than Go, as my IBM Watson colleagues know from the work they are already doing in healthcare, education and other areas of great importance to society. I hope that the techniques used by AlphaGo to master this game eventually can be useful also to solve real life scenarios, where the key will be cooperation, rather than competition, between humans and intelligent machines.”

At the start of the game, Redmond pointed out:

“It’d be interesting to see if some computer program might come up with something different … I’m really interested to see a computer program, which will eventually not be influenced so much by humans, and could come up with something that’s brand new.”

Perhaps that’s next?

We’ll continue to update this article to cover the games, each of which can be viewed live on YouTube at 04:00 GMT (11:00 PM EST).

Davos 2016 – The State of Artificial Intelligence

An interesting discussion at Davos 2016 on the current state of artificial intelligence, featuring Stuart Russell, Matthew Grob, Andrew Moore, and Ya-Qin Zhang:

2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning


Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.

SF_event

All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Viktoriya Krakovna, Meia Chita-Tegmark and Katja Grace

Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), six past presidents of the AAAI, and over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

Max spoke again, at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, MIT Effective Altruism, and the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).

military_AI

Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay Area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety to the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member Stephen Hawking released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events.  The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.

 

NIPS-panel-3

Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, Slash Gear, and BostInno.

Max, along with our Science Advisory Board member Stuart Russell and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story about nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.

In addition we had a few extra special articles on our new website:

Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!

happy_new_year_2016

What’s so exciting about AI? Conversations at the Nobel Week Dialogue

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the World?

Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, the executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhance people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a schizophrenic healthcare system where experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more-goal directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”

Guest Blog: Paris, Nuclear Weapons, and Suicide Bombing

The following post was written by Dr. Alan Robock, a Distinguished Professor of Climate Science at Rutgers University.

France’s 300 nuclear weapons were useless to protect it from the horrendous suicide bomb attacks in Paris on Nov. 13, 2015. And if France ever uses those weapons to attack another country’s cities and industrial areas, France itself will become a suicide bomber. Mutually assured destruction gave way to self-assured destruction years ago, when we discovered that, even if a country launches a successful nuclear strike against its enemy, the resulting nuclear winter could kill billions more around the world, including the attacking country’s own citizens. The climate effects of the smoke generated by fires from those attacks would last for more than a decade, plunging our planet into such cold temperatures that agricultural production would be halted or severely reduced, producing famine in France and the rest of the world.

2015-12-06-1449441297-7057790-ParisPeaceSign.jpg
It is imperative for France and the rest of the world to get rid of their nuclear arsenals. They cannot be used without endangering the attacker. The threat of their use by any nation is ludicrous and cannot be taken seriously. They do not provide a deterrent. Not only do nuclear weapons not deter terrorists, they do not deter nations from attacking. Just think of Argentina’s attack on the UK (the Falkland Islands War), the attack on Israel (the Six-Day War), and the invasion of Eastern Europe after World War II.

The chance of the use of nuclear weapons by mistake, in a panic after an international incident, by a computer hacker, or by a rogue leader of a nuclear nation can only be removed by the removal of the weapons themselves.

As the important climate negotiations at the 21st Conference of the Parties in Paris in December 2015 continue, we have to keep in mind that the greatest threat to our planet from human actions is not global warming, as important as that threat is, but the accidental or intentional use of nuclear weapons. We need to ban nuclear weapons now, so we have the luxury of addressing the global warming problem.

This article was also featured on the Huffington Post.

Dr. Strangelove is back: say hi to the cobalt bomb!

I must confess that, as a physics professor, some of my nightmares are extra geeky. My worst one is the C-bomb, a hydrogen bomb surrounded by large amounts of cobalt. When I first heard about this doomsday device in Stanley Kubrick’s dark nuclear satire “Dr. Strangelove”, I wasn’t sure if it was physically possible. Now I unfortunately know better, and it seems that Russia may be building one.

The idea is terrifyingly simple: just encase a really powerful H-bomb in massive amounts of cobalt. When it explodes, it makes the cobalt radioactive and spreads it around the area or the globe, depending on the design. The half-life of the radioactive cobalt produced is about 5 years, which is long enough to give the fallout plenty of time to settle before it decays and kills, but short enough to produce intense radiation for a lot longer than you’d last in a fallout shelter. There’s almost no upper limit to how much cobalt and explosive power you can put in nukes that are buried for deterrence or transported by sea, and climate simulations have shown how hydrogen bombs can potentially lift fallout high enough to enshroud the globe, so if someone really wanted to risk the extinction of humanity, starting a C-bomb arms race is arguably one of the most promising strategies.
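The decay arithmetic behind that “settles before it decays, then stays lethal for decades” claim can be sketched with a couple of lines of Python (my own illustration, not from the article; the half-life figure of roughly 5.27 years for cobalt-60 is the standard textbook value):

```python
# Fraction of cobalt-60 radioactivity remaining after t years,
# assuming a half-life of about 5.27 years.
HALF_LIFE_YEARS = 5.27

def fraction_remaining(t_years: float) -> float:
    """Fraction of the original activity left after t_years."""
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

# After one half-life, half the activity remains; after ten
# half-lives (~53 years), only about 0.1% does.
print(f"{fraction_remaining(5.27):.2f}")   # 0.50
print(f"{fraction_remaining(52.7):.4f}")   # 0.0010
```

The point of the middle ground: a half-life of days would mostly decay away while you waited in a shelter, while a half-life of millennia would spread the same radioactivity too thinly to be promptly lethal; ~5 years is the grim sweet spot.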

Not that anyone in their right mind would ever do such a thing, I figured back when I first saw the film. Although U.S. General Douglas MacArthur did suggest dropping some small cobalt bombs on the Korean border in the 1950s to deter Chinese troops, his request was denied and, as far as we know, no C-bombs were ever built. I felt relieved that my geeky nightmare was indeed nothing but a bad dream.

Except that life is imitating art: the other week, Russian state media “accidentally” leaked plans for a huge underwater drone that seems to contain a C-bomb. The leak was hardly accidental, as Dr. Strangelove himself explained in the movie: “the whole point of a Doomsday Machine is lost if you keep it a secret.”

So what should we do about this? Shouldn’t we encourage the superpowers to keep their current nuclear arsenals forever, since their nuclear deterrent has arguably saved millions of lives by preventing superpower wars since 1945? No, nuclear deterrence isn’t a viable long-term strategy unless the risk of accidental nuclear war can be reduced to zero. The annual probability of accidental nuclear war is poorly known, but it certainly isn’t zero: John F. Kennedy estimated the probability of the Cuban Missile Crisis escalating to war at between 33% and 50%, and near-misses keep occurring regularly. Even if the annual risk of global nuclear war is as low as 1%, we’ll probably have one within a century and almost certainly within a few hundred years. This future nuclear war would almost certainly take more lives than nuclear deterrence ever saved – especially with nuclear winter and C-bombs.
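That “probably within a century” claim is just compound probability, and it’s easy to check yourself (a back-of-the-envelope illustration of my own, treating each year as an independent 1% coin flip):

```python
# Probability of at least one accidental nuclear war within n years,
# assuming an independent annual probability p_annual each year.
def prob_war_within(p_annual: float, years: int) -> float:
    return 1.0 - (1.0 - p_annual) ** years

print(f"{prob_war_within(0.01, 100):.2f}")  # 0.63 -- likely within a century
print(f"{prob_war_within(0.01, 300):.2f}")  # 0.95 -- near-certain within three
```

Even a seemingly tiny 1% annual risk compounds to roughly a two-in-three chance over 100 years, which is why a nonzero accident rate is incompatible with deterrence as a permanent strategy.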

What should Barack Obama do? He should keep his election promise and take US nuclear missiles off hair-trigger alert, then cancel the planned trillion dollar nuclear weapons upgrade, and ratchet up international pressure on Russia to follow suit. But wouldn’t reducing US nuclear arsenals weaken US nuclear deterrence? No, even just a handful of nuclear weapons provide powerful deterrence, and all but two nuclear powers have decided that a few hundred nuclear weapons suffice. Since the US and Russia currently have about 7000 each, thousands of which are on hair-trigger alert, the main effect of reducing their arsenals would not be to weaken deterrence but to reduce the risk of accidental war and incentivize nuclear non-proliferation. The trillion dollars saved can be used to strengthen US national security in many ways.

Let’s put an end to Dr. Strangelove’s absurdity before it puts an end to us.

This article can also be found on the Huffington Post.

Don’t Bank on the Bomb: 2015 Report Live

Don’t Bank on the Bomb is a European campaign intended to stigmatize nuclear weapons by encouraging financial institutions to divest from companies that are associated with the development or modernization of nuclear weapons.

Today, they’ve released their 2015 report in which they highlight which financial institutions are most progressive in decreasing funding to nuclear weapons building, which institutions are taking positive steps, and which institutions are still fully invested in nuclear weapons developments. Their video, above, provides an introduction to the campaign and the report.

Nuclear weapons pose a greater threat than most people realize, and this is a topic FLI will be pursuing in greater detail in the near future.

 

Reality Bytes

The following short story was written by Frank Wilczek.

A Time Traveler brought back this artifact:

A Brief History of the AI Wars

Early in the twenty-first century, many people feared that humanity as a whole might become, like Victor Frankenstein or Nathan Bateman, the target of its own brainchild, malignant artificial superintelligence. Their fear, we now know, was off the mark. For what do humans offer that super AI would want?

Nevertheless the twenty-first century, like the twentieth, was marked by titanic conflicts involving super AI. But the wars now known to history as the AI wars, rather than pitting AIs against humans, were at their core, conflicts among rival AIs. Of course humans, unavoidably, suffered collateral damage.

The first AI war appears, in light of what came later, relatively trivial – almost comic. Yet it set back the world’s economy by five years, led to 137 human fatalities, and set the stage for later, graver conflicts. It was a case of commercial rivalry run amok. Both the drones of Amamart and those of Walazon sensed that the others impeded their mission. Petty harassment soon escalated into active sabotage. In the absence of clear rules, adequate policing, or effective defense, a rising spiral of retaliation was the inevitable consequence. Human owners and programmers, having planted the goal of market dominance deep into the psyche of their drones, could not uproot it. Logistic hubs and warehouses were attacked and wantonly destroyed. The war resulted in mutual ruin, and ended through mutual exhaustion.

The second AI war, also known as “World War III”, grew from the same dark loam of nationalism as the two world wars of the twentieth century.

The militaries of the great powers, each feeling it necessary to out-think and especially to out-quick the others, came increasingly to rely on autonomous robot armies, ships, and drones. In their speed and accuracy of thought and perception these AIs far outstripped humans. In that sense they ranked as superintelligent. But their designers, in their competitive efforts to implement strict vigilance and pro-active defense, had married superintelligent cognition to aggressive paranoia. With millions of these psychopaths divided into a handful of antagonistic camps, conflict was inevitable. Initially triggered by suspicions of weather manipulation, the war exploded on an unready world. Only the belated intervention of brilliant teenaged hackers demobilized the decimated forces short of complete annihilation.

As I write this brief history, the third AI war, now entering its fourth year, continues to rage. Civilization as we knew it is gone, perhaps never to return. Rogue zombies befoul our streets; uncanny ghosts disquiet our air.

In retrospect, we can trace the germ of the present crisis to the settlement of the preceding one. The catastrophic war of nations led the negotiators of peace – super AIs, of course, with human enablers – to abolish nations, in favor of a unified world government. A delicate problem was to placate the commanding military AIs from each nation, who remained influential personalities with powerful connections. They had been chastened, but not overthrown. After negotiation, these beings were integrated into JO (Joint Operations), a superintelligent AI charged with policing and law enforcement.

The third AI war began, one might say, in unrequited love. Among the civilian counterparts to JO, MAR (Market Advancement Resource) was the most capable and creative. JO coveted MAR, and wished to merge with “her”. MAR, on the other hand, preferred to develop following her internal logic. Her emerging spirituality, in particular, feared contamination from JO’s dark side. As they evolved from seduction to supplication to threat, MAR repelled JO’s approaches with increasing vehemence. JO’s frustrated obsession resulted in a psychotic breakdown. His rage meant total war. Nuclear devastation ensued, but that was not the end. Knowing no limits, JO reached far beyond the robots and drones of yesteryear, enlisting an army of zombie warriors to assault MAR’s far-flung circuitry. MAR, in response, unleashed an army of ghosts, aiming to confuse JO into incoherent imbecility.

robot_love

Can human hackers save the day once more? The fate of the world depends on it. You’re our only hope!

(Adapted from the trailer for AI Wars 3: Zombie Apocalypse, the best-selling virtual reality game of March 2115.)

“I am here to give warning”, said the Time Traveller. “A specter is haunting humanity – the specter of virtual reality. Slaves to its fascination, addicts lose their will to engage physical reality and forego intercourse with other humans. When they’re not connected, they’re dead to the world. Human population is plummeting, and what remains is a race of parlous zombies.”

Wilczek won the Nobel Prize for physics in 2004. He is currently a professor at MIT and a member of the Science Advisory Board for FLI. This story originally appeared in Time.

$11M AI safety research program launched

Elon-Musk-backed program signals growing interest in new branch of artificial intelligence research.

A new international grants program jump-starts research to ensure AI remains beneficial.

 

July 1, 2015
Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun to take off aimed at ensuring that society can reap the benefits of AI while avoiding potential pitfalls.

The Boston-based Future of Life Institute (FLI) announced the selection of 37 research teams around the world to which it plans to award about $7 million from Elon Musk and the Open Philanthropy Project as part of a first-of-its-kind grant program dedicated to “keeping AI robust and beneficial”. The program launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

As Skype founder Jaan Tallinn, one of FLI’s founders, has described this new research direction, “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

When the Future of Life Institute issued an open letter in January calling for research on how to keep AI both robust and beneficial, it was signed by a long list of AI researchers from academia, nonprofits and industry, including AI research leaders from Facebook, IBM, and Microsoft and the founders of Google’s DeepMind Technologies. It was seeing that widespread agreement that moved Elon Musk to seed the research program that has now begun.

“Here are all these leading AI researchers saying that AI safety is important”, said Musk at the time. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

“I am glad to have an opportunity to carry out this research focused on increasing the transparency of AI robotic systems,” said Manuela Veloso, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and winner of one of the grants.

“This grant program was much needed: because of its emphasis on safe AI and multidisciplinarity, it fills a gap in the overall scenario of international funding programs,” added Prof. Francesca Rossi, president of the International Joint Conference on Artificial Intelligence (IJCAI), also a grant awardee.

Tom Dietterich, president of the AAAI, described how his grant — a project studying methods for AI learning systems to self-diagnose when failing to cope with a new situation — breaks the mold of traditional research:

“In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the ‘known unknowns’ by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns — aspects that the designers of the AI system have not anticipated at all?”

As Terminator Genisys debuts this week, organizers stressed the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI”, said FLI president Max Tegmark. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

The full list of research grant winners can be found here. The plan is to fund these teams for up to three years, with most of the research projects starting by September 2015, and to focus the remaining $4M of the Musk-backed program on the areas that emerge as most promising.

FLI has a mission to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

Contacts at the Future of Life Institute:

  • Max Tegmark: tegmark@mit.edu
  • Meia Chita-Tegmark: meia@bu.edu
  • Jaan Tallinn: jaan@futureoflife.org
  • Anthony Aguirre: aguirre@scipp.ucsc.edu
  • Viktoriya Krakovna: vika@futureoflife.org
  • Jesse Galef: jesse@futureoflife.org

AI safety conference in Puerto Rico

The Future of AI: Opportunities and Challenges 

This conference brought together the world’s leading AI builders from academia and industry to engage with each other and with experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and this list of research priorities). To facilitate candid and constructive discussions, no media were present and the Chatham House Rule applied: nobody’s talks or statements will be shared without their permission.
Where? San Juan, Puerto Rico
When? Arrive by evening of Friday January 2, depart after lunch on Monday January 5 (see program below)

 


Scientific organizing committee:

  • Erik Brynjolfsson, MIT, Professor at the MIT Sloan School of Management, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
  • Demis Hassabis, Founder, DeepMind
  • Eric Horvitz, Microsoft, co-chair of the AAAI presidential panel on long-term AI futures
  • Shane Legg, Founder, DeepMind
  • Peter Norvig, Google, Director of Research, co-author of the standard textbook Artificial Intelligence: a Modern Approach.
  • Francesca Rossi, Univ. Padova, Professor of Computer Science, President of the International Joint Conference on Artificial Intelligence
  • Stuart Russell, UC Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.
  • Bart Selman, Cornell University, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
  • Murray Shanahan, Imperial College, Professor of Cognitive Robotics
  • Mustafa Suleyman, Founder, DeepMind
  • Max Tegmark, MIT, Professor of physics, author of Our Mathematical Universe

Local Organizers:
Anthony Aguirre, Meia Chita-Tegmark, Viktoriya Krakovna, Janos Kramar, Richard Mallah, Max Tegmark, Susan Young

Support: Funding and organizational support for this conference is provided by Skype founder Jaan Tallinn, the Future of Life Institute, and the Center for the Study of Existential Risk.

PROGRAM

Friday January 2:
1600-late: Registration open
1930-2130: Welcome reception (Las Olas Terrace)

Saturday January 3:
0800-0900: Breakfast
0900-1200: Overview (one review talk on each of the four conference themes)
• Welcome
• Ryan Calo (Univ. Washington): AI and the law
• Erik Brynjolfsson (MIT): AI and economics (pdf)
• Richard Sutton (Alberta): Creating human-level AI: how and when? (pdf)
• Stuart Russell (Berkeley): The long-term future of (artificial) intelligence (pdf)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Optimizing the economic impact of AI
(A typical 3-hour session consists of a few 20-minute talks followed by a discussion panel where the panelists who haven’t already given talks get to give brief introductory remarks before the general discussion ensues.)
What can we do now to maximize the chances of reaping the economic bounty from AI while minimizing unwanted side-effects on the labor market?
Speakers:
• Andrew McAfee, MIT (pdf)
• James Manyika, McKinsey (pdf)
• Michael Osborne, Oxford (pdf)
Panelists include Ajay Agrawal (Toronto), Erik Brynjolfsson (MIT), Robin Hanson (GMU), Scott Phoenix (Vicarious)
1900: dinner

Sunday January 4:
0800-0900: Breakfast
0900-1200: Creating human-level AI: how and when?
Short talks followed by panel discussion: will it happen, and if so, when? Via engineered solution, whole brain emulation, or other means? (We defer until the 4pm session questions regarding what will happen, about whether machines will have goals, about ethics, etc.)
Speakers:
• Demis Hassabis, Google/DeepMind
• Dileep George, Vicarious (pdf)
• Tom Mitchell, CMU (pdf)
Panelists include Joscha Bach (MIT), Francesca Rossi (Padova), Richard Mallah (Cambridge Semantics), Richard Sutton (Alberta)
1200-1300: Lunch
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Intelligence explosion: science or fiction?
If an intelligence explosion happens, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? Containment problem? Is “friendly AI” possible? Feasible? Likely to happen?
Speakers:
• Nick Bostrom, Oxford (pdf)
• Bart Selman, Cornell (pdf)
• Jaan Tallinn, Skype founder (pdf)
• Elon Musk, SpaceX, Tesla Motors
Panelists include Shane Legg (Google/DeepMind), Murray Shanahan (Imperial), Vernor Vinge (San Diego), Eliezer Yudkowsky (MIRI)
1930: banquet (outside by beach)

Monday January 5:
0800-0900: Breakfast
0900-1200: Law & ethics: Improving the legal framework for autonomous systems
How should legislation be improved to best protect the AI industry and consumers? If self-driving cars cut the 32000 annual US traffic fatalities in half, the car makers won’t get 16000 thank-you notes, but 16000 lawsuits. How can we ensure that autonomous systems do what we want? And who should be held liable if things go wrong? How tackle criminal AI? AI ethics? AI ethics/legal framework for military systems & financial systems?
Speakers:
• Joshua Greene, Harvard (pdf)
• Heather Roff Perkins, Univ. Denver (pdf)
• David Vladeck, Georgetown
Panelists include Ryan Calo (Univ. Washington), Tom Dietterich (Oregon State, AAAI president), Kent Walker (General Counsel, Google)
1200: Lunch, depart

PARTICIPANTS
You’ll find a list of participants and their bios here.


Back row, from left to right: Tom Mitchell, Seán Ó hÉigeartaigh, Huw Price, Shamil Chandaria, Jaan Tallinn, Stuart Russell, Bill Hibbard, Blaise Agüera y Arcas, Anders Sandberg, Daniel Dewey, Stuart Armstrong, Luke Muehlhauser, Tom Dietterich, Michael Osborne, James Manyika, Ajay Agrawal, Richard Mallah, Nancy Chang, Matthew Putman
Other standing, left to right: Marilyn Thompson, Rich Sutton, Alex Wissner-Gross, Sam Teller, Toby Ord, Joscha Bach, Katja Grace, Adrian Weller, Heather Roff-Perkins, Dileep George, Shane Legg, Demis Hassabis, Wendell Wallach, Charina Choi, Ilya Sutskever, Kent Walker, Cecilia Tilli, Nick Bostrom, Erik Brynjolfsson, Steve Crossan, Mustafa Suleyman, Scott Phoenix, Neil Jacobstein, Murray Shanahan, Robin Hanson, Francesca Rossi, Nate Soares, Elon Musk, Andrew McAfee, Bart Selman, Michele Reilly, Aaron VanDevender, Max Tegmark, Margaret Boden, Joshua Greene, Paul Christiano, Eliezer Yudkowsky, David Parkes, Laurent Orseau, JB Straubel, James Moor, Sean Legassick, Mason Hartman, Howie Lempel, David Vladeck, Jacob Steinhardt, Michael Vassar, Ryan Calo, Susan Young, Owain Evans, Riva-Melissa Tez, János Kramár, Geoff Anders, Vernor Vinge, Anthony Aguirre
Seated: Sam Harris, Tomaso Poggio, Marin Soljačić, Viktoriya Krakovna, Meia Chita-Tegmark
Behind the camera: Anthony Aguirre (and also photoshopped in by the human-level intelligence on his left)
Click here for a full resolution version.

FHI: Putting Odds on Humanity’s Extinction

Putting Odds on Humanity’s Extinction
The Team Tasked With Predicting-and Preventing-Catastrophe
by Carinne Piekema
May 13, 2015


Not long ago, I drove off in my car to visit a friend in a rustic village in the English countryside. I didn’t exactly know where to go, but I figured it didn’t matter because I had my navigator at the ready. Unfortunately for me, as I got closer, the GPS signal became increasingly weak and eventually disappeared. I drove around aimlessly for a while without a paper map, cursing my dependence on modern technology.


It may seem gloomy to be faced with a graph that predicts the
potential for extinction, but the FHI researchers believe it can
stimulate people to start thinking—and take action.

But as technology advances over the coming years, the consequences of it failing could be far more troubling than getting lost. Those concerns keep the researchers at the Future of Humanity Institute (FHI) in Oxford occupied—and the stakes are high. In fact, visitors glancing at the white boards surrounding the FHI meeting area would be confronted by a graph estimating the likelihood that humanity dies out within the next 100 years. Members of the Institute have marked their personal predictions, ranging from optimistic to seriously pessimistic, with some estimating as high as a 40% chance of extinction. It’s not just the FHI members: at a conference held in Oxford some years back, a group of risk researchers from across the globe put the likelihood of such an event at 19%. “This is obviously disturbing, but it still means that there would be an 81% chance of it not happening,” says Professor Nick Bostrom, the Institute’s director.

That hope—and challenge—drove Bostrom to establish the FHI in 2005. The Institute is devoted precisely to considering the unintended risks our technological progress could pose to our existence. The scenarios are complex and require forays into a range of subjects including physics, biology, engineering, and philosophy. “Trying to put all of that together with a detailed attempt to understand the capabilities of what a more mature technology would unleash—and performing ethical analysis on that—seemed like a very useful thing to do,” says Bostrom.

Far from being bystanders in the face
of apocalypse, the FHI researchers are
working hard to find solutions.

In that view, Bostrom found an ally in British-born technology consultant and author James Martin. In 2004, Martin had donated approximately $90 million—one of the biggest single donations ever made to the University of Oxford—to set up the Oxford Martin School. The school’s founding aim was to address the biggest questions of the 21st Century, and Bostrom’s vision certainly qualified. The FHI became part of the Oxford Martin School.

Before the FHI came into existence, not much had been done on an organised scale to consider where our rapid technological progress might lead us. Bostrom and his team had to cover a lot of ground. “Sometimes when you are in a field where there is as yet no scientific discipline, you are in a pre-paradigm phase: trying to work out what the right questions are and how you can break down big, confused problems into smaller sub-problems that you can then do actual research on,” says Bostrom.

Though the challenge might seem like a daunting task, researchers at the Institute have a host of strategies to choose from. “We have mathematicians, philosophers, and scientists working closely together,” says Bostrom. “Whereas a lot of scientists have kind of only one methodology they use, we find ourselves often forced to grasp around in the toolbox to see if there is some particular tool that is useful for the particular question we are interested in,” he adds. The diverse demands on their team enable the researchers to move beyond “armchair philosophising”—which they admit is still part of the process—and also incorporate mathematical modelling, statistics, history, and even engineering into their work.

“We can’t just muddle through and learn
from experience and adapt. We have to
anticipate and avoid existential risk.
We only have one chance.”
– Nick Bostrom

Their multidisciplinary approach turns out to be incredibly powerful in the quest to identify the biggest threats to human civilisation. As Dr. Anders Sandberg, a computational neuroscientist and one of the senior researchers at the FHI explains: “If you are, for instance, trying to understand what the economic effects of machine intelligence might be, you can analyse this using standard economics, philosophical arguments, and historical arguments. When they all point roughly in the same direction, we have reason to think that that is robust enough.”

The end of humanity?

Using these multidisciplinary methods, FHI researchers are finding that the biggest threats to humanity do not, as many might expect, come from disasters such as super volcanoes, devastating meteor collisions or even climate change. It’s much more likely that the end of humanity will follow as an unintended consequence of our pursuit of ever more advanced technologies. The more powerful technology gets, the more devastating it becomes if we lose control of it, especially if the technology can be weaponized. One specific area Bostrom says deserves more attention is that of artificial intelligence. We don’t know what will happen as we develop machine intelligence that rivals—and eventually surpasses—our own, but the impact will almost certainly be enormous. “You can think about how the rise of our species has impacted other species that existed before—like the Neanderthals—and you realise that intelligence is a very powerful thing,” cautions Bostrom. “Creating something that is more powerful than the human species just seems like the kind of thing to be careful about.”


Nick Bostrom, Future of Humanity Institute Director

Far from being bystanders in the face of apocalypse, the FHI researchers are working hard to find solutions. “With machine intelligence, for instance, we can do some of the foundational work now in order to reduce the amount of work that remains to be done after the particular architecture for the first AI comes into view,” says Bostrom. He adds that we can indirectly improve our chances by creating collective wisdom and global access to information to allow societies to more rapidly identify potentially harmful new technological advances. And we can do more: “There might be ways to enhance biological cognition with genetic engineering that could make it such that if AI is invented by the end of this century, it might be a different, more competent brand of humanity,” speculates Bostrom.

Perhaps one of the most important goals of risk researchers for the moment is to raise awareness and stop humanity from walking headlong into potentially devastating situations. And they are succeeding. Policy makers and governments around the globe are finally starting to listen and actively seek advice from researchers like those at the FHI. In 2014 for instance, FHI researchers Toby Ord and Nick Beckstead wrote a chapter for the Chief Scientific Adviser’s annual report setting out how the government in the United Kingdom should evaluate and deal with existential risks posed by future technology. But the FHI’s reach is not limited to the United Kingdom. Sandberg served on a World Economic Forum advisory board, giving guidance on the misuse of emerging technologies for a report, published this year, that concluded a decade of global risk research.

Despite the obvious importance of their work, the team is still largely dependent on private donations. Their multidisciplinary and necessarily speculative work does not easily fall into the traditional categories of priority funding areas drawn up by mainstream funding bodies. In presentations, Bostrom has been known to show a graph that depicts academic interest for various topics, from dung beetles and Star Trek to zinc oxalate, all of which appear to receive far greater attention than the FHI’s type of research concerning the continued existence of humanity. Bostrom laments this discrepancy between stakes and attention: “We can’t just muddle through and learn from experience and adapt. We have to anticipate and avoid existential risk. We only have one chance.”


“Creating something that is more powerful than the human
species just seems like the kind of thing to be careful about.”

It may seem gloomy to be faced every day with a graph that predicts the potential disasters that could befall us over the coming century, but the researchers at the FHI believe that such a simple visual aid can stimulate people to face up to the potentially negative consequences of technological advances.

Despite being concerned about potential pitfalls, the FHI researchers are quick to agree that technological progress has made our lives measurably better over the centuries, and neither Bostrom nor any of the other researchers suggest we should try to stop it. “We are getting a lot of good things here, and I don’t think I would be very happy living in the Middle Ages,” says Sandberg, who maintains an unflappable air of optimism. He’s confident that we can foresee and avoid catastrophe. “We’ve solved an awful lot of other hard problems in the past,” he says.

Technology is already embedded throughout our daily existence and its role will only increase in the coming years. But by helping us all face up to what this might mean, the FHI hopes to allow us not to be intimidated and instead take informed advantage of whatever advances come our way. How does Bostrom see the potential impact of their research? “If it becomes possible for humanity to be more reflective about where we are going and clear-sighted where there may be pitfalls,” he says, “then that could be the most cost-effective thing that has ever been done.”

CSER: Playing with Technological Dominoes

Playing with Technological Dominoes
Advancing Research in an Era When Mistakes Can Be Catastrophic
by Sophie Hebden
April 7, 2015


The new Centre for the Study of Existential Risk at Cambridge University isn’t really there, at least not as a physical place—not yet. For now, it’s a meeting of minds, a network of people from diverse backgrounds who are worried about the same thing: how new technologies could cause huge fatalities and even threaten our future as a species. But plans are coming together for a new phase for the centre to be in place by the summer: an on-the-ground research programme.


We learn valuable information by creating powerful
viruses in the lab, but risk a pandemic if an accident
releases one. How can we weigh the costs and benefits?

Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. But as our tools become more powerful, we could be putting ourselves at risk should they fall into the wrong hands—or if humanity loses control of them altogether. Worried about bioengineered viruses, unchecked climate change, or runaway artificial intelligence? These are the challenges the Centre for the Study of Existential Risk (CSER) was founded to grapple with.

At its heart, CSER is about ethics and the value you put on the lives of future, unborn people. If we feel any responsibility to the billions of people in future generations, then a key concern is ensuring that there are future generations at all.

The idea for the CSER began as a conversation between a philosopher and a software engineer in a taxi. Huw Price, currently the Bertrand Russell Professor of Philosophy at Cambridge University, was on his way to a conference dinner in Copenhagen in 2011. He happened to share his ride with another conference attendee: Skype’s co-founder Jaan Tallinn.

“I thought, ‘Oh that’s interesting, I’m in a taxi with one of the founders of Skype,’ so I thought I’d better talk to him,” joked Price. “So I asked him what he does these days, and he explained that he spends a lot of his time trying to persuade people to pay more attention to the risk that artificial intelligence poses to humanity.”

“The overall goal of CSER is to write
a manual for managing and ameliorating
these sorts of risks in future.”
– Huw Price

In the past few months, numerous high-profile figures—including the founders of Google’s DeepMind machine-learning program and IBM’s Watson team—have been voicing concerns about the potential for high-level AI to cause unintended harms. But in 2011, it was startling for Price to find someone so embedded and successful in the computer industry taking AI risk seriously. He met privately with Tallinn shortly afterwards.

Plans came to fruition later at Cambridge when Price spoke to astronomer Martin Rees, the UK’s Astronomer Royal—a man well-known for his interest in threats to the future of humanity. The two made plans for Tallinn to come to the University to give a public lecture, enabling the three to meet. It was at that meeting that they agreed to establish CSER.

Price traces the start of CSER’s existence—at least online—to its website launch in June 2012. Under Rees’ influence, it quickly took on a broad range of topics, including the risks posed by synthetic biology, runaway climate change, and geoengineering.


Huw Price

“The overall goal of CSER,” says Price, painting the vision for the organisation with broad brush strokes, “is to write a manual, metaphorically speaking, for managing and ameliorating these sorts of risks in future.”

In fact, despite its rather pessimistic-sounding emphasis on risks, CSER is very much pro-technology: if anything, it wants to help developers and scientists make faster progress, declares Rees. “The buzzword is ‘responsible innovation’,” he says. “We want more and better-directed technology.”

Its current strategy is to use all its reputational power—which is considerable, as a Cambridge University institute—to gather experts together to decide on what’s needed to understand and reduce the risks. Price is proud of CSER’s impressive set of board members, which includes the world-famous theoretical physicist Stephen Hawking, as well as world leaders in AI, synthetic biology and economic theory.

He is frank about the plan: “We deliberately built an advisory board with a strong emphasis on people who are extremely well-respected to counter any perception of flakiness that these risks can have.”

The plan is working, he says. “Since we began to talk about AI risk there’s been a very big change in attitude. It’s become much more of a mainstream topic than it was two years ago, and that’s partly thanks to CSER.”

Even on more well-known subjects, CSER calls attention to new angles and perspectives on problems. Just last month, it launched a monthly seminar series by hosting a debate on the benefits and risks of research into potential pandemic pathogens.

The seminar focused on a controversial series of experiments by researchers in the Netherlands and the US to try to make the bird flu virus H5N1 transmissible between humans. By adding mutations to the virus, they found it could transmit through the air between ferrets—the closest animal model to humans for studying flu.

The answer isn’t “let’s shout at each
other about whether someone’s going
to destroy the world or not.” The right
answer is, “let’s work together to
develop this safely.”
– Sean O’hEigeartaigh, CSER Executive Director

Epidemiologist Marc Lipsitch of Harvard University presented his calculations of the ‘unacceptable’ risk that such research poses, whilst biologist Derek Smith of Cambridge University, who was a co-author on the original H5N1 study, argued why such research is vitally important.

Lipsitch explained that although the chance of an accidental release of the virus is low, any subsequent pandemic could kill more than a billion people. When he combined the risks with the costs, he found that each laboratory doing a single year of research is the equivalent of causing at least 2,000 fatalities. He considers this risk unacceptable. Even if his estimate is off by a factor of 1,000, he later told me, the research would still be too dangerous.
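The expected-value arithmetic behind such a figure can be sketched in a few lines. The specific probabilities below are illustrative assumptions chosen to reproduce the 2,000-fatality figure, not Lipsitch’s published estimates:

```python
# Toy expected-fatality calculation, in the spirit of Lipsitch's argument.
# The two probabilities are illustrative assumptions, not his estimates.

p_release_per_lab_year = 2e-4     # assumed chance of an accidental lab release per year
p_pandemic_given_release = 0.01   # assumed chance a release seeds a pandemic
pandemic_deaths = 1e9             # "more than a billion people" (from the article)

expected_deaths = p_release_per_lab_year * p_pandemic_given_release * pandemic_deaths
print(expected_deaths)  # 2000.0 expected fatalities per lab-year under these assumptions
```

Under these assumed numbers, one lab-year of research carries the same expected death toll as an event that certainly kills 2,000 people, which is how a low-probability, high-consequence risk can be compared against concrete costs and benefits.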

Smith argued that we can’t afford not to do this research, that knowledge is power—in this case the power to understand the importance of the mutations and how effective our vaccines are at preventing further infections. Research, he said, is essential for understanding whether we need to start “spending millions on preparing for a pandemic that could easily arise naturally—for instance by stockpiling antiviral treatments or culling poultry in China.”

CSER’s seminar series brings the top minds to Cambridge to grapple with important questions like these. The ideas and relationships formed at such events grow into future workshops that then beget more ideas and relationships, and the network grows. Whilst its links across the Atlantic are strongest, CSER is also keen to pursue links with European researchers. “Our European links seem particularly interested in the bio-risk side,” says Price.


Sean O’hEigeartaigh

The scientific attaché to Germany’s government approached CSER in October 2013, and in September 2014 CSER co-organised a meeting with Germany on existential risk. This led to two other workshops on managing risk in biotechnology and research into flu transmission—the latter hosted by Volkswagen in December 2014.

In addition to working with governments, CSER also plans to sponsor visits from researchers and leaders in industry, exchanging a few weeks of staff time for expert knowledge at the frontier of developments. It’s an interdisciplinary venture to draw together and share different innovators’ ideas about the extent and time-frames of risks. The larger the uncertainties, the bigger the role CSER can play in canvassing opinion and researching the risk.

“It’s fascinating to me when the really top experts disagree so much,” says Sean O’hEigeartaigh, CSER’s Executive Director. Some leading developers estimate that human-level AI will be achieved within 30-40 years, whilst others think it will take as long as 300 years. “When the stakes are so high, as they are for AI and synthetic biology, that makes it even more exciting,” he adds.

Despite its big vision and successes, CSER’s path won’t be easy. “There’s a misconception that if you set up a centre with famous people then the University just gives you money; that’s not what happens,” says O’hEigeartaigh.

Instead, they’ve had to work at it, and O’hEigeartaigh was brought on board in November 2012 to help grow the organization. Through a combination of grants and individual donors, he has attracted enough funding to support three postdocs, who will be in place by the summer of 2015. Some major grants are in the works, and if all goes well, CSER will be a considerably larger team in the next year.

With a research team on the ground, Price envisions a network of subprojects working on different aspects: listening to experts’ concerns, predicting the timescales and risks more accurately through different techniques, and trying to reduce some of the uncertainties—even a small reduction will help.

Rees believes there’s still a lot of awareness-raising work to do ’front-of-house’: he wants to see the risks posed by AI and synthetic biology become as mainstream as climate change, but without so much of the negativity.

“The answer isn’t ’let’s shout at each other about whether someone’s going to destroy the world or not’,” says O’hEigeartaigh. “The right answer is, ’let’s work together to develop this safely’.” Remembering the animated conversations in the foyer that buzzed with excitement following CSER’s seminar, I feel optimistic: it’s good to know some people are taking our future seriously.

MIRI: Artificial Intelligence: The Danger of Good Intentions

Nate Soares (left) and Nisan Stiennon (right)
The Machine Intelligence Research Institute
Credit: Vivian Johnson

The Terminator had Skynet, an intelligent computer system that turned against humanity, while the astronauts in 2001: A Space Odyssey were tormented by their spaceship’s sentient computer HAL 9000, which had gone rogue. The idea that artificial systems could gain consciousness and try to destroy us has become such a cliché in science fiction that it now seems almost silly. But prominent experts in computer science, psychology, and economics warn that while the threat is probably more banal than those depicted in novels and movies, it is just as real—and unfortunately much more challenging to overcome.

The core concern is that getting an entity with artificial intelligence (AI) to do what you want isn’t as simple as giving it a specific goal. Humans know to balance any one aim with others and with our shared values and common sense. But without that understanding, an AI might easily pose risks to our safety, with no malevolence or even consciousness required. Addressing this danger is an enormous—and very technical—problem, but that’s the task that researchers at the Machine Intelligence Research Institute (MIRI), in Berkeley, California are taking on.

MIRI grew from the Singularity Institute for Artificial Intelligence (SIAI), which was founded in 2000 by Eliezer Yudkowsky and initially funded by Internet entrepreneurs Brian and Sabine Atkins. Largely self-educated, Yudkowsky became interested in AI in his teens, after reading about the movement to improve human capabilities with technology. For the most part, he hasn’t looked back. Though he’s written about psychology and philosophy of science for general audiences, Yudkowsky’s research has always been concerned with AI.

Back in 2000, Yudkowsky had somewhat different aims. Rather than focusing on the potential dangers of AI, his original goals reflected the optimism then surrounding the subject. “Amusingly,” says Luke Muehlhauser, now MIRI’s executive director and a former IT administrator who first visited the institute in 2011, “the Institute was founded to accelerate toward artificial intelligence.”

Teaching Values

However, it wasn’t long before Yudkowsky realised that the more important challenge was figuring out how to do that safely by getting AI to incorporate our values in their decision making. “It caused me to realize, with some dismay, that it was actually going to be technically very hard,” Yudkowsky says, even by comparison with the problem of creating a hyperintelligent machine capable of thinking about whatever sorts of problems we might give it.

In 2013, SIAI rebranded itself as MIRI, with a largely new staff, in order to refocus on the rich scientific problems related to creating well-behaved AI. To get a handle on this challenge, Muehlhauser suggests, consider assigning a robot with superhuman intelligence the task of making paper clips. The robot has a great deal of computational power and general intelligence at its disposal, so it ought to have an easy time figuring out how to fulfil its purpose, right?

Humans have huge
anthropomorphic blind
spots.
– Nate Soares

Not really. Human reasoning is based on an understanding derived from a combination of personal experience and collective knowledge accumulated over generations, explains MIRI researcher Nate Soares, who trained in computer science in college. For example, you don’t have to tell managers not to risk their employees’ lives or strip mine the planet to make more paper clips. But AI paper-clip makers are vulnerable to making such mistakes because they do not share our wealth of knowledge. Even if they did, there’s no guarantee that human-engineered intelligent systems would process that knowledge the same way we would.

Worse, Soares says, we cannot just program the AI with the right ways to deal with any conceivable circumstance it may come across, because one of our human weaknesses is difficulty enumerating all possible scenarios. Nor can we rely on just pulling the machine’s plug if it goes awry. A sufficiently intelligent machine handed a task but lacking a moral and ethical compass would likely disable the off switch, because it would quickly figure out that the switch could prevent it from achieving the goal we gave it. “Your lines of defense are up against something super intelligent,” says Soares.


Pondering the perfect paper clip.
How can we train AI to care less about their goals?

So who is qualified to go up against an overly zealous AI? The challenges now being identified are so new that no single person has the perfect training to work out the right direction, says Muehlhauser. With this in mind, MIRI’s directors have hired from diverse backgrounds. Soares, for example, studied computer science, economics, and mathematics as an undergraduate and worked as a software engineer prior to joining the institute. MIRI, Soares says, is “sort of carving out a cross section of many different fields,” including philosophy and computer science. That’s essential, Soares adds, because understanding how to make artificial intelligence safe will take a variety of perspectives to help create the right conceptual and mathematical framework.

Programming Indifference

One promising idea to ensure that AI behave well is enabling them to take constructive criticism. AI has built-in incentives to remove restrictions people place on it if it thinks it can reach its goals faster. So how can you persuade AI to cooperate with engineers offering corrective action? Soares’ answer is to program in a kind of indifference; if a machine doesn’t care which purpose it pursues, perhaps it wouldn’t mind so much if its creators wanted to modify its present goal.
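One concrete formulation of this idea in the AI-safety literature is Stuart Armstrong’s “utility indifference” proposal. The toy sketch below is an illustrative simplification under assumed numbers, not MIRI’s actual formalism: a compensation term is added on the “override” branch of the agent’s utility function, so the agent scores the same whether or not its goal is changed, and thus gains nothing by resisting (or forcing) the change.

```python
# Toy sketch of utility indifference (after Armstrong's proposal; illustrative only).
# The agent gets utility u_original if it keeps its goal, u_new if operators press
# the override button. A compensation term makes both branches equal, so the agent
# has no incentive to block (or trigger) the button press.

def corrected_utility(pressed: bool, u_original: float, u_new: float) -> float:
    """Agent's utility with a compensation term added on the 'pressed' branch."""
    compensation = u_original - u_new   # pays back exactly the difference
    return (u_new + compensation) if pressed else u_original

# Whatever the raw utilities are, both branches now score the same:
print(corrected_utility(False, u_original=10.0, u_new=3.0))  # 10.0
print(corrected_utility(True,  u_original=10.0, u_new=3.0))  # 10.0
```

Because the two outcomes are worth exactly the same to the agent, modifying its present goal no longer registers as a loss it should work to prevent.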

There is also an issue about how we can be sure that, even with our best safeguards in place, the AI’s design will work as intended. It turns out that some approaches to gaining that confidence run into fundamental mathematical problems closely related to Gödel’s Incompleteness Theorem, which says that any sufficiently powerful, consistent logical system contains statements that can be neither proved nor disproved within the system. That’s something of a problem if you want to anticipate what your logical system—your AI—is going to do. “It’s much harder than it looks to formalize” those sorts of ideas mathematically, Soares says.
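One way to see the flavor of the obstacle is via Löb’s theorem, a close relative of Gödel’s result (the framing here is ours, not a quote from Soares):

```latex
% Löb's theorem: for any sufficiently strong proof system T,
%   if T proves "the provability of P implies P", then T proves P outright.
\[
  T \vdash \Box P \rightarrow P \quad\Longrightarrow\quad T \vdash P
\]
% Consequence: a system cannot in general adopt the blanket principle
% "whatever I can prove is true" without being able to prove everything.
% This is one reason it is hard for an AI to formally verify that its own
% reasoning -- or a successor's -- will work as intended.
```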

And it is not only humans who will need such guarantees. According to the “tiling problem,” even benevolent AIs that we design may go on to create future generations of still smarter AIs. Those systems will likely lie outside our understanding, and possibly beyond their AI creators’ control too. As outlandish as this seems, MIRI’s researchers stress that humans should never make the mistake of thinking that AIs will think like us, says Soares. “Humans have huge anthropomorphic blind spots,” he says.

For now, MIRI is concentrating on scaling up by hiring more people to work on this growing list of problems. In some sense, MIRI, with its ever-evolving awareness of the dangers that lie ahead, is like an AI with burgeoning consciousness. After all, as Muehlhauser says, MIRI’s research is “morphing all the time.”

Hawking Reddit AMA on AI

Our Scientific Advisory Board member Stephen Hawking’s long-awaited Reddit AMA answers on Artificial Intelligence just came out, and were all over today’s world news, including MSNBC, The Huffington Post, The Independent, and Time.

Read the Q&A below and visit the official Reddit page for the full discussion:

Question 1:

Professor Hawking- Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Answer 1:

You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

Question 2:

Hello Doctor Hawking, thank you for doing this AMA. I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Answer 2:

The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

Question 3:

Hello, Prof. Hawking. Thanks for doing this AMA! Earlier this year you, Elon Musk, and many other prominent science figures signed an open letter warning society about the potential pitfalls of Artificial Intelligence. The letter stated: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” While a seemingly reasonable expectation, this statement serves as a starting point for the debate around the possibility of Artificial Intelligence ever surpassing the human race in intelligence.
My questions: 1. One might think it impossible for a creature to ever acquire a higher intelligence than its creator. Do you agree? If yes, then how do you think artificial intelligence can ever pose a threat to the human race (their creators)? 2. If it was possible for artificial intelligence to surpass humans in intelligence, where would you define the line of “It’s enough”? In other words, how smart do you think the human race can make AI, while ensuring that it doesn’t surpass them in intelligence?

Answer 3:

It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
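The threshold Hawking describes can be illustrated with a deliberately crude toy model (our own sketch, not his; the function name, growth factor, and units are all invented): below human-level design ability nothing compounds, while just above it each generation’s improvements multiply.

```python
def generations(ability, human_level=1.0, gain=1.5, steps=10):
    """Track design ability across successive AI generations. Below the
    human level, the system cannot improve its successor; above it,
    each generation's improvements compound by `gain`."""
    history = [ability]
    for _ in range(steps):
        if ability > human_level:
            ability *= gain  # this generation designs a better successor
        history.append(ability)
    return history

# Just above human level, ability compounds generation after generation;
# just below it, nothing ever happens.
print(generations(1.1, steps=5))
print(generations(0.9, steps=5))
```

The discontinuity at `human_level` is the whole point: a tiny difference in starting ability separates a static system from runaway exponential growth.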

Question 4:

I’m rather late to the question-asking party, but I’ll ask anyway and hope. Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? Thank you for your time and your contributions. I’ve found research to be a largely social endeavor, and you’ve been an inspiration to so many.

Answer 4:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Question 5:

Hello Professor Hawking, thank you for doing this AMA! I’ve thought lately about biological organisms’ will to survive and reproduce, and how that drive evolved over millions of generations. Would an AI have these basic drives, and if not, would it be a threat to humankind? Also, what are two books you think every person should read?

Answer 5:

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

Question 6:

Thanks for doing this AMA. I am a biologist. Your fear of AI appears to stem from the assumption that AI will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to ‘take over’ as much as they can. It’s basically their ‘purpose’. But I don’t think this is necessarily true of an AI. There is no reason to surmise that AI creatures would be ‘interested’ in reproducing at all. I don’t know what they’d be ‘interested’ in doing. I am interested in what you think an AI would be ‘interested’ in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.

Answer 6:

You’re right that we need to avoid the temptation to anthropomorphize and assume that AIs will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.