
NOT WITHOUT US

A Challenge to Computer Professionals to Help Bring the Present Insanity to a Halt

By Joseph Weizenbaum, professor of computer science at MIT

(This is an English translation of a talk given in German to the Association of Computer Professionals in West Germany in July 1986. You are welcome to reproduce and distribute it.)

Whenever I come to Europe, especially to West Germany, I am amazed by the normality of everyday life: superhighways, “music” that assaults one in restaurants, the many parks, the forests of television antennas on the roofs of houses and so on. I am amazed because of Europe’s geographic position and all that follows from it. In West Germany, for example, there is the border with the other Germany, dense with military installations of all sorts. There are holes in the street that are intended to be filled with nuclear land mines if Russian tanks should come. These are signs of Europe’s physical and psychological proximity to the final catastrophe.

We in America are, in a certain sense, no more distant from the catastrophe than the Europeans are. Not only Chernobyl, but also the threat of war is everywhere. And war is everyone’s enemy. In case of war, whether initiated unintentionally by technology allegedly designed to avert war, or by so-called statesmen or women who thought it their duty to push the button, you may die ten minutes earlier than we in fortress America, but we shall all die.

But we have no holes in our streets for atomic land mines that are intended to delay Soviet tank regiments. We see our missile silos only now and then — that is, only whenever it pleases someone to show them to us on television. No matter how passionately our government tries to convince us that the nasty Soviets are effectively as near to us as to Europeans, that they threaten us from Cuba and Nicaragua, Americans are, on the whole, quite unconvinced and untroubled by such efforts. The American experience of war has allowed us to develop an “it can’t happen here” attitude, rather than a concrete fear of what appears to be far removed from the immediate concerns of daily life.
We know that it is emotionally impossible for anyone to live for very long in the face of immediate threats to existence without bringing to bear psychological mechanisms that will exclude these dangers from consciousness, permitting them to surface only rarely. But when repression necessitates systematically misdirected efforts, or excludes potentially life-saving behavior, then it is time to replace it with a conscious effort to find the prod to correct action.

That time has come for computer professionals. We now have the power radically to turn the state of the world in directions conducive to life.
In order to gain the necessary courage (not all of us are saints or heroes) we have to understand that for us as individuals, as well as for those we love, our present behavior is far more dangerous, even life threatening, than what healthy common sense now demands of us. None of the weapons that today threaten every human being with murder, and whose design, manufacture and sale condemns countless people to starvation, could be developed without the earnest cooperation of computer professionals. Without us, the arms race, especially the qualitative arms race, cannot march another step.

What does this say to us?
First, that we computer experts — as well as specialists in many other technical domains — share in the guilt of having brought about the present dangerous state of the world. Those among us who, perhaps without being aware of it, devote our talents and strengths to death rather than to life have little right to curse politicians, statesmen and women for not bringing us peace. It isn’t enough to make pretty posters that can be carried in demonstrations. Those who carry them must care whether their daily work helps to make possible the very devices the use of which they are protesting.

At this point, the domain called Artificial Intelligence (AI) comes especially to mind. Many of the technical tasks and problems in this subdiscipline of computer science stimulate the imagination and creativity of technically oriented workers particularly strongly. Goals like making a thinking being out of the computer, giving the computer the ability to understand spoken language, making it possible for the computer to see, offer nearly irresistible temptations to those among us who have not fully sublimated our playful sandbox fantasies, or who mean to satisfy our delusions of omnipotence on the computer stage. Such tasks are extraordinarily demanding and interesting. Robert Oppenheimer called them sweet. Besides, research projects in these areas are generously funded. The required moneys usually come out of the coffers of the military, at least in America.

It is enormously tempting and, in Artificial Intelligence work, seductively simple to lose or hide oneself in details, in subproblems and their subproblems and so on. The actual problems on which one works — and which are so generously supported — are disguised and transformed until their representations are mere fables: harmless, innocent, lovely fairy tales.
Here is an example. A doctoral student characterized his projected dissertation task as follows. A child, six or seven years old, sits in front of a computer display that shows a kitten and a bear, in full color. The kitten is playing with a ball. The child speaks to the computer system: “The bear should say ‘thank you’ when someone gives him something.” The system responds in a synthetic, but nevertheless pleasing voice: “Thank you, I understand.” Then the child again: “Kitty, give your ball to your friend.” Immediately we see the kitten on the computer display throw the ball to the bear. Then we hear the bear say: “Thank you, my dear kitten.”

This is the kernel of what the system, whose development is to constitute the student’s doctoral work, is to accomplish. Seen from a technical point of view, the system is to understand spoken instructions — that alone is not simple — and translate them into a computer program which it is then to integrate seamlessly into its own computational structure. Not at all trivial, and beyond that, quite touching.
Now a translation to reality. A fighter pilot is addressed by his pilot’s assistant system: “Sir, I see an enemy tank column below. Your orders, please.” The pilot: “When you see something like that, don’t bother me, destroy the bastards and record the action. That’s all.” The system answers: “Yes, sir!” and the plane’s rockets fly earthward.
This pilot’s assistant system is one of three weapons systems that are expressly described, mainly as a problem for artificial intelligence, in the Strategic Computing Initiative, a new major research and development program of the American military. Over $600,000,000 are to be spent on this program in the next four or five years.
It isn’t my intention to assail or revile military systems at this point. I intend this example from the actual practice of academic artificial intelligence research in America to illustrate the euphemistic linguistic dissimulation whose effect it is to hinder thought and, ultimately, to still conscience.

I don’t know whether it is especially computer science or its subdiscipline Artificial Intelligence that has such an enormous affection for euphemism. We speak so readily of computer systems that understand, that see, decide, make judgments, and so on, without ourselves recognizing our own superficiality and immeasurable naivete with respect to these concepts. We anesthetize our ability to evaluate the quality of our work and, what is more important, to identify and become conscious of its end use.

The student mentioned above imagines his work to be about computer games for children, involving perhaps toy kittens, bears and balls. Its actual and intended end use will probably mean that some day a young man, quite like the student himself — someone with parents and possibly a girlfriend — will be set afire by an exploding missile sent his way by a system shaped by his own research. The psychological distance between the student’s conception of his work and its actual implications is astronomic. It is precisely that enormous distance that makes it possible not to know and not to ask if one is doing sensible work or contributing to the greater efficiency of murderous devices.
One can’t escape this state without asking, again and again: “What do I actually do? What is the final application and use of my work? Am I content or ashamed to have contributed to this use?”

I am reminded in this context of a well known American journalist who, during a Middle East hijacking, suggested that under certain circumstances the Israelis shoot ten Arab prisoners and, should the circumstances not change, shoot ten more the next day, and so on. He should not have made this suggestion unless he was prepared to go personally among the prisoners and look into the eyes of the men, some of whom would hear him say, “You, you will die today.” He should have been prepared as well to hold the pistol to the heads of those he selected and command his own finger to pull the trigger.

Just so should we ask ourselves about our own work. Once we have abandoned the prettifying of our language, we can begin to speak among ourselves realistically and in earnest about our work as computer professionals.
“You, colleague of many years, you are working on a machine consisting of two to the fifteenth and more microprocessors running simultaneously. With the help of such a machine one can first simulate then construct much more efficient, smaller and lighter hydrogen bombs. Imagine, for a moment, you were an eyewitness at Hiroshima in 1945; you saw people stripped of their skin die. Would you want to make this happen thousands of times more? Would you so torture a single human being with your own hands? If you would not, regardless of what end would be served, then you must stop your work.”

One should ask similar questions with respect to other branches of computer science, for example, with respect to attempts to make it possible for computer systems to see. Progress in this domain will be used to steer missiles like the Cruise and Pershing ever more precisely to their targets, where murder will be committed.

Many will argue that the computer is merely a tool. As such it can be used for good or evil. In and of itself, it is value free. Scientists and technicians cannot know how the products of their work will be applied, whether they will find a good or an evil use. Hence scientists and technicians cannot be held responsible for the final application of their work.
That point of view is manifested in the world famous Draper Laboratory, next door to the MIT building where I work. Draper is devoted almost entirely to missile guidance and submarine navigation. Many of the scientists employed there argue that the systems they work on can take men to the moon and bring them back, as well as guarantee that missiles aimed at Moscow will actually hit Moscow, their target. They cannot know in advance, they say, which of these two or still other goals their work will serve in the end. How then can they be held responsible for all the possible consequences of their work?

So it is, on the whole, with computer professionals. The doctoral student I mentioned, who wishes to be able to converse with his computer display, does in fact believe that future applications of his work will be exclusively innocent ones, like children’s games. Perhaps his research is not sponsored by the Pentagon’s Strategic Computing Initiative; perhaps he never even heard of SCI. How then can he be held responsible if his work is put to anti-human use?
Here is where we come to the essence of the matter. Today we know with virtual certainty that every scientific and technical result will, if at all possible, be put to use in military systems.

The computer, together with the history of its development, is perhaps the key example. But we should also think in this connection of everything that has to do with flight, or of things atomic, of communications systems, satellites, space ships, and most of the scientific achievements of the human genius. We may then convince ourselves that in the concrete world in which we live, the burden of proof rests with those who assert that a specific new development is immune from the greed of the military.
In these circumstances, scientific and technical workers cannot escape their responsibility to inquire about the end use of their work. They must then decide, once they know to what end it will be used, whether or not they would serve these ends with their own hands.

I don’t believe the military, in and of itself, to be an evil. Nor would I assert that a specific technology is an evil merely because it has been adopted by the military. In the present state of the evolution of the sovereign nation-state — in other words, in the insane asylum in which we live — each state needs a military just as every city needs a fire department. But no one pleads for a fire station on every corner, and no one wishes for a city fire department that makes a side business of committing arson in the villages adjacent to the city.
But we see our entire world, particularly its universities and science and engineering facilities, being more profoundly militarized every day. “Little” wars burn in almost every part of the earth. (They serve, in part, to test the high tech weapons of the “more advanced nations.”) More than half of all the earth’s scientists and engineers work more or less directly in military institutions, or in institutions supported by the military. That is an evil that must be resisted.
We must also recognize that it is only our already internalized habit of prettifying our language, in order not to arouse our conscience, that permits us to speak in terms of weapons and weapons delivery systems at all, when we are, in fact, discussing atomic explosives and hydrogen bombs. Those aren’t weapons, they are mass murder machines and mass murder machine delivery systems. That is how we should speak of them: clearly, distinctly, and without evasion. Once we recognize that a nuclear mass murder machine is nothing other than an instant Auschwitz — without railroads or Eichmanns or Dr. Mengele, but an Auschwitz just the same — can we continue then to work on systems that steer these devices to living cities?

That is the question I ask. Each of us must earnestly ask ourselves such questions and deeply consider the responses we find in ourselves. Our answers must finally manifest themselves in our actions — concretely, in what we do every day.

Probably the most pandemic mental illness of our time is the almost universally held belief that the individual is powerless. This self-fulfilling delusion will surely be offered as a counter argument to my theses. I demand, do I not, that a whole profession refuse to participate in the murderous insanity of our time. “That cannot be effective,” I can already hear it said. “That is plainly impossible. After all, if I don’t do it, someone else will.” First, and on the most elementary level, “If I don’t do it, someone else will” cannot serve as a basis of moral behavior. Every crime imaginable can be justified with those words. For example: If I don’t steal the sleeping drunk’s money, someone else will. But it is not at all trivial to ask after the meaning of effectiveness in the present context. Surely, effectiveness is not a binary matter, an either/or matter.
To be sure, if what I say here were to induce a strike on the part of all scientists with respect to weapons work, that would have to be counted as effective. But there are many much more modest measures of effectiveness.
I think it was George Orwell who once wrote, “The highest duty of intellectuals in these times is to speak the simplest truths in the simplest possible words.” For me that means, first of all, to articulate the absurdity of our work in my actions, my writings and with my voice. I hope thereby to stir my students, my colleagues, everyone to whom I can speak directly. I hope to encourage those who have already begun to think similarly, and to be encouraged by them, and possibly rouse others out of their slumber. Courage, like fear, is catching.
Even the most modest success in such attempts has to be counted as effective. Beyond that, in speaking as I do, I put what I discuss here on the public agenda and contribute to its legitimation. These are modest goals that can surely be reached.

But, finally, I want to address such larger goals as, for example:

- Ridding the world of nuclear mass murder devices and perhaps also of nuclear power generators.

- So reordering the world that it becomes impossible ever again to convince workers of one country that it is a necessity of life that they feed their families on the flesh and the blood and the tears of people of other countries. (That is, unfortunately, the fate of many workers today, and not only those who earn their daily bread in armaments factories, but equally those of us whose daily work is to sharpen high tech weapons.)

- So reordering the world that every human being has available to him or herself all material goods necessary for living in dignity. (I have often heard well-meaning people say that, if we apply technology, especially computer and communications technology, wisely, we may reach this goal in perhaps 50 to 100 years. But we can reach it sooner, and without waiting for technological advances. For the obstacle is not the absence of technology, it is the absence of political will.)

I once heard Elie Wiesel say: “We must believe the impossible is possible.” I understood that in two different ways. First, had we been able to believe that “the land of the poets and the thinkers” could give birth to human extermination factories, we might not have had to experience Bergen Belsen. The impossible horror proved possible and became reality.
But there is a more hopeful interpretation. It seemed impossible in the America of only 150 years ago ever to abolish the slavery of the black people. The entire economy of America’s south was built on cotton. Cotton could neither be planted nor harvested, it was believed, without the unpaid toil of thousands of human beings out of whose wretchedness the plantation master could squeeze his profit. Nevertheless, at first only a few farseeing men and women, dreamers all, in Massachusetts, later many more citizens, came to believe the impossible was possible, that the slaves could be freed and slavery ended.
The impossible goals I mention here are possible, just as it is possible that we will destroy the human race. I alone can neither achieve the one nor prevent the other. But neither can it be done without me, without us.
I have no right to demand anything from my colleagues. But they must know that we have the power either to increase the efficiency of the mass murder instruments we have and thereby make the murder of our children more likely, or to bring the present insanity to a halt, so that we and our children have a chance to live in human dignity.

Let us think about what we actually accomplish in our work, about how it will be used, and whether we are in the service of life or death.

Computers for Christ – Chicago