Role of bioengineering in CFS, GWS & AIDS

Professor Donald W. Scott, M.A. M.Sc

Role of bioengineering in CFS, GWS & AIDS
CFS Radio Program
Dec. 19th, 1999
Roger G. Mazlen, M.D. Host
with
Donald W. Scott http://www.carolsweb.net/ccf/
http://member.aol.com/rgm1/private/transcr.htm

Dr. Mazlen
Our guest is waiting from Canada and will be with us shortly. We're going to be talking with Don W. Scott, the author of The Brucellosis Triangle. Interestingly enough, Donald Scott was our guest on the year-end show in 1998. He was here on the show the 20th of December and we're here again at the year-end show and we're going to follow up and it's going to be a really exciting and interesting follow-up. We're going to be talking about, basically, infectious diseases to which our Chronic Fatigue Syndrome audience is particularly susceptible, most of them being significantly immunosuppressed and vulnerable to just about anything that's contagious that comes along. So they should be listening carefully. Now, we're going to talk about the West Nile Virus, which was present with us here in New York during the late summer. The West Nile Virus appeared in New York and then was found in Connecticut and New Jersey. It's mosquito-borne. It killed 7 people in those three states before the summer was over and it's been predicted, and I'm quoting from the New York Times in an article published on December 15th, they said "The virus would likely re-emerge next spring when mosquitoes come out of hibernation." This quote is from health experts testifying before the United States Senate. So, with that, we're going to kick off and talk to Donald W. Scott because he has some news for our listening audience about where this West Nile Virus may actually have come from. We're not saying we know it for sure, and we're not documenting it, but we want to present you with some facts. Hi Donald, welcome to the show.

Don Scott
Good morning, it’s nice to be here.

Dr. Mazlen
And I’d like you to tell our audience something about the information that you have from the Riegle Committee with regard to this West Nile Virus.

Don Scott
As you and many of your listeners will recall, when the West Nile Virus was first identified, official government sources said, "well, this is very strange because there has never been any West Nile virus in the continental United States that we ever knew of." However, if one turns to the Riegle Report, which was authored and supervised in its compilation by Donald Riegle, who was a senator from Michigan, one will find that on May 21st, 1985, the United States had a supply of West Nile virus and that they shipped a quantity of it to Saddam Hussein. And that was in 1985, so that Saddam could use it against Iran, with whom Iraq, at that point, was at war. So, there was West Nile virus in the United States well before 1985. It was shipped in 1985. There are still stocks of such a virus in the United States, and there are two possibilities as to where the current outbreak came from: either there was an inadvertent escape of the virus, on mosquitoes or other insects, from a local facility such as Plum Island, or there is always the possibility that one of Saddam Hussein's agents returned to the United States and was able to release a quantity of West Nile virus.

Dr. Mazlen
Well, I hope everybody's listening because you all have Congressmen and Senators and public officials that you might know and you might be able to ask them about this and ask them to inquire about where this West Nile virus went when it got to Iraq. One thing I want to point out, and Donald, you can listen to this because this was also recently in the newspaper in New York, it was in the New York Times on the 5th of December and it's a quote. It says, "The scientists at the Centers for Disease Control and Prevention in Atlanta using scientific techniques," I'm paraphrasing that, "have also independently determined that the West Nile virus found in New York is very close to one isolated in 1998 from a stork in Israel," which is not far from Iraq. So, we have another way that it could have come around and rejoined us here in the United States, because for all we know Iraq found a way to get it into Israel, with whom they have no friendly relations. In fact, they're still at war with Israel.

Don Scott
Yes, people who know the migrating patterns of birds can infect that particular type of bird and release it in such an area that they know its migratory pattern will take it over a selected country. And if your congressman or your senator says, "well, give us some proof," tell them to get out the Riegle Report, published by the Congressional Printing Office, turn to page 47 and check the date, May 21st, 1985, and they will see that, lo and behold, West Nile virus was shipped from the United States to Iraq on that date.

Dr. Mazlen
And, of course, we don't even know whether or not it was possibly even used in the Gulf War, which is another subject we'll be turning to shortly. I want to also point out that you're the author of the book, The Brucellosis Triangle, and so we're going to stop next at another agent that was shipped to Iraq, and you have evidence of that also, which was the agent Brucella melitensis. Tell us about that, because we also sent that over there.

Don Scott
Yes, they did. Brucellosis is a disease that has been known for thousands of years, actually, but in 1942 Canada, the United States and Great Britain entered into a secret agreement to take the brucellosis bacteria, make it more contagious and more virulent, and weaponize it, make it such that it would do serious damage to an enemy. They came up with a variant which they had tested in a number of places in Canada, the United States and Britain, as well as other countries, and they had indeed weaponized brucellosis. And when Iraq was at war with Iran, and Iraq wanted biological weapons to use against Iran, who was winning that war, the United States shipped a supply of Brucella melitensis biotype 1 and biotype 3 from the American Type Culture Collection in Rockville, Maryland. They shipped that to Iraq and said, here you go, let this stuff out over Iran; it won't kill all the civilians that you're targeting, but it will produce a disease known as chronic fatigue as well as several other serious symptoms. So, in the Riegle Report, lo and behold, our research was confirmed. Brucella melitensis was shipped to Iraq from the United States, and that evidence is found on page 41 of the Riegle Report.

Dr. Mazlen
Now, just a quick aside. It’s labeled as a class 3 pathogen. I presume that means it’s fairly detrimental.

Don Scott
Yes, that means it’s disabling but not deadly.

Dr. Mazlen
OK, for the audience so that they get an idea of how they classify these things.

Don Scott
Yes, well, Donald Riegle spelled it out on page 38 of his report when he said Brucella melitensis is a bacterium which can cause chronic fatigue, loss of appetite, profuse sweating when at rest, pain in joints and muscles, insomnia, nausea and damage to major organs, which include the heart, the liver and so on. Now, not only are you describing Chronic Fatigue Syndrome with that description, you're also describing Gulf War Illness. In other words, Saddam Hussein used the Brucella melitensis that had been shipped to him against the Desert Storm forces in the Desert Storm attack of 1991.

Dr. Mazlen
I just want to ask Donald, where do you want people to contact you about this topic or about your book, the Brucellosis Triangle?

Don Scott
They can reach me at Box 133, Station B, Sudbury, Canada, P3E 4N5. And for this particular show's listening audience, I would like to indicate that if they want a copy of The Brucellosis Triangle, which details the development of the weaponized brucellosis bacteria, they can send a $20 check to me, Don Scott, at the address given, and that $20 check will get them The Brucellosis Triangle, which is regularly $21.95. But I will put in, in addition, a copy of our new Journal of Degenerative Diseases, which covers all of the neurodegenerative diseases and especially, in a series, Chronic Fatigue Syndrome, and I will also put in 10 pages of documents on history from congressional records, including the June 9th, 1969 congressional meeting that we may get to speak about, NSSM 200 (National Security Study Memorandum 200) by Henry Kissinger, and pages from the Riegle Report, so they can see for themselves why this disease that has existed for thousands of years suddenly erupted in two forms, one disabling, as Chronic Fatigue Syndrome, and one lethal, as AIDS. $20 will get you that whole package, because we want people to see this for themselves.

Dr. Mazlen
And I can understand your willingness to do this to get it out and it's appreciated. I want to stop you there because I want to talk to you about the reason why, as you had said to me, the Desert Storm attack was halted after 100 hours of amazing progress against Iraqi forces. What happened there?

Don Scott
Well, when the Desert Storm attack was launched, they were, of course, going great guns. They would have been in Baghdad in another 12 hours, as everybody knows. However, there were a series of scud explosions, somewhere close to 24 scud explosions, which did not shower shrapnel down upon the Desert Storm forces, but instead seemed to emit nothing but a kind of blue haze. Well, those scud missiles, plain and simple, were armed with Brucella melitensis as well as some other toxic agents. Now the other toxic agents included mustard gas, for example. They weren't trying to kill anybody with the mustard gas, they just wanted to make sure that the people being attacked knew they were being attacked. The people in the intelligence branch knew immediately from the 14,000 biological alarms that went off all up and down the front that they were being hit by Brucella melitensis, and they knew that thousands of veterans in Desert Storm would become ill with Gulf War Illness. However, they also knew something else. They knew that back in 1985 and on other dates, the United States had not only provided Brucella melitensis to the Iraqis, they had also provided anthrax. There were several shipments of anthrax to Iraq for use against Iran, and the allied intelligence forces knew that when this melitensis attack occurred, the next volley of scuds would be armed with anthrax unless Desert Storm stopped dead in its tracks, and George Bush knew this. He phoned Schwarzkopf in the middle of the night and he said "stop where you are," and Schwarzkopf–we've got this in news accounts from the period–said "why is that, Bob, things are going great," and George Bush said "I can't talk to you about it now, but stop where you are, don't move another foot," and Desert Storm stopped short of Baghdad and left Saddam Hussein in power, because if they didn't stop, the next volley of scud missiles would be armed with anthrax, and at a 10% fatality rate there would be 70,000 dead allied troops on the desert within the next week. They stopped Desert Storm dead in its tracks, they began to withdraw two weeks later, Saddam Hussein is still in power and the leaders of most of the allied countries are out of power.

Dr. Mazlen
Well, that’s a certainly striking story and we’d love to hear more about the documents or the information as to the phone call, as to what information Bush had gotten from national or military security sources. We just have a few minutes here but we need to cover a couple of quick important topics because mycoplasma are found in between 40 to 60% of Chronic Fatigue Syndrome patients and Gulf War patients as well have a huge prevalence of it. Where’s the connection here and how does it relate to cancer, Don?

Don Scott
The mycoplasma is, as you know, a fragment of bacterial DNA, and this particular fragment varies with the bacteria from which it was derived. The National Academy of Sciences in Washington, D.C. received a presentation in 1995 from a group of top-line microbiologists who had determined by experiment that there is a linkage between the mycoplasma fermentans incognitus strain and cancer. Now, we did not know of this even though the report was made back in 1995, but for some reason or other mainstream medicine and the National Institutes of Health and so on do not seem to have done very much to make it common knowledge. Now, we have secured, as anybody can secure, and we will provide a copy if you want to write to us–they'd have to pay the cost of copying–but we have secured a copy of the report that clearly links, or suggests very strongly that there's a linkage between, the mycoplasma fermentans incognitus and cancer. That was the National Academy of Sciences, Washington, D.C., 1995.

Dr. Mazlen
That’s startling and very important. Just quickly, what is the story in terms of the AIDS virus as you see it?

Don Scott
Well, on June 9th, 1969, Dr. Donald MacArthur of the Pentagon's biological warfare research branch spoke to several congressmen in a top secret meeting and he told those congressmen that if they voted him an additional 10 million dollars, the Pentagon would have within a 10 year period a new microorganism, one which does not naturally exist and for which, these are his words, "no immunity could have been acquired." In other words, he promised the congressmen that if you give us 10 million dollars and 10 years we will give you the AIDS virus and you will be able to use that AIDS virus in strategic warfare, which means such things as reducing the population of certain target countries. That is in the hearings of June 9th, 1969, and that page is one of the pages that I will provide to anybody that wants to send me the $20 for The Brucellosis Triangle. I'll send them that page and I'll send them other relevant pages so they can see for themselves that AIDS and Chronic Fatigue Syndrome were both diseases that were engineered in biological warfare research laboratories. There's no doubt when you read these documents, which are government documents obtained under freedom of information.

Dr. Mazlen
OK, now one quick thing. On our last show we mentioned the mosquitoes that were raised and infected. Just very quickly, how many mosquitoes were raised back when this happened and infected with these biological agents?

Don Scott
Yes, the Canadian government agreed to cooperate with the American government, and in Belleville, Ontario, back in the 1950's and 1960's and into the 80's, the Dominion Parasite Laboratory raised 100 million mosquitoes a month, which were transferred to certain universities to be contaminated with certain disease agents, such as the disease agent that causes Chronic Fatigue Syndrome, and these were then let out in a controlled way in certain communities so that the community could be studied to see how effective the mosquito was as a vector.

Transcribed by 

Carolyn Viviani 
carolynv@inx.net  

Permission is given to repost, copy and distribute this transcript as long as my name is not removed from it. 

© 1999 Roger G. Mazlen, M.D.

[2001] MYCOPLASMA: The Linking Pathogen in Neurosystemic Diseases, by Donald W. Scott, MA, MSc, 2001

[2000] THE LINKING PATHOGEN IN NEURO-SYSTEMIC DISEASES: CHRONIC FATIGUE, ALZHEIMER’S, PARKINSON’S & MULTIPLE SCLEROSIS by: Scott, Donald W., M.Sc.

Role of bioengineering in CFS, GWS & AIDS–Dr Mazlen

Biological Warfare Weapons Development and Testing: A Chronology by Donald W. Scott

Donald Scott is the President of The Common Cause Foundation in Ontario, Canada.

He earned the following academic credentials: BA from University of Toronto – 1952; MA from Laurentian University – 1973; M.Sc. from Guelph University – 1976.

He taught at the high school and university levels from 1957 until 1982, when he retired from teaching. Elected Commissioner of Ontario Teachers’ Pension Plan 1971 – 1976.

Elected Governor, Ontario Teachers’ Federation, 1976 – 1978.

Founder and President of Ontario Teachers’ Retirement Villages, 1978 – 1982.

In 1995, he began fulltime study of the scientific basis of neuro/systemic degenerative diseases.

In collaboration with William Scott, he wrote The Extremely Unfortunate Skull Valley Incident (1996) and The Brucellosis Triangle (1998), published by Chelmsford Publishing.

In August 1998, he began building the executive structure for the not-for-profit Common Cause Foundation. Now CCF has divisions in New York State, Maryland, Massachusetts, and the Provinces of Quebec, Ontario and Manitoba, as well as other executive positions, including Environmental, Multiple Sclerosis, Medical Professionals and Gulf War Illness Concerns.

Mr. Scott is also an Adjunct Professor at The Institute for Molecular Medicine, Huntington Beach, California.

Mr. Scott also currently has intervenor status in the class action lawsuit of Graves vs. William Cohen, DOD. The suit, filed on September 28, 1998, alleges that the U.S. Department of Defense was involved in the "creation, production and proliferation of the AIDS virus".

The Extremely Unfortunate Skull Valley Incident: Chronic Fatigue Syndrome, Acquired Immunodeficiency Syndrome, Gulf War Illness and American Biological Warfare, by Donald W. Scott, MA, MSc, and William L. C. Scott, BA (Hons.)

In this fact-packed study of American biological weapons development from 1945, when the US hired Gen. Ishii Shiro, who had headed Japan’s ghastly tests of new biological agents upon Allied prisoners of war, until the 1991 Iraqi scud attacks on the forces of Desert Storm, the authors trace the evolution of CFS, AIDS, and GWS. Using US Government documents accessed under Freedom of Information legislation, they demonstrate that all of these emerging illnesses had their origins in US military laboratories.

Actual Government records are reproduced that reveal plans made in 1969 to create a “new synthetic virus, one which does not naturally exist, and for which no immunity could have been acquired.” Other documents reveal that between 1985 and 1989, while publicly labeling Saddam Hussein an evil monster, the United States was secretly selling Iraq hundreds of deadly biological weapons, some of which Hussein employed against the Allied ground attack and stopped that attack dead in its tracks after just 100 hours.

A four page Foreword by Dr. Garth Nicolson, one of the world’s top microbiological medical researchers, validates the research. Conclusions presented by the authors demonstrate how certain pathogens proved much more contagious than anticipated and have destroyed the health and lives of millions of victims.

The Brucellosis Triangle, by Donald W. Scott, MA, MSc, and William L. C. Scott, BA (Hons.)

The neurodegenerative and systemic degenerative diseases, including CFIDS, ME/FM, MS, Alzheimer's, Parkinson's, Huntington's, Crohn's-Colitis, Diabetes (type 1) and others: where they come from, and why they are increasing in incidence. (1998)

The authors have tracked these diseases from the early 1940’s to the present, and demonstrate that these diseases will probably constitute the greatest medical challenge of the next millennium.

In The Brucellosis Triangle the authors, supported by extensive (and often startling) documentation, much of it from government records not previously available to the public, demonstrate that the fundamental pathogenic component giving rise to the neurodegenerative and systemic-degenerative diseases is the Brucella species: melitensis, abortus and suis. Brucella infection affects all the major body systems: neurologic, cardio-vascular, musculo-skeletal, digestive, genito-urinary and pulmonary.

In the early 1940's researchers developed the technical capacity to isolate the bacterial toxin from the Brucella bacteria and to reduce it to a crystalline form. Thus, they had the disease agent without the original bacteria and in an extremely virulent form. This crystalline bacterial toxin was capable of diffusion by primary aerosol and by insect vector. The pathogen was tested in Britain, the US, Iceland and Australia, producing outbreaks of "mystery" diseases variously labelled "Neuromyasthenia", "Iceland Disease", "Royal Free Disease" and others. The tests in the US were conducted by the CIA/Military under the supervision of the NIH and the CDC. Thus, these agencies had a most compelling reason for obscuring the historic record.

The book goes on to describe the experimentation which would lead to testing the pathogen in different areas of Canada and the US on unsuspecting citizens. The resultant new infective sub-viral organism was refractory to the immunological and therapeutic processes since “no natural immunity could have been acquired”. The diseases are contagious but will only manifest as active diseases if the recipient is genetically pre-disposed, has a compromised immune system, and if there is some triggering trauma.

http://www.blackopradio.com/video/scott1a.wmv

http://www.blackopradio.com/video/scott1b.wmv

MYCOPLASMA
The Linking Pathogen in Neurosystemic Diseases

Several strains of mycoplasma have been “engineered” to become more dangerous. They are now being blamed for AIDS, cancer, CFS, MS, CJD and other neurosystemic diseases.

Donald W. Scott, MA, MSc © 2001

Nexus Magazine Aug 2001

I – PATHOGENIC MYCOPLASMA
    A Common Disease Agent Weaponised
    How the Mycoplasma Works
II – CREATION OF THE MYCOPLASMA
    A Laboratory-Made Disease Agent
    Crystalline Brucella
    Crystalline Brucella and Multiple Sclerosis
    Contamination of Camp Detrick Lab Workers
III – COVERT TESTING OF MYCOPLASMA
    Testing the Dispersal Methods
    Testing via Mosquito Vector in Punta Gorda, Florida
    Testing via Mosquito Vector in Ontario
IV – COVERT TESTING OF OTHER DISEASE AGENTS
    Mad Cow Disease/Kuru/CJD in the Fore Tribe
    Testing Carcinogens over Winnipeg, Manitoba
V – BRUCELLA MYCOPLASMA AND DISEASE
    AIDS
    Chronic Fatigue Syndrome/Myalgic Encephalomyelitis
VI – TESTING FOR MYCOPLASMA IN YOUR BODY
    Polymerase Chain Reaction Test
    Blood Test
    ECG Test
    Blood Volume Test
VII – UNDOING THE DAMAGE

I – PATHOGENIC MYCOPLASMA
A Common Disease Agent Weaponised

There are 200 species of Mycoplasma. Most are innocuous and do no harm; only four or five are pathogenic. Mycoplasma fermentans (incognitus strain) probably comes from the nucleus of the Brucella bacterium. This disease agent is not a bacterium and not a virus; it is a mutated form of the Brucella bacterium, combined with a visna virus, from which the mycoplasma is extracted.

The pathogenic Mycoplasma used to be very innocuous, but biological warfare research conducted between 1942 and the present time has resulted in the creation of more deadly and infectious forms of Mycoplasma. Researchers extracted this mycoplasma from the Brucella bacterium and actually reduced the disease to a crystalline form. They “weaponised” it and tested it on an unsuspecting public in North America.

Dr Maurice Hilleman, chief virologist for the pharmaceutical company Merck Sharp & Dohme, stated that this disease agent is now carried by everybody in North America and possibly most people throughout the world.

Despite reporting flaws, there has clearly been an increased incidence of all the neuro/systemic degenerative diseases since World War II and especially since the 1970s with the arrival of previously unheard-of diseases like chronic fatigue syndrome and AIDS.

According to Dr Shyh-Ching Lo, senior researcher at The Armed Forces Institute of Pathology and one of America’s top mycoplasma researchers, this disease agent causes many illnesses including AIDS, cancer, chronic fatigue syndrome, Crohn’s colitis, Type I diabetes, multiple sclerosis, Parkinson’s disease, Wegener’s disease and collagen-vascular diseases such as rheumatoid arthritis and Alzheimer’s.

Dr Charles Engel, who is with the US National Institutes of Health, Bethesda, Maryland, stated the following at an NIH meeting on February 7, 2000: “I am now of the view that the probable cause of chronic fatigue syndrome and fibromyalgia is the mycoplasma…”

I have all the official documents to prove that mycoplasma is the disease agent in chronic fatigue syndrome/fibromyalgia as well as in AIDS, multiple sclerosis and many other illnesses. Of these, 80% are US or Canadian official government documents, and 20% are articles from peer-reviewed journals such as the Journal of the American Medical Association, New England Journal of Medicine and the Canadian Medical Association Journal. The journal articles and government documents complement each other.

How the Mycoplasma Works

The mycoplasma acts by entering into the individual cells of the body, depending upon your genetic predisposition.

You may develop neurological diseases if the pathogen destroys certain cells in your brain, or you may develop Crohn's colitis if the pathogen invades and destroys cells in the lower bowel.

Once the mycoplasma gets into the cell, it can lie there doing nothing sometimes for 10, 20 or 30 years, but if a trauma occurs like an accident or a vaccination that doesn’t take, the mycoplasma can become triggered.

Because it is only the DNA particle of the bacterium, it doesn’t have any organelles to process its own nutrients, so it grows by uptaking pre-formed sterols from its host cell and it literally kills the cell; the cell ruptures and what is left gets dumped into the bloodstream.

II – CREATION OF THE MYCOPLASMA
A Laboratory-Made Disease Agent

Many doctors don’t know about this mycoplasma disease agent because it was developed by the US military in biological warfare experimentation and it was not made public. This pathogen was patented by the United States military and Dr Shyh-Ching Lo. I have a copy of the documented patent from the US Patent Office.(1)

All the countries at war were experimenting with biological weapons. In 1942, the governments of the United States, Canada and Britain entered into a secret agreement to create two types of biological weapons (one that would kill, and one that was disabling) for use in the war against Germany and Japan, who were also developing biological weapons. While they researched a number or disease pathogens, they primarily focused on the Brucella bacterium and began to weaponise it.

From its inception, the biowarfare program was characterised by continuing in-depth review and participation by the most eminent scientists, medical consultants, industrial experts and government officials, and it was classified Top Secret.

The US Public Health Service also closely followed the progress of biological warfare research and development from the very start of the program, and the Centers for Disease Control (CDC) and the National Institutes of Health (NIH) in the United States were working with the military in weaponising these diseases. These are diseases that have existed for thousands of years, but they have been weaponised—which means they’ve been made more contagious and more effective. And they are spreading.

The Special Virus Cancer Program, created by the CIA and NIH to develop a deadly pathogen for which humanity had no natural immunity (AIDS), was disguised as a war on cancer but was actually part of MKNAOMI.(2) Many members of the Senate and House of Representatives do not know what has been going on. For example, the US Senate Committee on Government Reform had searched the archives in Washington and other places for the document titled "The Special Virus Cancer Program: Progress Report No. 8", and couldn't find it. Somehow they heard I had it, called me and asked me to mail it to them. Imagine: a retired schoolteacher being called by the United States Senate and asked for one of their secret documents! The US Senate, through the Government Reform Committee, is trying to stop this type of government research.

Crystalline Brucella

The title page of a genuine US Senate Study, declassified on February 24, 1977, shows that George Merck, of the pharmaceutical company Merck Sharp & Dohme (which now makes cures for diseases that at one time it created), reported in 1946 to the US Secretary of War that his researchers had managed "for the first time" to "isolate the disease agent in crystalline form".(3)

They had produced a crystalline bacterial toxin extracted from the Brucella bacterium. The bacterial toxin could be removed in crystalline form and stored, transported and deployed without deteriorating. It could be delivered by other vectors such as insects, aerosol or the food chain (in nature it is delivered within the bacterium). But the factor that is working in the Brucella is the mycoplasma.

Brucella is a disease agent that doesn't kill people; it disables them. But, according to Dr Donald MacArthur of the Pentagon, appearing before a congressional committee in 1969,(4) researchers found that if they had mycoplasma at a certain strength (actually, 10 to the 10th power) it would develop into AIDS, and the person would die from it within a reasonable period of time because it could bypass the natural human defences. If the strength was 10 to the 8th, the person would manifest with chronic fatigue syndrome or fibromyalgia. If it was 10 to the 7th, they would present with wasting; they wouldn't die and they wouldn't be disabled, but they would not be very interested in life; they would waste away.

Most of us have never heard of the disease brucellosis because it largely disappeared when they began pasteurising milk, which was the carrier. One salt shaker of the pure disease agent in a crystalline form could sicken the entire population of Canada. It is absolutely deadly, not so much in terms of killing the body but disabling it.

Because the crystalline disease agent goes into solution in the blood, ordinary blood and tissue tests will not reveal its presence. The mycoplasma will only crystallise at a pH of 8.1, and the blood has a pH of 7.4. So the doctor thinks your complaint is "all in your head".

Crystalline Brucella and Multiple Sclerosis

In 1998 in Rochester, New York, I met a former military man, PFC Donald Bentley, who gave me a document and told me: “I was in the US Army, and I was trained in bacteriological warfare. We were handling a bomb filled with brucellosis, only it wasn’t brucellosis; it was a Brucella toxin in crystalline form. We were spraying it on the Chinese and North Koreans.”

He showed me his certificate listing his training in chemical, biological and radiological warfare. Then he showed me 16 pages of documents given to him by the US military when he was discharged from the service. They linked brucellosis with multiple sclerosis, and stated in one section: “Veterans with multiple sclerosis, a kind of creeping paralysis developing to a degree of 10% or more disability within two years after separation from active service, may be presumed to be service-connected for disability compensation. Compensation is payable to eligible veterans whose disabilities are due to service.” In other words: “If you become ill with multiple sclerosis, it is because you were handling this Brucella, and we will give you a pension. Don’t go raising any fuss about it.” In these documents, the government of the United States revealed evidence of the cause of multiple sclerosis, but they didn’t make it known to the public—or to your doctor.

In a 1949 report, Drs Kyger and Haden suggested "the possibility that multiple sclerosis might be a central nervous system manifestation of chronic brucellosis". Testing approximately 113 MS patients, they found that almost 95% also tested positive for Brucella.(5) We have a document from a medical journal which concludes that one out of 500 people who had brucellosis would develop what they call neurobrucellosis; in other words, brucellosis in the brain, where the Brucella settles in the lateral ventricles, where the disease multiple sclerosis is basically located.(6)

Contamination of Camp Detrick Lab Workers
A 1948 New England Journal of Medicine report titled "Acute Brucellosis Among Laboratory Workers" shows us how actively dangerous this agent is.(7) The laboratory workers were from Camp Detrick, Frederick, Maryland, where they were developing biological weapons. Even though these workers had been vaccinated, wore rubberised suits and masks and worked through holes in the compartment, many of them came down with this awful disease because it is so absolutely and terrifyingly infectious.

The article was written by Lt Calderone Howell, Marine Corps; Captain Edward Miller, Marine Corps; Lt Emily Kelly, United States Naval Reserve; and Captain Henry Bookman. They were all military personnel engaged in making the disease agent Brucella into a more effective biological weapon.

III – COVERT TESTING OF MYCOPLASMA

Testing the Dispersal Methods
Documented evidence proves that the biological weapons they were developing were tested on the public in various communities without their knowledge or consent.

The government knew that crystalline Brucella would cause disease in humans. Now they needed to determine how it would spread and the best way to disperse it. They tested dispersal methods for Brucella suis and Brucella melitensis at Dugway Proving Ground, Utah, in June and September 1952. Probably, 100% of us now are infected with Brucella suis and Brucella melitensis.(8)

Another government document recommended the genesis of open-air vulnerability tests and covert research and development programs to be conducted by the Army and supported by the Central Intelligence Agency.

At that time, the Government of Canada was asked by the US Government to cooperate in testing weaponised Brucella, and Canada cooperated fully with the United States. The US Government wanted to determine whether mosquitoes would carry the disease and also if the air would carry it. A government report stated that "open-air testing of infectious biological agents is considered essential to an ultimate understanding of biological warfare potentialities because of the many unknown factors affecting the degradation of micro-organisms in the atmosphere".(9)

Testing via Mosquito Vector in Punta Gorda, Florida
A report from The New England Journal of Medicine reveals that one of the first outbreaks of chronic fatigue syndrome was in Punta Gorda, Florida, back in 1957.(10)   It was a strange coincidence that a week before these people came down with chronic fatigue syndrome, there was a huge influx of mosquitoes.

The National Institutes of Health claimed that the mosquitoes came from a forest fire 30 miles away. The truth is that those mosquitoes were infected in Canada by Dr Guilford B. Reed at Queen’s University. They were bred in Belleville, Ontario, and taken down to Punta Gorda and released there.

Within a week, the first five cases ever of chronic fatigue syndrome were reported to the local clinic in Punta Gorda. The cases kept coming until finally 450 people were ill with the disease.

Testing via Mosquito Vector in Ontario
The Government of Canada had established the Dominion Parasite Laboratory in Belleville, Ontario, where it raised 100 million mosquitoes a month. These were shipped to Queen's University and certain other facilities to be infected with this crystalline disease agent. The mosquitoes were then let loose in certain communities in the middle of the night, so that the researchers could determine how many people would become ill with chronic fatigue syndrome or fibromyalgia, which was the first disease to show.

One of the communities they tested it on was the St Lawrence Seaway valley, all the way from Kingston to Cornwall, in 1984. They let out hundreds of millions of infected mosquitoes. Over 700 people in the next four or five weeks developed myalgic encephalomyelitis, or chronic fatigue syndrome.

IV – COVERT TESTING OF OTHER DISEASE AGENTS

Mad Cow Disease/Kuru/CJD in the Fore Tribe
Before and during World War II, at the infamous Camp 731 in Manchuria, the Japanese military contaminated prisoners of war with certain disease agents.

They also established a research camp in New Guinea in 1942. There they experimented upon the Fore Indian tribe and inoculated them with a minced-up version of the brains of diseased sheep containing the visna virus, which causes "mad cow disease" or Creutzfeldt-Jakob disease.

About five or six years later, after the Japanese had been driven out, the poor people of the Fore tribe developed what they called kuru, which was their word for “wasting”, and they began to shake, lose their appetites and die. The autopsies revealed that their brains had literally turned to mush. They had contracted “mad cow disease” from the Japanese experiments.

When World War II ended, Dr Ishii Shiro—the medical doctor who was commissioned as a General in the Japanese Army so he could take command of Japan’s biological warfare development, testing and deployment—was captured. He was given the choice of a job with the United States Army or execution as a war criminal. Not surprisingly, Dr Ishii Shiro chose to work with the US military to demonstrate how the Japanese had created mad cow disease in the Fore Indian tribe.

In 1957, when the disease was beginning to blossom in full among the Fore people, Dr Carleton Gajdusek of the US National Institutes of Health headed to New Guinea to determine how the minced-up brains of the visna-infected sheep affected them. He spent a couple of years there, studying the Fore people, and wrote an extensive report. He won the Nobel Prize for “discovering” kuru disease in the Fore tribe.

Testing Carcinogens over Winnipeg, Manitoba
In 1953, the US Government asked the Canadian Government if it could test a chemical over the city of Winnipeg. It was a big city with 500,000 people, miles from anywhere. The American military sprayed this carcinogenic chemical in a 1,000%-attenuated form, which they said would be so watered down that nobody would get very sick; however, if people came to clinics with a sniffle, a sore throat or ringing in their ears, the researchers would be able to determine what percentage would have developed cancer if the chemical had been used at full strength.

We located evidence that the Americans had indeed tested this carcinogenic chemical—zinc cadmium sulphide—over Winnipeg in 1953. We wrote to the Government of Canada, explaining that we had solid evidence of the spraying and asking that we be informed as to how high up in the government the request for permission to spray had gone. We did not receive a reply.

Shortly after, the Pentagon held a press conference on May 14, 1997, where they admitted what they had done. Robert Russo, writing for the Toronto Star(11) from Washington, DC, reported the Pentagon's admission that in 1953 it had obtained permission from the Canadian Government to fly over the city of Winnipeg and spray out this chemical, which sifted down on kids going to school, housewives hanging out their laundry and people going to work. US Army planes and trucks released the chemical 36 times between July and August 1953. The Pentagon got its statistics, which indicated that if the chemical released had been full strength, approximately a third of the population of Winnipeg would have developed cancers over the next five years.

One professor, Dr Hugh Fudenberg, MD, twice nominated for the Nobel Prize, wrote a magazine article stating that the Pentagon came clean on this because two researchers in Sudbury, Ontario—Don Scott and his son, Bill Scott—had been revealing this to the public. However, the legwork was done by other researchers!

The US Army actually conducted a series of simulated germ warfare tests over Winnipeg. The Pentagon lied about the tests to the mayor, saying that they were testing a chemical fog over the city, which would protect Winnipeg in the event of a nuclear attack.

A report commissioned by US Congress, chaired by Dr Rogene Henderson, lists 32 American towns and cities used as test sites as well.

V – BRUCELLA MYCOPLASMA AND DISEASE

AIDS
The AIDS pathogen was created out of a Brucella bacterium mutated with a visna virus; then the toxin was removed as a DNA particle called a mycoplasma. They used the same mycoplasma to develop disabling diseases like MS, Crohn’s colitis, Lyme disease, etc.

In the previously mentioned US congressional document of a meeting held on June 9, 1969, (12) the Pentagon delivered a report to Congress about biological weapons. The Pentagon stated: “We are continuing to develop disabling weapons.” Dr MacArthur, who was in charge of the research, said: “We are developing a new lethal weapon, a synthetic biological agent that does not naturally exist, and for which no natural immunity could have been acquired.”

Think about it. If you have a deficiency of acquired immunity, you have an acquired immunity deficiency. Plain as that. AIDS.

In laboratories throughout the United States, and in a certain number in Canada, including at the University of Alberta, the US Government provided the leadership for the development of AIDS for the purpose of population control. After the scientists had perfected it, the government sent medical teams from the Centers for Disease Control, under the direction of Dr Donald A. Henderson (their investigator into the 1957 chronic fatigue epidemic in Punta Gorda), during 1969 to 1971 to Africa and to some countries such as India, Nepal and Pakistan where they thought the population was becoming too large.(13) They gave them all a free vaccination against smallpox; but five years after receiving this vaccination, 60% of those inoculated were suffering from AIDS. They tried to blame it on a monkey, which is nonsense.

A professor at the University of Arkansas made the claim that while studying the tissues of a dead chimpanzee she found traces of HIV. The chimpanzee that she had tested was born in the United States 23 years earlier. It had lived its entire life in a US military laboratory where it was used as an experimental animal in the development of these diseases. When it died, its body was shipped to a storage place where it was deep-frozen and stored in case they wanted to analyse it later. Then they decided that they didn't have enough space for it, so they said, "Anybody want this dead chimpanzee?" and this researcher from Arkansas said: "Yes. Send it down to the University of Arkansas. We are happy to get anything we can get." They shipped it down and she found HIV in it. That virus was acquired by that chimpanzee in the laboratories where it was tested.(14)

Chronic Fatigue Syndrome/ Myalgic Encephalomyelitis
Chronic fatigue syndrome is more accurately called myalgic encephalomyelitis. The chronic fatigue syndrome nomenclature was given by the US National Institutes of Health because it wanted to downgrade and belittle the disease.

An MRI scan of the brain of a teenage girl with chronic fatigue syndrome displayed a great many scars or punctate lesions in the left frontal lobe area where portions of the brain had literally dissolved and been replaced by scar tissue. This caused cognitive impairment, memory impairment, etc. And what was the cause of the scarring? The mycoplasma. So there is very concrete physical evidence of these tragic diseases, even though doctors continue to say they don’t know where it comes from or what they can do about it.

Many people with chronic fatigue syndrome, myalgic encephalomyelitis and fibromyalgia who apply to the Canada Pension Plan Review Tribunal will be turned down because they cannot prove that they are ill. During 1999 I conducted several appeals to Canada Pensions and the Workers' Compensation Board (WCB, now the Workplace Safety and Insurance Board) on behalf of people who had been turned down. I provided documented evidence of these illnesses, and these people were all granted their pensions on the basis of the evidence that I provided.

In March 1999, for example, I appealed to the WCB on behalf of a lady with fibromyalgia who had been denied her pension back in 1993. The vice-chairman of the board came to Sudbury to hear the appeal, and I showed him a number of documents which proved that this lady was physically ill with fibromyalgia. It was a disease that caused physical damage, and the disease agent was a mycoplasma. The guy listened for three hours, and then he said to me: "Mr Scott, how is it I have never heard of any of this before?" I said: "We brought a top authority in this area into Sudbury to speak on this subject and not a single solitary doctor came to that presentation."

VI – TESTING FOR MYCOPLASMA IN YOUR BODY

Polymerase Chain Reaction Test
Information is not generally available about this agent because, first of all, the mycoplasma is such a minutely small disease agent. A hundred years ago, certain medical theoreticians conceived that there must be a form of disease agent smaller than bacteria and viruses. This pathogenic organism, the mycoplasma, is so minute that normal blood and tissue tests will not reveal its presence as the source of the disease.

Your doctor may diagnose you with Alzheimer's disease, and he will say: "Golly, we don't know where Alzheimer's comes from. All we know is that your brain begins to deteriorate, cells rupture, the myelin sheath around the nerves dissolves, and so on." Or if you have chronic fatigue syndrome, the doctor will not be able to find any cause for your illness with ordinary blood and tissue tests.

This mycoplasma couldn't be detected until about 30 years ago, when the polymerase chain reaction (PCR) test was developed. In this test a sample of your blood is examined, damaged particles are removed, and their DNA is subjected to a polymerase chain reaction: the DNA strands are separated and then copied over and over in a nutrient mixture until enough of the original sequence has been reproduced for its form to be recognised, so it can be determined whether Brucella or another kind of agent is behind that particular mycoplasma.
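
To give a sense of the amplification arithmetic involved, here is a minimal sketch of the general PCR doubling principle (the function name, cycle count and efficiency figure are illustrative assumptions, not details from this article or from any particular laboratory's protocol):

# Minimal sketch of PCR amplification arithmetic; all values are illustrative
# assumptions, not a protocol from the article. Each thermal cycle roughly
# doubles the number of copies of the target DNA sequence, which is what
# eventually makes a vanishingly small amount of material recognisable.

def pcr_copies(starting_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Approximate copy count after a number of PCR cycles.

    efficiency is the fraction of templates copied in each cycle
    (1.0 means perfect doubling; real reactions fall somewhat below that).
    """
    return starting_copies * (1.0 + efficiency) ** cycles

if __name__ == "__main__":
    # One starting template and 30 cycles of ideal doubling give about 1.07e9 copies.
    print(f"{pcr_copies(1, 30):.2e}")

Run as a script, the example prints roughly 1.07e+09, i.e. about a billion copies from a single starting molecule, which is the scale at which a sequence becomes identifiable.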

Blood Test
If you or anybody in your family has myalgic encephalomyelitis, fibromyalgia, multiple sclerosis or Alzheimer’s, you can send a blood sample to Dr Les Simpson in New Zealand for testing.

If you are ill with these diseases, your red blood cells will not be normal doughnut-shaped blood cells capable of being compressed and squeezed through the capillaries, but will swell up like cherry-filled doughnuts which cannot be compressed. The blood cells become enlarged and distended because the only way the mycoplasma can exist is by uptaking pre-formed sterols from the host cell. One of the best sources of pre-formed sterols is cholesterol, and cholesterol is what gives your blood cells flexibility. If the cholesterol is taken out by the mycoplasma, the red blood cell swells up and doesn’t go through, and the person begins to feel all the aches and pains and all the damage it causes to the brain, the heart, the stomach, the feet and the whole body because blood and oxygen are cut off.

And that is why people with fibromyalgia and chronic fatigue syndrome have such a terrible time. When the blood is cut off from the brain, punctate lesions appear because those parts of the brain die. The mycoplasma will get into portions of the heart muscle, especially the left ventricle, and those cells will die. Certain people have cells in the lateral ventricles of the brain that have a genetic predisposition to admit the mycoplasma, and this causes the lateral ventricles to deteriorate and die. This leads to multiple sclerosis, which will progress until these people are totally disabled; frequently, they die prematurely. The mycoplasma will get into the lower bowel, parts of which will die, thus causing colitis. All of these diseases are caused by the degenerating properties of the mycoplasma.

In early 2000, a gentleman in Sudbury phoned me and told me he had fibromyalgia. He applied for a pension and was turned down because his doctor said it was all in his head and there was no external evidence. I gave him the proper form and a vial, and he sent his blood to Dr Simpson to be tested. He did this with his family doctor’s approval, and the results from Dr Simpson showed that only 4% of his red blood cells were functioning normally and carrying the appropriate amount of oxygen to his poor body, whereas 83% were distended, enlarged and hardened, and wouldn’t go through the capillaries without an awful lot of pressure and trouble. This is the physical evidence of the damage that is done.

ECG Test
You can also ask your doctor to give you a 24-hour Holter ECG. You know, of course, that an electrocardiogram is a measure of your heartbeat and shows what is going on in the right ventricle, the left ventricle and so on. Tests show that 100% of patients with chronic fatigue syndrome and fibromyalgia have an irregular heartbeat. At various periods during the 24 hours, the heart, instead of working happily away going “bump-BUMP, bump-BUMP”, every now and again goes “buhbuhbuhbuhbubbuhbuhbuhbuh”. The T-wave (the waves are called P, Q, R, S and T) is normally a peak, and then the wave levels off and starts with the P-wave again. In chronic fatigue and fibromyalgia patients, the T-wave flattens off, or actually inverts. That means the blood in the left ventricle is not being squeezed up through the aorta and around through the body.

My client from Sudbury had this test done and, lo and behold, the results stated: “The shape of T and S-T suggests left ventricle strain pattern, although voltage and so on is normal.” The doctor had no clue as to why the T-wave was not working properly. I analysed the report of this patient who had been turned down by Canada Pensions and sent it back to them. They wrote back, saying: “It looks like we may have made a mistake. We are going to give you a hearing and you can explain this to us in more detail.”

So it is not all in your imagination. There is actual physical damage to the heart. The left ventricle muscles do show scarring.

That is why many people are diagnosed with a heart condition when they first develop fibromyalgia, but it's only one of several problems, because the mycoplasma can do all kinds of damage.

Blood Volume Test
You can also ask your doctor for a blood volume test. Every human being requires a certain amount of blood per pound of body weight, and it has been observed that people with fibromyalgia, chronic fatigue syndrome, multiple sclerosis and other illnesses do not have the normal blood volume their body needs to function properly. Doctors aren’t normally aware of this.

This test measures the amount of blood in the human body by taking out 5 cc, putting a tracer in it and then putting it back into the body. One hour later, take out 5 cc again and look for the tracer. The thicker the blood and the lower the blood volume, the more tracer you will find.

The analysis of one of my clients stated: "This patient was referred for red cell mass study. The red cell volume is 16.9 ml per kg of body weight. The normal range is 25 to 35 ml per kg." This guy has 36% less blood in his body than his body needs to function. And the doctor hadn't even known the test existed.
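
As a rough illustration of the arithmetic behind such a report, here is a small sketch of the indicator-dilution principle and the shortfall comparison (the tracer dose, concentration and reference value below are made-up example figures, not the patient's data):

# Illustrative sketch only; every number here is an assumed example value,
# not taken from the patient's report.

def volume_by_dilution(tracer_amount: float, tracer_concentration: float) -> float:
    """Indicator-dilution principle: once an injected tracer has mixed evenly
    through the blood, volume = amount of tracer / measured tracer concentration."""
    return tracer_amount / tracer_concentration

def shortfall_percent(measured_ml_per_kg: float, reference_ml_per_kg: float) -> float:
    """How far a measured red cell volume falls below a chosen reference value;
    the percentage depends on which 'normal' figure the lab compares against."""
    return 100.0 * (reference_ml_per_kg - measured_ml_per_kg) / reference_ml_per_kg

if __name__ == "__main__":
    # Hypothetical: 5 units of tracer diluted to 0.001 units/ml implies about 5,000 ml of blood.
    print(volume_by_dilution(5.0, 0.001))
    # 16.9 ml/kg measured against the 25 ml/kg low end of the quoted normal range
    # is roughly a 32% shortfall; measured against a higher reference value, the
    # shortfall is correspondingly larger.
    print(round(shortfall_percent(16.9, 25.0), 1))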

If you lost 36% of your blood in an accident, do you think your doctor would tell you that you are all right and should just take up line dancing and get over it? They would rush you to the nearest hospital and start transfusing you with blood. These tragic people with these awful diseases are functioning with anywhere from 7% to 50% less blood than their body needs to function.

VII – UNDOING THE DAMAGE

The body undoes the damage itself. The scarring in the brain of people with chronic fatigue and fibromyalgia will be repaired. There is cellular repair going on all the time. But the mycoplasma has moved on to the next cell.

In the early stages of a disease, doxycycline may reverse that disease process. It is one of the tetracycline antibiotics, but it is not bactericidal; it is bacteriostatic: it stops the growth of the mycoplasma. And if the mycoplasma growth can be stopped for long enough, then the immune system takes over.

Doxycycline treatment is discussed in a paper by mycoplasma expert Professor Garth Nicolson, PhD, of the Institute for Molecular Medicine.(15) Dr Nicolson is involved in a US$8 million mycoplasma research program funded by the US military and headed by Dr Charles Engel of the NIH. The program is studying 450 Gulf War veterans, because there is evidence to suggest that Gulf War syndrome is another illness (or set of illnesses) caused by mycoplasma.

 Endnotes
1. "Pathogenic Mycoplasma", US Patent No. 5,242,820, issued September 7, 1993. Dr Lo is listed as the "Inventor" and the American Registry of Pathology, Washington, DC, is listed as the "Assignee".
2. “Special Virus Cancer Program: Progress Report No. 8”, prepared by the National Cancer Institute, Viral Oncology, Etiology Area, July 1971, submitted to NIH Annual Report in May 1971 and updated July 1971.
3. US Senate, Ninety-fifth Congress, Hearings before the Subcommittee on Health and Scientific Research of the Committee on Human Resources, Biological Testing Involving Human Subjects by the Department of Defense, 1977; released as US Army Activities in the US Biological Warfare Programs, Volumes One and Two, 24 February 1977.
4. Dr Donald MacArthur, Pentagon, Department of Defense Appropriations for 1970, Hearings before Subcommittee of the Committee on Appropriations, House of Representatives, Ninety-First Congress, First Session, Monday June 9, 1969, pp. 105-144, esp. pp. 114, 129.
5. Kyger, E. R. and Russell L. Haden, "Brucellosis and Multiple Sclerosis", The American Journal of the Medical Sciences, 1949:689-693.
6. Colmenero et al., "Complications Associated with Brucella melitensis Infection: A Study of 530 Cases", Medicine 1996;75(4).
7. Howell, Miller, Kelly and Bookman, “Acute Brucellosis Among Laboratory Workers”, New England Journal of Medicine 1948;236:741.
8. “Special Virus Cancer Program: Progress Report No. 8”, ibid., table 4, p. 135.
9. US Senate, Hearings before the Subcommittee on Health and Scientific Research of the Committee on Human Resources, March 8 and May 23, 1977, ibid.
10. New England Journal of Medicine, August 22, 1957, p. 362.
11. Toronto Star, May 15, 1997.
12. Dr Donald MacArthur, Pentagon, Department of Defense Appropriations for 1970, Hearings, Monday June 9, 1969, ibid., p.129.
13. Henderson, Donald A., “Smallpox: Epitaph for a Killer”, National Geographic, December 1978, p. 804.
14. Blum, Deborah, The Monkey Wars, Oxford University Press, New York, 1994.
15. Nicolson, G. L., "Doxycycline treatment and Desert Storm", JAMA 1995;273:618-619.

Recommended Reading

• Horowitz, Leonard, Emerging Viruses: AIDS and Ebola, Tetrahedron Publishing, USA, 1996.
• Johnson, Hillary, Osler’s Web, Crown Publishers, New York, 1996.
• Scott, Donald W. and William L. C. Scott, The Brucellosis Triangle, The Chelmsford Publishers (Box 133, Stat. B., Sudbury, Ontario P3E 4N5), Canada, 1998 (US$21.95 + $3 s&h in US).
• Scott, Donald W. and William L. C. Scott, The Extremely Unfortunate Skull Valley Incident, The Chelmsford Publishers, Canada, 1996 (revised, extended edition available from mid-September 2001; US$16.00 pre-pub. price + US$3 s&h in US).
• The Journal of Degenerative Diseases (Donald W. Scott, Editor), The Common Cause Medical Research Foundation (Box 133, Stat. B., Sudbury, Ontario, P3E 4N5), Canada (quarterly journal; annual subscription: US$25.00 in USA, US$30 foreign).

Additional Contacts
• Ms Jennie Burke, Australian Biologics, Level 6, 383 Pitt Street, Sydney NSW 2000, Australia tel +61 (0)2 9283 0807, fax +61 (0)2 9283 0910. Australian Biologics does tests for mycoplasma.
• Consumer Health Organization of Canada, 1220 Sheppard Avenue East #412, Toronto, Ontario, Canada M2K 2S5, tel +1 (416) 490 0986, website www.consumerhealth.org/.
• Professor Garth Nicolson, PhD, Institute for Molecular Medicine, 15162 Triton Lane, Huntington Beach, CA 92649-1401, USA, tel +1 (714) 903 2900.
• Dr Les Simpson, Red Blood Cell Research Ltd, 31 Bath Street, Dunedin, 9001, New Zealand, tel +64 (0)3 471 8540, email rbc.research.limited@xtra.co.nz. (Note: Dr Simpson directs his study to red cell shape analysis, not the mycoplasma hypothesis.)
• The Mycoplasma Registry for Gulf War Illness, S. & L. Dudley, 303 47th St, J-10, San Diego, CA 92102-5961, tel/fax +1 (619) 266 1116, email mycoreg@juno.com.

About the Author
Donald Scott, MA, MSc, is a retired high school teacher and university professor. He is also a veteran of WWII and was awarded the North Atlantic Star, the Burma Star with Clasp, the 1939-1945 Volunteer Service Medal and the Victory Medal. He is currently President of The Common Cause Medical Research Foundation, a not-for-profit organisation devoted to research into neurosystemic degenerative diseases. He is also Adjunct Professor with the Institute for Molecular Medicine, and he produces and edits the Journal of Degenerative Diseases. He has extensively researched neurosystemic degenerative diseases over the past five years and has authored many documents on the relationship between degenerative diseases and a pathogenic mycoplasma called Mycoplasma fermentans. His research is based upon solid government evidence.

You may contact Donald Scott at: 190 Mountain St., Ste. 405, Sudbury, Ontario, Canada P3B 4G2. 705-670-0180.

Note: Dr. David Webster at Sudbury General Hospital, a wonderful person, with whom I have had conversations about these awful diseases can tell your doctor about the Blood Volume test.


THE LINKING PATHOGEN IN NEURO-SYSTEMIC DISEASES: CHRONIC FATIGUE, ALZHEIMER’S, PARKINSON’S & MULTIPLE SCLEROSIS

by Scott, Donald W., M.Sc.

[Similar to, but earlier than, the Nexus article above]

Donald Scott is a retired high school teacher and university professor who is currently president of the Common Cause Medical Research Foundation and adjunct professor of the Institute of Molecular Medicine. He has extensively researched neurosystemic degenerative diseases over the past five years and has authored many documents on the relationship between degenerative diseases and a pathogenic mycoplasma called Mycoplasma fermentans. His research is based upon solid government evidence. Donald Scott is a veteran of WWII and was awarded the North Atlantic Star, the Burma Star with Clasp, the 1939-1945 Volunteer Service Medal and the Victory Medal.

I – THE MYCOPLASMA
A COMMON PATHOGENIC MYCOPLASMA
BIOWARFARE RESEARCH DOCUMENTED EVIDENCE
II – CREATION OF THE MYCOPLASMA
MYCOPLASMA PATENT
A LABORATORY-CREATED PATHOGEN BY THE U.S. MILITARY
COMMITTEE ON GOVERNMENT REFORM
BIOLOGICAL WARFARE RESEARCH AGREEMENT
CRYSTALLINE BRUCELLOSIS
CRYSTALLINE BRUCELLOSIS AND MULTIPLE SCLEROSIS
CONTAMINATION OF CAMP DETRICK LAB WORKERS
III – COVERT TESTING OF THE MYCOPLASMA
TESTING BRUCELLOSIS UPON AN UNSUSPECTING PUBLIC
TESTING BRUCELLOSIS VIA MOSQUITO VECTOR IN PUNTA GORDA
TESTING BRUCELLOSIS VIA MOSQUITO VECTOR IN ONTARIO
IV – OTHER SECRET GOVERNMENT TESTING
MAD COW DISEASE IN THE FORE INDIAN TRIBE
TESTING CARCINOGENS IN RUSSIA
TESTING CARCINOGENS IN WINNIPEG
V – BRUCELLOSIS MYCOPLASMA AND DISEASE
AIDS
CHRONIC FATIGUE
APPEALS TO CANADA PENSION
VI – TESTING FOR THE PRESENCE OF MYCOPLASMA IN YOUR BODY
THE POLYMERASE CHAIN REACTION TEST
THE BLOOD TEST
THE ECG TEST
BLOOD VOLUME TEST
UNDOING THE DAMAGE
GULF WAR RESEARCH

I – THE MYCOPLASMA

A COMMON PATHOGENIC MYCOPLASMA There are 200 species of mycoplasmas. Most are innocuous and do no harm; only four or five are pathogenic. Mycoplasma fermentans (incognitus strain) probably comes from the nucleus of the brucellosis bacterium. This disease agent is neither a bacterium nor a virus; it is a mutated form of the brucellosis bacterium, mutated with a visna virus, from which the mycoplasma is extracted. Dr. Maurice Hilleman, chief virologist for the pharmaceutical company Merck, Sharp and Dohme, stated that this disease agent is now carried by everybody in North America and possibly by most people throughout the world. The mycoplasma used to be very innocuous. Only one person in 500,000 would get multiple sclerosis; one in 300,000 would develop Alzheimer's; one in 1,000,000 would develop Creutzfeldt-Jakob disease. Before the early 1980s, nobody ever died of AIDS because it didn't exist. The mycoplasma is also the disease agent in AIDS, and I have all the documentation to prove it.

BIOWARFARE RESEARCH Between 1942 and the present time, biological warfare research has resulted in a more deadly and infectious form of the mycoplasma. They extracted this mycoplasma from the brucellosis bacterium, weaponized it and actually reduced the disease to a crystalline form. According to Dr. Shyh-Ching Lo, one of America's top researchers, this disease agent, the mycoplasma, causes, among other things, AIDS, chronic fatigue syndrome, multiple sclerosis, Wegener's disease, Parkinson's disease, Crohn's colitis, Type I diabetes, and collagen-vascular diseases such as rheumatoid arthritis and Alzheimer's. Which cells of the body the mycoplasma enters depends upon your genetic predisposition. You may develop neurological diseases if the pathogen destroys certain cells in your brain, or you may develop Crohn's colitis if the pathogen invades and destroys cells in the lower bowel. Once it gets into a cell, it can lie there doing nothing for 10, 20 or 30 years, but a trauma such as an accident, or a vaccination that doesn't take, can trigger the mycoplasma. Because it is only the DNA particle of the bacterium, it has no organelles to process its own nutrients, so it grows by taking up preformed sterols from its host cell; it literally kills the cell, the cell ruptures, and what is left is dumped into the bloodstream.

DOCUMENTED EVIDENCE My conclusions are entirely based upon official documents: 80% are United States or Canadian official government documents, and 20% are articles from peer-reviewed journals, such as the Journal of the American Medical Association, The New England Journal of Medicine, and The Canadian Medical Association Journal. The journal articles and government documents complement each other. We also have a document from Dr. Shyh-Ching Lo which names the mycoplasma as a cause of cancer. Dr. Charles Engel who is with the National Institutes of Health, Bethesda, Maryland, stated at an NIH meeting on February 7, 2000, “I am now of the view that the probable cause of Chronic Fatigue Syndrome and fibromyalgia is the mycoplasma”.

II – CREATION OF THE MYCOPLASMA

MYCOPLASMA PATENT Many doctors don't know about this mycoplasma because it was developed by the U.S. military in biological warfare experimentation, and it was not made public. This pathogenic mycoplasma disease agent was patented on behalf of the United States military by Dr. Shyh-Ching Lo, who was the top researcher at the military's biological warfare research facility. I have the documented patent from the U.S. Patent Office.

A LABORATORY-CREATED PATHOGEN BY THE U.S. MILITARY Researchers in the United States, Canada and Britain were doing biowarfare research with the brucellosis bacteria as well as with a number of other disease agents. From its inception, the biowarfare program was characterized by continuing in-depth review and participation by the most eminent scientists, medical consultants, industrial experts and government officials, and it was top secret. The U.S. Public Health Service also closely followed the progress of biological warfare research and development from the very start of the program, and the Centers for Disease Control (CDC) and the National Institutes of Health (NIH) in the United States were working with the military to weaponize these diseases. These are diseases which have existed for thousands of years, but they have been weaponized, which means they were made more contagious and more effective. And they are spreading. A program developed by the CIA and the NIH to create a lethal pathogen for which humanity had no natural immunity (AIDS) was disguised as a war on cancer and was part of MKNAOMI (ref. Special Virus Cancer Program: Progress Report No. 8, prepared by the National Cancer Institute, Viral Oncology, Etiology Area, July 1971, submitted to the NIH Annual Report in May 1971 and updated in July 1971).

COMMITTEE ON GOVERNMENT REFORM Many members of the Senate and House of Representatives do not know what has been going on. For example, the US Senate Committee on Government Reform searched the archives in Washington and other places for the document titled The Special Virus Cancer Program: Progress Report No. 8 mentioned above and couldn't find it. Somehow they heard I had it, called me and asked me to mail it to them. Imagine: a retired school teacher being called by the United States Senate and asked for one of their secret documents! The United States Senate, through its government reform committee, is trying to stop this type of government research.

BIOLOGICAL WARFARE RESEARCH AGREEMENT All the countries at war were experimenting with biological weapons. In 1942, the governments of the United States, Canada and Great Britain entered into a secret agreement to create two types of biological weapons (one that would kill and one that was disabling) for use in the war against Germany and Japan, who were also developing biological weapons. They primarily focused on brucellosis, and they began to weaponize the brucellosis bacteria.

CRYSTALLINE BRUCELLOSIS In a genuine U.S. Senate study, declassified on February 24, 1977, the title page of the government record reports that in 1946 George Merck, of the pharmaceutical company Merck, Sharp and Dohme (which now makes cures for diseases they at one time created), reported to the Secretary of War in the United States that his researchers had produced, in isolation for the first time, a crystalline bacterial toxin extracted from brucellosis bacteria. The bacterial toxin could be removed in crystalline form and delivered by other vectors (in nature it is delivered within the bacteria). But the factor that is working in the brucellosis is the mycoplasma. Brucellosis is a disease agent that doesn't kill people; it disables them. But they found that if the mycoplasma was at a certain strength, actually 10 to the 10th power (10^10), it would develop into AIDS, and the person would die from it within a reasonable period of time because it could bypass our natural human defenses. If it was 10^8, the person would manifest with chronic fatigue syndrome or fibromyalgia. If it was 10^7, they would present as wasting; they wouldn't die, and they wouldn't be disabled, but they would not be that interested in life and would waste away (ref. Dr. Donald MacArthur of the Pentagon appearing before a Congressional Committee, June 9, 1969, Department of Defense Appropriations, pp. 114, 129). Most of us have never heard of brucellosis because it largely disappeared when they began pasteurizing milk, which was the carrier. One salt shaker of this pure disease in a crystalline form could sicken the entire population of Canada. It is absolutely deadly, not in terms of killing the body, but in terms of disabling the body. The advantage of this crystalline disease agent is that it does not show up in blood and tissue tests because the bacteria has disappeared and only the pure disease agent remains. So the doctor thinks that it's all in your head.

CRYSTALLINE BRUCELLOSIS AND MULTIPLE SCLEROSIS About three years ago in Rochester, New York, a gentleman gave me a document and told me, “I was in the U.S. Army, and I was trained in bacteriological warfare. We were handling a bomb filled with brucellosis, only it wasn’t brucellosis; it was a brucellosis toxin in crystalline form. We were spraying it on the Chinese and North Koreans.” He showed me his certificate listing his training in chemical, biological, and radiological warfare. Then he showed me 16 pages of documents given to him by the U.S. military when he was discharged from the service. It linked brucellosis with multiple sclerosis and stated: “Veterans with multiple sclerosis, a kind of creeping paralysis developing to a degree of 10% or more disability within two years after separation from active service may be presumed to be service-connected for disability compensation. Compensation is payable to eligible veterans whose disabilities are due to service.” In other words, “If you become ill with multiple sclerosis, it is because you were handling this brucellosis and we will give you a pension. Don’t go raising any fuss about it.” The government of the United States, in this official document revealed evidence of the cause of multiple sclerosis, but they didn’t make it known to the public, or to your doctor. In a 1958 report, Drs. Kyger and Haden suggest “…the possibility that multiple sclerosis might be a central nervous system manifestation of chronic brucellosis”. Testing approximately 113 MS patients, they found that almost 95% also tested positive for brucellosis. We have a document from a medical journal which concludes that one out of 500 people who had brucellosis would develop what they called neurobrucellosis, in other words, brucellosis in the brain which settles in the lateral ventricles where the disease multiple sclerosis is basically located.

CONTAMINATION OF CAMP DETRICK LAB WORKERS A report from the New England Journal of Medicine, 1948, Vol.236, p.741 called “Acute Brucellosis Among Laboratory Workers” shows us how actively dangerous this agent is. The laboratory workers were from Camp Detrick, Frederick, Maryland where they were developing biological weapons. Even though these laboratory workers had been vaccinated, wore rubberized suits and masks, and worked through holes in the compartment, many of them came down with this awful disease because it is so absolutely and terrifyingly infectious. The article was written by Lt. Calderone Howell, Marine Corps, Captain Edward Miller, Marine Corps, Lt. Emily Kelly, United States Naval Reserve and Captain Henry Bookman. They were all military personnel engaged in making the disease agent brucellosis into a more effective biological weapon.

III – COVERT TESTING OF THE MYCOPLASMA

TESTING BRUCELLOSIS UPON AN UNSUSPECTING PUBLIC Documented evidence proves that the biological weapons they were developing were tested on the public in various communities without their knowledge or consent. The government knew that crystalline brucellosis would cause disease in humans. Now they needed to determine how it spread and the best way to disperse it. They tested dispersal methods for Brucella suis and Brucella melitensis at Dugway Proving Ground, Utah, in June and September 1952. Probably 100% of us are now infected with Brucella suis and Brucella melitensis (ref. p. 135, Table 4 of Special Virus Cancer Program: Progress Report 8). Another government document recommended the genesis of open-air vulnerability tests and covert research and development programs to be conducted by the army and supported by the Central Intelligence Agency. At that time, the government of Canada was asked by the government of the United States to cooperate in testing weaponized brucellosis, and Canada cooperated fully. They wanted to determine (i) whether mosquitoes would carry the disease and (ii) whether the air would carry it. A government report stated that "…open air testing of infectious biological agents is considered essential to an ultimate understanding of biological warfare potentialities because of the many unknown factors affecting the degradation of micro-organisms in the atmosphere".

TESTING BRUCELLOSIS VIA MOSQUITO VECTOR IN PUNTA GORDA A report from The New England Journal of Medicine, August 22, 1957, p. 362, reveals that one of the first outbreaks of chronic fatigue syndrome was in Punta Gorda, Florida, back in 1957. It was a strange coincidence that a week before these people came down with chronic fatigue syndrome, there was a huge influx of mosquitoes. The National Institutes of Health claimed that the mosquitoes came from a forest fire 30 miles away. When the forest fire broke out, the mosquitoes all said, "Well, let's go over to Punta Gorda – there will be a bunch of people over there, we can have a picnic, and then we will go home". The truth is that those mosquitoes were infected in Canada by Dr. J.B. Reed at Queen's University. They were bred in Belleville, Ontario, and taken down and released in Punta Gorda. Within a week, the first five cases ever of chronic fatigue syndrome were reported to the local clinic in Punta Gorda, and it continued until finally 450 people were ill with the disease.

TESTING BRUCELLOSIS VIA MOSQUITO VECTOR IN ONTARIO The government of Canada established the Dominion Parasite Laboratory in Belleville, Ontario, and raised 100 million mosquitoes a month which were shipped to Queen’s University and certain other facilities to be infected with this disease agent. The mosquitoes were then let loose in certain communities in the middle of the night so they could determine how many people would become ill with chronic fatigue syndrome, or fibromyalgia, which was the first disease to show. One of the communities they tested it on was the St. Lawrence Seaway valley all the way from Kingston to Cornwall in 1984. They let out absolutely hundreds of millions of infected mosquitoes. Over 700 people in the next four or five weeks developed myalgic encephalomyelitis, or chronic fatigue syndrome.

IV – OTHER SECRET GOVERNMENT TESTING

MAD COW DISEASE IN THE FORE INDIAN TRIBE At the infamous Japanese Camp 731 in Manchuria, they contaminated prisoners of war with certain disease agents. They also established a research camp in New Guinea in 1942 and experimented upon the Fore Indian tribe, inoculating them with a minced-up version of the brains of diseased sheep containing the visna virus, which causes mad cow disease (Creutzfeldt-Jakob disease, known to the Fore tribe as kuru). About five or six years later, after the Japanese had been driven out, the poor people of the Fore tribe developed what they called kuru, their word for wasting; they began to shake, lose their appetites, and die. The autopsies revealed that their brains had literally turned to mush. They had contracted mad cow disease from the Japanese experiments. When World War II ended, the Japanese general and physician in charge of biological warfare experimentation in Japan, Dr. Ishii Shiro, was captured. They gave him the choice of a job with the United States Army or execution as a war criminal. Not surprisingly, Dr. Ishii Shiro chose to work with the United States military to demonstrate how they had created mad cow disease in the Fore Indian tribe. In 1957, when the disease was beginning to blossom in full among these Fore Indian people, Dr. Carleton Gajdusek of the U.S. National Institutes of Health headed down to New Guinea to determine how the minced-up brains of the visna-infected sheep affected these people. He spent a couple of years in New Guinea studying the Fore tribe, wrote an extensive report on it, and won the Nobel Prize for "discovering" kuru disease (also known as mad cow or Creutzfeldt-Jakob disease) in the Fore Indian tribe in New Guinea.

TESTING CARCINOGENS IN RUSSIA In 1953, the Americans developed a carcinogenic chemical which they wanted to test, but they didn’t want to test it in the United States so they flew over Russia, accidentally wandered off course, and sprayed this stuff. Many people started getting cancer. And the U.S. had some jokes about this. One American researcher, Dr. Maurice Hilleman of Merck, Sharp and Dohme, joked, “We are going to win the next Olympics because all the Russians are going to turn up with 40-pound tumours.” They thought it was a big joke.

TESTING CARCINOGENS IN WINNIPEG Next they said, "How about testing it in Canada?" In 1953, the U.S. asked the government of Canada if they could test this carcinogenic chemical over the city of Winnipeg. It was a big city with 500,000 people, miles from anywhere. They sprayed the chemical in a 1,000% attenuated form, which they said would be so watered down that nobody would get very sick. However, if people came to clinics with a sniffle, a sore throat, or ringing in their ears, the researchers would be able to determine what percentage would have developed cancer if it had been full strength. When we located evidence that the Americans had tested this carcinogenic chemical over the city of Winnipeg in 1953, and informed the government that we had this evidence, they denied it. However, finally, on May 15, 1997, a story out of the Canadian Press in Washington, D.C. by Robert Russo, published in the Toronto Star, stated that the Pentagon admitted that in 1953 it had obtained permission from the government of Canada to fly over the city of Winnipeg and spray this crap out, and it sifted down on kids going to school, housewives hanging out their laundry, and people going to work. US Army planes and trucks released the chemical 36 times between July and August 1953. The chemical used was zinc cadmium sulfide, a carcinogen. They got their statistics, which indicated that if it had been full strength, approximately a third of the population of Winnipeg would have developed cancers over the next five years. The Pentagon called a press conference to admit what they had done. One professor, Dr. Hugh Fudenberg, MD, who was nominated twice for the Nobel Prize, wrote a magazine article which stated that the Pentagon had come clean on this because two researchers up in Sudbury, Ontario, Don Scott and his son Bill Scott, had been revealing this to the public. The US Army actually conducted a whole series of simulated germ warfare tests in Winnipeg. The Pentagon lied about the tests to the mayor, saying that they were testing a chemical fog over the city, which would protect Winnipeg in the event of a nuclear attack. A report commissioned by the US Congress, from a panel chaired by Dr. Rogene Henderson, lists 32 American towns and cities used as test sites as well.

V – BRUCELLOSIS MYCOPLASMA AND DISEASE

AIDS The AIDS pathogen was created out of a brucellosis bacteria mutated with a visna virus; then the toxin was removed as a DNA particle called a mycoplasma. They used the same mycoplasma to develop disabling diseases like MS, Crohn’s colitis, Lyme disease etc. In a United States congressional document of a meeting held June 9, 1969, the Pentagon delivered a report to Congress about biological weapons (described on page 129 of the document). The Pentagon stated, “We are continuing to develop disabling weapons.” Dr. MacArthur, who was in charge of the research said, “We are developing a new lethal weapon, a synthetic biological agent that does not naturally exist, and for which no natural immunity could have been acquired.” Think about it. If you have a deficiency of acquired immunity, you have an acquired immunity deficiency. Plain as that. AIDS. In laboratories throughout the United States and a certain number in Canada, including the University of Alberta, the U.S. government provided the leadership for the development of the AIDS virus for the purpose of population control. After they had it perfected, they sent medical teams from the Centers for Disease Control to Africa and other mid-eastern countries where they thought the population was becoming too large. They gave them all a free vaccination for smallpox. Five years after receiving this smallpox vaccination, 60% of them were suffering from AIDS. They tried to blame it on a monkey, which is nonsense. There was a report in the newspapers a while back about a professor at the University of Arkansas who claimed that while studying the tissues of a dead chimpanzee, she found the HIV virus. The chimpanzee that she had tested was born in the United States 23 years earlier. It had lived its entire life in a U.S. military laboratory where it was used as an experimental animal for the development of these diseases. When it died, its body was shipped to a storage place where it was deep-frozen and stored in case they wanted to analyze it later. Then they decided that they didn’t have enough space for it, so they said, “Anybody want this dead chimpanzee?” and this researcher from Arkansas said, “Yes. Send it down to the University of Arkansas. We are happy to get anything that we can get.” They shipped it down and she found the HIV virus in it. That virus was acquired by that chimpanzee in the laboratories where it was tested.

CHRONIC FATIGUE Chronic fatigue syndrome is more accurately called myalgic encephalomyelitis. The name "chronic fatigue syndrome" was given by the National Institutes of Health in the United States because they wanted to downgrade and belittle the disease. An MRI of the brain of a teenage girl who had chronic fatigue syndrome displayed a great many scars, or punctate lesions, in the left frontal lobe area where portions of the brain had literally dissolved and been replaced by scar tissue. This caused cognitive impairment, memory impairment, etc. And what was the cause of the scars? The mycoplasma. So there is very concrete physical evidence of these tragic diseases, even though doctors continue to say they don't know where they come from or what can be done about them.

APPEALS TO CANADA PENSION Many people with chronic fatigue syndrome, myalgic encephalomyelitis and fibromyalgia who apply to the Canada Pension Plan will be turned down because they cannot prove that they are ill. Over the past year I have conducted several appeals to Canada Pension and Workers' Compensation on behalf of people who have been turned down. I provided documented evidence of these illnesses, and they were all granted their pensions on the basis of the evidence that I provided. In March of last year, for example, I appealed to the Workers' Compensation on behalf of a lady with fibromyalgia who had been denied her pension back in 1993. The vice-chairman of the board came up to Sudbury to hear the appeal, and I showed him a number of documents which proved that this lady was physically ill with fibromyalgia. It was a disease which caused physical damage, and the disease agent was a mycoplasma. The guy listened for three hours and then he said to me, "Mr. Scott, how is it I have never heard of any of this before?" I said, "We brought a top authority in this area into Sudbury to speak on this subject and not a single solitary doctor came to that presentation."

VI – TESTING FOR THE PRESENCE OF MYCOPLASMA IN YOUR BODY

THE POLYMERASE CHAIN REACTION TEST Information is not generally available about this agent because, first of all, the mycoplasma is such an infinitely small disease agent. A hundred years ago, certain medical theoreticians conceived that there must be something smaller than bacteria and viruses, which are the most common living forms of disease agents. This pathogenic organism is so small that normal blood and tissue tests will not reveal the source of the disease. Your doctor may diagnose you with Alzheimer's and he will say, "Golly, we don't know where Alzheimer's comes from. All we know is that your brain begins to deteriorate, cells rupture, the myelin sheath around the nerves dissolves, and so on." Or if you have chronic fatigue syndrome, the doctor will not be able to find any cause for your illness with ordinary blood and tissue tests. This mycoplasma couldn't be detected until about 30 years ago, when the polymerase chain reaction (PCR) test was developed. In this test, a sample of your blood is examined, damaged particles are removed, and the DNA in those particles is put through a polymerase chain reaction, which copies it over and over until there is enough of it to recognize what it is and to determine whether brucellosis or another kind of agent is behind that particular mycoplasma.
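To see why a polymerase chain reaction can reveal such a vanishingly small amount of DNA, the short Python sketch below illustrates the doubling arithmetic behind the test. It is only an illustration of ideal amplification, assuming perfect doubling on every cycle (real reactions are less efficient and eventually plateau); it is not a laboratory protocol, and the names in it are chosen for illustration only.

# Minimal sketch of ideal PCR doubling (an illustration, not a lab protocol).
# Assumes perfect efficiency on every cycle; real reactions fall short and plateau.

def pcr_copies(initial_copies: int, cycles: int) -> int:
    """Theoretical number of DNA copies after a given number of ideal PCR cycles."""
    return initial_copies * 2 ** cycles

if __name__ == "__main__":
    # A single target fragment becomes roughly a billion copies after 30 cycles,
    # which is why even trace amounts of DNA can be amplified enough to identify.
    for cycles in (10, 20, 30):
        print(f"{cycles} cycles -> {pcr_copies(1, cycles):,} copies")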

THE BLOOD TEST If anybody in your family has myalgic encephalomyelitis, fibromyalgia, multiple sclerosis, or Alzheimer's, you can send a blood sample to Dr. Les Simpson in New Zealand for testing. If you are ill with these diseases, your red blood cells will not be normal donut-shaped blood cells capable of being compressed and squeezed through the capillaries, but will swell up like cherry-filled donuts, which cannot be compressed. The blood cells become enlarged and distended because the only way the mycoplasma can exist is by taking up preformed sterols from the host cell. One of the best sources of preformed sterols is cholesterol, and cholesterol is what gives your blood cells flexibility. If the cholesterol is taken out by the mycoplasma, the red blood cell swells up and doesn't go through, and the person begins to feel all the aches and pains and all the damage caused to the brain, the heart, the stomach, the feet and the whole body, because blood and oxygen are cut off. And that is why people with fibromyalgia and chronic fatigue syndrome have such a terrible time. When the blood is cut off from the brain, punctate lesions appear, because those parts of the brain die. It will get into portions of the heart muscle, especially the left ventricle, and those cells will die. Certain people have cells in the lateral ventricles of the brain that have a genetic predisposition to admit the mycoplasma, and it causes the lateral ventricles to deteriorate and die; this leads to multiple sclerosis, which will progress until they are totally disabled and frequently die prematurely. It will get into the lower bowel, and parts of the lower bowel will die, causing colitis. All of these diseases are caused by the degenerating properties of the mycoplasma.

About two months ago a gentleman in Sudbury phoned me and told me he had fibromyalgia. He applied for Canada Pension and was turned down because his doctor said it was all in his head and there was no external evidence. I gave him the proper form and a vial, and he sent his blood to Dr. Les Simpson of New Zealand to be tested. He did this with his family doctor’s approval, and the results from Dr. Simpson showed that only 4% of his red blood cells were functioning normally and carrying the appropriate amount of oxygen to his poor body, whereas 83% were distended, enlarged and hardened, and wouldn’t go through the capillaries without an awful lot of pressure and trouble. This is the physical evidence of the damage that is done.

THE ECG TEST You can also ask your doctor to give you a 24-hour Holter ECG. You know, of course, that an electrocardiogram is a measure of your heartbeat which shows what is going on in the right ventricle, the left ventricle, and so on. Tests show that 100% of patients with chronic fatigue syndrome and fibromyalgia have an irregular heartbeat. At various times during the 24 hours, the heart, instead of working happily away going "bump-BUMP, bump-BUMP", will every now and again go "buhbuhbuhbuhbuhbuhbuhbuhbuh". The T-wave (the waves are called P, Q, R, S, and the last one is T) is normally a peak, and then the wave levels off and starts with the P-wave again. In chronic fatigue and fibromyalgia patients, the T-wave flattens off, or actually inverts. That means the blood in the left ventricle is not being squeezed up through the aorta and around through the body. My client did this test, and lo and behold, the test results stated: "The shape of T and S-T suggest left ventricle strain pattern, although voltage and so on is normal". The doctor had no clue as to why the T-wave was not working properly. I analyzed the report of the patient who had been turned down by Canada Pension and sent it back to them. They wrote back and said, "It looks like we may have made a mistake. We are going to give you a hearing and you can explain this to us in more detail." So it is not all in your imagination. There is actual physical damage to the heart. The left ventricle muscles do show scarring. That is why many people are diagnosed with a heart condition when they first develop fibromyalgia, but it's only one of several problems, because the mycoplasma can do all kinds of damage.

BLOOD VOLUME TEST You can also ask your doctor for a blood volume test. Every human being requires a certain amount of blood per pound of body weight, and it has been observed that people with fibromyalgia, chronic fatigue syndrome, multiple sclerosis and other conditions do not have the normal blood volume their body needs to function properly. Doctors aren't normally aware of this. This test measures the amount of blood in the human body by taking out five cc, putting a tracer in it, and then putting it back into the body. One hour later, five cc is taken out again and examined for the tracer. The thicker the blood and the lower the blood volume, the more tracer you will find. The analysis of one of my clients stated: "This patient was referred for red cell mass study. The red cell volume is 16.9 ml per kg of body weight. The normal range is 25 to 35 ml per kg." This guy has 36% less blood in his body than the body needs to function. And the doctor hadn't even known the test existed. If you lost 36% of your blood in an accident, do you think your doctor would tell you that you are all right, just take up line dancing and you will get over it? They would rush you to the nearest hospital and start infusing you with blood transfusions. These tragic people with these awful diseases are functioning with anywhere from 7 to 50% less blood than their bodies need to function.
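To make that percentage concrete, here is a small arithmetic sketch. The measured value and the 25-35 ml/kg normal range are taken from the report quoted above, but the baseline used for the comparison is an assumption of mine, since the report does not state which reference value the "36% less" figure was calculated against; the sketch therefore shows two plausible baselines rather than reproducing the exact figure.

# Arithmetic sketch for the red cell mass example above (illustration only).
# The measured value and normal range come from the quoted report; the choice
# of baseline for the percentage is an assumption, so two baselines are shown.

MEASURED = 16.9                       # ml of red cell volume per kg of body weight
NORMAL_LOW, NORMAL_HIGH = 25.0, 35.0  # quoted normal range, ml per kg

def percent_below(baseline: float, measured: float = MEASURED) -> float:
    """Percentage by which the measured value falls below a chosen baseline."""
    return (baseline - measured) / baseline * 100.0

if __name__ == "__main__":
    # Roughly 32% below the lower limit and roughly 44% below the midpoint,
    # so the exact deficit quoted depends on which reference value is used.
    print(f"vs lower limit (25 ml/kg): {percent_below(NORMAL_LOW):.1f}% below")
    print(f"vs midpoint (30 ml/kg):    {percent_below((NORMAL_LOW + NORMAL_HIGH) / 2):.1f}% below")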

UNDOING THE DAMAGE The body undoes the damage itself. The scarring in the brain of people with chronic fatigue and fibromyalgia will be repaired. There is cellular repair going on all the time. But the mycoplasma has moved on to the next cell. In the early stages of a disease, doxycycline may reverse the disease. It is one of the tetracycline antibiotics, but it is not bactericidal; it is bacteriostatic. It stops the growth of the mycoplasma, and if the mycoplasma is stopped long enough, then the immune system takes over (ref. Nicholson, G. L., "Doxycycline treatment and Desert Storm", JAMA 1995;273:618-619).

GULF WAR RESEARCH Professor Garth Nicholson, Ph.D., of the Institute for Molecular Medicine is one of the top experts on mycoplasma. He has been given an $8 million grant to study 450 Gulf War veterans, because Gulf War illness is caused by the mycoplasma. Dr. Les Simpson has done most of the research in detecting the disease by the polymerase chain reaction blood test. You may contact Dr. Nicholson at 15162 Triton Lane, Huntington Beach, CA 92649-1401, tel +1 (714) 903 2900.

In summary, there is a disease agent called a mycoplasma. All of these neurodegenerative systemic diseases are caused by a particle of bacterial DNA, a mycoplasma, that enters the cells of living organisms and takes the cells apart, sterol by sterol, leaving scar tissue and causing the whole range of symptoms that you see in people with these diseases. The military, the National Institutes of Health and the government are all dedicated to keeping this mycoplasma as covert as they possibly can.

For more information and references, please refer to The Brucellosis Triangle and The Extremely Unfortunate Skull Valley Incident by Don Scott and William Scott, both available at Consumer Health Organization.
Other recommended reading is Osler's Web by Hillary Johnson and Emerging Viruses: AIDS and Ebola by Leonard Horowitz. Don Scott also produces The Journal of Degenerative Diseases.

You may contact Donald Scott at: 190 Mountain St., Ste. 405, Sudbury, Ontario, Canada P3B 4G2. 705-670-0180.

Note: Dr. David Webster at Sudbury General Hospital, a wonderful person, with whom I have had conversations about these awful diseases can tell your doctor about the Blood Volume test.

http://www.consumerhealth.org/home.cfm

Holocaust denial was already taking root in Britain during WWII, says UK author

timesofisrael.com

By Robert Philpot


LONDON — In the grim search by historians and academics to pinpoint the first examples of postwar Holocaust denial, the finger of blame is most often pointed at fascists, anti-Semites and far-right figures in France, Sweden and the United States.

However, argues a new book, this misses the pivotal role played by Nazi sympathizers in Britain, both during World War II and in its immediate aftermath, in developing a “blueprint” that has been drawn on ever since by those who seek to deny history’s greatest crime.

“The truth is that Holocaust denial in its traditional form began not in France or America, as most have argued, but actually in Britain,” says Dr. Joe Mulhall, author of “British Fascism After the Holocaust: From the Birth of Denial to the Notting Hill Riots 1939-1958.”


Mulhall, senior researcher at the UK anti-fascism campaign group Hope Not Hate, identifies the British fascist leader Oswald Mosley as a central player in the emergence of Holocaust denial in postwar Europe.

As the book details, the French fascist Maurice Bardèche, his compatriots Paul Rassinier and Prof. René Fabre, and the veteran Swedish anti-Semite Einar Åberg are among those who have been awarded “the ignoble distinction of being the first person to maliciously deny the validity and uniqueness of Nazi war crimes.” By contrast, in the words of one historian, early Holocaust denial in Britain is viewed as “a pale reflection” of that propagated in countries such as France.

“While there is no solid consensus among historians as to who was the first true Holocaust denier,” writes Mulhall, a common thread is to “ignore or overlook early British deniers.”

‘British Fascism After the Holocaust,’ by Dr. Joe Mulhall. (Courtesy)

Mulhall attributed this omission to a broader attitude among some academics towards British fascism. “Part of the reason is that the people who were denying the Holocaust from Britain were primarily British fascists and British fascism is often seen by scholars that look at fascism more broadly as a bit of a backwater,” Mulhall tells The Times of Israel in an interview.

However, believes Mulhall, this perception of British fascism has helped to skew the historiography of Holocaust denial. His research demonstrates that as soon as concrete evidence of Nazi atrocities began to emerge in the war, leading British far-right activists lost no time in attempting to downplay and discredit it.

In 1942, for instance, the Duke of Bedford, who helped bankroll the far-right British People’s party, published a pamphlet that dismissed pictorial evidence of Nazi killings as fake and claimed reports were overplayed. “In regards to the infliction upon Jews of actual physical brutality, it appears certain that this has happened on many occasions, but it may be deemed equally certain that the extent of the abuse has been greatly exaggerated by propaganda,” the duke argued.

A year later, Alexander Ratcliffe, a virulent anti-Semite and founder of the Scottish Protestant League, published “The Truth about the Jews,” which went further still. “The various press reports about Hitler’s terrible persecution of the Jews mostly are written up by Jews and circulated by Jews. Mostly such reports are the invention of the Jewish mind,” he claimed. “For the historian immediately after the war will prove that 95% of the Jew ‘atrocity’ stories and ‘photographs’ of such atrocities appearing in the press, magazines and journals are mere invention.”

Dr. Joe Mulhall, author of ‘British Fascism After the Holocaust: From the Birth of Denial to the Notting Hill Riots 1939-1958.’ (Courtesy)

He also denounced the “lying photographs” printed by the press, drawing a parallel with unreliable stories and propaganda which deluged the public during World War I. There is, Ratcliffe went on to argue, “not a single authentic case on record of a single Jew having been massacred or unlawfully put to death under the Hitler regime.”

Thus, notes Mulhall, while historians have suggested that Bardèche was the first to claim that pictorial evidence of the Nazis’ murder of the Jews was a fake, this lie was peddled earlier by Ratcliffe.

As the war drew to a close in the spring and summer of 1945, Ratcliffe began to change tack, no longer denying the existence of atrocities but attempting to shift the blame for them from the Nazi killing machine. Referring to the images emerging from the camps, he asked: “These bodies were starved to death! And why were these bodies starved to death? Because there was no food for these bodies! And who were to blame for that? Directly, or indirectly, the Allies.”

Circulation figures for the Duke of Bedford and Ratcliffe’s publications are difficult to ascertain and, Mulhall says, they were “in some ways marginal extreme figures.”

“In a direct sense, it’s unlikely that they created content which affected societal perceptions of the Holocaust,” says Mulhall.

Nonetheless, he argues, their importance shouldn’t be dismissed. Instead, they created the “original sources” and the “blueprints that are used by later Holocaust deniers” such as David Irving and Robert Faurisson. He quotes the historian Colin Holmes’s assertion that Ratcliffe was both “an important carrier of ideological anti-Semitism” and a “pioneer revisionist.”

Holocaust denial underestimated

These early themes — the suggestion that the extent of the Holocaust had been “greatly exaggerated,” that the blame for deaths should not rest primarily with the Nazis, and the refusal to accept the pictorial evidence — are ones which were both echoed, and added to, by Mosley.

“Mosley is a way more important figure than people have given him credit for,” Mulhall says. “All of the things that become the key tenets of Holocaust denial, he is talking about in the 1940s. He was central to Holocaust denial in the UK and creating lots of the arguments that became the staples of international Holocaust denial.”

Sir Oswald Mosley, leader of the British Union of Fascists, inspects the ranks of blackshirts, in East London, on October 4, 1936. (AP Photo/Len Puttnam/Staff)

Mosley, who led the British Union of Fascists in the 1930s and founded a new far-right party, the Union Movement, in 1948, is often seen as having abandoned the worst excesses of his prewar anti-Semitism after the Third Reich’s defeat. But, says Mulhall, “this is just not what the historical record shows. It’s not what the newspapers show from the period. It’s not what Mosley himself talks about. He’s vehemently anti-Semitic and remains so.”

Moreover, even after the war, the former Blackshirt leader had a far greater public platform than any other figure on the British far right.

“He’s still a national figure — he might be a hated national figure, but he’s still a national figure,” says Mulhall. “What he does, people still look at.”

And, at a time when many of his former fascist comrades were dead, on trial or lying low, Mosley also had an enhanced standing within the European far right and an unparalleled ability to spread Holocaust denial within its networks.

While recognizing the existence of concentration camps, Mosley, like Ratcliffe, sought to discredit the images which emerged from them. “Buchenwald and Belsen are completely unproved,” he argued in his 1947 book “The Alternative.” “Pictorial evidence proves nothing at all. We have no impartial evidence.”

Sir Oswald Mosley makes his address to the fascist ‘Blackshirts’, assembled in Victoria Park, London, June 7, 1936. (AP Photo)

Indeed, he wrote, the camps were, in fact, simply an unpleasant necessity. “Men were short, food was short, disorder raged as all supply services broke down under incessant bombing. They held in prison or camps a considerable disaffected population, some German, but most alien, who were requiring guards and good food supplies,” the book claimed.

The fascist leader — who usually placed the word “atrocity” in quotation marks — mocked the “atrocity business.” His Union Movement newspaper derided “concentration camp fairy tales” while he also sought to deny the existence of a conscious mechanical extermination program by the Nazis and shift responsibility for any deaths which did occur elsewhere.

The conditions in camps were the result of “Allied bombing and consequent epidemics,” he claimed. “If you have typhus outbreaks you are bound to have a situation where you have to use the gas ovens to get rid of the bodies. If we had been bombed here in prisons and concentration camps, there would have been a few of us going into the gas ovens,” he told a 1947 press conference.

Alongside the Allies, the Jews themselves were also responsible for their fate. “Modern war is the end of morality. Those responsible for beginning war, are, also, responsible for ending morality,” Mosley — who had repeatedly deplored the “Jew’s war” — wrote in “The Alternative.” To this noxious mix, he also added the notion — later taken up with gusto by Holocaust deniers such as Irving — that Hitler knew nothing about the Final Solution.

‘Immoral equivalency’

Mosley’s attacks on the Nuremberg Trials, which he called “a zoo and a peep show,” were also a key theme for early British Holocaust deniers. Those attacks were crucial to propagating the idea of what the American historian Deborah Lipstadt terms “immoral equivalency,” a tactic which seeks to undermine the uniqueness of the Nazis’ crimes by equating them to alleged Allied ones.

Deborah Lipstadt, right, professor of Modern Jewish and Holocaust Studies at Emory University, Atlanta, Georgia, with Penguin Books chief executive Anthony Forbes Watson, left, arrives at London’s High Court on Tuesday, January 11, 2000, to attend the libel case brought by David Irving against her and Penguin Books for her claim that he is ‘one of the most dangerous spokespersons of Holocaust denial.’ (AP Photo/Max Nash)

Funded by the Duke of Bedford, the British People’s party’s pamphlet “Failure at Nuremberg” received much coverage in both left- and right-wing magazines after its publication in 1946. “If the Nuremberg law is to be held inviolate, therefore, it will be seen that a strong prima facie case exists against both the Russian and the American leadership, whose surviving members must forthwith be placed in the dock as suspected war-criminals,” it said.

As Mulhall outlines, the publication was simply one element of a prolonged effort by the party to relativize the Holocaust. “We can safely rest assured that the cellars of Hamburg, the deserts which were once Hiroshima and Nagasaki will not be on view,” noted the BPP newspaper, People’s Post, in December 1945 after footage of atrocities was screened in the Nuremberg courtroom.

Writing in the paper in September 1945, the Duke of Bedford also pointed to events in Central and Eastern Europe in the immediate aftermath of the war in an attempt to downplay the Final Solution. “The expulsion of Germans by Czechs and Poles, approved of by Russia and tolerated by Great Britain and the USA, is going on under conditions of cruelty which equal anything ever attributed to Nazi policy and which, moreover, is being carried on a much larger scale,” he wrote.

The city of Lubeck burns after an Allied air raid in 1942. (Bundesarchiv bild)

Beyond the BPP, the British journalist Montgomery Belgion’s 1946 book “Epitaph On Nuremberg” offered a similarly strong attack on the alleged double standards of Nuremberg, which he dismissed as “a gigantic piece of propaganda.” The book went on to describe the Allied bombing campaign as the “RAF’s Holocaust,” claiming it had brought disease and starvation to Germany. The Jewish publisher, Victor Gollancz, who had originally encouraged Belgion to write the book, was horrified by the “unpublishable draft.”

Wolves in sheep’s clothing

Mulhall also decided to include in his account the writings of the military historian and theorist Capt. Basil Liddell Hart, an altogether more respected and respectable figure than Mosley or the leadership of the British People’s party. His 1948 book “The German Generals Talk,” while “by no means a work of outright Holocaust denial,” says Mulhall, staunchly defended the Wehrmacht and the German Military High Command and sought to absolve it of responsibility for the Final Solution.

British military historian and theorist Capt. Basil Liddell Hart. (Public domain)

“What is really more remarkable than the German generals’ submission to Hitler is the extent to which they managed to maintain in the Army a code of decency that was in constant conflict with Nazi ideas,” wrote Liddell Hart. But as historian Graham Macklin has argued, Liddell Hart’s book “wilfully ignored the Wehrmacht’s willing complicity in the descent into genocide” and “actively colluded in whitewashing their horrific crimes.”

The impact of the British Holocaust deniers is, Mulhall says, evident in Bardèche’s 1948 book “Nuremberg or the Promised Land.” He, too, questioned the pictorial evidence, which he called “a film set,” and labeled Nuremberg “another Dreyfus case,” arguing: “I will believe in the judicial existence of war crimes when I see General Eisenhower and Marshal Rossokovsky take seats at the Nuremberg Court on the bench for the accused.”

He also pursued the idea of “immoral equivalency,” suggesting that the Allies engaged in “different but just as effective methods, a system of extermination almost as wide-spread.” This is, says Mulhall, “no mere coincidence” and, in a further book in 1950, Bardèche readily acknowledged his debt to his comrades across the English Channel, singling out the Duke of Bedford, the BPP, Liddell Hart and Belgion, who he quoted at length.

Similarly, Mulhall notes that the best-known early American Holocaust denier, Francis Parker Yockey, was heavily influenced by writings from Britain. “The argument and tone of Yockey’s denial echoes the work of the British ‘pioneer revisionists,’” Mulhall writes. “While American, Yockey was based in Britain, with British fascists, during much of the late 1940s, and his work ‘Imperium’ was published first in the UK, which likely accounts for the similarities with early British denial literature.”

However marginal, obscure and extreme the writings of Britain’s Nazi apologists may have seemed in the immediate aftermath of the war, their poisonous long-term impact should not be discounted, believes Mulhall.

“The Holocaust denial that becomes this international phenomenon a decade or two later and into the 1970s — this really dangerous thing filling out theaters and selling huge numbers of books — is based on ideas which were created in this period,” he says.

Women And Hysteria In The History Of Mental Health

ncbi.nlm.nih.gov

Cecilia Tasca, Mariangela Rapetti, Mauro Giovanni Carta, and Bianca Fadda


Abstract

Hysteria is undoubtedly the first mental disorder attributable to women, accurately described in the second millennium BC and, until Freud, considered an exclusively female disease. Over 4,000 years of history, this disease was considered from two perspectives: scientific and demonological. It was cured with herbs, sex or sexual abstinence, punished and purified with fire for its association with sorcery, and finally studied clinically as a disease and treated with innovative therapies. However, even at the end of the 19th century, scientific innovation had still not reached some places, where the only known therapies were those proposed by Galen. During the 20th century, several studies postulated the decline of hysteria amongst occidental patients (both women and men) and the escalation of this disorder in non-Western countries. The concept of hysterical neurosis was deleted with the 1980 DSM-III. The evolution of these diseases seems to be linked with social “westernization”, and examining under what conditions the symptoms first became common in different societies has become a priority of recent studies on risk factors.

Keywords: History, Hysteria, Mental Health, Psychiatry, West, Woman.

INTRODUCTION

We intend to identify historically the two dominant approaches towards mental disorders, the “magic-demonological” and “scientific” views, in relation to women: not only is a woman vulnerable to mental disorders, she is weak and easily influenced (by the “supernatural” or by organic degeneration), and she is somehow “guilty” (of sinning or of not procreating). Thus mental disorder, especially in women, so often misunderstood and misinterpreted, generates scientific and/or moral bias, defined as a pseudo-scientific prejudice [1].

Studies of the 19th and 20th centuries gradually demonstrated that hysteria is not an exclusively female disease, allowing a stricter scientific view to finally prevail. Twentieth-century studies have also drawn attention to the importance of transcultural psychiatry in understanding the role of environmental factors in emotional evolution and behavioral phenomenology and in modifying psychopathology, producing the hypothesis that hysteria has been modified by the increase of mood disorders.

1. Ancient Egypt

The first mental disorder attributable to women, and for which we find an accurate description since the second millennium BC, is undoubtedly hysteria.

The first description referring to the ancient Egyptians dates to 1900 BC (Kahun Papyrus) and identifies the cause of hysterical disorders in spontaneous uterus movement within the female body [2, 3].

In the Ebers Papyrus (1600 BC), the oldest medical document containing references to depressive syndromes, the traditional symptoms of hysteria are described as tonic-clonic seizures and a sense of suffocation and imminent death (Freud’s globus hystericus). We also find indications of the therapeutic measures to be taken depending on the position of the uterus, which must be forced to return to its natural position. If the uterus had moved upwards, this could be done by placing malodorous and acrid substances near the woman’s mouth and nostrils, while scented ones were placed near her vagina; on the contrary, if the uterus had lowered, the document recommends placing the acrid substances near her vagina and the perfumed ones near her mouth and nostrils [2, 3].

2. The Greek world

According to Greek mythology, the experience of hysteria was at the base of the birth of psychiatry.

The Argonaut Melampus, a physician, is considered its founder: he placated the revolt of Argos’ virgins, who refused to honor the phallus and fled to the mountains, their behavior being taken for madness. Melampus cured these women with hellebore and then urged them to join carnally with young and strong men. They were healed and recovered their wits. Melampus spoke of the women’s madness as derived from their uterus being poisoned by venomous humors, due to a lack of orgasms and “uterine melancholy” [2-4].

Thus arose the idea of a female madness related to the lack of a normal sexual life: Plato, in Timaeus, argues that the uterus is sad and unfortunate when it does not join with the male and does not give rise to a new birth, and Aristotle and Hippocrates were of the same opinion [2-4].

The myth recounted by Euripides says that a collective way of curing (or, if we prefer, preventing) melancholy of the uterus is represented by the Dionysian experience of the Maenads, who reached catharsis through wine and orgies [5]. Women suffering from hysteria could be released from the anxiety that characterizes this condition by participating in the Maenad experience. Trance status guided and cured by the Satyr, the priest of Dionysus, contributed to solving the conflict related to sexuality, typical of hysteria disease [6].

Hippocrates (5th century BC) is the first to use the term hysteria. Indeed, he also believes that the cause of this disease lies in the movement of the uterus (“hysteron”) [2-4].

The Greek physician provides a good description of hysteria, which is clearly distinguished from epilepsy. He emphasizes the difference between the compulsive movements of epilepsy, caused by a disorder of the brain, and those of hysteria due to the abnormal movements of the uterus in the body. Then, he resumes the idea of a restless and migratory uterus and identifies the cause of the indisposition as poisonous stagnant humors which, due to an inadequate sexual life, have never been expelled. He asserts that a woman’s body is physiologically cold and wet and hence prone to putrefaction of the humors (as opposed to the dry and warm male body). For this reason, the uterus is prone to get sick, especially if it is deprived of the benefits arising from sex and procreation, which, widening a woman’s canals, promote the cleansing of the body. And he goes further; especially in virgins, widows, single, or sterile women, this “bad” uterus – since it is not satisfied – not only produces toxic fumes but also takes to wandering around the body, causing various kinds of disorders such as anxiety, sense of suffocation, tremors, sometimes even convulsions and paralysis. For this reason, he suggests that even widows and unmarried women should get married and live a satisfactory sexual life within the bounds of marriage [2-4].

However, when the disease is recognized, affected women are advised not only to partake in sexual activity, but also to cure themselves with acrid or fragrant fumigation of the face and genitals, to push the uterus back to its natural place inside the body [2-4].

3. Rome

Aulus Cornelius Celsus (1st century BC) gives a good and accurate clinical description of hysterical symptoms. In his De re medica, Celsus wrote: "In females, a violent disease also arises in the womb; and, next to the stomach, this part is most sympathetically affected or most sympathetically affects the rest of the system [7]. Sometimes also, it so completely destroys the senses that on occasions the patient falls, as if in epilepsy. This case, however, differs in that the eyes are not turned, nor does froth issue forth, nor are there any convulsions: there is only a deep sleep."

Claudius Galen's theories on hysteria (2nd century AD) are comparable to those of Hippocrates. Furthermore, Galen says of hysteria: "Passio hysterica unum nomen est, varia tamen et innumera accidentia sub se comprehendit" (hysterical passion is one name, yet it comprises various and innumerable symptoms), highlighting the variety of hysterical events [7]. In his work In Hippocratis librum de humoribus, Galen criticizes Hippocrates: "Ancient physicians and philosophers have called this disease hysteria from the name of the uterus, that organ given by nature to women so that they might conceive [7]. I have examined many hysterical women, some stuporous, others with anxiety attacks […]: the disease manifests itself with different symptoms, but always refers to the uterus." Galen's treatments for hysteria consisted in purges, the administration of hellebore, mint, laudanum, belladonna extract, valerian and other herbs, and also in getting married or repressing stimuli that might excite a young woman [2, 3, 7].

The treatment of hysteria was revolutionized only by Soranus (a Greek physician of the first half of the 2nd century AD, who practiced in Alexandria and Rome), author of a treatise on women's diseases and considered the founder of scientific gynecology and obstetrics. For Soranus, women's disorders arise from the toils of procreation; their recovery is encouraged by sexual abstinence, and perpetual virginity is women's ideal condition. Fumigations, cataplasms and compressions are ineffectual; the hysterical body should be treated with care: hot baths, massages and exercise are the best prevention of such women's diseases [2, 3, 7].

4. Middle Ages

After the fall of the Roman Empire, Greek-Roman medical culture had its new epicenter in Byzantium, where physicians inherited Galen’s science without making any significant innovations (the most famous was Paul of Aegina, 625-690 AD). Sometime before, Bishop Nestorius (381-451 approx.), who took refuge in the Middle East in an area between today’s Iraq and Egypt, had brought with him his knowledge of classical science, contributing to the spread of Greek-Roman medicine in these areas.

The political events of the early Middle Ages caused a rupture between Christian Europe, with its auctoritas culture – in the hands of just a few scholars – and the Middle East of the Caliphs, where thanks to a climate of tolerance and cultural ferment, the texts of Hippocrates and Galen were translated and commented on in Arabic, becoming widespread and well-known [3].

In this context, two great scientists carry out their work: the Persian Avicenna (980-1037) [8, 9] and the Andalusian Jew Maimonides (1135-1204) [10]. Thanks to them, the legacy of Hippocrates and Galen is not only maintained, but spreads throughout Europe: the Reconquista of Spain (718-1492) and new contacts with the Near East bring important cultural exchanges; Avicenna's Canon of Medicine and Galen's Corpus are diffused along with the Latin translations ascribed to Gerard of Cremona (1114-1187), while Maimonides' texts are disseminated in the Jewish world, along with other basic medical texts, thanks to translations by the Ibn Tibbon family (13th-14th centuries). In particular, the medical schools of Salerno and Montpellier were vehicles for the dissemination of these works [11].

This was how the Hippocratic concepts of melancholia and hysteria spread in late-medieval Europe, and in informed circles these diseases were treated according to what we shall call the "scientific" vision. In particular, this advocated the use of melissa (lemon balm) as a natural remedy to soothe the nerves (melissa was considered excellent even in cases of insomnia, epilepsy, melancholy, fainting fits, etc.) [3, 12].

Besides the natural remedies, a sort of "psychotherapy" developed, practiced not only by Avicenna, but also, for example, by Arnaldus of Villa Nova (1240-1311). The latter, considered medieval Europe's greatest physician, would be counted along with Galen and Avicenna in the inventories of physicians' libraries throughout the Modern era [13].

It is also interesting to note that in the many treatises circulating at the time (Constantine the African's Viaticum and Pantegni, but also Avicenna's Canon and the texts of Arnaldus of Villa Nova) women were often described not as "patients" to be cured but rather as the "cause" of a particular human disease, defined as amor heroycus, the madness of love, that is, unfulfilled sexual desire [8].

But we cannot talk about women's health in the Middle Ages without citing Trotula de Ruggiero from Salerno (11th century). Although as a woman she could never become a magister, Trotula is considered the first female doctor in Christian Europe: she belonged to the ranks of famous women active in the Salerno School, though discredited by, among others, Arnaldus of Villa Nova [14].

Called sanatrix Salernitana, Trotula was an expert in women's diseases and disorders. Recognizing women as more vulnerable than men, she explained how the suffering related to gynecological diseases was "intimate": women often, out of shame, do not reveal their troubles to the doctor. Her best-known work, De passionibus mulierum ante, in et post partum, deals with female problems, including hysteria. Faithful to the teachings of Hippocrates, Trotula devoted herself to the study of women's diseases, whose secrets she tried to capture without being influenced by the prejudices and morals of her time, also giving advice on how to placate sexual desire: in her work abstinence is seen as a cause of illness, and she recommends sedative remedies such as musk oil or mint [15].

Trotula works at a time when women are still considered inferior to men because of their physiological and anatomical differences. Hildegard of Bingen (1098-1179), a German abbess and mystic, was another female doctor. Her work is very important for its attempt to reconcile science with faith, an attempt that takes place at the expense of science. Hildegard takes up the "humoral theory" of Hippocrates and attributes the origin of black bile to original sin [16]. In her view, melancholy is a defect of the soul originating from Evil, and the doctor must accept the incurability of this disease. Her descriptions are very interesting: melancholic men are ugly and perverse, melancholic women slender and minute, unable to fix a thought, and infertile because of a weak and fragile uterus [16]. In Hildegard's ideology, Adam and Eve share responsibility for original sin, and man and woman, sexually complementary, are equal before God and the cosmos [17].

The mainstream view of the time is one in which the woman is a physically and theologically inferior being, an idea that has its roots in the Aristotelian concept of male superiority: in his Summa Theologica, St. Thomas Aquinas (1225-1274) takes up Aristotle's assertion that "the woman is a failed man" [18]. The inferiority of women is considered a consequence of sin, and the solutions offered by St. Thomas' reflection leave no doubt about what will overturn the relationship between women and Christianity: the concept of the "defective creature" is just the beginning. In question 117, article 3, addressing the possibility that the human soul can change substance, St. Thomas says that "some old women" are evil-minded; they gaze on children in a poisonous and evil way, and the demons, with whom the witches enter into agreements, interact through their eyes [18]. The idea of the woman-witch, which we shall call the "demonological vision", becomes almost unassailable: preachers spread the Old Testament's condemnation of wizards and necromancers, and the fear of witches takes hold of the collective imagination of the European population. The ecclesiastical authorities try to impose celibacy and chastity on the clergy, and St. Thomas' theological descriptions of women's inferiority are, perhaps, the start of a misogynistic crusade in the late Middle Ages.

From the thirteenth century onwards, the struggle against heresy assumes a political connotation: the Church aims at unifying Europe under its banner, so breviaries become manuals of the Inquisition and many manifestations of mental illness are seen as obscene bonds between women and the Devil. "Hysterical" women are subjected to exorcism: the cause of their problem is found in a demonic presence. If in early Christianity exorcism was considered a cure but not a punishment, in the late Middle Ages it becomes a punishment, and hysteria is confused with sorcery [19, 20].

The political and religious status quo in Europe is threatened by the first humanist ideas, and the Church responds by intensifying inquisitions: the apogee is reached in 1484 with the Summis desiderantes affectibus, Innocent VIII's Bull, which confirms the witch hunt and the obligation to "punish, imprison and correct" heretics [21, 22]. The German Dominicans Heinrich "Institor" Kramer and Jacob Sprenger are credited with the publication of the famous Hammer of Witches, the Malleus Maleficarum (1486) [21, 22]. Although not an official Church manual, it takes on an official tone due to the inclusion of the papal Bull within the text. It is interesting to note that the title itself carries a sign of misogyny: "Maleficarum", witches, not "Maleficorum", wizards… as if to say "evil is female / evil originates from women"!

The devil is everywhere in these pages: he makes men sterile, kills children, causes famine and pestilence, and all this with the help of witches. The compilers of the manual are familiar with the medicine of the age, and they investigate the relationship between sorcery and human temperaments: their descriptions rival those contained in the best psychopathology manuals [21, 22]. The text is divided into three parts and aims at proving the existence of demons and witches (warning the reader that anyone not convinced is also a victim of the Devil) and at explaining how to find and punish sorcery.

But what has this to do with women's health? It is quite simple: if a physician cannot identify the cause of a disease, it means that it is procured by the Devil. The inquisitor finds sin in mental illness because, he says, the devil is a great expert on human nature and may interfere more effectively with a person susceptible to melancholy or hysteria. Hysteria is considered a woman's disease, and who more than women are prone to melancholy? This disease is the basis of female delirium: the woman feels persecuted, and the devil himself is the cause of this "mal de vivre", which deprives women of confession and forgiveness, leading them to commit suicide.

Obviously, the women most affected are elderly and single; in most cases they have already suffered bereavement or been victims of violence. Sorcery becomes the scapegoat for every calamity, and etymological explanations are also provided: for Sprenger and Kramer, the Latin word foemina is formed from fe and minus, that is, "she who has less faith". This text is the worst condemnation of depressive illness and of women to be found throughout the course of Western history: until the eighteenth century, thousands of innocent women were put to death on the basis of "evidence" or "confessions" obtained through torture [21, 22].

5. Renaissance

At the end of the Middle Ages, journeys along the coasts of the Mediterranean Sea contributed to the rapid diffusion of the Greek classics, preserved and disseminated by the Arabs.

The humanistic movement (born with Dante, Boccaccio and Petrarch) emphasized respect for the writings of Antiquity. During these centuries a new, realistic approach to man as a person was born, which opposed the scholastics and introduced a fresh point of view on nature and man [19].

The Italian philosopher Giovanni Pico della Mirandola (1463-1494) espoused the principle that each man is free to determine his own fate, a concept that perhaps more than any other has influenced the developments of the last three centuries: only man is capable of realizing his ideal, and this condition can be achieved only through education [23]. Pico's thesis was taken up by the Spanish educator Juan Luis Vives (1492-1540). His pragmatic orientation produced occasional flashes of insight; for instance, he thought that emotional experience, rather than abstract reason, played the primary role in a man's mental processes: in order to educate a person it is necessary to understand the complex functioning of his mind [19].

Up to this time the medical vision of hysteria, inherited from the Hippocratic-Galenic tradition, continues to dominate [24]. At the end of the 16th century, in the European countries affected by the Counter-Reformation, the theological vision tends to overwhelm the medical community. This period sees the most intense activity of the Roman Inquisition, in which magic has replaced heresy as the main target. Thus, in these states, a new generation of physicians emerges, destined to be subordinated to the inquisitors [24]. It is precisely the physician and theologian Giovan Battista Codronchi (1547-1628) who, by criticizing the medical therapies of the time aimed at treating hysteria, gives us a detailed description of them.

Codronchi reported that midwives, recalling the teachings of Galen and Avicenna, took care of hysterical women by introducing their fingers into the patients' genital organs in order to stimulate orgasm and the production of semen [24]. Codronchi prohibited this treatment altogether, an attitude reflecting the preoccupation with sex and sexual repression typical of that historical phase; for him, the treatment had to be carried out by spiritual guides [24]. And while Codronchi was a staunch supporter of the existence of demons, arguing for it by reference to biblical and philosophical sources, the Italian Renaissance had already tried to condemn witch hunts and to give a "scientific" explanation of mental illness: among others, Girolamo Cardano (1501-1576) and Giovanni Battista Della Porta (1535-1615) were interested in sorcery and marginality, but did not see a demonic cause in them. They identified the origin of certain behaviors in fumes, in polluted water and in suggestion (for Cardano), or in the ingestion of certain substances that induce "visions" and "pictures" (according to Della Porta), but both based most of their considerations on physiognomy [25]. Another important physician, the Dutch Johann Weyer (1515-1588), set out to prove that witches were mentally ill and had to be treated by physicians rather than interrogated by ecclesiastics [19]. In 1550 he became the private physician of Duke William of Cleves, who was a chronic depressive. The Duke had observed that witches manifested many of the same symptoms as his relatives who had become insane. He therefore sympathized with Weyer's theory that these women were really suffering from mental illness, but he could not keep the witch hunters under control because of his own transient psychotic episodes, caused by an apoplectic stroke [19]. In 1563 Weyer published De praestigiis daemonum, a step-by-step rebuttal of the Malleus Maleficarum. His contemporaries called him "hereticus" or "insanus", but his pages reveal not a rebel but a religious man [19].

However, for the doctors of that time the uterus is still the organ that explains the vulnerable physiology and psychology of women: the concept of women's inferiority to men has still not been superseded.

Hysteria still remains the “symbol” of femininity [26].

6. Modern Age

The 16th century is a period of important medical developments, as proved by the writings of Andreas Vesalius (De humani corporis fabrica, 1543) and French surgeon Ambroise Paré (1510-1590).

These authors' findings form the basis for the birth of modern medical science [24], combined with the "philosophical revolution" in which René Descartes (1596-1650) explains how the actions attributed to the soul are actually linked with the organs of the body, and with the studies on the anatomy of the brain by the physician Thomas Willis (1621-1675). Willis introduces a new etiology of hysteria, no longer attached to the central role of the uterus but rather related to the brain and to the nervous system [24]. In 1680 another English physician, Thomas Sydenham (1624-1689), published a treatise on hysteria (Epistolary Dissertation on the Hysterical Affections) which brings the condition back within natural history, describing an enormous range of manifestations and recognizing for the first time that hysterical symptoms may simulate almost all forms of organic disease [19]. However, the author fluctuates between a somatic and a psychological explanation [27]. Sydenham demonstrates that the uterus is not the primary cause of the disease, which he compares to hypochondria: his work is revolutionary in that it opposes the prejudices of the time, but it will take several decades for the theory of "uterine fury" to be dismissed [26].

The scientific developments do not mark a dramatic shift away from the demonological vision of medicine, but progress hand in hand with the evolution of theories on exorcism. The written records tell of several outbreaks of hysteria, the most famous of which is undoubtedly the one that occurred in Salem village (Massachusetts) in 1692. The texts recall an episode in which a slave originally from Barbados spoke about fortune-telling and some girls created a circle of initiation, formed by unmarried women younger than twenty years of age. The very act of creating a circle of initiation was in itself an open violation of the precepts of the Puritans.

There is no record of the first stages of the disease: the girls appear "possessed" from February 1692. The symptoms described were staring, wide-open eyes, raucous and muffled noises, uncontrolled jumps, sudden movements, etc. The local doctor, William Griggs, referred the problem to the priest. The slave and two other women were summoned, and the former admitted witchcraft and pacts with the devil. Gradually they began to accuse each other. Eventually, 19 people were hanged as "witches" and over 100 were kept in detention. Only when the girls accused the wife of the Colonial Governor of being part of the circle herself did the Governor forbid further arrests and trials for witchcraft [27]. At the end of World War II Marion Starkey reported the case, comparing it with more contemporary events [27]. Her explanation of classical hysteria is that the illness manifested itself in young women repressed by Puritanism and was aggravated by the intervention of the Puritan pastors, with dramatic consequences. The incident thus shows that hysteria could be seen as a consequence of social conflicts [27].

Social conflicts do not occur exclusively in closed societies, such as small Puritan communities, but also in more open and dynamic societies such as big cities. In 1748 Joseph Raulin published a work in which he defines hysteria as an affection vaporeuse and describes it as a disease caused by the foul air of big cities and an unruly social life. In theory the disorder can affect both sexes, but women are more at risk because they are lazy and irritable [26].

Between the 17th and 18th centuries a school of thought that assigned women a social mission began to develop. If, from a moral point of view, the woman finds redemption in maternal sacrifice, which redeems the soul but does not rehabilitate the body, from a social point of view she takes on a specific role. In 1775 the physician-philosopher Pierre Roussel published the treatise Système physique et moral de la femme, greatly influenced by the ideas of Jean-Jacques Rousseau. For both authors femininity is an essential nature, with defined functions, and the disease is explained by the non-fulfillment of natural desire. The excesses of civilization cause disruption in the woman, as well as the moral and physiological imbalance that doctors identified in hysteria [26]. The afflictions, diseases and depravity of women result from their breaking away from normal natural functions. Following this natural determinism, doctors confine the woman within the boundaries of a specific role: she is a mother and the guardian of virtue [26]. In this context, the woman-witch appears more and more as an artifice to secure the social order of the ancien régime.

The Enlightenment is a time of growing rebellion against misogyny, and sorcery becomes a matter for psychiatrists: in the Encyclopédie we read that sorcery is a ridiculous activity, stupidly attributed to the invocation of demons. Furthermore, mental illness starts to be framed within the "scientific view", and hysteria is indeed described in the Encyclopédie as one of the most complicated diseases, originally identified by ancient scientists as a problem related to the uterus. Even more interesting is the fact that the causes and symptoms of hysteria and melancholy are linked to the humoral theory. Fortunately, the "demonological vision" of women's mental illness did not prevent earlier medical theories from being maintained [28].

The last “witch” was sentenced to death in Switzerland in 1782, 10 years after the publication of the latest volumes of the Encyclopédie. Her name was Anna Göldi, and her memory was rehabilitated only in 2008 [29].

In the 18th century, hysteria starts being gradually associated with the brain rather than the uterus, a trend which opens the way to neurological etiology: if it is connected to the brain, then perhaps hysteria is not a female disease and can affect both sexes. But this is not such a simple shift as it may seem.

The German physician Franz Anton Mesmer (1734-1815) found in suggestion a method of treatment for his patients suffering from hysteria, practicing both group and individual treatments. He identified in the body a fluid which he called "animal magnetism", and his method soon became famous as "mesmerism". It was thought that the magnetic action of the hands on the diseased parts of the body could treat the patient by interacting with the fluid within the body. Only later was it realized that this was mere suggestion. Mesmerism had subsequent developments in the study of hypnosis [30].

The French physician Philippe Pinel (1745-1826), assuming that kindness and sensitivity towards the patient are essential to good care, freed the patients detained in the Salpêtrière sanatorium in Paris from their chains. Pinel's theory derives from ideas linked to the French Revolution: the "mad" are not substantially different from the "healthy"; the balance is broken by the illness, and treatment must restore this balance. Nonetheless, Pinel too considered hysteria a female disorder [19, 31]. Jean-Martin Charcot (1825-1893), the French father of neurology, pushed for a systematic study of mental illnesses. In particular, he studied the effectiveness of hypnosis in hysteria, which, from 1870 onwards, is distinguished from other diseases of the spirit. Charcot argues that hysteria derives from a hereditary degeneration of the nervous system, namely a neurological disorder. By drawing graphs of the paroxysm, he eventually shows that this disease is in fact more common among men than women [32-36].

During the Victorian Age (1837-1901) most women carried a bottle of smelling salts in their handbag: they were inclined to swoon when their emotions were aroused, and it was believed that, as postulated by Hippocrates, the wandering womb disliked the pungent odor and would return to its place, allowing the woman to recover consciousness [34]. This is a very important point, as it shows how Hippocrates' theories remained a point of reference for centuries.

7. Contemporary Age

The French neuropsychiatrist Pierre Janet (1859-1947), with the sponsorship of J. M. Charcot, opened a laboratory at the Salpêtrière in Paris. He convinced doctors that hypnosis, based on suggestion and dissociation, was a very powerful model for investigation and therapy. He wrote that hysteria is "the result of the very idea the patient has of his accident": the patient's own idea of the pathology is translated into a physical disability [35]. Hysteria is a pathology in which dissociation appears autonomously, for neurotic reasons, and in such a way as to adversely disturb the individual's everyday life. Janet studied five symptoms of hysteria: anaesthesia, amnesia, abulia, motor control disorders and modification of character. The root of hysteria lies in the idée fixe, that is, in the subconscient or subconscious. As far as eroticism is concerned, Janet noted that "the hysterical are, in general, not any more erotic than the normal person". Janet's studies were very important for the early theories of Freud, Breuer and Carl Jung (1875-1961) [35, 36].

The father of psychoanalysis, Sigmund Freud (1856-1939), provides a contribution that leads to the psychological theory of hysteria and the assertion of a "male hysteria". Freud himself wrote in 1897: "After a period of good humor, I now have a crisis of unhappiness. The chief patient I am worried about today is myself. My little hysteria, which was much enhanced by work, took a step forward" [37]. In 1895 he published Studies on Hysteria with Joseph Breuer (1842-1925). The key concepts of his psychoanalytical theory (the influence of childhood sexual fantasies and the different ways of thinking of the unconscious mind) had not yet been formulated, but they are already implicit in this text. Among the cases presented we find the hysteria of the young Katharina, who suffers from globus hystericus. The text does not refer to the famous Oedipus complex, which emerges through the study of male hysteria, developed after this treatise [36-38].

We now reach a crucial point: until Freud it was believed that hysteria was the consequence of the lack of conception and motherhood. Freud reverses the paradigm: hysteria is a disorder caused by a lack of libidinal evolution (arrested at the stage of the Oedipal conflict), and the failure of conception is the result, not the cause, of the disease [36-38]. This means that the hysterical person is unable to live a mature relationship. Furthermore, another point that is important from a historical perspective is that Freud emphasizes the concept of "secondary advantage". According to psychoanalysis, the hysterical symptom is the expression of the impossibility of fulfilling the sexual drive because of the reminiscence of the Oedipal conflict [36-38]. The symptom thus yields a "primary benefit", allowing the "discharge" of the urge, the libidinal energy linked to sexual desire. It also has the "secondary benefit" of allowing the patient to manipulate the environment to serve his or her needs. However, it remains a disease of women: a vision of illness linked to the historically determined way of conceiving the role of women. The woman has no power other than "manipulation", trying to use others in subtle ways to achieve hidden objectives. It is still an evolution of the concept of the "possessed" woman [37, 38].

During the 19th century, the description of hysteria as a variety of bodily symptoms experienced by a single patient was labeled Briquet's syndrome. In the 20th century several studies focused on a particular presentation of hysterical symptoms: a loss or disturbance of function which does not conform to what is known about the anatomy and physiology of the body, such as loss of speech but not of singing. Psychiatrists note that any function of the body can be affected by hysteria [34].

An analysis of the framing of these diagnoses in British medical discourse c. 1910-1914 demonstrates that hysteria and neurasthenia, although undergoing redefinition in these years, were closely connected through the designation of both as hereditary functional diseases. Before the war these diagnoses were perceived as indicators of national decline. Continuity, as well as change, is evident in medical responses to shell-shock [38].

The identification of the hysterical fit, according to Pierre Janet's theories, was for a long time considered impossible: an example of this diagnostic dilemma is provided by the Royal Free Disease, an epidemic of neurological, psychiatric and other miscellaneous symptoms which swept through the staff of the Royal Free Hospital in London between July and November 1955 and which affected a total of 292 members of staff. The Medical Staff Report concluded that an infective agent was responsible [34]. In 1970 McEvedy and Beard put forward the alternative suggestion that Royal Free Disease had been an epidemic of hysteria (for example, the sensory loss affected a whole limb or part of a limb, but the pattern rarely followed the distribution of nerves to the skin). They also pointed out that the spread of the symptoms, predominantly affecting young female resident staff, is characteristic of epidemics of hysteria, which usually occur in populations of segregated females such as girls' schools, convents and factories. They wrote, too, that hysteria had a pejorative meaning in their society, but that this should not prevent doctors from weighing the evidence dispassionately [34].

Besides defining the nature of hysteria, 20th-century psychiatrists also considered its history and geography. During the World Wars hysteria attracted the attention of military doctors, and several authors recorded their impressions of the frequency of hysteria in this period. Under battle conditions, the way in which hysterical symptoms provide a solution to emotional conflicts is particularly clear: a soldier torn between fear of facing death and shame at being thought a coward may develop a hysterical paralysis of his arm, sickness being a legitimate way out of the conflict [34]. For instance, in 1919 Hurst wrote that "many cases of gross hysterical symptoms occurred in soldiers who had no family or personal history of neuroses, and who were perfectly fit". In 1942 Hadfield commented that the most striking change in war neurosis from World War I to World War II was "the far greater proportion of anxiety states in this war, as against conversion hysteria in the last war" [34]. But World War II not only allowed a comparison with World War I in terms of patterns of neurotic symptoms; it also became an opportunity for cross-cultural comparisons between troops from widely differing cultural backgrounds [34].

Abse's studies (1950) on hysteria in India during World War II show that 57% of the 644 patients admitted to the Indian Military Hospital in Delhi during 1944 were diagnosed as suffering from hysteria and 12% as suffering from anxiety states. Abse also collected data from a British Military Hospital in Chester (June to October 1943), where, by contrast, anxiety states (50%) outnumbered hysteria cases (24%) [34].
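To make this comparison concrete, the reported percentages can be converted into approximate head-counts. The short Python sketch below does only that arithmetic; it is an illustration added here, not part of Abse's original analysis, and since the Chester sample size is not given in the text, only the anxiety-to-hysteria ratio is computed for that hospital.

# Illustrative arithmetic only: converts the percentages reported above into
# approximate head-counts for comparison.
delhi_total = 644                            # Indian Military Hospital, Delhi, 1944 (figure given in the text)
delhi_hysteria = round(0.57 * delhi_total)   # about 367 hysteria diagnoses
delhi_anxiety = round(0.12 * delhi_total)    # about 77 anxiety-state diagnoses
print(f"Delhi 1944: ~{delhi_hysteria} hysteria vs ~{delhi_anxiety} anxiety cases")

# Chester, June-October 1943: only proportions are reported (50% anxiety, 24% hysteria),
# so the meaningful comparison is the ratio rather than absolute numbers.
print(f"Chester 1943: anxiety/hysteria ratio ~ {50 / 24:.1f}:1")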

Other studies confirm these data. In particular, in 1950 Williams showed that Indian hysterics were often of high morale and of all grades of intelligence, whereas among the British, gross hysterical reactions were the breakdowns of men with low stability and morale, and usually of low intelligence [34]. Moreover, these studies show that from World War I to World War II there was a small relative decline of hysteria among British soldiers, paralleled by a relative rise in anxiety states; by contrast, hysteria was still the most common form of neurosis among Indian soldiers in World War II. The contrasting patterns shown by the two groups of soldiers suggest that hysteria and anxiety neurosis bear a reciprocal relationship, so that the decline of the former is compensated for by a rise in the latter [34].

But this also seems to demonstrate a different course of hysterical disease in Western and non-Western societies. In the second half of the 20th century we witness a "decrease" of hysteria (as a response to stress representing the patient's concept of bodily dysfunction) in Western societies. Data on annual admissions for hysteria to psychiatric hospitals in England and Wales from 1949 to 1978 show that they diminished by nearly two-thirds, with a marked decline from 1971 onwards, and a similar decrease is recorded in a study conducted in Athens [34]. Hysteria was in fact a major form of neurotic illness in Western societies during the 19th century and remained so up to World War II. Since then there appears to have been a rapid decline in its frequency, and it has been replaced by the now common conditions of depressive and anxiety neuroses.

But studies focusing on Indian patients, as well as on other non-Western countries such as Sudan, Egypt and Lebanon [34], demonstrate that during the second half of the 20th century hysteria, as one of the somatic ways of expressing emotional distress, remained a prominent condition among psychiatric patients, although anxiety and depressive neuroses may have gained a little ground. Hence, psychiatrists supposed that this was an unstable transitional phase and predicted the disappearance of hysteria by the end of the 20th century [34].

There seems to be an inverse relationship between the decrease of hysteria and the increase of depression in Western societies. The idea that depression was more likely to manifest itself in those born after the Second World War was suggested in 1989 by Klerman [39]. More recently it has been documented by studies repeated over time in America and Australia, although there are exceptions in specific areas in relation to specific socio-environmental conditions and migration [40-44].

A systematic review of the misdiagnosis of conversion symptoms and hysteria, based on studies published since 1965 on the diagnostic outcome of adults with motor and sensory symptoms unexplained by disease, shows that a high rate of misdiagnosis of conversion symptoms was reported in early studies, but that this rate has averaged only 4% in studies of this diagnosis since 1970 [45]. This decline is probably due to improvements in study quality rather than to improved diagnostic accuracy arising from the introduction of computed tomography of the brain [40].

The concept of hysterical neurosis was removed with the publication of DSM-III in 1980: hysterical symptoms are now considered manifestations of dissociative disorders.

The evolution of this disease seems to be a function of social "westernization". Several studies on mental diseases seem to validate this hypothesis. In 1978 Henry B. Murphy (1915-1987) [46] identified the main causes of melancholy in social change and its socio-economic consequences, a picture characterized by feelings of self-blame, low self-esteem and helplessness. These features were described as being due to rapid social change in two different social theatres: in those areas of England engaged in turning a feudal economy into an industrial one at the end of the 17th century, and more recently in some areas of Africa affected by rapid economic development. In both cases the onset of psychopathological symptoms has been related to two main factors: on the one hand, the disruption of the extended family and the loss of close emotional support for the individual; on the other, a marked striving towards economic individualism. In this new psychological and external context, destiny and the future are no longer determined by fate; men build their own destiny, an unfamiliar and heavy responsibility towards life [47]. In 1978 Murphy wrote that in Asia and in Africa these symptoms were rare, except among Westernized persons, and that it could be useful to examine under what conditions these symptoms first became common in different societies [46].

In the passage from the expression of distress as "hysteria" to its expression as "melancholy", the different conception of the self is essential. The world of hysterical manifestation is a world of "dissociation": something dark (trauma, external influences) produces a symptom that is not directly interpretable. From here comes the development in the West of hypnotic therapies (from Mesmer up to Freud and Janet) [36] and, in the West more than in the non-Westernized world, the implementation of exorcism and purificatory rituals that mark the encounter with the group: Tarantism and Argia in Southern Italy [47], and the Narval-Wotal practices of West African immigrants [48-52]. It is a world linked to a vision of the woman as an unaware instrument of evil forces, "out of control" of reason, or (in European Positivism) as an "immature" being whose manipulative behavior seeks to achieve an improper position of power. The world of melancholy is also female, predominantly female, since women suffer from depression at a ratio of 2.5 to 1 compared with men [48, 43]. But it is a reality in which the patient (and therefore the female patient) has acquired the conviction of being the master of her own destiny (and is therefore to blame for her own failures). We can see this passage in 1980s Africa.

Modern Africa is characterized by a variety of different economic and social situations which are not easy to compare, but in which urbanization and the progressive loss of tribal links are a common trend. In recent years several research projects concerning the transformation of psychopathology, based on African populations and African immigrants in Sardinia, Italy, have confirmed Murphy's hypotheses on the role of social change and its socio-economic consequences in the genesis of depressive symptomatology [48]. The studies involved populations in which the traditional social structure still survives and which have only marginally been affected by social changes; populations undergoing a rapid change towards economic individualism, although these have now become a rarity in modern-day Africa; and populations whose traditional social structures and underlying human relationships have been able to compromise and face the processes of partial change by actively adapting to the new realities [48]. The starting point is the distinction between the character of African psychopathology, the prevalent form of which is characterized by ideas of reference, persecutory delusions and psychosomatic symptoms, and "Western" depression, which involves self-guilt, unworthiness and suicidal conduct. The "Westernization" of the pathology is expressed through the changing of symptoms from African to Western models. A detailed analysis of the African community surveys revealed, in the Bantu area, the existence of populations characterized by a psychopathological risk similar to that highlighted in Westernized settings, such as the women in Harare, who presented a significant yearly prevalence rate of anxiety and depressive disorders. A psychosocial key, confirmed by several studies, suggests that maintaining close links with the group of origin can play a protective role against mood-related disorders [48].

Several studies identify the existence of two counterposed means of expressing depression, which are most likely "culturally determined" by a "different level of westernization" [42]. Researchers in transcultural psychiatry suggest that social factors may influence the modification of melancholic phenomenology and modulate the risk of depression [53-55].

A survey conducted on the Dogon Plateau amongst farmers and nomadic Fulani herdsmen in Mali reveals a very low frequency of depression, with depressive pictures exclusively linked to secondary reactions to serious somatic disease in illiterate individuals [50]. In addition, the psychopathology on the Plateau manifests itself along two opposing syndromic lines: on the one hand, a constellation of symptoms of persecution, psychosomatic complaints and psychasthenia; on the other, loss of interest in things, guilt, sadness and suicidal ideas, this latter picture being typical of educated individuals [51].

A study carried out in the Namwera area of Malawi, on the Mozambique border, during a deep micro- and macro-social transformation which led to the establishment of a multiparty democracy following a popular referendum, shows that an emotional earthquake was caused by the conflict of having to choose between innovation and tradition. This situation in fact developed into a full-blown epidemic of hysteria among young women [48]. In this context, in 1988 a dress factory, financed through an Italian co-operation project, had been established in a village populated by the Yao and Chicewa groups, characterized by an agricultural economy. The project was structured so as to allow the women to redeem the equipment after a training period and set up an independent activity [48].

In view of the particular condition of women in these cultures, this sudden passage from a traditional female role to a more independent activity seemed particularly well suited to a study of the relationship between personal transformation and psychopathological change. The study was carried out using three samples of age-matched women: dressmakers, farmers/housewives (the traditional role), and a group of nurses and obstetricians [48]. The history of their development, including the presence of stressful events and other risk factors, together with the degree of satisfaction with their jobs and married life and other socio-demographic variables, was investigated by means of a specifically validated interview [51].

The choice of an innovative occupation (dressmaker/nurse) can be read as an adaptive response in order to survive. Innovative occupations were a source of satisfaction in themselves, but they caused serious interpersonal and couple conflicts linked to the woman's new role and job. Housewives and dressmakers were more dissatisfied with their situation than nurses; they presented a higher number of psychopathological symptoms, and the number of subjects diagnosed as depressed according to DSM-III-R was higher among them [48, 51].

Housewives also experienced an increased frequency of psychosomatic symptoms, such as headache, excessive fatigue and feelings of worthlessness, and often reported the conviction that people did not recognize the importance of their role and that someone could affect their health, which is interpretable as an external localization of the source of their distress, in accordance with the character of African psychopathology [48, 51]. On the other hand, dressmakers showed a high frequency of depressive symptoms, problems of self-esteem, beliefs of social uselessness and suicidal thoughts [48, 51].

Characteristically, the suffering women also differed in the attribution of the causes of their discomfort: the "entrepreneurs" believed that the cause of their suffering had to be sought in their own mistakes, while the traditional women attributed their ailments to an "evil spell" [51].

Among the three groups, nurses showed the highest frequency of psychological well-being and emotional stability. This can be interpreted as the result of good integration into a new identity, due to a job related to women's traditional role and to satisfaction with financial stability. Without drastically breaking with tradition, and in line with several psychosocial models, a cultural institution such as an innovative job is perceived by both society and the individual as an integral part of the evolving self, and it creates the conditions for cultural transmission to continue. This interpretation explains why nurses did not suffer from conflicts between tradition and innovation, while dressmakers, whose new individualistic role broke with women's traditional one, did not feel accepted by their group and were consequently more vulnerable to mood disorders and particularly to depression, a "Western" depression [48].

By contrast, in populations far removed from the processes of westernization, depressive disorders were relatively rare and nearly always secondary to severe somatic disorders, manifesting themselves as primary disorders only in better-educated subjects [48]. Several studies have demonstrated that the threshold of onset of depression is situated at a higher level than in Western cultures, and they tend to support the hypothesis of a means of expression characterized by syndromic aggregations halfway between the "Western" or "guilt" style and the "traditional" style of "dislocation from the group". Environmental factors seem to affect the evolution of depressive symptoms and the risk of depression through modifications in the social organization that elicit an attitude of "compulsive self-responsibilization" which would otherwise have been destined for extinction [48].

8. Focus on Sardinian Modern Cases

We should like to conclude by discussing some Sardinian cases which seem to contradict what has been said above: in modern times, they apparently document the continued use of Hippocrates’ and Galen’s ancient medical theories in relation to hysteria.

THE MANAGU HOSPITAL IN SIDDI, SARDINIA, ITALY

This small rural hospital, which was open from 1860 to 1890 in the small village of Siddi (in the heart of the Marmilla area, Sardinia) admitted 463 patients, the subject of our recent research. 122 were women (mainly peasants, maids and housewives), and of these, 10 were suffering from hysteria (sometimes the diagnosis was simple hysteria, on other occasions they were suffering from convulsions, constipation, intermittent fever…) [56]. In analyzing the simplest cases, where the hysteria was not combined with other diseases, we found the constant use of antispasmodics, sedatives and refreshing concoctions in the form of decoctions, infusions, creams, ointments and poultices. First of all, a decoction of tamarind and barley, extract of belladonna, valerian and liquid laudanum. Following this, infusions of fennel, mint and orange flowers, chamomile flowers and lime, cassia pulp and elder-tree ointment [57]. Only in one case (1868) was additional treatment prescribed in the form of polenta poultices, sulphates of potassium iodide, leeches, rubbery emulsions with iron carbonate and gentian extracts, and in another (1871) morphine acetate, infusions of senna leaves, citric acid and ammonium acetate ethers [57].

Treatment varied when the hysteria was associated with other symptoms such as, for example, epileptic convulsions: in the first phase the patient was administered zinc oxide, valerian extract, enemas with an emulsion of asafoetida and an egg yolk (to be repeated every 4 days) and then baking soda, water, fennel, turpentine and rosewater for rubs. Finally electuaries and polenta poultices [57].

The case of a young female patient at Managu is similar to the previous ones. Hospitalized for less than 54 days, the young woman was subjected to treatment based on emulsions of chloral hydrate, Burgundy pitch plasters, lemonade, water mint and lemon balm [57].

VILLA CLARA, CAGLIARI’S PSYCHIATRIC HOSPITAL, SARDINIA, ITALY

We are at the beginning of the twentieth century: the psychiatric hospital Villa Clara in Cagliari is an institution which ensures the implementation of the most advanced "psychiatric therapy". In actual fact, this advanced therapy consisted in the "application of leeches, drastic purges, cold baths and in procuring groups of blisters, usually on the neck" [58]. Villa Clara's story is contained in 16,000 archival files, still being sorted, but if there were any need of corroboration, its history is screamed out in the words of Giovanna M., Villa Clara's Register no. 1. Giovanna M. was admitted to the Genoa hospital when she was 10 years old, diagnosed with madness: she had a terrible headache, but preferred to say she had a "cranky head". Three years later, in 1836, she was moved to the basement of Cagliari's Sant'Antonio Hospital [58]. She describes this as "as dark as a tomb, the only place on the island where the mad… or the insane… or the maniacs… or the idiots – as we were called – were locked up. We were 50 people in chains, in the smell of our own excrement, with rats gnawing at our ulcers…" [58]

In the early years of the new century, after a long stay at Cagliari's new San Giovanni di Dio Hospital, Giovanna M., now old and blind, was transferred to the Villa Clara psychiatric hospital, where Professor Sanna Salaris formulated a diagnosis of "consecutive dementia" and hysteria. But despite being constantly subjected to careful clinical observation, she was treated here only with "tonics … two eggs and milk … balneotherapy, rhubarb tinctures, potassium iodide, lemonade and laudanum, insulin and laxatives, a lot of purgatives: always, for everything". Giovanna M. died in the mental hospital in 1913 due to "ageing of organs" and "senile marasmus", as confirmed in the necrological report. Anna Castellino and Paola Loi know all there is to know about Giovanna M. and end their work Oltre il cancello with Giovanna's words: "And you'd better believe it: I was 90 years old. Fate, which takes away healthy, free, young people, never pardoned me once. It has let me live all this time, quite lucid, but closed up in here … since I was ten years old …. eighty years in psychiatric hospital for a headache" [58].

CONCLUSIONS

Throughout history, social changes seem to offer a fertile substrate for the evolution of complex and innovative systems of interpreting reality, of attributing causes to and controlling events, and of living emotions. A critical study of the historical development and interpretation of mental diseases may contribute to explaining the means of psychopathological expression. Moreover, it may prompt a re-discussion of the concepts of threshold and vulnerability in cases where it could be hypothesized that new cognitive systems, although adaptive to new social requirements, might represent a ("culturally specific") factor of vulnerability to specific mental disorders.

We have seen that both the symptomatic expression of women's malaise and the culturally specific interpretation of that malaise bear witness to the changing role of women: from an incomprehensible being (and therefore an instrument of Evil), to a frail creature that nonetheless tries to manipulate the environment to her own ends (in Freud's view), to a creature who is the arbiter of her own fate (in the modern transformation from hysteria to melancholia), where the woman seems to have traded power for loneliness and guilt.

REFERENCES

1. Angermeyer MC, Holzinger A, Carta MG, Schomerus G. Biogenetic explanations and public acceptance of mental illness: systematic review of population studies. Br J Psychiatry. 2011;199:367–72.

2. Sigerist HE. A history of medicine. Primitive and archaic medicine. New York: Oxford University Press; 1951.

3. Cosmacini G. The long art: the history of medicine from antiquity to the present. Rome: Oxford University Press; 1997.

4. Sterpellone L. Greek medicine. 2nd ed. Noceto: Essebiemme; 2002.

5. Euripides. The Bacchae. Turin: Pearson; 1920.

6. Tommasi MCO. Orgiasm, orgies and ritual in the ancient world: a few notes. Kervan. 2006-2007;4/5:113–29.

7. Penso G. Roman medicine. 3rd ed. Noceto: Essebiemme; 2002.

8. Vanzan A. Malinconia e Islam. POL.it [journal on the Internet]. July 2007. Available at: http://www.pol-it.org/ital/islam.htm.

9. Jacquart D, Micheau F. Arab medicine and medieval Europe. Paris: Maisonneuve et Larose; 1990.

10. Iancu Agou D, Nicolas E. From the Tibbonides to Maimonides: the influence of Andalusian Jews in the medieval Pays d'Oc. Paris: Cerf; 2009.

11. Grmek MD. History of Western medical thought. Antiquity and the Middle Ages. Rome: Oxford University Press; 1993.

12. Genet JP. The transformation of medieval education and culture: the Christian West, twelfth to fifteenth centuries. Paris: Seli Arslan; 1999.

13. Laharie MR. Insanity in the Middle Ages: eleventh to thirteenth centuries. Paris: The Golden Leopard; 1991.

14. Santucci F. Virgo Virago. Catania: Akkuaria; 2008.

15. Trotula de Ruggiero. On women's health. Palermo: La Luna; 1994.

16. Hildegard von Bingen. Causes and treatment of disease. Palermo: Sellerio; 1997.

17. Mancini A. "One day you came to me melancholy …". Milan: Franco Angeli; 1998.

18. Thomas Aquinas. Summa Theologica. Bologna: ESD; 1996.

19. Alexander FG, Selesnick ST. History of Psychiatry. Rome: Newton & Compton; 1975.

20. Loi S. Inquisition, witchcraft and wizardry in Sardinia. Cagliari: AM&D; 2003.

21. Kramer H, Sprenger J. The Hammer of Witches. Venice: Marsilio; 1982.

22. Danet A. The Inquisitor and his witches. In: Kramer H, Sprenger J. The Witches' Hammer. Grenoble: Millon; 1990.

23. Pico della Mirandola G. Oratio de hominis dignitate. Pordenone: Studio Tesi; 1994.

24. Brambilla E. The end of exorcism. Hist Pap. 2003;1:117–64.

25. Bonuzzi L. Psychopathology and criminality, an Italian itinerary. Ital Noteb Psychiatry. 1996;4-5:225–79.

26. Duby G, Perrot M. History of Women in the West from the Renaissance to the Modern Age. Bari: Oxford University Press; 1991.

27. Wing JK. Reasoning About Madness. Oxford: Oxford University Press; 1978.

28. Diderot D, D'Alembert J. Encyclopedia, or rational dictionary of the sciences, arts and crafts. Bari: Oxford University Press; 1968.

29. Hauser W. The judicial murder of Anna Göldi. New research on the last witch trial in Europe. Zurich: Limmat Verlag; 2007.

30. Zanobio B, Armocida G. History of Medicine. New York: Masson; 1997.

31. Pessotti I. The century of asylums. Sao Paulo: Editora; 1996.

32. Bannour W. Jean-Martin Charcot and hysteria. Paris: Métailié; 1992.

33. Mitchell J. Crazy and Jellyfish. New York: The Turtle; 2004.

34. Leff J. Psychiatry Around the Globe: a transcultural view. New York: Marcel Dekker; 1981.

35. Haule JR. Pierre Janet and dissociation: the first transference theory and its origins in hypnosis. Am J Clin Hypnosis. 1986;29:86–94.

36. Pérez-Rincón H. Pierre Janet, Sigmund Freud and Charcot's psychological and psychiatric legacy. Front Neurol Neurosci. 2011;29:115–24.

37. Mattioli G, Scalzone F. Current hysteria. Disease or obsolete original position? Milan: Franco Angeli; 2002.

38. Loughran T. Hysteria and neurasthenia in pre-1914 British medical discourse and in histories of shell-shock. Hist Psychiatry. 2008;19(73 Pt 1):25–46.

39. Klerman GL, Weissman MM. Increasing rates of depression. JAMA. 1989;261(15):2229–35.

40. Compton WM, Conway KP, Stinson FS, Grant BF. Changes in the prevalence of major depression and comorbid substance use disorders in the United States between 1991–1992 and 2001–2002. Am J Psychiatry. 2006;163(12):2141–7.

41. Goldney RD, Eckert KA, Hawthorne G, Taylor AW. Changes in the prevalence of major depression in an Australian community sample between 1998 and 2008. Aust N Z J Psychiatry. 2010;44:901–10.

42. Carta MG, Mura G, Lecca ME, et al. Decreases in depression over 20 years in a mining area of Sardinia: due to selective migration? J Affect Disord. 2012. [Epub ahead of print]

43. Carta MG, Kovess V, Hardoy MC, et al. Psychosocial wellbeing and psychiatric care in the European Communities: analysis of macro indicators. Soc Psychiatry Psychiatr Epidemiol. 2004;39(11):883–92.

44. Hardoy MC, Carta MG, Marci AR, et al. Exposure to aircraft noise and risk of psychiatric disorder: the Elmas survey. Soc Psychiatry Psychiatr Epidemiol. 2005;40(1):24–6.

45. Stone J, Smyth R, Carson A, et al. Systematic review of misdiagnosis of conversion symptoms and "hysteria". BMJ. 2005;331(7523):989.

46. Murphy HB. The advent of guilt feelings as a common depressive symptom: a historical comparison on two continents. Psychiatry. 1978;41(3):229–42.

47. Gallini C. The colorful dancer. Naples: Liguori; 1988.

48. Altamura AC, Carta MG, Tacchini G, Musazzi A, Pioli MR. Prevalence of somatoform disorders in a psychiatric population. Eur Arch Psychiatry Clin Neurosci. 1998;248:267–71.

49. Carta MG, Coppo P, Reda MA, Hardoy MC, Carpiniello B. Depression and social change. From transcultural psychiatry to a constructivist model. Epidemiologia e Psichiatria Sociale. 2001;10:46–58.

50. Carta MG, Coppo P, Carpiniello B, Mounkuoro PP. Mental disorders and health care seeking in Bandiagara: a community survey in the Dogon Plateau. Soc Psychiatry Psychiatr Epidemiol. 1997;32(4):222–9.

51. Carta MG, Carpiniello B, Dazzan P, Reda MA. Psychopathology in the Dogon plateau: an assessment using the QDSM and principal components analysis. Soc Psychiatry Psychiatr Epidemiol. 1999;34:282–5.

52. Carta MG, Carpiniello B, Coppo P, Hardoy MC, Reda MA, Rudas N. Social changes and psychopathological modifications. Results of a research program in African populations. Psychopathology. 2000;33:240–5.

53. Carta MG, Aguglia E, Bocchetta A, et al. The use of antidepressant drugs and the lifetime prevalence of major depressive disorders in Italy. Clin Pract Epidemiol Ment Health. 2010;6:94–100.

54. Carta MG, Angst J. Epidemiological and clinical aspects of bipolar disorders: controversies or a common need to redefine the aims and methodological aspects of surveys. Clin Pract Epidemiol Ment Health. 2005;1(1):4.

55. Carta MG, Hardoy MC, Garofalo A, et al. Association of chronic hepatitis C with major depressive disorders: irrespective of interferon-alpha therapy. Clin Pract Epidemiol Ment Health. 2007;3:22.

56. Tasca C. The archive of the Managu hospital in Siddi: health care in nineteenth-century rural Sardinia. Cagliari: Deputation of Homeland History for Sardinia; 2001.

57. Tasca C. Recipes for the poor: medicine in Sardinia in the mid-nineteenth century. Dolianova: Graphics Parteolla; 2009.

58. Castellino A, Loi AP. Beyond the gate. A history of the mental asylums of Cagliari, from Sant'Antonio to Villa Clara, through the archives. Cagliari: AM&D; 2007.


King John of England and Pope Innocent III had struggles that lasted throughout Innocent's pontificate (1198-1216). These struggles culminated in the issuance of the Concession of England to the Pope in 1213, and even that did not settle the question. I plan to divide this paper into five sections to get at why the document was written. The first section will deal with what the document actually says and how it should be interpreted. The next section will deal with King John; it will cover a large part of his life as a monarch to help us understand why he did what he did. The following section will do the same, but for Pope Innocent III. Then the other parties involved will be discussed, mainly with regard to why they were involved. In the final section, everything will be tied together so that we can have a clear, concise picture of why King John gave his kingdom to the Church. I hope that we will be able to see that it was not out of John's love for the Church, but for political gain, that he ceded his kingdom to the papacy, and even then it was largely for appearances.

In the Concession of England to the Pope, everything seems to me to be straightforward. The document tells us the following: John states that he is performing this act of his own free will, with the counsel of his barons, for the remission of his own sins and those of his entire kingdom, living and dead. He makes two major provisions in the document. The first is the grant of the whole kingdoms of England and Ireland to Pope Innocent III and his successors, with John and his successors holding the land as vassals of the pope. The second is the establishment of an annual tribute to the papacy of a thousand marks sterling. He then lays out the payment plan, dividing the tribute into two payments, one on the feast of St. Michael and the other on Easter. He then binds all of his successors to his concession and declares that any who attempt to undo it will forfeit their right to the kingdom, for he states, “this charter of our obligation and concession shall always remain firm” (Henderson 430). The next section of the concession deals with the formula of an oath that King John makes to Pope Innocent III. John vows faithfulness to God, St. Peter, the Church, and the pope, and pledges that he will not allow harm to come to the pontiff by his own hand or another's if he is aware of such a deed.

The interpretation of this document should be straightforward, but that is not the case. The document says that the King of England and Ireland (King John and his successors) is completely subservient to the Roman Pontiff; yet John continued to influence the election of bishops and the placement of clerics, and still defied some of the papal orders on political policy. As a political document, it did not have any real effect. Like many documents, it eventually fell into disuse and then was disobeyed outright, which is what happened to the concession in the reign of Henry VIII, who violated it by declaring himself head of the Church in England.

Since the problems we are dealing with in this paper concern those between papal and monarchial interests, I will limit my discussion of King John to his dealings with the clergy and delve into his secular affairs only where necessary. During the Middle Ages, the ever-expanding Code of Canon Law had provisions stating that whenever a bishopric fell vacant, the successor was to be elected by the cathedral canons and approved by the Holy See. In addition, in cases of disputed elections the Holy See was to decide directly who would receive the bishopric. However, monarchs usually wanted to steer the Church toward a position advantageous to themselves. King John, like every other monarch, influenced the election of prelates in all of his dioceses and worked to have clergy placed where he saw fit. In fact, many holy and pious men were elected through a king's influence, so in the minds of many people in the Middle Ages this was not a great problem. Nevertheless, it still involved the controversy over lay investiture, which at this time was in full swing (Cheney Pope 124, 147, 155).

King John was a very stubborn monarch and believed in the notion of absolute monarchy.

Due to his belief in absolute monarchy, he felt that he should wield even the spiritual sword. In his attempts to control the Church, John tried to keep the elections of bishops and the appointments of clergy to various positions under his thumb. Usually he was very successful in these attempts, because there were some precedents for monarchial influence in the spiritual realm (Clayton 156-159).

The papal court rarely favored John, for John seemed consistently to support the side that Innocent III opposed. For instance, Pope Innocent declared Otto of Brunswick, the nephew of John (also referred to as Otto of Saxony), emperor of the Holy Roman Empire, but John did not want a rival on the German throne. John did not attack Otto, but he refused any aid to help Otto fight Philip of Swabia in 1203. After much reluctance, John finally gave in to the pope's demands to aid Otto. Then, after Otto's betrayal of Pope Innocent, John refused to stop aiding him. This prompted the pope to ask Philip Augustus, King of France, to march on England. So when the controversy over the election of Stephen Langton to the Archbishopric of Canterbury arose, it was only the latest in a long string of acts of defiance by John (Cheney Letter 76).

Most historians consider Pope Innocent III to be the greatest medieval pope. He was also one in a line of canon-lawyer popes, which shows the great emphasis he would place on the laws, rules, and regulations of the Church. By the time Innocent dealt with King John, he had already distinguished himself as pope. He had forced Philip Augustus to take back the wife he had uncanonically divorced. Innocent had already called for another crusade to the Holy Land and had successfully handled the question of who would be emperor of the Germans.

Innocent III was well known for being able to play one ruler off against another. It is therefore no surprise that he brought other countries into the struggle between himself and King John. The two countries he drew into this particular quarrel were France and the Holy Roman Empire. The Holy Roman Empire was brought into the matter because England had become entangled in its affairs: John was the uncle of one of the rivals for the imperial throne, and at first he did not want to get involved, until the papacy forced him to support his nephew. Once King John supported his nephew, there was no going back. Emperor Otto betrayed Innocent III and broke his pledge that he would be subservient to the pope. John refused to stop supporting his nephew's claim to the throne even after Pope Innocent demanded that he withdraw his aid. It is possible that John was acting merely out of spite, but it is impossible to know.

France was drawn in by the desire of the papacy alone. The pope did not have troops of his own to bring King John into line if that proved necessary, so Innocent III convinced King Philip of France to fight for him should it come to that. For his services, the papacy was willing to allow Philip to take much, possibly all, of the lands the English crown held in France, as well as England itself.

All of these previous issues are in some way interrelated with what led to the Concession of England to the Pope. However, the proverbial “straw that broke the camel's back” was the dispute surrounding the elevation of Stephen Langton to the Archbishopric of Canterbury.

In July 1205, Hubert Walter, the Archbishop of Canterbury, died. Normally in such cases the canons of the cathedral would vote to elect a successor, but questions arose because the see was the English primacy. The bishops wanted a vote, as did the king. The king supported his secretary, John de Gray, while the canons favored their prior, Reginald. Because of this confusion, the king sent delegates to the Holy See to ask the pope's opinion on how the election should proceed.

The canons of the cathedral heard a rumor that the king's men in Rome were not working on their common cause of settling the particulars of an election to a primacy (the head diocese of a country), but were instead promoting John de Gray's claim to the see. They therefore met, elected their prior, and sent him to Rome to receive approval. When the king received word of these actions and confronted the canons about what they had done, they assured him that they had never held an election.

Taxed by the entire situation, the king called for a proper election, which was held in December. Under the watchful eye of King John, the canons unanimously elected John de Gray, and a final delegation was sent to Rome in 1206 for Innocent III's approval.

Still in Rome was Prior Reginald, who denounced the election and made his case before the Roman Pontiff. Innocent III was baffled by all of the confusion, ordered the delegations to go back to England, and sent “for fresh representatives with power of attorney” (Warren 162). When the new representatives arrived, he denounced both elections and ordered the canons to vote immediately. The chapter was split between the two candidates, which pleased Pope Innocent greatly, for now he was able to make his move: he raised Stephen Langton to the See of Canterbury, and the canons unanimously agreed. Innocent sent word to King John on December 20, 1206, concerning the election of Stephen. As was to be expected, King John was completely opposed to the candidate. We know this from a letter that Innocent III wrote to King John on May 26, 1207, concerning his attitude toward the election of Stephen Langton (Semple 86, 87).

Innocent III waited to see if John would accept the duly elected archbishop, whom Pope Innocent had consecrated himself. On August 27, 1207, Innocent, tired of waiting, decided to write a lengthy letter to the bishops of London, Worcester, and Ely. In this letter he asked the bishops to plead with King John to receive the cardinal archbishop of Canterbury; if he still refused, they were ordered to impose an interdict on the whole island of England and to enforce it. In another letter, to the bishop of Rochester, he ordered him to excommunicate two specific men and any others who would bring harm to the canons of the cathedral church of Canterbury, Christ Church.

The threats of Innocent seemed to have worked to some extent, for in 1208 King John modified his position and “on January 21 he informed the bishops of London, Ely, and Worcester that he was ready to obey the pope if his ‘rights, dignity, and liberties’ were preserved” (Painter 173). Therefore, on February 19 the king agreed to let the archbishop's brother, Master Simon Langton, meet him in a conference, which took place at Westminster on March 12. The meeting quickly collapsed because John was upset that Simon Langton did not have the power to make any concessions.

The king quickly issued his demands to the pope and sent them via the abbot of Beaulieu.

“John would receive Stephen as archbishop, restore the money and property he had taken from the church, and allow the monks of Christ Church to return to their house. Stephen would give security for the loyalty of himself and his followers. The king would surrender the ‘regalia’ of the archepiscopal see to the pope by the hands of the abbot of Beaulieu and the pope could have them given to Stephen … He insisted that Innocent should admit his right to participate in the election of English prelates by giving or withholding his assent” (Painter 173).

Innocent then immediately wrote on June 14, 1208, to his faithful bishops in England, told them of the king's demands, and bade them to be steadfast in the interdict and to accept the persecution humbly.

It was obvious that John was unwilling to concede the primacy of the church in spiritual matters, for on March 17, 1208, he placed the sees of Bath and Exeter into the custody of mercenaries and issued a letter to the clerics of Lincoln and Ely warning that if the sacraments were not performed, their property would be confiscated. There are no letters indicating that these actions were not also taken in other dioceses, and it is more than likely that the same held true for all the English sees. Moreover, when the interdict was finally published on March 24, the bishops of London, Ely, and Worcester were forced into exile in France.

Innocent finally gave in to John's demands, and on July 14 the king allowed Simon Langton, two monks of Canterbury, and the bishops of London, Ely, and Worcester to meet with him. They apparently did meet, and on September 9 John issued a letter permitting Stephen to come to England. Stephen, not confident that he would go unharmed, refused to go. John was not bothered at all, because he was receiving great revenue from the confiscated lands of the clergy.

In January 1209, Innocent demanded that John comply with the terms that John himself had proposed. Innocent demanded that if peace was not made within three months, the bishops of London, Ely, and Worcester were to publish a decree of excommunication against the king, even though they were still in exile in France. The pope's statements did not seem to bother the king, but his advisors worried about what their status would be for counseling an excommunicate. Most of the king's advisors were prelates and clerics of rich dioceses and parishes, and if they continued in the service of the king they would be excommunicated and lose their rich incomes. A conference was held on March 23, 1209, with Simon Langton and the justiciar with two bishops; this meeting accomplished absolutely nothing, and the king's advisors pushed for another, which finally began in July 1209. It was decided in that meeting that Stephen was to be Archbishop of Canterbury; all money and lands that had been taken were to be restored to the Church, as well as any money gained from the confiscation; John was to receive Archbishop Stephen personally, as well as the other exiled bishops; and Stephen was to receive the regalia when he arrived in England and at that time swear fidelity to John (Painter 177-178).

The pope's representatives withheld publication of the excommunication of John until five weeks after August 10. When the king received the demands he rejected them and wrote a letter to the bishops of Ely, London, and Worcester asking for a meeting, which they refused to attend, fearing that they would be seized to prevent the publication of the decree of excommunication. After much persuading, in the hope of bringing an end to the conflict, the bishops agreed to withhold publication of the decree until October 7. Archbishop Stephen Langton sent his own messenger across the English Channel to ask to meet with the king. The meeting was granted, and Stephen crossed to Dover in early October. John did not attend the meeting but sent an emissary in his stead, and when no agreement was reached Stephen returned to the continent. Another attempt at an agreement began in the spring of 1210, when King John sent an abbot and a prior to invite Stephen to meet at Dover; Stephen refused because he learned that the king was unwilling to budge from his position.

During the absence and silence of the prelates of England, John filled three vacant sees with some of his most trusted advisors. When Stephen Langton learned of the appointment of these three bishops, he declared their appointments void, since they had incurred the censure of the church by standing by John after his excommunication. Innocent did not involve himself with these three appointments, but he did address a letter to Archbishop Langton on June 21, 1209, concerning the election of Hugh de Welles. Innocent ordered Stephen to conduct a full investigation into whether Hugh had been properly elected and whether he was suitable for episcopal office. Stephen offered to consecrate Hugh as a bishop if he would desert King John. At any rate, Hugh crossed to the continent and was consecrated by Cardinal Archbishop Stephen Langton on December 20. In the spring of 1211, the pope decided to send another envoy to King John. The envoy arrived on August 30, but once again the agreement reached was rejected by King John in the summer of 1211.

John was soon in trouble, though. The political tide in Europe was changing, and it did not look favorable for John. Pope Innocent had crowned Otto of Brunswick Emperor of the Romans on October 4, 1209. At that very time, Pope Innocent had sent a delegation to Kent to negotiate with King John to have him agree to aid Otto. Roughly a year later, however, Otto began to persecute the Church in Germany, and Innocent excommunicated him. Innocent then backed the House of Hohenstaufen, which aligned him with Philip Augustus. This gave the pope a secular power at his disposal to act against England, for it was felt that it would not be hard to persuade Philip to attack England, especially if it enabled Philip to launch a crusade against England (Painter 187-188).

John was not worried that the French alone would overthrow him, but rather about the mass of enemies beginning to pull together to topple him. The Welsh had been in and out of revolt in 1211 and 1212, and this gave John reason to believe that many if not all of them were allied with the French. Scotland, too, had only recently agreed to a begrudging peace with the English, and John was positive that if the appropriate situation arose, the Scots would immediately desert him and join England's enemies. What troubled him most, however, was that many of his barons and lords were of doubtful allegiance. At the beginning they had supported him wholeheartedly, but since the interdict many had begun to waver, which became very apparent when one of his barons, suspected of treason, easily crossed to France and was welcomed with open arms. John's advisors were now strongly urging him to make peace with the papacy. John reluctantly agreed, because the cheapest and most important adversary to buy off was the papacy.

In November 1212, John sent a negotiating team consisting of the abbot of Beaulieu; Alan Martel, a Templar; Master Richard de Tiring; Thomas de Eardington; Philip of Worcester; and one other. Unfortunately, three of these men were captured while traveling to Rome, so the delegation that arrived lacked a quorum. After much persuasion, Pope Innocent accepted their assurance that King John would accept the terms to be offered to him through the abbot and the Templar. On February 27, 1213, Innocent sent John the agreed-upon terms, which consisted of the following: all who had been exiled during the controversy were to be received back; the exiles were to name who would grant them safe passage, and if such safe passage was revoked, King John would forever forfeit his rights of patronage over the Church in England; if the king so desired, he could order the exiles to swear that they would not harm his person or his crown; all confiscated money was to be returned, as well as a payment of 8,000 pounds to each of the exiles; and John was to swear never to presume to outlaw clerks.

Before John was able to send his response to these demands, Stephen Langton, together with the bishops of London and Ely (the bishop of Worcester had died the previous summer), visited the court of Innocent III. They brought with them horrific stories concerning John's treatment of the clergy in England.

Pope Innocent was moved by the pleas of the bishops, and he sent them forth to France with letters pronouncing the deposition of King John and giving King Philip of France the right to invade England and seize the crown for himself. The bishops arrived in France in January 1213 and met with King Philip. It was not very hard for them to convince Philip that he should invade England, for he had already been in league with some of the English barons and was merely biding his time until the papacy would allow him to act. King Philip allowed the bishops to read the letter from the pope before an assembly of barons at Soissons on April 8. Shortly after the assembly, however, Pandulf, King John's messenger, arrived in France proclaiming that John had accepted all of the pope's terms.

John did not think that agreeing to the pope's demands in itself helped him much, for the terms cost him a substantial amount of money and he had never been personally bothered by the interdict or the excommunication. The one thing it did do was deprive Philip of any papal backing for an invasion of England. Nevertheless, John was still not out of trouble, for if Philip invaded England anyway, the pope would not necessarily stop him. He needed a way to make the papacy condemn Philip for invading England.

Out of his desire to crush Philip and stop an invasion, John came up with a brilliant political maneuver. He believed that if he surrendered his entire kingdom to the pope and secured the granting of it back to him as a fief, he would gain the favor and support of Innocent against Philip and his other foes. John, like every other monarch, was aware that Innocent was looking to bring increased prestige and dignity to himself and to the papacy in secular affairs. Because of this, he was positive that Innocent would be “deeply gratified” if the powerful English monarchy became an eternal vassal of the papacy. Moreover, he had in mind that Innocent would now be defending the very prerogatives that John had been defending. For “if in the future the bishops of England infringed on the king’s rights, they would be indirectly injuring the pope” (Painter 193).

With all that had happened, as well as his own rationale, John issued on May 15, 1213, the document that we have been discussing, the Concession of England to the Pope.

Works Cited

Cheney, Christopher R. Papste Und Papsttum: Pope Innocent III and England. Stuttgart: Hiersemann, 1976.

Cheney, Christopher R. and Mary G. The Letters of Pope Innocent III concerning England and Wales. London: Oxford UP, 1967.

Clayton, Joseph. Pope Innocent III and His Times. Milwaukee: Bruce, 1941.

Henderson, Ernest F. Select Historical Documents of the Middle Ages. London: George Bell, 1910.

Painter, Sidney. The Reign of King John. Baltimore: Johns Hopkins, 1964.

Semple, W.H. and Cheney, Christopher R. Selected Letters of Pope Innocent III Concerning England (1198-1216). London: Nelson, 1953.

Warren, W.L. King John. New York: Norton, 1961.

OSI: The Internet That Wasn’t
How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
By Andrew L. Russell


If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.


1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
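To make the mechanism concrete, here is a minimal Python sketch of the idea (not any historical implementation): a message is cut into numbered packets that may arrive in any order, and the receiver reassembles them. The function names and the eight-byte packet size are invented for this example.

# Toy illustration of packet switching as described above: the message is
# broken into fixed-size blocks (packets), each carrying a sequence number,
# so the receiver can reassemble them even if they arrive out of order.

def to_packets(message: bytes, size: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort packets by sequence number and join their payloads."""
    return b"".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    msg = b"packet switching routes each block independently"
    pkts = to_packets(msg)
    pkts.reverse()  # simulate out-of-order delivery over different channels
    assert reassemble(pkts) == msg
    print("reassembled:", reassemble(pkts).decode())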

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.


1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
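As a loose modern analogy (UDP did not exist in this form at the time, and this is not Cyclades code), a connectionless exchange can be sketched in Python with datagram sockets: there is no handshake, and each datagram travels on its own. The loopback address and port number below are arbitrary choices for the demo.

# Connectionless datagrams in the spirit Pouzin describes, using UDP as a
# modern stand-in. There is no connect() call or handshake; the sender
# simply fires a datagram at an address.

import socket

ADDR = ("127.0.0.1", 9999)  # hypothetical local endpoint for the demo

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)
receiver.settimeout(2.0)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"things just go separately, one by one", ADDR)

data, source = receiver.recvfrom(1024)
print("received:", data.decode(), "from", source)

sender.close()
receiver.close()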

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).


1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.


1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
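For orientation, the seven layers of the reference model can be jotted down as a small lookup table; the commented examples at layers 1, 4, and 7 are the ones given above, and the remaining layer names are the standard labels, included here only for completeness.

# The OSI reference model's seven layers, numbered as in the scheme above.
OSI_LAYERS = {
    1: "Physical",       # e.g., copper cables
    2: "Data Link",
    3: "Network",
    4: "Transport",      # protocols for moving data
    5: "Session",
    6: "Presentation",
    7: "Application",    # e.g., e-mail, file transfer
}

for number in range(7, 0, -1):   # print from the top layer down
    print(f"Layer {number}: {OSI_LAYERS[number]}")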

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.
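As a small aid to keeping the stages straight, the four-step progression can be written out as an ordered list; this is only a restatement of the process named above, with the stage labels taken from the article.

# The four-stage ISO approval path described above, listed in order.
ISO_STAGES = [
    "working draft",
    "draft proposed international standard",
    "draft international standard",
    "international standard",
]

for step, stage in enumerate(ISO_STAGES, start=1):
    print(f"Stage {step}: {stage}")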

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.


1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate


about:blankAdd titleOSI: The Internet That Wasn’t

IEEE Spectrum logo


OSI: The Internet That Wasn’t
How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
By Andrew L. Russell


A Fairer, Faster Internet Protocol


If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.
Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.
OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate

IEEE websites place cookies on your device to give you the best user experience. By using our websites, you agree to the placement of these cookies. To learn more, read our Privacy Policy.
Accept & Close
Join IEEE
|
IEEE.org
|
IEEE Xplore Digital Library
|
IEEE Standards
|
IEEE Spectrum
|
More Sites
Create Account
|
Sign In
Engineering Topics
Special Reports
Blogs
Multimedia
The Magazine
Professional Resources
Search

IEEE Spectrum logo
Topics
Special Reports
Blogs
Multimedia
The Magazine
Professional Resources
Newsletters
Sign In
Create Account
Feature
History
Cyberspace
Cyberspace
30 Jul 2013 | 01:17 GMT
OSI: The Internet That Wasn’t
How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
By Andrew L. Russell
Posted 30 Jul 2013 | 01:17 GMT
Photo: INRIA
Only Connect: Researcher Hubert Zimmermann [left] explains computer networking to French officials at a meeting in 1974. Zimmermann would later play a key role in the development of the Open Systems Interconnection standards.
If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multilayered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.
For a time, their vision seemed like the right one. Thousands of engineers and policymakers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.
1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.
And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”
What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.
1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.
The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.
The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
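To make that mechanism concrete, here is a minimal sketch of the idea Baran and Davies described: a message is broken into numbered packets that can travel, and arrive, in any order, and the receiver puts them back together. It is an illustration only, not a reconstruction of any historical design; the function names and the eight-byte payload size are arbitrary choices for the example.

```python
import random

PAYLOAD_SIZE = 8  # bytes per packet; an arbitrary size chosen for illustration

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence number, payload) packets."""
    return [
        (seq, message[i:i + PAYLOAD_SIZE])
        for seq, i in enumerate(range(0, len(message), PAYLOAD_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort packets by sequence number and rejoin their payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take different routes across the network."
packets = packetize(message)
random.shuffle(packets)  # simulate packets arriving out of order
assert reassemble(packets) == message
```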
Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.
1969: ARPANET, the first packet-switching network, is created in the United States.
1970: Estimated U.S. market revenues for computer communications: US $46 million.
1971: Cyclades packet-switching project launches in France.
With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.
The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
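The distinction is easier to see in modern terms. The sketch below uses today’s UDP and TCP sockets purely as an analogy for the two styles, not as a depiction of Pouzin’s Cyclades protocols: a UDP datagram is sent with no prior relationship between sender and receiver, while TCP must first establish a connection, much as a virtual circuit would. The loopback address and port number are placeholders.

```python
import socket

# Connectionless: a self-contained datagram is simply handed to the network.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"standalone datagram", ("127.0.0.1", 9999))  # placeholder address/port
udp.close()

# Connection-oriented: nothing can be sent until a relationship is established.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9999))  # fails unless a listener exists at this address
    tcp.sendall(b"data over an established connection")
except ConnectionRefusedError:
    print("no listener on port 9999: the connection-oriented send cannot even begin")
finally:
    tcp.close()
```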
INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).
1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex McKenzie, Hubert Zimmermann, and Donald Davies.
The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.
1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.
For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.
Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.
OSI was devised by committee, but that fact alone wasn’t enough to doom the project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.
In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.
The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their interworking with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.
As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.
When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
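For readers who want the whole stack at a glance, the following sketch lays out the seven layers as a simple lookup table. The layer names follow the standard reference model; the example entries are illustrative choices of mine, not text drawn from the OSI documents.

```python
# The seven layers of the OSI reference model, top (application) to bottom (physical).
OSI_LAYERS = {
    7: ("Application",  "e-mail, file transfer"),
    6: ("Presentation", "data representation and encoding"),
    5: ("Session",      "dialogue management"),
    4: ("Transport",    "end-to-end delivery of data"),
    3: ("Network",      "routing between networks"),
    2: ("Data link",    "framing and local delivery"),
    1: ("Physical",     "copper cable, radio, fiber"),
}

for number in sorted(OSI_LAYERS, reverse=True):
    name, example = OSI_LAYERS[number]
    print(f"Layer {number}: {name:<12} e.g. {example}")
```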
1974: IBM introduces Systems Network Architecture, its proprietary framework for packet-switched networking.
1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.
1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”
Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.
The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.
OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.
1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].
1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”
Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”
Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.
This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.
Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.
While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.
January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”
May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.
1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.
1988: U.S. market revenues for computer communications: $4.9 billion.
That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.
That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.
OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”
1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.
1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.
1991: Tim Berners-Lee announces public release of the WorldWideWeb application.
1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.
Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for coordinating the project—ended up killing OSI.
Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”
Photo: John Day
What’s In A Name: At a July 1986 meeting in Newport, R.I., representatives from France, Germany, the United Kingdom, and the United States considered how the OSI reference model would handle the crucial functions of naming and addressing on the network.
And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.
Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”
Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.
1996: Internet community defines IP version 6.
2013: IPv6 carries approximately 1 percent of global Internet traffic.
Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”
By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.
In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.
Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?
For more about the author, see the Back Story, “How Quickly We Forget.”
How Quickly We Forget
Photo: Andrew L. Russell
History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.
Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”
After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.
“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).
“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.
This article appears in the August 2013 print issue as “The Internet That Wasn’t.”
To Probe Further
This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.
Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.
Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.
James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
OSI: The Internet That Wasn’t
How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
By Andrew L. Russell


Photo: INRIA
Only Connect: Researcher Hubert Zimmermann [left] explains computer networking to French officials at a meeting in 1974. Zimmermann would later play a key role in the development of the Open Systems Interconnection standards.
If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.
For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.
1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.
And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”
What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.


1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.
The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.
The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
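To make the mechanics concrete, here is a minimal Python sketch of the split-and-reassemble idea Baran and Davies described: a message is chopped into small numbered packets, the packets travel independently (simulated here by shuffling them), and the receiver restores the original order. The packet size and function names are illustrative assumptions, not any historical implementation.

# Illustrative sketch only: split a message into numbered packets, deliver them
# independently (here simulated by shuffling), and reassemble at the destination.
import random

PACKET_SIZE = 8  # bytes of payload per packet; tiny, purely for demonstration

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Break a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Restore the original message by sorting packets on their sequence numbers."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take different routes and arrive in any order."
packets = packetize(message)
random.shuffle(packets)              # stand-in for independent routing
assert reassemble(packets) == message

The point of the exercise is that no single dedicated channel ever has to carry the whole message, which is exactly what set packet switching apart from circuit switching.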
Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.
1969: ARPANET, the first packet-switching network, is created in the United States.
1970: Estimated U.S. market revenues for computer communications: US $46 million.
1971: Cyclades packet-switching project launches in France.
With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.
The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
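As a rough modern analogy, not the 1970s protocols themselves, the same split survives in today's socket API: a UDP datagram is simply addressed and sent, with no prior relationship between the endpoints, while a TCP connection (the closest everyday relative of a virtual circuit) must be set up before any data flows. The sketch below runs both styles over the loopback interface.

# Rough analogy using today's socket API on the loopback interface:
# a connectionless datagram (UDP) versus a connection-oriented stream (TCP).
import socket

# Datagram style: no relationship is established; the packet is addressed and sent.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                  # let the OS pick a free port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"no setup, no teardown", receiver.getsockname())
print(receiver.recvfrom(1024)[0])

# Virtual-circuit style: an explicit connection is set up before any data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())             # connection establishment
conn, _ = server.accept()
client.sendall(b"data flows over the established connection")
print(conn.recv(1024))
for s in (sender, receiver, client, conn, server):
    s.close()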
INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).
1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.


The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.
1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.
For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.


Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.
OSI was devised by committee, but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.
In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.
The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.


OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.
As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.


When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
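The payoff of layering is easiest to see in code. The sketch below, a toy illustration rather than an OSI protocol, walks a payload down through a stack of layers, each wrapping it with its own placeholder header, and back up again. The seven layer names are the familiar OSI ones; TCP/IP's looser model collapses roughly the same work into four layers (link, internet, transport, and application).

# Toy illustration of layered encapsulation: each layer wraps the data from the
# layer above with its own header on the way down and strips it on the way up.
# The seven names are the OSI reference model's layers; the headers are placeholders.
OSI_LAYERS = [
    "application",   # layer 7: e-mail, file transfer, ...
    "presentation",  # layer 6
    "session",       # layer 5
    "transport",     # layer 4: moving data end to end
    "network",       # layer 3
    "data link",     # layer 2
    "physical",      # layer 1: copper cables and other media
]

def send_down(payload: str) -> str:
    """Wrap the payload with one header per layer, from application down to physical."""
    for layer in OSI_LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def receive_up(frame: str) -> str:
    """Strip the headers in the opposite order, from physical back up to application."""
    for layer in reversed(OSI_LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"malformed frame at the {layer} layer"
        frame = frame[len(header):]
    return frame

on_the_wire = send_down("hello, peer")
print(on_the_wire)            # the outermost header belongs to the lowest layer
assert receive_up(on_the_wire) == "hello, peer"

Because each layer touches only its own header, a committee (or a vendor) could work on one layer without waiting for agreement on the others, which is precisely the modularity described below.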


1974: IBM launches a packet-switching network called the Systems Network Architecture.
1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.
1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”
Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.


The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.
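For readers keeping score, that progression can be sketched as a simple state machine; the labels below paraphrase the four stages named above rather than official ISO document codes.

# Sketch of the four-stage path to an international standard, as described above.
STAGES = [
    "working draft",
    "draft proposed international standard",
    "draft international standard",
    "international standard",
]

def advance(current: str) -> str:
    """Move a proposal to the next stage (it stays put once it is a standard)."""
    index = STAGES.index(current)
    return STAGES[min(index + 1, len(STAGES) - 1)]

stage = STAGES[0]
while stage != STAGES[-1]:
    stage = advance(stage)    # in practice, each step meant more ballots and meetings
    print("advanced to:", stage)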
OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.
1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].
1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”
Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”
Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.
This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.
Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.
While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.
January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”


May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.


1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.


1988: U.S. market revenues for computer communications: $4.9 billion.
That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.
That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.
OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”
1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.
1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.
1991: Tim Berners-Lee announces public release of the WorldWideWeb application.
1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.
Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.
Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

What’s In A Name: At a July 1986 meeting in Newport, R.I., representatives from France, Germany, the United Kingdom, and the United States considered how the OSI reference model would handle the crucial functions of naming and addressing on the network.
And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.
Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”
Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
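The "routing and addressing limitations" at issue reduce to stark arithmetic: IP version 4 addresses are 32 bits wide, which caps the address space at roughly four billion, while the 128-bit addresses later defined for IP version 6 are effectively inexhaustible. A short illustrative calculation using Python's standard ipaddress module:

# Back-of-the-envelope arithmetic behind the addressing worry of the early 1990s.
import ipaddress

print(f"IPv4: {2 ** 32:,} possible addresses")       # about 4.3 billion
print(f"IPv6: {2 ** 128:.3e} possible addresses")    # about 3.4 x 10^38

# The standard library makes the difference in address width easy to see.
print(ipaddress.ip_address("192.0.2.1").max_prefixlen)      # 32 bits
print(ipaddress.ip_address("2001:db8::1").max_prefixlen)    # 128 bits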
1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.
1996: Internet community defines IP version 6.
2013: IPv6 carries approximately 1 percent of global Internet traffic.
Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”


By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.
In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.
Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?


For more about the author, see the Back Story, “How Quickly We Forget.”
How Quickly We Forget

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.


Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”
After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.


“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).


“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.
This article appears in the August 2013 print issue as “The Internet That Wasn’t.”
To Probe Further
This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.
Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.
Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.
James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
08OLOSIHistoryOpener
Photo: INRIA
Only Connect: Researcher Hubert Zimmermann [left] explains computer networking to French officials at a meeting in 1974. Zimmermann would later play a key role in the development of the Open Systems Interconnection standards.
If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.
For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.
Paul Baran
1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.
And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”
What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never e

xisted.
Donald W. Davies
1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.
The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.
The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.
map-usa
1969: ARPANET, the first packet-switching network, is created in the United States.
1970: Estimated U.S. market revenues for computer communications: US $46 million.
1971: Cyclades packet-switching project launches in France.
With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.
The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).
group-shot
1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.
The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.
1974 Cerf and Kahn
1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.
For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.
Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.
OSI was devised by committee, but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.
In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.
The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.
OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.
As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.
When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
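For readers who want something concrete, here is a minimal sketch, not drawn from the article itself, of the seven-layer idea: a list of the OSI layers and a toy routine that wraps a payload in one symbolic header per layer, the way a layered stack hands data down from application to wire. The layer names are standard; the function and the example payload are invented purely for illustration.

```python
# Toy sketch of a layered stack -- illustrative only, not OSI's actual protocols.
OSI_LAYERS = [
    (1, "Physical",     "bits on copper, fiber, or radio"),
    (2, "Data Link",    "frames between directly connected machines"),
    (3, "Network",      "addressing and routing across networks"),
    (4, "Transport",    "end-to-end delivery, ordering, retransmission"),
    (5, "Session",      "dialogue control between applications"),
    (6, "Presentation", "data representation, such as character encoding"),
    (7, "Application",  "services such as e-mail and file transfer"),
]

def encapsulate(payload: str) -> str:
    """Wrap a payload in one symbolic header per layer, from layer 7 down to layer 1."""
    message = payload
    for number, name, _ in reversed(OSI_LAYERS):
        message = f"[L{number} {name}] " + message
    return message

if __name__ == "__main__":
    for number, name, role in OSI_LAYERS:
        print(f"Layer {number}: {name:<12} -- {role}")
    print(encapsulate("hello, world"))
```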
1974: IBM launches a packet-switching network called the Systems Network Architecture.
1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.
1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”
Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.
The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.
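Because the four-step process comes up again below, a tiny sketch may help fix it in mind. The stage names come from the paragraph above; the class and its methods are hypothetical, invented only to show the linear progression (in reality each step required committee ballots and could send a document back for revision).

```python
# The four ISO stages named above, modeled as a simple linear progression.
# Hypothetical illustration only; real ISO procedure allows revision and re-balloting.
ISO_STAGES = (
    "working draft",
    "draft proposed international standard",
    "draft international standard",
    "international standard",
)

class Proposal:
    def __init__(self, title: str) -> None:
        self.title = title
        self.stage_index = 0  # every proposal starts life as a working draft

    @property
    def stage(self) -> str:
        return ISO_STAGES[self.stage_index]

    def ballot_passes(self) -> str:
        """Advance to the next stage; a failed ballot would leave it where it is."""
        if self.stage_index < len(ISO_STAGES) - 1:
            self.stage_index += 1
        return self.stage

p = Proposal("OSI transport protocol")
print(p.stage)
while p.stage != "international standard":
    print("ballot passed ->", p.ballot_passes())
```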
OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.
1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].
1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”
Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”
Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.
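The divide described above still survives in today's socket interface, which offers both a connectionless datagram service (UDP) and a connection-oriented byte stream (TCP). The loopback sketch below is a modern analogy only; it uses UDP and TCP rather than the OSI-era or X.25 protocols themselves, and assumes nothing from the article beyond the conceptual split.

```python
import socket

# Connectionless: a datagram is sent with no prior relationship between
# sender and receiver -- the spirit of Pouzin's datagram idea (UDP analogy).
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))                  # let the OS pick a free port
udp.sendto(b"hello, datagram", udp.getsockname())
data, sender = udp.recvfrom(1024)
print("datagram:", data, "from", sender)
udp.close()

# Connection-oriented: a handshake establishes a circuit-like relationship
# before any data flows, with ordering and retransmission (TCP analogy).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
server_side, _ = listener.accept()
client.sendall(b"hello, stream")
print("stream:", server_side.recv(1024))
for s in (client, server_side, listener):
    s.close()
```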
This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.
Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.
While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.
January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”
May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.
1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.
1988: U.S. market revenues for computer communications: $4.9 billion.
That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.
That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.
OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”
1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.
1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.
1991: Tim Berners-Lee announces public release of the WorldWideWeb application.
1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.
Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.
Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”
Photo: John Day
What’s In A Name: At a July 1986 meeting in Newport, R.I., representatives from France, Germany, the United Kingdom, and the United States considered how the OSI reference model would handle the crucial functions of naming and addressing on the network.
And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.
Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”
Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
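The addressing pressure behind that 1992 fight is easy to quantify. IPv4 uses 32-bit addresses, roughly 4.3 billion in total, while the eventual fix, IPv6, uses 128 bits. The snippet below is just back-of-the-envelope arithmetic with Python's standard ipaddress module; it is not tied to anything in the article beyond those two address sizes.

```python
import ipaddress

# Rough sense of the address crunch that drove the search for an IPv4 successor.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128

print(f"IPv4 addresses: {ipv4_total:,}")                   # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.3e}")                 # about 3.4e+38
print(f"growth factor:  {ipv6_total // ipv4_total:.3e}")   # about 7.9e+28
```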
1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.
1996: Internet community defines IP version 6.
2013: IPv6 carries approximately 1 percent of global Internet traffic.
Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”
By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.
In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.
Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?
For more about the author, see the Back Story, “How Quickly We Forget.”
How Quickly We Forget
Photo: Andrew L. Russell
History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.
Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”
After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.
“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).
“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.
This article appears in the August 2013 print issue as “The Internet That Wasn’t.”
To Probe Further
This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.
Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.
Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.
James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
08OLOSIHistoryOpener
Photo: INRIA
Only Connect: Researcher Hubert Zimmermann [left] explains computer networking to French officials at a meeting in 1974. Zimmermann would later play a key role in the development of the Open Systems Interconnection standards.
If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.
For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.
Paul Baran
1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.
And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”
What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.
Donald W. Davies
1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.
The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.
The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.
map-usa
1969: ARPANET, the first packet-switching network, is created in the United States.
1970: Estimated U.S. market revenues for computer communications: US $46 million.
1971: Cyclades packet-switching project launches in France.
With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.
The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).
1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.
The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.
1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.
For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.
Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.
OSI was devised by committee, but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.
In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.
The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.
As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.
When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
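For readers who want the full stack at a glance, here is a small, purely illustrative Python table of the seven layers. The layer names follow the OSI reference model; the examples and the rough mapping to the looser TCP/IP layering shown in the figure are informal glosses of my own, not normative definitions.

# The seven OSI layers with illustrative examples and a rough mapping to
# the looser TCP/IP layering. Informal glosses, not normative definitions.
OSI_LAYERS = {
    1: ("Physical",     "copper cable, optical fiber",   "link"),
    2: ("Data link",    "framing on a single link",      "link"),
    3: ("Network",      "routing and addressing",        "internet (IP)"),
    4: ("Transport",    "end-to-end data delivery",      "transport (TCP/UDP)"),
    5: ("Session",      "dialogue control",              "application"),
    6: ("Presentation", "data representation, encoding", "application"),
    7: ("Application",  "e-mail, file transfer",         "application"),
}

for number, (name, example, tcp_ip) in OSI_LAYERS.items():
    print(f"Layer {number}: {name:<13} e.g. {example:<30} ~ TCP/IP: {tcp_ip}")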
1974: IBM launches a packet-switching network called the Systems Network Architecture.
1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.
1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”
Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.
The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.
OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.
1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].
1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”
Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”
Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.
This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.
Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.
While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.
January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”
May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.
1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.
1988: U.S. market revenues for computer communications: $4.9 billion.
That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.
That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.
OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”
1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.
1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.
1991: Tim Berners-Lee announces public release of the WorldWideWeb application.
1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.
Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.
Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”
Photo: John Day
What’s In A Name: At a July 1986 meeting in Newport, R.I., representatives from France, Germany, the United Kingdom, and the United States considered how the OSI reference model would handle the crucial functions of naming and addressing on the network.
And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.
Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”
Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
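The addressing limitation at the heart of that revolt comes down to simple arithmetic, sketched below for context (an aside, not part of the 1992 debate itself): IPv4’s 32-bit addresses allow only about 4.3 billion hosts, whereas the ISO ConnectionLess Network Protocol, with its longer addresses, and the later IPv6 offer vastly larger address spaces.

# Rough arithmetic behind the IPv4 addressing crunch. IPv6 (and ISO's CLNP,
# whose addresses can run to 20 bytes) allow vastly larger address spaces.
ipv4_addresses = 2 ** 32   # about 4.3 billion possible addresses
ipv6_addresses = 2 ** 128  # about 3.4 x 10^38 possible addresses
print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")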
1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.
1996: Internet community defines IP version 6.
2013: IPv6 carries approximately 1 percent of global Internet traffic.
Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”
By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.
In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.
Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?
For more about the author, see the Back Story, “How Quickly We Forget.”
How Quickly We Forget
Photo: Andrew L. Russell
History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.
Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”
After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.
“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).
“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.
This article appears in the August 2013 print issue as “The Internet That Wasn’t.”
To Probe Further
This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.
Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.
Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Network Working Group and that are now archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.
James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
TCP/IP eclypsed Open SYSetms Protacael Globael Nettwerkking How did this accoeur

  • |
  • OSI: The Internet That Wasn’t

How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking

By Andrew L. RussellPosted 30 Jul 2013 | 01:17 GMTEditor’s PicksWas the Internet Inevitable?Was the Internet Inevitable?Pigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataPigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataA Fairer, Faster Internet ProtocolA Fairer, Faster Internet Protocol

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46  million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee,but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the successof the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

08backstoryAndrewRussell
Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
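
To make the mechanics concrete, here is a minimal Python sketch of that idea: split a message into numbered packets, let them arrive in any order, and reassemble them by sequence number at the far end. It is a toy illustration only, not any historical protocol; the packet size and message are invented.

```python
import random

PACKET_SIZE = 8  # bytes per packet; arbitrary, chosen only for illustration

def packetize(message: bytes):
    """Split a message into numbered packets: (sequence number, payload)."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def deliver_out_of_order(packets):
    """Simulate packets taking different routes and arriving in any order."""
    shuffled = packets[:]
    random.shuffle(shuffled)
    return shuffled

def reassemble(packets):
    """The receiver restores the original order using the sequence numbers."""
    return b"".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    original = b"Packets can take separate routes and still arrive intact."
    received = deliver_out_of_order(packetize(original))
    assert reassemble(received) == original
```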

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.


1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
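
A rough modern analogue, assuming today’s sockets API rather than anything from the 1970s: a connectionless, datagram-style sender (what UDP offers now) addresses every packet independently, while a connection-oriented sender (TCP-style, closer in spirit to a virtual circuit) establishes a path before any data moves. The address and port below are hypothetical placeholders; no real listener is assumed.

```python
import socket

DEST = ("127.0.0.1", 9999)  # hypothetical receiver; no real service assumed

def send_datagrams(chunks):
    """Connectionless, datagram style: each packet is addressed and sent
    independently, with no prior relationship between sender and receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for chunk in chunks:
        sock.sendto(chunk, DEST)  # every datagram carries the full address
    sock.close()

def send_over_connection(chunks):
    """Connection-oriented, virtual-circuit style: set up a path first,
    then push data down the established connection."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(DEST)       # establish the connection before any data moves
    for chunk in chunks:
        sock.sendall(chunk)  # data follows the path set up by connect()
    sock.close()
```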

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).


1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex McKenzie, Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their interworking with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
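
For readers who want the whole stack spelled out, here is a small Python sketch listing the seven standard OSI layer names, with the loose TCP/IP groupings conventionally used for comparison. The one-line role descriptions are simplifications, and the mapping is an approximation for side-by-side comparison, not something either standard defines.

```python
# The seven OSI layers, bottom to top, with a one-line role and the
# rough TCP/IP grouping each is conventionally mapped to.
OSI_LAYERS = {
    1: ("Physical",     "copper, fiber, radio",          "link"),
    2: ("Data Link",    "framing on a single hop",       "link"),
    3: ("Network",      "addressing and routing",        "internet"),
    4: ("Transport",    "end-to-end delivery of data",   "transport"),
    5: ("Session",      "dialog control between hosts",  "application"),
    6: ("Presentation", "data formats and encoding",     "application"),
    7: ("Application",  "e-mail, file transfer, etc.",   "application"),
}

def describe(layer: int) -> str:
    name, role, tcpip = OSI_LAYERS[layer]
    return f"Layer {layer} ({name}): {role}; TCP/IP counterpart: {tcpip}"

if __name__ == "__main__":
    for n in sorted(OSI_LAYERS):
        print(describe(n))
```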

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.
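
A toy sketch of that progression, with the consensus requirement reduced to a single boolean vote, might look like the following; the advance() helper and the vote sequence are invented purely to illustrate how a document inches through the four stages named above.

```python
# The four stages each OSI proposal had to pass through, in order.
STAGES = [
    "working draft",
    "draft proposed international standard",
    "draft international standard",
    "international standard",
]

def advance(current: str, committee_agrees: bool) -> str:
    """Move one stage forward if consensus was reached, else stay put."""
    index = STAGES.index(current)
    if committee_agrees and index < len(STAGES) - 1:
        return STAGES[index + 1]
    return current

if __name__ == "__main__":
    stage = STAGES[0]
    for vote in (True, False, True, True):  # one failed ballot stalls progress
        stage = advance(stage, vote)
    print(stage)  # reaches "international standard" only after three successful votes
```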

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
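
The addressing limitation at the center of that dispute is easy to quantify: IP version 4 uses fixed 32-bit addresses, while IP version 6, defined a few years later (see the timeline entries below), widened them to 128 bits. A back-of-the-envelope comparison in Python:

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
IPV4_ADDRESSES = 2 ** 32   # about 4.3 billion
IPV6_ADDRESSES = 2 ** 128  # about 3.4 * 10**38

print(f"IPv4: {IPV4_ADDRESSES:,} addresses")
print(f"IPv6: {IPV6_ADDRESSES:.3e} addresses")
print(f"IPv6 space is {IPV6_ADDRESSES // IPV4_ADDRESSES:.0e} times larger")
```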

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
How the Father of FinFETs Helped Save Moore’s Law


Learn More

TCP/IP historyOSIOpen Systems InterconnectionInternet historycomputing networking standardscomputer networking historyREAD NEXTHow IBM Watson Overpromised and Underdelivered on AI Health CareHow IBM Watson Overpromised and Underdelivered on AI Health CareThe Real Story of StuxnetThe Real Story of StuxnetDeveloper of Handheld Cable Tester for U.S. Army Dies at 80Developer of Handheld Cable Tester for U.S. Army Dies at 80A Brief History of the Lie DetectorA Brief History of the Lie DetectorThe Uncertain Future of Ham RadioThe Uncertain Future of Ham RadioHow the Father of FinFETs Helped Save Moore’s Law
How the Father of FinFETs Helped Save Moore’s Law


More Jobs >>

Comments

Comment Policyhttps://disqus.com/embed/comments/?base=default&f=ieeespectrum&t_i=%2Ftech-history%2Fcyberspace%2Fosi-the-internet-that-wasnt&t_u=https%3A%2F%2Fspectrum.ieee.org%2Ftech-history%2Fcyberspace%2Fosi-the-internet-that-wasnt&t_d=OSI%3A%20The%20Internet%20That%20Wasn%E2%80%99t&t_t=OSI%3A%20The%20Internet%20That%20Wasn%E2%80%99t&s_o=default#version=d31a003da6a2fa81acbeb5fc947cef7d

about:blankChange block type or styleConvert to unordered listConvert to ordered listOutdent list itemIndent list itemAdd title

  • |
  • 
  • OSI: The Internet That Wasn’t

How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking

By Andrew L. RussellPosted 30 Jul 2013 | 01:17 GMTEditor’s PicksWas the Internet Inevitable?Was the Internet Inevitable?Pigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataPigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataA Fairer, Faster Internet ProtocolA Fairer, Faster Internet Protocol

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46  million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee,but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the successof the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

08backstoryAndrewRussell
Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
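
To make the mechanism concrete, here is a minimal Python sketch of what happens at the two endpoints of a packet-switched exchange: the sender splits a message into numbered packets, the network may deliver them out of order over different channels, and the receiver puts them back together. The packet format and function names are illustrative assumptions, not anything Baran or Davies specified.

import random

PACKET_SIZE = 8  # bytes of payload per packet (illustrative choice)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    # Split a message into (sequence number, payload) packets.
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    # Sort by sequence number and rejoin the payloads at the receiving end.
    return b"".join(payload for _, payload in sorted(packets))

message = b"This message crosses the network as independent packets."
packets = packetize(message)
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == message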

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
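
The distinction is easy to see with modern programming interfaces. The sketch below is a rough illustration rather than a reconstruction of Cyclades or X.25; it uses Python’s standard socket module, where a datagram (UDP-style) socket simply sends, with no prior relationship between the endpoints, while a connection-oriented (TCP-style) socket must set up a virtual circuit with connect and accept before any data moves.

import socket

# Connectionless, datagram style: no relationship set up in advance.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the operating system pick a free port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"things just go, one by one", receiver.getsockname())
print(receiver.recvfrom(1024)[0])        # b'things just go, one by one'

# Connection-oriented, virtual-circuit style: establish the relationship first.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # explicit setup between sender and receiver
conn, _ = listener.accept()
client.sendall(b"circuit first, data second")
print(conn.recv(1024))                   # b'circuit first, data second'

for s in (sender, receiver, client, conn, listener):
    s.close()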

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex McKenzie, Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
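
One way to picture the reference model is as nested envelopes: on the way down, each layer wraps what it receives from the layer above with its own header, and on the way up, each layer strips its peer’s header and passes the rest along. The Python sketch below is only a toy built on that assumption; real layer 1, for instance, deals with signals on physical media rather than adding a header, and the bracketed header format is invented for illustration.

# The seven OSI layers, from application (7) down to physical (1).
OSI_LAYERS = [
    (7, "application"), (6, "presentation"), (5, "session"),
    (4, "transport"), (3, "network"), (2, "data link"), (1, "physical"),
]

def send_down(payload: str) -> str:
    # Each layer wraps the data handed down from the layer above with its own header.
    for number, name in OSI_LAYERS:
        payload = f"[L{number} {name}]" + payload
    return payload

def receive_up(frame: str) -> str:
    # Each layer strips its peer's header and hands the remainder to the layer above.
    for number, name in reversed(OSI_LAYERS):
        header = f"[L{number} {name}]"
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

wire = send_down("e-mail message")
print(wire)  # [L1 physical][L2 data link]...[L7 application]e-mail message
assert receive_up(wire) == "e-mail message"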

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate; even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
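
The addressing limitation at the heart of that dispute was concrete: an IP version 4 address is 32 bits long, which caps the total at roughly 4.3 billion addresses, while IP version 6 later widened the field to 128 bits. The short Python check below, offered purely as an illustration using the standard ipaddress module and example addresses from the documentation ranges, shows the difference in scale.

import ipaddress

print(2 ** 32)   # 4294967296 possible IPv4 addresses, roughly 4.3 billion
print(2 ** 128)  # about 3.4 x 10**38 possible IPv6 addresses

v4 = ipaddress.ip_address("192.0.2.1")     # address from the IPv4 documentation range
v6 = ipaddress.ip_address("2001:db8::1")   # address from the IPv6 documentation range
print(v4.version, v4.max_prefixlen)        # 4 32
print(v6.version, v6.max_prefixlen)        # 6 128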

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

08backstoryAndrewRussell
Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46  million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee,but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the successof the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

08backstoryAndrewRussell
Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multilayered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policymakers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards-setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
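To make the mechanics concrete, here is a toy Python sketch of the idea just described. It is purely illustrative (no real protocol, addressing, or error handling is modeled): the sender chops a message into numbered blocks, the network may deliver those blocks in any order, and the receiver restores the original message from the sequence numbers.

import random

def packetize(message: bytes, size: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    count = (len(message) + size - 1) // size
    return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets):
    """Rebuild the message by ordering packets on their sequence numbers."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take different routes and arrive out of order."
packets = packetize(message)
random.shuffle(packets)   # simulate each packet being routed independently
assert reassemble(packets) == message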

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
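Pouzin’s connectionless datagram style survives most visibly in today’s UDP, while the virtual-circuit style echoes in connection-oriented protocols such as TCP. As a rough modern analogy only (standard-library Python sockets, not the 1970s protocols themselves), the sketch below sends a single self-contained datagram with no connection setup at all; a virtual-circuit design would instead require a handshake, via connect() and accept(), before any data could flow.

import socket

# A "receiver" bound to a loopback port chosen by the operating system.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
address = receiver.getsockname()

# The sender transmits one self-contained datagram: no handshake and no
# established relationship between the two endpoints.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"a self-contained datagram", address)

data, _ = receiver.recvfrom(4096)
print(data.decode())

sender.close()
receiver.close()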

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex McKenzie, Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their interworking with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
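For reference, the finished reference model would define seven layers in all, from physical media at the bottom to applications at the top, as the figure above notes. The short Python sketch below simply records that list along with the rough, admittedly imperfect mapping onto the looser TCP/IP layering it is usually compared against; the groupings shown are a common textbook convention, not part of either standard.

# The seven OSI layers, bottom to top.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

# A conventional (approximate) correspondence with the TCP/IP view of the stack.
TCPIP_LAYERS = {
    "Link":        [1, 2],     # e.g., Ethernet
    "Internet":    [3],        # IP
    "Transport":   [4],        # TCP, UDP
    "Application": [5, 6, 7],  # e.g., mail and file transfer protocols
}

for name, numbers in TCPIP_LAYERS.items():
    covered = ", ".join(OSI_LAYERS[n] for n in numbers)
    print(f"{name:<12} roughly covers OSI layer(s): {covered}")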

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
How the Father of FinFETs Helped Save Moore’s Law


Learn More

TCP/IP historyOSIOpen Systems InterconnectionInternet historycomputing networking standardscomputer networking historyREAD NEXTHow IBM Watson Overpromised and Underdelivered on AI Health CareHow IBM Watson Overpromised and Underdelivered on AI Health CareThe Real Story of StuxnetThe Real Story of StuxnetDeveloper of Handheld Cable Tester for U.S. Army Dies at 80Developer of Handheld Cable Tester for U.S. Army Dies at 80A Brief History of the Lie DetectorA Brief History of the Lie DetectorThe Uncertain Future of Ham RadioThe Uncertain Future of Ham RadioHow the Father of FinFETs Helped Save Moore’s Law
How the Father of FinFETs Helped Save Moore’s Law


More Jobs >>

Comments

Comment Policyhttps://disqus.com/embed/comments/?base=default&f=ieeespectrum&t_i=%2Ftech-history%2Fcyberspace%2Fosi-the-internet-that-wasnt&t_u=https%3A%2F%2Fspectrum.ieee.org%2Ftech-history%2Fcyberspace%2Fosi-the-internet-that-wasnt&t_d=OSI%3A%20The%20Internet%20That%20Wasn%E2%80%99t&t_t=OSI%3A%20The%20Internet%20That%20Wasn%E2%80%99t&s_o=default#version=d31a003da6a2fa81acbeb5fc947cef7d

Add title

  • |
  • 
  • OSI: The Internet That Wasn’t

How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking

By Andrew L. RussellPosted 30 Jul 2013 | 01:17 GMTEditor’s PicksWas the Internet Inevitable?Was the Internet Inevitable?Pigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataPigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataA Fairer, Faster Internet ProtocolA Fairer, Faster Internet Protocol

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46  million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee,but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the successof the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, ­misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—­including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

08backstoryAndrewRussell
Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.

OSI: The Internet That Wasn’t

How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking

By Andrew L. Russell | Posted 30 Jul 2013

Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multilayered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policymakers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards-setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
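
To make the mechanics concrete, here is a minimal Python sketch (an illustration of the general idea only, not anything drawn from Baran’s or Davies’s actual designs) that breaks a message into numbered packets, lets them arrive in any order, and reassembles them at the receiving end:

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet; the size here is arbitrary

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence number, payload) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort packets by sequence number and rejoin the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"packets may take different routes and still arrive intact"
packets = packetize(message)
random.shuffle(packets)  # simulate packets arriving out of order
assert reassemble(packets) == message
```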

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
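
A rough modern analogue may help (this sketch is mine, not something from the INWG papers): the connectionless datagram philosophy survives today in UDP, while TCP is connection-oriented, closer in spirit to the virtual-circuit camp. Using only Python’s standard socket module:

```python
import socket

# Receiver: bind a UDP socket and wait for whatever shows up -- no connection
# is ever established with the sender.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0 lets the OS pick a free port
address = receiver.getsockname()

# Sender: fire off a self-contained datagram, "like photons."
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"no handshake required", address)

data, _ = receiver.recvfrom(1024)
print(data)                             # b'no handshake required'

# A connection-oriented exchange (TCP, or a CCITT-style virtual circuit) would
# instead require listen()/accept() on one side and connect() on the other
# before a single byte of application data could move.
```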

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their interworking with each other.” The concept of open working was as much strategic as it was technical, signaling its backers’ desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
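
The toy sketch below (my own illustration; the article names only layers 1, 4, and 7, and the remaining labels are the standard OSI ones) conveys the layering idea in miniature: each layer wraps the data handed down from the layer above, so any one layer’s implementation can change without disturbing the others.

```python
# The seven OSI layers, top to bottom.
OSI_LAYERS = {
    7: "application",    # e.g., e-mail, file transfer
    6: "presentation",
    5: "session",
    4: "transport",      # end-to-end movement of data
    3: "network",
    2: "data link",
    1: "physical",       # e.g., copper cables
}

def encapsulate(payload: str) -> str:
    """Wrap an application payload in one tag per layer, from layer 7 down to layer 1.

    A real stack would not literally tag anything at the physical layer; the
    nesting is only meant to show how lower layers carry what upper layers produce.
    """
    for number in sorted(OSI_LAYERS, reverse=True):
        payload = f"[L{number}:{OSI_LAYERS[number]}] " + payload
    return payload

print(encapsulate("hello"))
# [L1:physical] [L2:data link] ... [L7:application] hello
```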

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the original ARPANET host protocol, NCP, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.
How the Father of FinFETs Helped Save Moore’s Law


Learn More

TCP/IP historyOSIOpen Systems InterconnectionInternet historycomputing networking standardscomputer networking historyREAD NEXTHow IBM Watson Overpromised and Underdelivered on AI Health CareHow IBM Watson Overpromised and Underdelivered on AI Health CareThe Real Story of StuxnetThe Real Story of StuxnetDeveloper of Handheld Cable Tester for U.S. Army Dies at 80Developer of Handheld Cable Tester for U.S. Army Dies at 80A Brief History of the Lie DetectorA Brief History of the Lie DetectorThe Uncertain Future of Ham RadioThe Uncertain Future of Ham RadioHow the Father of FinFETs Helped Save Moore’s Law
How the Father of FinFETs Helped Save Moore’s Law


  • 
  • 
  • 

More Jobs >>

Comments

Comment Policyhttps://disqus.com/embed/comments/?base=default&f=ieeespectrum&t_i=%2Ftech-history%2Fcyberspace%2Fosi-the-internet-that-wasnt&t_u=https%3A%2F%2Fspectrum.ieee.org%2Ftech-history%2Fcyberspace%2Fosi-the-internet-that-wasnt&t_d=OSI%3A%20The%20Internet%20That%20Wasn%E2%80%99t&t_t=OSI%3A%20The%20Internet%20That%20Wasn%E2%80%99t&s_o=default#version=d31a003da6a2fa81acbeb5fc947cef7d

about:blankChange block type or styleConvert to unordered listConvert to ordered listOutdent list itemIndent list itemAdd title

  • |
  • OSI: The Internet That Wasn’t

How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking

By Andrew L. RussellPosted 30 Jul 2013 | 01:17 GMTEditor’s PicksWas the Internet Inevitable?Was the Internet Inevitable?Pigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataPigeon-Based ‘Feathernet’ Still Wings-Down Fastest Way to Transfer Massive Amounts of DataA Fairer, Faster Internet ProtocolA Fairer, Faster Internet Protocol

08OLOSIHistoryOpener
Photo: INRIA

If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multi­layered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policy­makers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.Paul Baran

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

Donald W. Davies

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented ­independently by Paul Baran at the Rand Corp. in the ­United States and Donald Davies at the ­National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the ­computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

map-usa

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46  million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An ­early attempt began in 1972, with the formation of the Inter­national Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex ­McKenzie in the United States, ­Donald Davies and Roger ­Scantlebury in England, and Louis Pouzin and ­Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

group-shot

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex ­McKenzie, ­Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-­switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their country­man’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974 Cerf and Kahn

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the ­Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee,but that fact alone wasn’t enough to doom the ­project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental ­association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their inter­working with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database ­expert Charles Bachman as committee chairman. Widely respected in computer circles, ­Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.

1974: IBM launches a packet-switching network called the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-­computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extra­ordinary number of plenary and committee meetings.

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

Bachman group

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie.… IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. ­Bachman and Hubert Zimmermann (a veteran of ­Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as was the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies ­after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the ­Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the ­government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers ­attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for ­coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ­ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

conference table
Photo: John Day

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and ­addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community ­consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.


If everything had gone according to plan, the Internet as we know it would never have sprung up. That plan, devised 35 years ago, instead would have created a comprehensive set of standards for computer networks called Open Systems Interconnection, or OSI. Its architects were a dedicated group of computer industry representatives in the United Kingdom, France, and the United States who envisioned a complete, open, and multilayered system that would allow users all over the world to exchange data easily and thereby unleash new possibilities for collaboration and commerce.

For a time, their vision seemed like the right one. Thousands of engineers and policymakers around the world became involved in the effort to establish OSI standards. They soon had the support of everyone who mattered: computer companies, telephone companies, regulators, national governments, international standards-setting agencies, academic researchers, even the U.S. Department of Defense. By the mid-1980s the worldwide adoption of OSI appeared inevitable.

1961: Paul Baran at Rand Corp. begins to outline his concept of “message block switching” as a way of sending data over computer networks.

And yet, by the early 1990s, the project had all but stalled in the face of a cheap and agile, if less comprehensive, alternative: the Internet’s Transmission Control Protocol and Internet Protocol. As OSI faltered, one of the Internet’s chief advocates, Einar Stefferud, gleefully pronounced: “OSI is a beautiful dream, and TCP/IP is living it!”

What happened to the “beautiful dream”? While the Internet’s triumphant story has been well documented by its designers and the historians they have worked with, OSI has been forgotten by all but a handful of veterans of the Internet-OSI standards wars. To understand why, we need to dive into the early history of computer networking, a time when the vexing problems of digital convergence and global interconnection were very much on the minds of computer scientists, telecom engineers, policymakers, and industry executives. And to appreciate that history, you’ll have to set aside for a few minutes what you already know about the Internet. Try to imagine, if you can, that the Internet never existed.

1965: Donald W. Davies, working independently of Baran, conceives his “packet-switching” network.

The story starts in the 1960s. The Berlin Wall was going up. The Free Speech movement was blossoming in Berkeley. U.S. troops were fighting in Vietnam. And digital computer-communication systems were in their infancy and the subject of intense, wide-ranging investigations, with dozens (and soon hundreds) of people in academia, industry, and government pursuing major research programs.

The most promising of these involved a new approach to data communication called packet switching. Invented independently by Paul Baran at the Rand Corp. in the United States and Donald Davies at the National Physical Laboratory in England, packet switching broke messages into discrete blocks, or packets, that could be routed separately across a network’s various channels. A computer at the receiving end would reassemble the packets into their original form. Baran and Davies both believed that packet switching could be more robust and efficient than circuit switching, the old technology used in telephone systems that required a dedicated channel for each conversation.
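To make the mechanism concrete, here is a minimal Python sketch of the packet-switching idea described above. It is illustrative only, not any historical implementation: the message is cut into numbered packets, the packets may travel and arrive in any order, and the receiver reassembles them by sequence number. The function names and the packet size are invented for the example.

```python
import random

def packetize(message: str, packet_size: int = 8) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + packet_size])
        for seq, i in enumerate(range(0, len(message), packet_size))
    ]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the original message, regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

message = "Packets can take separate routes across the network."
packets = packetize(message)
random.shuffle(packets)          # packets may arrive out of order
assert reassemble(packets) == message
```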

Researchers sponsored by the U.S. Department of Defense’s Advanced Research Projects Agency created the first packet-switched network, called the ARPANET, in 1969. Soon other institutions, most notably the computer giant IBM and several of the telephone monopolies in Europe, hatched their own ambitious plans for packet-switched networks. Even as these institutions contemplated the digital convergence of computing and communications, however, they were anxious to protect the revenues generated by their existing businesses. As a result, IBM and the telephone monopolies favored packet switching that relied on “virtual circuits”—a design that mimicked circuit switching’s technical and organizational routines.

1969: ARPANET, the first packet-switching network, is created in the United States.

1970: Estimated U.S. market revenues for computer communications: US $46 million.

1971: Cyclades packet-switching project launches in France.

With so many interested parties putting forth ideas, there was widespread agreement that some form of international standardization would be necessary for packet switching to be viable. An early attempt began in 1972, with the formation of the International Network Working Group (INWG). Vint Cerf was its first chairman; other active members included Alex McKenzie in the United States, Donald Davies and Roger Scantlebury in England, and Louis Pouzin and Hubert Zimmermann in France.

The purpose of INWG was to promote the “datagram” style of packet switching that Pouzin had designed. As he explained to me when we met in Paris in 2012, “The essence of datagram is connectionless. That means you have no relationship established between sender and receiver. Things just go separately, one by one, like photons.” It was a radical proposal, especially when compared to the connection-oriented virtual circuits favored by IBM and the telecom engineers.
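Pouzin’s distinction can be sketched in a few lines of Python. This is a toy illustration with invented names, not either camp’s actual protocol: a datagram carries its own source and destination addresses and can be delivered with no prior setup, while a virtual circuit requires an explicit setup step and per-connection state in the network.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    # Connectionless: every packet is self-describing and independent.
    src: str
    dst: str
    payload: str

def deliver(dgram: Datagram) -> None:
    # No prior state needed; each datagram is routed on its own.
    print(f"{dgram.src} -> {dgram.dst}: {dgram.payload}")

# Connection-oriented: a virtual circuit must be set up first, and the
# network keeps per-circuit state for the lifetime of the connection.
circuits: dict[int, tuple[str, str]] = {}

def open_circuit(circuit_id: int, src: str, dst: str) -> None:
    circuits[circuit_id] = (src, dst)

def send_on_circuit(circuit_id: int, payload: str) -> None:
    src, dst = circuits[circuit_id]   # fails if no circuit was set up
    print(f"[circuit {circuit_id}] {src} -> {dst}: {payload}")

deliver(Datagram("host-a", "host-b", "hello"))   # works immediately
open_circuit(7, "host-a", "host-b")
send_on_circuit(7, "hello")
```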

INWG met regularly and exchanged technical papers in an effort to reconcile its designs for datagram networks, in particular for a transport protocol—the key mechanism for exchanging packets across different types of networks. After several years of debate and discussion, the group finally reached an agreement in 1975, and Cerf and Pouzin submitted their protocol to the international body responsible for overseeing telecommunication standards, the International Telegraph and Telephone Consultative Committee (known by its French acronym, CCITT).

1972: International Network Working Group (INWG) forms to develop an international standard for packet-switching networks, including [left to right] Louis Pouzin, Vint Cerf, Alex McKenzie, Hubert Zimmermann, and Donald Davies.

The committee, dominated by telecom engineers, rejected the INWG’s proposal as too risky and untested. Cerf and his colleagues were bitterly disappointed. Pouzin, the combative leader of Cyclades, France’s own packet-switching research project, sarcastically noted that members of the CCITT “do not object to packet switching, as long as it looks just like circuit switching.” And when Pouzin complained at major conferences about the “arm-twisting” tactics of “national monopolies,” everyone knew he was referring to the French telecom authority. French bureaucrats did not appreciate their countryman’s candor, and government funding was drained from Cyclades between 1975 and 1978, when Pouzin’s involvement also ended.

1974: Vint Cerf and Robert Kahn publish “A Protocol for Packet Network Intercommunication,” in IEEE Transactions on Communications.

For his part, Cerf was so discouraged by his international adventures in standards making that he resigned his position as INWG chair in late 1975. He also quit the faculty at Stanford and accepted an offer to work with Bob Kahn at ARPA. Cerf and Kahn had already drawn on Pouzin’s datagram design and published the details of their “transmission control program” the previous year in the IEEE Transactions on Communications. That provided the technical foundation of the “Internet,” a term adopted later to refer to a network of networks that utilized ARPA’s TCP/IP. In subsequent years the two men directed the development of Internet protocols in an environment they could control: the small community of ARPA contractors.

Cerf’s departure marked a rift within the INWG. While Cerf and other ARPA contractors eventually formed the core of the Internet community in the 1980s, many of the remaining veterans of INWG regrouped and joined the international alliance taking shape under the banner of OSI. The two camps became bitter rivals.

OSI was devised by committee, but that fact alone wasn’t enough to doom the project—after all, plenty of successful standards start out that way. Still, it is worth noting for what came later.

In 1977, representatives from the British computer industry proposed the creation of a new standards committee devoted to packet-switching networks within the International Organization for Standardization (ISO), an independent nongovernmental association created after World War II. Unlike the CCITT, ISO wasn’t specifically concerned with telecommunications—the wide-ranging topics of its technical committees included TC 1 for standards on screw threads and TC 17 for steel. Also unlike the CCITT, ISO already had committees for computer standards and seemed far more likely to be receptive to connectionless datagrams.

The British proposal, which had the support of U.S. and French representatives, called for “network standards needed for open working.” These standards would, the British argued, provide an alternative to traditional computing’s “self-contained, ‘closed’ systems,” which were designed with “little regard for the possibility of their interworking with each other.” The concept of open working was as much strategic as it was technical, signaling their desire to enable competition with the big incumbents—namely, IBM and the telecom monopolies.

OSI vs TCP/IP
A layered approach: The OSI reference model [left column] divides computer communications into seven distinct layers, from physical media in layer 1 to applications in layer 7. Though less rigid, the TCP/IP approach to networking can also be construed in layers, as shown on the right.

As expected, ISO approved the British request and named the U.S. database expert Charles Bachman as committee chairman. Widely respected in computer circles, Bachman had four years earlier received the prestigious Turing Award for his work on a database management system called the Integrated Data Store.

When I interviewed Bachman in 2011, he described the “architectural vision” that he brought to OSI, a vision that was inspired by his work with databases generally and by IBM’s Systems Network Architecture in particular. He began by specifying a reference model that divided the various tasks of computer communication into distinct layers. For example, physical media (such as copper cables) fit into layer 1; transport protocols for moving data fit into layer 4; and applications (such as e-mail and file transfer) fit into layer 7. Once a layered architecture was established, specific protocols would then be developed.
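As a rough illustration of that layered design, the following Python toy wraps a payload in one header per layer on the way down the stack and strips the headers again on the way up. It is a sketch of the layering principle only, not the real OSI protocols; the header format and function names are invented for the example.

```python
# The seven OSI layers, from the physical medium up to the application.
OSI_LAYERS = [
    "physical", "data link", "network", "transport",
    "session", "presentation", "application",
]

def encapsulate(payload: str) -> str:
    """Wrap the payload in one toy header per layer, application first."""
    for layer in reversed(OSI_LAYERS):   # application header innermost
        payload = f"<{layer}>{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Strip the headers again, starting from the physical layer."""
    for layer in OSI_LAYERS:
        prefix = f"<{layer}>"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("file-transfer data")
assert decapsulate(frame) == "file-transfer data"
```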

1974: IBM launches its proprietary networking architecture, the Systems Network Architecture.

1975: INWG submits a proposal to the International Telegraph and Telephone Consultative Committee (CCITT), which rejects it. Cerf resigns from INWG.

1976: CCITT publishes Recommendation X.25, a standard for packet switching that uses “virtual circuits.”

Bachman’s design departed from IBM’s Systems Network Architecture in a significant way: Where IBM specified a terminal-to-computer architecture, Bachman would connect computers to one another, as peers. That made it extremely attractive to companies like General Motors, a leading proponent of OSI in the 1980s. GM had dozens of plants and hundreds of suppliers, using a mix of largely incompatible hardware and software. Bachman’s scheme would allow “interworking” between different types of proprietary computers and networks—so long as they followed OSI’s standard protocols.

The layered OSI reference model also provided an important organizational feature: modularity. That is, the layering allowed committees to subdivide the work. Indeed, Bachman’s reference model was just a starting point. To become an international standard, each proposal would have to complete a four-step process, starting with a working draft, then a draft proposed international standard, then a draft international standard, and finally an international standard. Building consensus around the OSI reference model and associated standards required an extraordinary number of plenary and committee meetings.
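That four-stage pipeline can be modeled in a few lines of Python, using the stage names from the paragraph above. The single pass/fail ballot per stage is a simplification invented for this sketch, not ISO’s actual balloting rules.

```python
from enum import IntEnum

class OSIStage(IntEnum):
    WORKING_DRAFT = 1
    DRAFT_PROPOSED_INTERNATIONAL_STANDARD = 2
    DRAFT_INTERNATIONAL_STANDARD = 3
    INTERNATIONAL_STANDARD = 4

def advance(stage: OSIStage, ballot_passed: bool) -> OSIStage:
    """Move a proposal forward one stage only if the (hypothetical) vote passes."""
    if not ballot_passed or stage is OSIStage.INTERNATIONAL_STANDARD:
        return stage                      # stalled, or already finished
    return OSIStage(stage + 1)

stage = OSIStage.WORKING_DRAFT
for vote in (True, True, False, True):    # one failed ballot delays the standard
    stage = advance(stage, vote)
print(stage.name)                          # INTERNATIONAL_STANDARD
```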

OSI’s first plenary meeting lasted three days, from 28 February through 2 March 1978. Dozens of delegates from 10 countries participated, as well as observers from four international organizations. Everyone who attended had market interests to protect and pet projects to advance. Delegates from the same country often had divergent agendas. Many attendees were veterans of INWG who retained a wary optimism that the future of data networking could be wrested from the hands of IBM and the telecom monopolies, which had clear intentions of dominating this emerging market.

1977: International Organization for Standardization (ISO) committee on Open Systems Interconnection is formed with Charles Bachman [left] as chairman; other active members include Hubert Zimmermann [center] and John Day [right].

1980: U.S. Department of Defense publishes “Standards for the Internet Protocol and Transmission Control Protocol.”

Meanwhile, IBM representatives, led by the company’s capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI’s development in line with IBM’s own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture (Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates “fighting over who would get a piece of the pie. … IBM played them like a violin. It was truly magical to watch.”

Despite such stalling tactics, Bachman’s leadership propelled OSI along the precarious path from vision to reality. Bachman and Hubert Zimmermann (a veteran of Cyclades and INWG) forged an alliance with the telecom engineers in CCITT. But the partnership struggled to overcome the fundamental incompatibility between their respective worldviews. Zimmermann and his computing colleagues, inspired by Pouzin’s datagram design, championed “connectionless” protocols, while the telecom professionals persisted with their virtual circuits. Instead of resolving the dispute, they agreed to include options for both designs within OSI, thus increasing its size and complexity.

This uneasy alliance of computer and telecom engineers published the OSI reference model as an international standard in 1984. Individual OSI standards for transport protocols, electronic mail, electronic directories, network management, and many other functions soon followed. OSI began to accumulate the trappings of inevitability. Leading computer companies such as Digital Equipment Corp., Honeywell, and IBM were by then heavily invested in OSI, as were the European Economic Community and national governments throughout Europe, North America, and Asia.

Even the U.S. government—the main sponsor of the Internet protocols, which were incompatible with OSI—jumped on the OSI bandwagon. The Defense Department officially embraced the conclusions of a 1985 National Research Council recommendation to transition away from TCP/IP and toward OSI. Meanwhile, the Department of Commerce issued a mandate in 1988 that the OSI standard be used in all computers purchased by U.S. government agencies after August 1990.

While such edicts may sound like the work of overreaching bureaucrats, remember that throughout the 1980s, the Internet was still a research network: It was growing rapidly, to be sure, but its managers did not allow commercial traffic or for-profit service providers on the government-subsidized backbone until 1992. For businesses and other large entities that wanted to exchange data between different kinds of computers or different types of networks, OSI was the only game in town.

January 1983: U.S. Department of Defense’s mandated use of TCP/IP on the ARPANET signals the “birth of the Internet.”

May 1983: ISO publishes “ISO 7498: The Basic Reference Model for Open Systems Interconnection” as an international standard.

1985: U.S. National Research Council recommends that the Department of Defense migrate gradually from TCP/IP to OSI.

1988: U.S. market revenues for computer communications: $4.9 billion.

That was not the end of the story, of course. By the late 1980s, frustration with OSI’s slow development had reached a boiling point. At a 1989 meeting in Europe, the OSI advocate Brian Carpenter gave a talk titled “Is OSI Too Late?” It was, he recalled in a recent memoir, “the only time in my life” that he “got a standing ovation in a technical conference.” Two years later, the French networking expert and former INWG member Pouzin, in an essay titled “Ten Years of OSI—Maturity or Infancy?,” summed up the growing uncertainty: “Government and corporate policies never fail to recommend OSI as the solution. But, it is easier and quicker to implement homogenous networks based on proprietary architectures, or else to interconnect heterogeneous systems with TCP-based products.” Even for OSI’s champions, the Internet was looking increasingly attractive.

That sense of doom deepened, progress stalled, and in the mid-1990s, OSI’s beautiful dream finally ended. The effort’s fatal flaw, ironically, grew from its commitment to openness. The formal rules for international standardization gave any interested party the right to participate in the design process, thereby inviting structural tensions, incompatible visions, and disruptive tactics.

OSI’s first chairman, Bachman, had anticipated such problems from the start. In a conference talk in 1978, he worried about OSI’s chances of success: “The organizational problem alone is incredible. The technical problem is bigger than any one previously faced in information systems. And the political problems will challenge the most astute statesmen. Can you imagine trying to get the representatives from ten major and competing computer corporations, and ten telephone companies and PTTs [state-owned telecom monopolies], and the technical experts from ten different nations to come to any agreement within the foreseeable future?”

1988: U.S. Department of Commerce mandates that government agencies buy OSI-compliant products.

1989: As OSI begins to founder, computer scientist Brian Carpenter gives a talk entitled “Is OSI Too Late?” He receives a standing ovation.

1991: Tim Berners-Lee announces public release of the WorldWideWeb application.

1992: U.S. National Science Foundation revises policies to allow commercial traffic over the Internet.

Despite Bachman’s and others’ best efforts, the burden of organizational overhead never lifted. Hundreds of engineers attended the meetings of OSI’s various committees and working groups, and the bureaucratic procedures used to structure the discussions didn’t allow for the speedy production of standards. Everything was up for debate—even trivial nuances of language, like the difference between “you will comply” and “you should comply,” triggered complaints. More significant rifts continued between OSI’s computer and telecom experts, whose technical and business plans remained at odds. And so openness and modularity—the key principles for coordinating the project—ended up killing OSI.

Meanwhile, the Internet flourished. With ample funding from the U.S. government, Cerf, Kahn, and their colleagues were shielded from the forces of international politics and economics. ARPA and the Defense Communications Agency accelerated the Internet’s adoption in the early 1980s, when they subsidized researchers to implement Internet protocols in popular operating systems, such as the modification of Unix by the University of California, Berkeley. Then, on 1 January 1983, ARPA stopped supporting the ARPANET host protocol, thus forcing its contractors to adopt TCP/IP if they wanted to stay connected; that date became known as the “birth of the Internet.”

And so, while many users still expected OSI to become the future solution to global network interconnection, growing numbers began using TCP/IP to meet the practical near-term pressures for interoperability.

Engineers who joined the Internet community in the 1980s frequently misconstrued OSI, lampooning it as a misguided monstrosity created by clueless European bureaucrats. Internet engineer Marshall Rose wrote in his 1990 textbook that the “Internet community tries its very best to ignore the OSI community. By and large, OSI technology is ugly in comparison to Internet technology.”

Unfortunately, the Internet community’s bias also led it to reject any technical insights from OSI. The classic example was the “palace revolt” of 1992. Though not nearly as formal as the bureaucracy that devised OSI, the Internet had its Internet Activities Board and the Internet Engineering Task Force, responsible for shepherding the development of its standards. Such work went on at a July 1992 meeting in Cambridge, Mass. Several leaders, pressed to revise routing and addressing limitations that had not been anticipated when TCP and IP were designed, recommended that the community consider—if not adopt—some technical protocols developed within OSI. The hundreds of Internet engineers in attendance howled in protest and then sacked their leaders for their heresy.
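The addressing limitation behind that dispute is easy to quantify. IPv4 uses 32-bit addresses, roughly 4.3 billion in total, while the IPv6 protocol noted in the timeline entries below uses 128-bit addresses. A quick back-of-the-envelope check in Python:

```python
ipv4_addresses = 2 ** 32            # 32-bit IPv4 address space
ipv6_addresses = 2 ** 128           # 128-bit IPv6 address space

print(f"IPv4: {ipv4_addresses:,} addresses")          # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e} addresses")        # about 3.403e+38
print(f"Ratio: {ipv6_addresses // ipv4_addresses:.3e}")
```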

1992: In a “palace revolt,” Internet engineers reject the ISO ConnectionLess Network Protocol as a replacement for IP version 4.

1996: Internet community defines IP version 6.

2013: IPv6 carries approximately 1 percent of global Internet traffic.

Although Cerf and Kahn did not design TCP/IP for business use, decades of government subsidies for their research eventually created a distinct commercial advantage: Internet protocols could be implemented for free. (To use OSI standards, companies that made and sold networking equipment had to purchase paper copies from the standards group ISO, one copy at a time.) Marc Levilion, an engineer for IBM France, told me in a 2012 interview about the computer industry’s shift away from OSI and toward TCP/IP: “On one side you have something that’s free, available, you just have to load it. And on the other side, you have something which is much more architectured, much more complete, much more elaborate, but it is expensive. If you are a director of computation in a company, what do you choose?”

By the mid-1990s, the Internet had become the de facto standard for global computer networking. Cruelly for OSI’s creators, Internet advocates seized the mantle of “openness” and claimed it as their own. Today, they routinely campaign to preserve the “open Internet” from authoritarian governments, regulators, and would-be monopolists.

In light of the success of the nimble Internet, OSI is often portrayed as a cautionary tale of overbureaucratized “anticipatory standardization” in an immature and volatile market. This emphasis on its failings, however, misses OSI’s many successes: It focused attention on cutting-edge technological questions, and it became a source of learning by doing—including some hard knocks—for a generation of network engineers, who went on to create new companies, advise governments, and teach in universities around the world.

Beyond these simplistic declarations of “success” and “failure,” OSI’s history holds important lessons that engineers, policymakers, and Internet users should get to know better. Perhaps the most important lesson is that “openness” is full of contradictions. OSI brought to light the deep incompatibility between idealistic visions of openness and the political and economic realities of the international networking industry. And OSI eventually collapsed because it could not reconcile the divergent desires of all the interested parties. What then does this mean for the continued viability of the open Internet?

For more about the author, see the Back Story, “How Quickly We Forget.”

How Quickly We Forget

Photo: Andrew L. Russell

History is written by the winners, as they say. And in the fast-moving world of technology, history can mean things that happened just 15 or 20 years ago. In “The Internet That Wasn’t,” in this issue, Andrew L. Russell, an assistant professor of history and director of the Program in Science & Technology Studies at Stevens Institute of Technology, in Hoboken, N.J., explores just such a case: an alternative scheme for computer networking that, despite years of effort by thousands of engineers, ultimately lost out to the Internet’s Transmission Control Protocol/Internet Protocol (TCP/IP) and is now all but forgotten.

Russell first wrote about the competition between that scheme, called Open Systems Interconnection (OSI), and the Internet in 2006, for the IEEE Annals of the History of Computing. During his research on the Internet and its precursor, the ARPANET, “OSI would creep up as a foil, something they didn’t want the Internet to turn into,” he says. “So that’s the way I presented it.”

After the article was published, he says, veterans of OSI “came out of the woodwork to tell their stories.” One of the e-mails was from a computer networking pioneer named John Day, who had worked on both TCP/IP and OSI. Day told Russell that his article hadn’t captured the full scope of the story.

“Nobody likes to hear that they got it wrong,” Russell recalls. “It took me a while to cool down.” Eventually, he talked to Day, who put him in touch with other OSI participants in the United States and France. Through those interviews and archival research at the Charles Babbage Institute, in Minnesota, a more balanced, complex history of networking emerged, which he describes in his upcoming book Open Standards and the Digital Age: History, Ideology, and Networks (Cambridge University Press).

“It’s almost alarming that something that recent can be so easily forgotten,” Russell says. On the other hand, it’s what makes being a historian of technology so rewarding.

This article appears in the August 2013 print issue as “The Internet That Wasn’t.”

To Probe Further

This article is a follow-up to a 2006 article Andrew L. Russell published in IEEE Annals of the History of Computing, called “ ‘Rough Consensus and Running Code’ and the Internet-OSI Standards War.” And he will be delving into the history of OSI and the Internet—along with related topics such as standardization in the Bell System—in his upcoming book, Open Standards and the Digital Age: History, Ideology, and Networks, which will be published by Cambridge University Press in late 2013 or early 2014.

Janet Abbate’s Inventing the Internet (MIT Press, 1999) is an excellent account of the events that led to the development of the Internet as we know it.

Alexander McKenzie’s article “INWG and the Conception of the Internet: An Eyewitness Account,” published in the January 2011 issue of IEEE Annals of the History of Computing, builds on documents McKenzie saved from his experience with the International Networking Working Group and that now are archived at the Charles Babbage Institute at the University of Minnesota, Minneapolis.

James Pelkey’s online book Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988 is based on interviews and documents he collected in the late 1980s and early 1990s, a time when OSI seemed certain to dominate the future of computer internetworking. Pelkey’s project also was described in a recent Computer History Museum blog post celebrating the 40th anniversary of Ethernet.


Texas RepubliC00Ns Grow up To Be Serial Killers


21PM PDT

Murder Trail, Houston, Texas (Molesting Garbage Law CIty)

HAVE YOU HEARD?

The Jonathan Foster Murder Trial Just Got Started in Texas!

What’s that you say? You don’t know who Jonathan Foster was?

On Christmas Eve Day of 2010, 12 year old Jonathan Foster was kidnapped from his home by Mona Nelson, a 44 year old Black woman.

After abducting the child, Nelson, a welder, tied his hands, and then roasted the boy alive with her blow torch. Imagine what went through Jonathan’s mind as he was being burned alive!

Jonathan’s badly burned body was soon discovered in a roadside ditch in Houston, not far from where he lived. Nelson admits to dumping the container which held the body–she was caught on film–but makes the ridiculous claim that she was given the container by one of Jonathan’s relatives.

Police suspect that Jonathan may not have been her only victim.

Houston Police Department Homicide Detective Mike Miller calls Nelson a “cold, soulless murderer who showed an absolute lack of remorse in taking the life of Jonathan Foster.”

Next question: Do you know who Trayvon Martin and George Zimmerman are?

Thug Martin (right) was shot while brutally beating Mr. Zimmerman

Of course you do!!!

Everybody in America knows about Thug Martin and George Zimmerman!

It doesn’t take a Sherlock Holmes to clearly see that there is a double standard aimed at White folks. The question my dear Mr. Watson, is, WHY does this horrible double standard exist?

And more importantly, WHO is real power behind this “anti White” media bias? (Hint: It ain’t Al Sharpton!)

And most importantly, TO WHAT END?

The “War on Whites” serves the interests of the THE NEW WORLD ORDER!

Journalist and former Presidential candidate Pat Buchanan cracks the code for us:

“Global elites view the White Western world as the main obstacle standing in the way of a future world government. Multiculturalism is a tool used by such elites to dismantle White Western civilization.”

Can you handle the truth? Are you ready to take the next step?

Where is the uproar over this “hate crime?” Why have you not heard about this trial in main stream media?

We challenge you to open the final door, to “get smart,” and solve this mystery.

   Go to Order Form

REALLY???? WHAT THE FUCK IS WRONG WITH YOU FUCKING NIGGERS???? CAN ONE OF YOU FUCKING NIGGERS EXPLAIN WHY THE FUCK YOU ANIMALS ARE SO FUCKING UNCIVILIZED?????

  • Location: Molesting Garbage Law CIty

DISCLAIMER FOR THE CLUELESS:

This is humor, folks. I am not a White Supremacist or a Skinhead.

SAVE OUR TRAILERPARKS FROM
JEWISH UFO’S AND BLACK UNITED NATIONS HELICOPTERS!!!

Nazi Space Aryans DO NOT PERFORM MEDICAL EXPERIMENTS ON WHITE PEOPLE!!!

It is a DOCUMENTED FACT that the space aliens abducting people for medical

experiments are of JEWISH origin from the star system 51 Peg.

In the 16th century, a JEWISH rabbi constructed a golem out of clay to attack

the Christian settlements around them. This golem eventually turned on the

JEWISH population and the rabbi destroyed it. This was the first DOCUMENTED

CASE OF JEWS CONSTRUCTING PEOPLE!!!

Later, as JEWISH technology and NECROMANCY became more advanced, they began

constructing BLACK PEOPLE. In the year 1904, the JEWS INVENTED THE BLACK RACE.

Even today, JEWISH “Archeologists” are creating “ancient” structures they’re

“discovering” in Africa and Egypt. For example, the pyramids were mostly

constructed by JEWISH “Archaeologists” in 1973. Just last year, they

“discovered” yet another pyramid. PRIOR TO 1904, THERE WERE NO BLACK PEOPLE

ANYWHERE! Any information to the contrary is A JEWISH HOAX.

Occasionally we Nazi’s, with the help of our Aryan Space Nazis from Tau Ceti

Prime, can capture and re-program some of these Black “People” who are in fact

really just biological androids created by THE JEWS. These are the only

“abductions” we Interstellar Space Nazis perform.

Some of the Black “people” we have reprogramed have been:

  • Louis Farrakahn
  • Jesse Jackson
  • Al Sharpton
  • Michael Jackson (We even made him LOOK White!)
  • Malcolm Little (Aka, Malcolm X. The JEWS reprogramed him after we did,so we had to have Farrakahn kill him).

All of the above people were reprogramed by our extraterrestrial Aryan friends

to become Black Nazis and servants of the Interstellar Aryan Space Corps.

THE JEWS, on the other hand, continue to have their Interstellar Zionists abduct

WHITE PEOPLE.

These JEWISH SPACE SHIPS are based in Groom Lake, which is also called Area 51.

This is an area kept secret by the US Government, also known as THE ZIONIST

OCCUPIED GOVERNMENT or ZOG. To cover up their activities, THE JEWS are burning

toxic waste there and have allowed this information to become public knowledge,

but in reality the toxic waste disposal is merely a more sinister front for

their true agenda of ABDUCTING HUMAN CHRISTIANS WITH UFO’S. In the process of

this cover-up, MANY WHITE PEOPLE HAVE BEEN POISONED with known toxins such as

Dioxin, PCB’s, Aspratame, dextromathorphan, and even monosodium glutamate which

is being illegally disposed of at Area 51. Of course, we Nazis dispose of toxic

waste by either deporting it to the east or making it into lampshades and soap.

This is almost as awful as the JEWISH SPACE ALIEN UFO ABDUCTIONS.

You will nottice that most of these JEWISH abductions are in the midwest from

trailer parks, which, coincidentally, happens to be where most of our Neo-Nazi

allies come from. What we need to do is set up anti-UFO lasers to defend

trailer parks from JEWISH UFO’s and the tornados they unleash with the aid of

BLACK HELICOPTERS FROM THE UNITED NATIONS!!!

Even now, UN Troops from UNICEF may be hovering above your astroturf making your

pink flamingo whirligigs go balistic.

In the meantime, we need to get as many of our Aryan brothers and sisters to the

safety of the Aryan Nazi UFO Mothership at the center of the Earth. Getting

there is easy. Sell everything you have. The JEWS will be eager to buy it.

Get yourself an airplane ticket to McMurdo Sound in Antarctica. Then, once you

are in Antarctica, find a volcano and jump into it. The Aryan Space Nazis will

beam you into their ship as you fall and you will land unharmed in the hands of

beautiful, superior Aryan people from the stars.

— Ernst Zundel*

My right to free speech supersedes your right to exist.


Not the real Zundel! This is

a pseudo-Ernst Zundel posting from AOL.


DISCLAIMER FOR THE CLUELESS:

This is humor, folks. I am not a White Supremacist or a Skinhead.


Train robbers are stealing rolling cargo in the Mojave. But today’s prize may be TVs or Nikes


By Phil Garlington of The Orange County Register (KRT)

MOJAVE NATIONAL PRESERVE, Calif. — National Park Ranger Tim Duncan has his hands full dealing with speeders, cactus poachers and off-roaders as the lone federal lawman for this arid, 1.4 million acres of protected high desert.

And the most serious criminal activity Duncan faces?

Train robbers.

The Union Pacific railroad line from Los Angeles to Las Vegas runs straight through the desolate heart of the eastern Mojave. Dead center in the national preserve is The Hill, an 18-mile grade, where 1½-mile-long eastbound freight trains laden with double stacks of containers slow to a crawl.

“It’s been bad,” says the bearded, jovial Duncan. “Sometimes both sides of the track have been littered with boxes of merchandise the thieves have thrown off the train. This has been one of their favorite spots.”

It’s night, and Duncan is hunkered under a concrete railroad bridge with Union Pacific special agent David Sachs, who is dressed Ninja-fashion entirely in black, including Kevlar vest and boonie hat. Moonlight casts spectral shadows across sage, tamarisk and gray desert floor. Two other pairs of railroad police, and a German shepherd, are concealed along the tracks.

Half a mile away, another special agent, Darrell Brown, an infrared night scope slung around his neck, has taken position atop a railroad signal mast. He’s wearing camouflage pants and a black T-shirt with POLICE stenciled on it in white letters.

Over Brown’s radio, the not-quite-human voice of an automated sensor reports that the coming freight’s brakes are operational:

“Two-forty-three. No defects.”

Suddenly, there’s the flash of the train’s headlamp and the squealing and wheezing of steel wheels as the 120-car freight starts up the grade.

The thieves — if there are any — will be scrunched down in “tubs” between the containers and the sides of the freight cars, invisible to anybody at ground level. But from his vantage point atop the signal mast, Brown will be able to peer directly into the tubs as the train rumbles by beneath him.

“They are difficult to see,” says Brown, a muscular veteran of two decades with the railroad. “The train crews seldom spot them, unless they’re alerted by a crew on a train going the other direction.”

As the six diesels begin the laborious climb uphill, the container train slows to 8 mph.

Often, Brown says, robbers meet the train at Yermo, five miles east of Barstow. During the day, they hole up in abandoned buildings. As night falls, they flit across the railroad yard and hop aboard eastbound double stackers carrying goods from the Pacific Rim. They travel light. Burglar tools and a quart of water. They wear several layers of clothing to cushion the spine-jarring bumping of the freight cars.

Using lengths of pipe, bolt cutters or hacksaws, they cut the seals on the containers and quickly rummage through boxes, looking for electronics, athletic shoes or expensive clothing that can be turned over for quick money.

“We’ve tried different kinds of locks, but they always figure out how to gain entrance,” Brown says.

It’s so hit-and-miss, Brown says, thieves sometimes miss valuable electronics because they can’t identify the products.

“The people hired by the gang bosses in Los Angeles to rob the trains are very low-level,” Brown says. “They’re like the drug cartel mules, guys with nothing to lose. They’re given a few hundred dollars, and the gangs look on them as being expendable.”

The thieves hop off trains and drag the loot into the desert, sometimes for half a mile, cover it with brush, and wait for a truck.

In the past month, thieves have been hitting the trains hard. One double stacker arrived in Las Vegas recently with 24 containers broached.

“We have to keep hammering ’em, or the loss of merchandise would be staggering,” says Union Pacific special agent Paul Kunze, whose usual beat is the stretch of track between Yermo and Las Vegas.

“It used to be they’d steal cigarettes, tires and booze,” Kunze says, “and the pros could smell the merchandise in the boxcars.”

Now it’s electronics and expensive clothing, although thieves also have taken outboard motors and even washing machines.

The jackpot for a container burglar, however, is finding a consignment of Nike Air Jordans, Kunze says.

A 31-year veteran of the railroad police, Kunze, at 56, is fit and athletic, and takes pride in being able to pursue fleeing suspects over miles of desert. Last month, during the pursuit of a train robber flushed from a double stacker, Kunze chased the 20-year-old suspect four miles through deep sand, gullies and thorn brush, crossing Interstate 15 twice, until a California Highway Patrol helicopter helped make the arrest. He says that afterward he had to pull the thorns out of his leg with a pair of pliers.

“In the last year we’ve arrested about 50 train robbers,” Kunze says.

Most, however, escape into the desert.

“There’s no water out there, and I don’t know how they survive,” Duncan says. “But they’re very tough.”

Even when captured, few of the thieves have been prosecuted, Brown says.

“They’re just deported. It’s impossible to get them to inform on the higher-ups. They know it’d be a death sentence for them.”

Often, the agents simply “light up” a train from their perches atop the signal masts by shining lights into the tubs, that way preventing the thieves from cracking into the boxes. “It denies them a payday,” Kunze says. “They took a long, uncomfortable, thirsty ride into the desert for nothing.”

Another tactic has been to ambush the trucks. Railroad police have recovered a couple of rental vans that got stuck in the sand when they were driven out to load stolen merchandise, Kunze says.

Railroad policing is relatively new to the Mojave. In 1994, what had been the Mojave National Scenic Area, administered by the Bureau of Land Management, became the Mojave National Preserve, run by the National Park Service.

Funding in the first year was exactly $1, because of a squabble in Washington about the amount of off-road use. The park staff got laid off, and for a year no rangers patrolled the back country.

“During that year, the stretch of track between Kelso and Nipton was littered with cartons and all sorts of merchandise the robbers couldn’t fit into their vans,” Duncan says.

“All of this clothing and jewelry started turning up along the track,” says Linda Darryl, manager of the Nipton Store. “Everybody around here was wearing T-shirts and bracelets. We thought it was strange that this stuff was falling out of locked containers.”

Rural mail carrier Mike Smith has retrieved and returned truckloads of merchandise scattered by the thieves. One park ranger found (and returned) thousands of cartons of cigarettes.

While Brown and the Union Pacific police mainly are concerned about protecting shipments, Duncan’s primary concern is the safety of park visitors.

“The park service isn’t happy about the idea of criminals out on the road trying to hitch a ride,” Duncan says. “That’s not considered part of the National Park experience.”

When Brown or Kunze spot thieves in the tubs, they radio the train’s engineer to stop at a point where agents are concealed, and the chase begins. “We’ve had some success using dogs,” says Brown, who handles one called Bet.

This night, Max and Bet are still sluggish after the four-hour ride from Vegas. “Achtung!” shouts Max’s handler, Steve Stevenson (all commands to the dogs are given in German). “You need to get them fired up.” For practice, Max, on command, attacks a padded sleeve worn by a visitor.

Kunze, who now also has clambered up on a signal mast, worries about his Navy Seal wristwatch, which gives off a faint greenish glow, and about the luminous sights on his 9mm pistol. “Can they see that? I might have to go back to a Timex.”

“I really don’t have animosity for most of these guys,” Kunze says of the train robbers. “A few of them are hardened felons. Most of them are just poor Mexicans. When I catch them they say, “I’m sorry. I’m just trying to feed my family.” But I want to catch them. With me, it’s pride. I don’t want them to outrun me.”

The former Nebraska football player always carries a bottle of water during these foot races. And a radio. He has called away section crews from their work repairing track to join the chase. When one suspect saw himself surrounded by three Navajo gandy dancers armed with pick handles, he surrendered, and then fainted, Kunze says.

Once, during a pursuit, Kunze commandeered a squad of Marine Corps ultra-marathoners, who happened to be running alongside the tracks, to help make a collar.

Kunze also says he has to change strategy as thieves change tactics.

“We’ve been tracking them to where they stash the loot. Now they’re starting to brush out their footprints.

“They use one kind of signal marker, we catch on to it, and they start using something else. The trucks used to pick up the loot right away. Now they may bury it and pick it up weeks later.

“It’s cowboys and Indians out here.”


(c) 1998, The Orange County Register (Santa Ana, Calif.).

Visit the Register on the World Wide Web at http://www.ocregister.com/

Distributed by Knight Ridder/Tribune Information Services.

The TruTh and Facts’ about Lt. Colonel Michael A. Aquino, Ph.D. These things can be verified if you just take the time to research. This deals with Satanism in General and the crimes with clear connections to Satanism. Sure a few were Mentally Ill from the start but that Satanic link is undeniable. I will say all Faiths have their own crimes connected with them. I have links included you can look up. This is about Satanism though, and in particular Dr. Aquino.

Lt. Colonel Michael A. Aquino, Ph.D. Lt. Colonel, Psychological Operations (Ret.) (I personally knew him but in ways I can’t disclose for obvious reasons. He has bragged of doing a Satanic Ritual in the exact same place that the Occultic Nazi, Heinrich Himmler did.

https://en.wikipedia.org/wiki/Nazism_and_occultism
Dr. Michael A. Aquino, Ph.D. United States Army, he also worked with the CIA and The National Security Agency. (NSA) Michael Aquino – Church of Satan, Temple of Set

Michael Aquino accused of multiple acts of child abuse, child sex slavery, child abuse, torture and psychological human experimentation through the years. This was mainly a big story during the 1980s. There have been multiple witnesses who have ended up dead or are labeled as crazy. These are facts! He has been implicated in multiple law suites as well as filing law suits himself to cover his family and reputation. Many believe that he is connected to the Washington call boy operation that was used as blackmail operations against people in high levels of government. It is also believed by many that he is possibly connected to the #Pizzagate child sex ring.

There are tons of conspiracy theories, false stories and disinformation surrounding Dr. Michael A. Aquino but there is some truth and facts, you be the judge. One day the Truth will reveal itself.

Michael Aquino, repeatedly implicated in Satanic ritual abuse (SRA), child abuse and contemporary iterations of MKULTRA derived trauma-based programming, by numerous victims over the decades, from child victims of SRA to adult victims of Project Monarch. These allegations were never proven definitively, however remain a compelling topic when one objectively examines the testimony of those involved and the circumstantial evidence, and considering with the manner in which the case was handled (or mishandled) by the courts.

It’s worth noting that he (Michael Aquino) had been heavily involved in military PSYOPS and the NSA, in acts of torture under CIA’s Phoenix Program, and had authored the pivotal ‘MindWar’ paper on the topic of psychological operations, propaganda and mass mind control including the usage of psychotronics to that effect.

As you are likely aware, he (Aquino) had founded the Temple of Set after a falling out with Church of Satan founder Anton LaVey. The Temple of Set is a left-hand path (e.g. black magic) oriented initiatory order with an added emphasis on theistic Satanism, while incorporating the Egyptian mysteries and Hermeticism into its ritual and doctrine.

You mention the Ordo Templi Orientis, however it should be noted the Temple of Set is not directly affiliated with the O.T.O. in any official capacity, nor can it trace any lineage through the O.T.O. itself, though it was indeed heavily influenced by the work of Aleister Crowley and his pioneering research into the Hermetic arts, Western esotericism and occultism in general, incorporated with LaVeyan Satanism into a theistic form of Satanic gnosis, embodied by the Egyptian god Set and the term xeper.
If you would like to learn more about Setianism and the Temple of Set, I will refer you to their website, along with this assortment of publications and documents I’ve taken the liberty of uploading here.

https://en.wikipedia.org/wiki/List_of_satanic_ritual_abuse_allegations
https://en.wikipedia.org/wiki/Satanic_ritual_abuse

.
Born in 1946, Michael Aquino was a military intelligence officer specializing in psychological warfare. In 1969 he joined Anton LaVey’s Church of Satan and rose rapidly through the group’s ranks. Him At His Satanic Church Here on Wikipedia
https://en.wikipedia.org/wiki/Temple_of_Set

The Church Of Satan Written by Michael Aquino Here

Basically This is is Resume:

TEMPLE OF SET WEBSITE:
https://www.xeper.org/
https://xeper.org/maquino/
Email: Xeper@aol.com

It is said on Wikipedia that The Temple of Set also setup its own private intranet for private communication around the world. Wikipedia:
The Temple first registered a website in 1997, the same year as the Church of Satan. It would also establish its own intranet, allowing for communication between Setians in different parts of the world.

He is also affiliated with this occult website:
http://khprvod.org/

Dr. Michael A. Aquino, Ph.D, still remains active online and comments on many Youtube videos.

Google: https://plus.google.com/115942221653091196019/about/p/pub
Youtube: https://www.youtube.com/user/MAAquino/videos
Twitter: https://twitter.com/templeofset

One of His Websites:
http://www.rachane.org/

Timeline:

1967 – Michael Aquino began a two-year tour of duty in Vietnam, taking part in the infamous Phoenix Program. The Phoenix Program was an assassination/torture/terror operation that was initiated by the CIA, with the aim of ‘neutralizing’ the civilian infrastructure that supported the Viet Cong insurgency in South Vietnam. It was a terrifying ‘final solution’ that blatantly violated the Geneva Conventions. Targets for assassination included VC tax collectors, supply officers, political cadre, local military officials, and suspected sympathizers. However, ‘faulty intelligence’ more often than not led to the murder of innocent civilians, even young children. Sometimes orders were even given to kill US military personnel who were considered security risks. In 1971, William Colby, head of CIA in Vietnam at the time, later testified that the number killed was 20,857, while South Vietnamese government figures claimed it was 40,994 dead. This murderous psyop program had the effect of creating legions of cold-blooded psychopathic killers who would return home to the USA as completely different people than when they left. Many of them would become involved in satanism during or after their involvement in the Phoenix Program. And Michael Aquino was there to lead them into it. Soon after these killers started coming home, there began a steady rise in horrific serial murders with satanic undertones that centered around the southern California area (where Michael Aquino has always lived).

1980 – According to sworn testimony given before a US Senate in later years, MKULTRA mind-control victim Cathy O’Brien claimed that she was programmed at Fort Campbell, Kentucky, in 1980 by Lt. Col. Michael Aquino of the US Army. She stated that Aquino used barbaric trauma techniques on both her daughter Kelly and herself that involved NASA technology. Cathy O’Brien claimed that she was a ‘presidential model’ Monarch sex slave, meaning that she was specially programmed to cater to the sexual perversions of the highest-ranking politicians in the USA. She stated that during her time as a sex slave (which started as a child), she serviced a number of well-known politicians, including both Bill and Hilary Clinton, Ronald Reagan, Pierre Trudeau, Brian Mulroney, George H.W. Bush, Dick Cheney, Governors Lamar Alexander and Richard Thornburgh, Bill Bennett, Senator Patrick Leahy, Senator Robert Byrd (who she says was her handler) and Arlen Spector. O’Brien eventually gave testimony before the US Senate regarding the events she was forced to go through, and although she named her perpetrators, not one of them dared to challenge her or accuse her of slander.

1982 (September 5) – Twelve-year-old Johnny Gosch was abducted from a shopping mall parking lot in West Des Moines, Iowa, while doing his early-morning paper route, never to be seen again. Years later, during an interview with private investigator Ted Gunderson, child abductee and sex slave victim Paul Bonacci revealed that, as a child, he was directly involved in Gosch’s abduction, having acted as a lure to draw Gosch into the hands of his pedophile abductors. According to Bonacci, the abduction was ordered by Lt. Col. Michael Aquino, who later picked Gosch up at a farmhouse he was being held at and delivered him to a buyer in Colorado. For years, both boys were used for the pedophiliac pleasures of high-ranking government officials.

1985 – Allegations of ritual abuse at the Jubilation Day Care Center at Fort Bragg erupted when several children reported being sexually abused by a number of people at the day care center and several other locations, including at least two churches. Lt. Col. Michael Aquino was identified as having been present at one of those churches.

1986 (November) – Allegations emerged regarding sexual abuse being perpetrated at the US Army’s Presidio Child Development Center in San Francisco. Within a year, at least 60 victims were identified, all between the ages of three and seven. Victims told of being taken to private homes to be abused, and at least three houses were positively identified, one of them being Aquino’s. They also described being urinated and defecated upon, and being forced to ingest urine and feces. Irrefutable medical evidence documented the fact that these children were sexually abused, including five who had contracted chlamydia, and many others who showed clear signs of anal and genital trauma consistent with violent penetration. Even before the abuse was exposed, the children were exhibiting radical changes in behavior, including temper outbursts, sudden mood shifts, and poor impulse control. Both Lt. Col Michael Aquino and his satanist wife Lilith were positively identified by victims as two of the perpetrators. At least one victim was able to positively identify Aquino’s home and describe with uncanny accuracy the distinctively satanic interior of the house. Only one person was ever charged for the abuse of one child, and these charges were dismissed three months later.

1987 (August 14) – As part of the Presidio investigation, a search warrant was served on the residence of Lt. Col. Michael Aquino and his wife Lilith, and numerous videotapes, photographs, photo albums, photographic negatives, cassette tapes, and address books were confiscated. Also observed during the search was what appeared to be a soundproof room that may have been used as a torture chamber.

1987 (November) – The US Army received allegations of child abuse at fifteen of its day care centers and several elementary schools. There were also at least two other cases at Air Force day care centers, and another at a center run by the US Navy. In addition to these, a special team of experts was sent to Panama to help determine whether as many as ten children at a Department of Defense elementary school had been molested and possibly infected with AIDS. Another case also emerged at a US-run facility in West Germany. These cases occurred at some of the most esteemed military bases in the country, including Fort Dix, Fort Leavenworth, Fort Jackson, and West Point. In the West Point case alone, by the end of the year, fifty children had been interviewed by investigators. There were reports of satanic acts, animal sacrifices, and cult-like behavior among the abusers. An investigation led by former US Attorney Rudolph Giuliani produced no federal grand jury indictments. His investigation concluded that only one or two children were abused, in spite of all the evidence to the contrary.

1988 (November 4) – The FBI raided the Franklin Credit Union in Omaha, Nebraska, run by a man named Lawrence King. In the process, they uncovered evidence relating to drug running, pedophilia, pornography, and satanic activity involving prominent individuals in the local community and beyond. Eighty children eventually came forward and identified many of those involved, including the chief of police (who impregnated one of the victims), a local newspaper publisher, a former vice squad officer, a judge, and others. The children described satanic ceremonies involving human and animal sacrifice. Evidence that came out showed that children were abducted from shopping mall parking lots and auctioned off in Las Vegas and Toronto. Airplanes owned by the DEA were often used to transport the children. Other children were removed from orphanages and foster homes and taken to Washington, DC to take part in sex orgies with dignitaries, congressmen, and other high-ranking public officials. A number of the child victims testified that George Bush Sr. was one of the people who was often seen at these parties. Photographs were being surreptitiously taken at these orgies by the child traffickers for blackmail purposes. There was also evidence of ties to mind-control programs being conducted at Offutt Air Force Base near Omaha, Nebraska, where the head of the Strategic Air Command (SAC) is located. Minot, North Dakota, the site of another SAC base, is an area that has satanic cults operating in it that have been directly tied to the Son of Sam and Manson murders, among others.

There was no follow-up investigation when these findings were made. The US national media didn't report on the story. Local media focused only on discrediting the witnesses. The FBI and other law enforcement officers harassed and discredited victims in the aftermath, causing all but two of them – Paul Bonacci and Alisha Owen – to recant their testimonies. The child victims, rather than the perpetrators, were thrown in prison. Alisha Owen spent more time in solitary confinement than any other woman in the history of the Nebraska penal system. She received a sentence of 9 to 25 years for allegedly committing perjury, which is ten years longer than the sentence given to Lawrence King for looting his Franklin Credit Union of $40 million. This heavy sentence imposed on Owen was meant to serve as a warning to all other victims who might think of talking.

The key investigator in the case, Gary Caradori, was killed when his private plane mysteriously exploded in mid-air while he was en route to deliver evidence to Senator Loran Schmit. His briefcase went missing from the wreckage. This was the first of many deaths of people attempting to uncover this politically connected satanic cult/sex slave/drug trafficking ring. The Discovery Channel made a documentary about this case, entitled 'Conspiracy of Silence', but at the last moment a group of unidentified US Congressmen paid them $500,000 not to air it, and all copies were ordered destroyed (though one copy survived). Republican senator John DeCamp, who was on the investigative committee, wrote a book exposing the case, titled The Franklin Cover-Up.

In 1999 (see below), Paul Bonacci, who had been kept as a child sex slave by Lawrence King, positively identified Lt. Col. Michael Aquino as an associate of King, who he said was known to the children only as ‘the Colonel’. Rusty Nelson, King’s personal photographer, also identified Aquino as the man that he once saw King give a briefcase full of money and bearer bonds to, and who King had told him was involved in the Contra gun and cocaine trafficking operation being run by George Bush Sr. and Lt. Col. Oliver North.

Michael Aquino has also been linked to Offutt Air Force Base, the Strategic Air Command post near Omaha that was implicated in the Franklin Committee's investigation. As noted above, Aquino was also claimed to have ordered the abduction of Johnny Gosch, the Des Moines, Iowa paperboy whose kidnapping and disappearance are described in the 1982 entry.

1989 (May) – Lt. Col. Michael Aquino was again questioned in connection with child abuse investigations. This time, at least five children in three cities were making the accusations. The children had seen Aquino in newspaper and television coverage of the Presidio case and immediately recognized him as one of their abusers. The children were from Ukiah, Santa Rosa, and Fort Bragg.

1990 (August 31) – Lt. Col. Michael Aquino was processed out of the Army after being investigated for satanic ritual child abuse in the Presidio case. Although never formally charged, according to court documents, Aquino was 'titled' in a Report of Investigation by the Army's Criminal Investigative Division (CID) for "indecent acts with a child, sodomy, conspiracy, kidnapping, and false swearing". The child abuse charges remained against Aquino because, according to the CID, the evidence of alibi offered by Aquino "was not persuasive." Aquino has since denied that he was ever processed out of the Army and even claims that he was selected as one of the Army's first Space Intelligence Officers during this same year, and was stationed at Cheyenne Mountain for four years of active duty before retiring. There is no evidence that this is true.

1991 – (Although this entry isn't directly connected to Michael Aquino, it directly relates to the cover-up of events that he and his pedophile cronies have been involved in.) After being accused by their daughter of molesting her as a child, Peter and Pamela Freyd established the False Memory Syndrome Foundation (FMSF). The original board members included doctors who were directly involved in MKULTRA mind-control programs, such as expert hypnotist Martin Orne and Dr. Louis Jolyon West, as well as many others who have been accused of child sexual abuse. One board member, Richard Ofshe, is an alleged expert on coercive persuasion techniques, and another, Margaret Singer, was a government expert on cults and cult tactics. Elizabeth Loftus is an expert on memory. The mandate of the FMSF has always been to discredit the recovered memories of people who report having been traumatically abused as children – usually by claiming that the child's therapist has implanted false memories – and to develop legal defenses for protecting pedophiles in court. They have resorted to lies, intimidation, character assassination, legal tactics, and coercing victims to recant their claims and sue their therapists for large settlements. The FMSF has routinely argued in court cases that satanic ritual abuse (SRA) and multiple personality disorder (MPD) don't exist, and the organization and its members have specifically targeted any therapists who claim that they do. This defense strategy, which has proven to be quite successful, has allowed victims of trauma-based mind-control and ritual abuse to be completely discredited, while allowing their perpetrators to continue their activities unimpeded.

At about the time that the FMSF was established, a number of mind-control and ritual abuse victims were starting to remember being involved in these events, and this threatened to expose the perpetrators, so it was important that a means to discredit them was put in place.

The False Memory Syndrome Foundation was created by known pedophiles, and its board was fortified with CIA mind-control experts who cut their teeth on MKULTRA victims. Many of them are known to be closely associated with Michael Aquino. This organization of pedophiles and mind-control experts has been instrumental in covering for Aquino and other pedophiles while destroying the lives and careers of their victims, the victims' families, and their therapists, even long after these pedophiles performed their vile acts against them.

Also in 1991, Lieutenant Colonel Michael Aquino, formerly of the U.S. Army Reserves, filed suit under the Privacy Act of 1974, 5 U.S.C. § 552a (1988), against the Secretary of the Army, seeking to amend an Army report of a criminal investigation about him and to recover damages caused by inaccuracies in the report. He also sued under the Administrative Procedure Act, 5 U.S.C. § 701, et seq. (1988), to review the Secretary's refusal to amend the report. As the appeals court summarized: "The district court entered summary judgment for the Secretary, concluding that criminal investigatory files are exempt from the provisions of the Privacy Act that were invoked by Aquino and that the Secretary's decision not to amend was not arbitrary or capricious. 768 F. Supp. 529. Finding no reversible error, we affirm."

Aquino sued the Army in part because it refused to remove his name from the titling block or amend its report stating that he was the subject of an investigation for sexual abuse and related crimes. The court document notes that several members of the Army thought there was probable cause to "title" Aquino with the offenses of indecent acts with a child, sodomy, conspiracy, kidnapping, and false swearing. Aquino had tried to charge a Captain, the father who reported his child's alleged abuse and whose child's name appears in the victim block of the report, with "conduct unbecoming an officer." Because of that, Aquino was titled for false swearing, in addition to "indecent acts with a child, sodomy, conspiracy, and kidnapping." He also filed complaints against the SFPD, therapists involved in the case, journalists, and CID investigating officers.

The Court Documents Can Be Seen Here:
http://law.justia.com/cases/federal/appellate-courts/F2/957/139/2044/

1995 – Diana Napolis was a Child Protection Services investigator in San Diego who was alarmed by the increasing number of children who were reporting satanic ritual abuse, starting as far back as the mid-1980s. Napolis went undercover online in 1995 and approached Aquino and several others who were associated with him, while also posting information and evidence relating to these crimes and these people's involvement in them. In response, Aquino and his associates (several of them from the False Memory Syndrome Foundation) cyber-stalked Napolis for five years and finally tracked her down in 2000, thereby discovering her real identity. At this point, Napolis' efforts to expose these people were defeated, with Aquino and his associates using their power and influence to portray themselves as the victims, accusing her of cyber-stalking, and assassinating her character both online and through the media. Napolis was also targeted with directed-energy weapons (V2K) and set up to appear mentally unstable, with claims that she was stalking various celebrities. This resulted in her spending a year in jail and several more months in a mental facility, and eventually being forced to quit her job. The character assassination continued against her, with someone claiming to be Napolis posting insane ravings on the internet in order to make her appear crazy.

A reporter at the San Diego Union Tribune was working for Aquino and his cronies by painting Napolis in a bad light in news reports, accusing her of cyber-stalking, making threats, and acting crazy. Aquino was publicly complaining that she was causing serious problems for him and his fellow pedophiles. Nonetheless, the article at the first link below clearly reveals the one-sided reporting on this story by the San Diego Union Tribune and the fact that if anyone was being cyber-stalked, it was Napolis. The second link below is Napolis’ far more professional and believable response to the article:

http://www.uniontrib.com/news/uniontrib/sun/currents/news_mz1c24curio.html

http://www.konformist.com/2002/curio-tribune2.htm

Discredited Tourette Therapist Leslie Packer is a Temple of Set “Bodyguard”

Going after Napolis so publicly served several agendas. First, it was a public warning to anyone else who might attempt to expose the increasing satanic ritual abuse that was going on and the people behind it. Second, it acted to deflate the mounting satanic ritual abuse scare, making it appear to be nothing more than the ravings of delusional people. Third, it ensured that stealing other people's children using child protection services could continue. Fourth, with the help of the FMSF, it made children's claims of molestation and satanic ritual abuse out to be nothing more than false memories.

Some of the articles that I found posted online under Diana Napolis' name do make her sound a bit crazy; however, it is NOT known whether they were really posted by the real Diana Napolis.

Modification of The Court Order Can Be Found Here:
http://newsgroups.derkeiler.com/Archive/Misc/misc.legal/2008-06/msg00403.html

In 2008, Diana Napolis filed a lawsuit against Michael A. Aquino and his affiliates. The court documents can be seen here:
https://www.scribd.com/doc/4981526/Diana-Napolis-vs-Michael-Aquino-lawsuit-2008

Satanic cult leader, Michael Aquino’s harassment and why…
Posted by Karen Jones on February 20, 1999
http://www.napanet.net/~moiraj/wwwboard/messages/2374.html
http://www.rumormillnews.com/cgi-bin/archive.cgi?noframes%3Bread=4435

1999 (February 5) – In US District Court in Lincoln, Nebraska, a hearing was held in the matter of Paul A. Bonacci v. Lawrence E. King, a civil action in which Bonacci charged that he had been ritualistically abused by King as part of a nationwide pedophile ring that was linked to powerful political figures in Washington and to elements of the US military and intelligence agencies.

During the hearing, Noreen Gosch, whose twelve-year-old son Johnny had been abducted in 1982, provided the court with sworn testimony linking US Army Lt. Col. Michael Aquino to the nationwide pedophile ring. She stated:

“Well, then there was a man by the name of Michael Aquino. He was in the military. He had top Pentagon clearances. He was a pedophile. He was a Satanist. He’s founded the Temple of Set. And he was a close friend of Anton LaVey. The two of them were very active in ritualistic sexual abuse. And they deferred funding from this government program to use [in] this experimentation on children.
Where they deliberately split off the personalities of these children into multiples, so that when they’re questioned or put under oath or questioned under lie detector, that unless the operator knows how to question a multiple-personality disorder, they turn up with no evidence.
They used these kids to sexually compromise politicians or anyone else they wish to have control of. This sounds so far out and so bizarre I had trouble accepting it in the beginning myself until I was presented with the data. We have the proof. In black and white.”

Paul Bonacci, who was a victim of this nationwide pedophile crime syndicate, subsequently identified Aquino as the man who ordered the kidnapping of Johnny Gosch.

Three weeks after the hearing, on February 27, Judge Warren K. Urbom ordered Lawrence King to pay $1 million in damages to Paul Bonacci.

* * *
The question here isn’t whether Michael Aquino is guilty of being one of the world’s most despicable pedophiles and mind-control programmers ever to crawl out of a toilet, which the evidence makes quite clear. Rather, the question is whether there is a conspiracy against him by all of these people (including young children) making these allegations against him over the years, and for what reason? After all, this is exactly what he claims to be the case, and this is how he has attempted to excuse these many claims against him.

Conspiracy or not, it certainly is quite unusual for his name to come up in so many cases of child abuse and sex slavery. I counted up to 200 child victims in the events listed above, all of whom would have had to be carefully coached to lie without being tripped up by a more intelligent adult during questioning. Then there are all the doctors who were involved in examining these children and who claimed that many of them had definite physical signs of having been sexually violated; they would also have to be in on it. There are also the children's parents, who would have had to have been either very easily deceived by their own children or in on the conspiracy as well. And of course, there are all of the investigators, lawyers, judges, etc. who supposedly conspired against Aquino in the Presidio case.

SOURCE:
http://exposinginfragard.blogspot.com/2014/02/the-case-against-michael-aquino-satanic.html

2010 – Letter to U.S. Attorney General Eric Holder and First Lady Michelle Obama
From:
America’s Bureau of Investigation
and Loving Intervention for our Nation’s Children
Douglas R. Millar
PO Box 464. Santa Rosa, CA 95404
(707) 396-8215

In this letter they request a Federal Grand Jury investigation into Army/Special Forces/CIA Lt. Col. Michael Angelo ("Michael the Angel") Aquino and his suspected criminal acts of Satanic ritual abuse (SRA).

You Can See The Letter Written By Douglas R. Millar Here:

Documented Evidence
Police Reports, Requests For Investigation and Other Documented Evidence Here:

YOUTUBE LINKS:

MICHAEL AQUINO ON THE OPRAH SHOW:

Dr. Michael A. Aquino and Lilith Aquino Interview

Michael Aquino Ted Gunderson and a Jesuit on Geraldo

Michael Aquino and The Disappearance of Kevin Collins

Rusty Nelson testimony – Lt. Col. Michael Aquino

MindWar paper by NSA Gen. Michael Aquino

Doug Millar – Michael Aquino & Satanic Sex Ritual Child Abuse

Michael Aquino – Satanisun

RADIO INTERVIEW WITH MICHAEL AQUINO, Aug 3, 2016
In this video he confirms government human experimentation such as MKUltra, but he does not admit to being a part of it.

Satan’s Soldiers: Devil Worship in the US Military

WANTED DEAD OR ALIVE: U.S. Army Lt. Col. Michael Aquino

The Devil’s Advocate: An Interview with Dr. Michael Aquino
http://disinfo.com/2013/09/devils-advocate-interview-dr-michael-aquino/

Satanic Subversion of the U.S. Military
http://www.abeldanger.net/2015/11/us-armypentagonnsas-six-degrees-of.html

Michael Aquino
http://www.konformist.com/2001/aquino.htm

CHILD ABUSE AND THE AMERICAN GOVERNMENT
http://aangirfan.blogspot.com/2009/09/child-abuse-and-american-government.html

Article on Lawrence E. King Jr: Overachiever (The Franklin Coverup)

Lawrence E. King Jr: Overachiever

Satanic Ritual Abuse 2016: Child Trafficking/ILLUMINATI-FREEMASON Ritual Abuse

June 21, 1974, The Washington Post, "Behind Psychological Assessment's Door, A CIA Operation," by Laurence Stern
https://www.diigo.com/item/note/27gb8/oq82

***NOTE*** I do NOT agree with everything in this video, as a lot of it is part of the sensationalized "Satanic Panic" of the 1980s, much of which has been debunked. HOWEVER, there are clear links between Satanists and murder and other crimes.

Here are some great links. This first one is a book on "Black Metal" (satanic metal), which I've read; it contains many interviews with the people involved in that scene. Look at the case of Per Ohlin ("Dead"), a very mentally ill man who committed suicide; he often wrote about it and even felt he didn't have normal blood going through his veins. Read about how the Satanist, Nazi, and church-burner Varg Vikernes ("Count Grishnackh") killed fellow Satanist Oystein Aarseth ("Euronymous"), who ran a satanic shop called Helvete, which I believe means "Hell". Satanism, Nazism, and church burnings are common themes in this scene. That is just straight-out fact.

Other Satanic Murders.

Here is a satanic group that openly admits to being okay with human sacrifice, and if you go to their site you can download their PDFs, which are disturbing.

The case of Adolfo Constanzo here is especially brutal. He was more into Palo Mayombe, though, but still evil in origin.

Murray

Pagan Pioneers:  Founders, Elders, Leaders and Others

Michael A. Aquino

(The Temple of Set)

Written and compiled by George Knowles.

Michael A. Aquino is an American-born occultist, satanist, and the author of The Book of Coming Forth By Night. He is also known as the 13th Baron of Rachane (Clan Campbell) in Scotland, UK. A former Lt. Colonel in the United States Army, Aquino was a specialist in psychological warfare operations during the Vietnam War, but he is perhaps best known as a High Priest of the Church of Satan (CoS), founded by Anton Szandor LaVey in 1966, and as the founder and High Priest of the Temple of Set (ToS) in 1975, which today is one of the largest "neo-satanic" Churches in the USA.

Aquino was born in San Francisco on the 16th October 1946. His father, Michael Aquino Sr., had been a Sergeant in Patton's 3rd Army during World War II and, serving with distinction, was decorated with a Purple Heart for wounds received in combat. His mother, Marian Dorothy Elisabeth Ford (affectionately known as Betty Ford), was a child prodigy: after only three years of early formal education, at the age of just fourteen she was enrolled as one of the youngest students ever admitted to Stanford University. Three years later she completed a B.A. (Hon) degree in English, the youngest ever to receive such an honour at that time.

Betty Ford c. 1920, and later c. 1980

Michael A. Aquino was raised and took his early education in Santa Barbara, California, graduating from Santa Barbara High School in 1964. He then enrolled at the University of California (1964-1968), earning a B.A. degree in Political Science, and returned later (1974-1976) to earn an M.A. degree. In 1968 he joined the Army as an Intelligence Officer specialising in Psychological Warfare. The following year, while on leave from training to marry his first wife Janet (with whom he had a son, Dorien), he joined the newly created Church of Satan (CoS), founded by Anton Szandor LaVey in 1966, but soon after, in 1970, he left on a tour of duty in Vietnam. On his return to the United States in 1971, he resumed his association with the CoS and was ordained a High Priest, after which he established his own core group (termed a grotto) that met and practised at his home in Santa Barbara.

2nd Lieutenant Michael Aquino (circa. 1968) – Janet Aquino as the Egyptian Goddess Nepthys (circa. 1970)

As a High Priest of the CoS, Aquino quickly rose to a position of prominence, but he soon grew dissatisfied with LaVey's administrative leadership and philosophical approach to Satanism as a true religion. In 1972, together with other disillusioned members, Aquino resigned from the CoS and was joined by Lilith Sinclair, a High Priestess from New York who later became his second wife.

Lilith Sinclair

Over the next two years, through ritual invocation and meditation, Aquino claims to have received a communication from Satan himself in the guise of Set, the ancient Egyptian deity, who inspired him to write The Book of Coming Forth by Night and to found a new Church in his name which would supersede the CoS. As a result, in 1975 a new Church called the Temple of Set was founded in Santa Barbara and formally incorporated as a non-profit organization with tax-exempt status in California. Today the Temple of Set is considered the leading Satanic Church in the United States.

In 1976 Aquino returned to academia to finish his doctoral program at the University of California, earning a PhD in Political Science in 1980 with a dissertation on "The Neutron Bomb." He then took a position as an adjunct professor of Political Science at Golden Gate University in San Francisco. It was shortly after this, during the mid-1980s, that rumours of satanic child abuse began to surface in connection with a day-care centre at the Presidio Army Base in California, a base where Aquino had once been assigned. When the media picked up on Aquino's name and his active association with army Psychological Warfare Operations, and then privately with the Church of Satan and the Temple of Set, the whole thing became a media witch-hunt leading to over a decade of personal persecution.

It is not my intention in this brief to argue the guilt or innocence of such allegations against Aquino; for that, the reader should conduct their own research. However, it is fair to report, after months of research and scrolling through reams and reams of some of the most heartbreaking allegations levelled against him, that despite numerous and exhaustive investigations conducted by military and civilian law enforcement agencies, including the CIA and FBI, no proof of any wrongdoing has ever been found and no formal charges have ever been brought against him.

In 1985 Aquino's devoted mother 'Betty' died of cancer in San Francisco. Betty, who had steadfastly supported her son through all his endeavours, including the child-abuse allegations, was also a High Priestess in his Temple of Set. On her death she left him a $3.2 million estate, which included a house leased by Project Care for Children and the Marin County Child Abuse Council (I have so far found no association here with the earlier allegations). A year later, in 1986, Aquino married his second wife Lilith (formerly Patricia Sinclair – dob. 21st April 1942), who had been a prominent High Priestess of the CoS and the leader of a grotto in New York before resigning with Aquino in 1972.

Michael & Lilith Aquino

Despite being dogged by repeated allegations of child abuse, and most probably because of his continued association with the Temple of Set, Aquino continued his professional military career rising to the rank of Lt Colonel with Military Intelligence. Initially he was involved in military psychological operations (“psy-ops”), but he also qualified as a Special-Forces officer (Green Berets), as a Civil Affairs officer and as a Defence Attaché.

In addition Aquino was a graduate of the Industrial College of the Armed Forces, the Command and General Staff College, the National Defence University, the Defence Intelligence College, the US Army Space Institute and the US State Departments’ Foreign Service Institute. His decorations are equally impressive and include: the Bronze Star, the Army Commendation Medal (3 awards), the Air Medal, the Special Forces Tab, the Parachutist Badge, the Republic of Vietnam Gallantry Cross and the long service Meritorious Service Medal.

Lt. Colonel Michael A. Aquino

In 1990 at the end of his full-time 22-year contract of active duty, despite accusations that he was forced to leave due to all the previous allegations made against him, he continued his service as a part-time active USAR officer for another four years and was assigned to the Headquarters of the US Space Command with an above “Top Secret” clearance. He finally retired in 1994 with an unblemished distinguished record and remains in the Army reserve with the rank of Lt. Colonel (USAR-Retired).

The Barony of Rachane

In 2004 Aquino applied for and became the present caretaker of the Barony of Rachane in the County of Dumbarton, Argyllshire, Scotland, UK, to which he now holds legal title as the 13th Baron of Rachane, of Clan Campbell. The Coat-of-Arms and title of the Barony is now recorded in the Register of Sasines, Scotland, and is recognised on behalf of the Crown by the Lord Lyon King of Arms. In the United States it is also a Registered Trademark with both the California Secretary of State and the United States Patent and Trademark Office.

The Baroness and Baron of Rachane, Lilith and Michael A. Aquino – The Coat-of-Arms

The Abolition of Feudal Tenure etc. (Scotland) Act 2000 ended all the land-tenure aspects of the Scottish feudal system as of 28th November 2004. The effect upon baronies was to end their superior/vassal attachment to specific areas of land, while continuing and preserving them as titles in the Noblesse of Scotland. The present Baron and Baroness of Rachane, Michael and Lilith Aquino, have dedicated their Barony towards the charitable support of animal protection, rescue and welfare.

In 2007 a Fellowship of the Barony of Rachane was inaugurated to formally honour people of dignity, wisdom, enlightenment and accomplishment, such as is known to the Baron and Baroness. Fellows are presented with the Crest Badge of the Barony, which they hope will become a symbol of benevolence and goodwill, as was once a tradition in Scotland.

The Temple of Set (over-view)

The Temple of Set is a left-hand path initiatory occult Order founded by Michael A. Aquino and incorporated as a non-profit religious organization in the State of California in 1975. Initiates and members of the Order are known as "Setians."

Aquino had been a leading figure and High Priest in the Church of Satan (CoS), founded by Anton Szandor LaVey in 1966, but after administrative and philosophical disagreements with LaVey, Aquino resigned in 1972 together with other disillusioned members. Later, through ritual invocation and meditation, Aquino sought a new mandate from the "Prince of Darkness" in the guise of "Set," the Egyptian god of death and the underworld, who inspired him to write the book "The Book of Coming Forth by Night" and to found a new Church in his name as the "Temple of Set".

Anton Szandor LaVey

While based on a hierarchical structure similar to that of the CoS, the Temple of Set (ToS) appears to be a more intellectually evolved form of Satanism, in that the CoS uses the name of Satan only symbolically and does not really believe that he exists; they use his name merely to draw attention to and boost their hedonistic aims of self-indulgence and elitism. ToS members, however, believe that a real Satan exists in the form of "Set", whom they consider to be the true "Prince of Darkness". The worship of Set can be traced back to ancient times, with images dated to 3200 BC and inscriptions dated to 5000 BC.

In the ToS, the figure of Set is understood as a principle but is not worshipped as a god. He is considered a “role model” for initiates, a being totally apart from the objective universe. They consider him ageless and the only god with an independent existence. He is described as having given humanity through means of non-natural evolution a questioning intellect that sets humans apart from nature and gives us the possibility to attain divinity.

The philosophy of the Temple of Set is heavily influenced by the writings and rituals of Aleister Crowley's A.A. and the earlier Hermetic Order of the Golden Dawn. Emphasis through its degree structure is based on the individual's "Xeper". Xeper is a term used by the ToS to mean the true nature of "becoming" or "coming into being"; it teaches that the true self, or the essence of self, is immortal, and that through self-initiation and development, or Xeper, one gains the ability to align consciousness with this essence.

Aleister Crowley

There are several degrees within the ToS that indicate an individual's development and skill in magic, or black magic if you will. The ToS terms the progression through the degrees "recognitions", and because their philosophy prefers individuals to self-initiate, after a period of assessment they acknowledge a member's progress by granting the appropriate degree.

The degrees of the ToS are:

The first degree is that of Setian

The second degree is that of Adept

The third degree is that of High Priest/Priestess

The fourth degree is that of Magister/Magistra Templi

The fifth degree is that of Magus/Maga

The sixth and final degree is that of Ipsissimus/Ipsissima

A "Council of Nine" holds the main power of authority within the structure of the ToS and is responsible for appointing both the operating High Priest/Priestess, who acts as the public face of the Order, and an Executive Director, whose main task is to deal with administrative issues. Members of the Council of Nine are elected to office from the main body of third degree High Priests/Priestesses, or higher, for a term of nine years, with a new member being elected each year during the annual International Conclave.

On joining, a new initiate is provisionally admitted as a first degree Setian and receives a copy of Aquino’s The Book of Coming Forth By Night, their newsletter The Scroll of Set, and a set of encyclopaedias entitled The Jeweled Tablets of Set. This material contains all the organizational, philosophical and magical information they will need to qualify for full membership into the second degree, that of Adept. They also receive information on active “Pylons and Orders” (see more below) sponsored by the ToS with open access to their on-line forums and archives through which they can communicate with others should they have questions with which they may need help.

New members then have a two-year time limit to qualify for recognition as a second degree Adept. Certification and recognition is awarded by third degree members of the ToS, but only after demonstrating they have successfully mastered and applied the essential principles of magic, or black magic if you will. If such recognition is not received by that time, full membership is declined.

Once full membership as a second degree Adept is attained, most members are happy to remain in that degree and to continue to learn and advance their knowledge through the Order’s teachings in achieving individual self-realisation and self-development of free will (Xeper). Advancement to the third degree, that of High Priest/Priestess, involves much greater responsibilities towards the ToS, such as holding office in the ToS hierarchy and acting as official representatives.

The fourth degree, that of Magister/Magistra Templi, is granted by the reigning High Priest/Priestess in acknowledgement of an individual’s advancement in magical skills to such a level that they can found their own specialized “Schools of Magic” within the structure of active Pylons and Orders of the ToS.

Advancement to the fifth degree Magus/Maga can only be awarded by a unanimous decision of the Council of Nine. A fifth degree member has the power to define concepts affecting the philosophy of the ToS, such as the concept of Xeper as defined by Aquino in 1975. The final sixth degree Ipsissimus/Ipsissima, represents a Magus/Maga whose task is complete. Only a very few members of the Order achieve this position, although any fifth degree member can assume it based on his own assessment.

The Temple of Set does not tolerate docile new members and expects them to prove themselves capable as "cooperative philosophers and magicians". To demonstrate this, the ToS has loosely structured interest groups where specific themes and issues are addressed. Local and regional "Pylons" are meetings and seminars where discussions and magical work take place. These are hosted and led by second degree Adepts or higher, called Sentinels. There are also various Orders providing specific Schools of Magic and differing paths of initiation. These are led by a fourth degree Magister/Magistra Templi who will usually be the founder of that Order. The ToS also holds an annual Conclave where official business takes place, and where workshops are held in which members can take part in a wide variety of topics and activities. The annual Conclave usually lasts for about a week and is held in various global locations.

The ToS emphasizes that magic, or black magic if you will, can be as dangerous to a newcomer as volatile chemicals are to an inexperienced lab technician. They also stress that the practice of magic, black or otherwise, is not for unstable, immature, or emotionally weak-minded individuals, and that their teachings offer nothing that an enlightened, mature intellectual would regard as undignified, sadistic, criminal or depraved.

Resources:

http://www.churchofsatan.com/index.php

http://www.rachane.org/History.html

http://www.trapezoid.org/mission.html

Email Contact – Xeper@sbcglobal.net.

Plus far too many more to include here.

Written and compiled by George Knowles © 24th June 2016

Best Wishes and Blessed Be.


Pali: Hello, Boxcar the guitar guy. (: Does this affect your charity also? I cannot believe that the SPCA, as rich as it is, has the damn gall to cut funding as part of their new 2030 vision plan. What kind of a plan? They used to make billions by selling carcasses to slaughterhouses to be ground up into swine and poultry fodder, but this was covered up as a "conspiracy theory." I was living on a cattle ranch at the time. They have since been cut off from selling euthanized animals for processing at corporate hog and chicken processing plants. Their cannibalistic funding source has been a cause of neurological death.

Prion disease references attached to the message:

"Creutzfeldt-Jakob Disease and Other Prion Diseases" – Brian Appleby, M.D., Seattle Science Foundation, published Mar 14, 2019. http://www.seattlesciencefoundation.org – The Seattle Science Foundation, a private 501(c)(3) non-profit, is dedicated to international collaboration among physicians, scientists, technologists, engineers and educators. The Foundation's training facilities and extensive internet connectivity have been designed to foster improvements in health care through professional medical education, training, creative dialogue and innovation. NOTE: All archived recorded lectures are available for informational purposes only and are only eligible for self-claimed Category II credit. They are not intended to serve as, or be the basis of, a medical opinion, diagnosis, prognosis, or treatment for any particular patient.

"Dr. Valerie Sim: Combination Therapies for Human Prion Disease" – CJD Foundation (Creutzfeldt-Jakob Disease Foundation videos regarding prion disease).

"Occurrence and Transmission" | Creutzfeldt-Jakob Disease – CJD occurs worldwide, including the United States, at a rate of roughly 1 to 1.5 cases per 1 million population per year, although rates of up to two cases per million are not unusual. See also Creutzfeldt-Jakob Disease Deaths and Age-Adjusted Death Rate, United States. Whereas the majority of cases of CJD (about 85%) occur as sporadic disease, a smaller proportion of patients (5-15%) develop CJD because of inherited …

"Young-onset sporadic Creutzfeldt–Jakob disease with atypical phenotypic …" – Durjoy Lahiri. Sporadic Creutzfeldt–Jakob disease, with a mean survival of 6 months, is duly considered among the most fatal neurodegenerative diseases.

News Stories | Creutzfeldt-Jakob Disease Foundation – "If you're looking for the latest research news on Prion Disease, you've come to the right place."

Prion Disease Research, July 2019, mBio: "Chronic Wasting Disease in Cervids: Implications …" The reporting of this case as probable vCJD – a disease linked to …

----- Forwarded Message -----
From: Katherine D'Amato <kdamato@shanti.org>
To: Katherine D'Amato <kdamato@shanti.org>
Sent: Tuesday, September 24, 2019, 3:17:17 p.m. PDT
Subject: Important update about SPCA changes

Dear PAWS client,

Hope you are doing well. We are writing to let you know about changes in how PAWS and The San Francisco SPCA will work together going forward. SPCA remains a key partner of PAWS. SPCA will be changing how they provide services to PAWS clients starting October 1st, as part of their new 2030 vision plan. There are some important changes for PAWS clients:

· SPCA hospitals are not offering free Wellness Checks or free vaccines to PAWS clients at this time.
· SPCA will no longer be providing PAWS clients with an ongoing 25% discount at the SPCA hospitals. Instead, SPCA is offering a one-time 30% discount of up to $250/year.
· The SPCA will no longer be offering Helping Hand funds for diagnostics (such as x-rays).

We know these are significant changes and we are happy to talk through them with you. You can still use your PAWS funds for visits at either SPCA hospital (Pacific Heights or Mission), but free or discounted care will no longer be available at the SPCA hospitals. The voucher process will remain the same: If you go to the SPCA, you do not need to call PAWS to request a voucher. Please continue to call PAWS if you are going to any other vet office partner such as Blue Cross (25% discount), Mission Pet Hospital (25% discount), SFVS (10% discount), All Pets Hospital (10% discount), or any other partner hospitals. PAWS will continue to provide at least $200 in vet funds per client per calendar year, plus more funds if they are available.

We know this is big news. If you have any questions or concerns or want to talk further, please contact your PAWS Care Navigator. Ali Sutch is the Care Navigator for clients last names A-J (asutch@shanti.org or 415-265-9208). Richard Goldman is the Care Navigator for clients last names K-Z (rgoldman@shanti.org or 415-815-8244). You can also contact the SPCA main number at 415-554-3030.

Sincerely,
Katherine, Prado, Richard, & Ali

Many academic studies, government reports and news articles have analyzed the role of religion (or the misinterpretation of religious concepts and scripture) in radicalizing Muslims and mobilizing them to wage “Holy War” against their enemies around the globe. Few have discussed how right-wing extremism exploits Christianity and the Bible to radicalize and mobilize its violent adherents toward criminality and terrorism. Much like Al-Qaeda and the Islamic State, violent right-wing extremists — who refer to themselves as “Soldiers of Odin,” “Phineas Priests,” or “Holy Warriors” — are also inspired by religious concepts and scriptural interpretations to lash out and kill in the name of religion. Yet very little is said or written about such a connection.

White supremacists, sovereign citizens, militia extremists and violent anti-abortion adherents use religious concepts and scripture to justify threats, criminal activity and violence. This discussion of religious extremism should not be confused with someone being extremely religious. It should also not be misconstrued as an assault on Christianity. Rather, it represents an exploration of the links between violent right-wing extremism and its exploitation of Christianity and other religions to gain a better understanding of how American extremists recruit, radicalize and mobilize their adherents toward violence and terrorism.

White Supremacy

Researchers have long known that white supremacists, such as adherents of Christian Identity (a racist, antisemitic religious philosophy) and racial Nordic mythology, use religion to justify acts of violence and condone criminal activity. Lesser known are the ways other white supremacy groups, such as the Ku Klux Klan and the Creativity Movement (formerly known as the Church of Creator or World Church of the Creator), incorporate religious teachings, texts, and symbolism into their group ideology and activities to justify violating the law and committing violent acts.

The Kloran, a universal KKK handbook, features detailed descriptions of the roles and responsibilities of various KKK positions, ceremonies, and procedures. There are many biblical references in the Kloran, as well as biblical symbolism in the detailed KKK ceremonies. Also, the KKK's primary symbol (the "Blood Drop Cross", or Mystic Insignia of a Klansman), a white cross with a red teardrop at the center, symbolizes the atonement and sacrifice of Jesus Christ and those willing to die in his name.

A lesser-known white supremacist group is the neo-Nazi Creativity Movement. Ben Klassen is credited with creating this new religion for the white race in Florida in 1973. Klassen authored two primary religious texts for the Creativity Movement: "Nature's Eternal Religion" and "The White Man's Bible." Creativity emphasizes moral conduct and behavior for the white race (e.g. "your race is your religion"), including its "Sixteen Commandments" and the "Five Fundamental Beliefs of Creativity." Klassen had a vision that every worthy member of the Creativity religion would become an ordained minister in the Church.

Two other examples of entirely racist religious movements within white supremacy are the Christian Identity movement and racist Nordic mythology. The Christian Identity movement comprises both self-proclaimed followers who operate independently and organized groups that meet regularly or even live within insular communities. In contrast, racist Nordic mythology rarely involves organized groups or communities, preferring to operate through an autonomous, loose-knit network of adherents who congregate in prison or online.

A unique concept within Christian Identity is the “Phineas [sic.] Priesthood.” Phineas Priests believe they have been called to be “God’s Holy Warriors” for the white race. The term Phineas Priest is derived from the biblical story of Phineas, which adherents interpret as justifying the killing of interracial couples. Followers have advocated martyrdom and violence against homosexuals, mixed-race couples, and abortion providers.

Matt Hale of the World Church of the Creator received 40 years in prison for plotting to assassinate a federal judge.

Racial Nordic mysticism is most commonly embraced by neo-Nazis, racist skinheads and Aryan prison gang members. It is most prolific among younger white supremacists. Odinism and Asatru are the most popular Nordic mythological religions among white supremacists. These non-Christian religious philosophies are not inherently racist, but have been exploited and embraced by white supremacists due to their symbolically strong image of “Aryan” life and Nordic heritage. Aryan prison gang members may also have another reason for declaring affiliation with Odinism and Asatru due to prison privileges — such as special dietary needs or extra time to worship — given to those inmates who claim membership in a religious group.

Chip Berlet, a former senior analyst at Political Research Associates, points out that some white supremacists may be attracted to Nordic mythological religions as a result of their affinity toward Greek mythology, Celtic lore or interest in Nazi Germany, whose leaders celebrated Nordic myths and used Nordic symbolism for their image of heroic warriors during World War II. Neo-Nazi groups, such as the National Alliance and Volksfront, have used Norse symbolism, such as the life rune, in their group insignias and propaganda. Racist prison gangs have also been known to write letters and inscribe messages on tattoos using the runic alphabet. “These myths were the basis of Wagner’s “Ring” opera cycle, and influenced Hitler, who merged them with his distorted understanding of Nietzsche’s philosophy of the centrality of will and the concept of the Ubermensch, which Hitler turned into the idea of an Aryan ‘Master Race,’” says Berlet.

Militia Extremists

The militia movement compares itself to the “Patriots” of the American Revolution in an attempt to “save” the ideals and original intent of the U.S. Constitution and return America to what they perceive to be the country’s Judeo-Christian roots. They have adopted some of the symbols associated with the American Revolution, such as using the term “Minutemen” in group names, hosting anti-tax events (much like the Boston Tea Party), celebrating April 19 — the anniversary date of the Battles of Lexington and Concord in 1775 — and using the Gadsden Minutemen flag with its revolutionary “Don’t Tread on Me” slogan.

Many militia members have a deep respect and reverence for America’s founding fathers. Their admiration takes on religious overtones: many believe the U.S. Constitution was “divinely inspired” and that the founding fathers were actually chosen and led by God to create the United States of America. For example, an Indiana Militia Corps’ citizenship recruitment pamphlet states, “The Christian faith was the anchor of the founding fathers of these United States.” The manual also states, “People of faith, Christians in particular, recognize that God is the source of all things, and that Rights come from God alone.” The militia movement erroneously believes that the principles the founding fathers used to create the U.S. Constitution are derived solely from the Bible.


Antigovernment conspiracy theories and apocalyptic “end times” Biblical prophecies are known to motivate militia members and groups to stockpile food, ammunition, and weapons. These apocalyptic teachings have also been linked with the radicalization of militia extremist members. For example, nine members of the Hutaree militia in Lenawee County, Michigan, were arrested in March 2010 for conspiring to attack police officers and blow up their funeral processions. According to the Hutaree, its doctrine is “based on faith and most of all the testimony of Jesus.” Charges against all nine were eventually dismissed.

On their website, the Hutaree referenced the story of the 10 virgins (Matthew 25: 1-12) as the basis for their existence. The verses declare, “The wise ones took enough oil to last the whole night, just in case the bridegroom was late. The foolish ones took not enough oil to last the whole night and figured that the bridegroom would arrive earlier than he did.” According to the Hutaree, the bridegrooms represented the Christian church today; the oil represented faith; and, those with enough faith could last through the darkest and most doubtful times, which Hutaree members believed were upon them. Further, militia members often reason that defending themselves, their families, and communities against the New World Order is a literal battle between good (i.e. God) and evil (i.e. Satan or the devil).

The militia movement has historically both feared and anticipated a cataclysmic event that could lead to the collapse of the United States. Some militia members believe that such cataclysmic events are based in biblical prophecies. For example, some militia members believe that the so-called “Anti-Christ” of the last days predicted in the Book of Revelation is a world leader who unites all nations under a “one world government” before being exposed as the agent of Satan. They further believe that Jesus will battle the Anti-Christ before restoring his kingdom on earth. Militia members cite the creation of Communism, the establishment of the United Nations, and attacks against their Constitutional rights as “signs” or “evidence” that the Anti-Christ is actively working to create the “one world government” predicted in the Book of Revelation. Towards the end of the 1990s, many in the militia movement prepared for the turn of the millennium (i.e. Y2K), believing that American society would collapse into anarchy and social chaos. The failure of the Y2K prophecy left many in the movement disillusioned, and many left as a result.

More recently, militia extremists have begun organizing armed protests outside of Islamic centers and mosques fearing a rise in Muslim terrorism, perceived encroachment of Sharia law in America and/or out of pure hatred of Muslims and Islam. Some militia extremists have also provided support to gun stores and firing ranges in Arkansas, Florida and Oklahoma that were declared “Muslim Free Zones” by their owners. These types of activities are meant to harass and intimidate an entire faith-based community. They are likely inspired by militia extremists’ personal religious views of preserving America as a Christian nation.

Sovereign Citizens

Sovereign citizen extremists believe their doctrine is both inspired and sanctioned by God. Many have their own version of law that is derived from a combination of the Magna Carta, the Bible, English common law, and various 19th century state constitutions. Central to their argument is the view of a Supreme Being having endowed every person with certain inalienable rights as stated in the U.S. Declaration of Independence, the Bill of Rights, and the Bible.

David Brutsche (L), 42, and Devon Newman, 67, were arrested for allegedly plotting to capture and kill a police officer. Authorities say they were part of the anti-government “sovereign citizen” movement.

In particular, since there is a strong anti-tax component to the sovereign citizen movement, many adherents use Biblical passages to justify not paying income or property taxes to the government. They most often cite Old Testament scriptures that condemn usury and taking money from the poor, such as Ezekiel 22:12-13, Proverbs 28:8, Deuteronomy 23:19, and Leviticus 25:36-37. Sovereign citizen extremists further cite Nehemiah 9:32-37 to bolster the belief that oppressive taxation results from sin. Also, 1 Kings 12:13-19 is used to justify rebellion against the government for oppressive taxation.

Sovereign citizen extremists have also been known to use religion to avoid paying taxes by misusing a legal arrangement called the “corporation sole.” In general, they misuse the corporation sole tax exemption (e.g. by forming a religious organization or claiming to be a religious figure such as a pastor or minister) to avoid paying income and property taxes. They typically obtain a fake pastoral certification or minister certificate through a mail-order seminary or other bogus religious school, then change their residence to a “church.” Courts have routinely rejected this tax-avoidance tactic as frivolous, upheld criminal tax evasion convictions against those making or promoting such arguments, and imposed civil penalties for falsely claiming corporation sole status.

Violent Anti-Abortion Extremists

The majority of violent anti-abortion extremist ideology is based on Christian religious beliefs and the use of Biblical scripture. Violent anti-abortion extremist propaganda online is filled with Biblical references to God and Jesus Christ. Many of the Biblical scriptures quoted in this propaganda focus on protecting children, fighting against evil doers, and standing up to iniquity or sin.

The ultimate goal of anti-abortion extremists is to rid the country of the practice of abortion and of those who perform or assist in it. They use religious and moral beliefs to justify violence against abortion providers, their staff, and facilities. Violent anti-abortion extremists believe that human life begins at conception. For this reason, some equate abortion to murder. Using this logic, they rationalize that those performing abortions are murdering other human beings. Anti-abortion extremists also equate the practice of abortion to a “silent holocaust.” Some go as far as claiming abortion providers are actually “serial killers” worthy of death. This sentiment is echoed in passages from the Army of God (AOG) manual, which declares that the killing of abortion providers is morally acceptable and justified as doing God’s work.

The AOG perpetuates the belief that violent anti-abortion extremists literally represent soldiers fighting in God’s Army and that a divine power is at the helm of their cause. “The Army of God is a real Army, and God is the General and Commander-in-Chief,” the AOG says. Their manual further states, “The soldiers, however, do not usually communicate with one another. Very few have ever met each other. And when they do, each is usually unaware of the other’s soldier status.”

Robert Dear admitted killing three people at a Planned Parenthood office in Colorado. He called the attack a “righteous crusade.”

The AOG also utilizes religious symbolism in its name and logo. The AOG name literally compares its adherents to soldiers in battle with Satan. They are fighting a war with Jesus Christ at their side in an effort to save the unborn. The AOG logo also includes a white cross (e.g. symbolizing the crucifixion of Christ and his resurrection). The logo has a soldier’s helmet hanging off the cross with a bomb featuring a lit fuse inside a box. The words “The Army of God” are inscribed over and below the cross and bomb. The AOG also uses the symbol of a white rose; a reference to the White Rose Banquet, an annual anti-abortion extremist event organized by convicted abortion clinic arsonist Michael Bray.

Religious concepts — such as Christian end times prophecy, millennialism and the belief that the Second Coming of Jesus Christ is imminent — play a vital role in the recruitment, radicalization and mobilization of violent right-wing extremists and their illegal activities in the United States. For example, white supremacists have adopted Christian concepts and Norse mythology into their extremist ideology, group rituals and calls for violence. Similarly, sovereign citizens use God and scriptural interpretation to justify breaking “man-made” laws, circumventing government regulation, avoiding taxation, and other criminal acts. Violent anti-abortion extremists have used Biblical references to create divine edicts from God and Jesus Christ to kill others and destroy property. And militia extremists and groups use religious concepts and scripture to defy the government, break laws, and stockpile food, ammunition and weapons to hasten or await the end of the world. As a result, religious concepts and scriptures have been hijacked by right-wing extremists, who twist religious doctrine to justify threats, criminal behavior and violent attacks.

Religion and scriptural interpretations played an essential role in armed confrontations between right-wing extremists and the U.S. government during the 1980s and 1990s (e.g. the Covenant, the Sword, and the Arm of the Lord standoff in 1985, the siege at Ruby Ridge in 1992, and the raid and standoff at Waco in 1993), and they continue to do so today (e.g. the 2014 Bunkerville standoff and the takeover of the Malheur Wildlife Refuge in 2016).

These events not only demonstrated extremists rebelling against the U.S. government and its laws, but also served as declarations of their perceived divinely inspired and Constitutional rights. They continue to serve as radicalization and recruitment nodes that boost the ranks of white supremacists, militia extremists, sovereign citizens, and other radical anti-government adherents who view the government’s response to these standoffs as tyrannical and overreaching.

Thanksgiving 2018: Ellie asked me to play guitar at her shop closure after I bought a wonderful Texas rancher’s hat at her shop and prattled on about being from Waco, Texas. Well, Ellie was from Texas also, so that was a great opportunity.

The last time I walked up Hyde Street, her shop was still vacant.

For two friends, the opening of Anthophile—a new vintage and flower shop that opened last month at 611 Hyde St. (between Geary and Post)—is the fulfillment of a lifelong dream.

Four years ago, Ellie Bobrowski and Meryll Cawn met in the San Francisco bar community. The two bonded over their shared backgrounds in design and interest in collecting vintage wares.

The two friends have kicked around the idea of opening a flower/vintage shop in the Tenderloin since August 2016, when Cawn quit her job in sales at a start-up.

But until they learned about the space at 611 Hyde earlier this year, their dream remained just that. Once they saw the space, they jumped on the opportunity, signing a lease the very next day.

Both women have a connection to the neighborhood: Bobrowski lives here and says it’s the home she has been looking for since leaving Texas several years ago. Cawn lives in the East Bay but works as a bartender at the Hi Lo Club on Polk Street, just a few blocks away from the new store.

Anthophile’s exterior at 611 Hyde St. | Photo: Anthophile/Facebook

The new shop offers vintage goods and accessories along with floral arrangements. Bobrowski regularly goes to Texas to buy wares for the store, and also delivers handcrafted floral arrangements across the city.

The two share similar tastes, but are 10 years apart in age, so their preferences are different enough to appeal to a wider audience, Cawn said.

Along with vintage clothes and accessories, the women are working to bring new brands to the city. For some brands, Anthophile will be their first retailer in the city or in California.

The goal is to keep prices affordable for people who live in the neighborhood. Most clothing items range from $20 to $60 each, and most new jewelry is priced below $60.

Cawn is also making jewelry for the store to sell. She converts broken vintage pieces that she can pick up fairly cheaply, and she says she likes to pass that value on.

There will be some pricier items, but “we like a good bargain ourselves,” Cawn said. “We don’t want to bring stuff in that is way outside of our price range.”

“There is such great community here,” Bobrowski said. “We want to be respectful of the neighborhood and the people who live here.”

Although Cawn and Bobrowski are hoping to make Anthophile a success, that doesn’t mean that they’re quitting their day jobs just yet.

Both owners have other jobs to help pay the bills: Cawn bartends at the Hi Lo Club on Polk and Bobrowski works for Intel — but they share a desire for Anthophile to become a permanent part of the neighborhood.

The two have a month-to-month lease through August, and then the option to sign a longer-term one. At that point they will reevaluate the situation and see if it’s still the right spot, or if they need more room.

Previously, the space had been a men’s clothing boutique, KnoxSF. After it shuttered in 2013, the space housed a number of different pop-up businesses.

“I want to make it into a destination for the neighborhood,” Bobrowski said. “Some combination of a neighborhood boutique you can find in Europe, with San Francisco flavor and a touch of Texas thrown in.”

Stop by and welcome Anthophile to the neighborhood. Its current hours are 1pm-8pm Tuesday-Friday and 11am-8pm on Saturdays, but those may change, so check out its Facebook page before you go.


To mark the 49th anniversary this week of the founding of the American Indian Movement (AIM), we’re taking a look at the FBI file of John Trudell, esteemed Santee Dakota poet, writer, speaker, and musician who was a key member of AIM, rising to the rank of National Chairman by the mid seventies.

To the Bureau, Trudell was a renowned “agitator,” but within his community he was a motivator who inspired Indigenous peoples across the nation to strive for a better life.

Trudell first came to the attention of the FBI in 1969 when he and other AIM members occupied Alcatraz Island in an attempt to form an Indigenous colony. Over the course of the next decade, the Bureau built up a 138-page case file, utilizing open source intelligence from newspapers and a number of confidential sources.

In September 1972, Trudell and other AIM leadership and membership occupied the Bureau of Indian Affairs headquarters in Washington, DC. The year after, in February 1973, Trudell would go with hundreds of other AIM members to Wounded Knee, South Dakota on the Pine Ridge Reservation, the site of the massacre of 300 Sioux, including women and children, by US cavalry in 1890. The massacre effectively ended the last chapter of the Indian Wars.

While there, he participated in an armed occupation in protest against the treaties broken by the US government and against the Pine Ridge Tribal Chairman Richard Wilson and his Guardians of the Oglala Nation (GOONs), a brutal security force on the Reservation. After the occupation ended in May 1973, Trudell returned to a transient way of life, joining a protest in New Mexico for better conditions in mines for Navajo workers.

In 1975, Trudell held up the Duck Valley Trading Post with a pistol, demanding food for starving elders on the Reservation and a drop in prices. In the process, Trudell fired a single round into the wall behind the clerk, who was not injured. The FBI believed the incident was staged in order to get publicity for the brutal living conditions at Duck Valley, and they devoted a considerable amount of time trying to bring him up on an assault with a deadly weapon charge. The Bureau took control of the investigation and even went so far as to forensically examine the can of Hawaiian Punch that the bullet had passed through.

The file also includes an investigation into the alleged 1979 arson of Trudell’s home on the Duck Valley Reservation. His wife and children all perished in the fire, which began on the roof – an extremely unlikely and suspicious place for a fire to begin, especially when one takes into account that less than 24 hours prior Trudell had burned an American flag in front of the FBI’s DC headquarters as an act of protest. Congressman Ronald Dellums of California wrote to the then-Director of the FBI, William Webster, imploring him to investigate “the suspicious circumstances,” and saying, “I strongly feel that the Nevada fire deserves your immediate and thorough investigation.” The Bureau begged to differ, instead sticking with the party line that the BIA had conducted its investigation and it resulted in an “accidental” finding, and thus the investigation would not be pursued further.

To this day, the case has never been fully investigated, and likely will never be.

Other highlights from his file include the charges the FBI considered bringing against Trudell, among them the exceptionally rare “Insurrection or Rebellion” charge under Title 18, United States Code, Section 2383. They also batted around Section 2384, “Seditious Conspiracy.” If anything is to be taken away from that, it is how fearful the FBI was of his efforts to organize and motivate the Indigenous nations to strive for something greater than reservation status. This makes sense especially when considering what sources said about him, and how long the Bureau had special agents out investigating him and trying to apprehend him.

An article the FBI had used as part of the investigation, which appeared around the time that the leaders of AIM were arrested for their role in the occupation of Wounded Knee, quoted Trudell speaking about the US government. “The government is dragging us through the court system so that American consciousness can pretend at humanity. Americans should begin to think about their government, as it is the one instrument that can bring people together or keep them at odds.”

After presenting the remarks without comment, the brief simply skips to Trudell acting as an organizer and spokesman for the Iroquois armed takeover at Eagle Bay, New York, in 1974. The occupation of the Ganienkeh Territory, as it is called by the Mohawk Iroquois, was another episode of Indigenous people struggling to take back land ruthlessly stolen from them. The briefs are full of accounts featuring Trudell traveling across the country organizing Indigenous people, telling them not to be afraid to challenge their meager status quo and, if necessary, to fight for their right to sovereignty.

One source had a particularly glowing assessment of Trudell, culminating with the fitting line, “Trudell has the ability to meet with a group of ‘pacifists’ and in a short time have them yelling and screaming ‘right on!’”

Whatever you think of his politics, Trudell was a man singularly devoted to improving the lives of his people. And for his efforts, he received this extensive FBI file, the likely murder of his family, and years of harassment by federal agents. His speeches survive on YouTube and elsewhere on the internet, and it is highly recommended that you check them out, along with the rest of his file below.



We Are an Old People, We Are a New People

Part Three,  Cybele and Her Gallae

by Cathryn Platine

When discussing the pre-history of the Mother Goddess, it is best to start with a brief discussion of the socio-political blinders that much of the prior research has suffered from. Even the very word “mother” conjures images and expectations that, upon closer examination, bear little relation to the Anatolian Great Mother Goddess, yet they colour almost every account written about Her. In her introduction to In Search of God the Mother, Lynn Roller touches on many of these issues, along with an excellent recap of the history of the examination of the very concept of a Mother Goddess, and deconstructs both the paternal “Mother Goddess primitive, father God superior” linear viewpoint of the majority of scholars and the “Golden Age free of strife and warfare” views of many modern dianic pagans. Were the ancient Anatolian civilizations matriarchal? The plain fact of the matter is we may never know. My own guess is that they were egalitarian in nature, but to a western world that has only begun to grant equal rights to women in the past hundred years, I suppose an egalitarian society could look downright matriarchal. To those who feel that a progression from a Mother Goddess to a father god is progress, I’d remind them that the religion of Cybele was the official religion of Rome for 600 years as well as a major part of the religious landscape of the known world. When the Roman empire turned from Magna Mater to christianity, the empire promptly fell and the long dark ages began... hardly progress, unless progress means something entirely different to you than to me.

We started our journey into the ancient past at Catal Huyuk, where the first representation of the Great Mother was found in a granary bin circa 7000 BCE. I am giving one of the earlier dates used for this representation because the dating of ancient civilizations has recently been pushed markedly backwards as knowledge has increased. In part one of this series we examined how neo-lithic life was considerably more advanced than most people realize. Before dismissing Catal Huyuk as the exception, know that several other settlements in Anatolia show the same level of advanced home building and home life, as well as considerable trade over a wide area. Indeed, several newer discoveries have pushed the date of the ancient neo-lithic Anatolian civilization back past 10,000 BCE, and there are literally hundreds of known sites that haven’t been touched yet.

Lynn Roller dismisses the connection of the Catal Huyuk seated Goddess with Cybele, and although she gives excellent reasons, I feel she also overlooks compelling evidence for the direct connection of the seated Catal Huyuk Goddess to Her. Many writers on the subject of ancient Goddess worship assume, based on widespread finds of female figurines from the neo-lithic age, an almost universal Goddess religion. While I agree that you cannot assume these were all Goddess representations, it is also apparent that the concept of a Great Mother Goddess associated with lions and bulls spread from ancient Anatolia to Sumeria, India, Egypt and the Minoans by 3000 BCE. Sometimes, as in Sumeria, a formerly somewhat minor Goddess was elevated to this position. Sometimes the Mother appears alone. That this happened cannot be denied and is readily apparent from timelines, deities and maps of the ancient world. The point of origin is clearly central Anatolia. Also quite telling is that the Great Mother is almost never associated with children, but rather with wild places and beasts and the very earth, moon and sun. As we have seen in parts one and two, transsexual priestesses are almost always associated with Mother by Her various names. Just as interesting is how often, when one digs back into the various mythologies of Her origins, one finds vague references to Mother originating as a hermaphrodite. By Phrygian times this hermaphroditic connection is transferred to the consort Attis, but even the earliest versions of the Attis myths start with Cybele as a hermaphrodite Herself.

Central to Lynn Roller’s argument that the Phrygian Mother, Cybele, is of later origin is that the Phrygian people were preceded by the Hittites and co-existed with the neo-Hittites. Compared to some of the other groups we’ve discussed, the Hittites are relative latecomers, rising to power circa 1600 BCE and apparently worshipping an entire pantheon of gods and goddesses. While the name associated most with Magna Mater, Cybele or Kybele, almost certainly comes from this period, it is important to note that the Phrygians themselves simply referred to Her as “Mother”. As we have seen, this concept of a Mother Goddess is far older and more widespread. Prior to the Hittites, as far back as 4000 BCE, we find a Mother Goddess associated with both cattle and lions in the Halaf culture of eastern Anatolia.

We know a flourishing civilization was in place in central Anatolia by 10,000 BCE. We know it abruptly ended around 4000 BCE. We know that Mother Goddess worship was central to this civilization. So what happened? Walled cities appear around this time period throughout the Middle East. The answer to many of these previous mysteries is fairly simple. A mini ice age affected the area beginning around 4000 BCE and lasting roughly 1000-1500 years. Central Anatolia was simply not fit for civilized life during that period, and its people spread out east and south. This is when the Minoan civilization arose, when the Tigris and Euphrates civilizations flowered, and when people and ideas migrated to the Indian subcontinent. This is also when areas that had pantheons, such as Sumeria, adopted a Mother Goddess to head them, the elevation of Inanna being one of the best known examples.

Looking at timelines and migrations, what jumps out at you, if you are looking, is that the Mother Goddess spread from ancient Anatolia and the banks of the Caspian Sea throughout the Middle East, the Mediterranean and all the way to India, all in the same period of time as the ending of the ancient neo-lithic cultures of Anatolia. When this mass migration started, walled cities began to appear as conflicts arose between those migrating and those already in the areas. It is the period between 10,000 BCE and 4000 BCE that was the model for the “peaceful matriarchal civilizations” of the modern Dianics... except it wasn’t a matriarchy, and the conflicts didn’t start because of the introduction of patriarchal thought. It was simply a matter of people competing for increasingly scarce resources as a result of weather-forced migration.

Moving ahead to around 2500-1500 BCE, much of the non-archaeological material on the religious practices of Anatolia comes to us from Greek and Roman sources. Considering that these sources were dependent on oral traditions, for the most part, and comparing our own misunderstandings of Greek and Roman history today, a similar distance in time, over-reliance on this material could be misleading. When you add the factors of ethnocentric thinking (cultural bias), the fact that the accounts come to us from ancient scholars who were not part of a Mother Goddess religion themselves, and a pinch of transphobia, the bias is practically assured. No, what is remarkable is that associations between the concept of a Mother Goddess, bulls, snakes, bees, transsexual priestesses, and lions recur over and over in the same general area in different civilizations. It is even more remarkable when you consider that the Phrygians themselves, who worshipped only Mother, lost some of these associations, and yet as soon as Cybele encountered the Aegean people (proto-Greeks) these associations were once again added back.

We have accounts that Mother’s priestesses not only practiced in cities but also roamed in small nomadic groups and did so throughout the Phrygian, Hellenistic and Roman periods in Anatolia.  It is a small step to suppose that these groups also predated the Phrygian period and provided the link in traditions that is so clear from culture to culture.  We need only look at the modern example of christianity to understand that the central figure of one religion can be incorporated into another as happened with Hinduism and Islam. Again, we need look no further than the Catholic church to see that even in a poppa god religion, Mother will once again rise as She has done there in the Marian movement within Catholicism.

To understand Cybele’s relationship to the Greek and Roman schools of religion it is necessary to deconstruct widely held misconceptions about the various gods and goddesses. The Cybeline faith was the first of the mystery religions. A mystery religion teaches with stories, plays and oral traditions. The various stories about Attis, Cybele’s consort son/daughter, appeared around the same time as the origins of the Greek mythological stories. Attis and the so-called Greek Gods were never meant to be taken as literal truth, but rather as poetic expressions of the world and morality stories. It is no accident that the only “stories” Cybele appears in are those about Attis; yet, as we shall see, She was above all the various gods and goddesses in both ancient Greece and later Rome. Sixteen hundred years of literalist christian tradition makes understanding the nature of the Greek and Roman pantheons all but impossible for the average person today. The famous Greek mystery schools developed from the Aegean contact with the Phrygian Cybelines. The faith spread throughout the Mediterranean as far as Spain and southern Italy at a much earlier time than previously had been believed. In Alexandria, Cybele was worshipped by the Greek population as “The Mother of the Gods, the Saviour who Hears our Prayers” and as “The Mother of the Gods, the Accessible One.” Ephesus, one of the major trading centres of the area, was devoted to Cybele as early as the tenth century BCE, and the city’s ecstatic celebration, the Ephesia, honoured her. It was also around this time that Mother’s temples underwent a change from a beehive shape to a more Grecian-looking columned pattern. This shows in the various “doorway” shrines to Kubaba/Kubele that appeared during this period throughout the Phrygian mountains.

During the Phrygian period, Cybele’s Gallae priestesses were both wandering priestesses and priestesses living in religious communities alongside Mellissae priestesses. We know that both were fairly common in Greece from various accounts, such as the mistreatment one Galla received in Athens: she was killed by being thrown into a pit. Athens’ fortunes fell so low afterwards that a Maetreum was built and dedicated to Cybele; it was viewed as so important that all of the official records of Athens were kept there. There is also much evidence that Sappho of Lesbos was a Mellissa priestess, and several Phrygianae were spread throughout the islands. Careful examination of artwork showing the Greek gods often reveals Cybele’s image above them, a convention that continued into Roman times.

The story of Cybele’s presence in Rome begins circa the early sixth century BCE, at the dawn of Roman history. According to the story, King Tarquinius Superbus, the seventh (and last) legendary King of Rome, was approached by an old woman bearing nine scrolls of prophecies by the Sibyl. She asked for three hundred gold pieces for the set, but Tarquinius thought she was a fraud and refused. She then burned three of the scrolls in his hearth and again offered the remaining six scrolls for the same three hundred gold pieces. Once again Tarquinius refused. Again she burned three more scrolls. When she offered the remaining three scrolls for the same three hundred gold pieces, Tarquinius suspected he was dealing with the Sibyl of Cumae herself and agreed. These were the original Sibylline prophecies of Rome. They were housed in the Capitoline temples as the most sacred books of Rome, and access to them was limited to a specially appointed priesthood who consulted them only in times of threat to Rome.

One such threat to Rome came during the Second Punic War. Rome was being badly beaten; rains of stones fell from heaven on the city itself, and, according to legend, there were numerous other ill portents. The Sibylline scrolls were consulted, and it was found that if a foreign foe should carry war to Italy, and Magna Mater Idaea were brought to Rome from Pessinus, Rome would not only endure but prosper. This was made all the more impressive by the arrival of pronouncements of a similar nature from the Sibyl of Delphi at this exact moment. Romans had prided themselves on their Phrygian origins from Troy, so the introduction of a Phrygian religion was actually embraced. Five of Rome’s leading citizens travelled to Pergamum by way of Delphi to see King Attalus. The Sibyl of Delphi confirmed that Rome’s salvation could be had from Attalus and that when Cybele arrived in Rome She must be accorded a fitting reception. They went to Attalus’ royal residence at Pergamum, were conducted to Pessinus, and arrangements were made to bring the Mother of the Gods to Rome. Word was sent ahead, and the senate voted young Scipio the best and noblest of Rome’s citizens; he was given the task of greeting Magna Mater at Ostia and overseeing Her procession to Rome.

Scipio was accompanied to Ostia by the Matrons of Rome, who were to carry Magna Mater (in the form of a statue with a black meteoric stone in Her forehead) by hand from Ostia to Rome. When the ship arrived, it became stuck at the mouth of the Tiber and resisted all attempts to free it. Among the Matrons of Rome was Claudia Quinta, whose reputation had been questioned. According to the legend, she waded into the waters, shooed off the men and pulled the ship free by herself, thus restoring her reputation. Cybele arrived in Rome on April 12, 203 BCE and was greeted with rejoicing, games, offerings and a lectisternium (a seven-day city-wide feast). Until the mid-fourth century CE this event was celebrated in Rome every year with the games, festivals and feasts of the Megalesia.

Cybele was installed in the Temple of Victory on the Palatine, close to where Her own temple was already under construction. That summer Scipio defeated Hannibal, and Rome’s devotion to Cybele was cemented. The Cybeline faith remained the only “official” religion in Rome up until the introduction of Mithraism, a faith that allowed male priests. The Maetreum on the Palatine was dedicated in 194 BCE.

We Are an Old People, We Are a New People

Part Two,  Transsexual Priestesses, Sexuality and the Goddess

by Cathryn Platine


Sexual “morality” is one of the major blind spots to understanding the past. The Western world has become so enmeshed in the Judaeo-Christian view of sexuality that it takes a major effort for most to take an unbiased viewpoint of cultures that had a much healthier view of human sexuality. Even today’s neo-Pagans, who are taught that all acts of pleasure that harm none are forms of Her worship, often still struggle with the “morality” of same-sex relationships and even the existence of transsexuals, so it should not be a surprise that much written about ancient sexuality is tainted with unexamined bias. The term “temple prostitute” is an excellent example. The very term is heavily and negatively loaded with emotion. To avoid this, I shall refer to those who practiced the institutional sacred sex role as hierodules, a Greek term without that loading to the modern reader.

One other term widely used incorrectly is eunuch. Historians apply this term indiscriminately, with clearly no idea of its meaning. It conjures up visions of large castrated male harem guards and the castrati singers of the Middle Ages, which fall within the true meaning of the word, but it is also widely applied to the transsexual priestesses of the Goddess, which is misleading at the least and, at any rate, insulting in the extreme to those ancient transsexual women. Today it is widely applied to the Hijra of India as well, also in blatant disrespect of their own identity. When the term eunuch is not used, we find in its stead “castrated male priests” almost universally. Gay and feminist historians are particularly guilty of this last use. So what is the truth? The truth lies in examining the lives of these priestesses, for deeds speak louder than words, and how they lived is the best record we have of who they were.

Some things never change regardless of culture. As any woman can tell you, men place a very high sense of their identity on their genitals and always have, so the idea that thousands upon thousands of “men” would willingly castrate themselves and then live as women for the rest of their lives was just as absurd then as it is today. We aren’t talking about involuntary castrations of infants or young males by others, such as produced the historic eunuchs; we are talking about individuals cutting off their own genitalia in order to live as priestesses. Any transsexual woman reading the accounts decodes the mystery instantly and effortlessly... these individuals were not males, they were transsexual women. Knowledge about transsexuality is widespread enough today among the educated that continuing to refer to these ancient women as “castrated male priests” or eunuchs is out-and-out, transparently transphobic. Unfortunately this transphobia runs rampant everywhere even today. Despite their expressed wishes, despite the way they live their lives, almost all accounts today of the Hijra of India refer to them as eunuchs or “neither male nor female”, a sort of third sex. If you ask a hijra about her sex, she’ll tell you she is female in her eyes, just as any modern transsexual woman would. If you observe their lives, they live and function (as much as they are allowed to) as women. Even in our own culture it has been only a few years since the press adopted guidelines regarding the pronouns to use when writing about transsexuals, and even with those guidelines, the lurid, post-mortem insult of “man living as a woman” is still too often the press’s default when one of us is murdered. Transphobia is rooted in gynophobia; it is the last socially “acceptable” form of bigotry, but it is pure bigotry nonetheless. In ancient times as well as today, the imperative to bring one’s body into conformity with one’s identity cannot be truly understood by those who don’t have it. The non-transsexual will just have to accept the word of those of us called, a call that cannot be denied. Now we have the key to unlock the truth of the transsexual priestesses, for indeed, that is what they were.

How common were transsexual priestesses in the ancient world? Almost every form of the Goddess was associated with them. Inanna, also called Ishtar, had Her Assinnu. The Assinnu were the hierodule priestesses of Inanna whose change, in the earliest references, was performed by crushing the testicles between two rocks. Inanna also had transgendered priests, called the Kurgarru, who did not do this and who wore clothing that was female on one side and male on the other. They were two distinct groups. Becoming an Assinnu was a mes, a call from the Goddess. This mes is a common thread among all transsexual priestesses. It was recognized that transforming one’s life and body was not a choice but a destiny, the call usually coming in the form of dreams of Inanna when young. We have several different accounts of Inanna’s descent to the underworld and rescue from Her sister, Ereshkigal. In one, Asushunamir (She Whose Face Is Light), the first Assinnu, was created to save Inanna. In another version, two beings, the first Kurgarru and Kalaturru, neither males nor females, are created by Enki from the dirt under his fingernails for the mission. As hierodules, the Assinnu were seen as mortal representatives of Inanna, and sex with an Assinnu was congress with the Goddess Herself. As magicians, their amulets and talismans were the most powerful of magick to protect the wearer from harm; even just touching the head of an Assinnu was believed to bestow on a warrior the power to conquer his enemies. As ritual artists they played the lyre, cymbals, two-stringed lutes and flutes and composed hymns and lamentations, all in Emesal, the women’s language, said to be a direct gift of Inanna, as opposed to the common language of men, Eme-ku.

In Canaan we find the Goddess as Athirat, also called Asherah or Astarte, and Her hierodule transsexual priestesses, the Qedshtu. It should be noted that, just as Gallae is changed into Gallus, denying the very gender of these priestesses and erasing the truth of their lives, the bible refers to them as Qedeshim (masculine). The functions of the Qedshtu were almost identical to those of the Assinnu, and sexual congress with the Qedshtu was considered sex with Athirat Herself. Apparently they also practiced a tantric sexual rite accompanied by drums and other instruments and used flagellation to obtain an ecstatic state. The worship of Athirat dates back as far as 8000 BCE among the Natufians, who were replaced around 4000 BCE by the Yarmukians. The young consort, Baal, somewhat better known in biblical times as El, was added around this time. By around 2000 BCE the Qedshtu wore long flowing caftans of mixed colours, interwoven with gold and silver threads, intended to evoke a vision of Athirat in Her full glory in the springtime, and they are thought to have also worn veils over their faces. They were renowned for charity, maintained the garden-like groves and temples of Athirat, and were prized potters and weavers. Among the surviving rites was the preparation of a sacred ritual food made from a mixture of milk, butter, mint and coriander blended in a cauldron and blessed by lighting seven blocks of incense over the top, accompanied by music played by other Qedshtu.

The invasion of Canaan by the bloodthirsty, patriarchal and fanatical followers of Yahweh, the people later known as the Israelites, took place around 1000 BCE. Yahweh’s worshipers insisted he was a jealous god who would have no rivals. Unable to completely conquer the Canaanites, they lived in close proximity to them for a while. It’s no wonder that the Israelite women were drawn to Athirat, now often called Asherah, whose followers believed in equality of the sexes. It is no wonder that the sexually repressed Israelite men would also want to participate in Her rites. For a time the religions mixed enough that Yahweh and Asherah were considered co-deities. The Levite priests of Yahweh were at their wits’ end, since even their wives often openly worshiped Asherah. That some of their “sons” became Qedshtu can be decoded from the story of Joseph and his “coat of many colours”. It is believed that Rachel, Joseph’s mother, was a priestess of Asherah and that the coat came from her. We’ve mentioned the colourful caftans with gold and silver threads that were the marks of the Qedshtu, both transsexual and non-transsexual priestesses. Small wonder that Joseph’s brothers, devotees of Yahweh, would react badly to their brother becoming a woman, a hierodule priestess of Asherah, for indeed this is what the story indicates.

Almost all of the various Levitical laws came from this period as an attempt to keep the Israelites from worshiping Asherah. Outlawed was the “wearing of cloth made from mixed fibres”; banned from the presence of Yahweh were the eunuchs who “had crushed their testicles between stones”; outlawed was the wearing of clothing of the opposite sex. Israelite men were given permission, even directed, to kill their own wives and children if they did not follow their teachings. The Levites were essentialists who not only would not recognize the womanhood of the trans-Qedshtu, but referred to them as men who lay with men. Among the Canaanites, homosexual behaviour wasn’t uncommon and was widely accepted. There are ample examples of artwork showing these relations that are clearly not with Qedshtu. Then, as today, these essentialists failed to understand the difference between a transsexual and a homosexual. It wasn’t so much the homoerotic sex that upset them; it was the idea that a man would become a woman and choose to live that way that terrified them.

The open warfare between the Israelites and the followers of Athirat began in earnest soon after the rule of Solomon, when Canaan was divided into Israel and Judah. That many Hebrew rulers were not only tolerant of the worship of Athirat but sometimes were themselves worshipers cannot be denied. Qedshtu were welcomed and openly practiced in Hebrew temples. Jeroboam, Rehoboam and Abijam all openly worshiped Athirat and Baal. Rehoboam’s mother was a Qedshtu. Abijam’s son, Asa, who ruled from 908 to 867 BCE, converted wholly to Yahweh, exiled many Qedshtu, destroyed their temples and burned their groves. He removed his own mother, Maacah, from the throne because she was a Qedshtu priestess. Jehoshaphat of Judah went further: “the remnants of the male cult prostitutes who remained... he exterminated.” (1 Kings 22:46 RSV) The war on the followers of Athirat continued; it is interesting to note that Athirat was so feared She is not even mentioned, the biblical texts referring only to the followers of Baal, her consort. This pattern is repeated in much of the old testament. King Jehu, whose murderous attempt at genocide of Athirat and Baal’s worshipers is called “cunning”, pretended to convert and called all the Qedshtu together for a mass celebration at the temple of Jerusalem. When he had gathered them all together and invited them to partake of their rituals, he had the doors locked and his guards murder everyone and then throw their bodies on the city garbage dump. King Josiah, yet another son of a follower of Athirat, Amon, in the tenth year of his rule ordered all images of Athirat and Baal gathered together at Kidron and burned. Not content with this, he then committed total sacrilege and ordered all the bones of Her worshipers dug up, burned on the altars and then scattered to the winds. Then he proceeded to hunt out the remaining worshipers in their communal homes and temples (he broke down the houses of the male cult prostitutes... where the women wove hangings) and killed them all. The christian descendants of the Israelites would repeat these deeds a thousand years later, but more of that later. Let us now journey to ancient Russia, then back to Anatolia, and then on to Greece and Rome.

Reaching back as far as 8000 BCE, the people of the area known today as Russia and Ukraine worshiped a Mother Goddess. Our first records give Her name as Artimpasa or Argimpasa, and like most other Mother Goddess aspects, She had her transsexual priestesses. What they called themselves is lost in the mists of time; we know them only by the names the Greeks gave them, insulting names, the least of which was Enarees, meaning un-manned. Many authors suggest they are the spiritual descendants of the paleolithic shamans of Siberia and the source of the “twin-spirits” of the AmerIndians and Inuit. We do know something about them. They not only “lived like women” but also “played the woman” in all things. Artimpasa was associated with plant life and particularly cannabis. Like Cybele, She is often accompanied by a lion. We know the Enarees wore the clothes of women, spoke the women’s language and performed the tasks associated with women. Writing about them, the Greeks, who were somewhat transphobic, claimed that they were who they were as a punishment and made jokes about how they were the Scythians who had castrated themselves by spending too much time in the saddle. From a plaque that formed the front of a queen’s tiara dating from between 4000 and 3000 BCE, we know that they probably served the same function as priestesses as almost everywhere She was worshiped. From Herodotus we learn that they acted as diviners by “taking a piece of the inner bark of the linden tree” and cutting it “into three pieces and twisting and untwisting it around their fingers”. We also know that part of the rites included making a “sweat lodge” and burning cannabis inside to obtain an ecstatic state; tripods, braziers and charcoal with remains of cannabis have been found in various digs in the area.

By the time of the Scythians, relations with the Enarees were mixed: they were respected as priestesses and seers but also ridiculed. This was the common pattern as societies turned more patriarchal and men’s fear of being “called” as the trans-priestesses were was given voice. By the sixth century BCE the Scythians had a far-reaching empire, and one of the more interesting tales is how they came into conflict with the Amazons, made a truce and intermarried for a while before separating. Another tells of a Scythian noble, Anacharsis, who traveled to the west in search of wisdom around 600 BCE. He joined a mystery religion and, while visiting Cyzicus, encountered a festival of Cybele. He made a vow that if he was able to return home safely, he would worship Cybele just as he’d seen. True to his word, upon returning to his homelands he donned the dress of the Gallae and “went through the ceremonies with all the proper rites and observances”. As we shall see, the “proper rites and observances” of the Gallae included an initiation by working up an ecstatic state, then quickly removing both the penis and testicles with a sharp object, and thereafter donning the robes and dress of the Gallae and living as female. Anacharsis would have had to do this if he performed the proper rites. He no doubt witnessed the rites if he attended the major festival of Cybele and would have been aware of this had he been initiated into one of the various mystery religions. While Anacharsis is referred to in the masculine throughout the account, it comes to us via those horrified by the Gallae. Fearing the re-introduction of Goddess worship to the Scythians, who had just recently separated into two camps, Anacharsis’ brother, King Saulius, murdered her. Centuries later, Clement of Alexandria wrote of it:

“Blessings be upon the Scythian king... When a countryman of his own (his brother) was imitating among the Scythians the rite of the Mother of the Gods as practiced at Cyzicus, by beating the drum and clanging the cymbal, and by having images of the Goddess suspended from his neck after the manner of a priest of Cybele, this king (Saulius) slew him (Anacharsis) with an arrow, on the ground that the man, having been deprived of his own virility in Greece, was now communicating the effeminate disease to his fellow Scythians.”

A few words about hierodules are in order. These priestesses were both transsexual and non-transsexual women. Often, during the festivals of almost all these various aspects of Her, women who hadn’t also dedicated their lives to Her would take part, and children conceived at these times were considered special gifts of the Goddess. Because transsexual women could not become pregnant, they were held in a special regard, and sacred sexual relations with them, which were indeed viewed as a sacred rite and not some wild orgy, brought the partner into an even more sacred state. Today we’ve lost the connection of the sacred with sex because of the repressive nature of Judaeo-Christian traditions towards anything pleasurable. Tantric sexual worship is still practiced today in India. Now let us talk of the Mother of the Gods Herself, Cybele, Her consort son/daughter Attis, and Her Gallae.




PRIESTS OF THE GODDESS

Gender Transgression in Ancient Religion

“The radically ‘de-oedipalized’ body of the priest of the goddess, in ways mysterious to us, is bound up with techniques of ecstasy no less historically tenacious than the weight brought to bear against it by the patriarchal Judeo-Christian tradition. The defiant presence of this figure in the midst of prevailing phallocentrism remains striking and unexpected. Today, long after the last temple of Cybele fell into ruin, we are discovering that the boundaries of gender are no less friable, and that the human body, which has been so deeply inscribed with the cultural construction of its meaning as to seem for all purposes what it is represented to be—natural and fixed—may yet be reinscribed with other meanings and other constructions.”

Goddess of Catal Hüyük
The earliest known depiction of a goddess with the attributes later associated with Cybele—seated on a throne and flanked by lions. From the early Neolithic site at Catal Hüyük, 5th millennium BCE.
The following is an excerpt from the introduction and conclusion of my article on priests of the goddess in the Old World, from the devotees of Inanna/Ishtar in Sumer, Assyria, and Babylonia to the followers of Cybele and Attis in Roman times and the hijra of contemporary and ancient India. The full article with notes can be found in History of Religions 35(3) (1996): 295-330.

Phrygia
Phrygia, in modern-day Turkey, was the homeland of Cybele and Attis, and their galli priests.
In the competition between Christian and pagan in the ancient world neither side hesitated to broadcast the most outrageous and shocking accusations against its opponents in the most inflammatory rhetoric it could muster. “In their very temples,” wrote Firmicus Maternus in the mid-fourth century, “can be seen deplorable mockery before a moaning crowd, men taking the part of women, revealing with boastful ostentation this ignominy of impure and unchaste bodies (impuri et impudici). They broadcast their crimes and confess with superlative delight the stain of their polluted bodies (contaminati corporis)” (De errore profanarum religionum 4.2). These infamous men, with their impure, unchaste, polluted bodies, were none other than the galli, priests of the gods Cybele and Attis, whose mystery religion constituted one of early Christianity’s major rivals. Time and again, Christian apologists cited the galli as representative of all they abhorred in pagan culture and religion. And of all the outrages of the galli, none horrified them more than the radical manner in which they transgressed the boundaries of gender.
“They wear effeminately nursed hair,” continued Firmicus Maternus, “and dress in soft clothes. They can barely hold their heads up on their limp necks. Then, having made themselves alien to masculinity, swept up by playing flutes, they call their Goddess to fill them with an unholy spirit so as to seemingly predict the future to idle men. What sort of monstrous and unnatural thing is this?” A century later, Saint Augustine found the galli no less shocking: “Even till yesterday, with dripping hair and painted faces, with flowing limbs and feminine walk, they passed through the streets and alleys of Carthage, exacting from merchants that by which they might shamefully live” (De civitate Dei 7.26).

Malibu Cybele
Marble statue of Cybele as she was typically depicted in the Roman era, 50-60 CE

Inanna/Ishtar
The Mesopotamian goddess Inanna/Ishtar. In mythological accounts, Inanna is rescued from the underworld by two beings described as “neither male nor female.” Various classes of priests in Sumerian and Assyrian religion occupied alternative gender roles distinct from those of men and women.

It would be easy to dismiss the numerous references to galli in ancient literature, both Christian and pagan, as exoticisms equivalent to today’s fascination with gender transgression as evidenced by such films as M. Butterfly and The Crying Game. Unlike the modern figure of the transvestite, however, galli were part of an official Roman state religion with manifestations in every part of the Greco-Roman world and at every level of society. One finds the Roman elite worshiping Cybele with bloody animal sacrifices officiated by state-appointed archigalli; common freedmen and plebeians forming fraternal associations, such as the dendrophori and canophori, to perform various roles in her annual festivals; and the poor and slaves swept up by the frenzy of her rites, often to the consternation and alarm of their social superiors.
It is the widespread dispersion and great historical depth of the Cybele and Attis cult, as well as its appeal to multiple levels of ancient Mediterranean societies, that make its study fascinating on its own, not to mention its relevance to current debates concerning the social construction of sexuality and gender. The galli become even more interesting, however, when placed next to evidence of similar patterns of religious gender transgression from the Near East and south Asia, which suggests that goddess-oriented cults and priests are part of an ancient cultural legacy of the broad world-historical region Marshall Hodgson referred to as the “Oikoumene.”
In the discussion that follows, I will focus on three of the better-documented cases of goddess-centered priesthoods: the Greco-Roman galli, the priests of the goddess called Inanna in Sumeria and Ishtar in Akkad, and the hijra of contemporary India and Pakistan. The parallels between these priesthoods and the social roles and identities of their personnel are detailed and striking. Without ruling out diffusion as a factor, I will argue that these priesthoods are largely independent inventions whose shared features reflect commonalities in the social dynamics of the societies in which they arose, specifically, the agrarian city-state. The presence of goddess-centered priesthoods in the regions where the urban lifestyle first developed raises unexpected and challenging questions concerning the role of gender diversity in the origins of civilization….

Galli: tertium sexus
[See full article]
Hijra: neither man nor woman
[See full article]
Gala et al.: penis+anus
[See full article]

Social origins and social meanings of gender transgression
At the time of the birth of Christ, cults of men devoted to a goddess flourished throughout the broad region extending from the Mediterranean to south Asia. While galli were missionizing the Roman Empire, kalû, kurgarrû, and assinnu continued to carry out ancient rites in the temples of Mesopotamia, and the third-gender predecessors of the hijra were clearly evident. To complete the picture we should also mention the eunuch priests of Artemis at Ephesus; the western Semitic qedeshim, the male “temple prostitutes” known from the Hebrew Bible and Ugaritic texts of the late second millennium; and the keleb, priests of Astarte at Kition and elsewhere. Beyond India, modern ethnographic literature documents gender variant shaman-priests throughout southeast Asia, Borneo, and Sulawesi. All these roles share the traits of devotion to a goddess, gender transgression and homosexuality, ecstatic ritual techniques (for healing, in the case of galli and Mesopotamian priests, and fertility in the case of hijra), and actual (or symbolic) castration. Most, at some point in their history, were based in temples and, therefore, part of the religious-economic administration of their respective city-states.
The goddesses who stand at the head of these cults—Cybele, Bahuchara Mata, and Inanna/Ishtar—also share important traits. All three are credited with the power to inspire divine madness, which can include the transformation of gender. Their mythologies clearly place them outside the patriarchal domestic sphere: Cybele roams the mountains with her wild devotees; Inanna/Ishtar is the patron of the battlefield; and Bahuchara Mata becomes deified while on a journey between cities (see synopses of myths in table 2). Indeed, all three transgress patriarchal roles and structures just as much as their male followers: Cybele begets a child out of wedlock, which infuriates her father; Ishtar, the goddess of sexuality, is notoriously promiscuous, never marries, and, indeed, is herself a transvestite; and Bahuchara Mata, at the other extreme, cuts off her breasts in an act of asceticism to avoid unwanted heterosexual contact. The influence of these goddesses over human affairs is often as destructive as it is beneficial. “To destroy, to build up, to tear out and to settle are yours, Inanna,” reads one Sumerian text, and in the next line, “To turn a man into a woman and a woman into a man are yours, Inanna.” Despite the common reference to these goddesses as “mother” by their worshippers, there is much in their nature that exceeds and confounds our present-day connotations of the maternal.
How can we account for such a consistent pattern over such a broad area and time span? Without ruling out diffusion as a factor (the spread of Cybele and Attis was due in part to missionizing by galli themselves, and the influence of Mesopotamian religion certainly reached Syria and Anatolia), simple cultural exchange nonetheless seems the least likely explanation. A more promising approach would be to address three interrelated questions: What were the belief systems of the societies in which these priesthoods existed, in particular, beliefs concerning sex, gender, and sexuality? What was the nature of the social systems in which these roles originated? What was the source of their long-term popular appeal?
The eclectic approach implied by these questions, encompassing cultural, social, and psychological analysis, is key to understanding cultural phenomena as social constructions. When we refuse to regard femininity, masculinity, heterosexuality, homosexuality, and social inequality in general as precultural givens, we necessarily make our task as historians and social theorists more complicated, for cultural facts are always multiply determined and their explication requires analysis of the social wholes in which they occur. The goal must be a unified analysis, one that integrates the synchronic viewpoint of culture afforded by anthropology with the diachronic perspective of historical study. In the case of the ancient priesthoods of the goddess described here, such an approach reveals their roles to be not accidental but, indeed, consistent features of the societies in which they flourished.
To begin with, in all three cultural regions, goddess-inspired priests were conceptualized as occupying a distinct gender category. As we have seen, hijra routinely refer to themselves as “neither men nor women,” consistent with the ancient Sanskrit designation trtiya prakrti. (The galli, as we saw, were also described as a tertium genus.) Similarly, the Sumerian myth called “The Creation of Man” (ca. 2000 B.C.E.) relates how Ninmah fashioned seven types of physically challenged persons, including “the woman who cannot give birth” and “the one who has no male organ, no female organ.” Enki finds each one an occupation and position in society: the sexless one “stands before the king,” while the barren woman is the prototype for the naditum priestesses. These proceedings are echoed in the Akkadian myth of Atrahasis (Atra-hasis) (ca. 1700 B.C.E.), where Enki instructs the Lady of Birth (Nintu) to establish a “third (category) among the people,” which includes barren women, a demon who seizes babies from their mothers, and priestesses who are barred from childbearing (3.7.1).

Kybele
The worship of Cybele was originally centered in Phrygia (central Turkey), where she was known as Kubaba or Kybele. The Romans formally adopted her worship in 204 BCE, when they brought a statue representing her from her main shrine in the Phrygian city of Pergamum back to Rome. This statue, from a site in Anatolia dating to the eighth or early seventh century BCE, depicts the Phrygian goddess with two youthful attendants playing a flute and harp.

Attis
Attis, the Phrygian shepherd, whose worship became part of the cult of Cybele. In varying mythological accounts, Attis is killed or is driven insane and castrates himself, as the result of jealousy and passions arising from an ill-fated love affair. Attis’ fate served as the model for the galli priests, who underwent castration to become Cybele’s dedicated and chaste servants.

Gallus
Marble relief of a gallus, or priest of Cybele, with various ritual objects, 2nd century

Archigallus
Marble relief depicting an archigallus, or head priest of the galli, 3rd century CE

Cybele Temple
Remains of the temple of Cybele on the Palatine Hill in Rome

Hijra
Contemporary hijra in India

Clearly, the underlying conceptualization of gender implied by these taxonomies is at variance with the idea that physical sex is fixed, marked by genitalia, and binary. Recent reviews of Greek and Roman medical texts, for example, reveal a notion of gender as grounded in physiology, but the physiology involved is inherently unstable. Masculinity and femininity depend on relative levels of heat and cold in the body (and, secondarily, moisture and dryness). These factors determine the sex of developing fetuses, but even after birth an individual’s gender status was subject to fluctuations in bodily heat. If men were not at risk of literally becoming females, they were in danger of being feminized by any number of causes. A similar hydraulic construction of the body, as Wendy Doniger has termed it, is evident in Hindu belief as well.
The frequent references to priests of the goddess as “eunuchs” or “impotent” males point to another important commonality in the ancient construction of male and female genders. A little-known episode in Roman legal history is especially revealing in this regard. In 77 B.C.E., a slave named Genucius, a priest of Cybele, attempted to take possession of goods left him in a will by a freedman, but this was disallowed by the authorities on the grounds that he had voluntarily mutilated himself (amputatis sui ipsius) and could not be counted among either women or men (neque virorum neque mulierum numero) (Valerius Maximus, 7.7.6). Presumably, only women and men qualified to exercise inheritance rights, and this privilege of their gender identity was, in turn, a function of their ability to reproduce. This seemingly minor case nonetheless underscores the way in which gender identity and citizenship were linked in societies of the Oikoumene region, that is, in patriarchal, agrarian city-states. Gender, to borrow Judith Butler’s terminology, was performative, or rather, to be even more specific, productive. Gender identity hinged not on the degree of one’s masculinity or femininity, the direction of one’s sexual orientation, nor even one’s role in the gendered division of labor but on one’s ability to produce children, in particular males. In a patrilineal kinship system, it is the labor of male children on which the paterfamilias has the greatest claim. As anthropological research has shown, peasants around the world typically seek to improve their lot in life by having more children and thereby increasing the supply of labor for family-based production. Having male children is the central imperative of gender, as a social category, a role, and a personal identity in most patriarchal agrarian societies.
From this perspective, males or females who are unable to reproduce, who are impotent, whether for physiological or psychological reasons, or who lack or forswear heterosexual desire, including those who desire the same sex, all fail to qualify for adult male or female gender identity. Being neither, they tend instead to be categorized together as members of an alternative gender or of subdivisions of male and female genders. Like male and female, these roles are also attributed specific traits, skills, and occupations. In the same way that men’s activities are “male” and women’s are “female,” what galli, hijra or gala do comes to be seen as intrinsic to their alternative gender identities. At the same time, the distinctions…

There was just a small news announcement on the radio in early July:
after a short heat wave, three inmates of the Vacaville Medical
Facility had died in non-air-conditioned cells. Two of those
prisoners, the announcement said, may have died as a result of
medical treatment. No media inquiries were made, and no major news
stories developed because of these deaths.

But what was the medical treatment that may have caused their deaths?
The Medical Facility indicates they were mind control or behavior
modification treatments. A deeper probe into the deaths of these two
inmates unravels a mind-boggling tale of horror that has been part of
California penal history for a long time, and one that caused
national outcries two decades ago.

Mind control experiments have been part of California for decades and
permeate its mental institutions and prisons. But it is not just in
the penal system that mind control measures have been used. Minority
children were subjected to experimentation at abandoned Nike Missile
Sites, and veterans who fought for American freedom were also
subjected to the programs. Funding and experimentation in mind
control have involved the U.S. Health, Education and Welfare
Department, the Department of Veterans Affairs, the Central
Intelligence Agency through the Phoenix Program, the Stanford
Research Institute, the Agency for International Development, the
Department of Defense, the Department of Labor, the National
Institute of Mental Health, the Law Enforcement Assistance
Administration, and the National Science Foundation.

California has been in the forefront of mind control experimentation.
Government experiments also were conducted in the Haight-Ashbury
District in San Francisco at the height of the hippie era. In 1974,
Senator Sam Ervin, of Watergate fame, headed a U.S. Senate
Subcommittee on Constitutional Rights studying the subject
of “Individual rights and the Federal role in behavior modification.”
Though little publicity was given to this committee’s investigation,
Senator Ervin issued a strong condemnation of the federal role in
mind control. That condemnation, however, did not halt mind control
experiments; they just received more circuitous funding.

Many of the case histories concerning individuals on whom the mind
control experiments were used show a strange concept in the minds of
those seeking guinea pigs. Those subjected to the mind control
experiments would be given indefinite sentences; their freedom was
dependent upon how well the experiment went. One individual, for
example, was arrested for joyriding, given a two-year sentence and
held for mind control experiments. He was held for 18 years.

Here are just a few experiments used in the mind control program:

A naked inmate is strapped down on a board. His wrists and ankles are
cuffed to the board and his head is rigidly held in place by a strap
around his neck and a helmet on his head. He is left in a darkened
cell, unable to remove his body wastes. When a meal is delivered, one
wrist is unlocked so he can feel around in the dark for his food and
attempt to pour liquid down his throat without being able to lift
his head.

Another experiment involves a muscle relaxant. Within 30 to 40
seconds paralysis begins to invade the small muscles of the fingers,
toes, and eyes and then the intercostal muscles and diaphragm. The
heart slows down to about 60 beats per minute. This condition,
together with respiratory arrest, sets in for as long as two to five
minutes before the drug begins to wear off. The individual remains
fully conscious and is gasping for breath. It is “likened to dying;
it is almost like drowning,” the experiment states.

Another drug induces vomiting and was administered to prisoners who
didn’t get up on time, were caught swearing or lying, or even failed
to greet their guards formally. The treatment brings about
uncontrolled vomiting that lasts from 15 minutes to an hour,
accompanied by a temporary cardiovascular effect involving changes
in blood pressure.

Another deals with creating body rigidity, aching restlessness,
blurred vision, severe muscular pain, trembling and fogged cognition.
The Department of Health, Education and Welfare and the U.S. Army
have admitted to mind control experiments. Many deaths have occurred.

In tracing the steps of government mind control experiments, the
trail leads to legal and illegal uses, use in covert intelligence
operations, and experiments on innocent people who were unaware that
they were being used.

----------------------------------------------------------------------
Second in a Series
By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

EDITOR’S NOTE: The Sentinel commenced a series on mind control in
early August and suspended it until September because of the
extensive research required after additional information was
received.
In July, two inmates died at the Vacaville Medical Facility.
According to prison officials at the time, the two may have died as a
result of medical treatment; that treatment was the use of mind
control or behavior modification drugs. A deeper study into the
deaths of the two inmates has unraveled a mind-boggling tale of
horror that has been part of California penal history for a long
time, and one that caused national outcries years ago.

In the August article, the Sentinel presented a graphic portrait of
some of the mind control experiments that have been allowed to
continue in the United States. In November 1974, a U.S. Senate
Subcommittee on Constitutional Rights investigated federally funded
behavior modification programs, with emphasis on federal involvement
in, and the possible threat to individual constitutional rights posed
by, behavior modification, especially involving inmates in prisons
and mental institutions.

The Senate committee was appalled after reviewing documents from the
following sources:

Neuro-Research Foundation’s study entitled The Medical Epidemiology
of Criminals.

The Center for the Study and Reduction of Violence from UCLA.

The Closed Adolescent Treatment Center.

A national uproar was created by various articles in 1974, which
prompted the Senate investigation. But after all these years, the
news that two inmates at Vacaville may have died from these same
experiments indicates that though a nation was shocked in 1974,
little was done to correct the experimentation. In 1977, a Senate
Subcommittee on Health and Scientific Research, chaired by Senator
Ted Kennedy, focused on the CIA’s testing of LSD on unwitting
citizens. Only a handful of people within the CIA knew about the
scope and details of the program.

To understand the full scope of the problem, it is important to study
its origins. The Kennedy subcommittee learned about the CIA Operation
M.K.-Ultra through the testimony of Dr. Sidney Gottlieb. The purpose
of the program, according to his testimony, was to “investigate
whether and how it was possible to modify an individual’s behavior by
covert means”. Claiming the protection of the National Security Act,
Dr. Gottlieb was unwilling to tell the Senate subcommittee what had
been learned or gained by these experiments.

He did state, however, that the program was initially engendered by a
concern that the Soviets and other enemies of the United States would
get ahead of the U.S. in this field. Through the Freedom of
Information Act, researchers are now able to obtain documents
detailing the M.K.-Ultra program and other CIA behavior modification
projects in a special reading room located on the bottom floor of the
Hyatt Regency in Rosslyn, VA.

The most daring phase of the M.K.-Ultra program involved slipping
unwitting American citizens LSD in real life situations. The idea for
the series of experiments originated in November 1941 with William
Donovan, founder and director of the Office of Strategic Services
(OSS), the forerunner of the CIA during World War Two. At that time
the intelligence agency invested $5000 in the “truth drug” program.
Experiments with scopolamine and morphine proved both unfruitful and
very dangerous. The program tested scores of other drugs, including
mescaline, barbiturates, benzedrine, and cannabis indica, to name a
few.

The U.S. was highly concerned over the heavy losses of freighters and
other ships in the North Atlantic, all victims of German U-boats.
Information about German U-boat strategy was desperately needed and
it was believed that the information could be obtained through drug-
influenced interrogations of German naval P.O.W.s, in violation of
the Geneva Accords.

Tetrahydrocannabinol acetate, a colorless, odorless marijuana
extract, was used to lace a cigarette or food substance without
detection. Initially, the experiments were done on volunteer U.S.
Army and OSS personnel, and testing was also disguised as a remedy
for shell shock. The volunteers became known as “Donovan’s Dreamers”.
The experiments were so hush-hush that only a few top officials knew
about them. President Franklin Roosevelt was aware of the
experiments. The “truth drug” achieved mixed success.

The experiments were halted when a memo was written: “The drug defies
all but the most expert and searching analysis, and for all practical
purposes can be considered beyond analysis.” The OSS did not,
however, halt the program. In 1943 field tests of the extract were
being conducted, despite the order to halt them. The most celebrated
test was conducted by Captain George Hunter White, an OSS agent and
ex-law enforcement official, on August Del Grazio, aka Augie Dallas,
aka Dell, aka Little Augie, a New York gangster. Cigarettes laced
with the acetate were offered to Augie without his knowledge of the
content. Augie, who had served time in prison for assault and murder,
had been one of the world’s most notorious drug dealers and
smugglers. He operated an opium alkaloid factory in Turkey and he was
a leader in the Italian underworld on the Lower East Side of New
York. Under the influence of the drug, Augie revealed volumes of
information about the underworld operations, including the names of
high-ranking officials who took bribes from the mob. These
experiments encouraged Donovan. A new memo was
issued: “Cigarette experiments indicated that we had a mechanism
which offered promise in relaxing prisoners to be interrogated.”

When the OSS was disbanded after the war, Captain White continued to
administer behavior modifying drugs. In 1947, the CIA replaced the
OSS. White’s service record indicates that he worked with the OSS,
and by 1954 he was a high-ranking Federal Narcotics Bureau officer
who had been loaned to the CIA on a part-time basis.

White rented an apartment in Greenwich Village equipped with one-way
mirrors and surveillance gadgets, and disguised himself as a seaman.
White drugged his acquaintances with LSD and brought them back to his
apartment. In 1955, the operation shifted to San Francisco. In San
Francisco, “safehouses” were established under the code name
Operation Midnight Climax. Midnight Climax hired prostitute addicts
who lured men from bars back to the safehouses after their drinks had
been spiked with LSD. White filmed the events in the safehouses. The
purpose of these “national security brothels” was to enable the CIA
to experiment with the act of lovemaking for extracting information
from men. The safehouse experiments continued until 1963, when CIA
Inspector General John Earman criticized Richard Helms, the director
of the CIA and father of the M.K.-Ultra project. Earman charged that
the new director, John McCone, had not been fully briefed on the
M.K.-Ultra Project when he took office and that “the concepts
involved in manipulating human behavior are found by many people
within and outside the Agency to be distasteful and unethical.” He
stated that “the rights and interest of U.S. citizens are placed in
jeopardy”. The Inspector General stated that LSD had been tested on
individuals at all social levels, high and low, native American and
foreign.

Earman’s criticisms were rebuffed by Helms, who warned, “Positive
operation capacity to use drugs is diminishing owing to a lack of
realistic testing. Tests were necessary to keep up with the Soviets.”
But in 1964, Helms had testified before the Warren Commission, which
was investigating the assassination of President John Kennedy,
that “Soviet research has consistently lagged five years behind
Western research”.

Upon leaving government service in 1966, Captain White wrote a
startling letter to his superior. In the letter to Dr. Gottlieb,
Captain White reminisced about his work in the safehouses with LSD.
His comments were frightening. “I was a very minor missionary,
actually a heretic, but I toiled wholeheartedly in the vineyards
because it was fun, fun, fun,” White wrote. “Where else could a red-
blooded American boy lie, kill, cheat, steal, rape and pillage with
the sanction and blessing of the all-highest?”

(NEXT: How the drug experiments helped bring about the rebirth of the
mafia and the French Connection.)

----------------------------------------------------------------------

Part Three in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

Though the CIA continued to maintain drug experiments in the streets
of America after the program was officially cancelled, the United
States reaped tremendous value from it. With George Hunter White’s
connection to underworld figure Little Augie, connections were made
with Mafia kingpin Lucky Luciano, who was in Dannemora Prison.

Luciano wanted freedom, the Mafia wanted drugs, and the United States
wanted Sicily. The date was 1943. Augie was the go-between for
Luciano and the United States War Department.

Luciano was transferred to a less harsh prison and began to be
visited by representatives of the Office of Naval Intelligence and
by underworld figures, such as Meyer Lansky. A strange alliance was
formed between the U.S. Intelligence agencies and the Mafia, who
controlled the West Side docks in New York. Luciano regained active
leadership in organized crime in America.

The U.S. Intelligence community utilized Luciano’s underworld
connections in Italy. In July of 1943, Allied forces launched their
invasion of Sicily, the beginning push into occupied Europe. General
George Patton’s Seventh Army advanced through hundreds of miles of
territory that was fraught with difficulty: booby-trapped roads,
snipers, and confusing mountain topography, all within close range of
60,000 hostile Italian troops. All this was accomplished in four
days, a military “miracle” even for Patton.

Senator Estes Kefauver’s Senate Subcommittee on Organized Crime
asked, in 1951, how all this was possible. The answer was that the
Mafia had helped to protect roads from Italian snipers, served as
guides through treacherous mountain terrain, and provided needed
intelligence to Patton’s army. The part of Sicily which Patton’s
forces traversed had at one time been completely controlled by the
Sicilian Mafia, until Benito Mussolini smashed it through the use of
police repression.

Just prior to the invasion, the Sicilian Mafia was hardly even able
to continue shaking down farmers and shepherds for protection money.
invasion changed all this, and the Mafia went on to play a very
prominent and well-documented role in the American military
occupation of Italy.

The expedience of war opened the doors to American drug traffic and
Mafia domination. This was the beginning of the Mafia-U.S.
Intelligence alliance, an alliance that lasts to this day and helped
to support the covert operations of the CIA, such as the Iran-Contra
operations. In these covert operations, the CIA would obtain drugs
from South America and Southeast Asia, sell them to the Mafia and use
the money for the covert purchase of military equipment. These
operations accelerated when Congress cut off military funding for the
Contras.

One of the Allies’ top occupation priorities was to liberate as many
of their own soldiers from garrison duties so that they could
participate in the military offensive. In order to accomplish this,
Don Calogero’s Mafia was pressed into service, and in July of 1943,
the Civil Affairs Control Office of the U.S. Army appointed him mayor
of Villalba and other Mafia officials as mayors of other towns in
Sicily.

As the northern Italian offensive continued, Allied intelligence
became very concerned over the extent to which the Italian Communist
resistance to Mussolini had driven Italian politics to the left.
Communist Party membership had doubled between 1943 and 1944, huge
leftist strikes had shut down factories and the Italian underground
fighting Mussolini had risen to almost 150,000 men. By mid-1944, the
situation came to a head and the U.S. Army terminated arms drops to
the Italian Resistance, and started appointing Mafia officials to
occupation administration posts. Mafia groups broke up leftist
rallies and reactivated black market operations throughout southern
Italy.

Lucky Luciano was released from prison in 1946 and deported to Italy,
where he rebuilt the heroin trade. The court’s decision to release
him was made possible by the testimony of intelligence agents at his
hearing, and a letter written by a naval officer reciting what
Luciano had done for the Navy. Luciano was supposed to have served
from 30 to 50 years in prison. Over 100 Mafia members were similarly
deported within a couple of years.

Luciano set up a syndicate which transported morphine base from the
Middle East to Europe, refined it into heroin, and then shipped it
into the United States via Cuba. During the 1950’s, Marseilles, in
Southern France, became a major city for the heroin labs and the
Corsican syndicate began to actively cooperate with the Mafia in the
heroin trade. These operations became popularly known as the French Connection.

In 1948, Captain White visited Luciano and his narcotics associate
Nick Gentile in Europe. Gentile was a former American gangster who
had worked for the Allied Military Government in Sicily. By this
time, the CIA was already subsidizing Corsican and Italian gangsters
to oust Communist unions from the Port of Marseilles. American
strategic planners saw Italy and southern France as extremely
important for their Naval bases as a counterbalance to the growing
naval forces of the Soviet Union. CIO/AFL organizer Irving Brown
testified that by the time the CIA subsidies were terminated in 1953,
U.S. support was no longer needed because the profits from the heroin
traffic were sufficient to sustain operations.

When Luciano was originally jailed, the U.S. felt it had eliminated
the world’s most effective underworld leader and the activities of
the Mafia were seriously damaged. Mussolini had been waging a war
since 1924 to rid the world of the Sicilian Mafia. Thousands of Mafia
members were convicted of crimes and forced to leave the cities and
hide out in the mountains.

Mussolini’s reign of terror had virtually eradicated the
international drug syndicates. Combined with the shipping
surveillance during the war years, heroin trafficking had become
almost nil. Drug use in the United States, before Luciano’s release
from prison, was on the verge of being entirely wiped out.

----------------------------------------------------------------------

Part Four in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

The U.S. government has conducted three types of mind-control
experiments:

Real life experiences, such as those used on Little Augie and the LSD
experiments in the safehouses of San Francisco and Greenwich Village.

Experiments on prisoners, such as in the California Medical Facility
at Vacaville.

Experiments conducted in both mental hospitals and the Veterans
Administration hospitals.

Such experimentation requires money, and the United States government
has funnelled funds for drug experiments through different agencies,
both overtly and covertly.

One of the funding agencies to contribute to the experimentation is
the Law Enforcement Assistance Administration (LEAA), a unit of the
U.S. Justice Department and one of President Richard Nixon’s favorite
pet agencies. The Nixon Administration was, at one time, putting
together a program for detaining youngsters who showed a tendency
toward violence in “concentration” camps. According to the Washington
Post, the plan was authored by Dr. Arnold Hutschnecker. Health,
Education and Welfare Secretary Robert Finch was told by John
Erlichman, Chief of Staff for the Nixon White House, to implement the
program. He proposed the screening of children of six years of age
for tendencies toward criminality. Those who failed these tests were
to be destined to be sent to the camps. The program was never
implemented.

LEAA came into existence in 1968 with a huge budget to assist various
U.S. law enforcement agencies. Its effectiveness, however, was not
considered too great. After it had spent $6 billion, the F.B.I.
reported that general crime rose 31 percent and violent crime rose 50
percent. But little accountability was required of LEAA on how it
spent its funds.

LEAA’s role in the behavior modification research began at a meeting
held in 1970 in Colorado Springs. Attending that meeting were Richard
Nixon, Attorney General John Mitchell, John Ehrlichman, H.R. Haldeman
and other White House staffers. They met with Dr. Bertram Brown,
director of the National Institute of Mental Health, and forged a
close collaboration between LEAA and the Institute. LEAA was a
product of the Justice Department and the Institute was a product of
HEW.

LEAA funded 350 projects involving medical procedures, behavior
modification and drugs for delinquency control. Money from the
Criminal Justice System was being used to fund mental health projects
and vice versa. Eventually, the leadership responsibility and control
of the Institute began to deteriorate, and its scientists began to
answer to LEAA alone.

The National Institute of Mental Health went on to become one of the
greatest supporters of behavior modification research. Throughout the
1960’s, court calendars became blighted with lawsuits on the part
of “human guinea pigs” who had been experimented upon in prisons and
mental institutions. It was these lawsuits which triggered the Senate
Subcommittee on Constitutional Rights investigation, headed by
Senator Sam Ervin. The subcommittee’s harrowing report was virtually
ignored by the news media.

Thirteen behavior modification programs were conducted by the
Department of Defense. The Department of Labor had also conducted
several experiments, as well as the National Science Foundation. The
Veterans’ Administration was also deeply involved in behavior
modification and mind control. Each of these agencies, including
LEAA, and the Institute were named in secret CIA documents as having
provided research cover for the MK-ULTRA program.

Eventually, LEAA was using much of its budget to fund experiments,
including aversive techniques and psychosurgery, which involved, in
some cases, irreversible brain surgery on normal brain tissue for the
purpose of changing or controlling behavior and/or emotions.

Senator Ervin questioned the head of LEAA concerning ethical
standards of the behavior modification projects which LEAA had been
funding. Ervin was extremely dubious about the idea of the government
spending money on this kind of project without strict guidelines and
reasonable research supervision in order to protect the human
subjects. After Senator Ervin’s denunciation of the funding policies,
LEAA announced that it would no longer fund medical research into
behavior modification and psychosurgery. Despite the pledge by LEAA’s
director, Donald E. Santarelli, LEAA ended up funding 537 research
projects dealing with behavior modification. There is strong evidence
to indicate psychosurgery was still being used in prisons in the
1980’s. Immediately after the funding announcement by LEAA, there
were 50 psychosurgical operations at Atmore State Prison in Alabama.
The inmates became virtual zombies. The operations, according to Dr.
Swan of Fisk University, were done on black prisoners who were
considered politically active.

The Veterans’ Administration openly admitted that psychosurgery was a
standard procedure for treatment and not used just in experiments.
The VA Hospitals in Durham, Long Beach, New York, Syracuse and
Minneapolis were known to employ these procedures on a regular basis.
VA clients could typically be subject to these behavior alteration
procedures against their will. The Ervin subcommittee concluded that
the rights of VA clients had been violated.

LEAA also subsidized the research and development of gadgets and
techniques useful to behavior modification. Much of the technology,
whose perfection LEAA funded, had originally been developed and made
operational for use in the Vietnam War. Companies like Bangor Punta
Corporation and Walter Kidde and Co., through its subsidiary Globe
Security System, adapted these devices to domestic use in the U.S.
ITT was another company that domesticated the warfare technology for
potential use on U.S. citizens. Rand Corporation executive Paul Baran
warned that the influx back to the United States of the Vietnam War
surveillance gadgets alone, not to mention the behavior modification
hardware, could bring about “the most effective, oppressive police
state ever created”.

----------------------------------------------------------------------

Fifth in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

One of the fascinating aspects of the scandals that plague the U.S.
Government is the fact that so often the same names appear from
scandal to scandal. From the origins of Ronald Reagan’s political
career, as Governor of California, Dr. Earl Brian and Edwin Meese
played key advisory roles.

Dr. Brian’s name has been linked to the October Surprise, and he is a
central figure in the government’s theft of PROMIS software from
INSLAW. Brian’s role touches everything from the Cabazon Indian
scandals to United Press International. He is one of those
low-profile key
figures.

And, alas, his name appears again in the nation’s behavior
modification and mind control experiments. Dr. Brian was Reagan’s
Secretary of Health when Reagan was Governor. Dr. Brian was an
advocate of state subsidies for a research center for the study of
violent behavior. The center was to begin operations by mid-1975, and
its research was intended to shed light on why people murder or rape,
or hijack aircraft. The center was to be operated by the University
of California at Los Angeles, and its primary purpose, according to
Dr. Brian, was to unify scattered studies on anti-social violence and
possibly even touch on socially tolerated violence, such as football
or war. Dr. Brian sought $1.3 million for the center.

It certainly was possible that prison inmates might be used as
volunteer subjects at the center to discover the unknowns which
triggered their violent behavior. Dr. Brian’s quest for the center
came at the same time Governor Reagan concluded his plans to phase
the state of California out of the mental hospital business by 1982.
Reagan’s plan is echoed today by Governor Pete Wilson’s proposal to
place the responsibility for rehabilitating young offenders squarely
on the shoulders of local communities.

But as the proposal became known more publicly, a swell of
controversy surrounded it. It ended in a fiasco. The inspiration for
the violence center came from three doctors in 1967, five years
before Dr. Brian and Governor Reagan unveiled their plans. Amidst
urban rioting and civil protest, Doctors Sweet, Mark and Ervin of
Harvard put forward the thesis that individuals who engage in civil
disobedience possess defective or damaged brain cells. If this
conclusion were applied to the American Revolution or the Women’s
Rights Movement, a good portion of American society would be labeled
as having brain damage.

In a letter to the Journal of the American Medical Association, they
stated: “That poverty, unemployment, slum housing, and inadequate
education underlie the nation’s urban riots is well known, but the
obviousness of these causes may have blinded us to the more subtle
role of other possible factors, including brain dysfunction in the
rioters who engaged in arson, sniping and physical assault.

“There is evidence from several sources that brain dysfunction
related to a focal lesion plays a significant role in the violent and
assaultive behavior of thoroughly studied patients. Individuals with
electroencephalographic abnormalities in the temporal region have
been found to have a much greater frequency of behavioral
abnormalities (such as poor impulse control, assaultiveness, and
psychosis) than is present in people with a normal brain wave
pattern.”

Soon after the publication in the Journal, Dr. Ervin and Dr. Mark
published their book Violence and the Brain, which included the claim
that there were as many as 10 million individuals in the United
States “who suffer from obvious brain disease”. They argued that the
data of their book provided a strong reason for starting a program of
mass screening of Americans.

“Our greatest danger no longer comes from famine or communicable
disease. Our greatest danger lies in ourselves and in our fellow
humans…we need to develop an ‘early warning test’ of limbic brain
function to detect those humans who have a low threshold for
impulsive violence…Violence is a public health problem, and the
major thrust of any program dealing with violence must be toward its
prevention,” they wrote.

The Law Enforcement Assistance Administration granted the doctors
$108,000, and the National Institute of Mental Health kicked in
another $500,000 under pressure from Congress. Critics believed that
psychosurgery would inevitably be performed in connection with the
program, and that, since it irreversibly impaired people’s emotional
and intellectual capacities, it could be used as an instrument of
repression and social control.

The doctors wanted screening centers established throughout the
nation. In California, the publicity associated with the doctors’
report aided in the development of the Center for the Study and
Reduction of Violence. Both the state and LEAA provided the funding.
The center was to serve as a model for future facilities to be set up
throughout the United States.

The Director of the Neuropsychiatric Institute and chairman of the
Department of Psychiatry at UCLA, Dr. Louis Jolyon West, was selected
to run the center. Dr. West is alleged to have been a contract agent
for the CIA who, as part of a network of doctors and scientists,
gathered intelligence on hallucinogenic drugs, including LSD, for the
super-secret MK-ULTRA program. Like Captain White (see part three of
the series), West conducted LSD experiments for the CIA on unwitting
citizens in the safehouses of San Francisco. He achieved notoriety
for his injection of a massive dose of LSD into an elephant at the
Oklahoma Zoo; the elephant died when West tried to revive it by
administering a combination of drugs.

Dr. West was further known as the psychiatrist who was called upon to
examine Jack Ruby, Lee Harvey Oswald’s assassin. It was on the basis
of West’s diagnosis that Ruby was compelled to be treated for mental
disorders and put on happy pills. The West examination was ordered
after Ruby began to say that he was part of a right-wing conspiracy
to kill President John Kennedy. Two years after the commencement of
treatment for mental disorder, Ruby died of cancer in prison.

After January 11, 1973, when Governor Reagan announced plans for the
Violence Center, West wrote a letter to the then Director of Health
for California, J. M. Stubblebine.

“Dear Stub:

“I am in possession of confidential information that the Army is
prepared to turn over Nike missile bases to state and local agencies
for non-military purposes. They may look with special favor on health-
related applications.

“Such a Nike missile base is located in the Santa Monica Mountains,
within a half-hour’s drive of the Neuropsychiatric Institute. It is
accessible, but relatively remote. The site is securely fenced, and
includes various buildings and improvements, making it suitable for
prompt occupancy.

“If this site were made available to the Neuropsychiatric Institute
as a research facility, perhaps initially as an adjunct to the new
Center for the Prevention of Violence, we could put it to very good
use. Comparative studies could be carried out there, in an isolated
but convenient location, of experimental or model programs for the
alteration of undesirable behavior.

“Such programs might include control of drug or alcohol abuse,
modification of chronic anti-social or impulsive aggressiveness, etc.
The site could also accommodate conferences or retreats for
instruction of selected groups of mental-health related professionals
and of others (e.g., law enforcement personnel, parole officers,
special educators) for whom both demonstration and participation
would be effective modes of instruction.

“My understanding is that a direct request by the Governor, or other
appropriate officers of the State, to the Secretary of Defense (or,
of course, the President), could be most likely to produce prompt
results.”

Some of the planned areas of study for the Center included:

Studies of violent individuals.

Experiments on prisoners from Vacaville and Atascadero, and
hyperkinetic children.

Experiments with violence-producing and violence-inhibiting drugs.

Hormonal aspects of passivity and aggressiveness in boys.

Studies to discover and compare norms of violence among various
ethnic groups.

Studies of pre-delinquent children.

It would also encourage law enforcement to keep computer files on pre-
delinquent children, which would make possible the treatment of
children before they became delinquents.

The purpose of the Violence Center was not just research. The staff
was to include sociologists, lawyers, police officers, clergymen and
probation officers. With the backing of Governor Reagan and Dr.
Brian, West had secured guarantees of prisoner volunteers from
several California correctional institutions, including Vacaville.
Vacaville and Atascadero were chosen as the primary sources for the
human guinea pigs. These institutions had established a reputation,
by that time, of committing some of the worst atrocities in West
Coast history. Some of the experiments differed little from what
the Nazis did in the death camps.

(NEXT: What happened to the Center?)

----------------------------------------------------------------------

Sixth in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

Dr. Earl Brian, Governor Ronald Reagan’s Secretary of Health, was
adamant about his support for mind control centers in California. He
felt the behavior modification plan of the Violence Control Centers
was important in the prevention of crime.

The Violence Control Center was actually the brainchild of William
Herrmann as part of a pacification plan for California. A
counterinsurgency expert for Systems Development Corporation and an
advisor to Governor Reagan, Herrmann worked with the Stanford
Research Institute, the RAND Corporation, and the Hoover Center on
Violence. Herrmann was also a CIA agent who is now serving an
eight-year prison sentence for his role in a CIA counterfeiting
operation. He was also
directly linked with the Iran-Contra affair according to government
records and Herrmann’s own testimony.

In 1970, Herrmann worked with Colston Westbrook as his CIA control
officer when Westbrook formed and implemented the Black Cultural
Association at the Vacaville Medical Facility, a facility which in
July experienced the death of three inmates who were forcibly
subjected to behavior modification drugs. The Black Cultural
Association was ostensibly an education program designed to instill
black pride and identity in prisons, but the Association was really a
cover for an experimental behavior modification pilot project
designed to test the feasibility of programming unstable prisoners to
become more manageable.

Westbrook worked for the CIA in Vietnam as a psychological warfare
expert, and as an advisor to the Korean equivalent of the CIA and for
the Lon Nol regime in Cambodia. Between 1966 and 1969, he was an
advisor to the Vietnamese Police Special Branch under the cover of
working as an employee of Pacific Architects and Engineers.

His “firm” contracted the building of the interrogation/torture
centers in every province of South Vietnam as part of the CIA’s
Phoenix Program. The program was centered around behavior
modification experiments to learn how to extract information from
prisoners of war, a direct violation of the Geneva Accords.

Westbrook’s most prominent client at Vacaville was Donald DeFreeze,
who, between 1967 and 1969, had worked for the Los Angeles Police
Department’s Public Disorder Intelligence unit and later became the
leader of the Symbionese Liberation Army. Many authorities now
believe that the Black Cultural Association at Vacaville was the
seedling of the SLA. Westbrook even designed the SLA logo, the cobra
with seven heads, and gave DeFreeze his African name of Cinque. The
SLA was responsible for the assassination of Marcus Foster,
superintendent of schools in Oakland, and for the kidnapping of Patty
Hearst.

As a counterinsurgency consultant for Systems Development
Corporation, a security firm, Herrmann told the Los Angeles Times
that a good computer intelligence system “would separate out the
activist bent on destroying the system” and then develop a master
plan “to win the hearts and minds of the people”. The San Francisco-
based Bay Guardian recently identified Herrmann as an international
arms dealer working with Iran in 1980, possibly involved in the
October Surprise. Herrmann is in an English prison for
counterfeiting. He allegedly met with Iranian officials to ascertain
whether the Iranians would trade arms for hostages held in Lebanon.

The London Sunday Telegraph confirmed Herrmann’s CIA connections,
tracing them from 1976 to 1986. He also worked for the FBI. This
information was revealed in his London trial.

In the 1970’s, Dr. Brian and Herrmann worked together under Governor
Reagan on the Center for the Study and Reduction of Violence, and
then, a decade later, again worked under Reagan. Both men have been
identified as working for Reagan with the Iranians.

The Violence Center, however, died an agonizing death. Despite the
Ervin Senate Committee’s investigation and castigation of mind
control, the experiments continued. But when the Watergate scandal
broke in
the early 1970’s, Washington felt it was too politically risky to
continue to push for mind control centers.

Top doctors began to withdraw from the proposal because they felt
that there were not enough safeguards. Even the Law Enforcement
Assistance Administration, which funded the program, backed out,
stating that the proposal showed “little evidence of established
research ability of the kind and level necessary for a study of this
scope”.

Eventually it became known that control of the Violence Center was
not going to rest with the University of California, but instead with
the Department of Corrections and other law enforcement officials.
This information was released publicly by the Committee Opposed to
Psychiatric Abuse of Prisoners. The disclosure of the letter resulted
in the main backers of the program bowing out and the eventual demise
of the center.

Dr. Brian’s final public statement on the matter was that the
decision to cut off funding represented “a callous disregard for
public safety”. Though the Center was not built, the mind control
experiments continue to this day.

(NEXT: What these torturous drugs do.)

----------------------------------------------------------------------

Seventh in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

The Central Intelligence Agency held two major interests in the use
of L.S.D. to alter normal behavior patterns. The first interest
centered on obtaining information from prisoners of war and enemy
agents, in contravention of the Geneva Accords. The second was to
determine the effectiveness of drugs used against the enemy on the
battlefield.

The MK-ULTRA program was originally run by a small number of people
within the CIA known as the Technical Services Staff (TSS). Another
CIA department, the Office of Security, also began its own testing
program. Friction arose and then infighting broke out when the Office
of Security commenced to spy on TSS people after it was learned that
LSD was being tested on unwitting Americans.

Not only did the two branches disagree over the issue of testing the
drug on the unwitting, they also disagreed over the issue of how the
drug was actually to be used by the CIA. The Office of Security
envisioned the drug as an interrogation weapon. But the TSS group
thought the drug could be used to help destabilize another country;
it could be slipped into the food or beverage of a public official in
order to make him behave foolishly or oddly in public. One CIA
document reveals that L.S.D. could be administered right before an
official was to make a public speech.

Realizing that gaining information about the drug in real life
situations was crucial to exploiting the drug to its fullest, TSS
started conducting experiments on its own people. There was an
extensive amount of self-experimentation. The Office of Security felt
the TSS group was playing with fire, especially when it was learned
that TSS was prepared to spike the punch at the CIA’s annual office
Christmas party with LSD. L.S.D. could produce serious insanity for
periods of eight to 18 hours and possibly longer.

One of the “victims” of the punch was agent Frank Olson. Having never
had drugs before, Olson was hit hard by the L.S.D. He reported that
every automobile that came by was a terrible monster with fantastic
eyes, out to get him personally. Each time a car passed he would
huddle down against a parapet, terribly frightened. Olson began to
behave erratically. The CIA made preparations to treat Olson at
Chestnut Lodge, but before they could, Olson checked into a New York
hotel and threw himself from his tenth-story room. The CIA was
ordered to cease all drug testing.

Mind control drugs and experiments were torturous to the victims. One
of three inmates who died in Vacaville Prison in July was scheduled
to appear in court in an attempt to stop forced administration of a
drug, the very drug that may have played a role in his death.

Joseph Cannata believed he was making progress and did not need
forced dosages of the drug Haldol. The Solano County Coroner’s Office
said that Cannata and two other inmates died of hyperthermia,
extremely elevated body temperature. Their bodies all had
temperatures of at least 108 degrees when they died. The psychotropic
drugs they were being forced to take can elevate body temperature.

Dr. Ewen Cameron, working at McGill University in Montreal, used a
variety of experimental techniques, including keeping subjects
unconscious for months at a time, administering huge electroshocks
and continual doses of L.S.D.

Massive lawsuits developed as a result of this testing, and many of
the subjects who suffered trauma had never agreed to participate in
the experiments. Such CIA experiments infringed upon the much-honored
Nuremberg Code concerning medical ethics. Dr. Camron was one of the
members of the Nuremberg Tribunal.

L.S.D. research was also conducted at the Addiction Research Center
of the U.S. Public Health Service in Lexington, Kentucky. This
institution was one of several used by the CIA. The National
Institute of Mental Health and the U.S. Navy funded this operation.
Vast supplies of L.S.D. and other hallucinogenic drugs were required
to keep the experiments going. Dr. Harris Isbell ran the program. He
was a member of the Food and Drug Administration’s Advisory Committee
on the Abuse of Depressant and Stimulants Drugs. Almost all of the
inmates were black. In many cases, L.S.D. dosage was increased daily
for 75 days.

Some 1500 U.S. soldiers were also victims of drug experimentation.
Some claimed they had agreed to become guinea pigs only through
pressure from their superior officers. Many claimed they suffered
from severe depression and other psychological stress.

One such soldier was Master Sergeant Jim Stanley. L.S.D. was put in
Stanley’s drinking water and he freaked out. Stanley’s hallucinations
continued even after he returned to his regular duties. His service
record suffered, his marriage went on the rocks and he ended up
beating his wife and children. It wasn’t until 17 years later that
Stanley was informed by the military that he had been an L.S.D.
experiment. He sued the government, but the Supreme Court ruled no
soldier could sue the Army for the L.S.D. experiments. Justice
William Brennen disagreed with the Court decision. He
wrote, “Experimentation with unknowing human subjects is morally and
legally unacceptable.”

Private James Thornwell was given L.S.D. in a military test in 1961.
For the next 23 years he lived in a mental fog, eventually drowning
in a Vallejo swimming pool in 1984. Congress had set up a $625,000
trust fund for him. Large scale L.S.D. tests on American soldiers
were conducted at Aberdeen Proving Ground in Maryland, Fort Benning,
Georgia, Fort Leavenworth, Kansas, Dugway Proving Ground, Utah, and
in Europe and the Pacific. The Army conducted a series of L.S.D.
tests at Fort Bragg in North Carolina. The purpose of the tests were
to ascertain how well soldiers could perform their tasks on the
battlefield while under the influence of L.S.D. At Fort McClellan,
Alabama, 200 officers in the Chemical Corps were given L.S.D. in
order to familiarize them with the drug’s effects. At Edgewood
Arsenal, soldiers were given L.S.D. and then confined to sensory
deprivation chambers and later exposed to a harsh interrogation
sessions by intelligence people. In these sessions, it was discovered
that soldiers would cooperate if promised they would be allowed to
get off the L.S.D.

In Operation Derby Hat, foreign nationals accused of drug trafficking
were given L.S.D. by the Special Purpose Team, with one subject
begging to be killed in order to end his ordeal. Such experiments
were also conducted in Saigon on Viet Cong POWs. One of the most
potent drugs in the U.S. arsenal is called BZ or quinuclidinyl
benzilate. It is a long-lasting drug and brings on a litany of
psychotic experiences and almost completely isolates any person from
his environment. The main effects of BZ last up to 80 hours compared
to eight hours for L.S.D. Negative after-effects may persist for up
to six weeks.

The BZ experiments were conducted on soldiers at Edgewood Arsenal for
16 years. Many of the “victims” claim that the drug permanently
affected their lives in a negative way. It so disorientated one
paratrooper that he was found taking a shower in his uniform and
smoking a cigar. BZ was eventually put in hand grenades and a 750
pound cluster bomb. Other configurations were made for mortars,
artillery and missiles. The bomb was tested in Vietnam and CIA
documents indicate it was prepared for use by the U.S. in the event
of large-scale civilian uprisings.

In Vacaville, psychosurgery has long been a policy. In one set of
cases, experimental psychosurgery was conducted on three inmates, a
black, a Chicano and a white person. This involved the procedure of
pushing electrodes deep into the brain in order to determine the
position of defective brain cells, and then shooting enough voltage
into the suspected area to kill the defective cells. One prisoner,
who appeared to be improving after surgery, was released on parole,
but ended up back in prison. The second inmate became violent and
there is no information on the third inmate.

Vacaville also administered a “terror drug” Anectine as a way
of “suppressing hazardous behavior”. In small doses, Anectine serves
as a muscle relaxant; in huge does, it produces prolonged seizure of
the respiratory system and a sensation “worse than dying”. The drug
goes to work within 30 to 40 seconds by paralyzing the small muscles
of the fingers, toes, and eyes, and then moves into the the
intercostal muscles and the diaphragm. The heart rate subsides to 60
beats per minute, respiratory arrest sets in and the patient remains
completely conscious throughout the ordeal, which lasts two to five
minutes. The experiments were also used at Atascadero.

Several mind altering drugs were originally developed for non-
psychoactive purposes. Some of these drugs are Phenothiazine and
Thorzine. The side effects of these drugs can be a living hell. The
impact includes the feeling of drowsiness, disorientation, shakiness,
dry mouth, blurred vision and an inability to concentrate. Drugs like
Prolixin are described by users as “sheer torture” and “becoming a
zombie”.

The Veterans Administration Hospital has been shown by the General
Accounting Office to apply heavy dosages of psychotherapeutic drugs.
One patient was taking eight different drugs, three antipsychotic,
two antianxiety, one antidepressant, one sedative and one anti-
Parkinson. Three of these drugs were being given in dosages equal to
the maximum recommended. Another patient was taking seven different
drugs. One report tells of a patient who refused to take the drug. “I
told them I don’t want the drug to start with, they grabbed me and
strapped me down and gave me a forced intramuscular shot of Prolixin.
They gave me Artane to counteract the Prolixin and they gave me
Sinequan, which is a kind of tranquilizer to make me calm down, which
over calmed me, so rather than letting up on the medication, they
then gave me Ritalin to pep me up.”

Prolixin lasts for two weeks. One patient describes how the drug does
not calm or sedate nerves, but instead attacks from so deep inside
you, you cannot locate the source of the pain. “The drugs turn your
nerves in upon yourself. Against your will, your resistance, your
resolve, are directed at your own tissues, your own muscles,
reflexes, etc..” The patient continues, “The pain grinds into your
fiber, your vision is so blurred you cannot read. You ache with
restlessness, so that you feel you have to walk, to pace. And then as
soon as you start pacing, the opposite occurs to you, you must sit
and rest. Back and forth, up and down, you go in pain you cannot
locate. In such wretched anxiety you are overwhelmed because you
cannot get relief even in breathing.”

Table of Contents

———————————————————————-
———-

Eighth in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

October 15, 1991

“We need a program of psychosurgery for political control of our
society. The purpose is physical control of the mind. Everyone who
deviates from the given norm can be surgically mutilated.

“The individual may think that the most important reality is his own
existence, but this is only his personal point of view. This lacks
historical perspective.

“Man does not have the right to develop his own mind. This kind of
liberal orientation has great appeal. We must electrically control
the brain. Some day armies and generals will be controlled by
electric stimulation of the brain.” These were the remarks of Dr.
Jose Delgado as they appeared in the February 24, 1974 edition of the
Congressional Record, No. 26., Vol. 118.

Despite Dr. Delgado’s outlandish statements before Congress, his work
was financed by grants from the Office of Naval Research, the Air
Force Aero-Medical Research Laboratory, and the Public Health
Foundation of Boston.

Dr. Delgado was a pioneer of the technology of Electrical Stimulation
of the Brain (ESB). The New York Times ran an article on May 17, 1965
entitled Matador With a Radio Stops Wild Bull. The story details Dr.
Delgado’s experiments at Yale University School of Medicine and work
in the field at Cordova, Spain. The New York Times stated:

“Afternoon sunlight poured over the high wooden barriers into the
ring, as the brave bull bore down on the unarmed matador, a scientist
who had never faced fighting bull. But the charging animal’s horn
never reached the man behind the heavy red cape. Moments before that
could happen, Dr. Delgado pressed a button on a small radio
transmitter in his hand and the bull braked to a halt. Then he
pressed another button on the transmitter, and the bull obediently
turned to the right and trotted away. The bull was obeying commands
in his brain that were being called forth by electrical stimulation
by the radio signals to certain regions in which fine wires had been
painlessly planted the day before.”

According to Dr. Delgado, experiments of this type have also been
performed on humans. While giving a lecture on the Brain in 1965, Dr.
Delgado said, “Science has developed a new methodology for the study
and control of cerebral function in animals and humans.”

The late L.L. Vasiliev, professor of physiology at the University of
Leningrad wrote in a paper about hypnotism: “As a control of the
subject’s condition, when she was outside the laboratory in another
set of experiments, a radio set was used. The results obtained
indicate that the method of using radio signals substantially
enhances the experimental possibilities.” The professor continued to
write, “I.F. Tomaschevsky (a Russian physiologist) carried out the
first experiments with this subject at a distance of one or two
rooms, and under conditions that the participant would not know or
suspect that she would be experimented with. In other cases, the
sender was not in the same house, and someone else observed the
subject’s behavior. Subsequent experiments at considerable distances
were successful. One such experiment was carried out in a park at a
distance. Mental suggestions to go to sleep were complied with within
a minute.”

The Russian experiments in the control of a person’s mind through
hypnosis and radio waves were conducted in the 1930s, some 30 years
before Dr. Delgado’s bull experiment. Dr. Vasiliev definitely
demonstrated that radio transmission can produce stimulation of the
brain. It is not a complex process. In fact, it need not be implanted
within the skull or be productive of stimulation of the brain,
itself. All that is needed to accomplish the radio control of the
brain is a twitching muscle. The subject becomes hypnotized and a
muscle stimulant is implanted. The subject, while still under
hypnosis, is commanded to respond when the muscle stimulant is
activated, in this case by radio transmission.

Lincoln Lawrence wrote a book entitled Were We Controlled? Lawrance
wrote, “If the subject is placed under hypnosis and mentally
programmed to maintain a determination eventually to perform one
specific act, perhaps to shoot someone, it is suggested thereafter,
each time a particular muscle twitches in a certain manner, which is
then demonstrated by using the transmitter, he will increase this
determination even more strongly. As the hypnotic spell is renewed
again and again, he makes it his life’s purpose to carry out this act
until it is finally achieved. Thus are the two complementary aspects
of Radio-Hypnotic Intracerebral Control (RHIC) joined to reinforce
each other, and perpetuate the control, until such time as the
controlled behavior is called for. This is done by a second session
with the hypnotist giving final instructions. These might be
reinforced with radio stimulation in more frequent cycles. They could
even carry over the moments after the act to reassure calm behavior
during the escape period, or to assure that one conspirator would not
indicate that he was aware of the co-conspirator’s role, or that he
was even acquainted with him.”

RHIC constitutes the joining of two well known tools, the radio part
and the hypnotism part. People have found it difficult to accept that
an individual can be hypnotized to perform an act which is against
his moral principles. Some experiments have been conducted by the
U.S. Army which show that this popular perception is untrue. The
chairman of the Department of Psychology at Colgate University, Dr.
Estabrooks, has stated, “I can hypnotize a man without his knowledge
or consent into committing treason against the United States.”
Estabrooks was one of the nation’s most authoritative sources in the
hypnotic field. The psychologist told officials in Washington that a
mere 200 well trained hypnotists could develop an army of mind-
controlled sixth columnists in wartime United States. He laid out a
scenario of an enemy doctor placing thousands of patients under
hypnotic mind control, and eventually programming key military
officers to follow his assignment. Through such maneuvers, he said,
the entire U.S. Army could be taken over. Large numbers of saboteurs
could also be created using hypnotism through the work of a doctor
practicing in a neighborhood or foreign born nationals with close
cultural ties with an enemy power.

Dr. Estabrooks actually conducted experiments on U.S. soldiers to
prove his point. Soldiers of low rank and little formal education
were placed under hypnotism and their memories tested. Surprisingly,
hypnotists were able to control the subjects’ ability to retain
complicated verbal information. J. G. Watkins followed in Estabrooks
steps and induced soldiers of lower rank to commit acts which
conflicted not only with their moral code, but also the military code
which they had come to accept through their basic training. One of
the experiments involved placing a normal, stable army private in a
deep trance. Watkins was trying to see if he could get the private to
attack a superior officer, a cardinal sin in the military. While the
private was in a deep trance, Watkins told him that the officer
sitting across from him was an enemy soldier who was going to attempt
to kill him. In the private’s mind, it was a kill or be killed
situation. The private immediately jumped up and grabbed the officer
by the throat. The experiment was repeated several times, and in one
case the man who was hypnotized and the man who was attacked were
very close friends. The results were always the same. In one
experiment, the hypnotized subject pulled out a knife and nearly
stabbed another person.

Watkins concluded that people could be induced to commit acts
contrary to their morality if their reality was distorted by the
hypnotism. Similar experiments were conducted by Watkins using WACs
exploring the possibility of making military personnel divulge
military secrets. A related experiment had to be discontinued because
a researcher, who had been one of the subjects, was exposing numerous
top-secret projects to his hypnotist, who did not have the proper
security clearance for such information. The information was divulged
before an audience of 200 military personnel.

(NEXT: School for Assassins)

Table of Contents

———————————————————————-
———-

Ninth in a Series

Mind Control: a Navy school for assassins

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

Tuesday, October 22, 1991

In mans quest to control the behavior of humans, there was a great
breakthrough established by Pavlov, who devised a way to make dogs
salivate on cue. He perfected his conditioning response technique by
cutting holes in the cheeks of dogs and measured the amount they
salivated in response to different stimuli. Pavlov verified
that “quality, rate and frequency of the salivation changed depending
upon the quality, rate and frequency of the stimuli.”

Though Pavlov’s work falls far short of human mind control, it did
lay the groundwork for future studies in mind and behavior control of
humans. John B. Watson conducted experiments in the United States on
an 11-month-old infant. After allowing the infant to establish a
rapport with a white rat, Watson began to beat on the floor with an
iron bar every time the infant came in contact with the rat. After a
time, the infant made the association between the appearance of the
rat and the frightening sound, and began to cry every time the rat
came into view. Eventually, the infant developed a fear of any type
of small animal. Watson was the founder of the behaviorist school of
psychology.

“Give me the baby, and I’ll make it climb and use its hands in
constructing buildings or stone or wood. I’ll make it a thief, a
gunman or a dope fiend. The possibilities of shaping in any direction
are almost endless. Even gross differences in anatomical structure
limits are far less than you may think. Make him a deaf mute, and I
will build you a Helen Keller. Men are built, not born,” Watson
proclaimed. His psychology did not recognize inner feelings and
thoughts as legitimate objects of scientific study, he was only
interested in overt behavior.

Though Watson’s work was the beginning of mans attempts to control
human actions, the real work was done by B.F. Skinner, the high
priest of the behaviorists movement. The key to Skinner’s work was
the concept of operant conditioning, which relied on the notion of
reinforcement, all behavior which is learned is rooted in either a
positive or negative response to that action. There are two
corollaries of operant conditioning” Aversion therapy and
desensitization.

Aversion therapy uses unpleasant reinforcement to a response which is
undesirable. This can take the form of electric shock, exposing the
subject to fear producing situations, and the infliction of pain in
general. It has been used as a way of “curing” homosexuality,
alcoholism and stuttering. Desensitization involves forcing the
subject to view disturbing images over and over again until they no
longer produce any anxiety, then moving on to more extreme images,
and repeating the process over again until no anxiety is produced.
Eventually, the subject becomes immune to even the most extreme
images. This technique is typically used to treat people’s phobias.
Thus, the violence shown on T.V. could be said to have the
unsystematic and unintended effect of desensitization.

Skinnerian behaviorism has been accused of attempting to deprive man
of his free will, his dignity and his autonomy. It is said to be
intolerant of uncertainty in human behavior, and refuses to recognize
the private, the ineffable, and the unpredictable. It sees the
individual merely as a medical, chemical and mechanistic entity which
has no comprehension of its real interests.

Skinner believed that people are going to be manipulated. “I just
want them to be manipulated effectively,” he said. He measured his
success by the absence of resistance and counter control on the part
of the person he was manipulating. He thought that his techniques
could be perfected to the point that the subject would not even
suspect that he was being manipulated.

Dr. James V. McConnel, head of the Department of Mental Health
Research at the University of Michigan, said, “The day has come when
we can combine sensory deprivation with the use of drugs, hypnosis,
and the astute manipulation of reward and punishment to gain almost
absolute control over an individual’s behavior. We want to reshape
our society drastically.”

“It’s ironic that the German uranium intended for the Japanese, was ultimately delivered by the Americans.” – John Lansdale Jr.

By Arend Lammertink.

Abstract

For years now, revisionist authors have argued that there is something very wrong with the generally accepted historiography about the complex of factories and concentration camps known as Auschwitz. The debate so far has handled mostly about the question of whether or not it is true that the Germans systematically exterminated large numbers of people in gas chambers. For some reason, the remarkable fact that the supposed “Buna” plant within the complex produced absolutely nothing yet consumed more electricity than the entire city of Berlin thus far almost completely escaped attention. As it turns out, this is just one of the reasons to believe that this plant actually was an Uranium enrichment facility. An Uranium enrichment facility without which there would have been no A-bombing of neither Hiroshima nor Nagasaki.

Needless to say, if this were true and was to become widespread knowledge, it would have a significant impact on global politics, which thus would give us a motive for the relentless suppression of revisionists and their theses all over the Western world.

Introduction

While a lot of literature is available on the question of whether or not the Germans systematically exterminated large numbers of people in the gas chambers at Auschwitz, hardly any literature is available on what the actual purpose of the complex was, if it was not primarily an extermination camp.

The answer to that question not only shines new light on the beginning of the Atomic Age, it also explains why there is a geopolitical motive to suppress the truth about what happened at Auschwitz. At the end of the line, this suppression goes so far that a professional chemist, who wrote a report about the forensic research he conducted at the site, ended up behind bars in Germany.

So, apparently what happened at Auschwitz is important, very important. Germar Rudolf, the mentioned chemist, put it this way:

If the Holocaust were unimportant, we wouldn’t have around 20 countries on this planet outlawing its critical investigation. In fact, this is the only historical topic that is regulated by penal law. This is proof for the fact that the powers that be consider this topic to be the most important issue to keep under their strict control. Those censoring, suppressing powers are the real criminals — not the historical dissidents they send to prison.

I don’t think many people in Europe will disagree that it is important to fight racism and the spreading of hatred. In The Netherlands, where I live, this is regulated by law, which is very reasonable. In practice, such a general formulation of the law has been used successfully to prosecute a number of “holocaust deniers” in The Netherlands (see for example: [1] and [2]), which were given mild sentences in comparison to other countries.

If anything, these sentences make clear that there is absolutely no reason to explicitly make “holocaust denial” as such punishable by law, while doing so makes it next to impossible to perform independent scientific research on this important historic subject. Especially in Germany, where one cannot even defend one’s position on the subject with factual data, it is clear that the German law goes way to far, whereby the sentencing of Germar Rudolf is perhaps the most illustrative example of how well intended laws can go terribly wrong.

The story of Germar Rudolf is told in the following documentary, along with the story of Ernst Zündel and Bradley R. Smith, made by David Cole in 2007. Rudolf ended up behind bars after publishing his “Expert Report on Chemical and Technical Aspects of the “Gas Chambers” of Auschwitz”:

http://www.youtube.com/watch?v=lwentslVpXw

So far, it is clear that engaging in this debate from a historic and scientific perspective, even in The Netherlands, is not without risks. Yet, as law-abiding and freedom loving citizen, we have a moral obligation to speak out against the prosecution of people who are merely doing their job. We cannot allow science and history to be distorted because of geopolitical interests, which is clearly the case here as we are talking about the history of the Atomic weapons of mass destruction which killed at least 129,000 people in August, 1945.

Howard Zinn, who was a political science professor at Boston University, said this about our democratic responsibility to say what we want to say, especially when we deal with “deception of the public by the government in times of war“:

We have a responsibility to speak out, to speak our minds, especially now, and no matter what they say and how they cry for unity and supporting the president and getting in line. We have a democratic responsibility as citizens to speak out and say what we want to say.

One of the other things we need to do is to take a look at history, because history may be useful in helping us understand what is going on. The president isn’t giving us history and the media aren’t giving us history. They never do. Here we have this incredibly complex technologically developed media, but you don’t get the history that you need to understand what is going on today. There is one kind of history that they will give you, because history can not only be used for good purposes, but history can be abused.

History can’t give you definitive and positive answers to the issues that come up today, but it can suggest things. It can suggest skepticism about certain things. It can suggest probabilities and possibilities. There are some things you can learn from historical experience. One thing you can learn is that there is a long history of deception of the public by the government in times of war, or just before war, or to get us into war, going back to the Mexican war, when Polk lied to the nation about what was happening on the boarder between the Oasis River and the Rio Grande River.

[…]

[O]f all the things I’m going to tell you, remember two words. Governments lie. It’s a good starting point.

I’m not saying governments always lie, no they don’t always lie. But it’s a good idea to start off with the assumption that governments lie, and therefore whatever they say, especially when it comes to matters of war and foreign policy. Because when it’s a matter of domestic policy, there are things that you may be able to check up on, because its here and in this country, but something happening very far away, people don’t know very much about foreign policy. We depend on them because they’re supposed to know. They have the experts.

With this in mind, let’s take a look at David Cole, who made an extraordinary documentary about Auschwitz and addressed all of the issues which should have been openly and honestly debated, instead of having been suppressed for geopolitical reasons. Perhaps the most significant part of this documentary is Cole’s interview with Dr Franciszek Piper, “a Polish scholar, historian and author. Most of his work concerns the Holocaust, especially the history of the Auschwitz concentration camp”.

Cole managed to get Dr. Piper on tape, explaining that the alleged “gas chamber” in Auschwitz, which was said by Cole’s tourist guide to be “all original”, actually is a postwar reconstruction by the Soviets. This part starts at 35:50. I would suggest to use your own judgment and decide for yourself whether or not this documentary should be regarded as “historic review” or as “holocaust denial”:

http://www.youtube.com/watch?v=aQjNs-Ght8s

According to the transcript, these are Dr. Piper’s exact words:

So after the liberation of the camp, the former gas chamber presented a view of [an] air [raid] shelter. In order to gain an earlier view …earlier sight…of this object, the inside walls built in 1944 were removed and the openings in the ceiling were made anew.

So now this gas chamber is very similar to this one which existed in 1941-1942, but not all details were made so there is no gas-tight doors, for instance, [and the] additional entrance from the east side rested [remained] as it was made in 1944. Such changes were made after the war in order to gain [the] earlier view of this object.

This historic documentary was made over 20 years ago and today it is just as explosive as it was when it first came out. Recently, David Cole gave a 2.5 hour long radio interview about his experiences and his current view on the subject.

After this short introduction to what this debate has mostly been about, the analysis of the alleged “gas chambers” in Auschwitz and it’s importance in even present day geopolitics, we now continue with the main topic of this article, namely that because the IG Farben plant actually was a Uranium enrichment plant, people like Germar Rudolf are to be considered as having been political prisoners in modern-day Europe, a clear violation of internationally recognized Human Rights.

Hitler’s Atomic Program

For decades, few people questioned whether or not Nazi Germany came close to producing an Atomic Bomb, let alone testing one. Yet, the latter is exactly what has been suggested in 2005 by Rainer Karlsch in his book “Hitler’s Bomb”. Based on eyewitness accounts, he brings forth that in 1944 on the Baltic island of Rügen and in the spring of 1945 in Thuringia atomic bombs were tested. Also, a 1943 OSS report refers to a series of nuclear tests in the Schwabian Alps near Bisingen in July 1943. And measurements are said to have been carried out at the test site that found radioactive isotopes. Daniel W. Michaels’ review of Karlsch book reads:

Although the title of his book, Hitler’s Bomb, suggests more than the author could actually deliver, Karlsch defines the main thesis of his book much more soberly. He states very clearly that German scientists did not develop a nuclear device at all comparable to the American or Soviet hydrogen bombs of the 1950s. However, they knew in general terms how they functioned and were in a position to excite an initial nuclear reaction by means of their perfected hollow-charge technology. Only further research will determine whether their experiment represented fusion or fission reactions, or both.

Then in 2011, accordingly, “shock waves” were sent “through historians who thought that the German atomic programme was nowhere near advanced enough in WW2 to have produced nuclear waste in any quantities”:

German nuclear experts believe they have found nuclear waste from Hitler’s secret atom bomb programme in a crumbling mine near Hanover.

More than 126,000 barrels of nuclear material lie rotting over 2,000 feet below ground in an old salt mine.

[…]

Mark Walker, a US expert on the Nazi programme said: ‘Because we still don’t know about these projects, which remain cloaked in WW2 secrecy, it isn’t safe to say the Nazis fell short of enriching enough uranium for a bomb. Some documents remain top secret to this day’.

‘Claims that a nuclear weapon was tested at Ruegen in October 1944 and again at Ohrdruf in March 1945 leave open a question, did they or didn’t they?’

Dr. Joseph Farrell, author of “Reich of the Black Sun”, commented on this article on his blog, mentioning that the Nazi atom bomb tests have begun to be researched and discussed in Germany, continuing with:

This was supplemented by Carter Hydrick’s wonderful study Critical Mass, a study that in my opinion was so good that it had to be trashed by reviewers (which it was), because the story it contained was so stupendous. According to Hydrick, the Nazi nuclear program involved, at the minimum, a huge uranium enrichment program, and that program was probably successful to the point that the Nazis had enriched, to varying degrees of purity, uranium 235, and some of it was probably of fissile-weapons grade quality.

Also, he makes the point that the discovery of this nuclear waste is very significant, because it confirms Hydrick’s argument, “namely, that the Nazi program was not the haphazard, hit-and-miss, poorly coordinated laboratory affair that got no further than a few clumsy attempts by Heisenberg to build a reactor, but rather, its enrichment program was a huge concern, highly organized, and processing isotopes to a degree similar to, if not exceeding, the Manhattan project in its sheer size.”

Manhattan, we have a problem!

A study of the shipment of (bomb-grade uranium) for the past three months shows the following…: At the present rate we will have 10 kilos about February 7 and 15 kilos about May 1.

This small excerpt from a memo written by chief Los Alamos metallurgist Eric Jette, December 28, 1944[I] reveals that the Manhattan project had a serious problem. You see, the uranium bomb “Little Boy”, which was dropped on Hiroshima, would have required 50 kilos by the end of July, 1945, more than twice the amount the Manhattan project would have been able to produce themselves according to this memo.

This raises the question: “How did they solve this problem?”

In order to answer this question, we would need to know what the bottleneck in the Oak Ridge production rate was. This could be either a supply problem of raw uranium for the plant, or a problem with the production capacity of the plant itself. If raw material were the biggest problem, additional material could have come from multiple (mining) sources, including Nazi Germany. In fact, the Alsos Mission did just that.

However, if the biggest problem was the production capacity of the plant itself, then they must have gotten additional supply of enriched uranium from some external source, be it in metallic or oxide form. And that could have come from only one source: Nazi Germany.

U-235 on the U-234?

On May 14th, 1945, the German submarine U-234 surrendered to the USS Sutton, along with her precious cargo which was intended to be shipped to Japan:

The cargo included technical drawings, examples of the newest electric torpedoes, one crated Me 262 jet aircraft, a Henschel Hs 293 glide bomb and what was later listed on the US Unloading Manifest as 1,200 pounds (540 kg) of uranium oxide. In the 1997 book Hirschfeld, Wolfgang Hirschfeld reported that he saw about 50 lead cubes with 23 centimetres (9.1 in) sides, and “U-235” painted on each, loaded into the boat’s cylindrical mine shafts. According to cable messages sent from the dockyard, these containers held “U-powder”.

[…]

The fact that the ship carried .5 short tons (0.45 t) of uranium oxide remained classified for the duration of the Cold War. Author and historian Joseph M. Scalia claimed to have found a formerly secret cable at Portsmouth Navy Yard which stated that the uranium oxide had been stored in gold-lined cylinders rather than cubes as reported by Hirschfeld; the alleged document is discussed in Scalia’s book Hitler’s Terror Weapons. The exact characteristics of the uranium remain unknown.

There is little doubt that this uranium oxide was shipped to the Manhattan project, as reported by the NY Times, quoting Mr. John Lansdale Jr.:

Historians have quietly puzzled over that uranium shipment for years, wondering, among other things, what the American military did with it. Little headway was made because of Federal secrecy. Now, however, a former official of the Manhattan Project, John Lansdale Jr., says that the uranium went into the mix of raw materials used for making the world’s first atom bombs. At the time he was an Army lieutenant colonel for intelligence and security for the atom bomb project. One of his main jobs was tracking uranium.

Mr. Lansdale’s assertion in an interview raises the possibility that the American weapons that leveled the Japanese cities of Hiroshima and Nagasaki contained at least some nuclear material originally destined for Japan’s own atomic program and, perhaps, for attacks on the United States.

If confirmed, that twist of history could add a layer to the already complex debate over whether the United States had any moral justification for using its atom bombs against Japan.*

[…]

Mr. Lansdale, the former official of the Manhattan Project, displayed no doubts in the interview about the fate of the U-234’s shipment. “It went to the Manhattan District,” he said without hesitation. “It certainly went into the Manhattan District supply of uranium.”

Mr. Lansdale added that he remembered no details of the uranium’s destination in the sprawling bomb-making complex and had no opinion on whether it helped make up the material for the first atomic bomb used in war.

In the documentary “U-234-Hitler’s Last U-Boat” (2001), a few years later, Mr Lansdale did have an opinion:

http://www.youtube.com/watch?v=xw60hyA0DSw

(48:30) “I made arrangements for my staff to retrieve and test the material. I sent trucks to Porthsmouth to unload the uranium and then I sent it to Washington. After the uranium was inspected in Washington, it was sent to Oak Ridge.”

(51:16) “It’s ironic that the German uranium intended for the Japanese, was ultimately delivered by the Americans.”

(54:12) “The submarine was a God send, because it came at the right time and the right place.”

In the same documentary, Hans Bethe, former head of the Theoretical Division at the secret Los Alamos laboratory which developed the US atomic bombs, implicitly gives an estimate of the production capacity of the Oak Ridge plant, together with another person being interviewed:

(49:25) Bethe: “If you have 560 kg of uranium, it would have taken approximately a week in 1945 to separate it into weapons uranium.”

(50:27) Unknown: “500 kg of raw uranium might result in half a kg of uranium 235. Not enough to make a bomb with, but an important increment.”

Based on this, we can estimate that the production capacity of the Oak Ridge facility was approximately half a kg per week. We can compare this with the data in Jette’s memo, about 5 kg in the 12 weeks between February 7th and May 1st, which would mean an average production of about 0.42 kg per week, a pretty good match.

However, contrary to the above quote, the Wikipedia article on the U-234 states that the 560 kg of uranium oxide would have yielded about 3.5 kg of U-235 “after processing”, with a reference to the book “American Raiders” by Samuel Wolfgang. On it’s turn, this refers to “Hitler’s U-boat war: the Hunted” by Clay Blair, wherein the uranium oxide is listed as “1,232 pounds of uranium ore”. After mentioning Karl Pfaff’s (German Sailor) assistance in unloading the “boxes of uranium-oxide ore” from the submarine (also see: [3]), it states:

Scientists say this uranium ore would have yielded about 3.5 kilograms (7.7 pounds) of isotope U-235 (not a U-boat), about one-fifth of what was needed to make an atomic bomb.

Actually, the critical mass for an uranium-235 bomb is about 50 kg, but it depends on the grade: with 20% U-235 it is over 400 kg; with 15% U-235, it is well over 600 kg. So, 3.5 kilograms would be at most one-fifteenth (7%) of what was needed. The actual bomb dropped on Hiroshima used 64 kilograms of 80% enriched uranium, which in practice comprised almost 2.5 critical masses, because the fissile core was wrapped in a neutron reflector which allows a weapon design requiring less uranium.

Because U-235 constitutes about 0.711% by weight of natural uranium, 560 kg of raw uranium would result in about 3.98 kg of U-235. However, we are talking about 560 kg of uranium oxide and not pure uranium, so we have to correct for the amount of oxygen in the material in order to calculate how much uranium 235 this would yield.

If we assume the oxide to be uranium-dioxide (UO2), then we would have to take about 88% (238/(238+2*16)) of the 3.98 kg in order to correct for the oxygen, which would result in about 3.51 kg of U-235.

We can also assume the oxide to be so-called “Yellowcake”, a type of uranium powder as it would be after processing mining ore, but before enrichment. Yellowcake contains about 80% uranium oxide, of which typically 70 to 90 percent triuranium octoxide (U3O8). In that case, we have to correct by about 85% for the oxygen on top of the 80% for 20% impurities, which would result in about 2.70 kg of U-235.

Both of these numbers are significantly higher than the half a kg mentioned in the U-boat documentary, which is rather remarkable. And this is also where this story becomes intriguing, because what we see is that there are discrepancies between what is being told to the public and the hard, factual data that should corroborate with it.

Enter “Critical Mass – The Real Story of the Birth of the Atomic Bomb and the Nuclear Age” by Carter P. Hydrick, who argues that there is a lot more to this story than meets the eye, which can be found in the records of the Manhattan project:

As far as I can tell, I was the first to review the actual uranium enrichment production records, the shipping and receiving records of materials sent from Oak Ridge to Los Alamos, the metallurgical fabrication records of the making of the bombs themselves, and the records and testimony regarding failure to develop a viable triggering device for the plutonium bomb.

[…]

The critical daily production records of Oak Ridge and elsewhere have been all but ignored, though they reveal important information not previously considered in other histories, and although they tell a different story than that presently believed.

[…]

The new-found evidence taken en mass demonstrates that, despite the traditional history, the uranium captured from U-234 was enriched uranium that was commandeered into the Manhattan Project more than a month before the final uranium slugs were assembled for the uranium bomb. The Oak Ridge records of its chief uranium enrichment effort – the magnetic isotope separators known as calutrons – show that a week after Smith’s and Traynor’s 14 June conversation, the enriched uranium output at Oak Ridge nearly doubled – after six months of steady output.

Edward Hammel, a metallurgist who worked with Eric Jette at the Chicago Met Lab, where the enriched uranium was fabricated into the bomb slugs, corroborated this report of late-arriving enriched uranium. Mr. Hammel told the author that very little enriched uranium was received at the laboratory until just two or three weeks – certainly less than a month – before the bomb was dropped.

The Manhattan Project had been in desperate need of enriched uranium to fuel its lingering uranium bomb program. Now it is almost conclusively proven that U-234 provided the enriched uranium needed, as well as components for a plutonium breeder reactor.

The story so far has been recently summarized as follows by Ian Greenhalgh:

Without the German uranium and fuses, no atomic bombs would have been completed before 1946 at the earliest.

That brings us to the question: “How and where could Germany have managed to produce over 500 kg of enriched uranium?”

Buna or Uranium?

German born engineer Hans Baumann, author of a book about Hitler’s alleged escape to Argentina, recently wrote a remarkable introduction to the history of the use of high-speed centrifuges for the enrichment of uranium. He mentions that “while the U.S. had no problem creating sufficient plutonium, creating fissionable uranium proved more difficult” due to the low efficiency of the procedures they tried. In Germany, though, a professor came up with the idea of the (ultra)centrifuge, which proved successful. The rest of the article says it all:

A plant facility was built close to the Polish border (away from possible air attacks). For security reasons, the plant housing the centrifuges, was called a buna-n facility; where buna-n is an artificial rubber. At the end of the war Germany had produced 1,230 pounds of enriched uranium dioxide (UO2, containing the solidified gas of U235).

The Germans then tried to ship this heavy and radioactive metal to Japan but it never arrived.

In January 1945, the Russian army discovered this buna-n facility and evacuated the centrifuges to Russia, where they likely played an important role to create the Russian atomic bomb a few years later.

Hydrick goes into more detail:

By May 1944, compared with American production efforts that at their best resulted in enriching uranium from its raw state of .7 percent to about 10 to 12 percent on the first pass, the first German experimental ultracentrifuge succeeded with enriching the material to seven percent.

[…]

Ultracentrifuge output was so impressive, in fact, that following its very first experimental run, funding and authority were established to build ten additional production model ultracentrifuges in Kandern, a town in the southwest of Germany far from the fighting. […] The Nazis were now committed in a big way to ultracentrifuge production – and therefore to enriching uranium.

[…]

Production for the German isotope enrichment projects, once the experimental and design work were completed by Ardenne and the others, appears to have been undertaken by the I.G. Farben company under orders of the Nazi Party. The company was directed to construct at Auschwitz a buna factory, allegedly for making synthetic rubber.

Following the war, the Farben board of directors bitterly complained that no buna was ever produced despite the plant being under construction for four-and-a-half years; the employment of 25,000 workers from the concentration camp, of whom it makes note the workers were especially well-treated and well fed; and the utilization of 12,000 skilled German scientists and technicians from Farben. Farben also invested 900 million reichsmarks (equal to approximately $2 billion of today’s dollars) in the facility.

The plant used more electrical power than the entire city of Berlin yet it never made any buna, the substance it was “intended” to produce.

When these facts were described to an expert on polymer production (buna is a member of the polymer, or synthetic rubber, family), Mr. Ed Landry, Mr. Landry responded directly, “It was not a rubber plant, you can bet your bottom dollar on that.”

Landry went on to explain that while some types of buna are made by heating, which requires using relatively large amounts of energy, this energy is invariably supplied by burning coal. Coal was plentiful and well-mined in the area and was a key reason for locating the plant at Auschwitz when it was still intended to be a buna facility. The heating-of-buna process, to Landry’s knowledge, was never attempted using electricity, nor could he envision why it would have been. Landry totally dismissed the possibility that a buna plant, had it tried an electric option, would ever use more electricity than the entire city of Berlin. And the investment of $ 2 billion is, “A hell of a lot of money for a buna plant” even these days, according to Mr. Landry.

The probability of the Farben plant having been completed to make buna appears to be very slim to none. The plant contained all of the characteristics of a uranium enrichment plant, however, which undoubtedly it would never have been identified as, but it would have had an appropriate cover story to camouflage it – such as it supposedly being a buna plant. In fact, buna would have been an excellent cover because of the high level and types of technology involved in both.

From this perspective, it would make perfectly sense for the Germans to make sure the 25,000 workers from the concentration camp were well treated, well fed and even to take surprising measures in order to protect their lives from infectious disease (page 175):

The extent of the German effort to improve hygienic conditions at Auschwitz is evident from an amazing decision made in 1943/44. During the war, the Germans developed microwave ovens, not just to sterilize food, but to delouse and disinfect clothing as well. The first operational microwave apparatus was intended for use on the eastern front, to delouse and disinfect soldiers’ clothing. After direct war casualties, infectious diseases were the second greatest cause of casualties of German soldiers. But instead of utilizing these new devices at the eastern front, the German government decided to use them in Auschwitz to protect the lives of the inmates, most of whom were Jews. When it came to protecting lives threatened by infectious disease, the Germans obviously gave priority to the Auschwitz prisoners. Since they were working in the Silesian war industries, their lives were apparently considered comparably important to the lives of soldiers on the battlefield.

Cui bono?

No investigation is complete without a little exercise in “follow the money”, in this case to and from Nazi Germany. While for ages it has been said that all roads lead to Rome, it appears that all financial routes lead to “Wall Street” and have been leading there for decades already, which makes “Wall Street” a global centre of power, the spider in a gigantic web of corporations reaching all over the globe.

Perhaps the first scholar who investigated the involvement of “Wall Street” in geopolitics was Prof. Antony Sutton. In the following interview about his work, he says that “Wall Street” funded and was deeply involved in organizing three forms of socialism. These were the socialist welfare state (particularly under Roosevelt in the US), Bolshevik communism and Nazi national socialism. This gives a very good impression of the extend to which the “Wall Street” crime centre shaped the twentieth century, safely out of the public view:

http://www.youtube.com/watch?v=Sah_Xni-gtg

In “Wall Street and the rise of Hitler” Sutton wrote in his conclusions (chapter 12) about the “Pervasive Influence of International Bankers”:

Looking at the broad array of facts presented in the three volumes of the Wall Street series, we find persistent recurrence of the same names: Owen Young, Gerard Swope, Hjalmar Schacht, Bernard Baruch, etc.; the same international banks: J.P. Morgan, Guaranty Trust, Chase Bank; and the same location in New York: usually 120 Broadway.

This group of international bankers backed the Bolshevik Revolution and subsequently profited from the establishment of a Soviet Russia. This group backed Roosevelt and profited from New Deal socialism. This group also backed Hitler and certainly profited from German armament in the 1930s. When Big Business should have been running its business operations at Ford Motor, Standard of New Jersey, and so on, we find it actively and deeply involved in political upheavals, war, and revolutions in three major countries.”

A recent study by complex systems theorists at the Swiss Federal Institute of Technology in Zurich concluded that a core group of 147 tightly knit companies pretty much control half of the global economy:

AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters’ worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy. […] “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.

In other words: what we see here is that pretty much the same names that came up in Sutton’s research as being involved with shady geopolitic activities, continue to come up in investigations into the financial web of control that shapes geopolitics today. Not one, but two US Presidents gave clear and specific warnings about the potential existence of exactly such kind of corporate control structure, which could acquire “unwarranted powers”.

Interestingly enough, two later US Presidents were closely related to an individual who was “actively and deeply involved” in the group Eisenhower, Kennedy and Sutton warned us about, as reported by “The Guardian”:

The Guardian has obtained confirmation from newly discovered files in the US National Archives that a firm of which Prescott Bush was a director was involved with the financial architects of Nazism.

His business dealings, which continued until his company’s assets were seized in 1942 under the Trading with the Enemy Act, has led more than 60 years later to a civil action for damages being brought in Germany against the Bush family by two former slave labourers at Auschwitz and to a hum of pre-election controversy.

The evidence has also prompted one former US Nazi war crimes prosecutor to argue that the late senator’s action should have been grounds for prosecution for giving aid and comfort to the enemy.

[…]

The first set of files, the Harriman papers in the Library of Congress, show that Prescott Bush was a director and shareholder of a number of companies involved with Thyssen.

The second set of papers, which are in the National Archives, are contained in vesting order number 248 which records the seizure of the company assets. What these files show is that on October 20 1942 the alien property custodian seized the assets of the UBC, of which Prescott Bush was a director. Having gone through the books of the bank, further seizures were made against two affiliates, the Holland-American Trading Corporation and the Seamless Steel Equipment Corporation. By November, the Silesian-American Company, another of Prescott Bush’s ventures, had also been seized.”

Other interesting information can be found in the public record regarding the so-called “Business Plot”, an attempt to overthrow Roosevelt:

The Business Plot (also known as The White House Coup) was a political conspiracy (see Congressional Record) in 1933 in the United States. Retired Marine Corps Major General Smedley Butler claimed that wealthy businessmen were plotting to create a fascist veterans’ organization with Butler as its leader and use it in a coup d’état to overthrow President Franklin D. Roosevelt. In 1934, Butler testified before the United States House of Representatives Special Committee on Un-American Activities (the “McCormack-Dickstein Committee”) on these claims. No one was prosecuted.

BBC4 aired a documentary about this in 2007:

The coup was aimed at toppling President Franklin D Roosevelt with the help of half-a-million war veterans. The plotters, who were alleged to involve some of the most famous families in America, (owners of Heinz, Birds Eye, Goodtea, Maxwell Hse & George Bush’s Grandfather, Prescott) believed that their country should adopt the policies of Hitler and Mussolini to beat the great depression.

Conclusion

While there is no direct evidence to prove for the full 100% that the IG Farben plant near Auschwitz indeed was an Uranium Enrichment facility, there is enough circumstantial evidence to state that it almost certainly was. The combination of the characteristics of the IG Farben plant and the U-234 shipment of Uranium oxide, with which the Manhattan project solved their production problem as well as their plutonium bomb ignition problem, leaves little doubt that the cargo of the U-234 indeed contained enriched Uranium, enriched Uranium that came from the IG Farben plant near Auschwitz. The submarine would not have been a “God send” if it would not have contained enriched Uranium. In other words, what we have here is “probable cause”, enough to warrant an in-depth investigation into the details.

From this perspective, we indeed have a clear motive for both the US as well as Russia to try and hide this story. We also identified a group, centred around “Wall Street, who has an even bigger motive to keep this story under wraps. In other words: we both see a motive and an opportunity for “Wall Street” to hide this story and to cover it up with propaganda, censorship, lies and deception.

And yes, that means that we, as democratic citizen, have a moral obligation to speak out about this and say what we want to say.

And there comes a time when one must take a position that is neither safe, nor politic, nor popular; but one must take it because it is right.

Dr. Martin Luther King, Jr.

Offline references

[I] E.R. Jette to C.S. Smith memorandum: Production rate of 25, December 28, 1944, U.S. National Archives, Washington, D.C., A-84-019-70-24, as quoted by Hydrick.

Extra: correspondence with Dutch and European Parliaments

I sent an e-mail to the chair of the Tweede Kamer at July 21st, 2016 (text below), along with this attachment. So far, I have not received confirmation the chair received my message, which is rather unusual. Normally, one receives a confirmation by snail mail within a couple of days.

At July 20th, 2015, I filed a request to the European Parliament(in Dutch), requesting them to debate the issue of “holocaust denial” in relation to the freedom of speech. I received a confirmation of receival and a letter stating that the EP has already taken some decisions on the subject and therewith declaring “case closed”.

With this, I hope it is clear that I consider this issue, above all, to be a geopolitical issue.

The text of my e-mail to the Dutch Parliament (translated from the Dutch original):

Dear Chairperson,

Enclosed is my article on Auschwitz and the atomic age, in which I set out that the IG Farben “buna” plant near Auschwitz must almost certainly have been a uranium enrichment facility. What is of great geopolitical importance here is that the uranium enriched there, together with ignition mechanisms for plutonium bombs, found its way into the American atomic programme, which in early 1945 on the one hand had great difficulty producing sufficient enriched uranium and on the other hand lacked the technology to detonate plutonium bombs.

Ultimately, we can conclude that the atomic bombings of Japan in August 1945 could not have taken place without the uranium enriched at Auschwitz and the German plutonium-bomb ignition mechanisms.

This information places the debate around the existence of the so-called “gas chambers” at Auschwitz in an entirely new perspective, all the more so because the work of Prof. Sutton makes clear that “Wall Street” was “actively and deeply involved in political upheaval, wars and revolutions” in, among others, Nazi Germany and Bolshevik Russia. Today, too, we can recognise the influence of “Wall Street” in the global network of corporations that controls roughly half of the global economy.

Precisely in these times, in which racial hatred is rearing its head again, it is of great importance that we have objective historiography at our disposal, because a sound understanding of history is indispensable for understanding the present.

What we see in the history surrounding Auschwitz is that both the Americans and the Russians have an interest in suppressing the truth about the atomic programme there. The same goes for “Wall Street”, which has, if anything, an even greater interest in covering up its involvement in “political upheaval, wars and revolutions”.

Since “Wall Street” has enormous financial power at its disposal, and is therefore readily able to steer the public debate in whatever direction it finds desirable, we can conclude that “Wall Street” had the motive, the opportunity and the means to suppress the truth about this important geopolitical story by means of propaganda, lies and deception. In many European member states it is aided in this by legislation that criminalises “holocaust denial” as such, without the accused being allowed to defend themselves by means of a forensic-scientific examination of the facts.

Fortunately, that is not the case in our country, and in practice our legislation has proven more than sufficient to convict individuals who seize upon the discrepancies found around the Auschwitz “gas chambers” in order to spread discriminatory statements and hatred.

That does not alter the fact, however, that the legislation in Germany in particular has gone too far, with the result that individuals such as Germar Rudolf, who carry out historical, technical and scientific fact-finding at their own expense, have ended up behind bars. In this case it means that Mr Rudolf, among others, must in fact be regarded as a political prisoner, a political prisoner within the borders of our European constitutional state.

In other words: we observe here that there are political prisoners within our European constitutional state, which is clearly in conflict with internationally recognised human rights and with the ECHR in particular.

I therefore ask once again for your attention to this dossier and request that you do everything within your power to put an end to this distressing state of affairs. I cannot imagine that your House considers it acceptable that there are political prisoners within the European Union.

I also request that your House take note of the contents of my article and inform me whether your House is of the opinion that, with this publication, I have overstepped the bounds of decency or the spirit of any law adopted by your House, bearing in mind that it is our moral duty to expose matters that are unacceptable, and that freedom of expression is anchored in our constitution.

Awaiting your reply, I remain,

With kind regards,

Ir. Arend Lammertink,

[address]


The Merovingian Mythos, and its Roots in the Ancient Kingdom of Atlantis

By Tracy R. Twyman

The Frankish King Dagobert II, and the Merovingian dynasty from which he came, have been romantically mythologized in the annals of both local legend and modern mystical pseudo-history, but few have understood the true meaning and origins of their alluring mystery. The mystique that surrounds them includes attributions of saintliness, magical powers (derived from their long hair), and even divine origin, because of their supposed descent from the one and only Jesus Christ. However, the importance of the divine ancestry of the Merovingians, and the antiquity from whence it comes, has never to this author’s knowledge been fully explored by any writer or historian. Yet I have uncovered mountains of evidence which indicates that the origins of the Merovingian race, and the mystery that surrounds them, lie ultimately with a race of beings, “Nephilim” or fallen angels, who created mankind as we know it today. It also originated with a civilization, far more ancient than recorded history, from which came all of the major arts and sciences that are basic to civilizations everywhere. As I intend to show, all of the myths and symbolism that are associated with this dynasty can, in fact, be traced back to this earlier civilization. It is known, in some cultures, as Atlantis, although there are many names for it, and it is the birthplace of agriculture, astronomy, mathematics, metallurgy, navigation, architecture, language, writing, and religion. It was also the source of the first government on Earth – monarchy. And the first kings on Earth were the gods.

Their race was known by various names. In Assyria, the Annodoti. In Sumeria, the Annunaki. In Druidic lore, the Tuatha de Danaan. In Judeo-Christian scriptures, they are called the Nephilim, “the Sons of God”, or the Watchers. They are described as having attachments such as wings, horns, and even fish scales, but from the depictions it is clear that these are costumes worn for their symbolic value, for these symbols indicated divine power and royal blood. The gods themselves had their own monarchy, with laws of succession similar to our own, and they built a global empire upon the Earth, with great cities, temples, monuments, and mighty nations established on several continents. Man was separate from the gods, like a domesticated animal, and there was a great cultural taboo amongst the gods against sharing any of their sacred information with humanity, even things such as writing and mathematics. These gods ruled directly over Egypt, Mesopotamia, and the Indus Valley, and their rule is recorded in the histories of all three civilizations.

This global monarchy was the crowning glory of the ages, and the period of their rule came to be called “the Golden Age”, or as the Egyptians called it, “the First Time”, when the gods watched over man directly, like a shepherd does his flock. In fact, they were often called “the Shepherd Kings.” One of the symbols of this world monarchy was an eye hovering over a throne, and this eye now adorns our American dollar bill, presented as the missing capstone of the Great Pyramid of Giza, underneath which are written the words “New World Order.” Clearly this New World Order is the global monarchy that our Founding Fathers (not a Democrat among them) intended for this nation to participate in all along, symbolized by a pyramid as a representation of the ideal and perfectly ordered authoritarian empire. During the Golden Age of the gods, a new king’s ascendance to the global throne would be celebrated by the sacrifice of a horse, an animal sacred to Poseidon, one of the Atlantean god-kings and Lord of the Seas. (1) In fact there is an amusing story about how King Sargon’s rebellious son Sagara tried to prevent his father’s assumption of the world throne from being solidified by stealing his sacrificial horse. The horse was not recovered until years later, and Sagara, along with the “sons of Sagara”, i.e., those members of his family who had assisted him, were forced to dig their own mass grave. This grave was oddly called “the Ocean.”

It was a rebellion such as this that led to the downfall of the entire glorious empire. At some point, it is told, some of the gods broke rank. This is again recorded in just about every culture on Earth that has a written history or oral tradition. Some of the gods, finding human females most appealing, intermarried with them, breaking a major taboo within their own culture, and creating a race of human/god hybrids. Some of these offspring are described as taking the form of giants, dragons, and sea monsters, while others are said to have borne a normal human countenance, with the exception of their shimmering white skin and their extremely long life spans. This is the bloodline that brought us Noah, Abraham, Isaac, Jacob, King David, Jesus Christ, and many others – in other words, the “Grail bloodline.” Legend has it that these beings taught mankind their secrets, including the above-mentioned arts of civilization, as well as a secret spiritual doctrine that only certain elect humans (their blood descendants) would be allowed to possess. They created ritualistic mystery schools and secret societies to pass this doctrine down through the generations.

However, these actions (the interbreeding with and sharing of secrets with humans) incurred the wrath of the Most High God, and a number of other gods who were disgusted by this interracial breeding. This sparked the massive and devastating battle of the gods that has come down to us in the legend of the “war in Heaven.” Then, in order to cleanse the Earth’s surface of the curse of humanity, they covered it with a flood. Interestingly, this flood is mentioned in the legends of almost every ancient culture on Earth, and the cause is always the same. Often the waters are described as having come from inside the Earth. “The Fountains of the deep were opened”, it is said. “Suddenly enormous volumes of water issued from the Earth.” Water was “projected from the mountain like a water spout.” The Earth began to rumble, and Atlantis, fair nation of the gods, sunk beneath the salty green waves. As we shall see, this is analogous to part of the “war in Heaven” story when the “rebellious” angels or gods were punished by being cast down “into the bowels of the Earth” – a very significant location.

To be certain, some of the Atlanteans managed to survive, and many books have been written about the Atlantean origin of the Egyptian, Sumerian, Indo-Aryan, and native South American civilizations (bringing into question the validity of the term “Native American”). Little, however, has been written about those who escaped into Western Europe, except for a passing reference in Ignatius Donnelly’s Atlantis: The Antediluvian World, in which he writes:

“The Gauls [meaning the French] possessed traditions upon the subject of Atlantis which were collected by the Roman historian Timagenes, who lived in the first century before Christ. He represents that three distinct people dwelt in Gaul: 1. The indigenous population, which I suppose to be Mongoloids, who had long dwelt in Europe; 2. the invaders from a distant land, which I understand to be Atlantis; 3. The Aryan Gaul.”

That the Merovingian bloodline came from elsewhere is clear because of the legend that surrounds their founder, King Meroveus, who is said to have been the spawn of a “Quinotaur” (a sea monster), who raped his mother when she went out to swim in the ocean. Now it becomes obvious why he is called “Meroveus”, because in French, the word “mer” means sea. And in some traditions, Atlantis was called Meru, or Maru. (2) For these gods, navigation above all was important to them, for it was their sea power that maintained their military might and their successful mercantile trade. (3) The Atlanteans were associated with the sea and were often depicted as mermen, or sea monsters, with scales, fins, and horns. They were variously associated with a number of important animals, whose symbolism they held sacred: horses, bulls, goats, rams, lions, fish, serpents, dragons, even cats and dogs. All of these things relate back to the sea imagery with which these gods were associated.

Now let’s go back to the Quinotaur, which some have named as being synonymous with Poseidon, the Greek god of the sea and, according to Plato, one of the famous kings of Atlantis. Others have seen it as being emblematic of the fish symbol that Christ is associated with, thus indicating that he was in fact the origin of the Merovingian bloodline. However, the roots of this Quinotaur myth are far more ancient. The word itself can be broken down etymologically to reveal its meaning. The last syllable, “taur”, means “bull.” The first syllable “Quin”, or “Kin”, comes from the same root as “king”, as well as the Biblical name of Cain, whom many have named as the primordial father of the Grail family. (4) The idea of the “King of the World” taking the form of a sea-bull was a recurring theme in many ancient cultures, most notably in ancient Mesopotamia. In fact it originated with that dynasty of kings who reigned over the antediluvian world and who were all associated with the sea, as well as this divine animal imagery. These kings included Sargon, Menes, and Narmar. Their historical reality morphed into the legends we have in many cultures of gods said to have come out of the sea at various times and to teach mankind the basic arts of civilization. They were known by various names, such as Enki, Dagon, Oannes, or Marduk (Merodach). They were depicted as half-man and half-fish, half-goat and half-fish, or half-bull and half-fish, but as I have said, in many of these depictions it is clear that this effect was achieved merely by the wearing of costumes, and that these god-kings were using this archetypal imagery to deify themselves in the minds of their subjects.

Dagon was depicted with a fish on his head, the lips protruding upward, making what were referred to as “horns.” This may be the origin for the custom (common in the ancient world) of affixing horns to the crown of a king. It has also been historically acknowledged as the origin of the miter worn by the Catholic Pope. (5) The Christian Church has always been associated with fish. Christ himself took on that imagery, as did John the Baptist, and the early Christians used the fish sign of the “Ichthys” to designate themselves. From the name “Oannes” we get the words “Uranus” and “Ouranos”, but also supposedly “Jonah”, “Janus”, and “John.” Perhaps we finally now understand why the Grand Masters of the Priory of Sion assume the symbolic name of “John” upon taking office.

The syllable “dag” merely means “fish”, which makes it interesting to note that the Dogon tribe of Africa, who have long baffled astronomers with their advanced knowledge of the faraway star-system from which they say their gods came, claim that these gods were “fish-men.” We may wonder if the words “dag” and “dog” are not etymologically related, especially since the star from whence these fish-men supposedly came is named Sirius, “the Dog Star.” From Dagon comes our word “dragon”, as well as the biblical figure of Leviathan, “the Lord of the Deep”, a title also applied to Dagon. In fact, many of these Atlantean god-kings received the titles “the Lord of the Waters”, “The Lord of the Deep”, or “the Lord of the Abyss”, which appear to have been passed down from father to son, along with the throne of the global kingdom. These kings were specifically associated with the Flood of Noah, which, as I have mentioned, destroyed their global kingdom, and was somehow linked to their disastrous breeding experiment with the human race that led to the “Grail bloodline.” For this they were consigned to the “Abyss” or the underworld, which is why these gods were known as the lords of both.

In addition, Enki was known as the “Lord of the Earth”, and it is because of this “amphibious” nature of their progenitor, who reigned over both land and sea, that the Merovingians are associated with frogs. But this “Lord of the Earth” title is significant, for this is a title also given to Satan. It has been acknowledged elsewhere that Enki, as the “fish-goat man”, is the prototype for the Zodiac sign of Capricorn, which is itself recognized as the prototype for the modern conception of Satan or Lucifer. Furthermore, a well-known and pivotal episode in Enki’s career was his fight against his brother Enlil over the succession of the global throne. Enki eventually slew Enlil, something that is recorded in the Egyptian myth of Set murdering Osiris, and perhaps in the Biblical story of Cain murdering Abel. The connection between Enki and Enlil and Cain and Abel can be further proven by the fact that Enki and Enlil were the sons of Anu (in some Sumerian legends, the first god-king on Earth), whereas Cain and Abel were the sons of the first man, called “Adamu” in Sumerian legends. “Adamu” and “Anu” appear to be etymologically related.

This family feud erupted into a long, drawn-out battle between the gods, who were split into two factions over the issue. These appear to be the same two factions who were at odds over the mating of gods and men to create the Grail bloodline. Those who supported Enki/Satan and Cain were clearly the ones who were inclined to breed with mankind, perhaps in an attempt to create a hybrid race that could assist them in retaining the throne for Cain. But they were overpowered. After they lost the “war in Heaven”, they were cast into the Abyss (according to legend, now the realm of Satan), and the Earth was flooded so as to rid it of their offspring.

Yet according to the legends, those gods who had created the hybrid race contacted one of their most favored descendants (called Uta-Napishtim in the Sumerian legends, or Noah in the Jewish), helping him to rescue himself and his family, preserving the seed of hybrid humanity. (6) We see remnants of this in the Vedic legends of the Flood, in which the Noah figure, here called “Manu”, is warned about the Flood by a horned fish (who turns out to be the Hindu god Vishnu in disguise). The fish tells Manu to build a ship, and then tie its tip to his horn. He then proceeds to tow Manu’s ship to safety upon a high mountain. So clearly Vishnu is connected to Enki, Dagon, and Oannes, and clearly he is the same one who saved Noah from the Flood. Yet this very deed became attributed, in the Old Testament, to the same god, Jehovah, who had purportedly caused the Flood to begin with. In fact the word Jehovah, or “Jah” is said to have evolved from the name of another Sumerian sea god-king, Ea, “the Lord of the Flood.” Likewise, Leviathan is responsible, according to some references, for “vomiting out the waters of the Flood.” This occurs at the Apocalypse in the Revelation of St. John the Divine as well. Leviathan, like many of these sea gods, was the Lord of the Abyss, and these waters were believed to be holding the Earth up from underneath, in the regions of Hell. Yet “Leviathan” is almost surely etymologically related to the Jewish name “Levi”, and therefore to the “tribe of Levi”, the priestly caste of the Jews that formed part of Christ’s lineage.

This dual current, being associated with both the heavenly and the infernal, with both Jesus and Jehovah, as well as Satan and Lucifer, is something that is consistently found throughout the history of the Merovingian dynasty, as well as all of the other Grail families, and the entire Grail story itself. It is at the heart of the secret spiritual doctrine symbolized by the Grail. This symbolism hits you immediately when you walk through the door of the church at Rennes-le-Chateau, France, and see those opposing statues of the demon Asmodeus and Jesus Christ staring at the same black and white chequered floor, which itself symbolizes the balance of good and evil. This principle is further elucidated by the words placed over the doorway, “This place is terrible, but it is the House of God and the Gateway to Heaven.” This phrase turns up in two significant places. One is in the Bible, when Jacob has his vision of the ladder leading to Heaven, with angels ascending and descending. The other is in The Book of Enoch, when Enoch is taken for a tour of Hell. The existence of this phrase at the entrance to the church, coupled with the images that meet you immediately therein, render the meaning obvious. For Berenger Sauniere, who arranged these strange decorations, this Church represented some kind of metaphysical gateway between Heaven and Hell.

For this reason, the double-barred Cross of Lorraine, symbolizing this duality, has come to be associated with the Merovingians. In a now-famous poem by Charles Peguy, it is stated:

“The arms of Jesus are the Cross of Lorraine,
Both the blood in the artery and the blood in the vein,
Both the source of grace and the clear fountaine;

The arms of Satan are the Cross of Lorraine,
And the same artery and the same vein,
And the same blood and the troubled fountaine.”

The reference to Satan and Jesus sharing the same blood is very important. A tradition exists, one which finds support in The Book of Enoch and many other texts, that Jesus and Satan are brothers, both sons of the Most High God, and that they both sat next to his throne in Heaven, on the right and left sides, respectively, prior to Satan’s rebellion and the War in Heaven. This may be just another version of the persistent and primordial “Cain and Abel” story. It makes sense that Satan should be a direct son of God, since he is described as God’s “most beloved angel” and “the brightest star in Heaven.” (7)

However, this symbol is far older than the modern conceptions of Christ and Satan, or Lucifer. This symbol can be traced back to the hieroglyphs of ancient Sumer, where it was pronounced “Khat”, “Kad”, and sometimes even “Kod.” This was another title for the kings who were known as gods of the sea, and the word “Khatti” became associated with this entire race. Their region’s capital was called “Amarru” – “the Land to the West” (like Meru, the alternate term for Atlantis). This land was symbolized by a lion, which may explain the origin of the word “cat”, as well as why the lion is now a symbol of royalty. Furthermore, the word “cad” or “cod” has also become associated with fish and sea creatures in the Indo-European language system. (8) I would argue that this was at the root of the word “Cathari” (the heretics associated with the Holy Grail who occupied the Languedoc region of France that the Merovingians ruled over), as well as Adam Kadmon, the Primordial Man of alchemy, and “Caduceus”, the winged staff of Mercury. It is also the root for the name of the Mesopotamian kingdom of “Akkadia”, which itself has morphed into “Arcadia”, the Greek concept of Paradise. This further morphs into “acacia”, the traditional Masonic “sprig of hope” and symbol of resurrection after death.

Perhaps this sheds further light on the phrase “Et in Arcadia Ego”, which pops up more than once in association with the mystery of Rennes-le-Chateau and the Merovingians. This phrase was illustrated by Nicolas Poussin with the scene of a tomb, a human skull, and three shepherds. The tomb and skull clearly represent death, while the Sprig of Acacia implied by the word “Arcadia” translates to “resurrection from death.” The shepherds, furthermore, represent the divine kingship of the Atlantean gods and the Grail bloodline, for these god-monarchs were also known as the “Shepherd Kings” (a title, notably, taken up by Jesus as well). This indicates that it is the global monarchy of these Atlantean gods that shall rise again from the tomb, perhaps through the Merovingian bloodline.

This archetype of the fallen king who shall one day return, or the kingdom that disappears, only to rise again in a new, golden age, is a very common one, and one that I have shown in another article to be integral to the Grail legend. It was also one used quite effectively by the last of the Merovingian kings who effectively held the throne of the Austrasian Empire – this magazine’s mascot, Dagobert II. Dagobert’s entire life, as historically recorded, is mythological and archetypal. His name betrays the divine origins of his bloodline. “Dagobert” comes, of course, from Dagon. Now the word “bert”, as the author L.A. Waddell has shown, has its roots in the word “bara”, or “para”, or Anglicized, “pharaoh”, a “priest-king of the temple (or house).” So Dagobert’s name literally means “Priest-King of the House of Dagon.” Interestingly, a rarely-found but nonetheless authentic variation on Dagobert’s name was “Dragobert”, emphasizing his lineage from the beast of the deep waters, the dragon Leviathan.

Dagobert made use of the myth of the returning king early on in life. His father had been assassinated when he was five years old, and young Dagobert was kidnapped by then Palace Mayor Grimoald, who tried to put his own son on the throne. He was saved from death, but an elaborate ruse was laid out to make people think otherwise. Even his own mother believed he was dead, and allowed his father’s assassins to take over, placing Grimoald’s son on the throne. Dagobert was exiled to Ireland, where he lay in wait for the opportunity to reclaim his father’s throne. This opportunity showed itself in the year 671, when he married Giselle de Razes, daughter of the count of Razes and niece of the king of the Visigoths, allying the Merovingian house with the Visigothic royal house. This had the potential for creating a united empire that would have covered most of what is now modern France. This marriage was celebrated at the Church of St. Madeleine in Rhedae, the same spot where Sauniere’s Church of St. Madeleine at Rennes-le-Chateau now rests. There is an existing rumor that Dagobert found something there, a clue which led him to a treasure buried in the nearby Montsegur, and this treasure financed what was about to come. This was the re-conquest of the Aquitaine and the throne of the Frankish kingdom. As Baigent et al. write in Holy Blood, Holy Grail, “At once he set about asserting and consolidating his authority, taming the anarchy that prevailed throughout Austrasia and reestablishing order.” The fallen king had risen from his ashes, born anew as Dagobert II, and had come to once more establish firm rule and equilibrium in his country. The similarities to the Parzival/Grail story don’t even need to be repeated.

Sadly, Dagobert II would himself play the role of the fallen king just a few years later, in 679, and the circumstances were decidedly strange. You see, since the time of King Clovis I, the Merovingian kings had been under a pact with the Vatican, in which they had pledged their allegiance to the Mother Church in exchange for Papal backing of their united empire of Austrasia. They would forever hold the title of “New Constantine”, a title that would later morph into “Holy Roman Emperor.” But that “allegiance” on the part of the Merovingians towards the Church began to wear thin after a while. Obviously, given their infernal and divine origin, their spiritual bent was slightly different from that of organized Christianity. In addition, as direct descendants of the historical Christ himself, they would have possessed access to the secret teachings of Christ, no doubt shockingly different from the ones promoted by the Church, and reflecting more of the “secret doctrine” of the rebellious gods that I have talked about in this article. Any public knowledge of this or the blood relationship between Christ and the Merovingians would have been disastrous for the Church. Christ would therefore be a man, with antecedents and descendants, instead of the “son of God, born of a virgin” concept promoted by the Church. Seeing in Dagobert a potential threat, the Roman church entered into a conspiracy with Palace Mayor Pepin the Fat.

On December 23, while on a hunting trip, Dagobert was lanced through the left eye by his own godson, supposedly on Pepin’s orders. There are many aspects to this event that appear to be mythologically significant. For one thing, it took place in the “Forest of Woevres”, long held sacred, and host to annual sacrificial bear hunts for the Goddess Diana. Indeed, the murder may have taken place on such a hunt. This was near the royal Merovingian residence at Stenay, a town that used to be called “Satanicum.” We must also consider the date itself, which was almost precisely at the beginning of the astrological period of Capricorn. As I have mentioned, Capricorn is based on Enki, and is thus connected to the Quinotaur that spawned the Merovingian bloodline. It is also close to the Winter Solstice, the shortest day in the year, when the Sun was said to “die”, mythologically, and turn black, descending into the underworld. This “black” period of the Sun is associated with the god Kronos or Saturn, another horned sea-god, ruler of the underworld, and king of Atlantis who figures repeatedly in this Grail/Rennes-le-Chateau mystery. (9) Secondly, the murder is said to take place at midday, which, as I have mentioned in another article, is an extremely significant moment in time for mystery schools of the secret doctrine, like Freemasonry. The parchments found by Berenger Sauniere and the related poem, Le Serpent Rouge, make special mention of it. This is when the Sun is highest in the sky. The fact that Dagobert’s murder was committed by a family member is significant too. This is similar to the “Dolorous Stroke” that wounds the Fisher King in the Grail story, something which also took place at midday and was inflicted by the king’s own brother. In this story, the brother who wounds the Fisher King is known as the “Dark Lord”, and during the fight he is wounded in the left eye, precisely as Dagobert was wounded. The same thing happened to Horus in Egyptian mythology, fighting his uncle, Set. The “Left Eye of Horus” came to symbolize the hidden knowledge of the gods, just as the “left hand path” does today. Dagobert’s death appears to follow the same patterns as many other fallen kings or murdered gods whose death must be avenged. It is meant to symbolize the concept of the lost or fallen kingdom the same way the Dolorous Stroke does in the Grail story.

Clearly, Dagobert’s death meant the end for the Merovingian kingdom. All subsequent Merovingian kings were essentially powerless, and they were officially thought to have died out with Dagobert’s grandson, Childeric III. Forty-nine years later, Charles Martel’s grandson, Charlemagne, was anointed Holy Roman Emperor. But in 872, almost 200 years after his death, Dagobert was canonized as a Saint, and the date of his death, December 23, became “St. Dagobert’s Day.” Baigent et al. write:

“The reason for Dagobert’s canonization remains unclear. According to one source it was because his relics were believed to have preserved the vicinity of Stenay against Viking raids – though this explanation begs the question, for it is not clear why the relics should have possessed such powers in the first place. Ecclesiastical authorities seem embarrassingly ignorant on the matter. They admit that Dagobert, for some reason, became the object of a fully fledged cult… But they seem utterly at a loss as to why he should have been so exalted. It is possible, of course, that the Church felt guilty about its role in the king’s death.”

Guilty, or afraid? For surely they knew that this “Priest-King of the House of Dagon”, with his divine lineage, so beloved by his people that they worshipped him like a god 200 years later, would of course be avenged for his treacherous murder. Surely they knew, as most Dagobert’s Revenge readers know, that the Merovingian bloodline didn’t die out, surviving through his son Sigisbert, and continues to jockey for the throne of France to this very day through the actions of various royal bloodlines throughout Europe. Surely they knew that this kingdom would rise again, and that the lost king would return someday. The seeds of his return have already been planted. France is united into the political mass that Dagobert had envisioned it to be when he united Austrasia, and the “Holy Roman Empire”, which the Merovingian kings were clearly attempting to form with the help of the Vatican, has now become a reality in the form of the European Union. During WWII and immediately afterwards, the Priory of Sion, that secret order dedicated to the Merovingian agenda, openly campaigned for a United States of Europe. They even proposed a flag, consisting of stars in a circle, which is identical to the flag used by the European Union today. (10) Furthermore, the world empire of the Atlantean kings who spawned the Merovingians is more complete now than it has ever been since the gods left the earth during the Deluge. The United Nations, a feeble example, will surely give way at some point to a united world government strong enough and glorious enough to be called an empire. The fallen kingdom of the gods is clearly returning, and the new Golden Age is upon us. If this author’s hunch is correct, this is, indeed, a glorious time to be alive.

Endnotes:

(1) Recall that Merovingian King Clovis was buried with a severed horse’s head.

(2) It is also the name of the famous “world mountain” of Eastern tradition.

(3) Note that “mer” is also the origin of the word “mercantile.”

(4) Cain’s name has been said to be the origin of the word “king.”

(5) Now we understand why, in the post-mortem photo of Berenger Sauniere lying on his death bed, this small parish priest is seen next to a bishop’s miter.

(6) Uta-Napishtim contains the Sumerian and Egyptian word for fish, “pish”, and perhaps we can see why some authors have claimed that the character of Noah is in fact based on Oannes, Dagon, or Enki as well.

(7) The Book of Enoch refers to the Watchers, or Nephilim, as “stars”, with various “watchtowers” in the houses of the Zodiac. Bear in mind that the ancients saw the sky above as a giant “sea”, the waters of which were kept at bay by the “Firmament of Heaven” – that is, until the Flood.

(8) At this writing, a large sea serpent 20 meters long, named “Cadborosaurus Willsi” and nicknamed “Caddy”, has just been discovered off the coast of Canada.

(9) Kronos or Saturn is the inspiration for the figures of Capricorn and the Judeo-Christian Satan.

(10) This flag was shown carried by a divine white horse, a symbol of Poseidon and world monarchy.


Monarchy: The Primordial Form of Government, and its Basis in the “Lord of the Earth” Concept

By Tracy R. Twyman

When the Stewart King James VI of Scotland ascended the throne of England to become King James I of Great Britain, he made a speech that shocked and appalled the nobles sitting in Parliament, who had been waxing increasingly bold over the last few years, attempting to limit the powers of the crown to strengthen their own. What shocked them was that James used his coronation speech to remind them of the ancient, traditional belief that a monarch is chosen by God to be his emissary and representative on Earth, and ought therefore to be responsible to no one but God. In other words, James was asserting what has become known to history as “the Divine Right of Kings”, and the nobles didn’t like it one bit. Quotes from the speech show how inflammatory his words actually were:

“The state of monarchy is the most supreme thing upon earth, for kings are not only God’s lieutenants upon earth, and sit upon God’s throne, but even by God himself are called gods… In the Scriptures kings are called gods, and so their power after a certain relation is compared to divine power. Kings are also compared to fathers of families: for a king is truly Parens patriae, the politique father of his people… Kings are justly called gods, for that they exercise a manner of resemblance of divine power upon earth: for if you will consider the attributes to God, you shall see how they agree in the person of a king.”

The nobles were aghast. This fat, bloated pustule telling everyone to worship him as a god! It seemed patently ridiculous. Even more offensive, James finished up his speech by putting Parliament in its place, basically telling them that, since he ruled by the grace of God, any act or word spoken in contradiction of him was an act against God himself. James continued:

“I conclude then this point, touching the power of kings with this axiom of divinity, That as to dispute what God may do is blasphemy… so is it sedition in subjects to dispute what a king may do in the height of his power. I would not have you meddle with such ancient rights of mine as I have received from my predecessors… All novelties are dangerous as well in a politic as in a natural body, and therefore I would be loath to be quarreled in my ancient rights and possessions, for that were to judge me unworthy of that which my predecessors had and left me.”

Although it was James I that made the concept famous, he certainly did not invent the idea of Divine Right. The concept is, as I shall show, as old as civilization itself.

As harsh and dictatorial as it may seem, such a system actually protected the rights of individual citizens from even larger and more powerful bullies such as the Parliament and the Pope. When power rests ultimately in the hands of a single individual, beholden to nobody except God, who need not appease anyone for either money or votes, injustices are more likely to be righted after a direct appeal to the king. Furthermore, past monarchs who held their claims to power doggedly in the face of increasing opposition from the Catholic Church managed, as long as they held their power, to save their subjects from the forced religious indoctrination and social servitude that comes with a Catholic theocracy. Author Stephen Coston wrote in 1972’s Sources of English Constitutional History that:

“Without the doctrine of the Divine Right, Roman Catholicism would have dominated history well beyond its current employment in the Dark Ages. Furthermore, Divine Right made it possible for the Protestant Reformation in England to take place, mature and spread to the rest of the world.”

The Divine Right practiced by European monarchs was actually based on a more ancient doctrine practiced by the monarchs of Judah and Israel in the Old Testament, whom many European royal families considered to be their ancestors, tracing their royal European lineage back to the Jewish King David, sometimes through the descendants of Jesus Christ. Such a line of descent was (and is) known as the “Grail bloodline.” One of Europe’s most famous monarchs, Charlemagne the Great, was often called “David” in reference to his famous ancestor, and Habsburg King Otto was called “the son of David.”(1) In fact, the European tradition of anointing kings comes from that practiced in the Old Testament. Author George Athas describes how the ceremony symbolized the Lord Yahweh adopting the new king as his own son:

“Firstly, the king was the ‘Anointed’ of Yahweh – the mesiach, from which we derive the term ‘Messiah.’ At his anointing (or his coronation), the Spirit of Yahweh entered the king, giving him superhuman qualities and allowing him to carry out the dictates of the deity. The psalmist of Psalm 45 describes the king as ‘fairer than the sons of men’, and continued to praise his majestic characteristics. This king also had eternal life granted to him by Yahweh. The deity is portrayed as saying to him, ‘You are my son – today I have sired you.’ The king was Yahweh’s Firstborn – the bekhor – who was the heir to his father’s estate. He was ‘the highest of the kings of the earth.’ Thus, the king was adopted by Yahweh at his coronation and, as such, was in closer communion with the deity than the rest of the people. On many occasions, Yahweh is called the king’s god. The king was distinguished far above the ordinary mortal, rendering him holy and his person sacred. It was regarded as a grievous offence to lay a hand on him. Thus, to overthrow the king was rebellion of the most heinous sort and an affront to the deity who had appointed the king… We can note that the king of Judah and Israel is described in divine terms. He is, for example, seen as sitting at Yahweh’s right hand, and his adopted son. We find similar motifs of Pharaohs seated to the right of a deity of Egypt. Psalm 45:7 calls the king an ‘elohim’ – a god. Psalm 45:7 also says ‘Your throne is like God’s throne.’”

Here we see the basis for King James’ claim that the scriptures likened human kings to gods. As such, kings were strongly associated with the priesthood as well, and in some cases took on priestly functions. However, traditionally, the Jewish priesthood was dominated by the tribe of Levi, which was biologically related but functionally separate from the royal line of David – that is, until Jesus came along, heir to both the kingly and priestly titles through his lineage back to both tribes. However, in other more ancient cultures, such as the Egyptian, the royal and priestly functions were inseparable. In addition to regarding their pharaohs as the literal offspring of deity, and in fact, deities themselves, the Egyptians believed that the institution of kingship itself had been given to them by the gods. Their first king had been one of their main gods, Osiris, whom all human kings were expected to emulate. Richard Cassaro, in his book, A Deeper Truth, elaborates:

“… during the First Time (the Golden Age when the gods ruled directly on Earth) a human yet eternal king named Osiris initiated a monarchial government in Egypt and imparted a wise law and spiritual wisdom to the people. At the end of his ministry, Osiris left his throne to the people. It was, thereafter, the duty of every king to rule over Egypt in the same manner Osiris had ruled.”

This concept that kingship began with a single divine ruler of whom all subsequent human kings are descendants can be traced back to the oldest civilization acknowledged by history, Sumeria, and the other Mesopotamian cultures that followed, such as the Assyrians and the Babylonians. To quote Henri Frankfort:

“In Mesopotamia, the king was regarded as taking on godhood at his coronation, and at every subsequent New Year festival. However, he was often seen as having been predestined to the divine throne by the gods at his birth, or even at the beginning of time. Through a sacred marriage, he had a metaphysical union with the mother goddess, who filled him with life, fertility, and blessing, which he passed on to his people.”

The Encyclopedia Britannica has identified three different types of sacred kingship that were recognized in the ancient world. The king was seen as “(1) the receptacle of supernatural or divine power, (2) the divine or semi-divine ruler; and (3) the agent or mediator of the sacred.” However, this author believes it is safe to say that all of these concepts stem from the almost universal belief that kingship descended from Heaven with a single divine being who was literally thought of as the ancestor of all those who followed. This king, I believe, was known to the ancients as Kronos, the Forgotten Father, and this is another name for the deity/planet, Saturn. He was the “brightest star in the heavens”, who fell to Earth and intermarried with the daughters of men to breed a race of human kings (the Grail Bloodline), but was thereafter imprisoned in the underworld by his son, Zeus. Some might think this contradicts the traditional association of ancient kings with the Sun-god, but in fact, Saturn himself was a sun god of a sort. Some believe that in ancient times Saturn was the dominant figure in the night sky, and as such became known as “the midnight sun” (a term later used by occultists to refer to the Grail). From its position in the sky it appeared to stand still, as the rest of the night sky revolved around it. It was therefore also called “the Central Sun.”

Interestingly, although this theory of mine has long been in the works, I have recently stumbled across an author named David Talbott who shares my hypothesis on the origin of kingship. From a piece on his website, http://www.kronia.com, entitled “Saturn as a Stationary Sun and Universal Monarch”, we read:

“A global tradition recalls an exemplary king ruling in the sky before kings ever ruled on earth.

This mythical figure appears as the first in the line of kings, the father of kings, the model of the good king. But this same figure is commonly remembered as the central luminary of the sky, often a central sun, unmoving sun, or superior sun ruling before the present sun.

And most curiously, with the rise of astronomy this celestial ‘king’ was identified as the planet Saturn.”

One can see traces of this ancient progenitor of kings just in the word “monarchy” itself. The syllable “mon” means “one” in Indo-European language systems, as in “the one king who rules over all”, but in Egypt, it was one of the names of the sun god (also called “Amun-Re”). It denoted the Sun in its occluded state (when it passes beneath the Earth at night), and the word meant literally for them, “the Hidden One”, because the Sun ruled the world (and the underworld) from his secret subterranean prison. The syllable “ark” comes from the Greek “arche”, meaning original, or originator. As the first “monarch”, Kronos was the one originator of kings, the Forgotten Father of all royal bloodlines. Many of our commonly associated symbols of kingship date back to the time when Kronos first introduced it, and are directly derived from him. For instance, the crown symbolizes the (central) Sun, the “godhead” descending upon the brow of the wise king, and the Sumerian kings adorned their crowns with horns, just like Kronos was believed to have on his crown. The throne was Kronos’ seat on his celestial boat in heaven, and has been passed down to us as well. Kronos and his descendants were known as “shepherd kings”, an appellation used by royalty throughout history, and this is the origin of the king’s scepter, which was once a shepherd’s staff. The coronation stone and the orb surmounted by a cross are also Saturnian/solar symbols, and the Egyptian word for the Sun, “Re”, may be the source of the French word for king, Roi.

Kronos and the god-kings who followed him were known by the title “Lord of the Four Corners of the World.” This has given birth to the universal, recurring archetype of “le Roi du Monde”, a concept that was brilliantly explored in a book by René Guenon of the same name. In a surprising number of cultures throughout the world and throughout history, there is this concept of “the Lord of the Earth”, an omnipresent and eternal monarch who reigns from within the very center of the Earth itself, directing events on the surface with his superhuman psyche. In the Judeo-Christian tradition, “the Lord of the Earth” is a term applied to Satan, or Lucifer, who, like Saturn, was the brightest star in Heaven, but was cast down by God and, like Saturn, imprisoned inside the bowels of the Earth, in a realm called Hell. In fact, it is quite clear that the figure of Satan comes from Saturn, the “Fish-Goat-Man”, and obviously the two words are etymologically related. Perhaps this is why the “Grail bloodline”, a divine lineage from which all European kings have come, is traced by many back to Lucifer. The Medieval Christian heretics known as the Cathars took this concept to its logical conclusion and insisted that, since Satan is the “King of the World” (“Rex Mundi”, as they called him), and Jehovah was, in the Bible, the one who created the world, Jehovah and Satan must be one and the same. For preaching this they were massacred unto extinction by the Papacy.

However, in the Eastern tradition, the Lord of the Earth represents the ultimate incarnate manifestation of godhood. They too saw him as ruling his kingdom from the center of the Earth, in a subterranean city called either “Shamballah” or “Agartha.” And in this tradition, the Lord of the Earth was also a super-spiritual being capable of incarnating on the surface of the Earth in a series of “avatars”, or human kings who have ruled various eras of existence. According to New Age author Alice Bailey:

“Shamballa is the seat of the ‘Lord of the World’ who has made the sacrifice (analogous to the Bodhisattva’s vow) of remaining to watch over the evolution of men and devas until all have been ‘saved’ or enlightened.”

One of the names that the Hindus used for “the Lord of the Earth” was Manu, who, writes Guenon, is “a cosmic intelligence that reflects pure spiritual light and formulates the law (Dharma) appropriate to the conditions of our world and our cycle of existence.” Author Ferdinand Ossendowski adds:

“The Lord of the World is in touch with the thoughts of all those who direct the destiny of mankind… He knows their intentions and their ideas. If they are pleasing to God, the Lord of the world favours them with his invisible aid. But if they are displeasing to God, He puts a check on their activities.”

These are obviously activities that human kings, as incarnations of the Lord of the Earth, are expected to replicate in their own kingdoms to the best of their ability. In fact, a number of human kings throughout history have been viewed by their subjects as incarnations of the Lord of the Earth, embodying the concepts that he represents. These include Charlemagne, Alexander the Great (who was believed to have horns on his head), and Melchizedek, a mysterious priest-king mentioned repeatedly in the Old Testament and imbued with an inexplicable importance. He was called the “Prince of Salem” (as in Jerusalem), and is said to have shared bread and wine with Abraham during a ritual. Some believe that the cup which they used is the artifact that later became known as the Holy Grail. Some have also identified Melchizedek with another king of Jerusalem, Adonizedek, and with Shem, Noah’s son. Nobody knows what his ancestry is, who his descendants might have been, or why, thousands of years later, Jesus Christ was referred to in the scriptures as “a priest according to the Order of Melchizedek.” Of his significance, René Guenon writes:

“Melchizedek, or more precisely, Melki-Tsedeq, is none other than the title used by Judeo-Christian tradition to denote the function of ‘The Lord of the World’… Melki-Tsedeq is thus both king and priest. His name means ‘King of Justice’, and he is also king of Salem, that is, of ‘Peace’, so again we find ‘Justice’ and ‘Peace’ the fundamental attributes pertaining to the ‘Lord of the World.’”

Even more pertinent information is provided by René Guenon’s colleague Julius Evola, who in his book The Mystery of the Grail wrote:

“In some Syriac texts, mention is made of a stone that is the foundation, or center of the world, hidden in the ‘primordial depths, near God’s temple.’ It is put in relation with the body of the primordial man (Adam) and, interestingly enough, with an inaccessible mountain place, the access to which must not be revealed to other people; here Melchizedek, ‘in divine and eternal service’, watches over Adam’s body. In Melchizedek we find again the representation of the supreme function of the Universal Ruler, which is simultaneously regal and priestly; here this representation is associated with some kind of guardian of Adam’s body who originally possessed the Grail and who, after losing it, no longer lives. This is found together with the motifs of a mysterious stone and an inaccessible seat.”

Clearly, that foundation stone of the world is the same as the Black Sun in the center of the Earth, or the “Grail Stone” which is said to be hidden in that location. The Grail Romances provide us with much insight into the “King of the World” concept. This figure is represented in the story by one of the supporting characters, Prester John, a king who is mentioned in passing as ruling over a spiritual domain in the faraway East, and who, quite fittingly, is said to come from Davidic descent. Evola continues:

“The Tractatus pulcherrimus referred to him as ‘king of kings’ rex regnum. He combined spiritual authority with regal power… Yet essentially, ‘Prester John’ is only a title and a name, which designates not a given individual but rather a function. Thus in Wolfram von Eschenbach and in the Titurel we find ‘Prester John’ as a title; the Grail, as we will see, indicates from time to time the person who must become Prester John. Moreover, in the legend, ‘Prester John’ designates one who keeps in check the people of Gog and Magog, who exercises a visible and invisible dominion, figuratively, dominion over both natural and invisible beings, and who defends the access of his kingdom with ‘lions’ and ‘giants.’ In this kingdom is also found the ‘fountain of youth.’

The dignity of a sacred king is often accompanied by biblical reminiscences, by presenting Prester John as the son or nephew of King David, and sometimes as King David himself… ‘David, king of the Hindus, who is called by the people ‘Prester John’ – the King (Prester John) descends from the son of King David.”

The Lord of the Earth, or the figures that represent him, are often symbolized by a victory stone, or foundation stone which is emblematic of their authority. For instance, British kings are crowned on the “Stone of Destiny”, believed to have been used as a pillow by Jacob in the Old Testament. Such a stone is often referred to in mythology as having fallen from Heaven, like the Grail Stone, which fell out of Lucifer’s crown during his war with God, and became the foundation stone for the Grail kingdom, having the power, as it is written, to “make kings.” Because it fell from Heaven, the Grail is also often associated with a falling star, like that which Lucifer is represented by, and of course the Black Sun in the center of the Earth also represents Rex Mundi’s victory stone. It is interesting, then, that in the Babylonian tongue, the word “tsar” means “rock”, and is not only an anagram of “star”, but a word that in the Russian language refers to an imperial monarch. Sometimes the monarchial foundation stone is represented as a mountain, especially the world or primordial mountain that in mythology provides the Earth with its central axis. The Sumerians referred to this as Mount Mashu, and its twin peaks were said to reach up to Heaven, while the tunnels and caves within it reached down to the depths of Hell. Jehovah in the Bible, sometimes called El Shaddai (“the Lord of the Mountain”) had Mount Zion for a foundation stone, and some believed he actually lived inside of the mountain. Later, the kingdom of Jesus Christ was said to be “founded upon the Rock of Sion.”

The stone that fell from Heaven, the royal victory stone, is also sometimes depicted under the symbolic form of a castrated phallus, such as that of Kronos, whose disembodied penis was hurled into the ocean, and there spawned the Lady Venus. This story is a recapitulation of the Osiris story, as well as the inspiration for the Grail legends, in which the Fisher King is wounded in the genitals, causing the entire kingdom to fall under a spell of perpetual malaise. The only thing that can heal the king, and therefore the kingdom, is the Grail. This is a recurring theme in world mythology: the king and/or the kingdom that temporarily falls asleep or falls under a magic spell which renders it/him ineffectual for a time, until the stars are right, or the proper conditions are met, causing the king and his kingdom to reawaken, to rise from the ashes, from the tomb, or often, to rise out of the sea. This cycle recurs in the tales of the Lord of the Earth, who alternates between periods of death-like sleep within his tomb in the center of the Earth, and rebirth, in which he once again returns to watch over his kingdom, restore righteousness and justice to the land, and preside over a new, revitalized “Golden Age.” Julius Evola writes of the archetype:

“It is a theme that dates back to the most ancient times and that bears a certain relation to the doctrine of the ‘cyclical manifestations’ or avatars, namely, the manifestation, occurring at special times and in various forms, of a single principle, which during intermediate periods exists in an unmanifested state. Thus every time a king displayed the traits of an incarnation of such a principle, the idea arose in the legend that he has not died but has withdrawn into an inaccessible seat whence one day he will manifest or that he is asleep and will awaken one day… The image of a regality in a state of sleep or apparent death, however, is akin to that of an altered, wounded, paralyzed regality, in regard not to its intangible principle but to its external and historical representatives. Hence the theme of the wounded, mutilated or weakened king who continues to live in an inaccessible center, in which time and death are suspended…. In the Hindu tradition we encounter the theme of Mahaksyapa, who sleeps in a mountain but will awaken at the sound of shells at the time of the new manifestation of the principle that previously manifested itself in the form of Buddha. Such a period is also that of the coming of a Universal Ruler (cakravartin) by the name of Samkha. Since samkha means ‘shells’, this verbal assimilation expresses the idea of the awakening from sleep of the new manifestation of the King of the World and of the same primordial tradition that the above-mentioned legend conceives to be enclosed (during the intermediate period of crisis) in a shell. When the right time comes, in conformity with the cyclical laws, a new manifestation from above will occur (Kalki-avatara) in the form of a sacred king who will triumph over the Dark Age. Kalki is symbolically thought to be born in Sambhala, one of the names that in the Hindu and Tibetan traditions designated the sacred Hyperborean center.

…many people thought that the Roman world, in its imperial and pagan phase, signified the beginning of a new Golden Age, the king of which, Kronos, was believed to be living in a state of slumber in the Hyperborean region. During Augustus’ reign, the Sibylline prophecies announced the advent of a ‘solar’ king, a rex a coelo, or ex sole missus, to which Horace seems to refer when he invokes the advent of Apollo, the Hyperborean god of the Golden Age. Virgil too seems to refer to this rex when he proclaims the imminent advent of a new Golden Age, of Apollo, and of heroes. Thus Augustus conceived this symbolic ‘filiation’ from Apollo; the phoenix, which is found in the figurations of Hadrian and of Antonius, is in strict relation to this idea of a resurrection of the primordial age through the Roman Empire… During the Byzantine age, the imperial myth received from Methodius a formulation that revived, in relation to the legend of Alexander the Great, some of the themes already considered. Here again, we find the theme of a king believed to have died, who awakens from his sleep to create a new Rome; after a short reign, the people of Gog and Magog, to whom Alexander had blocked the path, rise up again, and the ‘last battle’ takes place.”

Rene Guenon took this concept quite literally, believing that the periods of slumber for the Lord of the Earth have been cyclically brought to a close by apocalypses, after which “le Roi du Monde” would return again to clean up the wreckage and once more look after his faithful flock. In The Revelation of St. John the Divine, three kings actually return from periods of slumber, death, or prolonged absence: Jesus, Satan, and Jehovah, and naturally, the governmental entity that God chooses for this utopian world is the one which has always been associated with holiness and righteousness: monarchy.

Monarchy was the first form of government observed by man, and it was, according to almost every culture, created by God himself. It is the primordial, archetypal form of government, the most natural – that which all other forms of government vainly try to mimic, while at the same time violating its most basic tenets. Monarchy was, for thousands of years, all mankind knew, and the idea of not having a monarch, a father figure to watch over them, to maintain the community’s relationship with the divine, represented to them not freedom, but chaos, uncertainty, and within a short time, death. The common people did not jealously vie for positions of power, nor did they desire to have any say in the decision of who would be king. In fact, most of them preferred that there be no decision to make at all: most monarchies functioned on the principle of primogeniture, passing the scepter and crown down from father to son, or in some cases, through the matrilineal line. The decision was up to nature or God, and therefore just and righteous in itself. Furthermore, the people knew they could count on their monarch to watch over them as he would his own children, to be fair and honest, to protect them from invasion, and to maintain the proper relationship between God and the kingdom. They desired to make their kingdom on Earth reflect the order and perfection that existed in God’s kingdom in Heaven. For thousands of years before the modern era, when 90% of the population was not intellectually capable of participating in government or making electoral decisions, monarchy stood as a bulwark against the disintegration of the societal unit, providing a stability that otherwise could not be achieved. If monarchy had not been invented, human history could never have happened. Richard Cassino, in A Deeper Truth, said it best:

“Since the obligation of every king… is to maintain law, order, morality, spirituality, and religion within his kingdom, then the very design of a monarchy itself was probably conceived by the superior intelligence called God so as to endow mankind with a sound system of government. In other words, the concept of kingship was designed for, and delivered to, the peoples of earth by God to teach mankind to live in a humanized social environment… Human history, with its past and present kingdoms and kings – Egypt, Assyria, Persia, Babylon, Sumer, Aztec, Inca, Jordan, Saudi Arabia, Great Britain, to name a few – stands as a testimony to the fact that the monarchial form of government has been the basis for almost every civilization.”

If monarchy is the most perfect form of government, and if it has been responsible for providing us with at least 6,000 years of human history, why now does it seem to be only an ancient pretension? Why is the concept of having a monarchy actually function in government considered to be a quaint but laughable thing of the past? Have we really moved beyond monarchy?

Hardly. If you were to graph the entire 6,000 years of known human history and isolate the period in which civilized nations have been without monarchs, it would be merely a blip on the spectrum. In fact, of the civilized Western nations, few are without a monarch reigning either de jure or de facto (and even those that are continue to elect Presidents of royal European lineage). Most nations that maintain representational government still have a monarch recognized either by the government or by the people at large, and though essentially powerless, these monarchs maintain a symbolic link between a nation and its heritage – its most sacred, most ancient traditions. They also constitute a government-in-waiting, should the thin veneer of illusory “freedom” and “equality” that maintains democracy break down. The modern system of republican government is based not so much on the freedom of the individual as on the free flow of money, on debt, usury, inflation, and on a monetary house of cards known as “Fractional Reserve Lending.” It would only take a major and slightly prolonged collapse of the monetary system to eliminate this governmental system. At that point, civilized man will have essentially two choices: anarchy or monarchy, and if people have any sense at all they will choose the latter, rather than subjecting themselves to a chaotic succession of despots interspersed with periods of violence and rioting, and the poverty that comes with the lack of a stable state. It would be the most natural thing in the world for the royal families of Earth, and the monarchial system which they have maintained, to just slide right into place. The gods, who once ruled during man’s Golden Age, would then awaken from their slumber and heed the call to duty, like Kronos, their Forgotten Father and monarch of all, who sleeps soundly within his tomb in the primordial mountain, waiting for his chance to once again hold dominion over the Earth.

Endnote:

(1) Otto is still, to this day, the titular King of Jerusalem.

EXCLUSIVE: New transcript of Rand at West Point in ’74 enthusiastically defends extermination of Native Americans

Ben Norton
October 15, 2015 12:01AM (UTC)

Ayn Rand is the patron saint of the libertarian Right. Her writings are quoted in a quasi-religious manner by American reactionaries, cited like Biblical codices that offer profound answers to all of life’s complex problems (namely, just “Free the Market”). Yet, despite her impeccable libertarian bona fides, Rand defended the colonization and genocide of what she called the “savage” Native Americans — one of the most authoritarian campaigns of death and suffering ever orchestrated.

“Any white person who brings the elements of civilization had the right to take over this continent,” Ayn Rand proclaimed, “and it is great that some people did, and discovered here what they couldn’t do anywhere else in the world and what the Indians, if there are any racist Indians today, do not believe to this day: respect for individual rights.”

Rand made these remarks before the graduating class of the U.S. Military Academy at West Point on March 6, 1974, in a little-known Q&A session. Rand’s comments in this obscure Q&A are appearing in full for the first time, here in Salon.

“Philosophy: Who Needs It” remains one of Ayn Rand’s most popular and influential speeches. The capitalist superstar delivered the talk at West Point 41 years ago. In the definitive collection of Rand’s thoughts on philosophy, Philosophy: Who Needs It, the lecture was chosen as the lead and eponymous essay. This was the last book Rand worked on before she died; that this piece, ergo, was selected as the title and premise of her final work attests to its significance as a cornerstone of her entire worldview.

The Q&A session that followed this talk, however, has gone largely unremembered — and most conveniently for the fervent Rand aficionado, at that. For it is in this largely unknown Q&A that Rand enthusiastically defended the extermination of the indigenous peoples of the Americas.

In the Q&A, a man asked Rand:

At the risk of stating an unpopular view, when you were speaking of America, I couldn’t help but think of the cultural genocide of Native Americans, the enslavement of Black men in this country, and the relocation of Japanese-Americans during World War II. How do you account for all of this in your view of America?

(A transcript of Ayn Rand’s full answer is included at the bottom of this article.)

Rand replied insisting that “the issue of racism, or even the persecution of a particular race,” is not as important as “the persecution of individuals.” “If you are concerned with minorities, the smallest minority on Earth is an individual,” she added, before proceeding to blame racism and the mass internment of Japanese-Americans on “liberals.” “Racism didn’t exist in this country until the liberals brought it up,” Rand maintained. And those who defend “racist” affirmative action, she insisted, “are the ones who are institutionalizing racism today.”

Although the libertarian luminary expressed firm opposition to slavery, she rationalized it by saying “black slaves were sold into slavery, in many cases, by other black tribes.” She then, ahistorically, insisted that slavery “is something which only the United States of America abolished.”

Massive applause followed Rand’s comments, which resonated strongly with the graduating class of the U.S. military. Rand’s most extreme and opprobrious remarks, however, were saved for her subsequent discussion of Native Americans.

“Savages” who deserved to be conquered

In a logical sleight of hand that would confound and bewilder even Lewis Carroll, Ayn Rand proclaimed in the 1974 Q&A that it was in fact indigenous Americans who were the racists, not the white settlers who were ethnically cleansing them. The laissez-faire leader declared that Native Americans did not “have any right to live in a country merely because they were born here and acted and lived like savages.”

“Americans didn’t conquer” this land, Rand asserted, and “you are a racist if you object to that.” Since “the Indians did not have any property rights — they didn’t have the concept of property,” she said, “they didn’t have any rights to the land.”

If “a country does not protect rights,” Rand asked — referring specifically to property rights — “why should you respect the rights they do not have?” She took the thought to its logical conclusion, contending that anyone “has the right to invade it, because rights are not recognized in this country.”

Rand then blamed Native Americans for breaking the agreements they made with the Euro-American colonialists. The historical reality, though, was exactly the contrary: white settlers constantly broke the treaties they made with the indigenous, and regularly attacked them.

“Let’s suppose they were all beautifully innocent savages, which they certainly were not,” Rand persisted. “What was it that they were fighting for, if they opposed white men on this continent? For their wish to continue a primitive existence, their right to keep part of the earth untouched, unused, and not even as property, but just keep everybody out so that you will live practically like an animal?” she asked.

“Any white person who brings the elements of civilization had the right to take over this continent,” Rand said, “and it is great that some people did, and discovered here what they couldn’t do anywhere else in the world and what the Indians, if there are any racist Indians today, do not believe to this day: respect for individual rights.”

Rand’s rosy portrayal of the colonization of the modern-day Americas is in direct conflict with historical reality. In his book American Holocaust: Columbus and the Conquest of the New World, American historian David Stannard estimates that approximately 95 percent of indigenous Americans died after the beginning of European settler colonialism. “The destruction of the Indians of the Americas was, far and away, the most massive act of genocide in the history of the world,” writes Prof. Stannard. “Within no more than a handful of generations following their first encounters with Europeans, the vast majority of the Western Hemisphere’s native peoples had been exterminated.”

Nevertheless, West Point appeared to express no concern with Rand’s extreme, white supremacist views. A West Point official offered final remarks after her speech, quipping: “Ms. Rand, you have certainly given us a delightful example of a major engagement in philosophy, in the wake of which you have left a long list of casualties” — to which the audience laughed and applauded. “And have tossed and gored several sacred cows,” he added. “I hope so,” Rand replied.

More than just seemingly condoning Rand’s comments, the U.S. Military Academy also admiringly echoed Ayn Rand’s views. “Ms. Rand, in writing Atlas Shrugged,” the West Point official continued at the graduation ceremony, “made one remark that I thought was important to us when she said that the only proper purpose of a government is to protect Man’s rights, and the only proper functions of the government are the police, to protect our property at home; the law, to protect our rights and contracts; and the army, to protect us from foreign threats. And we appreciate your coming to the home of the Army tonight to address us.” More thunderous applause followed.

The U.S. Military Academy later republished the lecture — but not the Q&A — in a philosophy textbook, giving it the government’s seal of approval.

Tracking down the evidence

The book Ayn Rand Answers: The Best of Her Q & A includes Rand’s Manifest Destiny-esque defense of settler colonialism among some of the “best of her” public statements. Ayn Rand Answers was edited by philosophy professor Robert Mayhew, whom the Ayn Rand Institute describes as an “Objectivist scholar,” referring to the libertarian ideology created by Rand. ARI lists Prof. Mayhew as one of its Ayn Rand experts, and notes that he serves on the board of the Anthem Foundation for Objectivist Scholarship. The transcript included in Prof. Mayhew’s collection is full of errors, however, and reorders her remarks.

A recording of the West Point speech was available for free on the ARI website as early as April 2009. Up until around October 18, 2013, separate recordings of the speech and Q&A were still freely accessible. By October 22, however, ARI had removed the recordings from its website and put them up for sale.

Some copies of the 1974 recording have circulated the Internet, but in order to verify the quotes and authenticate the transcript, I ordered an official MP3 recording of the event from the Ayn Rand Institute eStore. (After all, I was working on a piece involving Ayn Rand, so I figured it was only natural that I had to buy something.) The quotes in this piece are directly transcribed from the official recording of Rand’s West Point speech and Q&A.

ARI created an entire course devoted to the single lecture in its online education program. ARI implores readers, “Come hear Rand enlighten and entertain the West Point cadets (laughter can be heard at various points in the audio).” The laughter often followed Rand’s most egregious remarks. Defending one of human history’s most horrific genocides can apparently be quite comical.

Ayn Rand speaking about racism, slavery, and Native Americans, at West Point in 1974 (TRANSCRIPT)

To begin with, there is much more to America than the issue of racism. I do not believe that the issue of racism, or even the persecution of a particular race, is as important as the persecution of individuals, because when you deprive individuals of rights, if you deprive any small group, all individuals lose their rights. Therefore, look at this fundamentally: If you are concerned with minorities, the smallest minority on Earth is an individual. If you do not respect individual rights, you will sacrifice or persecute all minorities, and then you get the same treatment given to a majority, which you can observe today in Soviet Russia.

But if you ask me well, now, should America have tolerated slavery? I would say certainly not. And why did they? Well, at the time of the Constitutional Convention, or the debates about the Constitution, the best theoreticians at the time wanted to abolish slavery right then and there—and they should have. The fact is that they compromised with other members of the debate and their compromise has caused this country a dreadful catastrophe which had to happen, and that is the Civil War. You could not have slavery existing in a country which proclaims the inalienable rights of Man. If you believe in the rights and the institution of slavery, it’s an enormous contradiction. It is to the honor of this country, which the haters of America never mention, that people died giving their lives in order to abolish slavery. There was that much strong philosophical feeling about it.

Certainly slavery was a contradiction. But before you criticize this country, remember that that is a remnant of the politics and the philosophies of Europe and of the rest of the world. The black slaves were sold into slavery, in many cases, by other black tribes. Slavery is something which only the United States of America abolished. Historically, there was no such concept as the right of the individual. The United States is based on that concept. So that so as long as men held to the American political philosophy, they had to come to the point, even of a civil war, but of eliminating the contradiction with which they could not live—namely, the institution of slavery.

Incidentally, if you study history following America’s example, slavery or serfdom was abolished in the whole civilized world during the 19th century. What abolished it? Not altruism. Not any kind of collectivism. Capitalism. The world of free trade could not coexist with slave labor. And countries like Russia, which was the most backward and had serfs, liberated them, without any pressure from anyone, by economic necessity. Nobody could compete with America economically so long as they attempted to use slave labor. Now that was the liberating influence of America.

That’s in regard to the slavery of Black people. But as to the example of the Japanese people—you mean the labor camps in California? Well, that was certainly not put over by any sort of defender of capitalism or Americanism. That was done by the left-wing progressive liberal Democrats of Franklin D. Roosevelt.

[Massive applause follows, along with a minute in which the moderator asks Ayn Rand to respond to the point about the genocide of Native Americans. She continues.]

If you study reliable history, and not liberal, racist newspapers, racism didn’t exist in this country until the liberals brought it up—racism in the sense of self-consciousness and separation about races. Yes, slavery existed as a very evil institution, and there certainly was prejudice against some minorities, including the Negroes after they were liberated. But those prejudices were dying out under the pressure of free economics, because racism, in the prejudicial sense, doesn’t pay. Then, if anyone wants to be a racist, he suffers, the workings of the system is against him.

Today, it is to everyone’s advantage to form some kind of ethnic collective. The people who share your viewpoint or from whose philosophy those catchphrases come, are the ones who are institutionalizing racism today. What about the quotas in employment? The quotas in education? And I hope to God—so I am not religious, but just to express my feeling—that the Supreme Court will rule against those quotas. But if you can understand the vicious contradiction and injustice of a state establishing racism by law. Whether it’s in favor of a minority or a majority doesn’t matter. It’s more offensive when it’s in the name of a minority because it can only be done in order to disarm and destroy the majority and the whole country. It can only create more racist divisions, and backlashes, and racist feelings.

If you are opposed to racism, you should support individualism. You cannot oppose racism on one hand and want collectivism on the other.

But now, as to the Indians, I don’t even care to discuss that kind of alleged complaints that they have against this country. I do believe with serious, scientific reasons the worst kind of movie that you have probably seen—worst from the Indian viewpoint—as to what they did to the white man.

I do not think that they have any right to live in a country merely because they were born here and acted and lived like savages. Americans didn’t conquer; Americans did not conquer that country.

Whoever is making sounds there, I think is hissing, he is right, but please be consistent: you are a racist if you object to that [laughter and applause]. You are that because you believe that anything can be given to Man by his biological birth or for biological reasons.

If you are born in a magnificent country which you don’t know what to do with, you believe that it is a property right; it is not. And, since the Indians did not have any property rights—they didn’t have the concept of property; they didn’t even have a settled society, they were predominantly nomadic tribes; they were a primitive tribal culture, if you want to call it that—if so, they didn’t have any rights to the land, and there was no reason for anyone to grant them rights which they had not conceived and were not using.

It would be wrong to attack any country which does respect—or try, for that matter, to respect—individual rights, because if they do, you are an aggressor and you are morally wrong to attack them. But if a country does not protect rights—if a given tribe is the slave of its own tribal chief—why should you respect the rights they do not have?

Or any country which has a dictatorship. Government—the citizens still have individual rights—but the country does not have any rights. Anyone has the right to invade it, because rights are not recognized in this country and neither you nor a country nor anyone can have your cake and eat it too.

In other words, want respect for the rights of Indians, who, incidentally, for most cases of their tribal history, made agreements with the white man, and then when they had used up whichever they got through agreement of giving, selling certain territory, then came back and broke the agreement, and attacked white settlements.

I will go further. Let’s suppose they were all beautifully innocent savages, which they certainly were not. What was it that they were fighting for, if they opposed white men on this continent? For their wish to continue a primitive existence, their right to keep part of the earth untouched, unused, and not even as property, but just keep everybody out so that you will live practically like an animal, or maybe a few caves about.

Any white person who brings the elements of civilization had the right to take over this continent, and it is great that some people did, and discovered here what they couldn’t do anywhere else in the world and what the Indians, if there are any racist Indians today, do not believe to this day: respect for individual rights.

I am, incidentally, in favor of Israel against the Arabs for the very same reason. There you have the same issue in reverse. Israel is not a good country politically; it’s a mixed economy, leaning strongly to socialism. But why do the Arabs resent it? Because it is a wedge of civilization—an industrial wedge—in part of a continent which is totally primitive and nomadic.

Israel is being attacked for being civilized, and being specifically a technological society. It’s for that very reason that they should be supported—that they are morally right because they represent the progress of Man’s mind, just as the white settlers of America represented the progress of the mind, not centuries of brute stagnation and superstition. They represented the banner of the mind and they were in the right.

[thunderous applause]


I got the revolution blues, I see bloody fountains,
And ten million dune buggies comin’ down the mountains.
Well, I hear that Laurel Canyon is full of famous stars,
But I hate them worse than lepers and I’ll kill them in their cars.

https://i0.wp.com/informations-documents.com/environnement/coppermine15x/albums/gallerie%20dessins/dessins%20scolaires%20zoologie/Dessins_scolaires_zoologie_317_le_poulpe_commun.jpg

The spaghetti theory of conspiracy is nothing if not psychedelic. In a psychedelic fashion, though, conspiracy theory loops back upon itself, paranoid and snarling. As I’ve tried to show with these posts, just as conspiracies extend and branch out from the tiniest biological organisms to the realm of the gods themselves, conspiracy theorizing is likewise diverse, contradictory, and in a marginal existence of ceaseless vicissitude.

For this reason conspiracy theories about the psychedelic movement especially, one would think, should mirror the groundless, fluctuating nature of their subject. Not necessarily so. In very recent years, in contrast, there is a fast-growing tendency to conclude that the psychedelic movement emerging out of the sixties and continuing in fractured pieces even today can be explained very simply: the whole tie-dyed cloth was designed and manufactured by the malefic Powers That Be.

A Deliberate Creation

This is exactly the thesis of a May 2013 article by Joe Atwill and Jan Irvin entitled, “Manufacturing the Deadhead: A product of social engineering.” Beyond the title itself, the authors explicitly state their full thesis early on in the long essay.

Most today assume that the CIA and the other intelligence-gathering organizations of the U.S. government are controlled by the democratic process. They therefore believe that MK-ULTRA’s role in creating the psychedelic movement was accidental “blowback.” Very few have even considered the possibility that the entire “counterculture” was social engineering planned to debase America’s culture – as the name implies. The authors believe, however, that there is compelling evidence that indicates that the psychedelic movement was deliberately created. The purpose of this plan was to establish a neo-feudalism by the debasing of the intellectual abilities of young people to make them as easy to control as the serfs of the Dark Ages.

Such a thesis, denying “blowback,” accidents, spontaneity, unforeseen consequences, unpredictability, limited autonomy, etc. is thoroughly absolutist in nature. Absolutist conspiracy theories, as explored in previous posts, however satisfactory they are in creating a comprehensive narrative to ostensibly explain the current sociopolitical reality, do not accurately reflect the complexity and nuances that make up that reality.

There is really no doubt that government and far more nefarious agencies were and are involved in promoting and “manufacturing” various aspects of the psychedelic counterculture. The name of their game, after all, is control. However, we go far astray in our analysis, I believe, when we conclude that every facet of this movement was contrived and engineered from the get go. Such a conclusion is not only inaccurate, failing to account for obvious complexity, but it also robs us of taking inspiration in and gaining knowledge from genuinely liberatory elements of the sixties counterculture.

It is crucial that we attempt to know precisely how we are being manipulated and hoodwinked, and in this the research of Atwill and Irvin, as well as others like Dave McGowan, is indispensable. We must not cling to illusions. But we must also not make the opposite mistake. The same dominant faction that gains from tweaking and prodding the counterculture in desired directions also gains in the widespread acceptance of conspiracy and revisionist theories that reject the counterculture in total. Such theories promote paralysis in the face of a seemingly omnipotent elite and they also severely limit our own options of resistance.

The present post, then, will not try to demonstrate that the counterculture which captured the attention of the world in the sixties and onward is wholly good. Neither, though, will it conclude, along with Atwill and Irvin, that it was and is just a product of social engineering, just a colossal hoodwink. Instead I hope to show that any comprehensive theory of the psychedelic movement, and similar movements, must be psychedelic in itself — spaghetti-like. This doesn’t make for an easy-to-grasp, black-and-white, Hollywood storyline, but is reality ever really like this?

Mud humping

There is no need for a point-by-point refutation of Atwill and Irvin’s article. Much of their research appears pretty sound. Jan Irvin’s research on R. Gordon Wasson is especially revealing and alarming if accurate. The authors present a somewhat garbled grab-bag of every available anti-counterculture conspiracy theory and criticism, from Timothy Leary being a CIA spy to Woodstock being a designed spectacle to debase US culture through its images of stoned hippies humping in the mud. The John Birch Society in its heyday likely could not have produced a more damning indictment.

Unlike the more conventional right-wing based attacks on the counterculture, however, which made the case that the hippies were a sort of Trojan Horse for world communism, Atwill and Irvin go much further in their conclusions. The goal of the Agenda, as we’ve seen, is not communism but a neo-feudal Dark Age featuring eugenics, depopulation and near universal, back-breaking servitude for the masses.

Where did Irvin and Atwill come up with this horrific vision of the near future? In fact, their view is not so different from other absolutist conspiracy theorists like Alex Jones and especially the very articulate Alan Watt. If there is a “mainstream” of absolutist conspiracy theorizing, Irvin and Atwill fall firmly within it. If anything, though, it is their emphasis that makes them unique. To bring about this New World Order of shit-kicking peasant stinkards and their transhuman lords and masters, they conclude, the psychedelic movement was absolutely essential.

As evidence for this Agenda the authors cite the work of Terence McKenna. Didn’t McKenna, the indefatigable psychedelic evangelist, constantly promote the idea of an Archaic Revival? Isn’t the Archaic Revival entirely synonymous with the Dark Age? Didn’t McKenna admit in an interview, published in The Archaic Revival and quoted by Atwill and Irvin, that he was a “soft Dark Ager”?

I guess I’m a soft Dark Ager. I think there will be a mild dark age. I don’t think it will be anything like the dark ages that lasted a thousand years…

This certainly appears to condemn McKenna. He is clearly advocating neo-feudalism! He must be an agent of the Agenda! It is worthwhile, though, to look up McKenna’s entire quote. Jan Irvin, to his credit, constantly exhorts his readers to check his facts. I’ll take his advice. McKenna is asked if he thought, in agreement with certain futurists, that humanity would have to pass through a new dark age in order to attain a higher state of collective consciousness. Here’s his full response:

I guess I’m a soft Dark Ager. I think there will be a mild Dark Age. I don’t think it will be anything like the Dark Ages which lasted a thousand years — I think it will last more like five years — and will be a time of economic retraction, religious fundamentalism, retreat into closed communities by certain segments of the society, feudal warfare among minor states, and this sort of thing. I think it will give way in the late ’90s to the actual global future that we’re all yearning for. Then there will be basically a 15-year period where all these things are drawn together with progressively greater and greater sophistication, much in the way that modern science, and philosophy has grown with greater and greater sophistication in a single direction since the Renaissance. Sometime around the end of 2012, all of this will be boiled down into a kind of alchemical distillation of the historical experience that will be a doorway into the life of the imagination.    

Terence is obviously quite off in his timing, but there is no indication that he is in any way advocating a new Dark Age as a positive end for social control — quite the contrary. He is saying that there may unfortunately be a wholly undesirable and unnecessary, yet extremely brief, period of reaction before the real goal emerges: the “doorway into the life of the imagination.”

It is readily apparent to anyone who spends any amount of time listening to or reading Terence McKenna that he is in no way an advocate for a Dark Age as he defines it — economic retraction, fundamentalism, closed communities, feudal warfare, etc. His advocacy of the Archaic Revival, on the other hand, is completely antithetical to this. And, once again, McKenna is very lucid in what he means by this term.

Terence argues that in a time of general crisis a society will naturally look back to a time in its history when possible solutions or the means of resolving the current crisis might be found. Thus, during the dissolution of the medieval worldview individual Europeans turned to the classical age of Rome and Greece to find new inspiration, resulting in the Renaissance.

McKenna, an admirer of both the Renaissance and classical Greece, concludes that the combined crises of modernity are so dire that we must look back even further to a time before the State, before organized religion, before the hierarchical stratification of society, before the severing of humanity’s link with the rest of nature — all key features of both the Dark Age and today.

This time is found in the long archaic (not ancient and definitely not medieval or feudal) age of the paleolithic. And, to anticipate a stupid objection, McKenna is not advocating a return to the Old Stone Age. He is saying that there are many things that we urgently need to learn from our “primitive” ancestors and still-existing hunter-gatherer tribes.

Fortunately, movements in art and in the wider culture and counterculture have from the late-19th century onward attempted to learn these lessons. Anyone who would equate the terms “Archaic Revival” with “Dark Age” in McKenna is either completely missing the point or is consciously misrepresenting his message.

The Fungal Bureau of Intoxication

Could it be, though, that McKenna is being devious in his presentation? If, as Irvin and Atwill assert, McKenna is an agent of the nefarious Agenda then isn’t it very possible that he is seducing people with his highly-cultivated charm and elocution to accept a vision of the Archaic Revival which is actually something completely opposite to what he says it is, namely a new Dark Age? If this is the case, Irvin and Atwill present no evidence of it. Irvin does claim, however, to have caught McKenna admitting that he is just this sort of agent.

https://i0.wp.com/upload.wikimedia.org/wikipedia/en/4/43/SecretAgent.jpg

Irvin presents this evidence, “an explosive audio clip,” in an article from August of last year explosively entitled, “NEW MKULTRA DISCOVERY: Terence McKenna admitted that he was a “deep background” and “PR” agent (CIA or FBI).” The clip can be listened to here, but this is the damning quotation:

And certainly when I reached La Chorerra in 1971 I had a price on my head by the FBI, I was running out of money, I was at the end of my rope. And then THEY recruited me [laughter from his audience] and said, “you know, with a mouth like yours there’s a place for you in our organization.” And I’ve worked in deep background positions about which the less said the better. And then about 15 years ago THEY shifted me into public relations and I’ve been there to the present.

What is conspicuously absent from Jan Irvin’s account of this is the laughter. McKenna’s audience during the talk and nearly all of his subsequent listeners have realized that Terence is making a joke about being recruited by the Mushroom. Absolutist conspiracy theorists, in contrast, are notorious for not having a sense of humour. In objection to this fairly basic interpretation of McKenna’s words, Jan Irvin reveals that he is definitely well within the absolutist camp:

1) Do mushrooms have organizations, deep background and public relations (propaganda)? Or does a spy agency?
2) What would mushrooms need with a public relations or propaganda department? Or is that something a spy agency would have?
3) Would mushrooms tell him the less said the better: “deep background positions about which the less said the better”, or is that something an agency would do?
4) Do mushrooms have “positions”? Or does an agency?
5) Are the mushrooms able to pay him because he’s out of money? Or is that something an agency could do? (remember he’s in trouble for smuggling)
6) Are mushrooms able to get him out of trouble with Interpol and the FBI for DRUG SMUGGLING? Or is that something an agency like the CIA or FBI could do?
7) Do mushrooms answer the story of what happened to him after his arrest? Or is that something that his employment as an agent would do?

https://i0.wp.com/www.thiel-a-vision.com/wp-content/uploads/2010/10/matango03.jpg

Wow. Irvin does seem to have a point (or seven!) here. All those who laughed will surely not laugh last. The evidence is in! If there is anything, though, to take seriously I think it is McKenna’s confession that he was recruited by the Mushroom. He is admitting to a conspiracy here, and it is one that is far vaster in scope than anything the CIA and the FBI combined could think up. Irvin, unfortunately, does not appear to take this sort of conspiracy seriously.

The less interesting, more banal story of McKenna as FBI/CIA agent has been thoroughly “debunked” elsewhere on the web so there is no reason to go over the boring business again here. It is interesting (and funny) to hear Terence’s brother Dennis’ take on the whole thing. Here is Dennis in an interview from May 2013 (at 35 minutes in):

I just feel kind of sorry for Jan, actually. He seems to have this need to see conspiracies where none exist…. This is the web of delusion that you can fall into if you’re not careful and I think he has. … It looks like pathology to me, and a lot of people see that. But then Jan will say, well, you won’t go through these 20 databases that I’ve sent you and these 200 links. And you’ve got to understand, no Jan I won’t, because for one thing I don’t have time and the fact there are connections does not necessarily a conspiracy make. I mean, yeah, Terence talked at Esalen and Aldous Huxley talked at Esalen that doesn’t mean that Esalen is involved in some plot for world domination. … I just don’t buy it.  It just seems like a waste of time. … I would think I would know that [Terence was an agent]. I would think he would have said something. You know, we were close. But then maybe he was but he didn’t even know he was. I don’t think so. I don’t know if you’ve seen Jan’s website? What is that? This is… like the [Terence’s] Timewave in a way — this elaborate model that you come up with that explains all and everything if you could just see it. I’m not seeing it, Jan, sorry.

https://i2.wp.com/spesmagna.com/wp-content/uploads/2012/06/fiend_without_a_face02.jpg

Pathology or not (and, to be fair, Dennis is calling his brother similarly nuts), the obvious response for an absolutist conspiracy theorist would be to claim that Dennis is also a part of the conspiracy. This is essentially Jan’s response. A big deal will be made out of the fact that Dennis didn’t directly deny that his brother was an agent. This, according to absolutist logic, is tantamount to admitting that he was an agent.

If this was all Irvin and Atwill had on Terence McKenna it would seem like pretty flimsy stuff. Yet of course this is not their full argument. As Dennis explains, Terence is condemned for connections, real or illusory, that he had with institutions and people like Esalen, Huxley, Teilhard de Chardin, Marshall McLuhan, etc. As a lover of synchronicity I will accept all of these connections and more. I just doubt that any of these prove that McKenna was, consciously or not, working for an Agenda to enslave humanity.

For me to try to refute these assertions would involve plunging into the “20 databases” and “200 links” and that is not really my purpose here. McKenna himself is only one small facet of Atwill and Irvin’s mega-thesis and even to definitively prove that McKenna was a saint, which he by no means was, would not really shake the core of their claim. It is a good idea to look into some of this research, though, just to see if it stands up to scrutiny.

https://i2.wp.com/media-cache-ec0.pinimg.com/736x/b2/8b/a0/b28ba029e1cd593fd655b08fa42c7105.jpg

A Dose Of Disinfo

Another key player in the conspiracy, according to Atwill and Irvin, is Albert Hofmann, the inventor of LSD. If a psychedelic conspiracy really exists then Hofmann has got to be in the thick of it, right? Atwill and Irvin present their most damning evidence against Hofmann:

Though like many of those associated with the origins of the psychedelic movement, Albert Hofmann is called “divine,” evidence has come to light which exposes him as both a CIA and French Intelligence operative. Hoffman helped the agency dose the French village Pont Saint Esprit with LSD. As a result five people died and Hofmann helped to cover up the crime. The LSD event at Pont Saint Esprit led to the famous murder of Frank Olson by the CIA because he had threatened to go public.

A footnote informs us that this “evidence” is taken from journalist Hank Albarelli’s 2009 book, A Terrible Mistake: The Murder of Frank Olson and the CIA’s Secret Cold War Experiments. If we look into the mass poisoning event in Pont-Saint-Esprit in 1951, we quickly find that Albarelli is about the only person claiming that the CIA dosed the village with LSD. Steven Kaplan, a professor of history at Cornell University who also wrote a book about the events of the French village, has described Albarelli’s theory as “absurd.”

I have numerous objections to this paltry evidence against the CIA. First of all, it’s clinically incoherent: LSD takes effects in just a few hours, whereas the inhabitants showed symptoms only after 36 hours or more. Furthermore, LSD does not cause the digestive ailments or the vegetative effects described by the townspeople…

Now it could be that Kaplan is himself a conspirator assigned the task to whitewash the odious deeds of the CIA, but oddly it is not Kaplan that Irvin and Atwill place under suspicion. It is Albarelli. Apparently it was Albarelli who attempted to thwart Irvin’s research into Gordon Wasson’s ties to the CIA:

An example of how Wasson’s activities for the CIA have been kept hidden is the work of MK-ULTRA “expert” and author Hank Albarelli, a former lawyer for the Carter administration and Whitehouse who also worked for the Treasury Department. Though Albarelli presents himself to the public as a MK-ULTRA ‘whistleblower’, he apparently attempted to derail Irvin’s investigation into Gordon Wasson.

But wait a minute. If Albarelli has been outed by Irvin and Atwill as a disinfo agent then why is he cited as the sole source of “evidence” that Albert Hofmann assisted the CIA in dosing a French village with LSD? Might not this also be disinformation? At the very least this is an example of extremely sloppy research by Irvin and Atwill. To use a source which these authors themselves go on to discredit in order to attempt to slag Hofmann is really scraping the bottom of the barrel. One wonders how much more of Irvin and Atwill’s research, if one was feeling particularly masochistic and had a ton of time to sift through it, would similarly transmute into shit.

Leveling The Playing Field For Everyone

Fortunately, though, Jan Irvin has education on his side. Real education — not the kind we plebs get from ordinary public schools and universities. Jan has rediscovered the Trivium — the ancient arts of Grammar, Logic and Rhetoric, which along with the Quadrivium make up the Seven Liberal Arts. On his website we can listen to a genuinely fascinating series of podcasts on the Trivium, largely presented by Gene Odening.

In the first interview with Odening we are told that the Trivium is the educational method, ancient in origin, which is even now taught at the boarding schools of the elite. The purpose of the Trivium is to develop critical thinking. It essentially is a tool to see through the bullshit, to expose the conditioning, propaganda and manipulation that we all face. So far so good. A foolproof methodology of critical thinking is definitely desired. The three arts are conveniently broken down as follows:

[1] General Grammar
(Answers the question of the Who, What, Where, and the When of a subject.) Discovering and ordering facts of reality comprises basic, systematic Knowledge

[2] Formal Logic
(Answers the Why of a subject.) Developing the faculty of reason in establishing valid [i.e., non-contradictory] relationships among facts comprises systematic Understanding

[3] Classical Rhetoric
(Provides the How of a subject.) Applying knowledge and understanding expressively comprises Wisdom or, in other words, it is systematically useable knowledge and understanding

https://i0.wp.com/www.bestmoodle.net/widgets/images/roll/roll067.jpg

Sounds great. Comprehensive and handily applicable. It actually sounds strangely familiar. Oh, I remember where I heard something like this — in a talk by Terence McKenna:

The world is so tricky that without rules and razors you are as lambs led to the slaughter. And I’m speaking of the world as we have always found it. Add onto that the world based on techniques of mass psychology, advertising, political propaganda, image manipulation…There are many forces that seek to victimize us. And the only way through this is rational analysis of what is being presented. It amazes me that this is considered a radical position. I mean, this is what used to be called a good liberal education. And then somewhere after the sixties when the government decided that universal public education only created mobs milling in the streets calling for human rights, education ceased to serve the goal of producing an informed citizenry. And instead we took an authoritarian model: the purpose of education is to produce unquestioning consumers with an alcoholic obsession for work. And so it is. [at 12:55 minutes]

Here McKenna almost sounds as if he listened to Jan Irvin’s podcast — except that this was recorded way back in 1994. The similarities between the two, though, are striking. By “a good liberal education” Terence is undoubtedly referring to the Seven Liberal Arts, which include the Trivium. His concerns are also identical to those of Irvin and Odening. He is advocating a “rational analysis of what is being presented,” a system of “rules and razors,” in order to deflect the “many forces that seek to victimize us.”

https://i0.wp.com/i470.photobucket.com/albums/rr63/amarback/un-chien-andalou-razor-eye.gif

The one glaring difference between Irvin and McKenna on this point is their view of the sixties. According to McKenna, students and other protesters gained their critical view of the establishment through a public liberal education and the use of psychedelics. According to Irvin and Atwill, it was the use of psychedelics and the lack of a proper liberal education that so definitely duped the sixties generation. How could such divergent opinions both be generated by two seemingly sincere advocates of critical thinking and the Trivium?

But beyond this how could McKenna, that outed agent and psychedelic snake-oil salesman, be an advocate for the Trivium at all? Is he just lying? Are we to assume that every time he tells his audience to “question authority — even my own” and “try it for yourself” that he actually means “do exactly what I say”?

http://4.bp.blogspot.com/-gOhmPEB8G9s/TcoWBoP01SI/AAAAAAAAA6U/WTEoxy5enTk/s1600/pied-piper-of-hamelin-patrick-hiatt.jpg

There may be a solution to this puzzle. As we progress through Irvin’s “Trivium Education” podcasts we come to a very fascinating interview with Kevin Cole, a Trivium Method student of Odening and Irvin. Cole relates how in his own research he discovered that the Classical Trivium and the Seven Liberal Arts were actually used as a complete system of control by the elite for centuries.

The Classical Trivium, we finally learn, is entirely different from the Trivium Method (perhaps we should start to call it the Trivium Method™?) which was developed by Odening and interpreted by Irvin in order to free minds rather than to enslave them.

http://3.bp.blogspot.com/-M95faJbf8k0/TVhUaEAozLI/AAAAAAAABtI/8kwYe--Gc-8/s1600/mmy.jpg

It’s obvious, therefore, that McKenna is only an advocate of the Classical Trivium and not the liberating Trivium Method™. The similarity of language and purported methodology is only there to deceive. That clears up that. But hold on a sec — weren’t we told on the first of these podcasts that the Trivium Method™ was ancient and that it is still taught to the children of the elite? A confused commenter to the Cole episode, and a now distraught former acolyte, expresses similar concerns:

To be honest, this upset me quite a bit. This shed light on the enormous amount of bullshit about the classical trivium that was spewed for a few years by Gnostic Media and Tragedy And Hope.
Here are some questions I have for you:
What form of education, if not the classical trivium, is taught to the “elite?” It seems that all of your previous claims about the trivium being taught to the “elite” was pure conjecture.
If we are inherently free, why do we need a “liberating” education?
Why was Gene Odening so misinformed about this? Why should I, after watching this video, continue to use the “trivium method” which is now so clearly a misunderstanding of the true classical trivium on the part of a “self-taught scholar?”
These are only some of MANY questions that need to be answered. I’m sure I’m speaking on the behalf of many others who feel the same about this issue. There’s been a lot of conjecture and bullshit, and we demand answers.

[Maccari-Cicero.jpg]

Jan Irvin, master of Rhetoric, responds with his usual balance of wisdom, subtlety and eloquence:

We have ALWAYS explained that the trivium was used for mind control. If you haven’t caught on to that, you weren’t paying attention. There was 3 years of grammar alone that had to be done to flush all of the misapplication of the trivium out. Gene has always explained from day one that it was used for control. He never said it wasn’t. That was the ENTIRE PURPOSE of releasing it! To level the playing field for EVERYONE! If you want to be controlled by those who misuse it, then don’t study it and live in ignorance. It seems you weren’t even paying attention to what this video had to say, as the video explained that what Gene has put forth is the first time it’s been used for FREEDOM. Can you show us were we haven’t said it was used for control by the elites?

Ah… so the trivium is not the trivium. There is no contradiction here. The trivium can be used to both liberate and ensnare. Kind of like a good trip and a bad trip? If we accept, though, that Odening’s new Trivium Method™ is a way to liberate the masses while the old Classical Trivium is used for mind control, there is no need to additionally accept that the TM™ is ancient and therefore well-tested. Like any new system of thought, or any ancient system, every aspect of it must be held up to full scrutiny.

Quisquidquandoubicur

Irvin is fond of saying, for example, “do not put your Logic before your Grammar.” By this he means to not approach a situation with a ready-made theory of why it is like it is. Instead we must first compile and examine all of the available facts of who, what, where, and when (the Grammar) and only then can we attempt an explanation (the Logic). A valid explanation can only arise if the basic facts do not contradict one another.

A problem emerges, however, with determining these “facts.” If we say, for example, that who Aldous Huxley is, is an evil promoter of eugenics and world government then we already have reasons why we have concluded this. We have already put our Logic before our Grammar. Each fact is at first a theory. But a supporter of the TM™ might say that this is acceptable because our reasons for concluding that Huxley is a supporter of eugenics and world government are also based on facts — Huxley’s family ties to the Eugenics movement etc.

http://3.bp.blogspot.com/-Lq49CIEHV3c/UqL_lqkKhiI/AAAAAAAAH40/kqDWefstXb4/s1600/Eugenesia+y+mala+memoria+de+la+humanidad+(eligelavida).jpg

This might all be valid. These facts might in turn be very sound, but we still would have reasons for accepting them as facts. A pure fact, though, pure Grammar, the whats and whos and wheres, may be impossible to separate from why. This may seem like nitpicking, but over and over I’ve seen the no-logic-before-grammar clause being used by Irvin in an attempt to out-argue his opponents. It doesn’t hold water.

As an example, if we accept as fact, as Grammar, that Aldous Huxley is a tireless advocate for totalitarian rule, then the letter he wrote to George Orwell, cited by Irvin and Atwill, discussing which of their dystopic visions is more accurate, will strike us as being very sinister. If, in contrast, we view both Brave New World and 1984 as novels intending to warn people against creeping totalitarianism, then our reading of this letter will be very different.

Within the next generation I believe that the world’s rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience. In other words, I feel that the nightmare of Nineteen Eighty-Four is destined to modulate into the nightmare of a world having more resemblance to that which I imagined in Brave New World. The change will be brought about as a result of a felt need for increased efficiency. Meanwhile, of course, there may be a large scale biological and atomic war — in which case we shall have nightmares of other and scarcely imaginable kinds.

https://i1.wp.com/www.sevenwholedays.org/wp-content/uploads/2009/08/huxley-orwell.png

If we have already concluded as fact, as Grammar, that Albert Hofmann is a CIA agent then it is easy to believe that he helped poison a French village with LSD, even though our only source for this “fact” is from a writer that we have already discredited.

Like every other human theory, Irvin and Atwill’s theory on the manufacture of the counterculture is supported with cherry-picked “facts.” This is not so much a condemnation of their theory as it is to state that they are, like anyone else, all too human. The application of the Trivium Method™ no more guarantees the truth of their theory than does the application of the apologetics of Thomas Aquinas.

What happens when “facts” are encountered that don’t appear to fit this theory? What do we do, for instance, with Mae Brussell’s well-reasoned theory that the Manson murders were an Establishment psyop designed to disorientate and discredit the growing counterculture which directly threatened elite control?

https://i0.wp.com/31.media.tumblr.com/5c3cb510e27d912983bdc5c5f748490e/tumblr_mr4u8goXkU1rzk55no1_1280.jpg

If, as according to Irvin and Atwill, the hippies were “manufactured” in order to transform culture then why would TPTB try to bring down their own creation just a couple of years after it gained mainstream attention? Was Mae simply wrong? Was she also an agent?

And what about the conservative reaction in the Reagan eighties against all vestiges of the former counterculture? What about the “Moral Majority”? What about the promotion of “family values”? What about the “culture wars”?

[jerry_falwell0515.jpg]

Are Reagan and Pat Robertson the good guys here? Did the CIA’s program fail or did another phase of their manipulation kick in — the clichéd and misunderstood Hegelian dialectic, perhaps? And then there were the nineties, when the psychedelic pied pipers like McKenna and others were once again set loose to dose the imaginations of a whole new generation. Did the Agenda move back on track, or did it come even further off the rails?

I’m not saying that these facts cannot be worked into the theory of Irvin and Atwill. Absolutist conspiracy theories can usually absorb any fact that is thrown at them. As far as I know, however, they have not yet been shoehorned into the mix, and when they are, the resulting mess is not necessarily going to be logical.

Uncertain and Incomplete

And yet increasingly in recent years logic is equated with certainty. Debunkers and “skeptics” of every stripe are on the march. “Pseudoscience,” claims of the paranormal, conspiracy theories, spirituality, alternative medicine — the whole ball of “woo” is in the crosshairs. In the face of this, into the viper’s den of pop-up fallacies and rational wikis, steps fearless researcher and podcaster extraordinaire, James Corbett.

In a largely overlooked Aug. 2012 podcast entitled “Logic Is Not Enough,” Corbett dares to present a bit of heresy — humans are really not all that logical and logic itself can only take you so far. He illustrates this by simply showing how even the most logically sound argument can reach a false conclusion if its premises are wrong.

Beyond the scope of formal logic, Corbett explains that Heisenberg’s Uncertainty Principle in physics and Gödel’s Incompleteness Theorems in mathematics both demonstrate that even within these hardboiled fields of study unpredictability and indeterminacy rear their ugly heads. With or without logic, certainty is elusive.

Buckminster Fuller, in a conversation from 1967, takes this all much further than Heisenberg (or Corbett!):

Heisenberg said that observation alters the phenomenon observed. T.S. Eliot said that studying history alters history. Ezra Pound said that thinking in general alters what is thought about. Pound’s formulation is the most general, and I think it’s the earliest. [quoted in Hugh Kenner, The Pound Era]


By studying the history of the sixties counterculture, Atwill and Irvin are altering history. By thinking and writing about their theory, I am altering it. Both alterations are fine and should be expected. The problem arises when we think that we have captured the history or the idea.

To tie a living thing down, to analyse it and to categorize it, is to change it. And by attempting to do so it changes us. It should not take a physicist or a mathematician to “prove” this. And it is, of course, the poets who would realize this first. (I’ll discuss in depth the wisdom and folly of Ezra Pound in the second part of this essay.)

In his podcast, Corbett reminds us that much of the “Agenda” aims to refashion irrational individuals into logical machines. Elite control freaks like George Bush Sr. avow that “The enemy is unpredictability. The enemy is instability.” To be truly logical is to be entirely predictable, entirely stable. A logical person, a person well-trained in the Trivium Method let’s say, can be counted on to say and do the logical thing at every step. He or she is not overly emotional, not contradictory in his or her actions and thoughts, and is entirely stable. A clockwork orange.

The usual explanation for why the CIA gave up its research on LSD and other psychedelics is precisely that the drugs have unpredictable effects. They can be used to decondition people, but they are very poor at reliably reconditioning them. Who in the world has ever had a predictable psychedelic trip?

Irvin and Atwill are correct to warn us about how post-Freudian sorcerers of schlock like Edward Bernays use advertising and propaganda to target us emotionally, scramble our logic, and direct the course of culture. Their attack on the state education system and the entertainment industry as instruments to “dumb down” the population is indispensable. Critical thinking and reason, more than ever, are required.

There is a broader way to look at all of this, however. In Corbett’s podcast episode we briefly hear a clip from an interview with the cognitive scientist George Lakoff. Lakoff explains that reason, contrary to what was thought in the 18th century and what is still accepted by political and social institutions even now, is not fully conscious, unemotional or subject to formal logic. Instead it is embodied, it is driven by empathy for others over “enlightened self-interest”, and it frequently perceives metaphorically rather than logically.

An individual human is by no means a logical machine, nor is he or she entirely driven by irrational emotions. We are complex, even contradictory, creatures. It may be that, contrary to Huxley and Orwell, there is no possible way for our psyches to be fully bridled. On the other hand, it may be equally impossible to develop a foolproof method for preventing attempts to bridle them.

All methods fail for some and succeed for others. Psychedelics aren’t the whole answer, neither is the Trivium Method™. Contradictions are out there and in here always. As Walt Whitman wrote:

Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.

[waltwhitman-camden1891.jpg]

The conspiracy, the conspiracies, are also contradictory. They are also embodied, emotional, metaphoric, fluid, unpredictable, multitudinous. So is the counterculture. So is a psychedelic trip. So is Jan Irvin. So is this post. The pop-up fallacy machine would likely blow a gasket processing what I’ve written here. I really don’t care.

Corbett mentions one last fallacy that might help me out: the fallacy fallacy. This is the false presumption that just because a claim is poorly argued, and/or contains many fallacies, the claim itself is wrong. It may just be, though, that I’m making a fallacy fallacy fallacy: the equally false presumption that the fallacy fallacy somehow excuses poor argumentation and/or the use of fallacies. There’s a conspiracy theory for you. Here’s another:

Conspiracy theory, in my humble opinion, is a kind of epistemological cartoon about reality. Isn’t it so simple to believe that things are run by the greys, and that all we have to do is trade sufficient fetal tissue to them and then we can solve our technological problems, or isn’t it comforting to believe that the Jews are behind everything, or the Communist Party, or the Catholic Church, or the Masons. Well, these are epistemological cartoons, it is kindergarten in the art of amateur historiography.

I believe that the truth of the matter is far more terrifying, that the real truth that dare not speak itself is that no one is in control, absolutely no one. This stuff is ruled by the equations of dynamics and chaos. There may be entities seeking control, but to seek control is to take enormous aggravation upon yourself. It’s like trying to control a dream.

The dream or nightmare may not be controllable, but it does have a certain structure, a patterned energy, a flux of phosphenal filaments. And it is both bound and sent spinning by spaghetti.


Chapter 4
It’s like a different world. The fenced-in apartment complex in the heart of Denver is located just a short walk from glitzy boutiques and high-end restaurants, but there is no sign of prosperity here. Homeless people are camped nearby while addicts smoke crack in the parking lot.
People are socializing in front of the building’s entrance despite the midday heat. A black man is pacing the fence trying to get someone’s attention while two younger men are carrying furniture into a neighboring building. It’s a convivial neighborhood and everyone seems to know everyone else. Everyone, that is, except the older, gray-haired man who walks up to the door with a shopping bag full of vegetables at around 4 p.m. He doesn’t even look at his neighbors before disappearing into the complex without greeting any of them.
But once you’ve seen the photos from the inside of his apartment, it immediately becomes clear why the 66-year-old seeks to limit his contact with the outside world. And it becomes even more clear when you look into his past.
James Mason at his home in Denver
James Nolan Mason was an extremist even as a teenager. He joined the American Nazi Party of George Lincoln Rockwell when he was just 14 and became involved in the National Socialist Liberation Front in the 1970s. He has served several prison terms, including one stint for attacking a group of black men together with an accomplice. On another occasion, he was charged with child abuse. During a search of his apartment, the police found naked photos of a 15-year-old girl along with swastika flags and photos of Adolf Hitler and his propaganda minister, Joseph Goebbels.
In the 1980s, Mason decided to publish his fantasies of power and violence in book form, which he called “Siege.” The tome – a collection of his bizarre newsletters, on which he collaborated with the sect leader Charles Manson – is full of Holocaust denials and ad hominem attacks on both homosexuals and Jews. Above all, however, it calls for the establishment of a network of decentralized terror cells and for taking up arms against the “system.” Mason’s goal has long been that of passing along his intolerant worldview to the next generations – and for a long time, he found no success. But that all changed in 2015.
James Mason and Atomwaffen Division
Propaganda photos from the Atomwaffen Division cell in Texas
That year, the Nazi group Atomwaffen Division (“Atomwaffen” is German for atomic weapon) was founded on the internet forum ironmarch.org, a discussion platform for neo-Nazis from around the world. The extremists discovered James Mason and were excited about his crazed, radical ideas. “Siege” became a must-read and Mason their ideological doyen. But that isn’t the only thing that makes them so dangerous, according to experts on right-wing extremism. Members are heavily armed and prepared to make use of their weapons. Indeed, they are getting ready for what they see as the coming “race war” in so-called “hate camps.” Weapons training is conducted by members of the U.S. military, who are also among the group’s members. According to one former member of Atomwaffen Division, newcomers must submit to waterboarding, in addition to other such trials. But who is behind Atomwaffen Division?
The first murder took place on May 19, 2017. That’s when Devon Arthurs, 18, shot to death his two housemates, Andrew Oneschuk, 18, and Jeremy Himmelman, 22. All three were members of Atomwaffen Division, but Arthurs would later say that the other two didn’t respect his faith. Arthurs, it turned out, had slowly become estranged from the group’s right-wing extremist ideology, converted to Islam and began sympathizing with Islamic State.
Killer Arthurs (left), victims Oneschuk and Himmelman (right)
The group’s leader, Brandon Russell, likewise lived in the shared residence and the police found firearms, ammunition and bomb-making supplies in the garage. Before the discovery, Russell had told followers in internal chats of his intention to blow up a power plant. He was sentenced to five years behind bars.
The Murders Continue
On Dec. 22, 2017, 17-year-old Nicholas Giampa shot and killed his girlfriend’s parents in Reston, Virginia. They had forbidden their daughter from associating with him because of his right-wing extremist worldview. Giampa is open about both his admiration for James Mason and his membership in Atomwaffen Division. After the two killings, he shot himself as well, but survived.
Killer Giampa
The most recent murder took place not even a month later and the investigation into the incident is ongoing. Reporters from DER SPIEGEL were able to speak with police officials in Lake Forest, California, where the killing took place, in addition to the mother of the victim. We were also able to examine the private chat messages sent between the victim and friends, allowing a detailed reconstruction of the crime.
When the rest of his fellow Atomwaffen Division members learned that Samuel Woodward had been arrested for the murder of a homosexual Jew, they began celebrating his crime, referring to him as a “gay Jew wrecking crew.” For the beginning of the trial, they even had T-shirts printed with Woodward’s image, complete with a swastika on his forehead.
Atomwaffen Division is not a group of online trolls who spread derogatory images and graphics on the internet. Rather, members share their propaganda within their own social media bubble and secret communication forums. DER SPIEGEL has gained exclusive access to internal chats from the group.
Inside Atomwaffen
Those chats quickly make it clear that the group doesn’t just have it in for homosexuals, Jews and blacks. They also glorify all manner of right-wing extremist terror along with mass murderers like Timothy McVeigh, Dylann Roof and the Norwegian Anders Breivik.
Letter from Theodore Kaczynski
The group is also pen pals with the three-time murderer Theodore Kaczynski, better known as the Unabomber. They have set up a thread to discuss among themselves what questions they next want to ask of the imprisoned Kaczynski.
Yet interspersed in the discussions focused on their idols, National Socialism and violent video games, sentences such as the following can be found: “Carpetbomb your local refugee center;” “Bombing police stations is artistic expression;” and “I want to bomb a federal building.”
Bomb-building instructions
It is difficult to assess whether the online posing is an immediate precursor to concrete attacks. Members share links to archives, including hundreds of documents listing the preparations necessary for armed battle and terrorist attacks. Among them are handbooks that describe in detail how to carry out attacks on power plants, electricity grids and highway bridges – and dozens of instructions for building pipe bombs, car bombs and nail bombs along with directions for manufacturing delay detonators and powerful explosives out of household items.
A Broad Swath of Hate
But Atomwaffen Division doesn’t just glorify right-wing extremist terror: Taken together, their chat messages convey a rather confusing picture. Members post images of people who have been beheaded or murdered in other ways, including execution videos made by Islamic State. They also share extremist interpretations of Koran verses. In one posting, the school shooting at Columbine High School was referred to as “a perfect act of revolt.”
The group is also extremely misogynistic. Members refer to women as “egotistical sociopaths that have no worth,” and as “whores” and “property.” One member writes that “every rape” is “deserved.” “I wouldn’t even CALL it rape,” writes another. Pedophilia isn’t a taboo either. “She bleeds she breeds” and “birth is consent” are just a couple of many such examples.
It is, in short, a broad swath of hate, from National Socialism to child abuse to Islamic State. So, how does it all fit together?
The Hate Network
At some point, it was no longer enough for Atomwaffen Division members to simply read “Siege.” They wanted to meet its author in person. And in 2017, allegedly after searching for him for years, they tracked down James Mason, who had gone into hiding. A friendship developed along with a business relationship. By then, the marginally successful phase of Mason’s Nazi career had long since passed and he was solidly on the path toward complete insignificance. But the young Nazis from Atomwaffen Division set out to advance him into the digital era. They brought out Mason’s dusty Nazi propaganda and repackaged it under the label Siege Culture. Atomwaffen Division then began publishing his articles on a new website and also recorded podcasts with him. But the focus of Siege Culture is squarely on Mason’s books. John Cameron Denton, the group’s leader, claims to own the rights to the books.
Denton: “James Mason passed the torch on to us.”
Mason with Atomwaffen Division members (left), Denton visiting Mason (right)
Members of Atomwaffen Division take care of layout, promotion and sales. Five books are currently available, including a reissue of “Siege” and an even more bizarre collection of writings in which Mason claims that both Adolf Hitler and Charles Manson are reincarnations of Jesus Christ. Seven additional books are planned. They are printed and sold using Amazon’s self-publishing platform CreateSpace.
On a recent Sunday morning at 8:25, the man to whom young Nazis flock is shuffling down East Colfax Avenue in Denver. He makes his way past the park that is home to several homeless people and down the street to a bus stop, where he picks a waiting spot that is a few steps away from the others. He appears to be in a good mood. What is he thinking about? What does this man who is so full of violence and hatred have to say? James Mason doesn’t speak with journalists and has lived in hiding for more than 10 years. But he is happy to speak to an interested tourist from Germany. The following interview was conducted with a hidden camera:
For Atomwaffen Division, the cooperation with James Mason is important primarily because of the recognition it brings within the scene. It helps the group attract violence-prone young men, and not just in the U.S. The cult surrounding Mason’s “Siege” has produced a global network of fanatics. For some, contact is limited to the internet, but others travel across the globe to meet their fellow comrades. And new chapters of Atomwaffen Division have recently begun springing up. A few examples:
Atomwaffen Division now acts as a global amalgamator of violence-prone young men, and James Mason is their inspiration. His young men promote a barbarous worldview and want to be as extreme as possible. From a rhetorical point of view, the Nazis have reached an acute level of zealotry. The only thing left is translating that hate into action.
“Many of you must step up your existential apartheid game,” one member wrote in a chat at the end of July. “The internet can only give you pointers, not experience.”
Authors, Camera, Video Editing: Alexander Epp, Roman Höfner

Yog Sothoth walks fluidly through city streets, moving against the constant stream of business dealings . . . artfully finding his path through the bustling bodies of the postmodern Metropolis.
He feels each being as it passes, hidden or forgotten. . .he knows their cycles and habits.

Most people are blind to his beauty, although some sense a strangeness; the otherness is sometimes perceived with a fleeting thought or body sensation.

To the few, he is recognizable, even before the mind has a chance to remember.

He is the wind and light, controlled

thinkorbebeaten.wordpress.com

homelessholocaust


Murder Trail, Houston, Texas (Molesting Garbage Law CIty)

HAVE YOU HEARD?

The Jonathan Foster Murder Trial Just Got Started in Texas!

What’s that you say? You don’t know who Jonathan Foster was?

On Christmas Eve Day of 2010, 12 year old Jonathan Foster was kidnapped from his home by Mona Nelson, a 44 year old Black woman.

After abducting the child, Nelson, a welder, tied his hands, and then roasted the boy alive with her blow torch. Imagine what went through Jonathan’s mind as he was being burned alive!

Jonathan’s badly burned body was soon discovered in a roadside ditch in Houston, not far from where he lived. Nelson admits to dumping the container which held the body–she was caught on film–but makes the ridiculous claim that she was given the container by one of Jonathan’s relatives.

Police suspect that Jonathan may not have been her only victim.

Houston Police Department Homicide Detective Mike Miller calls Nelson a “cold, soulless murderer who showed an absolute lack of remorse in taking the life of Jonathan Foster.”

Next question: Do you know who Trayvon Martin and George Zimmerman are?

Thug Martin (right) was shot while brutally beating Mr. Zimmerman

Of course you do!!!

Everybody in America knows about Thug Martin and George Zimmerman!

It doesn’t take a Sherlock Holmes to clearly see that there is a double standard aimed at White folks. The question my dear Mr. Watson, is, WHY does this horrible double standard exist?

And more importantly, WHO is real power behind this “anti White” media bias? (Hint: It ain’t Al Sharpton!)

And most importantly, TO WHAT END?

The “War on Whites” serves the interests of the THE NEW WORLD ORDER!

Journalist and former Presidential candidate Pat Buchanan cracks the code for us:

“Global elites view the White Western world as the main obstacle standing in the way of a future world government. Multiculturalism is a tool used by such elites to dismantle White Western civilization.”

Can you handle the truth? Are you ready to take the next step?

Where is the uproar over this “hate crime?” Why have you not heard about this trial in main stream media?

We challenge you to open the final door, to “get smart,” and solve this mystery.


REALLY???? WHAT THE FUCK IS WRONG WITH YOU FUCKING NIGGERS???? CAN ONE OF YOU FUCKING NIGGERS EXPLAIN WHY THE FUCK YOU ANIMALS ARE SO FUCKING UNCIVILIZED?????


DISCLAIMER FOR THE CLUELESS:

This is humor, folks. I am not a White Supremacist or a Skinhead.

SAVE OUR TRAILERPARKS FROM
JEWISH UFO’S AND BLACK UNITED NATIONS HELICOPTERS!!!

Nazi Space Aryans DO NOT PERFORM MEDICAL EXPERIMENTS ON WHITE PEOPLE!!!

It is a DOCUMENTED FACT that the space aliens abducting people for medical

experiments are of JEWISH origin from the star system 51 Peg.

In the 16th century, a JEWISH rabbi constructed a golem out of clay to attack

the Christian settlements around them. This golem eventually turned on the

JEWISH population and the rabbi destroyed it. This was the first DOCUMENTED

CASE OF JEWS CONSTRUCTING PEOPLE!!!

Later, as JEWISH technology and NECROMANCY became more advanced, they began

constructing BLACK PEOPLE. In the year 1904, the JEWS INVENTED THE BLACK RACE.

Even today, JEWISH “Archeologists” are creating “ancient” structures they’re

“discovering” in Africa and Egypt. For example, the pyramids were mostly

constructed by JEWISH “Archaeologists” in 1973. Just last year, they

“discovered” yet another pyramid. PRIOR TO 1904, THERE WERE NO BLACK PEOPLE

ANYWHERE! Any information to the contrary is A JEWISH HOAX.

Occasionally we Nazi’s, with the help of our Aryan Space Nazis from Tau Ceti

Prime, can capture and re-program some of these Black “People” who are in fact

really just biological androids created by THE JEWS. These are the only

“abductions” we Interstellar Space Nazis perform.

Some of the Black “people” we have reprogramed have been:

  • Louis Farrakahn
  • Jesse Jackson
  • Al Sharpton
  • Michael Jackson (We even made him LOOK White!)
  • Malcolm Little (Aka, Malcolm X. The JEWS reprogramed him after we did,so we had to have Farrakahn kill him).

All of the above people were reprogramed by our extraterrestrial Aryan friends

to become Black Nazis and servants of the Interstellar Aryan Space Corps.

THE JEWS, on the other hand, continue to have their Interstellar Zionists abduct

WHITE PEOPLE.

These JEWISH SPACE SHIPS are based in Groom Lake, which is also called Area 51.

This is an area kept secret by the US Government, also known as THE ZIONIST

OCCUPIED GOVERNMENT or ZOG. To cover up their activities, THE JEWS are burning

toxic waste there and have allowed this information to become public knowledge,

but in reality the toxic waste disposal is merely a more sinister front for

their true agenda of ABDUCTING HUMAN CHRISTIANS WITH UFO’S. In the process of

this cover-up, MANY WHITE PEOPLE HAVE BEEN POISONED with known toxins such as

Dioxin, PCB’s, Aspratame, dextromathorphan, and even monosodium glutamate which

is being illegally disposed of at Area 51. Of course, we Nazis dispose of toxic

waste by either deporting it to the east or making it into lampshades and soap.

This is almost as awful as the JEWISH SPACE ALIEN UFO ABDUCTIONS.

You will nottice that most of these JEWISH abductions are in the midwest from

trailer parks, which, coincidentally, happens to be where most of our Neo-Nazi

allies come from. What we need to do is set up anti-UFO lasers to defend

trailer parks from JEWISH UFO’s and the tornados they unleash with the aid of

BLACK HELICOPTERS FROM THE UNITED NATIONS!!!

Even now, UN Troops from UNICEF may be hovering above your astroturf making your

pink flamingo whirligigs go balistic.

In the meantime, we need to get as many of our Aryan brothers and sisters to the

safety of the Aryan Nazi UFO Mothership at the center of the Earth. Getting

there is easy. Sell everything you have. The JEWS will be eager to buy it.

Get yourself an airplane ticket to McMurdo Sound in Antarctica. Then, once you

are in Antarctica, find a volcano and jump into it. The Aryan Space Nazis will

beam you into their ship as you fall and you will land unharmed in the hands of

beautiful, superior Aryan people from the stars.

— Ernst Zundel*

My right to free speech supersedes your right to exist.


Not the real Zundel! This is a pseudo-Ernst Zundel posting from AOL.


DISCLAIMER FOR THE CLUELESS:

This is humor, folks. I am not a White Supremacist or a Skinhead.


Train robbers are stealing rolling cargo in the Mojave. But today’s prize may be TVs or Nikes


By Phil Garlington of The Orange County Register (KRT)

MOJAVE NATIONAL PRESERVE, Calif. — National Park Ranger Tim Duncan has his hands full dealing with speeders, cactus poachers and off-roaders as the lone federal lawman for these arid 1.4 million acres of protected high desert.

And the most serious criminal activity Duncan faces?

Train robbers.

The Union Pacific railroad line from Los Angeles to Las Vegas runs straight through the desolate heart of the eastern Mojave. Dead center in the national preserve is The Hill, an 18-mile grade, where 1½-mile-long eastbound freight trains laden with double stacks of containers slow to a crawl.

“It’s been bad,” says the bearded, jovial Duncan. “Sometimes both sides of the track have been littered with boxes of merchandise the thieves have thrown off the train. This has been one of their favorite spots.”

It’s night, and Duncan is hunkered under a concrete railroad bridge with Union Pacific special agent David Sachs, who is dressed Ninja-fashion entirely in black, including Kevlar vest and boonie hat. Moonlight casts spectral shadows across sage, tamarisk and gray desert floor. Two other pairs of railroad police, and a German shepherd, are concealed along the tracks.

Half a mile away, another special agent, Darrell Brown, an infrared night scope slung around his neck, has taken position atop a railroad signal mast. He’s wearing camouflage pants and a black T-shirt with POLICE stenciled on it in white letters.

Over Brown’s radio, the not-quite-human voice of an automated sensor reports that the coming freight’s brakes are operational:

“Two-forty-three. No defects.”

Suddenly, there’s the flash of the train’s headlamp and the squealing and wheezing of steel wheels as the 120-car freight starts up the grade.

The thieves — if there are any — will be scrunched down in “tubs” between the containers and the sides of the freight cars, invisible to anybody at ground level. But from his vantage point atop the signal mast, Brown will be able to peer directly into the tubs as the train rumbles by beneath him.

“They are difficult to see,” says Brown, a muscular veteran of two decades with the railroad. “The train crews seldom spot them, unless they’re alerted by a crew on a train going the other direction.”

As the six diesels begin the laborious climb uphill, the container train slows to 8 mph.

Often, Brown says, robbers meet the train at Yermo, five miles east of Barstow. During the day, they hole up in abandoned buildings. As night falls, they flit across the railroad yard and hop aboard eastbound double stackers carrying goods from the Pacific Rim. They travel light. Burglar tools and a quart of water. They wear several layers of clothing to cushion the spine-jarring bumping of the freight cars.

Using lengths of pipe, bolt cutters or hacksaws, they cut the seals on the containers and quickly rummage through boxes, looking for electronics, athletic shoes or expensive clothing that can be turned over for quick money.

“We’ve tried different kinds of locks, but they always figure out how to gain entrance,” Brown says.

It’s so hit-and-miss, Brown says, thieves sometimes miss valuable electronics because they can’t identify the products.

“The people hired by the gang bosses in Los Angeles to rob the trains are very low-level,” Brown says. “They’re like the drug cartel mules, guys with nothing to lose. They’re given a few hundred dollars, and the gangs look on them as being expendable.”

The thieves hop off trains and drag the loot into the desert, sometimes for half a mile, cover it with brush, and wait for a truck.

In the past month, thieves have been hitting the trains hard. One double stacker arrived in Las Vegas recently with 24 containers broached.

“We have to keep hammering ’em, or the loss of merchandise would be staggering,” says Union Pacific special agent Paul Kunze, whose usual beat is the stretch of track between Yermo and Las Vegas.

“It used to be they’d steal cigarettes, tires and booze,” Kunze says, “and the pros could smell the merchandise in the boxcars.”

Now it’s electronics and expensive clothing, although thieves also have taken outboard motors and even washing machines.

The jackpot for a container burglar, however, is finding a consignment of Nike Air Jordans, Kunze says.

A 31-year veteran of the railroad police, Kunze, at 56, is fit and athletic, and takes pride in being able to pursue fleeing suspects over miles of desert. Last month, during the pursuit of a train robber flushed from a double stacker, Kunze chased the 20-year-old suspect four miles through deep sand, gullies and thorn brush, crossing Interstate 15 twice, until a California Highway Patrol helicopter helped make the arrest. He says that afterward he had to pull the thorns out of his leg with a pair of pliers.

“In the last year we’ve arrested about 50 train robbers,” Kunze says.

Most, however, escape into the desert.

“There’s no water out there, and I don’t know how they survive,” Duncan says. “But they’re very tough.”

Even when captured, few of the thieves have been prosecuted, Brown says.

“They’re just deported. It’s impossible to get them to inform on the higher-ups. They know it’d be a death sentence for them.”

Often, the agents simply “light up” a train from their perches atop the signal masts by shining lights into the tubs, thereby preventing the thieves from cracking into the boxes. “It denies them a payday,” Kunze says. “They took a long, uncomfortable, thirsty ride into the desert for nothing.”

Another tactic has been to ambush the trucks. Railroad police have recovered a couple of rental vans that got stuck in the sand when they were driven out to load stolen merchandise, Kunze says.

Railroad policing is relatively new to the Mojave. In 1994, what had been the Mojave National Scenic Area, administered by the Bureau of Land Management, became the Mojave National Preserve, run by the National Park Service.

Funding in the first year was exactly $1, because of a squabble in Washington about the amount of off-road use. The park staff got laid off, and for a year no rangers patrolled the back country.

“During that year, the stretch of track between Kelso and Nipton was littered with cartons and all sorts of merchandise the robbers couldn’t fit into their vans,” Duncan says.

“All of this clothing and jewelry started turning up along the track,” says Linda Darryl, manager of the Nipton Store. “Everybody around here was wearing T-shirts and bracelets. We thought it was strange that this stuff was falling out of locked containers.”

Rural mail carrier Mike Smith has retrieved and returned truckloads of merchandise scattered by the thieves. One park ranger found (and returned) thousands of cartons of cigarettes.

While Brown and the Union Pacific police mainly are concerned about protecting shipments, Duncan’s primary concern is the safety of park visitors.

“The park service isn’t happy about the idea of criminals out on the road trying to hitch a ride,” Duncan says. “That’s not considered part of the National Park experience.”

When Brown or Kunze spot thieves in the tubs, they radio the train’s engineer to stop at a point where agents are concealed, and the chase begins. “We’ve had some success using dogs,” says Brown, who handles one called Bet.

This night, Max and Bet are still sluggish after the four-hour ride from Vegas. “Achtung!” shouts Max’s handler, Steve Stevenson (all commands to the dogs are given in German). “You need to get them fired up.” For practice, Max, on command, attacks a padded sleeve worn by a visitor.

Kunze, who now also has clambered up on a signal mast, worries about his Navy Seal wristwatch, which gives off a faint greenish glow, and about the luminous sights on his 9mm pistol. “Can they see that? I might have to go back to a Timex.”

“I really don’t have animosity for most of these guys,” Kunze says of the train robbers. “A few of them are hardened felons. Most of them are just poor Mexicans. When I catch them they say, ‘I’m sorry. I’m just trying to feed my family.’ But I want to catch them. With me, it’s pride. I don’t want them to outrun me.”

The former Nebraska football player always carries a bottle of water during these foot races. And a radio. He has called away section crews from their work repairing track to join the chase. When one suspect saw himself surrounded by three Navajo gandy dancers armed with pick handles, he surrendered, and then fainted, Kunze says.

Once, during a pursuit, Kunze commandeered a squad of Marine Corps ultra-marathoners, who happened to be running alongside the tracks, to help make a collar.

Kunze also says he has to change strategy as thieves change tactics.

“We’ve been tracking them to where they stash the loot. Now they’re starting to brush out their footprints.

“They use one kind of signal marker, we catch on to it, and they start using something else. The trucks used to pick up the loot right away. Now they may bury it and pick it up weeks later.

“It’s cowboys and Indians out here.”


(c) 1998, The Orange County Register (Santa Ana, Calif.).

Visit the Register on the World Wide Web at http://www.ocregister.com/

Distributed by Knight Ridder/Tribune Information Services.

The Truth and Facts about Lt. Colonel Michael A. Aquino, Ph.D. These things can be verified if you just take the time to research. This deals with Satanism in general and the crimes with clear connections to Satanism. Sure, a few of the perpetrators were mentally ill from the start, but the Satanic link is undeniable. I will say all faiths have their own crimes connected with them. I have included links you can look up. This is about Satanism, though, and in particular Dr. Aquino.

Lt. Colonel Michael A. Aquino, Ph.D., Psychological Operations (Ret.). (I personally knew him, but in ways I can’t disclose for obvious reasons.) He has bragged of performing a Satanic ritual in the exact same place that the occultist Nazi Heinrich Himmler did.

https://en.wikipedia.org/wiki/Nazism_and_occultism
Dr. Michael A. Aquino, Ph.D., United States Army, also worked with the CIA and the National Security Agency (NSA). Michael Aquino – Church of Satan, Temple of Set.

Michael Aquino has been accused of multiple acts of child abuse, child sex slavery, torture and psychological human experimentation through the years. This was mainly a big story during the 1980s. There have been multiple witnesses who have ended up dead or have been labeled as crazy. These are facts! He has been implicated in multiple lawsuits, as well as filing lawsuits himself to protect his family and reputation. Many believe that he is connected to the Washington call-boy operation that was used for blackmail against people in high levels of government. It is also believed by many that he is possibly connected to the #Pizzagate child sex ring.

There are tons of conspiracy theories, false stories and disinformation surrounding Dr. Michael A. Aquino, but there are some truths and facts; you be the judge. One day the Truth will reveal itself.

Michael Aquino has been repeatedly implicated in Satanic ritual abuse (SRA), child abuse and contemporary iterations of MKULTRA-derived trauma-based programming by numerous victims over the decades, from child victims of SRA to adult victims of Project Monarch. These allegations were never proven definitively; however, they remain a compelling topic when one objectively examines the testimony of those involved and the circumstantial evidence, and considers the manner in which the case was handled (or mishandled) by the courts.

It’s worth noting that he (Michael Aquino) had been heavily involved in military PSYOPS and the NSA, in acts of torture under the CIA’s Phoenix Program, and had authored the pivotal ‘MindWar’ paper on the topic of psychological operations, propaganda and mass mind control, including the use of psychotronics to that effect.

As you are likely aware, he (Aquino) had founded the Temple of Set after a falling out with Church of Satan founder Anton LaVey. The Temple of Set is a left-hand path (e.g. black magic) oriented initiatory order with an added emphasis on theistic Satanism, while incorporating the Egyptian mysteries and Hermeticism into its ritual and doctrine.

You mention the Ordo Templi Orientis; however, it should be noted that the Temple of Set is not directly affiliated with the O.T.O. in any official capacity, nor can it trace any lineage through the O.T.O. itself, though it was indeed heavily influenced by the work of Aleister Crowley and his pioneering research into the Hermetic arts, Western esotericism and occultism in general, which it incorporated with LaVeyan Satanism into a theistic form of Satanic gnosis, embodied by the Egyptian god Set and the term xeper.
If you would like to learn more about Setianism and the Temple of Set, I will refer you to their website, along with this assortment of publications and documents I’ve taken the liberty of uploading here.

https://en.wikipedia.org/wiki/List_of_satanic_ritual_abuse_allegations
https://en.wikipedia.org/wiki/Satanic_ritual_abuse

Born in 1946, Michael Aquino was a military intelligence officer specializing in psychological warfare. In 1969 he joined Anton LaVey’s Church of Satan and rose rapidly through the group’s ranks. His satanic church is described here on Wikipedia:
https://en.wikipedia.org/wiki/Temple_of_Set

The Church Of Satan Written by Michael Aquino Here

Basically, this is his résumé:

TEMPLE OF SET WEBSITE:
https://www.xeper.org/
https://xeper.org/maquino/
Email: Xeper@aol.com

It is said on Wikipedia that the Temple of Set also set up its own private intranet for communication around the world. From Wikipedia:
The Temple first registered a website in 1997, the same year as the Church of Satan. It would also establish its own intranet, allowing for communication between Setians in different parts of the world.

He is also affiliated with this occult website:
http://khprvod.org/

Dr. Michael A. Aquino, Ph.D., remains active online and comments on many YouTube videos.

Google: https://plus.google.com/115942221653091196019/about/p/pub
Youtube: https://www.youtube.com/user/MAAquino/videos
Twitter: https://twitter.com/templeofset

One of His Websites:
http://www.rachane.org/

Timeline:

1967 – Michael Aquino began a two-year tour of duty in Vietnam, taking part in the infamous Phoenix Program. The Phoenix Program was an assassination/torture/terror operation that was initiated by the CIA, with the aim of ‘neutralizing’ the civilian infrastructure that supported the Viet Cong insurgency in South Vietnam. It was a terrifying ‘final solution’ that blatantly violated the Geneva Conventions. Targets for assassination included VC tax collectors, supply officers, political cadre, local military officials, and suspected sympathizers. However, ‘faulty intelligence’ more often than not led to the murder of innocent civilians, even young children. Sometimes orders were even given to kill US military personnel who were considered security risks. In 1971, William Colby, head of CIA in Vietnam at the time, later testified that the number killed was 20,857, while South Vietnamese government figures claimed it was 40,994 dead. This murderous psyop program had the effect of creating legions of cold-blooded psychopathic killers who would return home to the USA as completely different people than when they left. Many of them would become involved in satanism during or after their involvement in the Phoenix Program. And Michael Aquino was there to lead them into it. Soon after these killers started coming home, there began a steady rise in horrific serial murders with satanic undertones that centered around the southern California area (where Michael Aquino has always lived).

1980 – According to sworn testimony given before the US Senate in later years, MKULTRA mind-control victim Cathy O’Brien claimed that she was programmed at Fort Campbell, Kentucky, in 1980 by Lt. Col. Michael Aquino of the US Army. She stated that Aquino used barbaric trauma techniques on both her daughter Kelly and herself that involved NASA technology. Cathy O’Brien claimed that she was a ‘presidential model’ Monarch sex slave, meaning that she was specially programmed to cater to the sexual perversions of the highest-ranking politicians in the USA. She stated that during her time as a sex slave (which started as a child), she serviced a number of well-known politicians, including both Bill and Hillary Clinton, Ronald Reagan, Pierre Trudeau, Brian Mulroney, George H.W. Bush, Dick Cheney, Governors Lamar Alexander and Richard Thornburgh, Bill Bennett, Senator Patrick Leahy, Senator Robert Byrd (who she says was her handler) and Arlen Specter. O’Brien eventually gave testimony before the US Senate regarding the events she was forced to go through, and although she named her perpetrators, not one of them dared to challenge her or accuse her of slander.

1982 (September 5) – Twelve-year-old Johnny Gosch was abducted from a shopping mall parking lot in West Des Moines, Iowa, while doing his early-morning paper route, never to be seen again. Years later, during an interview with private investigator Ted Gunderson, child abductee and sex slave victim Paul Bonacci revealed that, as a child, he was directly involved in Gosch’s abduction, having acted as a lure to draw Gosch into the hands of his pedophile abductors. According to Bonacci, the abduction was ordered by Lt. Col. Michael Aquino, who later picked Gosch up at a farmhouse he was being held at and delivered him to a buyer in Colorado. For years, both boys were used for the pedophiliac pleasures of high-ranking government officials.

1985 – Allegations of ritual abuse at the Jubilation Day Care Center at Fort Bragg erupted when several children reported being sexually abused by a number of people at the day care center and several other locations, including at least two churches. Lt. Col. Michael Aquino was identified as having been present at one of those churches.

1986 (November) – Allegations emerged regarding sexual abuse being perpetrated at the US Army’s Presidio Child Development Center in San Francisco. Within a year, at least 60 victims were identified, all between the ages of three and seven. Victims told of being taken to private homes to be abused, and at least three houses were positively identified, one of them being Aquino’s. They also described being urinated and defecated upon, and being forced to ingest urine and feces. Irrefutable medical evidence documented the fact that these children were sexually abused, including five who had contracted chlamydia, and many others who showed clear signs of anal and genital trauma consistent with violent penetration. Even before the abuse was exposed, the children were exhibiting radical changes in behavior, including temper outbursts, sudden mood shifts, and poor impulse control. Both Lt. Col Michael Aquino and his satanist wife Lilith were positively identified by victims as two of the perpetrators. At least one victim was able to positively identify Aquino’s home and describe with uncanny accuracy the distinctively satanic interior of the house. Only one person was ever charged for the abuse of one child, and these charges were dismissed three months later.

1987 (August 14) – As part of the Presidio investigation, a search warrant was served on the residence of Lt. Col. Michael Aquino and his wife Lilith, and numerous videotapes, photographs, photo albums, photographic negatives, cassette tapes, and address books were confiscated. Also observed during the search was what appeared to be a soundproof room that may have been used as a torture chamber.

1987 (November) – The US Army received allegations of child abuse at fifteen of its day care centers and several elementary schools. There were also at least two other cases at Air Force day care centers, and another one at a center run by the US Navy. In addition to these, a special team of experts were sent to Panama to help determine if as many as ten children at a Department of Defense elementary school were molested and possibly infected with AIDS. Another case also emerged in a US-run facility in West Germany. These cases occurred at some of the most esteemed military bases in the country, including Fort Dix, Fort Leavenworth, Fort Jackson, and West Point. In the West Point case alone, by the end of the year, fifty children were interviewed by investigators. There were reports of satanic acts, animal sacrifices, and cult-like behavior among the abusers. An investigation led by former US Attorney Rudolph Giuliani produced no federal grand jury indictments. His investigation concluded that only one or two children were abused, in spite of all the evidence to the contrary.

1988 (November 4) – The FBI raided the Franklin Credit Union in Omaha, Nebraska, run by a man named Lawrence King. In the process, they uncovered evidence relating to drug running, pedophilia, pornography, and satanic activity involving prominent individuals in the local community and beyond. Eighty children eventually came forward and identified many of those involved, including the chief of police (who impregnated one of the victims), a local newspaper publisher, a former vice squad officer, a judge, and others. The children described satanic ceremonies involving human and animal sacrifice. Evidence that came out showed that children were abducted from shopping mall parking lots and auctioned off in Las Vegas and Toronto. Airplanes owned by the DEA were often used to transport the children. Other children were removed from orphanages and foster homes and taken to Washington, DC to take part in sex orgies with dignitaries, congressmen, and other high-ranking public officials. A number of the child victims testified that George Bush Sr. was one of the people who was often seen at these parties. Photographs were being surreptitiously taken at these orgies by the child traffickers for blackmail purposes. There was also evidence of ties to mind-control programs being conducted at Offutt Air Force Base near Omaha, Nebraska, where the head of the Strategic Air Command (SAC) is located. Minot is an area that has satanic cults operating in it that have been directly tied to the Son of Sam and Manson murders, among others.

There was no follow-up investigation when these findings were made. The US national media didn’t report on the story. Local media only focused on discrediting the witnesses. The FBI and other enforcement officers harassed and discredited victims in the aftermath, causing all but two of them – Paul Bonacci and Alisha Owen – to recant their testimonies. The child victims, rather than the perpetrators, were thrown in prison. Alisha Owen spent more time in solitary confinement than any other woman in the history of the Nebraska penal system. She received a sentence of 9 to 25 years for allegedly committing perjury, which is ten years longer than the sentence that was given to Lawrence King for looting his Franklin Credit Union of $40 million. This heavy sentence imposed on Owen was meant to serve as a warning message to all other victims who might think of talking.

The key investigator in the case, Gary Caradori, was killed when his private plane mysteriously exploded in mid-air while en route to delivering evidence to Senator Loran Schmit. His briefcase went missing from the wreckage. This was the first of many deaths of people attempting to uncover this politically connected satanic cult/sex slave/drug trafficking ring. The Discovery Channel made a documentary about this case, entitled ‘Conspiracy of Silence’, but at the last moment, a group of unidentified US Congressmen paid them $500,000 to not air it, and all copies were destroyed (one copy survived). Republican senator John DeCamp, who was on the investigative committee, wrote a book exposing the case, titled, The Franklin Cover-Up.

In 1999 (see below), Paul Bonacci, who had been kept as a child sex slave by Lawrence King, positively identified Lt. Col. Michael Aquino as an associate of King, who he said was known to the children only as ‘the Colonel’. Rusty Nelson, King’s personal photographer, also identified Aquino as the man that he once saw King give a briefcase full of money and bearer bonds to, and who King had told him was involved in the Contra gun and cocaine trafficking operation being run by George Bush Sr. and Lt. Col. Oliver North.

Michael Aquino has also been linked to Offutt Air Force Base, a Strategic Air Command post near Omaha that was implicated in the investigation by the Franklin Committee. Aquino was also claimed to have ordered the abduction of a Des Moines, Iowa paperboy, a reference to the kidnapping and disappearance of Noreen Gosch’s son Johnny.

1989 (May) – Lt. Col. Michael Aquino was again questioned in connection with child abuse investigations. This time, at least five children in three cities were making the accusations. The children had seen Aquino in newspaper and television coverage of the Presidio case and immediately recognized him as one of their abusers. The children were from Ukiah, Santa Rosa, and Fort Bragg.

1990 (August 31) – Lt. Col. Michael Aquino was processed out of the Army after being investigated for satanic ritual child abuse in the Presidio case. Although never formally charged, according to court documents, Aquino was ‘titled’ in a Report of Investigation by the Army’s Criminal Investigative Division (CID) for “indecent acts with a child, sodomy, conspiracy, kidnapping, and false swearing”. The child abuse charges remained against Aquino because, according to the CID, the evidence of alibi offered by Aquino “was not persuasive.” Aquino has since denied that he was ever processed out of the Army and even claims that he was selected as one of their first Space Intelligence Officers during this same year, and was stationed at Cheyenne Mountain for four years of active duty before retiring. There is no evidence that this is true.

1991 – (Although this entry isn’t directly connected to Michael Aquino, it directly relates to the cover-up of events that he and his pedophile cronies have been involved in.) After being accused by their daughter of molesting her as a child, Peter and Pamela Freyd established the False Memory Syndrome Foundation (FMSF). The original board members included doctors who were directly involved in MKULTRA mind-control programs, such as expert hypnotist Martin Orne and Dr. Louis Jolyon West, as well as many others who have been accused of child sexual abuse. One board member, Richard Ofshe, is an alleged expert on coercive persuasion techniques, and another, Margaret Singer, was a government expert on cults and cult tactics. Elizabeth Loftus is an expert on memory. The mandate of the FMSF has always been to discredit the recovered memories of people who report having been traumatically abused as children – usually by claiming that the child’s therapist has implanted false memories – and to develop legal defenses for protecting pedophiles in court. They have resorted to lies, intimidation, character assassination, legal tactics, and coercing victims to recant their claims and sue their therapists for large settlements. The FMSF has routinely argued in court cases that satanic ritual abuse (SRA) and multiple personality disorder (MPD) don’t exist, and the organization and its members have specifically targeted any therapists who claim that they do. This defense strategy, which has proven to be quite successful, has allowed victims of trauma-based mind-control and ritual abuse to be completely discredited, while allowing their perpetrators to continue their activities unimpeded.

At about the time that the FMSF was established, a number of mind-control and ritual abuse victims were starting to remember being involved in these events, and this threatened to expose the perpetrators, so it was important that a means to discredit them was put in place.

The False Memory Syndrome Foundation was created by known pedophiles and its board was fortified with CIA mind-control experts who cut their teeth on MKULTRA victims. Many of them are known to be closely associated with Michael Aquino. This organization of pedophiles and mind-control experts have been very instrumental in covering for Aquino and other pedophiles while destroying the lives and careers of their victims, the victim’s families, and their therapists, even long after these pedophiles performed their vile acts against them.

Also in 1991, Lieutenant Colonel Michael Aquino, formerly of the U.S. Army Reserves, filed suit under the Privacy Act of 1974, 5 U.S.C. § 552a (1988), against the Secretary of the Army, seeking to amend an Army report of a criminal investigation about him and to recover damages caused by inaccuracies in the report. He also sued under the Administrative Procedure Act, 5 U.S.C. § 701, et seq. (1988), to review the Secretary’s refusal to amend the report. The district court entered summary judgment for the Secretary, concluding that criminal investigatory files are exempt from the provisions of the Privacy Act that were invoked by Aquino and that the Secretary’s decision not to amend was not arbitrary or capricious. 768 F. Supp. 529. Finding no reversible error, the appeals court affirmed.

Aquino sued the Army in part because they refused to remove his name from the titling block or amend their report stating he was the subject of an investigation for sexual abuse and related crimes. The court document notes that several members of the Army thought there was probable cause to “Title” Aquino with offenses of indecent acts with a child, sodomy, conspiracy, kidnapping, and false swearing. Aquino tried to charge a Captain, the father who reported his child’s alleged abuse and whose child’s name appears in the victim block of the report, with “conduct unbecoming an officer.” Due to that, Aquino was titled for false swearing, in addition to “indecent acts with a child, sodomy, conspiracy, and kidnapping.” He also filed complaints against the SFPD, therapists involved in the case, journalists and CID investigating officers.

The Court Documents Can Be Seen Here:
http://law.justia.com/cases/federal/appellate-courts/F2/957/139/2044/

1995 – Diana Napolis was a Child Protection Services investigator in San Diego who was alarmed by the increasing number of children who were reporting satanic ritual abuse, starting as far back as the mid-1980s. Napolis went undercover online in 1995 and approached Aquino and several others who were associated with him, while also posting information and evidence relating to these crimes and these people’s involvement in them. In response, Aquino and his associates (several of them from the False Memory Syndrome Foundation) cyber-stalked Napolis for five years and finally tracked her down in 2000, thereby discovering her real identity. At this point, Napolis’ efforts to expose these people were defeated, with Aquino and associates using their power and influence to pose themselves as the victims and accusing her of cyber-stalking, as well as engaging in assassinating her character both online and through the media. Napolis was also targeted with directed-energy weapons (V2K) and set up to appear mentally unstable, with claims that she was stalking various celebrities. This resulted in her spending a year in jail and several more months in a mental facility, and eventually being forced to quit her job. The character assassination continued against her, with someone claiming to be Napolis posting insane ravings on the internet in order to make her appear crazy.

A reporter at the San Diego Union Tribune was working for Aquino and his cronies by painting Napolis in a bad light in news reports, accusing her of cyber-stalking, making threats, and acting crazy. Aquino was publicly complaining that she was causing serious problems for him and his fellow pedophiles. Nonetheless, the article at the first link below clearly reveals the one-sided reporting on this story by the San Diego Union Tribune and the fact that if anyone was being cyber-stalked, it was Napolis. The second link below is Napolis’ far more professional and believable response to the article:

http://www.uniontrib.com/news/uniontrib/sun/currents/news_mz1c24curio.html

http://www.konformist.com/2002/curio-tribune2.htm

Discredited Tourette Therapist Leslie Packer is a Temple of Set “Bodyguard”

The point of going after Napolis so publicly served several agendas. First, it was a public warning to anyone else who might attempt to expose the increasing satanic ritual abuse that was going on and the people behind it. Second, it acted to deflate the satanic ritual abuse scare that was mounting, making it appear to be nothing more than the ravings of delusional people. Third, it assured that stealing other people’s children using child protection services could continue. Fourth, (with the help of the FMSF) it made out children’s claims of molestation and satanic ritual abuse to be nothing more than false memories.

Some of the articles that I found posted online by Diana Napolis do make her sound a bit crazy; however, it is NOT known if they were really posted by the real Diana Napolis.

Modification of The Court Order Can Be Found Here:
http://newsgroups.derkeiler.com/Archive/Misc/misc.legal/2008-06/msg00403.html

In 2008, Diana Napolis filed a lawsuit against Michael A. Aquino and his affiliates. The court documents can be seen here:
https://www.scribd.com/doc/4981526/Diana-Napolis-vs-Michael-Aquino-lawsuit-2008

Satanic cult leader, Michael Aquino’s harassment and why…
Posted by Karen Jones on February 20, 1999
http://www.napanet.net/~moiraj/wwwboard/messages/2374.html
http://www.rumormillnews.com/cgi-bin/archive.cgi?noframes%3Bread=4435

1999 (February 5) – In US District Court in Lincoln, Nebraska, a hearing was held in the matter of Paul A. Bonacci v. Lawrence E. King, a civil action in which Bonacci charged that he had been ritualistically abused by King as part of a nationwide pedophile ring that was linked to powerful political figures in Washington and to elements of the US military and intelligence agencies.

During the hearing, Noreen Gosch, whose twelve-year-old son Johnny had been abducted in 1982, provided the court with sworn testimony linking US Army Lt. Col. Michael Aquino to the nationwide pedophile ring. She stated:

“Well, then there was a man by the name of Michael Aquino. He was in the military. He had top Pentagon clearances. He was a pedophile. He was a Satanist. He’s founded the Temple of Set. And he was a close friend of Anton LaVey. The two of them were very active in ritualistic sexual abuse. And they deferred funding from this government program to use [in] this experimentation on children.
Where they deliberately split off the personalities of these children into multiples, so that when they’re questioned or put under oath or questioned under lie detector, that unless the operator knows how to question a multiple-personality disorder, they turn up with no evidence.
They used these kids to sexually compromise politicians or anyone else they wish to have control of. This sounds so far out and so bizarre I had trouble accepting it in the beginning myself until I was presented with the data. We have the proof. In black and white.”

Paul Bonacci, who was a victim of this nationwide pedophile crime syndicate, subsequently identified Aquino as the man who ordered the kidnapping of Johnny Gosch.

Three weeks after the hearing, on February 27, Judge Warren K. Urbom ordered Lawrence King to pay $1 million in damages to Paul Bonacci.

* * *
The question here isn’t whether Michael Aquino is guilty of being one of the world’s most despicable pedophiles and mind-control programmers ever to crawl out of a toilet, which the evidence makes quite clear. Rather, the question is whether there is a conspiracy against him by all of these people (including young children) making these allegations against him over the years, and for what reason? After all, this is exactly what he claims to be the case, and this is how he has attempted to excuse these many claims against him.

Conspiracy or not, it certainly is quite unusual for his name to come up in so many cases of child abuse and sex slavery. I counted up to 200 child victims in the above listed events, all of whom would have had to be carefully coached to lie without being tripped up by a more intelligent adult during questioning. And then there are all the doctors who were involved in examining these children and who claimed that many of them had definite physical signs of having been sexually violated, who would also have to be in on it. And there are also the children's parents, who would have had to have been either very easily deceived by their own children or in on the conspiracy as well. And of course, there are also all of the investigators, lawyers, judges, etc. who supposedly conspired against Aquino in the Presidio case.

SOURCE:
http://exposinginfragard.blogspot.com/2014/02/the-case-against-michael-aquino-satanic.html

2010 – Letter To U.S. Attorney General Eric Holder and First Lady Michelle Obama
From:
America’s Bureau of Investigation
and Loving Intervention for our Nation’s Children
Douglas R. Millar
PO Box 464. Santa Rosa, CA 95404
(707) 396-8215

In this letter they request a Federal Grand Jury investigation into Army/Special Forces/CIA
Lt. Col. Michael Angelo ("Michael the Angel") Aquino and his suspected criminal acts of Satanic ritual abuse (SRA).

You Can See The Letter Written By Douglas R. Millar Here:

Documented Evidence
Police Reports, Requests For Investigation and Other Documented Evidence Here:

YOUTUBE LINKS:

MICHAEL AQUINO ON THE OPRAH SHOW:

Dr. Michael A. Aquino and Lilith Aquino Interview

Michael Aquino Ted Gunderson and a Jesuit on Geraldo

Michael Aquino and The Disappearance of Kevin Collins

Rusty Nelson testimony – Lt. Col. Michael Aquino

MindWar paper by NSA Gen. Michael Aquino

Doug Millar – Michael Aquino & Satanic Sex Ritual Child Abuse

Michael Aquino – Satanisun

In This Video He Confirms Government Human Experimentation Such As MKUltra But He Does Not Admit To Being A Part of It.
RADIO INTERVIEW WITH MICHAEL AQUINO Aug 3, 2016

Satan’s Soldiers: Devil Worship in the US Military

WANTED DEAD OR ALIVE: U.S. Army Lt. Col. Michael Aquino

The Devil’s Advocate: An Interview with Dr. Michael Aquino
http://disinfo.com/2013/09/devils-advocate-interview-dr-michael-aquino/

Satanic Subversion of the U.S. Military
http://www.abeldanger.net/2015/11/us-armypentagonnsas-six-degrees-of.html

Michael Aquino
http://www.konformist.com/2001/aquino.htm

CHILD ABUSE AND THE AMERICAN GOVERNMENT
http://aangirfan.blogspot.com/2009/09/child-abuse-and-american-government.html

Article on Lawrence E. King Jr: Overachiever (The Franklin Coverup)

Lawrence E. King Jr: Overachiever

Satanic Ritual Abuse 2016: Child Trafficking/ILLUMINATI-FREEMASON Ritual Abuse

June 21, 1974, The Washington Post, Behind Psychological Assessment’s Door, A CIA Operation, by Laurence Stern,
https://www.diigo.com/item/note/27gb8/oq82

***NOTE*** I do NOT agree with everything in this video, as a lot of it is part of the sensationalized "Satanic Panic" of the '80s, much of which has been debunked. HOWEVER, there are clear links between Satanists and murder and other crimes.

Here are some great links. This one is a book on "Black Metal" (Satanic metal), which I have read; it has lots of interviews with the people in that scene. Look at the case of the very mentally ill Per Ohlin ("Dead"), who committed suicide; he often wrote about it and even felt he didn't have normal blood going through his veins. Read about how the Satanist, Nazi and church burner Varg Vikernes ("Count Grishnackh") killed fellow Satanist Øystein Aarseth ("Euronymous"), who ran a Satanic shop called Helvete, which I believe means "Hell." Satanism, Nazism and church burnings are common themes in that scene. That is just straight-out fact.

Other Satanic Murders.

Here is a Satanic group that openly admits to being okay with human sacrifice, and if you go to their site you can download their PDFs, which are disturbing.

The case here of Adolfo Constanzo is especially brutal. He was more into Palo Mayombe, though, but still evil in origin.

Murray

Pagan Pioneers:  Founders, Elders, Leaders and Others

Michael A. Aquino

(The Temple of Set)

Written and compiled by George Knowles.

Michael A. Aquino is an American-born occultist, Satanist, and author of The Book of Coming Forth By Night. He is also known as the 13th Baron of Rachane (Clan Campbell) in Scotland, UK. A former Lt. Colonel in the United States Army, Aquino had been a specialist in psychological warfare operations during the Vietnam War, but is perhaps best known as a High Priest of the Church of Satan (CoS), founded by Anton Szandor LaVey in 1966, and as the founder and High Priest of the Temple of Set (ToS) in 1975, which today is one of the largest "neo-satanic" Churches in the USA.

Aquino was born in San Francisco on the 16th October 1946. His father, Michael Aquino Sr., had been a Sergeant in Patton's 3rd Army during World War II and, serving with distinction, was decorated with a Purple Heart for wounds received during combat. His mother, Marian Dorothy Elisabeth Ford (affectionately known as Betty Ford), was a child prodigy: after only three years of early formal education, at the age of just fourteen she was enrolled as one of the youngest students ever admitted into Stanford University. Three years later she completed a B.A. (Hon) degree in English, the youngest ever to receive such an honour at that time.

Betty Ford c. 1920, and later c. 1980

Michael A. Aquino was raised and took his early education in Santa Barbara, California, where he graduated from Santa Barbara High School in 1964. He then enrolled at the University of California (1964-1968), earning a B.A. degree in Political Science, returning later (1974-1976) to earn an M.A. degree. In 1968 he also joined the Army as an Intelligence Officer specialising in Psychological Warfare. The following year, while on leave from training to marry his first wife Janet, with whom he had a son called Dorien, he joined the newly created Church of Satan (CoS), founded by Anton Szandor LaVey in 1966, but soon after, in 1970, left on a tour of duty in Vietnam. On his return to the United States in 1971, he resumed his association with the CoS and was ordained a High Priest, after which he established his own core group (termed a grotto) that met and practised at his home in Santa Barbara.

2nd Lieutenant Michael Aquino (circa. 1968) – Janet Aquino as the Egyptian Goddess Nepthys (circa. 1970)

As a High Priest of the CoS, Aquino quickly rose to a position of prominence, but soon grew dissatisfied with LaVey's administrative leadership and philosophical approach to Satanism as a true religion. In 1972, together with other disillusioned members, Aquino resigned from the CoS and was joined by Lilith Sinclair, a High Priestess from New York who later became his second wife.

Lilith Sinclair

Over the next two years, through ritual invocation and meditation, Aquino claims to have received a communication from Satan himself in the guise of Set, the ancient Egyptian deity, who inspired him to write The Book of Coming Forth by Night and, further, to found a new Church in his name which would supersede the CoS. As a result, in 1975 a new Church called the Temple of Set was founded in Santa Barbara and formally incorporated as a non-profit organization with both federal and state tax-exempt status in California. Today the Temple of Set is considered the leading Satanic Church in the United States.

In 1976 Aquino returned to academia to finish his doctoral program at the University of California, earning a PhD in Political Science in 1980 with a dissertation on "The Neutron Bomb." He then took a position as an adjunct professor of Political Science at the Golden Gate University in San Francisco. It was shortly after this, during the mid 1980s, that rumours of satanic child abuse began to surface in connection with a day-care centre at the Presidio Army Base in California, a base where Aquino had once been assigned. When the media picked up on Aquino's name and his active association with army Psychological Warfare Operations, and then privately with the Church of Satan and the Temple of Set, the whole thing became a media witch-hunt leading to over a decade of personal persecution.

It is not my intention in this brief to argue the guilt or innocence of such allegations against Aquino; for that, the reader should conduct their own research. However, it is fair to report, after months of research and scrolling through reams and reams of some of the most heartbreaking allegations levelled against him, that despite numerous and exhaustive investigations conducted by military and civilian law enforcement agencies, including the CIA and FBI, no proof of any wrongdoing has ever been found and no formal charges have ever been brought against him.

In 1985 Aquino's devoted mother 'Betty' died of cancer in San Francisco. Betty, who had steadfastly supported her son through all his endeavours including the child-abuse allegations, was also a High Priestess in his Temple of Set. On her death she left him a $3.2 million estate, which included a house leased by Project Care for Children and the Marin County Child Abuse Council (I have so far found no association here with the earlier allegations). A year later, in 1986, Aquino married his second wife Lilith (formerly Patricia Sinclair – d.o.b. 21st April 1942), who had been a prominent High Priestess of the CoS and a leader of a grotto in New York before resigning with Aquino in 1972.

Michael & Lilith Aquino

Despite being dogged by repeated allegations of child abuse (most probably because of his continued association with the Temple of Set), Aquino continued his professional military career, rising to the rank of Lt Colonel with Military Intelligence. Initially he was involved in military psychological operations ("psy-ops"), but he also qualified as a Special Forces officer (Green Berets), as a Civil Affairs officer and as a Defence Attaché.

In addition, Aquino was a graduate of the Industrial College of the Armed Forces, the Command and General Staff College, the National Defence University, the Defence Intelligence College, the US Army Space Institute and the US State Department's Foreign Service Institute. His decorations are equally impressive and include: the Bronze Star, the Army Commendation Medal (3 awards), the Air Medal, the Special Forces Tab, the Parachutist Badge, the Republic of Vietnam Gallantry Cross and the long service Meritorious Service Medal.

Lt. Colonel Michael A. Aquino

In 1990, at the end of his full-time 22-year contract of active duty, and despite accusations that he was forced to leave due to all the previous allegations made against him, he continued his service as a part-time active USAR officer for another four years and was assigned to the Headquarters of the US Space Command with an above "Top Secret" clearance. He finally retired in 1994 with an unblemished and distinguished record and remains in the Army reserve with the rank of Lt. Colonel (USAR-Retired).

The Barony of Rachane

In 2004 Aquino applied for and became the present caretaker of the Barony of Rachane in the County of Dumbarton, Argyllshire, Scotland, UK, to which he now holds legal title as the 13th Baron of Rachane, of Clan Campbell. The Coat-of-Arms and title of the Barony is now recorded in the Register of Sasines, Scotland, and is recognised on behalf of the Crown by the Lord Lyon King of Arms. In the United States it is also a Registered Trademark with both the California Secretary of State and the United States Patent and Trademark Office.

The Baroness and Baron of Rachane, Lilith and Michael A. Aquino – The Coat-of-Arms

The Abolition of Feudal Tenure etc. (Scotland) Act 2000 concluded all the land-tenure aspects of the Scottish feudal system as of 28th November 2004. The effect upon baronies was to end their superior/vassal attachment to specific areas of land, while continuing and preserving them as titles in the Noblesse of Scotland. The present Baron and Baroness of Rachane, Michael and Lilith Aquino, have dedicated their Barony towards charitable support of animal protection, rescue and welfare.

In 2007 a Fellowship of the Barony of Rachane was inaugurated to formally honour people of dignity, wisdom, enlightenment and accomplishment, such as is known to the Baron and Baroness. Fellows are presented with the Crest Badge of the Barony, which they hope will become a symbol of benevolence and goodwill, as was once a tradition in Scotland.

The Temple of Set (over-view)

The Temple of Set is a left-hand initiatory occult Order founded by Michael A. Aquino and incorporated as a non-profit religious organization in the State of California in 1975. Initiates and members of the Order are known as "Setians."

Aquino had been a leading figure and High Priest in the Church of Satan (CoS), founded by Anton Szandor LaVey in 1966, but after administrative and philosophical disagreements with LaVey, he resigned in 1972 together with other disillusioned members. Later, through ritual invocation and meditation, Aquino sought a new mandate from the "Prince of Darkness" in the guise of "Set," the Egyptian god of death and the underworld, who inspired him to write "The Book of Coming Forth by Night" and to found a new Church in his name as the "Temple of Set".

Anton Szandor LaVey

While based on a hierarchical structure similar to that of the CoS, the Temple of Set (ToS) appears to be a more intellectually evolved form of Satanism, in that the CoS only uses the name of Satan symbolically and does not really believe that he exists; they use his name merely to draw attention to and boost their hedonistic aims of self-indulgence and elitism. ToS members, however, believe that a real Satan exists in the form of "Set," whom they consider to be the true "Prince of Darkness". The worship of Set can be traced back to ancient times, images of which have been dated to 3200 BC with inscriptions dated to 5000 BC.

In the ToS, the figure of Set is understood as a principle but is not worshipped as a god. He is considered a “role model” for initiates, a being totally apart from the objective universe. They consider him ageless and the only god with an independent existence. He is described as having given humanity through means of non-natural evolution a questioning intellect that sets humans apart from nature and gives us the possibility to attain divinity.

The philosophy of the Temple of Set is heavily influenced by the writings and rituals of Aleister Crowley's A.A. and the earlier Hermetic Order of the Golden Dawn. Emphasis through its degree structure is based on the individual's "Xeper". Xeper is a term used by the ToS to mean the true nature of "becoming" or "coming into being", and teaches that the true self, or the essence of self, is immortal, and that through self-initiation and development, or Xeper, one gains the ability to align consciousness with this essence.

Aleister Crowley

There are several stages of degrees within the ToS that indicate an individual's development and skill in magic, or black magic if you will. The ToS terms the progression through the degrees "recognitions", and because their philosophy prefers individuals to self-initiate, after a time of assessment they acknowledge their progress by granting an appropriate degree.

The degrees of the ToS are:

The first degree is that of Setian

The second degree is that of Adept

The third degree is that of High Priest/Priestess

The fourth degree is that of Magister/Magistra Templi

The fifth degree is that of Magus/Maga

The sixth and final degree is that of Ipsissimus/Ipsissima

A “Council of Nine” holds the main power of authority within the structure of the ToS and is responsible for appointing both the operating High Priest/Priestess, who acts as the public face of the Order, and an Executive Director, whose main task is to deal with administrative issues. Members of the Council of Nine are elected to office from the main body of third degree High Priests/Priestesses, or higher, for a term of nine years, with a new member being elected each year during the annual International Conclave.

On joining, a new initiate is provisionally admitted as a first degree Setian and receives a copy of Aquino’s The Book of Coming Forth By Night, their newsletter The Scroll of Set, and a set of encyclopaedias entitled The Jeweled Tablets of Set. This material contains all the organizational, philosophical and magical information they will need to qualify for full membership into the second degree, that of Adept. They also receive information on active “Pylons and Orders” (see more below) sponsored by the ToS with open access to their on-line forums and archives through which they can communicate with others should they have questions with which they may need help.

New members then have a two-year time limit to qualify for recognition as a second degree Adept. Certification and recognition is awarded by third degree members of the ToS, but only after demonstrating they have successfully mastered and applied the essential principles of magic, or black magic if you will. If such recognition is not received by that time, full membership is declined.

Once full membership as a second degree Adept is attained, most members are happy to remain in that degree and to continue to learn and advance their knowledge through the Order’s teachings in achieving individual self-realisation and self-development of free will (Xeper). Advancement to the third degree, that of High Priest/Priestess, involves much greater responsibilities towards the ToS, such as holding office in the ToS hierarchy and acting as official representatives.

The fourth degree, that of Magister/Magistra Templi, is granted by the reigning High Priest/Priestess in acknowledgement of an individual’s advancement in magical skills to such a level that they can found their own specialized “Schools of Magic” within the structure of active Pylons and Orders of the ToS.

Advancement to the fifth degree Magus/Maga can only be awarded by a unanimous decision of the Council of Nine. A fifth degree member has the power to define concepts affecting the philosophy of the ToS, such as the concept of Xeper as defined by Aquino in 1975. The final sixth degree Ipsissimus/Ipsissima, represents a Magus/Maga whose task is complete. Only a very few members of the Order achieve this position, although any fifth degree member can assume it based on his own assessment.

The Temple of Set does not tolerate docile new members and expects them to prove themselves capable as "cooperative philosophers and magicians". To demonstrate this, the ToS has loosely structured interest groups where specific themes and issues are addressed. Local and regional "Pylons" are meetings and seminars where discussions and magical work take place. These are hosted and led by second degree Adepts or higher, called Sentinels. There are also various Orders providing specific Schools of Magic and differing paths of initiation. These are led by a fourth degree Magister/Magistra Templi, who will usually be the founder of that Order. The ToS also holds an annual Conclave where official business takes place, and where workshops are held in which members can take part in a wide variety of topics and activities. The annual Conclave usually lasts for about a week and is held in various global locations.

The ToS emphasizes that magic, or black magic if you will, can be as dangerous to a newcomer as volatile chemicals are to an inexperienced lab technician. They also stress that the practice of magic, black or otherwise, is not for unstable, immature, or emotionally weak-minded individuals, and that their teachings offer nothing that an enlightened, mature intellectual would regard as undignified, sadistic, criminal or depraved.

Resources:

http://www.churchofsatan.com/index.php

http://www.rachane.org/History.html

http://www.trapezoid.org/mission.html

Email Contact – Xeper@sbcglobal.net.

Plus way too many to include here.

Written and compiled by George Knowles © 24th June 2016

Best Wishes and Blessed Be.

Pali: Hello, Boxcar the guitar guy. :) Does this affect your charity also? I cannot believe, as rich as the SPCA is, that they have the damn gall to cut funding as part of their new 2030 vision plan. What kind of a plan? They used to make billions by selling carcasses to slaughterhouses to be ground up into swine and poultry fodder, but this was covered up as a "conspiracy theory" – I was living on a cattle ranch. They have been cut off from selling euthanized animals for processing at corporate hog and chicken processing plants. Their cannibalistic funding source has been the cause of neurological death.

Related videos and references on prion disease:

Creutzfeldt-Jakob Disease and Other Prion Diseases – Brian Appleby, M.D., Seattle Science Foundation (published Mar 14, 2019, http://www.seattlesciencefoundation.org). Seattle Science Foundation is a private 501(c)(3) non-profit dedicated to international collaboration among physicians, scientists, technologists, engineers and educators; its archived recorded lectures are available for informational purposes only and are not intended to serve as, or be the basis of, a medical opinion, diagnosis, prognosis, or treatment for any particular patient.

Occurrence and Transmission | Creutzfeldt-Jakob Disease: CJD occurs worldwide, including the United States, at a rate of roughly 1 to 1.5 cases per 1 million population per year, although rates of up to two cases per million are not unusual. Whereas the majority of cases of CJD (about 85%) occur as sporadic disease, a smaller proportion of patients (5-15%) develop CJD because of inherited mutations.

Young-onset sporadic Creutzfeldt–Jakob disease with atypical phenotype (Durjoy Lahiri): sporadic CJD, with a mean survival of 6 months, is duly considered among the most fatal neurological diseases.

Dr. Valerie Sim: Combination Therapies for Human Prion Disease – CJD Foundation (Creutzfeldt-Jakob Disease Foundation videos regarding prion disease).

Prion Disease Research, July 2019, mBio: Chronic Wasting Disease in Cervids: Implications … The reporting of this case as probable vCJD – a disease linked to …

----- Forwarded Message -----
From: Katherine D'Amato <kdamato@shanti.org>
To: Katherine D'Amato <kdamato@shanti.org>
Sent: Tuesday, September 24, 2019, 3:17:17 p.m. PDT
Subject: Important update about SPCA changes

Dear PAWS client,

Hope you are doing well. We are writing to let you know about changes in how PAWS and The San Francisco SPCA will work together going forward. SPCA remains a key partner of PAWS. SPCA will be changing how they provide services to PAWS clients starting October 1st, as part of their new 2030 vision plan. There are some important changes for PAWS clients:

· SPCA hospitals are not offering free Wellness Checks or free vaccines to PAWS clients at this time.
· SPCA will no longer be providing PAWS clients with an ongoing 25% discount at the SPCA hospitals. Instead, SPCA is offering a one-time 30% discount of up to $250/year.
· The SPCA will no longer be offering Helping Hand funds for diagnostics (such as x-rays).

We know these are significant changes and we are happy to talk through them with you. You can still use your PAWS funds for visits at either SPCA hospital (Pacific Heights or Mission), but free or discounted care will no longer be available at the SPCA hospitals. The voucher process will remain the same: if you go to the SPCA, you do not need to call PAWS to request a voucher. Please continue to call PAWS if you are going to any other vet office partner such as Blue Cross (25% discount), Mission Pet Hospital (25% discount), SFVS (10% discount), All Pets Hospital (10% discount), or any other partner hospitals. PAWS will continue to provide at least $200 in vet funds per client per calendar year, plus more funds if they are available.

We know this is big news. If you have any questions or concerns or want to talk further, please contact your PAWS Care Navigator. Ali Sutch is the Care Navigator for clients with last names A-J (asutch@shanti.org or 415-265-9208). Richard Goldman is the Care Navigator for clients with last names K-Z (rgoldman@shanti.org or 415-815-8244). You can also contact the SPCA main number at 415-554-3030.

Sincerely,
Katherine, Prado, Richard, & Ali
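To make the discount arithmetic in the email above concrete, here is a minimal sketch in Python comparing the former ongoing 25% SPCA hospital discount with the new one-time 30% discount capped at $250 per year. The bill amounts and function names are hypothetical illustrations only, not anything published by PAWS or the SPCA.

# Minimal sketch: compares the old ongoing 25% SPCA hospital discount with the
# new one-time 30% discount capped at $250/year described in the PAWS email above.
# The bill amounts are hypothetical examples.

def old_policy_savings(bill):
    # Former policy: ongoing 25% discount on every SPCA hospital bill.
    return bill * 0.25

def new_policy_savings(bill, already_used=0.0):
    # New policy: one-time 30% discount, capped at $250 per calendar year.
    remaining_cap = max(0.0, 250.0 - already_used)
    return min(bill * 0.30, remaining_cap)

for bill in (100.0, 500.0, 1200.0):
    print(f"${bill:.2f} bill -> old policy saves ${old_policy_savings(bill):.2f}, "
          f"new policy saves ${new_policy_savings(bill):.2f}")

On a single visit, the new terms save at least as much as the old 25% discount for bills up to about $1,000; beyond that, and for any further visits in the same calendar year, the old ongoing discount would have been worth more.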

Many academic studies, government reports and news articles have analyzed the role of religion (or the misinterpretation of religious concepts and scripture) in radicalizing Muslims and mobilizing them to wage “Holy War” against their enemies around the globe. Few have discussed how right-wing extremism exploits Christianity and the Bible to radicalize and mobilize its violent adherents toward criminality and terrorism. Much like Al-Qaeda and the Islamic State, violent right-wing extremists — who refer to themselves as “Soldiers of Odin,” “Phineas Priests,” or “Holy Warriors” — are also inspired by religious concepts and scriptural interpretations to lash out and kill in the name of religion. Yet very little is said or written about such a connection.

White supremacists, sovereign citizens, militia extremists and violent anti-abortion adherents use religious concepts and scripture to justify threats, criminal activity and violence. This discussion of religious extremism should not be confused with someone being extremely religious. It should also not be misconstrued as an assault on Christianity. Rather, it represents an exploration of the links between violent right-wing extremism and its exploitation of Christianity and other religions to gain a better understanding of how American extremists recruit, radicalize and mobilize their adherents toward violence and terrorism.

White Supremacy

Researchers have long known that white supremacists, such as adherents of Christian Identity (a racist, antisemitic religious philosophy) and racial Nordic mythology, use religion to justify acts of violence and condone criminal activity. Lesser known are the ways other white supremacy groups, such as the Ku Klux Klan and the Creativity Movement (formerly known as the Church of Creator or World Church of the Creator), incorporate religious teachings, texts, and symbolism into their group ideology and activities to justify violating the law and committing violent acts.

The Kloran, a universal KKK handbook, features detailed descriptions of the roles and responsibilities of various KKK positions, ceremonies, and procedures. There are many biblical references in the Kloran, as well as biblical symbolism in the detailed KKK ceremonies. Also, the KKK’s primary symbol (e.g. “Blood Drop Cross” or Mystic Insignia of a Klansman) — a white cross with a red tear drop at the center — symbolizes the atonement and sacrifice of Jesus Christ and those willing to die in his name.

A lesser-known white supremacist group is the neo-Nazi Creativity Movement. Ben Klassen is credited with creating this new religion for the white race in Florida in 1973. Klassen authored two primary religious texts for the Creativity Movement: "Nature's Eternal Religion" and "The White Man's Bible." Creativity emphasizes moral conduct and behavior for the white race (e.g. "your race is your religion"), including its "Sixteen Commandments" and the "Five Fundamental Beliefs of Creativity." Klassen had a vision that every worthy member of the Creativity religion would become an ordained minister in the Church.

Two other examples of entirely racist religious movements within white supremacy are the Christian Identity movement and racist Nordic mythology. The Christian Identity movement comprises both self-proclaimed followers who operate independently and organized groups that meet regularly or even live within insular communities. In contrast, racist Nordic mythology rarely consists of organized groups or communities, preferring to operate through an autonomous, loose-knit network of adherents who congregate in prison or online.

A unique concept within Christian Identity is the “Phineas [sic.] Priesthood.” Phineas Priests believe they have been called to be “God’s Holy Warriors” for the white race. The term Phineas Priest is derived from the biblical story of Phineas, which adherents interpret as justifying the killing of interracial couples. Followers have advocated martyrdom and violence against homosexuals, mixed-race couples, and abortion providers.

Matt Hale of the World Church of the Creator received 40 years in prison for plotting to assassinate a federal judge.

Racial Nordic mysticism is most commonly embraced by neo-Nazis, racist skinheads and Aryan prison gang members. It is most prolific among younger white supremacists. Odinism and Asatru are the most popular Nordic mythological religions among white supremacists. These non-Christian religious philosophies are not inherently racist, but have been exploited and embraced by white supremacists due to their symbolically strong image of “Aryan” life and Nordic heritage. Aryan prison gang members may also have another reason for declaring affiliation with Odinism and Asatru due to prison privileges — such as special dietary needs or extra time to worship — given to those inmates who claim membership in a religious group.

Chip Berlet, a former senior analyst at Political Research Associates, points out that some white supremacists may be attracted to Nordic mythological religions as a result of their affinity toward Greek mythology, Celtic lore or interest in Nazi Germany, whose leaders celebrated Nordic myths and used Nordic symbolism for their image of heroic warriors during World War II. Neo-Nazi groups, such as the National Alliance and Volksfront, have used Norse symbolism, such as the life rune, in their group insignias and propaganda. Racist prison gangs have also been known to write letters and inscribe messages on tattoos using the runic alphabet. “These myths were the basis of Wagner’s “Ring” opera cycle, and influenced Hitler, who merged them with his distorted understanding of Nietzsche’s philosophy of the centrality of will and the concept of the Ubermensch, which Hitler turned into the idea of an Aryan ‘Master Race,’” says Berlet.

Militia Extremists

The militia movement compares itself to the “Patriots” of the American Revolution in an attempt to “save” the ideals and original intent of the U.S. Constitution and return America to what they perceive to be the country’s Judeo-Christian roots. They have adopted some of the symbols associated with the American Revolution, such as using the term “Minutemen” in group names, hosting anti-tax events (much like the Boston Tea Party), celebrating April 19 — the anniversary date of the Battles of Lexington and Concord in 1775 — and using the Gadsden Minutemen flag with its revolutionary “Don’t Tread on Me” slogan.

Many militia members have a deep respect and reverence for America’s founding fathers. Their admiration takes on religious overtones, believing the U.S. Constitution was “divinely inspired” and that the founding fathers were actually chosen and led by God to create the United States of America. For example, an Indiana Militia Corps’ citizenship recruitment pamphlet states, “The Christian faith was the anchor of the founding fathers of these United States.” The manual also states, “People of faith, Christians in particular, recognize that God is the source of all things, and that Rights come from God alone.” The militia movement erroneously believes that the principles the founding fathers used to create the U.S. Constitution are derived solely from the Bible.

Nine members of the Hutaree militia were arrested in March 2010 for conspiring to attack police officers and blow up their funeral processions.

Antigovernment conspiracy theories and apocalyptic “end times” Biblical prophecies are known to motivate militia members and groups to stockpile food, ammunition, and weapons. These apocalyptic teachings have also been linked with the radicalization of militia extremist members. For example, nine members of the Hutaree militia in Lenawee County, Michigan, were arrested in March 2010 for conspiring to attack police officers and blow up their funeral processions. According to the Hutaree, its doctrine is “based on faith and most of all the testimony of Jesus.” Charges against all nine were eventually dismissed.

On their website, the Hutaree referenced the story of the 10 virgins (Matthew 25: 1-12) as the basis for their existence. The verses declare, “The wise ones took enough oil to last the whole night, just in case the bridegroom was late. The foolish ones took not enough oil to last the whole night and figured that the bridegroom would arrive earlier than he did.” According to the Hutaree, the bridegrooms represented the Christian church today; the oil represented faith; and, those with enough faith could last through the darkest and most doubtful times, which Hutaree members believed were upon them. Further, militia members often reason that defending themselves, their families, and communities against the New World Order is a literal battle between good (i.e. God) and evil (i.e. Satan or the devil).

The militia movement has historically both feared and anticipated a cataclysmic event that could lead to the collapse of the United States. Some militia members believe that such cataclysmic events are based in biblical prophecies. For example, some militia members believe that the so-called "Anti-Christ" of the last days predicted in the Book of Revelation is a world leader who unites all nations under a "one world government" before being exposed as the agent of Satan. They further believe that Jesus will battle the Anti-Christ before restoring his kingdom on earth. Militia members cite the creation of Communism, the establishment of the United Nations, and attacks against their Constitutional rights as "signs" or "evidence" that the Anti-Christ is actively working to create the "one world government" predicted in the Bible (e.g. the Book of Revelation). Towards the end of the 1990s, many in the militia movement prepared for the turn of the millennium (e.g. Y2K) in the belief that American society would imminently collapse, resulting in anarchy and social chaos. The failure of the Y2K prophecy left many in the militia movement disillusioned, and many left the movement as a result.

More recently, militia extremists have begun organizing armed protests outside of Islamic centers and mosques fearing a rise in Muslim terrorism, perceived encroachment of Sharia law in America and/or out of pure hatred of Muslims and Islam. Some militia extremists have also provided support to gun stores and firing ranges in Arkansas, Florida and Oklahoma that were declared “Muslim Free Zones” by their owners. These types of activities are meant to harass and intimidate an entire faith-based community. They are likely inspired by militia extremists’ personal religious views of preserving America as a Christian nation.

Sovereign Citizens

Sovereign citizen extremists believe their doctrine is both inspired and sanctioned by God. Many have their own version of law that is derived from a combination of the Magna Carta, the Bible, English common law, and various 19th century state constitutions. Central to their argument is the view of a Supreme Being having endowed every person with certain inalienable rights as stated in the U.S. Declaration of Independence, the Bill of Rights, and the Bible.

David Brutsche (L), 42, and Devon Newman, 67, were arrested for allegedly plotting to capture and kill a police officer. Authorities say they were part of the anti-government “sovereign citizen” movement.

In particular, since there is a strong anti-tax component to the sovereign citizen movement, many adherents use Biblical passages to justify not paying income or property taxes to the government. They most often cite Old Testament scriptures, which reference paying usury and taking money from the poor, such as Ezekiel 22:12-13, Proverbs 28:8, Deuteronomy 23:19, and Leviticus 25:36-37. Sovereign citizen extremists further cite Nehemiah 9:32-37 to bolster the belief that oppressive taxation results from sin. Also, 1 Kings 12:13-19 is used to justify rebellion against the government for oppressive taxation.

Sovereign citizen extremists are also known to avoid paying taxes by misusing a financial option called the "corporation sole." In general, sovereign citizen extremists misuse the corporation sole tax exemption (e.g. by forming a religious organization or claiming to be a religious figure such as a pastor or minister) to avoid paying income and property taxes. They typically obtain a fake pastoral certification or minister certificate through a mail-order seminary or other bogus religious school. Then they change their residence to a "church." Courts have routinely rejected this tax avoidance tactic as frivolous, upheld criminal tax evasion convictions against those making or promoting such arguments, and imposed civil penalties for falsely claiming corporation sole status.

Violent Anti-Abortion Extremists

The majority of violent anti-abortion extremist ideology is based on Christian religious beliefs and use of Biblical scripture. A review of violent anti-abortion extremist propaganda online is filled with Biblical references to God and Jesus Christ. Many of the Biblical scriptures quoted in violent anti-abortion extremist propaganda focus on protecting children, fighting against evil doers, and standing up to iniquity or sin.

The ultimate goal of anti-abortion extremists is to rid the country of the practice of abortion and those who perform and assist in its practice. They use religious and moral beliefs to justify violence against abortion providers, their staff, and facilities. Violent anti-abortion extremists believe that human life begins at conception. For this reason, some equate abortion to murder. Using this logic, they rationalize that those performing abortions are murdering other human beings. Anti-abortion extremists also equate the practice of abortion to a “silent holocaust.” Some anti-abortion extremists go as far as claiming abortion providers are actually “serial killers” and worthy of death. This sentiment is echoed in passages from the Army of God (AOG) manual in which they declare that the killing of abortion providers is morally acceptable and justified as doing God’s work.

The AOG perpetuates the belief that violent anti-abortion extremists literally represent soldiers fighting in God’s Army and that a divine power is at the helm of their cause. “The Army of God is a real Army, and God is the General and Commander-in-Chief,” the AOG says. Their manual further states, “The soldiers, however, do not usually communicate with one another. Very few have ever met each other. And when they do, each is usually unaware of the other’s soldier status.”

Robert Dear admitted killing three people at a Planned Parenthood office in Colorado. He called the attack a “righteous crusade.”

The AOG also utilizes religious symbolism in its name and logo. The AOG name literally compares its adherents to soldiers in battle with Satan. They are fighting a war with Jesus Christ at their side in an effort to save the unborn. The AOG logo also includes a white cross (e.g. symbolizing the crucifixion of Christ and his resurrection). The logo has a soldier’s helmet hanging off the cross with a bomb featuring a lit fuse inside a box. The words “The Army of God” are inscribed over and below the cross and bomb. The AOG also uses the symbol of a white rose; a reference to the White Rose Banquet, an annual anti-abortion extremist event organized by convicted abortion clinic arsonist Michael Bray.

Religious concepts — such as Christian end times prophecy, millennialism and the belief that the Second Coming of Jesus Christ is imminent — play a vital role in the recruitment, radicalization and mobilization of violent right-wing extremists and their illegal activities in the United States. For example, white supremacists have adopted Christian concepts and Norse mythology into their extremist ideology, group rituals and calls for violence. Similarly, sovereign citizens use God and scriptural interpretation to justify breaking “man-made” laws, circumventing government regulation, avoiding taxation, and other criminal acts. Violent anti-abortion extremists have used Biblical references to create divine edicts from God and Jesus Christ to kill others and destroy property. And militia extremists and groups use religious concepts and scripture to defy the government, break laws, and stockpile food, ammunition and weapons to hasten or await the end of the world. As a result, religious concepts and scriptures have literally been hijacked by right-wing extremists, who twist religious doctrine and scriptures, to justify threats, criminal behavior and violent attacks.

Religion and scriptural interpretations have played an essential role in armed confrontations between right-wing extremists and the U.S. government during the 1980s and 1990s (e.g. the Covenant, the Sword, and the Arm of the Lord standoff in 1985, the siege at Ruby Ridge in 1992, and the raid and standoff at Waco in 1993) as well as today (e.g. the 2014 Bunkerville standoff and the takeover of the Malheur Wildlife Refuge in 2016).

These events not only demonstrated extremists rebelling against the U.S. government and its laws, but also served as declarations of their perceived divinely inspired and Constitutional rights. They continue to serve as radicalization and recruitment nodes that boost the ranks of white supremacists, militia extremists, sovereign citizens, and other radical anti-government adherents who view the government’s response to these standoffs as tyrannical and overreaching.

On Thanksgiving 2018, Ellie asked me to play guitar at her shop closure after I bought a wonderful Texas rancher’s hat at her shop and prattled on about being from Waco, Texas. Well, Ellie was from Texas also, so that was a great opportunity.

This last time I walked up Hyde Street, her shop was still vacant.

For two friends, the opening of Anthophile—a new vintage and flower shop that opened last month at 611 Hyde St. (between Geary and Post)—is the fulfillment of a lifelong dream.

Four years ago, Ellie Bobrowski and Meryll Cawn met in the San Francisco bar community. The two bonded over their shared backgrounds in design and interest in collecting vintage wares.

The two friends have kicked around the idea of opening a flower/vintage shop in the Tenderloin since August 2016, when Cawn quit her job in sales at a start-up.

But until they learned about the space at 611 Hyde earlier this year, their dream remained just that. Once they saw the space, they jumped on the opportunity, signing a lease the very next day.

Both women have a connection to the neighborhood: Bobrowski lives here and says it’s the home she has been looking for since leaving Texas several years ago. Cawn lives in the East Bay but works as a bartender at the Hi Lo Club on Polk Street, just a few blocks away from the new store.

Anthophile’s exterior at 611 Hyde St. | Photo: Anthophile/Facebook

The new shop offers vintage goods and accessories along with floral arrangements. Bobrowski regularly goes to Texas to buy wares for the store, and also delivers handcrafted floral arrangements across the city.

The two share similar tastes, but are 10 years apart in age, so their preferences are different enough to appeal to a wider audience, Cawn said.

Along with vintage clothes and accessories, the women are working to bring new brands to the city. For some brands, Anthophile will be their first retailer in the city or in California.

The goal is to keep prices affordable for people who live in the neighborhood. Most clothing items range from $20 to $60 each, and most new jewelry is priced below $60.

Cawn is also making jewelry for the store to sell. She converts broken vintage pieces that she can pick up fairly cheaply, and she says she likes to pass that value on.

There will be some pricier items, but “we like a good bargain ourselves,” Cawn said, “we don’t want to bring stuff in that is way outside of our price range.”

“There is such great community here,” Bobrowski said. “We want to be respectful of the neighborhood and the people who live here.”

Although Cawn and Bobrowski are hoping to make Anthophile a success, that doesn’t mean that they’re quitting their day jobs just yet.

Both owners have other jobs to help pay the bills: Cawn bartends at the Hi Lo Club on Polk and Bobrowski works for Intel — but they share a desire for Anthophile to become a permanent part of the neighborhood.

The two have a month-to-month lease through August, and then the option to sign a longer-term one. At that point they will reevaluate the situation and see if it’s still the right spot, or if they need more room.

Previously, the space had been a men’s clothing boutique, KnoxSF. After it shuttered in 2013, the space housed a number of different pop-up businesses.

“I want to make it into a destination for the neighborhood,” Bobrowski said. “Some combination of a neighborhood boutique you can find in Europe, with San Francisco flavor and a touch of Texas thrown in.”

Stop by and welcome Anthophile to the neighborhood. Its current hours are 1pm-8pm Tuesday-Friday and 11am-8pm on Saturdays, but those may change, so check out its Facebook page before you go.


To mark the 49th anniversary this week of the founding of the American Indian Movement (AIM), we’re taking a look at the FBI file of John Trudell, esteemed Santee Dakota poet, writer, speaker, and musician who was a key member of AIM, rising to the rank of National Chairman by the mid seventies.

To the Bureau, Trudell was a renowned “agitator,” but within his community he was a motivator who inspired Indigenous peoples across the nation to strive for a better life.

Trudell first came to the attention of the FBI in 1969 when he and other AIM members occupied Alcatraz Island in an attempt to form an Indigenous colony. Over the course of the next decade, the Bureau built up a 138-page case file, utilizing open source intelligence from newspapers and a number of confidential sources.

In September 1972, Trudell and other AIM leadership and membership occupied the Bureau of Indian Affairs headquarters in Washington, DC. The year after, in February 1973, Trudell would go with hundreds of other AIM members to Wounded Knee, South Dakota on the Pine Ridge Reservation, the site of the massacre of 300 Sioux, including women and children, by US cavalry in 1890. The massacre effectively ended the last chapter of the Indian Wars.

While there, he participated in an armed occupation in protest against the treaties broken by the US government and against the Pine Ridge Tribal Chairman Richard Wilson and his Guardians of the Oglala Nation (GOONs), a brutal security force on the Reservation. After the occupation ended in May 1973, Trudell returned to a transient way of life, joining a protest in New Mexico for better conditions in mines for Navajo workers.

In 1975, Trudell held up the Duck Valley Trading Post with a pistol, demanding food for starving elders on the Reservation and a drop in prices. In the process, Trudell fired a single round into the wall behind the clerk, who was not injured. The FBI believed the incident was staged in order to get publicity for the brutal living conditions at Duck Valley, and they devoted a considerable amount of time trying to bring him up on an assault with a deadly weapon charge. The Bureau took control of the investigation and even went so far as to forensically examine the can of Hawaiian Punch that the bullet had passed through.

The file also includes an investigation into the alleged 1979 arson of Trudell’s home on the Duck Valley Reservation. His wife and children all perished in the fire, which began on the roof – an extremely unlikely and suspicious place for a fire to begin, especially when one takes into account that less than 24 hours prior Trudell had burned an American flag in front of the FBI’s DC headquarters as an act of protest. Congressman Ronald Dellums from California wrote to the then-Director of the FBI William Webster, imploring him to investigate “the suspicious circumstances,” and saying “I strongly feel that the Nevada fire deserves your immediate and thorough investigation.” The Bureau begged to differ, instead sticking with the party line that the BIA had conducted their investigation and it resulted in an “accidental” finding, and thus the investigation would not be pursued further.

To this day, the case has never been fully investigated, and likely will never be.

Other highlights from his file include the charges that the FBI considered bringing Trudell up on, including the exceptionally rare “Insurrection or Rebellion” Title 18 United States Code, Section 2383. They also batted around Section 2384, “Seditious Conspiracy.” If anything is to be taken away from that, it is how fearful the FBI was of his efforts to organize and motivate the Indigenous nations to strive for something greater than reservation status. This makes sense especially when considering what sources said about him, and how long the Bureau had special agents out investigating him and trying to apprehend him.

An article the FBI had used as part of the investigation, which appeared around the time that the leaders of AIM were arrested for their role in the occupation of Wounded Knee, quoted Trudell speaking about the US government. “The government is dragging us through the court system so that American consciousness can pretend at humanity. Americans should begin to think about their government, as it is the one instrument that can bring people together or keep them at odds.”

After presenting the remarks without comment, the brief simply skips to Trudell acting as an organizer and spokesman for the Iroquois armed takeover at Eagle Bay, New York, in 1974. The occupation of the Ganienkeh Territory, as it is called by the Mohawk Iroquois, was another episode of Indigenous people struggling to take back land ruthlessly stolen from them. The briefs are full of accounts of Trudell traveling across the country organizing Indigenous people, telling them not to be afraid to challenge their meager status quo and, if necessary, to fight for their right to sovereignty.

One source had a particularly glowing assessment of Trudell, culminating with the fitting line, “Trudell has the ability to meet with a group of ‘pacifists’ and in a short time have them yelling and screaming ‘right on!’”

Whatever you think of his politics, Trudell was a man singularly devoted to improving the lives of his people. And for his efforts, he received this extensive FBI file, the likely murder of his family, and years of harassment by federal agents. His speeches survive on YouTube and elsewhere on the internet, and it is highly recommended that you check them out, along with the rest of his file below.


Image via American Indian Movement

We Are an Old People, We Are a New People

Part Three,  Cybele and Her Gallae

by Cathryn Platine

When discussing the pre-history of the Mother Goddess it is best to start with a brief discussion of the socio-political blinders that much of the prior research has suffered from.  Even the very word “mother” conjures images and expectations that, upon closer examination, bear little relation to the Anatolian Great Mother Goddess, yet colour almost every account written about Her.  In her introduction to In Search of God the Mother, Lynn Roller touches on many of these issues along with an excellent recap of the history of the examination of the very concept of a Mother Goddess, and deconstructs both the paternal “Mother Goddess primitive, father God superior” linear viewpoint of the majority of scholars and the “Golden Age free of strife and warfare” views of many modern Dianic pagans.  Were the ancient Anatolian civilizations matriarchal?  The plain fact of the matter is we may never know.  My own guess is that they were egalitarian in nature, but to a western world that has only started to grant equal rights to women in the past hundred years, I suppose an egalitarian society could look downright matriarchal.  To those who feel that a progression from a Mother Goddess to a father god is progress, I’d remind them that the religion of Cybele was the official religion of Rome for 600 years as well as a major part of the religious landscape of the known world.  When the Roman empire turned from Magna Mater to christianity, the empire promptly fell and the long dark ages began… hardly progress, unless progress means something entirely different to you than to me.

We started our journey into the ancient past at Catal Huyuk, where the first representation of the Great Mother was found in a granary bin circa 7000 BCE.  I am giving one of the earlier dates in use for this representation because the dating of ancient civilizations has recently been pushed markedly further back as knowledge has increased.  In part one of this series we examined how neo-lithic life was considerably more advanced than most people realize.  Before dismissing Catal Huyuk as the exception, know that several other settlements from Anatolia show the same level of advanced home building and home life as well as considerable trade over a wide area.  Indeed, several newer discoveries have pushed back the date of the ancient neo-lithic Anatolian civilization past 10,000 BCE, and there are literally hundreds of known sites that haven’t been touched yet.

Lynn Roller dismisses the connection of the Catal Huyuk seated Goddess with Cybele and, although she gives excellent reasons, I feel she also overlooks compelling evidence for the direct connection of the seated Catal Huyuk Goddess to Her.  Many writers on the subject of ancient Goddess worship assume, based on widespread finds of female figurines from the neo-lithic age, an almost universal Goddess religion.  While I agree that you cannot assume these were all Goddess representations, it is also apparent that the concept of a Great Mother Goddess associated with lions and bulls spread from ancient Anatolia to Sumeria, India, Egypt and the Minoans by 3000 BCE.  Sometimes, as in Sumeria, a formerly somewhat minor Goddess was elevated to this position.  Sometimes the Mother appears alone.  That this happened cannot be denied and is readily apparent by looking at timelines, deities and maps of the ancient world.  The point of origin is clearly central Anatolia.  Also quite telling is that the Great Mother is almost never associated with children, but rather with wild places and beasts and the very earth, moon and sun.  As we have seen in parts one and two, transsexual priestesses are almost always associated with Mother by Her various names.  Just as interesting is how often, when one digs back into the various mythologies of Her origins, one finds vague references to Mother originating as a hermaphrodite.  By Phrygian times, this hermaphroditic connection is transferred to the consort Attis, but even the earliest versions of the Attis myths start with Cybele as a hermaphrodite Herself.

Central to Lynn Roller’s discussion of the Phrygian Mother, Cybele, being of later origin is that the Phrygian people were preceded by the Hittites and co-existed with the neo-Hittites.  Compared to some of the other groups we’ve discussed, the Hittites are relative latecomers, rising to power circa 1600 BCE and apparently worshipping an entire pantheon of gods and goddesses.  While the name associated most with Magna Mater, Cybele or Kybele, almost certainly came from this period, it is important to note that the Phrygians themselves simply referred to Her as “Mother”.  As we have seen, this concept of a Mother Goddess is far older and more widespread.  Prior to the Hittites, as far back as 4000 BCE, we find a Mother Goddess associated with both cattle and lions in the Halaf culture of eastern Anatolia.

We know a flourishing civilization was in place in central Anatolia by 10,000 BCE.  We know it abruptly ended around 4000 BCE.  We know that Mother Goddess worship was central to this civilization.  So what happened?  Walled cities appear around this time period throughout the Middle East.  The answer to many of these previous mysteries is fairly simple.  There was a mini ice age affecting the area that began around 4000 BCE and lasted roughly 1000-1500 years.  Central Anatolia was simply not fit for civilized life during that period and its people spread out east and south.  This is when the Minoan civilization arose, when the Tigris and Euphrates civilizations flowered, and when people and ideas migrated to the Indian subcontinent.  This is also when areas that had pantheons, such as the Sumerians, adopted a Mother Goddess to head them, the elevation of Inanna being one of the best known examples.

Looking at timelines and migrations, what literally jumps out at you, if you are looking, is that the Mother Goddess spread from ancient Anatolia and the banks of the Caspian Sea throughout the Middle East, the Mediterranean and all the way to India in the same period of time as the ending of the ancient neo-lithic cultures of Anatolia.  When this mass migration started, we then started seeing the appearance of walled cities as conflicts arose between those migrating and those already in the areas.  It is the period between 10,000 BCE and 4000 BCE that was the model for the “peaceful matriarchal civilizations” of the modern Dianics… except it wasn’t a matriarchy, and the conflicts didn’t start because of the introduction of patriarchal thought.  It was simply a matter of people competing for increasingly scarce resources as a result of weather-forced migration.

Moving ahead to around 2500-1500 BCE, much of the non-archaeological material on the religious practices of Anatolia comes to us from Greek and Roman sources.  Considering that these sources were dependent on oral traditions, for the most part, and comparing our own misunderstandings of Greek and Roman history today, a similar distance in time, over-reliance on this material could be misleading.  When you add the factors of ethnocentric thinking (cultural bias), the fact that the accounts come to us from ancient scholars who were not part of a Mother Goddess religion themselves, and a pinch of transphobia, the bias is practically assured.  No, what is remarkable is that associations between the concept of a Mother Goddess, bulls, snakes, bees, transsexual priestesses, and lions recur over and over in the same general area in different civilizations.  Even more remarkable when you consider that the Phrygians themselves, who worshipped only Mother, lost some of these associations, and yet as soon as Cybele encountered the Aegean people (proto-Greeks) these associations were once again added back.

We have accounts that Mother’s priestesses not only practiced in cities but also roamed in small nomadic groups and did so throughout the Phrygian, Hellenistic and Roman periods in Anatolia.  It is a small step to suppose that these groups also predated the Phrygian period and provided the link in traditions that is so clear from culture to culture.  We need only look at the modern example of christianity to understand that the central figure of one religion can be incorporated into another as happened with Hinduism and Islam. Again, we need look no further than the Catholic church to see that even in a poppa god religion, Mother will once again rise as She has done there in the Marian movement within Catholicism.

To understand Cybele’s relationship to the Greek and Roman schools of religion it is necessary to deconstruct widely held misconceptions about the various gods and goddesses.  The Cybeline faith was the first of the mystery religions.  A mystery religion teaches with stories, plays and oral traditions.  The various stories about Attis, Cybele’s consort son/daughter, appeared around the same time as the origins of the Greek mythological stories.  Attis and the so-called Greek Gods were never meant to be taken as literal truth, but rather as poetic expressions of the world and morality stories.  It is no accident that the only “stories” Cybele appears in are those about Attis, yet, as we shall see, She was above all the various gods and goddesses in both ancient Greece and later Rome.  1600 years of literalist christian tradition makes understanding the nature of the Greek and Roman pantheons all but impossible for the average person today.  The famous Greek mystery schools developed from the Aegean contact with the Phrygian Cybelines.  The faith spread throughout the Mediterranean as far as Spain and southern Italy at a much earlier time than previously had been believed.  In Alexandria, Cybele was worshipped by the Greek population as “The Mother of the Gods, the Saviour who Hears our Prayers” and as “The Mother of the Gods, the Accessible One.”  Ephesus, one of the major trading centres of the area, was devoted to Cybele as early as the tenth century BCE, and the city’s ecstatic celebration, the Ephesia, honoured her.  It was also around this time that Mother’s temples underwent a change from a beehive shape to a more Grecian-looking columned pattern.  This shows in the various “doorway” shrines to Kubaba/Kubele that appeared during this period throughout the Phrygian mountains.

During the Phrygian period, Cybele’s Gallae priestesses were wandering priestesses as well as priestesses living in religious communities mixed with Mellissae priestesses.  We know that both were fairly common in Greece from various accounts, such as the mistreatment one Galla received in Athens: she was killed by being thrown into a pit.  Athens’ fortunes fell so low afterwards that a Maetreum was built and dedicated to Cybele; it was viewed as so important that all of the official records of Athens were kept there.  There is also much evidence that Sappho of Lesbos was a Mellissa priestess, and several Phrygianae were spread throughout the islands.  Careful examination of artwork showing the Greek gods often reveals Cybele’s image above them, a convention which continued into Roman times.

The story of Cybele’s presence in Rome begins circa the early sixth century BCE at the dawn of Roman history.  According to the story, King Tarquinius Superbus, the seventh (and last) legendary King of Rome, was approached by an old woman bearing nine scrolls of prophecies by the Sibyl.  She asked for three hundred gold pieces for the set, but Tarquinius thought she was a fraud and refused.  She then burned three of the scrolls in his hearth and again offered the remaining six scrolls for the same three hundred gold pieces.  Once again Tarquinius refused.  Again she burned three more scrolls.  When she offered the remaining three scrolls for the same three hundred gold pieces, Tarquinius suspected he was dealing with the Sibyl of Cumae herself and agreed.  These were the original Sibylline prophecies of Rome.  They were housed in the Capitoline temples as the most sacred books of Rome, and access to them was limited to a specially appointed priesthood who consulted them only in times of threat to Rome.

One such threat to Rome came during the second Punic War.  Rome was being badly beaten, rains of stones from heaven were falling on the city itself, and, according to legend, there were numerous other ill portents.  The Sibylline scrolls were consulted, and it was found that if a foreign foe should carry war to Italy, Rome would not only endure but prosper if Magna Mater Idaea were brought to Rome from Pessinus.  This was made all the more impressive by the arrival at this exact moment of pronouncements of a similar nature from the Sibyl of Delphi.  Romans had prided themselves on their Phrygian origins from Troy, so the introduction of a Phrygian religion was actually embraced.  Five of Rome’s leading citizens travelled to Pergamum by way of Delphi to see King Attalus.  The Sibyl of Delphi confirmed that Rome’s salvation could be had from Attalus and that when Cybele arrived in Rome She must be accorded a fitting reception.  They went to Attalus’ royal residence at Pergamum, were conducted to Pessinus, and arrangements were made to bring the Mother of the Gods to Rome.  Word was sent ahead, and the senate voted young Scipio the best and noblest of Rome’s citizens; he was given the task of greeting Magna Mater at Ostia and overseeing Her procession to Rome.

Scipio was accompanied to Ostia by the Matrons of Rome, who were to carry Magna Mater (in the form of a statue with a black meteoric stone in Her forehead) by hand from Ostia to Rome.  When the ship arrived, it became stuck at the mouth of the Tiber and resisted all attempts to free it.  Among the Matrons of Rome was Claudia Quinta, whose reputation had been questioned.  According to the legend, she waded into the waters, shooed off the men and pulled the ship free by herself, thus restoring her reputation.  Cybele arrived in Rome on April 12th, 204 BCE and was greeted with rejoicing, games, offerings and a lectisternium (a seven-day city-wide feast).  Until the mid fourth century CE this event was celebrated in Rome every year with games, festivals and feasts as the Megalesia.

Cybele was installed in the Temple of Victory on the Palatine, close to where Her own temple was already under construction.  That summer Scipio defeated Hannibal and Rome’s devotion to Cybele was cemented.  The Cybeline faith remained the only “official” religion in Rome up until the introduction of Mithraism, a faith that allowed male priests.  The Maetreum on the Palatine was dedicated in 194 BCE.

We Are an Old People, We Are a New People

Part Two,  Transsexual Priestesses, Sexuality and the Goddess

by Cathryn Platine


Sexual “morality” is one of the major blind spots to understanding the past.  The Western world has become so enmeshed in the Judaeo-Christian view of sexuality that it takes a major effort for most to take an unbiased viewpoint of cultures that had a much healthier view of human sexuality.  Even today’s neo-Pagans, who are taught that all acts of pleasure that harm none are forms of Her worship, often still struggle with the “morality” of same-sex relationships and even the existence of transsexuals, so it should not be a surprise that much written about ancient sexuality is tainted with unexamined bias.  The term “temple prostitute” is an excellent example.  The term is extremely negatively loaded emotionally.  To avoid this, I shall refer to those who practiced the institutional sacred sex role as hierodules, a Greek term without that loading to the modern reader.

One other term widely used incorrectly is eunuch.  Historians apply this term indiscriminately, with clearly no idea of its meaning.  It conjures up visions of large castrated male harem guards and the castrati singers of the Middle Ages, which falls within the true meaning of the word, but it is widely applied to the transsexual priestesses of the Goddess, which is misleading at the least and, at any rate, insulting in the extreme to those ancient transsexual women.  Today it is widely applied to the Hijra of India as well, also in blatant disrespect of their own identity.  When the term eunuch is not used, we find in its stead “castrated male priests” almost universally.  Gay and feminist historians are particularly guilty of this last use.  So what is the truth?  The truth lies in examining the lives of these priestesses, for deeds speak louder than words, and how they lived is the best record we have of who they were.

Some things never change regardless of culture.  As any woman can tell you, men place a very high sense of their identity on their genitals and always have, and so the idea that thousands upon thousands of “men” would willingly castrate themselves and then live as women the rest of their lives is just as absurd then as it is today.  We aren’t talking about involuntary castrations of infants or young males by others, such as is the source of the historic eunuchs; we are talking about individuals cutting off their own genitalia in order to live as priestesses.  Any transsexual woman reading the accounts decodes the mystery instantly and effortlessly: these individuals are not males, they are transsexual women.  Knowledge about transsexuality is widespread enough today among the educated that continuing to refer to these ancient women as “castrated male priests” or eunuchs is out and out transparently transphobic.  Unfortunately this transphobia runs rampant everywhere even today.  Despite their expressed wishes, despite the way they live their lives, almost all accounts today of the Hijra of India refer to them as eunuchs or “neither male nor female”, a sort of third sex.  If you ask a hijra about her sex, she’ll tell you she is female in her eyes, just as any modern transsexual woman would.  If you observe their lives, they live and function (as much as they are allowed to) as women.  Even in our own culture it has been only a few years since the press adopted guidelines regarding the pronouns to use when writing about transsexuals, and even with those guidelines, a lurid, post-mortem insult of “man living as a woman” is still too often the default of the press when one of us is murdered.  Transphobia is rooted in gynophobia; it is the last socially “acceptable” form of bigotry, but it is pure bigotry nonetheless.  In ancient times, as today, the imperative to bring one’s body into conformity with one’s identity cannot be truly understood by those who don’t have it.  The non-transsexual will just have to accept the word of those of us called, a call that cannot be denied.  Now we have the key to unlock the truth of the transsexual priestesses, for indeed, that is what they were.

How common were transsexual priestesses in the ancient world?  Almost every form of the Goddess was associated with them.  Inanna, also called Ishtar, had Her Assinnu.  The Assinnu were the hierodule priestesses of Inanna whose change was performed, in the earliest references, by crushing the testicles between two rocks.  Inanna also had transgendered priests, called the Kurgarru, who did not do this and who wore clothing that was female on one side and male on the other.  They were two distinct groups.  Becoming an Assinnu was a mes, a call from the Goddess.  This mes is a common thread among all transsexual priestesses.  It was recognized that transforming one’s life and body was not a choice but a destiny, the call usually coming in the form of dreams of Inanna when young.  We have several different accounts of Inanna’s descent to the underworld and rescue from Her sister, Ereshkigal.  In one, Asushunamir (She whose face is Light), the first Assinnu, was created to save Inanna.  In another version, two beings, the first Kurgarru and Kalaturru, neither males nor females, are created by Enki from the dirt under his fingernails for the mission.  As hierodules, the Assinnu were seen as mortal representatives of Inanna, and sex with an Assinnu was congress with the Goddess Herself.  As magicians, their amulets and talismans were the most powerful of magick to protect the wearer from harm; even just touching the head of an Assinnu was believed to bestow on a warrior the power to conquer his enemies.  As ritual artists they played the lyre, cymbals, two-string lutes and flutes and composed hymns and lamentations, all in Emesal, the women’s language, said to be a direct gift of Inanna, as opposed to the common language of men, Eme-ku.

In Canaan we find the Goddess as Athirat, also called Asherah or Astarte, and Her hierodule transsexual priestesses, the Qedshtu.  It should be noted that just as Gallae is changed into Gallus, denying the very gender of these priestesses and erasing the truth of their lives, the bible refers to them as Qedeshim (masculine).  The functions of the Qedshtu were almost identical to those of the Assinnu, and sexual congress with the Qedshtu was considered sex with Athirat Herself.  Apparently they also practiced a tantric sexual rite accompanied by drums and other instruments and also used flagellation to obtain an ecstatic state.  The worship of Athirat dates back as far as 8000 BCE to the Natufians, who were replaced around 4000 BCE by the Yarmukians.  The young consort, Baal, was added around this time, somewhat better known in biblical times as El.  By around 2000 BCE the Qedshtu wore long flowing caftans made of mixed colours, interwoven with gold and silver threads, intended to evoke a vision of Athirat in Her full glory in the springtime, and are thought to have also worn veils over their faces.  They were renowned for charity, maintained the garden-like groves and temples of Athirat, and were prized potters and weavers.  Among the surviving rites was the preparation of a sacred ritual food made from a mixture of milk, butter, mint and coriander blended in a cauldron and blessed by lighting seven blocks of incense over the top while accompanied by music played by other Qedshtu.

The invasion of Canaan by the bloodthirsty, patriarchal and fanatical followers of Yahweh, the people later known as the Israelites, took place around 1000 BCE.  Yahweh’s worshipers insisted he was a jealous god that would have no rivals.  Unable to completely conquer the Canaanites, they lived in close proximity for a while.  It’s no wonder that the Israelite women were drawn to Athirat, now often called Asherah, whose followers believed in equality of the sexes.  It is no wonder that the sexually repressed Israelite men would also want to participate in Her rites.  For a time the religions mixed enough that Yahweh and Asherah were considered co-deities.  The Levite priests of Yahweh were at their wits’ end, since even their wives often openly worshiped Asherah.  That some of their “sons” became Qedshtu can be decoded in the story of Joseph and his “coat of many colours”.  It is believed that Rachel, Joseph’s mother, was a priestess of Asherah and the coat came from her.  We’ve mentioned the colourful caftans with gold and silver threads that were the marks of the Qedshtu, both transsexual and non-transsexual priestesses.  Small wonder that Joseph’s brothers, devotees of Yahweh, would react badly to their brother becoming a woman, a hierodule priestess of Asherah, for indeed this is what the story indicates.

Almost all of the various Levitical laws came from this period as an attempt to keep the Israelites from worshiping Asherah.  Outlawed was the “wearing of cloth made from mixed fibres”; banned from the presence of Yahweh were the eunuchs who “had crushed their testicles between stones”; outlawed was the wearing of clothing of the opposite sex.  Israelite men were given permission, even directed, to kill their own wives and children if they did not follow their teachings.  The Levites were essentialists and not only would not recognize the womanhood of the trans-Qedshtu, but referred to them as men who laid with men.  Among the Canaanites, homosexual behaviour wasn’t uncommon and was widely accepted.  There are ample examples of artwork showing these relations that are clearly not with Qedshtu.  Then, as today, these essentialists failed to understand the difference between a transsexual and a homosexual.  It wasn’t so much the homoerotic sex that upset them; it was the idea that a man would become a woman and choose to live that way that terrified them.

The open warfare between the Israelites and the followers of Athirat began in earnest soon after the rule of Solomon, when Canaan was divided into Israel and Judah.  That many Hebrew rulers were not only tolerant of the worship of Athirat, but sometimes were themselves worshipers, cannot be denied.  Qedshtu were welcomed and openly practiced in Hebrew temples.  Jeroboam, Rehoboam and Abijam all openly worshiped Athirat and Baal.  Rehoboam’s mother was a Qedshtu.  Abijam’s son, Asa, who ruled between 908 and 867 BCE, converted wholly to Yahweh, exiled many Qedshtu, destroyed their temples and burned their groves.  He removed his own mother, Maacah, from the throne because she was a Qedshtu priestess.  Jehoshaphat of Judah went further and “the remnants of the male cult prostitutes who remained… he exterminated.” (1 Kings 22:46 RSV)  The war on the followers of Athirat continued; it is interesting to note that Athirat was so feared that She is not even mentioned, the biblical texts referring only to the followers of Baal, her consort.  This pattern is repeated in much of the old testament.  King Jehu, whose murderous attempt at genocide of Athirat and Baal’s worshipers is called “cunning”, pretended to convert and called all the Qedshtu together for a mass celebration at the temple of Jerusalem.  When he had gathered them all together and invited them to partake of their rituals, he had the doors locked and his guards murder everyone and then throw their bodies on the city garbage dump.  King Josiah, yet another son of a follower of Athirat, Amon, in the tenth year of his rule ordered all images of Athirat and Baal gathered together at Kidron and burned.  Not content with this, he then committed total sacrilege and ordered all the bones of Her worshipers dug up and burned on the altars and then scattered to the winds.  Then he proceeded to hunt out the remaining worshipers in their communal homes and temples (he broke down the houses of the male cult prostitutes… where the women wove hangings) and killed them all.  The christian descendants of the Israelites a thousand years later would repeat these deeds, but more of that later.  Let us now journey to ancient Russia, then back to Anatolia, and then on to Greece and Rome.

Reaching back as far as 8000 BCE, the people of the area known today as Russia and the Ukraine worshiped a Mother Goddess.  Our first records give Her name as Artimpasa or Argimpasa and, like most other Mother Goddess aspects, She had her transsexual priestesses.  What they called themselves is lost in the mists of time; we know them only by the names the Greeks gave them, insulting names, the least of which was Enarees, meaning un-manned.  Many authors suggest they are the spiritual descendants of the paleolithic shamans of Siberia and the source of the “twin-spirits” of the AmerIndians and Inuit.  We do know something about them.  They not only “lived like women” but also were said to “play the woman” in all things.  Artimpasa was associated with plant life and particularly cannabis.  Like Cybele, She is often accompanied by a lion.  We know the Enarees wore the clothes of women, spoke the women’s language and performed the tasks associated with women.  Writing about them, the Greeks, who were somewhat transphobic, claimed that they were who they were as a punishment and made jokes about how they were the Scythians who had castrated themselves by spending too much time in the saddle.  From a plaque that formed the front of a queen’s tiara dating from between 4000 and 3000 BCE, we know that they probably served the same function as priestesses, as almost everywhere She was worshiped.  From Herodotus we learn that they acted as diviners by “taking a piece of the inner bark of the linden tree” and cutting it “into three pieces and twisting and untwisting it around their fingers”.  We also know that part of the rites included making a “sweat lodge” and burning cannabis inside to obtain an ecstatic state; some of the tripods, braziers and charcoal with remains of cannabis have been found in various digs in the area.

By the time of the Scythians, relations with the Enarees were mixed; they were both respected as priestesses and seers and also ridiculed.  This was the common pattern as societies turned more patriarchal and the fear among men of being “called” as the trans-priestesses were was given voice.  By the sixth century BCE the Scythians had a far-reaching empire, and one of the more interesting tales is how they came into conflict with the Amazons, made a truce and intermarried for a while before separating.  Another tells of a Scythian noble, Anacharsis, who traveled to the west in search of wisdom around 600 BCE.  He joined a mystery religion and, while visiting Cyzicus, encountered a festival of Cybele.  He made a vow that if he was able to return home safely, he would worship Cybele just as he’d seen.  True to his word, upon returning to his homelands he donned the dress of the Gallae and “went through the ceremonies with all the proper rites and observances”.  As we shall see, the “proper rites and observances” of the Gallae included an initiation by working up an ecstatic state and then quickly removing both the penis and testicles with a sharp object, thereafter donning the robes and dress of the Gallae and living as female.  Anacharsis would have had to have done this if he performed the proper rites.  He would no doubt have witnessed the rites if he attended the major festival of Cybele, and would have been aware of this had he been initiated into one of the various mystery religions.  While Anacharsis is referred to in the masculine throughout the account, it comes to us via those horrified by the Gallae.  Fearing the re-introduction of Goddess worship to the Scythians, who had just recently separated into two camps, Anacharsis’ brother, King Saulius, murdered her.  Centuries later, Clement of Alexandria wrote of it:

“Blessings be upon the Scythian king.  When a countryman of his own (his brother) was imitating among the Scythians the rite of the Mother of the Gods as practiced at Cyzicus, by beating the drum and clanging the cymbal, and by having images of the Goddess suspended from his neck after the manner of a priest of Cybele, this king (Saulius) slew him (Anacharsis) with an arrow, on the ground that the man, having been deprived of his own virility in Greece, was now communicating the effeminate disease to his fellow Scythians.”

A few words about hierodules are in order.  These priestesses were both transsexual and non-transsexual women.  Often, during the festivals of almost all these various aspects of Her, women who hadn’t dedicated their lives to Her would also take part, and children conceived at these times were considered special gifts of the Goddess.  Because transsexual women could not become pregnant, they were held in special regard, and sacred sexual relations with them, which were indeed viewed as a sacred rite and not some wild orgy, brought the partner into an even more sacred state.  Today we’ve lost the connection of the sacred with sex because of the repressive nature of Judaeo-Christian traditions towards anything pleasurable.  Tantric sexual worship is still practiced today in India.  Now let us talk of the Mother of the Gods Herself, Cybele, Her consort son/daughter Attis, and Her Gallae.




PRIESTS OF
THE GODDESS

Gender Transgression in Ancient Religion

“The radically ‘de-oedipalized’ body of the priest of the goddess, in ways mysterious to us, is bound up with techniques of ecstasy no less historically tenacious than the weight brought to bear against it by the patriarchal Judeo-Christian tradition. The defiant presence of this figure in the midst of prevailing phallocentrism remains striking and unexpected. Today, long after the last temple of Cybele fell into ruin, we are discovering that the boundaries of gender are no less friable, and that the human body, which has been so deeply inscribed with the cultural construction of its meaning as to seem for all purposes what it is represented to be—natural and fixed—may yet be reinscribed with other meanings and other constructions.”

Goddess of Catal Hüyük
The earliest known depiction of a goddess with the attributes later associated with Cybele—seated in a throne and flanked by lions. From the early Neolithic site at Catal Hüyük, 5th millennium BCE
The following is an excerpt from the introduction and conclusion of my article on priests of the goddess in the Old World, from the devotees of Inanna/Ishtar in Sumer, Assyria, and Babylonia to the followers of Cybele and Attis in Roman times and the hijra of contemporary and ancient India. The full article with notes can be found in History of Religions 35(3) (1996): 295-330.

Phrygia
Phrygia, in modern-day Turkey, was the homeland of Cybele and Attis, and their galli priests.
In the competition between Christian and pagan in the ancient world neither side hesitated to broadcast the most outrageous and shocking accusations against its opponents in the most inflammatory rhetoric it could muster. “In their very temples,” wrote Firmicus Maternus in the mid-fourth century, “can be seen deplorable mockery before a moaning crowd, men taking the part of women, revealing with boastful ostentation this ignominy of impure and unchaste bodies (impuri et impudici). They broadcast their crimes and confess with superlative delight the stain of their polluted bodies (contaminati corporis)” (De errore profanarum religionum 4.2). These infamous men, with their impure, unchaste, polluted bodies, were none other than the galli, priests of the gods Cybele and Attis, whose mystery religion constituted one of early Christianity’s major rivals. Time and again, Christian apologists cited the galli as representative of all they abhorred in pagan culture and religion. And of all the outrages of the galli, none horrified them more than the radical manner in which they transgressed the boundaries of gender.
“They wear effeminately nursed hair,” continued Firmicus Maternus, “and dress in soft clothes. They can barely hold their heads up on their limp necks. Then, having made themselves alien to masculinity, swept up by playing flutes, they call their Goddess to fill them with an unholy spirit so as to seemingly predict the future to idle men. What sort of monstrous and unnatural thing is this?” A century later, Saint Augustine found the galli no less shocking: “Even till yesterday, with dripping hair and painted faces, with flowing limbs and feminine walk, they passed through the streets and alleys of Carthage, exacting from merchants that by which they might shamefully live” (De civitate Dei 7.26).

Malibu Cybele
Marble statue of Cybele as she was typically depicted in the Roman era, 50-60 CE

Inanna/Ishtar
The Mesopotamian goddess Inanna/Ishtar. In mythological accounts, Inanna is rescued from the underworld by two beings described as “neither male nor female.” Various classes of priests in Sumerian and Assyrian religion occupied alternative gender roles distinct from those of men and women.

It would be easy to dismiss the numerous references to galli in ancient literature, both Christian and pagan, as exoticisms equivalent to today’s fascination with gender transgression as evidenced by such films as M. Butterfly and The Crying Game. Unlike the modern figure of the transvestite, however, galli were part of an official Roman state religion with manifestations in every part of the Greco-Roman world and at every level of society. One finds the Roman elite worshiping Cybele with bloody animal sacrifices officiated by state-appointed archigalli; common freedmen and plebeians forming fraternal associations, such as the dendrophori and canophori, to perform various roles in her annual festivals; and the poor and slaves swept up by the frenzy of her rites, often to the consternation and alarm of their social superiors.
It is the widespread dispersion and great historical depth of the Cybele and Attis cult, as well as its appeal to multiple levels of ancient Mediterranean societies, that make its study fascinating on its own, not to mention its relevance to current debates concerning the social construction of sexuality and gender. The galli become even more interesting, however, when placed next to evidence of similar patterns of religious gender transgression from the Near East and south Asia, which suggests that goddess-oriented cults and priests are part of an ancient cultural legacy of the broad world-historical region Marshall Hodgson referred to as the “Oikoumene.”
In the discussion that follows, I will focus on three of the better-documented cases of goddess-centered priesthoods: the Greco-Roman galli, the priests of the goddess called Inanna in Sumeria and Ishtar in Akkad, and the hijra of contemporary India and Pakistan. The parallels between these priesthoods and the social roles and identities of their personnel are detailed and striking. Without ruling out dispersion as a factor, I will argue that these priesthoods are largely independent inventions whose shared features reflect commonalties in the social dynamics of the societies in which they arose, specifically, the agrarian city-state. The presence of goddess-centered priesthoods in the regions where the urban lifestyle first developed raises unexpected and challenging questions concerning the role of gender diversity in the origins of civilization….

Galli: tertium sexus
[See full article]
Hijra: neither man nor woman
[See full article]
Gala et al.: penis+anus
[See full article]

Social origins and social meanings of gender transgression
At the time of the birth of Christ, cults of men devoted to a goddess flourished throughout the broad region extending from the Mediterranean to south Asia. While galli were missionizing the Roman Empire, kalû, kurgarrû, and assinnu continued to carry out ancient rites in the temples of Mesopotamia, and the third-gender predecessors of the hijra were clearly evident. To complete the picture we should also mention the eunuch priests of Artemis at Ephesus; the western Semitic qedeshim, the male “temple prostitutes” known from the Hebrew Bible and Ugaritic texts of the late second millennium; and the keleb, priests of Astarte at Kition and elsewhere. Beyond India, modern ethnographic literature documents gender variant shaman-priests throughout southeast Asia, Borneo, and Sulawesi. All these roles share the traits of devotion to a goddess, gender transgression and homosexuality, ecstatic ritual techniques (for healing, in the case of galli and Mesopotamian priests, and fertility in the case of hijra), and actual (or symbolic) castration. Most, at some point in their history, were based in temples and, therefore, part of the religious-economic administration of their respective city-states.
The goddesses who stand at the head of these cults—Cybele, Bahuchara Mata, and Inanna/Ishtar—also share important traits. All three are credited with the power to inspire divine madness, which can include the transformation of gender. Their mythologies clearly place them outside the patriarchal domestic sphere: Cybele roams the mountains with her wild devotees; Inanna/Ishtar is the patron of the battlefield; and Bahuchara Mata becomes deified while on a journey between cities (see synopses of myths in table 2). Indeed, all three transgress patriarchal roles and structures just as much as their male followers: Cybele begets a child out of wedlock, which infuriates her father; Ishtar, the goddess of sexuality, is notoriously promiscuous, never marries, and, indeed, is herself a transvestite; and Bahuchara Mata, at the other extreme, cuts off her breasts in an act of asceticism to avoid unwanted heterosexual contact. The influence of these goddesses over human affairs is often as destructive as it is beneficial. “To destroy, to build up, to tear out and to settle are yours, Inanna,” reads one Sumerian text, and in the next line, “To turn a man into a woman and a woman into a man are yours, Inanna.” Despite the common reference to these goddesses as “mother” by their worshippers, there is much in their nature that exceeds and confounds our present-day connotations of the maternal.
How can we account for such a consistent pattern over such a broad area and time span? Without ruling out diffusion as a factor (the spread of Cybele and Attis was due in part to missionizing by galli themselves, and the influence of Mesopotamian religion certainly reached Syria and Anatolia), simple cultural exchange nonetheless seems the least likely explanation. A more promising approach would be to address three interrelated questions: What were the belief systems of the societies in which these priesthoods existed, in particular, beliefs concerning sex, gender, and sexuality? What was the nature of the social systems in which these roles originated? What was the source of their long-term popular appeal?
The eclectic approach implied by these questions (encompassing cultural, social and psychological analysis) is key to understanding cultural phenomena as social constructions. When we refuse to regard femininity, masculinity, heterosexuality, homosexuality, and social inequality in general as precultural givens, we necessarily make our task as historians and social theorists more complicated, for cultural facts are always multiply determined and their explication requires analysis of the social wholes in which they occur. The goal must be a unified analysis, one that integrates the synchronic viewpoint of culture afforded by anthropology with the diachronic perspective of historical study. In the case of the ancient priesthoods of the goddess described here, such an approach reveals their roles to be not accidental but, indeed, consistent features of the societies in which they flourished.
To begin with, in all three cultural regions, goddess-inspired priests were conceptualized as occupying a distinct gender category. As we have seen, hijra routinely refer to themselves as “neither men nor women,” consistent with the ancient Sanskrit designation trtiya prakrti. (The galli, as we saw, were also described as a tertium genus.) Similarly, the Sumerian myth called “The Creation of Man” (ca. 2000 B.C.E.) relates how Ninmah fashioned seven types of physically challenged persons, including “the woman who cannot give birth” and “the one who has no male organ, no female organ.” Enki finds each one an occupation and position in society: the sexless one “stands before the king,” while the barren woman is the prototype for the naditum priestesses. These proceedings are echoed in the Akkadian myth of Atrahasis (Atra-hasis) (ca. 1700 B.C.E.), where Enki instructs the Lady of Birth (Nintu) to establish a “third (category) among the people,” which includes barren women, a demon who seizes babies from their mothers, and priestesses who are barred from childbearing (3.7.1).

Kybele
The worship of Cybele was originally centered in Phrygia (central Turkey), where she was known as Kubaba or Kybele. The Romans formally adopted her worship in 204 BCE, when they brought a statue representing her from her main shrine in the Phrygian city of Pergamum back to Rome. This statue, from a site in Anatolia dating to the eighth or early seventh century BCE, depicts the Phrygian goddess with two youthful attendants playing a flute and harp.

Attis
Attis, the Phrygian shepherd, whose worship became part of the cult of Cybele. In varying mythological accounts, Attis is killed or is driven insane and castrates himself, as the result of jealousy and passions arising from an ill-fated love affair. Attis’ fate served as the model for the galli priests, who underwent castration to become Cybele’s dedicated and chaste servants.

Gallus
Marble relief of a gallus, or priest of Cybele, with various ritual objects, 2nd century CE

Archigallus
Marble relief depicting an archigallus, or head galli priest, 3rd century CE

Cybele Temple
Remains of the temple of Cybele on the Palatine Hill in Rome

Hijra
Contemporary hijra in India

Clearly, the underlying conceptualization of gender implied by these taxonomies is at variance with the idea that physical sex is fixed, marked by genitalia, and binary. Recent reviews of Greek and Roman medical texts, for example, reveal a notion of gender as grounded in physiology, but the physiology involved is inherently unstable. Masculinity and femininity depend on relative levels of heat and cold in the body (and, secondarily, moisture and dryness). These factors determine the sex of developing fetuses, but even after birth an individual’s gender status was subject to fluctuations in bodily heat. If men were not at risk of literally becoming females, they were in danger of being feminized by any number of causes. A similar hydraulic construction of the body, as Wendy Doniger has termed it, is evident in Hindu belief as well.
The frequent references to priests of the goddess as “eunuchs” or “impotent” males point to another important commonality in the ancient construction of male and female genders. A little-known episode in Roman legal history is especially revealing in this regard. In 77 B.C.E., a slave named Genucius, a priest of Cybele, attempted to take possession of goods left him in a will by a freedman, but this was disallowed by the authorities on the grounds that he had voluntarily mutilated himself (amputatis sui ipsius) and could not be counted among either women or men (neque virorum neque mulierum numero) (Valerius Maximus, 7.7.6). Presumably, only women and men qualified to exercise inheritance rights, and this privilege of their gender identity was, in turn, a function of their ability to reproduce. This seemingly minor case nonetheless underscores the way in which gender identity and citizenship were linked in societies of the Oikoumene region, that is, in patriarchal, agrarian city-states. Gender, to borrow Judith Butler’s terminology, was performative, or rather, to be even more specific, productive. Gender identity hinged not on the degree of one’s masculinity or femininity, the direction of one’s sexual orientation, nor even one’s role in the gendered division of labor but on one’s ability to produce children, in particular males. In a patrilineal kinship system, it is the labor of male children on which the paterfamilias has the greatest claim. As anthropological research has shown, peasants around the world typically seek to improve their lot in life by having more children and thereby increasing the supply of labor for family-based production. Having male children is the central imperative of gender, as a social category, a role, and a personal identity in most patriarchal agrarian societies.
From this perspective, males or females who are unable to reproduce, who are impotent, whether for physiological or psychological reasons, or who lack or forswear heterosexual desire, including those who desire the same sex, all fail to qualify for adult male or female gender identity. Being neither, they tend instead to be categorized together as members of an alternative gender or of subdivisions of male and female genders. Like male and female, these roles are also attributed specific traits, skills, and occupations. In the same way that men’s activities are “male” and women’s are “female,” what galli, hijra or gala do comes to be seen as intrinsic to their alternative gender identities. At the same time, the distinctions

There was just a small news announcement on the radio in early July:
after a short heat wave, three inmates of the Vacaville Medical Facility
had died in non-air-conditioned cells. Two of those prisoners, the
announcement said, may have died as a result of medical treatment. No
media inquiries were made, and no major news stories developed because
of these deaths.

But what was the medical treatment that may have caused their deaths?
The Medical Facility indicates they were mind control or behavior
modification treatments. A deeper probe into the deaths of these two
inmates unravels a mind-boggling tale of horror that has been part of
California penal history for a long time, and one that caused
national outcries two decades ago.

Mind control experiments have been part of California for decades and
permeate mental institutions and prisons. But it is not just in the
penal system that mind control measures have been used. Minority
children were subjected to experimentation at abandoned Nike missile
sites, and veterans who fought for American freedom were also subjected
to the programs. Funding and experimentation in mind control have
been part of the U.S. Health, Education and Welfare Department, the
Department of Veterans Affairs, the Central Intelligence Agency
through the Phoenix Program, the Stanford Research Institute, the
Agency for International Development, the Department of Defense, the
Department of Labor, the National Institute of Mental Health, the Law
Enforcement Assistance Administration, and the National Science
Foundation.

California has been in the forefront of mind control experimentation.
Government experiments also were conducted in the Haight-Ashbury
District in San Francisco at the height of the Hippy reign. In 1974,
Senator Sam Ervin, of Watergate fame, headed a U.S. Senate
Subcommittee on Constitutional Rights studying the subject
of “Individual rights and the Federal role in behavior modification.”
Though little publicity was given to this committee’s investigation,
Senator Ervin issued a strong condemnation of the federal role in
mind control. That condemnation, however, did not halt mind control
experiments; they just received more circuitous funding.

Many of the case histories concerning individuals on whom the mind
control experiments were used show a strange concept in the minds of
those seeking guinea pigs. Those subjected to the mind control
experiments would be given indefinite sentences; their freedom was
dependent upon how well the experiment went. One individual, for
example, was arrested for joyriding, given a two-year sentence and
held for mind control experiments. He was held for 18 years.

Here are just a few experiments used in the mind control program:

A naked inmate is strapped down on a board. His wrists and ankles are
cuffed to the board and his head is rigidly held in place by a strap
around his neck and a helmet on his head. He is left in a darkened
cell, unable to remove his body wastes. When a meal is delivered, one
wrist is unlocked so he can feel around in the dark for his food
and attempt to pour liquid down his throat without being able to lift
his head.

Another experiment uses a muscle relaxant. Within 30 to 40 seconds
paralysis begins to invade the small muscles of the fingers, toes,
and eyes, and then the intercostal muscles and diaphragm. The heart
slows down to about 60 beats per minute. This condition, together
with respiratory arrest, sets in for as long as two to five minutes
before the drug begins to wear off. The individual remains fully
conscious and is gasping for breath. It is “likened to dying, it is
almost like drowning,” the experiment’s description states.

Another drug induces vomiting and was administered to prisoners who
didn’t get up on time or were caught swearing or lying, or even not
greeting their guards formally. The treatment brings about
uncontrolled vomiting that lasts from 15 minutes to an hour,
accompanied by a temporary cardiovascular effect involving changes
in the blood pressure.

Another drug creates body rigidity, aching restlessness,
blurred vision, severe muscular pain, trembling and fogged cognition.
The Department of Health, Education and Welfare and the U.S. Army
have admitted mind control experiments. Many deaths have occurred.

In tracing the steps of government mind control experiments, the
trail leads to legal and illegal usages, usage for covert
intelligence operations, and experiments on innocent people who were
unaware that they were being used.

----------------------------------------------------------------------

Second in a Series
By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

EDITOR’S NOTE: The Sentinel commenced a series on mind control in
early August and suspended it until September because of the
extensive research required after additional information was
received.
In July, two inmates died at the Vacaville Medical Facility.
According to prison officials at the time, the two may have died as a
result of medical treatment; that treatment was the use of mind
control or behavior modification drugs. A deeper study into the
deaths of the two inmates has unraveled a mind-boggling tale of
horror that has been part of California penal history for a long
time, and one that caused national outcries years ago.

In the August article, the Sentinel presented a graphic portrait of
some of the mind control experiments that have been allowed to
continue in the United States. In November 1974 a U.S. Senate
Subcommittee on Constitutional Rights investigated federally funded
behavior modification programs, with emphasis on federal involvement
in behavior modification and the possible threat it posed to the
constitutional rights of individuals, especially inmates in prisons
and mental institutions.

The Senate committee was appalled after reviewing documents from the
following sources:

Neuro-Research Foundation’s study entitled The Medical Epidemiology
of Criminals.

The Center for the Study and Reduction of Violence from UCLA.

The closed adolescent treatment center.

A national uproar was created by various articles in 1974, which
prompted the Senate investigation. But after all these years, the
news that two inmates at Vacaville may have died from these same
experiments indicates that though a nation was shocked in 1974,
little was done to stop the experimentation. In 1977, a Senate
subcommittee on Health and Scientific Research, chaired by Senator
Ted Kennedy, focused on the CIA’s testing of LSD on unwitting
citizens. Only a handful of people within the CIA knew about the
scope and details of the program.

To understand the full scope of the problem, it is important to study
its origins. The Kennedy subcommittee learned about the CIA Operation
M.K.-Ultra through the testimony of Dr. Sidney Gottlieb. The purpose
of the program, according to his testimony, was to “investigate
whether and how it was possible to modify an individual’s behavior by
covert means”. Claiming the protection of the National Security Act,
Dr. Gottlieb was unwilling to tell the Senate subcommittee what had
been learned or gained by these experiments.

He did state, however, that the program was initially engendered by a
concern that the Soviets and other enemies of the United States would
get ahead of the U.S. in this field. Through the Freedom of
Information Act, researchers are now able to obtain documents
detailing the M.K.-Ultra program and other CIA behavior modification
projects in a special reading room located on the bottom floor of the
Hyatt Regency in Rosslyn, VA.

The most daring phase of the M.K.-Ultra program involved slipping
unwitting American citizens LSD in real life situations. The idea for
the series of experiments originated in November 1941 with William
Donovan, founder and director of the Office of Strategic Services
(OSS), the forerunner of the CIA during World War Two. At that time
the intelligence agency invested $5000 in the “truth drug” program.
Experiments with scopolamine and morphine proved both unfruitful and
very dangerous. The program tested scores of other drugs, including
mescaline, barbiturates, Benzedrine and cannabis indica, to name a few.

The U.S. was highly concerned over the heavy losses of freighters and
other ships in the North Atlantic, all victims of German U-boats.
Information about German U-boat strategy was desperately needed and
it was believed that the information could be obtained through drug-
influenced interrogations of German naval P.O.W.s, in violation of
the Geneva Accords.

Tetrahydrocannabinol acetate, a colorless, odorless marijuana
extract, was used to lace a cigarette or food substance without
detection. Initially, the experiments were done on volunteer U.S.
Army and OSS personnel, and testing was also disguised as a remedy
for shell shock. The volunteers became known as “Donovan’s Dreamers”.
The experiments were so hush-hush, that only a few top officials knew
about them. President Franklin Roosevelt was aware of the
experiments. The “truth drug” achieved mixed success.

The experiments were halted when a memo was written: “The drug defies
all but the most expert and searching analysis, and for all practical
purposes can be considered beyond analysis.” The OSS did not,
however, halt the program. In 1943 field tests of the extract were
being conducted, despite the order to halt them. The most celebrated
test was conducted by Captain George Hunter White, an OSS agent and
ex-law enforcement official, on August Del Grazio, aka Augie Dallas,
aka Dell, aka Little Augie, a New York gangster. Cigarettes laced
with the acetate were offered to Augie without his knowledge of the
content. Augie, who had served time in prison for assault and murder,
had been one of the world’s most notorious drug dealers and
smugglers. He operated an opium alkaloid factory in Turkey and he was
a leader in the Italian underworld on the Lower East Side of New
York. Under the influence of the drug, Augie revealed volumes of
information about the underworld operations, including the names of
high-ranking officials who took bribes from the mob. These
results encouraged Donovan. A new memo was
issued: “Cigarette experiments indicated that we had a mechanism
which offered promise in relaxing prisoners to be interrogated.”

When the OSS was disbanded after the war, Captain White continued to
administer behavior modifying drugs. In 1947, the CIA replaced the
OSS. White’s service record indicates that he worked with the OSS,
and by 1954 he was a high ranking Federal Narcotics Bureau officer
who had been loaned to the CIA on a part-time basis.

White rented an apartment in Greenwich Village equipped with one-way
mirrors and surveillance gadgets, and disguised himself as a seaman.
White drugged his acquaintances with LSD and brought them back to his
apartment. In 1955, the operation shifted to San Francisco. In San
Francisco, “safehouses” were established under the code name
Operation Midnight Climax. Midnight Climax hired prostitute addicts
who lured men from bars back to the safehouses after their drinks had
been spiked with LSD. White filmed the events in the safehouses. The
purpose of these “national security brothels” was to enable the CIA
to experiment with the act of lovemaking for extracting information
from men. The safehouse experiments continued until 1963, when CIA
Inspector General John Earman criticized Richard Helms, a principal
architect of the M.K.-Ultra project who later became director of the
CIA. Earman charged that the new director, John McCone, had not been
fully briefed on the M.K.-Ultra Project when he took office and
that “the concepts involved in manipulating human behavior are found
by many people within and outside the Agency to be distasteful and
unethical.” He stated that “the rights and interest of U.S. citizens
are placed in jeopardy”. The Inspector General stated that LSD had
been tested on “individuals at all social levels, high and low,
native American and foreign.”

Earman’s criticisms were rebuffed by Helms, who warned, “Positive
operation capacity to use drugs is diminishing owing to a lack of
realistic testing. Tests were necessary to keep up with the Soviets.”
But in 1964, Helms had testified before the Warren Commission
investigating the assassination of President John Kennedy,
that “Soviet research has consistently lagged five years behind
Western research”.

Upon leaving government service in 1966, Captain White wrote a
startling letter to his superior. In the letter to Dr. Gottlieb,
Captain White reminisced about his work in the safehouses with LSD.
His comments were frightening. “I was a very minor missionary,
actually a heretic, but I toiled wholeheartedly in the vineyards
because it was fun, fun, fun,” White wrote. “Where else could a red-
blooded American boy lie, kill, cheat, steal, rape and pillage with
the sanction and blessing of the all-highest?”

(NEXT: How the drug experiments helped bring about the rebirth of the
mafia and the French Connection.)

----------------------------------------------------------------------

Part Three in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

Though the CIA continued to maintain drug experiments in the streets
of America after the program was officially cancelled, the United
States reaped tremendous value from it. With George Hunter White’s
connection to underworld figure Little Augie, connections were made
with Mafia king-pin Lucky Luciano, who was in Dannemora Prison.

Luciano wanted freedom, the Mafia wanted drugs, and the United States
wanted Sicily. The date was 1943. Augie was the go-between for
Luciano and the United States War Department.

Luciano was transferred to a less harsh prison and began to be
visited by representatives of the Office of Naval Intelligence and
from underworld figures, such as Meyer Lansky. A strange alliance was
formed between the U.S. Intelligence agencies and the Mafia, who
controlled the West Side docks in New York. Luciano regained active
leadership in organized crime in America.

The U.S. Intelligence community utilized Luciano’s underworld
connections in Italy. In July of 1943, Allied forces launched their
invasion of Sicily, the beginning push into occupied Europe. General
George Patton’s Seventh Army advanced through hundreds of miles of
territory that was fraught with difficulty: booby-trapped roads,
snipers, confusing mountain topography, all within close range of
60,000 hostile Italian troops. All this was accomplished in four
days, a military “miracle” even for Patton.

Senator Estes Kefauver’s Senate Subcommittee on Organized Crime
asked, in 1951, how all this was possible. The answer was that the
Mafia had helped to protect roads from Italian snipers, served as
guides through treacherous mountain terrain, and provided needed
intelligence to Patton’s army. The part of Sicily which Patton’s
forces traversed had at one time been completely controlled by the
Sicilian Mafia, until Benito Mussolini smashed it through the use of
police repression.

Just prior to the invasion, it was hardly even able to continue
shaking down farmers and shepherds for protection money. But the
invasion changed all this, and the Mafia went on to play a very
prominent and well-documented role in the American military
occupation of Italy.

The expedience of war opened the doors to American drug traffic and
Mafia domination. This was the beginning of the Mafia-U.S.
Intelligence alliance, an alliance that lasts to this day and helped
to support the covert operations of the CIA, such as the Iran-Contra
operations. In these covert operations, the CIA would obtain drugs
from South America and Southeast Asia, sell them to the Mafia and use
the money for the covert purchase of military equipment. These
operations accelerated when Congress cut off military funding for the
Contras.

One of the Allies’ top occupation priorities was to liberate as many
of their own soldiers from garrison duties so that they could
participate in the military offensive. In order to accomplish this,
Don Calogero’s Mafia were pressed into service, and in July of 1943,
the Civil Affairs Control Office of the U.S. Army appointed him mayor
of Villalba and other Mafia officials as mayors of other towns in
Sicily.

As the northern Italian offensive continued, Allied intelligence
became very concerned over the extent to which the Italian Communist
resistance to Mussolini had driven Italian politics to the left.
Communist Party membership had doubled between 1943 and 1944, huge
leftist strikes had shut down factories and the Italian underground
fighting Mussolini had risen to almost 150,000 men. By mid-1944, the
situation came to a head and the U.S. Army terminated arms drops to
the Italian Resistance, and started appointing Mafia officials to
occupation administration posts. Mafia groups broke up leftist
rallies and reactivated black market operations throughout southern
Italy.

Lucky Luciano was released from prison in 1946 and deported to Italy,
where he rebuilt the heroin trade. The court’s decision to release
him was made possible by the testimony of intelligence agents at his
hearing, and a letter written by a naval officer reciting what
Luciano had done for the Navy. Luciano was supposed to have served
from 30 to 50 years in prison. Over 100 Mafia members were similarly
deported within a couple of years.

Luciano set up a syndicate which transported morphine base from the
Middle East to Europe, refined it into heroin, and then shipped it
into the United States via Cuba. During the 1950’s, Marseilles, in
Southern France, became a major city for the heroin labs, and the
Corsican syndicate began to actively cooperate with the Mafia in the
heroin trade. This network became popularly known as the French
Connection.

In 1948, Captain White visited Luciano and his narcotics associate
Nick Gentile in Europe. Gentile was a former American gangster who
had worked for the Allied Military Government in Sicily. By this
time, the CIA was already subsidizing Corsican and Italian gangsters
to oust Communist unions from the Port of Marseilles. American
strategic planners saw Italy and southern France as extremely
important for their Naval bases as a counterbalance to the growing
naval forces of the Soviet Union. CIO/AFL organizer Irving Brown
testified that by the time the CIA subsidies were terminated in 1953,
U.S. support was no longer needed because the profits from the heroin
traffic were sufficient to sustain operations.

When Luciano was originally jailed, the U.S. felt it had eliminated
the world’s most effective underworld leader and the activities of
the Mafia were seriously damaged. Mussolini had been waging a war
since 1924 to rid the world of the Sicilian Mafia. Thousands of Mafia
members were convicted of crimes and forced to leave the cities and
hide out in the mountains.

Mussolini’s reign of terror had virtually eradicated the
international drug syndicates. Combined with the shipping
surveillance during the war years, heroin trafficking had become
almost nil. Drug use in the United States, before Luciano’s release
from prison, was on the verge of being entirely wiped out.

----------------------------------------------------------------------

Part Four in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

The U.S. government has conducted three types of mind-control
experiments:

Real life experiences, such as those used on Little Augie and the LSD
experiments in the safehouses of San Francisco and Greenwich Village.

Experiments on prisoners, such as in the California Medical Facility
at Vacaville.

Experiments conducted in both mental hospitals and the Veterans
Administration hospitals.

Such experimentation requires money, and the United States government
has funnelled funds for drug experiments through different agencies,
both overtly and covertly.

One of the funding agencies to contribute to the experimentation is
the Law Enforcement Assistance Administration (LEAA), a unit of the
U.S. Justice Department and one of President Richard Nixon’s favorite
pet agencies. The Nixon Administration was, at one time, putting
together a program for detaining youngsters who showed a tendency
toward violence in “concentration” camps. According to the Washington
Post, the plan was authored by Dr. Arnold Hutschnecker. Health,
Education and Welfare Secretary Robert Finch was told by John
Ehrlichman, a senior aide in the Nixon White House, to implement the
program. The plan proposed the screening of children of six years of
age for tendencies toward criminality. Those who failed these tests
were to be sent to the camps. The program was never
implemented.

LEAA came into existence in 1968 with a huge budget to assist various
U.S. law enforcement agencies. Its effectiveness, however, was not
considered too great. After LEAA spent $6 billion, the F.B.I. reported
that general crime rose 31 percent and violent crime rose 50 percent.
But
little accountability was required of LEAA on how it spent its funds.

LEAA’s role in the behavior modification research began at a meeting
held in 1970 in Colorado Springs. Attending that meeting were Richard
Nixon, Attorney General John Mitchell, John Ehrlichman, H.R. Haldeman
and other White House staffers. They met with Dr. Bertram Brown,
director of the National Institute of Mental Health, and forged a
close collaboration between LEAA and the Institute. LEAA was a
product of the Justice Department and the Institute was a product of
HEW.

LEAA funded 350 projects involving medical procedures, behavior
modification and drugs for delinquency control. Money from the
Criminal Justice System was being used to fund mental health projects
and vice versa. Eventually, the leadership responsibility and control
of the Institute began to deteriorate and their scientists began to
answer to LEAA alone.

The National Institute of Mental Health went on to become one of the
greatest supporters of behavior modification research. Throughout the
1960’s, court calendars became blighted with lawsuits on the part
of “human guinea pigs” who had been experimented upon in prisons and
mental institutions. It was these lawsuits which triggered the Senate
Subcommittee on Constitutional Rights investigation, headed by
Senator Sam Ervin. The subcommittee’s harrowing report was virtually
ignored by the news media.

Thirteen behavior modification programs were conducted by the
Department of Defense. The Department of Labor had also conducted
several experiments, as well as the National Science Foundation. The
Veterans’ Administration was also deeply involved in behavior
modification and mind control. Each of these agencies, including
LEAA and the Institute, was named in secret CIA documents as having
provided research cover for the MK-ULTRA program.

Eventually, LEAA was using much of its budget to fund experiments,
including aversive techniques and psychosurgery, which involved, in
some cases, irreversible brain surgery on normal brain tissue for the
purpose of changing or controlling behavior and/or emotions.

Senator Ervin questioned the head of LEAA concerning ethical
standards of the behavior modification projects which LEAA had been
funding. Ervin was extremely dubious about the idea of the government
spending money on this kind of project without strict guidelines and
reasonable research supervision in order to protect the human
subjects. After Senator Ervin’s denunciation of the funding policies,
LEAA announced that it would no longer fund medical research into
behavior modification and psychosurgery. Despite the pledge by LEAA’s
director, Donald E. Santarelli, LEAA ended up funding 537 research
projects dealing with behavior modification. There is strong evidence
to indicate psychosurgery was still being used in prisons in the
1980’s. Immediately after the funding announcement by LEAA, there
were 50 psychosurgical operations at Atmore State Prison in Alabama.
The inmates became virtual zombies. The operations, according to Dr.
Swan of Fisk University, were done on black prisoners who were
considered politically active.

The Veterans’ Administration openly admitted that psychosurgery was a
standard procedure for treatment and not used just in experiments.
The VA Hospitals in Durham, Long Beach, New York, Syracuse and
Minneapolis were known to employ these procedures on a regular basis.
VA clients could typically be subjected to these behavior alteration
procedures against their will. The Ervin subcommittee concluded that
the rights of VA clients had been violated.

LEAA also subsidized the research and development of gadgets and
techniques useful to behavior modification. Much of the technology,
whose perfection LEAA funded, had originally been developed and made
operational for use in the Vietnam War. Companies like Bangor Punta
Corporation and Walter Kidde and Co., through its subsidiary Globe
Security System, adapted these devices to domestic use in the U.S.
ITT was another company that domesticated the warfare technology for
potential use on U.S. citizens. Rand Corporation executive Paul Baran
warned that the influx back to the United States of the Vietnam War
surveillance gadgets alone, not to mention the behavior modification
hardware, could bring about “the most effective, oppressive police
state ever created”.

----------------------------------------------------------------------

Fifth in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

One of the fascinating aspects of the scandals that plague the U.S.
Government is the fact that so often the same names appear from
scandal to scandal. From the origins of Ronald Reagan’s political
career, as Governor of California, Dr. Earl Brian and Edwin Meese
played key advisory roles.

Dr. Brian’s name has been linked to the October Surprise and is a
central figure in the government’s theft of PROMIS software from
INSLAW. Brian’s role touches from the Cabazon Indian scandals to
United Press International. He is one of those low-profile key
figures.

And, alas, his name appears again in the nation’s behavior
modification and mind control experiments. Dr. Brian was Reagan’s
Secretary of Health when Reagan was Governor. Dr. Brian was an
advocate of state subsidies for a research center for the study of
violent behavior. The center was to begin operations by mid-1975, and
its research was intended to shed light on why people murder or rape,
or hijack aircraft. The center was to be operated by the University
of California at Los Angeles, and its primary purpose, according to
Dr. Brian, was to unify scattered studies on anti-social violence and
possibly even touch on socially tolerated violence, such as football
or war. Dr. Brian sought $1.3 million for the center.

It certainly was possible that prison inmates might be used as
volunteer subjects at the center to discover the unknowns which
triggered their violent behavior. Dr. Brian’s quest for the center
came at the same time Governor Reagan concluded his plans to phase
the state of California out of the mental hospital business by 1982.
Reagan’s plan is echoed today by Governor Pete Wilson, who would place
the responsibility of rehabilitating young offenders squarely on the
shoulders of local communities.

But as the proposal became known more publicly, a swell of
controversy surrounded it. It ended in a fiasco. The inspiration for
the violence center came from three doctors in 1967, five years
before Dr. Brian and Governor Reagan unveiled their plans. Amidst
urban rioting and civil protest, Doctors Sweet, Mark and Ervin of
Harvard put forward the thesis that individuals who engage in civil
disobedience possess defective or damaged brain cells. If this
conclusion were applied to the American Revolution or the Women’s
Rights Movement, a good portion of American society would be labeled
as having brain damage.

In a letter to the Journal of the American Medical Association, they
stated: “That poverty, unemployment, slum housing, and inadequate
education underlie the nation’s urban riots is well known, but the
obviousness of these causes may have blinded us to the more subtle
role of other possible factors, including brain dysfunction in the
rioters who engaged in arson, sniping and physical assault.

“There is evidence from several sources that brain dysfunction
related to a focal lesion plays a significant role in the violent and
assaultive behavior of thoroughly studied patients. Individuals with
electroencephalographic abnormalities in the temporal region have
been found to have a much greater frequency of behavioral
abnormalities (such as poor impulse control, assaultiveness, and
psychosis) than is present in people with a normal brain wave
pattern.”

Soon after the publication in the Journal, Dr. Ervin and Dr. Mark
published their book Violence and the Brain, which included the claim
that there were as many as 10 million individuals in the United
States “who suffer from obvious brain disease”. They argued that the
data of their book provided a strong reason for starting a program of
mass screening of Americans.

“Our greatest danger no longer comes from famine or communicable
disease. Our greatest danger lies in ourselves and in our fellow
humans…we need to develop an ‘early warning test’ of limbic brain
function to detect those humans who have a low threshold for
impulsive violence…Violence is a public health problem, and the
major thrust of any program dealing with violence must be toward its
prevention,” they wrote.

The Law Enforcement Assistance Administration funded the doctors
$108,000 and the National Institute of Mental Health kicked in
another $500,000, under pressure from Congress. Critics believed that
psychosurgery would inevitably be performed in connection with the
program, and that, since it irreversibly impaired people’s emotional
and intellectual capacities, it could be used as an instrument of
repression and social control.

The doctors wanted screening centers established throughout the
nation. In California, the publicity associated with the doctors’
report aided in the development of The Center for the Study and
Reduction of Violence. Both the state and LEAA provided the funding.
The center was to serve as a model for future facilities to be set up
throughout the United States.

The Director of the Neuropsychiatric Institute and chairman of the
Department of Psychiatry at UCLA, Dr. Louis Jolyon West was selected
to run the center. Dr. West is alleged to have been a contract agent
for the CIA, who, as part of a network of doctors and scientists,
gathered intelligence on hallucinogenic drugs, including LSD, for the
super-secret MK-ULTRA program. Like Captain White (see part three of
the series), West conducted LSD experiments for the CIA on unwitting
citizens in the safehouses of San Francisco. He achieved notoriety
for his injection of a massive dose of LSD into an elephant at the
Oklahoma Zoo; the elephant died when West tried to revive it by
administering a combination of drugs.

Dr. West was further known as the psychiatrist who was called upon to
examine Jack Ruby, Lee Harvey Oswald’s assassin. It was on the basis
of West’s diagnosis that Ruby was compelled to be treated for mental
disorders and put on happy pills. The West examination was ordered
after Ruby began to say that he was part of a right-wing conspiracy
to kill President John Kennedy. Two years after the commencement of
treatment for mental disorder, Ruby died of cancer in prison.

After January 11, 1973, when Governor Reagan announced plans for the
Violence Center, West wrote a letter to the then Director of Health
for California, J. M. Stubblebine.

“Dear Stub:

“I am in possession of confidential in formation that the Army is
prepared to turn over Nike missile bases to state and local agencies
for non-military purposes. They may look with special favor on health-
related applications.

“Such a Nike missile base is located in the Santa Monica Mountains,
within a half-hour’s drive of the Neuropsychiatric Institute. It is
accessible, but relatively remote. The site is securely fenced, and
includes various buildings and improvements, making it suitable for
prompt occupancy.

“If this site were made available to the Neuropsychiatric Institute
as a research facility, perhaps initially as an adjunct to the new
Center for the Prevention of Violence, we could put it to very good
use. Comparative studies could be carried out there, in an isolated
but convenient location, of experimental or model programs for the
alteration of undesirable behavior.

“Such programs might include control of drug or alcohol abuse,
modification of chronic anti-social or impulsive aggressiveness, etc.
The site could also accommodate conferences or retreats for
instruction of selected groups of mental-health related professionals
and of others (e.g., law enforcement personnel, parole officers,
special educators) for whom both demonstration and participation
would be effective modes of instruction.

“My understanding is that a direct request by the Governor, or other
appropriate officers of the State, to the Secretary of Defense (or,
of course, the President), could be most likely to produce prompt
results.”

Some of the planned areas of study for the Center included:

Studies of violent individuals.

Experiments on prisoners from Vacaville and Atascadero, and
hyperkinetic children.

Experiments with violence-producing and violence-inhibiting drugs.

Hormonal aspects of passivity and aggressiveness in boys.

Studies to discover and compare norms of violence among various
ethnic groups.

Studies of pre-delinquent children.

It would also encourage law enforcement to keep computer files on pre-
delinquent children, which would make possible the treatment of
children before they became delinquents.

The purpose of the Violence Center was not just research. The staff
was to include sociologists, lawyers, police officers, clergymen and
probation officers. With the backing of Governor Reagan and Dr.
Brian, West had secured guarantees of prisoner volunteers from
several California correctional institutions, including Vacaville.
Vacaville and Atascadero were chosen as the primary sources for the
human guinea pigs. These institutions had established a reputation,
by that time, of committing some of the worst atrocities in West
Coast history. Some of the experiments differed little from what
the Nazis did in the death camps.

(NEXT: What happened to the Center?)

----------------------------------------------------------------------

Sixth in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

Dr. Earl Brian, Governor Ronald Reagan’s Secretary of Health, was
adamant about his support for mind control centers in California. He
felt the behavior modification plan of the Violence Control Centers
was important in the prevention of crime.

The Violence Control Center was actually the brainchild of William
Herrmann as part of a pacification plan for California. A
counterinsurgency expert for Systems Development Corporation and an
advisor to Governor Reagan, Herrmann worked with the Stanford Research
Institute, the RAND Corporation, and the Hoover Center on Violence.
Herrmann was also a CIA agent who is now serving an eight-year prison
sentence for his role in a CIA counterfeiting operation. He was also
directly linked with the Iran-Contra affair according to government
records and Herrmann’s own testimony.

In 1970, Herrmann worked with Colston Westbrook as his CIA control
officer when Westbrook formed and implemented the Black Cultural
Association at the Vacaville Medical Facility, a facility which in
July experienced the death of three inmates who were forcibly
subjected to behavior modification drugs. The Black Cultural
Association was ostensibly an education program designed to instill
black pride identity in prisons, but the Association was really a cover
for an experimental behavior modification pilot project designed to
test the feasibility of programming unstable prisoners to become more
manageable.

Westbrook worked for the CIA in Vietnam as a psychological warfare
expert, and as an advisor to the Korean equivalent of the CIA and for
the Lon Nol regime in Cambodia. Between 1966 and 1969, he was an
advisor to the Vietnamese Police Special Branch under the cover of
working as an employee of Pacific Architects and Engineers.

His “firm” contracted the building of the interrogation/torture
centers in every province of South Vietnam as part of the CIA’s
Phoenix Program. The program was centered around behavior
modification experiments to learn how to extract information from
prisoners of war, a direct violation of the Geneva Accords.

Westbrook’s most prominent client at Vacaville was Donald DeFreeze,
who, between 1967 and 1969, had worked for the Los Angeles Police
Department’s Public Disorder Intelligence unit and later became the
leader of the Symbionese Liberation Army. Many authorities now
believe that the Black Cultural Association at Vacaville was the
seedling of the SLA. Westbrook even designed the SLA logo, the cobra
with seven heads, and gave DeFreeze his African name of Cinque. The
SLA was responsible for the assassination of Marcus Foster,
superintendent of schools in Oakland, and the kidnapping of Patty
Hearst.

As a counterinsurgency consultant for Systems Development
Corporation, a security firm, Herrmann told the Los Angeles Times
that a good computer intelligence system “would separate out the
activist bent on destroying the system” and then develop a master
plan “to win the hearts and minds of the people”. The San Francisco-
based Bay Guardian recently identified Herrmann as an international
arms dealer working with Iran in 1980, and possibly involved in the
October Surprise. Herrmann is in an English prison for
counterfeiting. He allegedly met with Iranian officials to ascertain
whether the Iranians would trade arms for hostages held in Lebanon.

The London Sunday Telegraph confirmed Herrmann’s CIA connections,
tracing them from 1976 to 1986. He also worked for the FBI. This
information was revealed in his London trial.

In the 1970’s, Dr. Brian and Herrmann worked together under Governor
Reagan on the Center for the Study and Reduction of Violence, and
then, a decade later, again worked under Reagan. Both men have been
identified as working for Reagan with the Iranians.

The Violence Center, however, died an agonizing death. Despite the
Ervin Senate Committee’s investigation and castigation of mind control,
the experiments continued. But when the Watergate scandal broke in
the early 1970’s, Washington felt it was too politically risky to
continue to push for mind control centers.

Top doctors began to withdraw from the proposal because they felt
that there were not enough safeguards. Even the Law Enforcement
Assistance Administration, which funded the program, backed out,
stating that the proposal showed “little evidence of established
research ability of the kind of level necessary for a study of this
scope”.

Eventually it became known that control of the Violence Center was
not going to rest with the University of California, but instead with
the Department of Corrections and other law enforcement officials.
This information was released publicly by the Committee Opposed to
Psychiatric Abuse of Prisoners. The disclosure of the letter resulted
in the main backers of the program bowing out and the eventual demise
of the center.

Dr. Brian’s final public statement on the matter was that the
decision to cut off funding represented “a callous disregard for
public safety”. Though the Center was not built, the mind control
experiments continue to this day.

(NEXT: What these torturous drugs do.)

----------------------------------------------------------------------

Seventh in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

The Central Intelligence Agency held two major interests in use of
L.S.D. to alter normal behavior patterns. The first interest centered
around obtaining information from prisoners of war and enemy agents,
in contravention of the Geneva Accords. The second was to determine
the effectiveness of drugs used against the enemy on the battlefield.

The MK-ULTRA program was originally run by a small number of people
within the CIA known as the Technical Services Staff (TSS). Another
CIA department, the Office of Security, also began its own testing
program. Friction arose and then infighting broke out when the Office
of Security commenced to spy on TSS people after it was learned that
LSD was being tested on unwitting Americans.

Not only did the two branches disagree over the issue of testing the
drug on the unwitting, they also disagreed over the issue of how the
drug was actually to be used by the CIA. The Office of Security
envisioned the drug as an interrogation weapon. But the TSS group
thought the drug could be used to help destabilize another country:
it could be slipped into the food or beverage of a public official in
order to make him behave foolishly or oddly in public. One CIA
document reveals that L.S.D. could be administered right before an
official was to make a public speech.

Realizing that gaining information about the drug in real life
situations was crucial to exploiting the drug to its fullest, TSS
started conducting experiments on its own people. There was an
extensive amount of self-experimentation. The Office of Security felt
the TSS group was playing with fire, especially when it was learned
that TSS was prepared to spike the punch at the CIA’s annual office
Christmas party with LSD. L.S.D. could produce
serious insanity for periods of eight to 18 hours and possibly
longer.

One of the “victims” of the punch was agent Frank Olson. Olson had
never taken drugs before, and the L.S.D. took its toll on him. He
reported that every automobile that came by was a terrible monster
with fantastic eyes, out to get him personally. Each time a car passed
he would huddle down against a parapet, terribly frightened. Olson
began to behave erratically. The CIA made preparations to treat Olson
at Chestnut Lodge, but before they could, Olson checked into a New
York hotel and threw himself out of his tenth-story room. The CIA was
ordered to cease all drug testing.

Mind control drugs and experiments were torturous to the victims. One
of three inmates who died in Vacaville Prison in July was scheduled
to appear in court in an attempt to stop forced administration of a
drug, the very drug that may have played a role in his death.

Joseph Cannata believed he was making progress and did not need
forced dosages of the drug Haldol. The Solano County Coroner’s Office
said that Cannata and two other inmates died of hyperthermia,
extremely elevated body temperature. Their body temperatures were all
at least 108 degrees when they died. The psychotropic drugs they
were being forced to take can elevate body temperature.

Dr. Ewen Cameron, working at McGill University in Montreal, used a
variety of experimental techniques, including keeping subjects
unconscious for months at a time, administering huge electroshocks
and continual doses of L.S.D.

Massive lawsuits developed as a result of this testing, and many of
the subjects who suffered trauma had never agreed to participate in
the experiments. Such CIA experiments infringed upon the much-honored
Nuremberg Code concerning medical ethics. Dr. Cameron was one of the
members of the Nuremberg Tribunal.

L.S.D. research was also conducted at the Addiction Research Center
of the U.S. Public Health Service in Lexington, Kentucky. This
institution was one of several used by the CIA. The National
Institute of Mental Health and the U.S. Navy funded this operation.
Vast supplies of L.S.D. and other hallucinogenic drugs were required
to keep the experiments going. Dr. Harris Isbell ran the program. He
was a member of the Food and Drug Administration’s Advisory Committee
on the Abuse of Depressant and Stimulant Drugs. Almost all of the
inmates were black. In many cases, L.S.D. dosage was increased daily
for 75 days.

Some 1500 U.S. soldiers were also victims of drug experimentation.
Some claimed they had agreed to become guinea pigs only through
pressure from their superior officers. Many claimed they suffered
from severe depression and other psychological stress.

One such soldier was Master Sergeant Jim Stanley. L.S.D. was put in
Stanley’s drinking water and he freaked out. Stanley’s hallucinations
continued even after he returned to his regular duties. His service
record suffered, his marriage went on the rocks and he ended up
beating his wife and children. It wasn’t until 17 years later that
Stanley was informed by the military that he had been an L.S.D.
experiment. He sued the government, but the Supreme Court ruled no
soldier could sue the Army for the L.S.D. experiments. Justice
William Brennan disagreed with the Court decision. He
wrote, “Experimentation with unknowing human subjects is morally and
legally unacceptable.”

Private James Thornwell was given L.S.D. in a military test in 1961.
For the next 23 years he lived in a mental fog, eventually drowning
in a Vallejo swimming pool in 1984. Congress had set up a $625,000
trust fund for him. Large scale L.S.D. tests on American soldiers
were conducted at Aberdeen Proving Ground in Maryland, Fort Benning,
Georgia, Fort Leavenworth, Kansas, Dugway Proving Ground, Utah, and
in Europe and the Pacific. The Army conducted a series of L.S.D.
tests at Fort Bragg in North Carolina. The purpose of the tests was
to ascertain how well soldiers could perform their tasks on the
battlefield while under the influence of L.S.D. At Fort McClellan,
Alabama, 200 officers in the Chemical Corps were given L.S.D. in
order to familiarize them with the drug’s effects. At Edgewood
Arsenal, soldiers were given L.S.D. and then confined to sensory
deprivation chambers and later exposed to harsh interrogation
sessions by intelligence people. In these sessions, it was discovered
that soldiers would cooperate if promised they would be allowed to
get off the L.S.D.

In Operation Derby Hat, foreign nationals accused of drug trafficking
were given L.S.D. by the Special Purpose Team, with one subject
begging to be killed in order to end his ordeal. Such experiments
were also conducted in Saigon on Viet Cong POWs. One of the most
potent drugs in the U.S. arsenal is called BZ or quinuclidinyl
benzilate. It is a long-lasting drug and brings on a litany of
psychotic experiences and almost completely isolates any person from
his environment. The main effects of BZ last up to 80 hours compared
to eight hours for L.S.D. Negative after-effects may persist for up
to six weeks.

The BZ experiments were conducted on soldiers at Edgewood Arsenal for
16 years. Many of the “victims” claim that the drug permanently
affected their lives in a negative way. It so disorientated one
paratrooper that he was found taking a shower in his uniform and
smoking a cigar. BZ was eventually put in hand grenades and a 750
pound cluster bomb. Other configurations were made for mortars,
artillery and missiles. The bomb was tested in Vietnam and CIA
documents indicate it was prepared for use by the U.S. in the event
of large-scale civilian uprisings.

In Vacaville, psychosurgery has long been a policy. In one set of
cases, experimental psychosurgery was conducted on three inmates, a
black, a Chicano and a white person. This involved the procedure of
pushing electrodes deep into the brain in order to determine the
position of defective brain cells, and then shooting enough voltage
into the suspected area to kill the defective cells. One prisoner,
who appeared to be improving after surgery, was released on parole,
but ended up back in prison. The second inmate became violent and
there is no information on the third inmate.

Vacaville also administered a “terror drug” Anectine as a way
of “suppressing hazardous behavior”. In small doses, Anectine serves
as a muscle relaxant; in huge doses, it produces prolonged seizure of
the respiratory system and a sensation “worse than dying”. The drug
goes to work within 30 to 40 seconds by paralyzing the small muscles
of the fingers, toes, and eyes, and then moves into the
intercostal muscles and the diaphragm. The heart rate subsides to 60
beats per minute, respiratory arrest sets in and the patient remains
completely conscious throughout the ordeal, which lasts two to five
minutes. The experiments were also conducted at Atascadero.

Several mind altering drugs were originally developed for non-
psychoactive purposes. Some of these drugs are phenothiazines such as
Thorazine. The side effects of these drugs can be a living hell. The
impact includes the feeling of drowsiness, disorientation, shakiness,
dry mouth, blurred vision and an inability to concentrate. Drugs like
Prolixin are described by users as “sheer torture” and “becoming a
zombie”.

Veterans Administration hospitals have been shown by the General
Accounting Office to apply heavy dosages of psychotherapeutic drugs.
One patient was taking eight different drugs, three antipsychotic,
two antianxiety, one antidepressant, one sedative and one anti-
Parkinson. Three of these drugs were being given in dosages equal to
the maximum recommended. Another patient was taking seven different
drugs. One report tells of a patient who refused to take the drug. “I
told them I don’t want the drug to start with, they grabbed me and
strapped me down and gave me a forced intramuscular shot of Prolixin.
They gave me Artane to counteract the Prolixin and they gave me
Sinequan, which is a kind of tranquilizer to make me calm down, which
over calmed me, so rather than letting up on the medication, they
then gave me Ritalin to pep me up.”

Prolixin lasts for two weeks. One patient describes how the drug does
not calm or sedate nerves, but instead attacks from so deep inside
you, you cannot locate the source of the pain. “The drugs turn your
nerves in upon yourself. Against your will, your resistance, your
resolve, are directed at your own tissues, your own muscles,
reflexes, etc..” The patient continues, “The pain grinds into your
fiber, your vision is so blurred you cannot read. You ache with
restlessness, so that you feel you have to walk, to pace. And then as
soon as you start pacing, the opposite occurs to you, you must sit
and rest. Back and forth, up and down, you go in pain you cannot
locate. In such wretched anxiety you are overwhelmed because you
cannot get relief even in breathing.”

----------------------------------------------------------------------

Eighth in a Series

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

October 15, 1991

“We need a program of psychosurgery for political control of our
society. The purpose is physical control of the mind. Everyone who
deviates from the given norm can be surgically mutilated.

“The individual may think that the most important reality is his own
existence, but this is only his personal point of view. This lacks
historical perspective.

“Man does not have the right to develop his own mind. This kind of
liberal orientation has great appeal. We must electrically control
the brain. Some day armies and generals will be controlled by
electric stimulation of the brain.” These were the remarks of Dr.
Jose Delgado as they appeared in the February 24, 1974 edition of the
Congressional Record, No. 26, Vol. 118.

Despite Dr. Delgado’s outlandish statements before Congress, his work
was financed by grants from the Office of Naval Research, the Air
Force Aero-Medical Research Laboratory, and the Public Health
Foundation of Boston.

Dr. Delgado was a pioneer of the technology of Electrical Stimulation
of the Brain (ESB). The New York Times ran an article on May 17, 1965
entitled Matador With a Radio Stops Wild Bull. The story details Dr.
Delgado’s experiments at Yale University School of Medicine and work
in the field at Cordova, Spain. The New York Times stated:

“Afternoon sunlight poured over the high wooden barriers into the
ring, as the brave bull bore down on the unarmed matador, a scientist
who had never faced a fighting bull. But the charging animal’s horn
never reached the man behind the heavy red cape. Moments before that
could happen, Dr. Delgado pressed a button on a small radio
transmitter in his hand and the bull braked to a halt. Then he
pressed another button on the transmitter, and the bull obediently
turned to the right and trotted away. The bull was obeying commands
in his brain that were being called forth by electrical stimulation
by the radio signals to certain regions in which fine wires had been
painlessly planted the day before.”

According to Dr. Delgado, experiments of this type have also been
performed on humans. While giving a lecture on the Brain in 1965, Dr.
Delgado said, “Science has developed a new methodology for the study
and control of cerebral function in animals and humans.”

The late L.L. Vasiliev, professor of physiology at the University of
Leningrad, wrote in a paper about hypnotism: “As a control of the
subject’s condition, when she was outside the laboratory in another
set of experiments, a radio set was used. The results obtained
indicate that the method of using radio signals substantially
enhances the experimental possibilities.” The professor continued to
write, “I.F. Tomaschevsky (a Russian physiologist) carried out the
first experiments with this subject at a distance of one or two
rooms, and under conditions that the participant would not know or
suspect that she would be experimented with. In other cases, the
sender was not in the same house, and someone else observed the
subject’s behavior. Subsequent experiments at considerable distances
were successful. One such experiment was carried out in a park at a
distance. Mental suggestions to go to sleep were complied with within
a minute.”

The Russian experiments in the control of a person’s mind through
hypnosis and radio waves were conducted in the 1930s, some 30 years
before Dr. Delgado’s bull experiment. Dr. Vasiliev definitely
demonstrated that radio transmission can produce stimulation of the
brain. It is not a complex process. In fact, the device need not be
implanted within the skull or directly stimulate the brain
itself. All that is needed to accomplish the radio control of the
brain is a twitching muscle. The subject becomes hypnotized and a
muscle stimulant is implanted. The subject, while still under
hypnosis, is commanded to respond when the muscle stimulant is
activated, in this case by radio transmission.

Lincoln Lawrence wrote a book entitled Were We Controlled? Lawrence
wrote, “If the subject is placed under hypnosis and mentally
programmed to maintain a determination eventually to perform one
specific act, perhaps to shoot someone, it is suggested thereafter,
each time a particular muscle twitches in a certain manner, which is
then demonstrated by using the transmitter, he will increase this
determination even more strongly. As the hypnotic spell is renewed
again and again, he makes it his life’s purpose to carry out this act
until it is finally achieved. Thus are the two complementary aspects
of Radio-Hypnotic Intracerebral Control (RHIC) joined to reinforce
each other, and perpetuate the control, until such time as the
controlled behavior is called for. This is done by a second session
with the hypnotist giving final instructions. These might be
reinforced with radio stimulation in more frequent cycles. They could
even carry over the moments after the act to reassure calm behavior
during the escape period, or to assure that one conspirator would not
indicate that he was aware of the co-conspirator’s role, or that he
was even acquainted with him.”

RHIC constitutes the joining of two well known tools, the radio part
and the hypnotism part. People have found it difficult to accept that
an individual can be hypnotized to perform an act which is against
his moral principles. Some experiments have been conducted by the
U.S. Army which show that this popular perception is untrue. The
chairman of the Department of Psychology at Colgate University, Dr.
Estabrooks, has stated, “I can hypnotize a man without his knowledge
or consent into committing treason against the United States.”
Estabrooks was one of the nation’s most authoritative sources in the
hypnotic field. The psychologist told officials in Washington that a mere 200 well-trained hypnotists could develop an army of mind-controlled fifth columnists in the wartime United States. He laid out a scenario of an enemy doctor placing thousands of patients under hypnotic mind control, and eventually programming key military officers to follow his assignment. Through such maneuvers, he said, the entire U.S. Army could be taken over. Large numbers of saboteurs could also be created through hypnotism, whether by a doctor practicing in a neighborhood or by foreign-born nationals with close cultural ties to an enemy power.

Dr. Estabrooks actually conducted experiments on U.S. soldiers to
prove his point. Soldiers of low rank and little formal education
were placed under hypnotism and their memories tested. Surprisingly,
hypnotists were able to control the subjects’ ability to retain
complicated verbal information. J. G. Watkins followed in Estabrooks' footsteps and induced soldiers of lower rank to commit acts which conflicted not only with their moral code, but also with the military code which they had come to accept through their basic training. One of
the experiments involved placing a normal, stable army private in a
deep trance. Watkins was trying to see if he could get the private to
attack a superior officer, a cardinal sin in the military. While the
private was in a deep trance, Watkins told him that the officer
sitting across from him was an enemy soldier who was going to attempt
to kill him. In the private’s mind, it was a kill or be killed
situation. The private immediately jumped up and grabbed the officer
by the throat. The experiment was repeated several times, and in one
case the man who was hypnotized and the man who was attacked were
very close friends. The results were always the same. In one
experiment, the hypnotized subject pulled out a knife and nearly
stabbed another person.

Watkins concluded that people could be induced to commit acts contrary to their morality if their reality was distorted by hypnotism. Similar experiments were conducted by Watkins using WACs, exploring the possibility of making military personnel divulge
military secrets. A related experiment had to be discontinued because
a researcher, who had been one of the subjects, was exposing numerous
top-secret projects to his hypnotist, who did not have the proper
security clearance for such information. The information was divulged
before an audience of 200 military personnel.

(NEXT: School for Assassins)


Ninth in a Series

Mind Control: a Navy school for assassins

By Harry V. Martin and David Caul

Copyright, Napa Sentinel, 1991

Tuesday, October 22, 1991

In man's quest to control the behavior of humans, a great breakthrough was made by Pavlov, who devised a way to make dogs salivate on cue. He perfected his conditioned response technique by cutting holes in the cheeks of dogs and measuring the amount they salivated in response to different stimuli. Pavlov verified that "quality, rate and frequency of the salivation changed depending upon the quality, rate and frequency of the stimuli."

Though Pavlov’s work falls far short of human mind control, it did
lay the groundwork for future studies in mind and behavior control of
humans. John B. Watson conducted experiments in the United States on
an 11-month-old infant. After allowing the infant to establish a
rapport with a white rat, Watson began to beat on the floor with an
iron bar every time the infant came in contact with the rat. After a
time, the infant made the association between the appearance of the
rat and the frightening sound, and began to cry every time the rat
came into view. Eventually, the infant developed a fear of any type
of small animal. Watson was the founder of the behaviorist school of
psychology.

“Give me the baby, and I’ll make it climb and use its hands in constructing buildings of stone or wood. I’ll make it a thief, a gunman or a dope fiend. The possibilities of shaping in any direction are almost endless. Even gross differences in anatomical structure limit us far less than you may think. Make him a deaf mute, and I will build you a Helen Keller. Men are built, not born,” Watson proclaimed. His psychology did not recognize inner feelings and thoughts as legitimate objects of scientific study; he was only interested in overt behavior.

Though Watson’s work was the beginning of man’s attempts to control human actions, the real work was done by B.F. Skinner, the high priest of the behaviorist movement. The key to Skinner’s work was the concept of operant conditioning, which relied on the notion of reinforcement: all learned behavior is rooted in either a positive or negative response to that action. There are two corollaries of operant conditioning: aversion therapy and desensitization.

Aversion therapy uses unpleasant reinforcement to a response which is
undesirable. This can take the form of electric shock, exposing the
subject to fear producing situations, and the infliction of pain in
general. It has been used as a way of “curing” homosexuality,
alcoholism and stuttering. Desensitization involves forcing the
subject to view disturbing images over and over again until they no
longer produce any anxiety, then moving on to more extreme images,
and repeating the process over again until no anxiety is produced.
Eventually, the subject becomes immune to even the most extreme
images. This technique is typically used to treat people’s phobias.
Thus, the violence shown on T.V. could be said to have the
unsystematic and unintended effect of desensitization.

Skinnerian behaviorism has been accused of attempting to deprive man
of his free will, his dignity and his autonomy. It is said to be
intolerant of uncertainty in human behavior, and refuses to recognize
the private, the ineffable, and the unpredictable. It sees the
individual merely as a medical, chemical and mechanistic entity which
has no comprehension of its real interests.

Skinner believed that people are going to be manipulated. “I just
want them to be manipulated effectively,” he said. He measured his
success by the absence of resistance and counter control on the part
of the person he was manipulating. He thought that his techniques
could be perfected to the point that the subject would not even
suspect that he was being manipulated.

Dr. James V. McConnell, head of the Department of Mental Health
Research at the University of Michigan, said, “The day has come when
we can combine sensory deprivation with the use of drugs, hypnosis,
and the astute manipulation of reward and punishment to gain almost
absolute control over an individual’s behavior. We want to reshape
our society drastically.”

“It’s ironic that the German uranium intended for the Japanese, was ultimately delivered by the Americans.” – John Lansdale Jr.

By Arend Lammertink.

Abstract

For years now, revisionist authors have argued that there is something very wrong with the generally accepted historiography about the complex of factories and concentration camps known as Auschwitz. The debate so far has handled mostly about the question of whether or not it is true that the Germans systematically exterminated large numbers of people in gas chambers. For some reason, the remarkable fact that the supposed “Buna” plant within the complex produced absolutely nothing yet consumed more electricity than the entire city of Berlin thus far almost completely escaped attention. As it turns out, this is just one of the reasons to believe that this plant actually was an Uranium enrichment facility. An Uranium enrichment facility without which there would have been no A-bombing of neither Hiroshima nor Nagasaki.

Needless to say, if this were true and was to become widespread knowledge, it would have a significant impact on global politics, which thus would give us a motive for the relentless suppression of revisionists and their theses all over the Western world.

Introduction

While a lot of literature is available on the question of whether or not the Germans systematically exterminated large numbers of people in the gas chambers at Auschwitz, hardly any literature is available on what the actual purpose of the complex was, if it was not primarily an extermination camp.

The answer to that question not only shines new light on the beginning of the Atomic Age, it also explains why there is a geopolitical motive to suppress the truth about what happened at Auschwitz. At the end of the line, this suppression goes so far that a professional chemist, who wrote a report about the forensic research he conducted at the site, ended up behind bars in Germany.

So, apparently what happened at Auschwitz is important, very important. Germar Rudolf, the mentioned chemist, put it this way:

If the Holocaust were unimportant, we wouldn’t have around 20 countries on this planet outlawing its critical investigation. In fact, this is the only historical topic that is regulated by penal law. This is proof for the fact that the powers that be consider this topic to be the most important issue to keep under their strict control. Those censoring, suppressing powers are the real criminals — not the historical dissidents they send to prison.

I don’t think many people in Europe will disagree that it is important to fight racism and the spreading of hatred. In The Netherlands, where I live, this is regulated by law, which is very reasonable. In practice, such a general formulation of the law has been used successfully to prosecute a number of “holocaust deniers” in The Netherlands (see for example: [1] and [2]), which were given mild sentences in comparison to other countries.

If anything, these sentences make clear that there is absolutely no reason to explicitly make “holocaust denial” as such punishable by law, while doing so makes it next to impossible to perform independent scientific research on this important historic subject. Especially in Germany, where one cannot even defend one’s position on the subject with factual data, it is clear that the German law goes way to far, whereby the sentencing of Germar Rudolf is perhaps the most illustrative example of how well intended laws can go terribly wrong.

The story of Germar Rudolf is told in the following documentary, along with the story of Ernst Zündel and Bradley R. Smith, made by David Cole in 2007. Rudolf ended up behind bars after publishing his “Expert Report on Chemical and Technical Aspects of the “Gas Chambers” of Auschwitz”:

http://www.youtube.com/watch?v=lwentslVpXw

So far, it is clear that engaging in this debate from a historic and scientific perspective, even in The Netherlands, is not without risks. Yet, as law-abiding and freedom loving citizen, we have a moral obligation to speak out against the prosecution of people who are merely doing their job. We cannot allow science and history to be distorted because of geopolitical interests, which is clearly the case here as we are talking about the history of the Atomic weapons of mass destruction which killed at least 129,000 people in August, 1945.

Howard Zinn, who was a political science professor at Boston University, said this about our democratic responsibility to say what we want to say, especially when we deal with “deception of the public by the government in times of war“:

We have a responsibility to speak out, to speak our minds, especially now, and no matter what they say and how they cry for unity and supporting the president and getting in line. We have a democratic responsibility as citizens to speak out and say what we want to say.

One of the other things we need to do is to take a look at history, because history may be useful in helping us understand what is going on. The president isn’t giving us history and the media aren’t giving us history. They never do. Here we have this incredibly complex technologically developed media, but you don’t get the history that you need to understand what is going on today. There is one kind of history that they will give you, because history can not only be used for good purposes, but history can be abused.

History can’t give you definitive and positive answers to the issues that come up today, but it can suggest things. It can suggest skepticism about certain things. It can suggest probabilities and possibilities. There are some things you can learn from historical experience. One thing you can learn is that there is a long history of deception of the public by the government in times of war, or just before war, or to get us into war, going back to the Mexican war, when Polk lied to the nation about what was happening on the boarder between the Oasis River and the Rio Grande River.

[…]

[O]f all the things I’m going to tell you, remember two words. Governments lie. It’s a good starting point.

I’m not saying governments always lie, no they don’t always lie. But it’s a good idea to start off with the assumption that governments lie, and therefore whatever they say, especially when it comes to matters of war and foreign policy. Because when it’s a matter of domestic policy, there are things that you may be able to check up on, because its here and in this country, but something happening very far away, people don’t know very much about foreign policy. We depend on them because they’re supposed to know. They have the experts.

With this in mind, let’s take a look at David Cole, who made an extraordinary documentary about Auschwitz and addressed all of the issues which should have been openly and honestly debated, instead of having been suppressed for geopolitical reasons. Perhaps the most significant part of this documentary is Cole’s interview with Dr Franciszek Piper, “a Polish scholar, historian and author. Most of his work concerns the Holocaust, especially the history of the Auschwitz concentration camp”.

Cole managed to get Dr. Piper on tape, explaining that the alleged “gas chamber” in Auschwitz, which was said by Cole’s tourist guide to be “all original”, actually is a postwar reconstruction by the Soviets. This part starts at 35:50. I would suggest to use your own judgment and decide for yourself whether or not this documentary should be regarded as “historic review” or as “holocaust denial”:

http://www.youtube.com/watch?v=aQjNs-Ght8s

According to the transcript, these are Dr. Piper’s exact words:

So after the liberation of the camp, the former gas chamber presented a view of [an] air [raid] shelter. In order to gain an earlier view …earlier sight…of this object, the inside walls built in 1944 were removed and the openings in the ceiling were made anew.

So now this gas chamber is very similar to this one which existed in 1941-1942, but not all details were made so there is no gas-tight doors, for instance, [and the] additional entrance from the east side rested [remained] as it was made in 1944. Such changes were made after the war in order to gain [the] earlier view of this object.

This historic documentary was made over 20 years ago and today it is just as explosive as it was when it first came out. Recently, David Cole gave a 2.5 hour long radio interview about his experiences and his current view on the subject.

After this short introduction to what this debate has mostly been about, the analysis of the alleged “gas chambers” in Auschwitz and it’s importance in even present day geopolitics, we now continue with the main topic of this article, namely that because the IG Farben plant actually was a Uranium enrichment plant, people like Germar Rudolf are to be considered as having been political prisoners in modern-day Europe, a clear violation of internationally recognized Human Rights.

Hitler’s Atomic Program

For decades, few people questioned whether or not Nazi Germany came close to producing an Atomic Bomb, let alone testing one. Yet, the latter is exactly what has been suggested in 2005 by Rainer Karlsch in his book “Hitler’s Bomb”. Based on eyewitness accounts, he brings forth that in 1944 on the Baltic island of Rügen and in the spring of 1945 in Thuringia atomic bombs were tested. Also, a 1943 OSS report refers to a series of nuclear tests in the Schwabian Alps near Bisingen in July 1943. And measurements are said to have been carried out at the test site that found radioactive isotopes. Daniel W. Michaels’ review of Karlsch book reads:

Although the title of his book, Hitler’s Bomb, suggests more than the author could actually deliver, Karlsch defines the main thesis of his book much more soberly. He states very clearly that German scientists did not develop a nuclear device at all comparable to the American or Soviet hydrogen bombs of the 1950s. However, they knew in general terms how they functioned and were in a position to excite an initial nuclear reaction by means of their perfected hollow-charge technology. Only further research will determine whether their experiment represented fusion or fission reactions, or both.

Then in 2011, accordingly, “shock waves” were sent “through historians who thought that the German atomic programme was nowhere near advanced enough in WW2 to have produced nuclear waste in any quantities”:

German nuclear experts believe they have found nuclear waste from Hitler’s secret atom bomb programme in a crumbling mine near Hanover.

More than 126,000 barrels of nuclear material lie rotting over 2,000 feet below ground in an old salt mine.

[…]

Mark Walker, a US expert on the Nazi programme said: ‘Because we still don’t know about these projects, which remain cloaked in WW2 secrecy, it isn’t safe to say the Nazis fell short of enriching enough uranium for a bomb. Some documents remain top secret to this day’.

‘Claims that a nuclear weapon was tested at Ruegen in October 1944 and again at Ohrdruf in March 1945 leave open a question, did they or didn’t they?’

Dr. Joseph Farrell, author of “Reich of the Black Sun”, commented on this article on his blog, mentioning that the Nazi atom bomb tests have begun to be researched and discussed in Germany, continuing with:

This was supplemented by Carter Hydrick’s wonderful study Critical Mass, a study that in my opinion was so good that it had to be trashed by reviewers (which it was), because the story it contained was so stupendous. According to Hydrick, the Nazi nuclear program involved, at the minimum, a huge uranium enrichment program, and that program was probably successful to the point that the Nazis had enriched, to varying degrees of purity, uranium 235, and some of it was probably of fissile-weapons grade quality.

Also, he makes the point that the discovery of this nuclear waste is very significant, because it confirms Hydrick’s argument, “namely, that the Nazi program was not the haphazard, hit-and-miss, poorly coordinated laboratory affair that got no further than a few clumsy attempts by Heisenberg to build a reactor, but rather, its enrichment program was a huge concern, highly organized, and processing isotopes to a degree similar to, if not exceeding, the Manhattan project in its sheer size.”

Manhattan, we have a problem!

A study of the shipment of (bomb-grade uranium) for the past three months shows the following…: At the present rate we will have 10 kilos about February 7 and 15 kilos about May 1.

This small excerpt from a memo written by chief Los Alamos metallurgist Eric Jette, December 28, 1944[I] reveals that the Manhattan project had a serious problem. You see, the uranium bomb “Little Boy”, which was dropped on Hiroshima, would have required 50 kilos by the end of July, 1945, more than twice the amount the Manhattan project would have been able to produce themselves according to this memo.

This raises the question: “How did they solve this problem?”

In order to answer this question, we would need to know what the bottleneck in the Oak Ridge production rate was. This could be either a supply problem of raw uranium for the plant, or a problem with the production capacity of the plant itself. If raw material were the biggest problem, additional material could have come from multiple (mining) sources, including Nazi Germany. In fact, the Alsos Mission did just that.

However, if the biggest problem was the production capacity of the plant itself, then they must have gotten additional supply of enriched uranium from some external source, be it in metallic or oxide form. And that could have come from only one source: Nazi Germany.

U-235 on the U-234?

On May 14th, 1945, the German submarine U-234 surrendered to the USS Sutton, along with her precious cargo which was intended to be shipped to Japan:

The cargo included technical drawings, examples of the newest electric torpedoes, one crated Me 262 jet aircraft, a Henschel Hs 293 glide bomb and what was later listed on the US Unloading Manifest as 1,200 pounds (540 kg) of uranium oxide. In the 1997 book Hirschfeld, Wolfgang Hirschfeld reported that he saw about 50 lead cubes with 23 centimetres (9.1 in) sides, and “U-235” painted on each, loaded into the boat’s cylindrical mine shafts. According to cable messages sent from the dockyard, these containers held “U-powder”.

[…]

The fact that the ship carried .5 short tons (0.45 t) of uranium oxide remained classified for the duration of the Cold War. Author and historian Joseph M. Scalia claimed to have found a formerly secret cable at Portsmouth Navy Yard which stated that the uranium oxide had been stored in gold-lined cylinders rather than cubes as reported by Hirschfeld; the alleged document is discussed in Scalia’s book Hitler’s Terror Weapons. The exact characteristics of the uranium remain unknown.

There is little doubt that this uranium oxide was shipped to the Manhattan project, as reported by the NY Times, quoting Mr. John Lansdale Jr.:

Historians have quietly puzzled over that uranium shipment for years, wondering, among other things, what the American military did with it. Little headway was made because of Federal secrecy. Now, however, a former official of the Manhattan Project, John Lansdale Jr., says that the uranium went into the mix of raw materials used for making the world’s first atom bombs. At the time he was an Army lieutenant colonel for intelligence and security for the atom bomb project. One of his main jobs was tracking uranium.

Mr. Lansdale’s assertion in an interview raises the possibility that the American weapons that leveled the Japanese cities of Hiroshima and Nagasaki contained at least some nuclear material originally destined for Japan’s own atomic program and, perhaps, for attacks on the United States.

If confirmed, that twist of history could add a layer to the already complex debate over whether the United States had any moral justification for using its atom bombs against Japan.*

[…]

Mr. Lansdale, the former official of the Manhattan Project, displayed no doubts in the interview about the fate of the U-234’s shipment. “It went to the Manhattan District,” he said without hesitation. “It certainly went into the Manhattan District supply of uranium.”

Mr. Lansdale added that he remembered no details of the uranium’s destination in the sprawling bomb-making complex and had no opinion on whether it helped make up the material for the first atomic bomb used in war.

In the documentary “U-234-Hitler’s Last U-Boat” (2001), a few years later, Mr Lansdale did have an opinion:

http://www.youtube.com/watch?v=xw60hyA0DSw

(48:30) “I made arrangements for my staff to retrieve and test the material. I sent trucks to Porthsmouth to unload the uranium and then I sent it to Washington. After the uranium was inspected in Washington, it was sent to Oak Ridge.”

(51:16) “It’s ironic that the German uranium intended for the Japanese, was ultimately delivered by the Americans.”

(54:12) “The submarine was a God send, because it came at the right time and the right place.”

In the same documentary, Hans Bethe, former head of the Theoretical Division at the secret Los Alamos laboratory which developed the US atomic bombs, implicitly gives an estimate of the production capacity of the Oak Ridge plant, together with another person being interviewed:

(49:25) Bethe: “If you have 560 kg of uranium, it would have taken approximately a week in 1945 to separate it into weapons uranium.”

(50:27) Unknown: “500 kg of raw uranium might result in half a kg of uranium 235. Not enough to make a bomb with, but an important increment.”

Based on this, we can estimate that the production capacity of the Oak Ridge facility was approximately half a kg per week. We can compare this with the data in Jette’s memo, about 5 kg in the 12 weeks between February 7th and May 1st, which would mean an average production of about 0.42 kg per week, a pretty good match.

However, contrary to the above quote, the Wikipedia article on the U-234 states that the 560 kg of uranium oxide would have yielded about 3.5 kg of U-235 “after processing”, with a reference to the book “American Raiders” by Samuel Wolfgang. On it’s turn, this refers to “Hitler’s U-boat war: the Hunted” by Clay Blair, wherein the uranium oxide is listed as “1,232 pounds of uranium ore”. After mentioning Karl Pfaff’s (German Sailor) assistance in unloading the “boxes of uranium-oxide ore” from the submarine (also see: [3]), it states:

Scientists say this uranium ore would have yielded about 3.5 kilograms (7.7 pounds) of isotope U-235 (not a U-boat), about one-fifth of what was needed to make an atomic bomb.

Actually, the critical mass for an uranium-235 bomb is about 50 kg, but it depends on the grade: with 20% U-235 it is over 400 kg; with 15% U-235, it is well over 600 kg. So, 3.5 kilograms would be at most one-fifteenth (7%) of what was needed. The actual bomb dropped on Hiroshima used 64 kilograms of 80% enriched uranium, which in practice comprised almost 2.5 critical masses, because the fissile core was wrapped in a neutron reflector which allows a weapon design requiring less uranium.

Because U-235 constitutes about 0.711% by weight of natural uranium, 560 kg of raw uranium would result in about 3.98 kg of U-235. However, we are talking about 560 kg of uranium oxide and not pure uranium, so we have to correct for the amount of oxygen in the material in order to calculate how much uranium 235 this would yield.

If we assume the oxide to be uranium-dioxide (UO2), then we would have to take about 88% (238/(238+2*16)) of the 3.98 kg in order to correct for the oxygen, which would result in about 3.51 kg of U-235.

We can also assume the oxide to be so-called “Yellowcake”, a type of uranium powder as it would be after processing mining ore, but before enrichment. Yellowcake contains about 80% uranium oxide, of which typically 70 to 90 percent triuranium octoxide (U3O8). In that case, we have to correct by about 85% for the oxygen on top of the 80% for 20% impurities, which would result in about 2.70 kg of U-235.

Both of these numbers are significantly higher than the half a kg mentioned in the U-boat documentary, which is rather remarkable. And this is also where this story becomes intriguing, because what we see is that there are discrepancies between what is being told to the public and the hard, factual data that should corroborate with it.

Enter “Critical Mass – The Real Story of the Birth of the Atomic Bomb and the Nuclear Age” by Carter P. Hydrick, who argues that there is a lot more to this story than meets the eye, which can be found in the records of the Manhattan project:

As far as I can tell, I was the first to review the actual uranium enrichment production records, the shipping and receiving records of materials sent from Oak Ridge to Los Alamos, the metallurgical fabrication records of the making of the bombs themselves, and the records and testimony regarding failure to develop a viable triggering device for the plutonium bomb.

[…]

The critical daily production records of Oak Ridge and elsewhere have been all but ignored, though they reveal important information not previously considered in other histories, and although they tell a different story than that presently believed.

[…]

The new-found evidence taken en mass demonstrates that, despite the traditional history, the uranium captured from U-234 was enriched uranium that was commandeered into the Manhattan Project more than a month before the final uranium slugs were assembled for the uranium bomb. The Oak Ridge records of its chief uranium enrichment effort – the magnetic isotope separators known as calutrons – show that a week after Smith’s and Traynor’s 14 June conversation, the enriched uranium output at Oak Ridge nearly doubled – after six months of steady output.

Edward Hammel, a metallurgist who worked with Eric Jette at the Chicago Met Lab, where the enriched uranium was fabricated into the bomb slugs, corroborated this report of late-arriving enriched uranium. Mr. Hammel told the author that very little enriched uranium was received at the laboratory until just two or three weeks – certainly less than a month – before the bomb was dropped.

The Manhattan Project had been in desperate need of enriched uranium to fuel its lingering uranium bomb program. Now it is almost conclusively proven that U-234 provided the enriched uranium needed, as well as components for a plutonium breeder reactor.

The story so far has been recently summarized as follows by Ian Greenhalgh:

Without the German uranium and fuses, no atomic bombs would have been completed before 1946 at the earliest.

That brings us to the question: “How and where could Germany have managed to produce over 500 kg of enriched uranium?”

Buna or Uranium?

German born engineer Hans Baumann, author of a book about Hitler’s alleged escape to Argentina, recently wrote a remarkable introduction to the history of the use of high-speed centrifuges for the enrichment of uranium. He mentions that “while the U.S. had no problem creating sufficient plutonium, creating fissionable uranium proved more difficult” due to the low efficiency of the procedures they tried. In Germany, though, a professor came up with the idea of the (ultra)centrifuge, which proved successful. The rest of the article says it all:

A plant facility was built close to the Polish border (away from possible air attacks). For security reasons, the plant housing the centrifuges, was called a buna-n facility; where buna-n is an artificial rubber. At the end of the war Germany had produced 1,230 pounds of enriched uranium dioxide (UO2, containing the solidified gas of U235).

The Germans then tried to ship this heavy and radioactive metal to Japan but it never arrived.

In January 1945, the Russian army discovered this buna-n facility and evacuated the centrifuges to Russia, where they likely played an important role to create the Russian atomic bomb a few years later.

Hydrick goes into more detail:

By May 1944, compared with American production efforts that at their best resulted in enriching uranium from its raw state of .7 percent to about 10 to 12 percent on the first pass, the first German experimental ultracentrifuge succeeded with enriching the material to seven percent.

[…]

Ultracentrifuge output was so impressive, in fact, that following its very first experimental run, funding and authority were established to build ten additional production model ultracentrifuges in Kandern, a town in the southwest of Germany far from the fighting. […] The Nazis were now committed in a big way to ultracentrifuge production – and therefore to enriching uranium.

[…]

Production for the German isotope enrichment projects, once the experimental and design work were completed by Ardenne and the others, appears to have been undertaken by the I.G. Farben company under orders of the Nazi Party. The company was directed to construct at Auschwitz a buna factory, allegedly for making synthetic rubber.

Following the war, the Farben board of directors bitterly complained that no buna was ever produced despite the plant being under construction for four-and-a-half years; the employment of 25,000 workers from the concentration camp, of whom it makes note the workers were especially well-treated and well fed; and the utilization of 12,000 skilled German scientists and technicians from Farben. Farben also invested 900 million reichsmarks (equal to approximately $2 billion of today’s dollars) in the facility.

The plant used more electrical power than the entire city of Berlin yet it never made any buna, the substance it was “intended” to produce.

When these facts were described to an expert on polymer production (buna is a member of the polymer, or synthetic rubber, family), Mr. Ed Landry, Mr. Landry responded directly, “It was not a rubber plant, you can bet your bottom dollar on that.”

Landry went on to explain that while some types of buna are made by heating, which requires using relatively large amounts of energy, this energy is invariably supplied by burning coal. Coal was plentiful and well-mined in the area and was a key reason for locating the plant at Auschwitz when it was still intended to be a buna facility. The heating-of-buna process, to Landry’s knowledge, was never attempted using electricity, nor could he envision why it would have been. Landry totally dismissed the possibility that a buna plant, had it tried an electric option, would ever use more electricity than the entire city of Berlin. And the investment of $ 2 billion is, “A hell of a lot of money for a buna plant” even these days, according to Mr. Landry.

The probability of the Farben plant having been completed to make buna appears to be very slim to none. The plant contained all of the characteristics of a uranium enrichment plant, however, which undoubtedly it would never have been identified as, but it would have had an appropriate cover story to camouflage it – such as it supposedly being a buna plant. In fact, buna would have been an excellent cover because of the high level and types of technology involved in both.

From this perspective, it would make perfectly sense for the Germans to make sure the 25,000 workers from the concentration camp were well treated, well fed and even to take surprising measures in order to protect their lives from infectious disease (page 175):

The extent of the German effort to improve hygienic conditions at Auschwitz is evident from an amazing decision made in 1943/44. During the war, the Germans developed microwave ovens, not just to sterilize food, but to delouse and disinfect clothing as well. The first operational microwave apparatus was intended for use on the eastern front, to delouse and disinfect soldiers’ clothing. After direct war casualties, infectious diseases were the second greatest cause of casualties of German soldiers. But instead of utilizing these new devices at the eastern front, the German government decided to use them in Auschwitz to protect the lives of the inmates, most of whom were Jews. When it came to protecting lives threatened by infectious disease, the Germans obviously gave priority to the Auschwitz prisoners. Since they were working in the Silesian war industries, their lives were apparently considered comparably important to the lives of soldiers on the battlefield.

Cui bono?

No investigation is complete without a little exercise in “follow the money”, in this case to and from Nazi Germany. While for ages it has been said that all roads lead to Rome, it appears that all financial routes lead to “Wall Street” and have been leading there for decades already, which makes “Wall Street” a global centre of power, the spider in a gigantic web of corporations reaching all over the globe.

Perhaps the first scholar who investigated the involvement of “Wall Street” in geopolitics was Prof. Antony Sutton. In the following interview about his work, he says that “Wall Street” funded and was deeply involved in organizing three forms of socialism. These were the socialist welfare state (particularly under Roosevelt in the US), Bolshevik communism and Nazi national socialism. This gives a very good impression of the extend to which the “Wall Street” crime centre shaped the twentieth century, safely out of the public view:

http://www.youtube.com/watch?v=Sah_Xni-gtg

In “Wall Street and the rise of Hitler” Sutton wrote in his conclusions (chapter 12) about the “Pervasive Influence of International Bankers”:

Looking at the broad array of facts presented in the three volumes of the Wall Street series, we find persistent recurrence of the same names: Owen Young, Gerard Swope, Hjalmar Schacht, Bernard Baruch, etc.; the same international banks: J.P. Morgan, Guaranty Trust, Chase Bank; and the same location in New York: usually 120 Broadway.

This group of international bankers backed the Bolshevik Revolution and subsequently profited from the establishment of a Soviet Russia. This group backed Roosevelt and profited from New Deal socialism. This group also backed Hitler and certainly profited from German armament in the 1930s. When Big Business should have been running its business operations at Ford Motor, Standard of New Jersey, and so on, we find it actively and deeply involved in political upheavals, war, and revolutions in three major countries.”

A recent study by complex systems theorists at the Swiss Federal Institute of Technology in Zurich concluded that a core group of 147 tightly knit companies pretty much control half of the global economy:

AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters’ worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy. […] “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.

In other words: what we see here is that pretty much the same names that came up in Sutton’s research as being involved with shady geopolitic activities, continue to come up in investigations into the financial web of control that shapes geopolitics today. Not one, but two US Presidents gave clear and specific warnings about the potential existence of exactly such kind of corporate control structure, which could acquire “unwarranted powers”.

Interestingly enough, two later US Presidents were closely related to an individual who was “actively and deeply involved” in the group Eisenhower, Kennedy and Sutton warned us about, as reported by “The Guardian”:

The Guardian has obtained confirmation from newly discovered files in the US National Archives that a firm of which Prescott Bush was a director was involved with the financial architects of Nazism.

His business dealings, which continued until his company’s assets were seized in 1942 under the Trading with the Enemy Act, has led more than 60 years later to a civil action for damages being brought in Germany against the Bush family by two former slave labourers at Auschwitz and to a hum of pre-election controversy.

The evidence has also prompted one former US Nazi war crimes prosecutor to argue that the late senator’s action should have been grounds for prosecution for giving aid and comfort to the enemy.

[…]

The first set of files, the Harriman papers in the Library of Congress, show that Prescott Bush was a director and shareholder of a number of companies involved with Thyssen.

The second set of papers, which are in the National Archives, are contained in vesting order number 248 which records the seizure of the company assets. What these files show is that on October 20 1942 the alien property custodian seized the assets of the UBC, of which Prescott Bush was a director. Having gone through the books of the bank, further seizures were made against two affiliates, the Holland-American Trading Corporation and the Seamless Steel Equipment Corporation. By November, the Silesian-American Company, another of Prescott Bush’s ventures, had also been seized.”

Other interesting information can be found in the public record regarding the so-called “Business Plot”, an attempt to overthrow Roosevelt:

The Business Plot (also known as The White House Coup) was a political conspiracy (see Congressional Record) in 1933 in the United States. Retired Marine Corps Major General Smedley Butler claimed that wealthy businessmen were plotting to create a fascist veterans’ organization with Butler as its leader and use it in a coup d’état to overthrow President Franklin D. Roosevelt. In 1934, Butler testified before the United States House of Representatives Special Committee on Un-American Activities (the “McCormack-Dickstein Committee”) on these claims. No one was prosecuted.

BBC4 aired a documentary about this in 2007:

The coup was aimed at toppling President Franklin D Roosevelt with the help of half-a-million war veterans. The plotters, who were alleged to involve some of the most famous families in America, (owners of Heinz, Birds Eye, Goodtea, Maxwell Hse & George Bush’s Grandfather, Prescott) believed that their country should adopt the policies of Hitler and Mussolini to beat the great depression.

Conclusion

While there is no direct evidence to prove for the full 100% that the IG Farben plant near Auschwitz indeed was an Uranium Enrichment facility, there is enough circumstantial evidence to state that it almost certainly was. The combination of the characteristics of the IG Farben plant and the U-234 shipment of Uranium oxide, with which the Manhattan project solved their production problem as well as their plutonium bomb ignition problem, leaves little doubt that the cargo of the U-234 indeed contained enriched Uranium, enriched Uranium that came from the IG Farben plant near Auschwitz. The submarine would not have been a “God send” if it would not have contained enriched Uranium. In other words, what we have here is “probable cause”, enough to warrant an in-depth investigation into the details.

From this perspective, we indeed have a clear motive for both the US as well as Russia to try and hide this story. We also identified a group, centred around “Wall Street, who has an even bigger motive to keep this story under wraps. In other words: we both see a motive and an opportunity for “Wall Street” to hide this story and to cover it up with propaganda, censorship, lies and deception.

And yes, that means that we, as democratic citizen, have a moral obligation to speak out about this and say what we want to say.

And there comes a time when one must take a position that is neither safe, nor politic, nor popular; but one must take it because it is right.

Dr. Martin Luther King, Jr.

Offline references

[I] E.R. Jette to C.S. Smith memorandum: Production rate of 25, December 28, 1944, U.S. National Archives, Washington, D.C., A-84-019-70-24, as quoted by Hydrick.

Extra: correspondence with Dutch and European Parliaments

I sent an e-mail to the chair of the Tweede Kamer at July 21st, 2016 (text below), along with this attachment. So far, I have not received confirmation the chair received my message, which is rather unusual. Normally, one receives a confirmation by snail mail within a couple of days.

At July 20th, 2015, I filed a request to the European Parliament(in Dutch), requesting them to debate the issue of “holocaust denial” in relation to the freedom of speech. I received a confirmation of receival and a letter stating that the EP has already taken some decisions on the subject and therewith declaring “case closed”.

With this, I hope it is clear that I consider this issue, above all, to be a geopolitical issue.

The text from my e-mail to Dutch Parliament:

Geachte Voorzitter,

Bijgaand mijn artikel inzake Auschwitz en het atoomtijdperk, waarin ik uiteen zet dat de IG Farben “buna” fabriek nabij Auschwitz vrijwel zeker een uraniumopwerkingsfabriek moet zijn geweest. Wat hierbij van groot geopolitiek belang is, is dat het aldaar verrijkte uranium, tezamen met ontstekingsmechanismes voor plutionumbommen, zijn weg heeft gevonden naar het Amerikaanse atoomprogramma. Dit had begin 1945 enerzijds grote problemen om voldoende verrijkt uranium te produceren en beschikte anderzijds niet over de technologie om plutioniumbommen te kunnen onsteken.

We kunnen uiteindelijk vaststellen dat de atoombombardementen op Japan in augustus 1945 niet hadden kunnen plaatsvinden zonder het in Auschwitz opgewerkte uranium en de Duitse plutioniumbom ontstekingsmechanismes.

Deze informatie stelt het debat rond het bestaan van de zogenaamde “gaskamers” in Auschwitz in een geheel nieuw perspectief, te meer ook omdat het werk van Prof. Sutton duidelijk maakt dat “Wall Street” “actief en diep betrokken was bij politieke opschudding, oorlogen en revoluties” in onder andere Nazi Duitsland en Bolsjeviek Rusland. Ook hede ten dage kunnen we de invloed van “Wall Street” herkennen in het globale netwerk van corporaties dat zo ongeveer de helft van de globale economie onder controle heeft.

Juist in deze tijd, waarin rassenhaat opnieuw de kop op steekt, is het van groot belang dat we over objectieve geschiedschrijving beschikken, omdat een goed begrip van de geschiedenis onontbeerlijk is om het heden te kunnen begrijpen.

Wat we zien in de geschiedenis rond Auschwitz is dat zowel de Amerikanen als de Russen er belang bij hebben de waarheid over het atoomprogramma aldaar te onderdrukken. Het zelfde geldt voor “Wall Street”, dat er zo mogelijk nog meer belang bij heeft haar betrokkenheid bij “politieke opschudding, oorlogen en revoluties” in de doofpot te doen belanden.

Aangezien “Wall Street” over een enorme financiele macht beschikt, en dus zonder meer in staat is het publieke debat in een voor haar wenselijke richting te sturen, kunnen we vaststellen dat “Wall Street” over zowel motief, gelegenheid als de middellen beschikte om de waarheid over dit belangrijke geopolitieke verhaal middels propaganda, leugens en bedrog te onderdrukken. Daarbij in veel Europese lidstaten geholpen door wetgeving die “holocaust ontkenning” als zodanig strafbaar stelt, zonder dat het de verdachte toegestaan wordt zichzelf te verdedigen met behulp van een forensisch-wetenschappelijk onderzoek van de feiten.

Gelukkig is dat in ons land niet het geval en is onze wetgeving in de praktijk meer dan voldoende gebleken om individuen, die de rond de Auschwitz “gaskamers” gevonden discrepanties aangrijpen om discriminerende uitingen en haat te verspreiden, te veroordelen.

Dat neemt echter niet weg dat de wetgeving in met name Duitsland te ver is doorgeschoten, waardoor individuen, zoals Germar Rudolf, die voor eigen rekening historisch technisch wetenschappelijk feitenonderzoek verrichten achter de tralies belandden. In dit geval betekent dit dat onder meer de heer Rudolf feitelijk gezien moet worden als een plotieke gevangene, een politieke gevangene binnen de grenzen van onze Europese rechtsstaat.

Met andere woorden: we constateren hier dat er binnen onze Europese rechtsstaat sprake is van politieke gevangenen, hetgeen duidelijk in strijd is met de internationaal erkende rechten van de mens en het EVRM in het bijzonder.

Ik vraag daarom nogmaals uw aandacht voor dit dossier en verzoek u er alles aan te doen wat binnen uw macht ligt om een einde te maken aan deze schrijnende gang van zaken. Ik kan me niet voorstellen dat uw kamer het aanvaardbaar acht dat er binnen de Europese Unie politieke gevangenen zijn.

Ik verzoek uw kamer tevens kennis te nemen van de inhoud van mijn artikel en mij te berichten of uw kamer van mening is dat ik met deze publicatie de grenzen van het fatsoen danwel de geest van enig door uw kamer vastgestelde wet heb overtreden, daarbij in aanmerking nemende dat het onze morele plicht is om zaken die niet door de beugel kunnen aan de kaak te stellen en dat de vrijheid van meningsuiting verankerd is in onze grondwet.

In afwachting van uw reactie verblijf ik,

Met vriendelijk groet,

Ir. Arend Lammertink,

[address]


The Merovingian Mythos, and its Roots in the Ancient Kingdom of Atlantis

By Tracy R. Twyman

The Frankish King Dagobert II, and the Merovingian dynasty from which he came, have been romantically mythologized in the annals of both local legend and modern mystical pseudo-history, but few have understood the true meaning and origins of their alluring mystery. The mystique that surrounds them includes attributions of saintliness, magical powers (derived from their long hair), and even divine origin, because of their supposed descent from the one and only Jesus Christ. However, the importance of the divine ancestry of the Merovingians, and the antiquity from whence it comes, has never to this author’s knowledge been fully explored by any writer or historian. Yet I have uncovered mountains of evidence indicating that the origins of the Merovingian race, and the mystery that surrounds them, lie ultimately with a race of beings, “Nephilim” or fallen angels, who created mankind as we know it today. They also originated with a civilization, far more ancient than recorded history, from which came all of the major arts and sciences that are basic to civilizations everywhere. As I intend to show, all of the myths and symbolism that are associated with this dynasty can, in fact, be traced back to this earlier civilization. It is known, in some cultures, as Atlantis, although there are many names for it, and it is the birthplace of agriculture, astronomy, mathematics, metallurgy, navigation, architecture, language, writing, and religion. It was also the source of the first government on Earth – monarchy. And the first kings on Earth were the gods.

Their race was known by various names. In Assyria, the Annodoti. In Sumeria, the Annunaki. In Druidic lore, the Tuatha de Danaan. In Judeo-Christian scriptures, they are called the Nephilim, “the Sons of God”, or the Watchers. They are described as having attachments such as wings, horns, and even fish scales, but from the depictions it is clear that these are costumes worn for their symbolic value, for these symbols indicated divine power and royal blood. The gods themselves had their own monarchy, with laws of succession similar to our own, and they built a global empire upon the Earth, with great cities, temples, monuments, and mighty nations established on several continents. Man was separate from the gods, like a domesticated animal, and there was a great cultural taboo amongst the gods against sharing any of their sacred information with humanity, even things such as writing and mathematics. These gods ruled directly over Egypt, Mesopotamia, and the Indus Valley, and their rule is recorded in the histories of all three civilizations.

This global monarchy was the crowning glory of the ages, and the period of their rule came to be called “the Golden Age”, or as the Egyptians called it, “the First Time”, when the gods watched over man directly, like a shepherd does his flock. In fact, they were often called “the Shepherd Kings.” One of the symbols of this world monarchy was an eye hovering over a throne, and this eye now adorns our American dollar bill, presented as the missing capstone of the Great Pyramid of Giza, underneath which are written the words “New World Order.” Clearly this New World Order is the global monarchy that our Founding Fathers (not a Democrat among them) intended for this nation to participate in all along, symbolized by a pyramid as a representation of the ideal and perfectly ordered authoritarian empire. During the Golden Age of the gods, a new king’s ascendance to the global throne would be celebrated by the sacrifice of a horse, an animal sacred to Poseidon, one of the Atlantean god-kings and Lord of the Seas. (1) In fact there is an amusing story about how King Sargon’s rebellious son Sagara tried to prevent his father’s assumption to the world throne from being solidified by stealing his sacrificial horse. The horse was not recovered until years later, and Sagara, along with the “sons of Sagara”, i.e., those members of his family who had assisted him, were forced to dig their own mass grave. This grave was oddly called “the Ocean.”

It was a rebellion such as this that led to the downfall of the entire glorious empire. At some point, it is told, some of the gods broke rank. This is again recorded in just about every culture on Earth that has a written history or oral tradition. Some of the gods, finding human females most appealing, intermarried with them, breaking a major taboo within their own culture, and creating a race of human/god hybrids. Some of these offspring are described as taking the form of giants, dragons, and sea monsters, while others are said to have borne a normal human countenance, with the exception of their shimmering white skin and their extremely long life spans. This is the bloodline that brought us Noah, Abraham, Isaac, Jacob, King David, Jesus Christ, and many others – in other words, the “Grail bloodline.” Legend has it that these beings taught mankind their secrets, including the above-mentioned arts of civilization, as well as a secret spiritual doctrine that only certain elect humans (their blood descendants) would be allowed to possess. They created ritualistic mystery schools and secret societies to pass this doctrine down through the generations.

However, these actions (the interbreeding with and sharing of secrets with humans) incurred the wrath of the Most High God, and a number of other gods who were disgusted by this interracial breeding. This sparked the massive and devastating battle of the gods that has come down to us in the legend of the “war in Heaven.” Then, in order to cleanse the Earth’s surface of the curse of humanity, they covered it with a flood. Interestingly, this flood is mentioned in the legends of almost every ancient culture on Earth, and the cause is always the same. Often the waters are described as having come from inside the Earth. “The Fountains of the deep were opened”, it is said. “Suddenly enormous volumes of water issued from the Earth.” Water was “projected from the mountain like a water spout.” The Earth began to rumble, and Atlantis, fair nation of the gods, sunk beneath the salty green waves. As we shall see, this is analogous to part of the “war in Heaven” story when the “rebellious” angels or gods were punished by being cast down “into the bowels of the Earth” – a very significant location.

To be certain, some of the Atlanteans managed to survive, and many books have been written about the Atlantean origin of the Egyptian, Sumerian, Indo-Aryan, and native South American civilizations (bringing into question the validity of the term “Native American”). Little, however, has been written about those who escaped into Western Europe, except for a passing reference in Ignatius Donnelly’s Atlantis: The Antediluvian World, in which he writes:

“The Gauls [meaning the French] possessed traditions upon the subject of Atlantis which were collected by the Roman historian Timagenes, who lived in the first century before Christ. He represents that three distinct people dwelt in Gaul: 1. The indigenous population, which I suppose to be Mongoloids, who had long dwelt in Europe; 2. the invaders from a distant land, which I understand to be Atlantis; 3. The Aryan Gaul.”

That the Merovingian bloodline came from elsewhere is clear because of the legend that surrounds their founder, King Meroveus, who is said to have been the spawn of a “Quinotaur” (a sea monster), who raped his mother when she went out to swim in the ocean. Now it becomes obvious why he is called “Meroveus”, because in French, the word “mer” means sea. And in some traditions, Atlantis was called Meru, or Maru. (2) For these gods, navigation above all was important to them, for it was their sea power that maintained their military might and their successful mercantile trade. (3) The Atlanteans were associated with the sea and were often depicted as mermen, or sea monsters, with scales, fins, and horns. They were variously associated with a number of important animals, whose symbolism they held sacred: horses, bulls, goats, rams, lions, fish, serpents, dragons, even cats and dogs. All of these things relate back to the sea imagery with which these gods were associated.

Now let’s go back to the Quinotaur, which some have named as being synonymous with Poseidon, the Greek god of the sea and, according to Plato, one of the famous kings of Atlantis. Others have seen it as being emblematic of the fish symbol that Christ is associated with, thus indicating that he was in fact the origin of the Merovingian bloodline. However, the roots of this Quinotaur myth are far more ancient. The word itself can be broken down etymologically to reveal its meaning. The last syllable, “taur”, means “bull.” The first syllable “Quin”, or “Kin”, comes from the same root as “king”, as well as the Biblical name of Cain, whom many have named as the primordial father of the Grail family. (4) The idea of the “King of the World” taking the form of a sea-bull was a recurring theme in many ancient cultures, most notably in ancient Mesopotamia. In fact it originated with that dynasty of kings who reigned over the antediluvian world and who were all associated with the sea, as well as this divine animal imagery. These kings included Sargon, Menes, and Narmar. Their historical reality morphed into the legends we have in many cultures of gods said to have come out of the sea at various times and to teach mankind the basic arts of civilization. They were known by various names, such as Enki, Dagon, Oannes, or Marduk (Merodach). They were depicted as half-man and half-fish, half-goat and half-fish, or half-bull and half-fish, but as I have said, in many of these depictions it is clear that this effect was achieved merely by the wearing of costumes, and that these god-kings were using this archetypal imagery to deify themselves in the minds of their subjects.

Dagon was depicted with a fish on his head, the lips protruding upward, making what were referred to as “horns.” This may be the origin for the custom (common in the ancient world) of affixing horns to the crown of a king. It has also been historically acknowledged as the origin of the miter worn by the Catholic Pope. (5) The Christian Church has always been associated with fish. Christ himself took on that imagery, as did John the Baptist, and the early Christians used the fish sign of the “Ichthys” to designate themselves. From the name “Oannes” we get the words “Uranus” and “Ouranos”, but also supposedly “Jonah”, “Janus”, and “John.” Perhaps we finally now understand why the Grand Masters of the Priory of Sion assume the symbolic name of “John” upon taking office.

The syllable “dag” merely means “fish”, which makes it interesting to note that the Dogon tribe of Africa, who have long baffled astronomers with their advanced knowledge of the faraway star-system from which they say their gods came, claim that these gods were “fish-men.” We may wonder if the words “dag” and “dog” are not etymologically related, especially since the star from whence these fish-men supposedly came is named Sirius, “the Dog Star.” From Dagon comes our word “dragon”, as well as the biblical figure of Leviathan, “the Lord of the Deep”, a title also applied to Dagon. In fact, many of these Atlantean god-kings received the titles “the Lord of the Waters”, “The Lord of the Deep”, or “the Lord of the Abyss”, which appear to have been passed down from father to son, along with the throne of the global kingdom. These kings were specifically associated with the Flood of Noah, which, as I have mentioned, destroyed their global kingdom, and was somehow linked to their disastrous breeding experiment with the human race that led to the “Grail bloodline.” For this they were consigned to the “Abyss” or the underworld, which is why these gods were known as the lords of both.

In addition, Enki was known as the “Lord of the Earth”, and it is because of this “amphibious” nature of their progenitor, who reigned over both land and sea, that the Merovingians are associated with frogs. But this “Lord of the Earth” title is significant, for this is a title also given to Satan. It has been acknowledged elsewhere that Enki, as the “fish-goat man”, is the prototype for the Zodiac sign of Capricorn, which is itself recognized as the prototype for the modern conception of Satan or Lucifer. Furthermore, a well-known and pivotal episode in Enki’s career was his fight against his brother Enlil over the succession of the global throne. Enki eventually slew Enlil, something that is recorded in the Egyptian myth of Set murdering Osiris, and perhaps in the Biblical story of Cain murdering Abel. The connection between Enki and Enlil and Cain and Abel can be further proven by the fact that Enki and Enlil were the sons of Anu (in some Sumerian legends, the first god-king on Earth), whereas Cain and Abel were the sons of the first man, called “Adamu” in Sumerian legends. “Adamu” and “Anu” appear to be etymologically related.

This family feud erupted into a long, drawn-out battle between the gods, who were split into two factions over the issue. These appear to be the same two factions who were at odds over the mating of gods and men to create the Grail bloodline. Those who supported Enki/Satan and Cain were clearly the ones who were inclined to breed with mankind, perhaps in an attempt to create a hybrid race that could assist them in retaining the throne for Cain. But they were overpowered. After they lost the “war in Heaven”, they were cast into the Abyss (according to legend, now the realm of Satan), and the Earth was flooded so as to rid it of their offspring.

Yet according to the legends, those gods who had created the hybrid race contacted one of their most favored descendants (called Uta-Napishtim in the Sumerian legends, or Noah in the Jewish), helping him to rescue himself and his family, preserving the seed of hybrid humanity. (6) We see remnants of this in the Vedic legends of the Flood, in which the Noah figure, here called “Manu”, is warned about the Flood by a horned fish (who turns out to be the Hindu god Vishnu in disguise). The fish tells Manu to build a ship, and then tie its tip to his horn. He then proceeds to tow Manu’s ship to safety upon a high mountain. So clearly Vishnu is connected to Enki, Dagon, and Oannes, and clearly he is the same one who saved Noah from the Flood. Yet this very deed became attributed, in the Old Testament, to the same god, Jehovah, who had purportedly caused the Flood to begin with. In fact the word Jehovah, or “Jah” is said to have evolved from the name of another Sumerian sea god-king, Ea, “the Lord of the Flood.” Likewise, Leviathan is responsible, according to some references, for “vomiting out the waters of the Flood.” This occurs at the Apocalypse in the Revelation of St. John the Divine as well. Leviathan, like many of these sea gods, was the Lord of the Abyss, and these waters were believed to be holding the Earth up from underneath, in the regions of Hell. Yet “Leviathan” is almost surely etymologically related to the Jewish name “Levi”, and therefore to the “tribe of Levi”, the priestly caste of the Jews that formed part of Christ’s lineage.

This dual current, being associated with both the heavenly and the infernal, with both Jesus and Jehovah, as well as Satan and Lucifer, is something that is consistently found throughout the history of the Merovingian dynasty, as well as all of the other Grail families, and the entire Grail story itself. It is at the heart of the secret spiritual doctrine symbolized by the Grail. This symbolism hits you immediately when you walk through the door of the church at Rennes-le-Chateau, France, and see those opposing statues of the demon Asmodeus and Jesus Christ staring at the same black and white chequered floor, which itself symbolizes the balance of good and evil. This principle is further elucidated by the words placed over the doorway, “This place is terrible, but it is the House of God and the Gateway to Heaven.” This phrase turns up in two significant places. One is in the Bible, when Jacob has his vision of the ladder leading to Heaven, with angels ascending and descending. The other is in The Book of Enoch, when Enoch is taken for a tour of Hell. The existence of this phrase at the entrance to the church, coupled with the images that meet you immediately therein, renders the meaning obvious. For Berenger Sauniere, who arranged these strange decorations, this Church represented some kind of metaphysical gateway between Heaven and Hell.

For this reason, the double-barred Cross of Lorraine, symbolizing this duality, has come to be associated with the Merovingians. In a now-famous poem by Charles Peguy, it is stated:

“The arms of Jesus are the Cross of Lorraine,
Both the blood in the artery and the blood in the vein,
Both the source of grace and the clear fountaine;

The arms of Satan are the Cross of Lorraine,
And the same artery and the same vein,
And the same blood and the troubled fountaine.”

The reference to Satan and Jesus sharing the same blood is very important. A tradition exists, one which finds support in The Book of Enoch and many other texts, that Jesus and Satan are brothers, both sons of the Most High God, and they both sat next to his throne in Heaven, on the right and left sides, respectively, prior to Satan’s rebellion and the War in Heaven. This may be just another version of the persistent and primordial “Cain and Abel” story. It makes sense that Satan should be a direct son of God, since he is described as God’s “most beloved angel” and “the brightest star in Heaven.” (7)

However, this symbol is far older than the modern conceptions of Christ and Satan, or Lucifer. This symbol can be traced back to the hieroglyphs of ancient Sumer, where it was pronounced “Khat”, “Kad”, and sometimes even “Kod.” This was another title for the kings who were known as gods of the sea, and the word “Khatti” became associated with this entire race. Their region’s capital was called “Amarru” – “the Land to the West” (like Meru, the alternate term for Atlantis). This land was symbolized by a lion, which may explain the origin of the word “cat”, as well as why the lion is now a symbol of royalty. Furthermore, the word “cad” or “cod” has also become associated with fish and sea creatures in the Indo-European language system. (8) I would argue that this was at the root of the word “Cathari” (the heretics associated with the Holy Grail who occupied the Languedoc region of France that the Merovingians ruled over), as well as Adam Kadmon, the Primordial Man of alchemy, and “Caduceus”, the winged staff of Mercury. It is also the root for the name of the Mesopotamian kingdom of “Akkadia”, which itself has morphed into “Arcadia”, the Greek concept of Paradise. This further morphs into “acacia”, the traditional Masonic “sprig of hope” and symbol of resurrection after death.

Perhaps this sheds further light on the phrase “Et in Arcadia Ego”, which pops up more than once in association with the mystery of Rennes-le-Chateau and the Merovingians. This phrase was illustrated by Nicolas Poussin with the scene of a tomb, a human skull, and three shepherds. The tomb and skull clearly represent death, while the Sprig of Acacia implied by the word “Arcadia” translates to “resurrection from death.” The shepherds, furthermore, represent the divine kingship of the Atlantean gods and the Grail bloodline, for these god-monarchs were also known as the “Shepherd Kings” (a title, notably, taken up by Jesus as well). This indicates that it is the global monarchy of these Atlantean gods that shall rise again from the tomb, perhaps through the Merovingian bloodline.

This archetype of the fallen king who shall one day return, or the kingdom that disappears, only to rise again in a new, golden age, is a very common one, and one that I have shown in another article to be integral to the Grail legend. It was also one used quite effectively by the last of the Merovingian kings who effectively held the throne of the Austrasian Empire – this magazine’s mascot, Dagobert II. Dagobert’s entire life, as historically recorded, is mythological and archetypal. His name betrays the divine origins of his bloodline. “Dagobert” comes, of course, from Dagon. Now the word “bert”, as the author L.A. Waddell has shown, has its roots in the word “bara”, or “para“, or Anglicized, “pharaoh”, a “priest-king of the temple (or house).” So Dagobert’s name literally means “Priest-King of the House of Dagon.” Interestingly, a rarely-found but nonetheless authentic variation on Dagobert’s name was “Dragobert”, emphasizing his lineage from the beast of the deep waters, the dragon Leviathan.

Dagobert made use of the myth of the returning king early on in life. His father had been assassinated when he was five years old, and young Dagobert was kidnapped by then Palace Mayor Grimoald, who tried to put his own son on the throne. He was saved from death, but an elaborate ruse was laid out to make people think otherwise. Even his own mother believed he was dead, and allowed his father’s assassins to take over, placing Grimoald’s son on the throne. Dagobert was exiled to Ireland, where he lay in wait for the opportunity to reclaim his father’s throne. This opportunity showed itself in the year 671, when he married Giselle de Razes, daughter of the count of Razes and niece of the king of the Visigoths, allying the Merovingian house with the Visigothic royal house. This had the potential for creating a united empire that would have covered most of what is now modern France. This marriage was celebrated at the Church of St. Madeleine in Rhedae, the same spot where Sauniere’s Church of St. Madeleine at Rennes-le-Chateau now rests. There is an existing rumor that Dagobert found something there, a clue which led him to a treasure buried in the nearby Montsegur, and this treasure financed what was about to come. This was the re-conquest of the Aquitaine and the throne of the Frankish kingdom. As Baigent et al. write in Holy Blood, Holy Grail, “At once he set about asserting and consolidating his authority, taming the anarchy that prevailed throughout Austrasia and reestablishing order.” The fallen king had risen from his ashes, born anew as Dagobert II, and had come to once more establish firm rule and equilibrium in his country. The similarities to the Parzival/Grail story don’t even need to be repeated.

Sadly, Dagobert II would himself play the role of the fallen king just a few years later, in 679, and the circumstances were decidedly strange. You see, since the time of King Clovis I, the Merovingian kings had been under a pact with the Vatican, in which they had pledged their allegiance to the Mother Church in exchange for Papal backing of their united empire of Austrasia. They would forever hold the title of “New Constantine”, a title that would later morph into “Holy Roman Emperor.” But that “allegiance” on the part of the Merovingians towards the Church began to wear thin after a while. Obviously, given their infernal and divine origin, their spiritual bent was slightly different from that of organized Christianity. In addition, as direct descendants of the historical Christ himself, they would have possessed access to the secret teachings of Christ, no doubt shockingly different from the ones promoted by the Church, and reflecting more of the “secret doctrine” of the rebellious gods that I have talked about in this article. Any public knowledge of this or the blood relationship between Christ and the Merovingians would have been disastrous for the Church. Christ would therefore be a man, with antecedents and descendants, instead of the “son of God, born of a virgin” concept promoted by the Church. Seeing in Dagobert a potential threat, the Roman church entered into a conspiracy with Palace Mayor Pepin the Fat.

On December 23, while on a hunting trip, Dagobert was lanced through the left eye by his own godson, supposedly on Pepin’s orders. There are many aspects to this event that appear to be mythologically significant. For one thing, it took place in the “Forest of Woevres”, long held sacred, and host to annual sacrificial bear hunts for the Goddess Diana. Indeed, the murder may have taken place on such a hunt. This was near the royal Merovingian residence at Stenay, a town that used to be called “Satanicum.” We must also consider the date itself, which was almost precisely at the beginning of the astrological period of Capricorn. As I have mentioned, Capricorn is based on Enki, and is thus connected to the Quinotaur that spawned the Merovingian bloodline. It is also close to the Winter Solstice, the shortest day in the year, when the Sun was said to “die”, mythologically, and turn black, descending into the underworld. This “black” period of the Sun is associated with the god Kronos or Saturn, another horned sea-god, ruler of the underworld, and king of Atlantis who figures repeatedly in this Grail/Rennes-le-Chateau mystery. (9) Secondly, the murder is said to have taken place at midday, which, as I have mentioned in another article, is an extremely significant moment in time for mystery schools of the secret doctrine, like Freemasonry. The parchments found by Berenger Sauniere and the related poem, Le Serpent Rouge, make special mention of it. This is when the Sun is highest in the sky. The fact that Dagobert’s murder was committed by a family member is significant too. This is similar to the “Dolorous Stroke” that wounds the Fisher King in the Grail story, something which also took place at midday and was inflicted by the king’s own brother. In this story, the brother who wounds the Fisher King is known as the “Dark Lord”, and during the fight he is wounded in the left eye, precisely as Dagobert was wounded. The same thing happened to Horus in Egyptian mythology, fighting his uncle, Set. The “Left Eye of Horus” came to symbolize the hidden knowledge of the gods, just as the “left hand path” does today. Dagobert’s death appears to follow the same patterns as many other fallen kings or murdered gods whose death must be avenged. It is meant to symbolize the concept of the lost or fallen kingdom the same way the Dolorous Stroke does in the Grail story.

Clearly, Dagobert’s death meant the end for the Merovingian kingdom. All subsequent Merovingian kings were essentially powerless, and they were officially thought to have died out with Dagobert’s grandson, Childeric III. Forty-nine years later, Charles Martel’s grandson Charlemagne was anointed Holy Roman Emperor. But in 872, almost 200 years after his death, Dagobert was canonized as a Saint, and the date of his death, December 23, became “St. Dagobert’s Day.” Baigent et al. write:

“The reason for Dagobert’s canonization remains unclear. According to one source it was because his relics were believed to have preserved the vicinity of Stenay against Viking raids – though this explanation begs the question, for it is not clear why the relics should have possessed such powers in the first place. Ecclesiastical authorities seem embarrassingly ignorant on the matter. They admit that Dagobert, for some reason, became the object of a fully fledged cult… But they seem utterly at a loss as to why he should have been so exalted. It is possible, of course that the Church felt guilty about its role in the king’s death.”

Guilty, or afraid? For surely they knew that this “Priest-King of the House of Dagon”, with his divine lineage, so beloved by his people that they worshipped him as a god 200 years later, would of course be avenged for his treacherous murder. Surely they knew, as most Dagobert’s Revenge readers know, that the Merovingian bloodline didn’t die out, surviving through his son Sigisbert, and continues to jockey for the throne of France to this very day through the actions of various royal bloodlines throughout Europe. Surely they knew that this kingdom would rise again, and that the lost king would return someday. The seeds of his return have already been planted. France is united into the political mass that Dagobert had envisioned it to be when he united Austrasia, and the “Holy Roman Empire”, which the Merovingian kings were clearly attempting to form with the help of the Vatican, has now become a reality in the form of the European Union. During WWII and immediately afterwards, the Priory of Sion, that secret order dedicated to the Merovingian agenda, openly campaigned for a United States of Europe. They even proposed a flag, consisting of stars in a circle, which is identical to the flag used by the European Union today. (10) Furthermore, the world empire of the Atlantean kings who spawned the Merovingians is more complete now than it has ever been since the gods left the earth during the Deluge. The United Nations, a feeble example, will surely give way at some point to a united world government strong enough and glorious enough to be called an empire. The fallen kingdom of the gods is clearly returning, and the new Golden Age is upon us. If this author’s hunch is correct, this is, indeed, a glorious time to be alive.

Endnotes:

(1) Recall that Merovingian King Clovis was buried with a severed horse’s head.

(2) It is also the name of the famous “world mountain” of Eastern tradition.

(3) Note that “mer” is also the origin of the word “mercantile.”

(4) Cain’s name has been said to be the origin of the word “king.”

(5) Now we understand why, in the post-mortem photo of Berenger Sauniere lying on his death bed, this small parish priest is seen next to a bishop’s miter.

(6) Uta-Napishtim contains the Sumerian and Egyptian word for fish, “pish”, and perhaps we can see why some authors have claimed that the character of Noah is in fact based on Oannes, Dagon, or Enki as well.

(7) The Book of Enoch refers to the Watchers, or Nephilim, as “stars”, with various “watchtowers” in the houses of the Zodiac. Bear in mind that the ancients saw the sky above as a giant “sea”, the waters of which were kept at bay by the “Firmament of Heaven” – that is, until the Flood.

(8) At this writing, a large sea serpent 20 meters long, named “Cadborosaurus Willsi” and nicknamed “Caddy”, has just been discovered off the coast of Canada.

(9) Kronos or Saturn is the inspiration for the figures of Capricorn and the Judeo-Christian Satan.

(10) This flag was shown carried by a divine white horse, a symbol of Poseidon and world monarchy.
………………………………………………………………………………………………………………………….


Monarchy: The Primordial Form of Government, and its Basis in the “Lord of the Earth” Concept

By Tracy R. Twyman

When the Stewart King James VI of Scotland ascended the throne of England to become King James I of Great Britain, he made a speech that shocked and appalled the nobles sitting in Parliament, who had been waxing increasingly bold over the last few years, attempting to limit the powers of the crown to strengthen their own. What shocked them was that James used his coronation speech to remind them of the ancient, traditional belief that a monarch is chosen by God to be his emissary and representative on Earth, and ought therefore to be responsible to no one but God. In other words, James was asserting what has become known to history as “the Divine Right of Kings”, and the nobles didn’t like it one bit. Quotes from the speech show how inflammatory his words actually were:

“The state of monarchy is the most supreme thing upon earth, for kings are not only God’s lieutenants upon earth, and sit upon God’s throne, but even by God himself are called gods… In the Scriptures kings are called gods, and so their power after a certain relation is compared to divine power. Kings are also compared to fathers of families: for a king is truly Parens patriae, the politique father of his people… Kings are justly called gods, for that they exercise a manner of resemblance of divine power upon earth: for if you will consider the attributes to God, you shall see how they agree in the person of a king.”

The nobles were aghast. This fat, bloated pustule telling everyone to worship him as a god! It seemed patently ridiculous. Even more offensive, James finished up his speech by putting Parliament in its place, basically telling them that, since he ruled by the grace of God, any act or word spoken in contradiction of him was an act against God himself. James continued:

“I conclude then this point, touching the power of kings with this axiom of divinity, That as to dispute what God may do is blasphemy… so is it sedition in subjects to dispute what a king may do in the height of his power. I would not have you meddle with such ancient rights of mine as I have received from my predecessors… All novelties are dangerous as well in a politic as in a natural body, and therefore I would loath to be quarreled in my ancient rights and possessions, for that were to judge me unworthy of that which my predecessors had and left me.”

Although it was James I that made the concept famous, he certainly did not invent the idea of Divine Right. The concept is, as I shall show, as old as civilization itself.

As harsh and dictatorial as it may seem, such a system actually protected the rights of individual citizens from even larger and more powerful bullies such as the Parliament and the Pope. When power rests ultimately in the hands of a single individual, beholden to nobody except God, who need not appease anyone for either money or votes, injustices are more likely to be righted after a direct appeal to the king. Furthermore, past monarchs who held their claims to power doggedly in the face of increasing opposition from the Catholic Church managed, as long as they held their power, to save their subjects from the forced religious indoctrination and social servitude that comes with a Catholic theocracy. Author Stephen Coston wrote in 1972’s Sources of English Constitutional History that:

“Without the doctrine of the Divine Right, Roman Catholicism would have dominated history well beyond its current employment in the Dark Ages. Furthermore, Divine Right made it possible for the Protestant Reformation in England to take place, mature and spread to the rest of the world.”

The Divine Right practiced by European monarchs was actually based on a more ancient doctrine practiced by the monarchs of Judah and Israel in the Old Testament, whom many European royal families considered to be their ancestors, tracing their royal European lineage back to the Jewish King David, sometimes through the descendants of Jesus Christ. Such a line of descent was (and is) known as the “Grail bloodline.” One of Europe’s most famous monarchs, Charlemagne the Great, was often called “David” in reference to his famous ancestor, and Habsburg King Otto was called “the son of David.”(1) In fact, the European tradition of anointing kings comes from that practiced in the Old Testament. Author George Athas describes how the ceremony symbolized the Lord Yahweh adopting the new king as his own son:

“Firstly, the king was the ‘Anointed’ of Yahweh – the mesiach, from which we derive the term ‘Messiah.’ At his anointing (or his coronation), the Spirit of Yahweh entered the king, giving him superhuman qualities and allowing him to carry out the dictates of the deity. The psalmist of Psalm 45 describes the king as ‘fairer than the sons of men’, and continued to praise his majestic characteristics. This king also had eternal life granted to him by Yahweh. The deity is portrayed as saying to him, ‘You are my son – today I have sired you.’ The king was Yahweh’s Firstborn – the bekhor – who was the heir to his father’s estate. He was ‘the highest of the kings of the earth.’ Thus, the king was adopted by Yahweh at his coronation and, as such, was in closer communion with the deity than the rest of the people. On many occasions, Yahweh is called the king’s god. The king was distinguished far above the ordinary mortal, rendering him holy and his person sacred. It was regarded as a grievous offence to lay a hand on him. Thus, to overthrow the king was rebellion of the most heinous sort and an affront to the deity who had appointed the king… We can note that the king of Judah and Israel is described in divine terms. He is, for example, seen as sitting at Yahweh’s right hand, and his adopted son. We find similar motifs of Pharaohs seated to the right of a deity of Egypt. Psalm 45:7 calls the king an ‘elohim’ – a god. Psalm 45:7 also says ‘Your throne is like God’s throne.’”

Here we see the basis for King James’ claim that the scriptures likened human kings to gods. As such, kings were strongly associated with the priesthood as well, and in some cases took on priestly functions. However, traditionally, the Jewish priesthood was dominated by the tribe of Levi, which was biologically related but functionally separate from the royal line of David – that is, until Jesus came along, heir to both the kingly and priestly titles through his lineage back to both tribes. However, in other more ancient cultures, such as the Egyptian, the royal and priestly functions were inseparable. In addition to regarding their pharaohs as the literal offspring of deity, and in fact, deities themselves, the Egyptians believed that the institution of kingship itself had been given to them by the gods. Their first king had been one of their main gods, Osiris, whom all human kings were expected to emulate. Richard Cassaro, in his book, A Deeper Truth, elaborates:

“… during the First Time (the Golden Age when the gods ruled directly on Earth) a human yet eternal king named Osiris initiated a monarchial government in Egypt and imparted a wise law and spiritual wisdom to the people. At the end of his ministry, Osiris left his throne to the people. It was, thereafter, the duty of every king to rule over Egypt in the same manner Osiris had ruled.”

This concept that kingship began with a single divine ruler who all subsequent human kings are descendants of can be traced back to the oldest civilization acknowledged by history, Sumeria, and the other Mesopotamian cultures that followed, such as the Assyrians and the Babylonians. To quote Henri Frankfort:

“In Mesopotamia, the king was regarded as taking on godhood at his coronation, and at every subsequent New Year festival. However, he was often seen as having been predestined to the divine throne by the gods at his birth, or even at the beginning of time. Through a sacred marriage, he had a metaphysical union with the mother goddess, who filled him with life, fertility, and blessing, which he passed onto his people.”

The Encyclopedia Britannica has identified three different types of sacred kingship that were recognized in the ancient world. The king was seen as “(1) the receptacle of supernatural or divine power, (2) the divine or semi-divine ruler; and (3) the agent or mediator of the sacred.” However, this author believes it is safe to say that all of these concepts stem from the almost universal belief that kingship descended from Heaven with a single divine being who was literally thought of as the ancestor of all those who followed. This king, I believe, was known to the ancients as Kronos, the Forgotten Father, and this is another name for the deity/planet, Saturn. He was the “brightest star in the heavens”, who fell to Earth and intermarried with the daughters of men to breed a race of human kings (the Grail Bloodline), but was thereafter imprisoned in the underworld by his son, Zeus. Some might think this contradicts the traditional association of ancient kings with the Sun-god, but in fact, Saturn himself was a sun god of a sort. Some believe that in ancient times Saturn was the dominant figure in the night sky, and as such became known as “the midnight sun” (a term later used by occultists to refer to the Grail). From its position in the sky it appeared to stand still, as the rest of the night sky revolved around it. It was therefore also called “the Central Sun.”

Interestingly, although this theory of mine has long been in the works, I have recently stumbled across an author named David Talbott who shares my hypothesis on the origin of kingship. From a piece on his website, http://www.kronia.com, entitled “Saturn as a Stationary Sun and Universal Monarch”, we read:

“A global tradition recalls an exemplary king ruling in the sky before kings ever ruled on earth.

This mythical figure appears as the first in the line of kings, the father of kings, the model of the good king. But this same figure is commonly remembered as the central luminary of the sky, often a central sun, unmoving sun, or superior sun ruling before the present sun.

And most curiously, with the rise of astronomy this celestial ‘king’ was identified as the planet Saturn.”

One can see traces of this ancient progenitor of kings just in the word “monarchy” itself. The syllable “mon” means “one” in Indo-European language systems, as in “the one king who rules over all”, but in Egypt, it was one of the names of the sun god (also called “Amun-Re”). It denoted the Sun in its occluded state (when it passes beneath the Earth at night), and the word meant literally for them, “the Hidden One”, because the Sun ruled the world (and the underworld) from his secret subterranean prison. The syllable “ark” comes from the Greek “arche”, meaning original, or originator. As the first “monarch”, Kronos was the one originator of kings, the Forgotten Father of all royal bloodlines. Many of our commonly associated symbols of kingship date back to the time when Kronos first introduced it, and are directly derived from him. For instance, the crown symbolizes the (central) Sun, the “godhead” descending upon the brow of the wise king, and the Sumerian kings adorned their crowns with horns, just like Kronos was believed to have on his crown. The throne was Kronos’ seat on his celestial boat in heaven, and has been passed down to us as well. Kronos and his descendants were known as “shepherd kings”, an appellation used by royalty throughout history, and this is the origin of the king’s scepter, which was once a shepherd’s staff. The coronation stone and the orb surmounted by a cross are also Saturnian/solar symbols, and the Egyptian word for the Sun, “Re”, may be the source of the French word for king, Roi.

Kronos, and the god-kings who followed him were known by the title “Lord of the Four Corners of the World.” This has given birth to the universal, recurring archetype of “le Roi du Monde”, a concept that was brilliantly explored in a book by René Guenon of the same name. In a surprising number of cultures throughout the world and throughout history, there is this concept of “the Lord of the Earth”, an omnipresent and eternal monarch who reigns from within the very center of the Earth itself, directing events on the surface with his superhuman psyche. In the Judeo-Christian tradition, “the Lord of the Earth” is a term applied to Satan, or Lucifer, who, like Saturn, was the brightest star in Heaven, but was cast down by God and, like Saturn, imprisoned inside the bowels of the Earth, in a realm called Hell. In fact, it is quite clear that the figure of Satan comes from Saturn, the “Fish-Goat-Man”, and obviously the two words are etymologically related. Perhaps this is why the “Grail bloodline” a divine lineage from which all European kings have come, is traced by many back to Lucifer. The Medieval Christian heretics known as the Cathars took this concept to its logical conclusion and insisted that, since Satan is the “King of the World” (“Rex Mundi”, as they called him), and Jehovah was, in the Bible, the one who created the world, Jehovah and Satan must be one and the same. For preaching this they were massacred unto extinction by the Papacy.

However, in the Eastern tradition, the Lord of the Earth represents the ultimate incarnate manifestation of godhood. They too saw him as ruling his kingdom from the center of the Earth, in a subterranean city called either “Shamballah” or “Agartha.” And in this tradition, the Lord of the Earth was also a super-spiritual being capable of incarnating on the surface of the Earth in a series of “avatars”, or human kings who have ruled various eras of existence. According to New Age author Alice Bailey:

“Shamballa is the seat of the ‘Lord of the World’, who has made the sacrifice (analogous to the Bodhisattva’s vow) of remaining to watch over the evolution of men and devas until all have been ‘saved’ or enlightened.”

One of the names that the Hindus used for “the Lord of the Earth” was Manu, who, writes Guenon, is “a cosmic intelligence that reflects pure spiritual light and formulates the law (Dharma) appropriate to the conditions of our world and our cycle of existence.” Author Ferdinand Ossendowski adds:

“The Lord of the World is in touch with the thoughts of all those who direct the destiny of mankind… He knows their intentions and their ideas. If they are pleasing to God, the Lord of the world favours them with his invisible aid. But if they are displeasing to God, He puts a check on their activities.”

These are obviously activities that human kings, as incarnations of the Lord of the Earth, are expected to replicate in their own kingdoms to the best of their ability. In fact, a number of human kings throughout history have been viewed by their subjects as incarnations of the Lord of the Earth, embodying the concepts that he represents. These include Charlemagne, Alexander the Great (who was believed to have horns on his head), and Melchizedek, a mysterious priest-king mentioned repeatedly in the Old Testament and imbued with an inexplicable importance. He was called the “Prince of Salem” (as in Jerusalem), and is said to have shared bread and wine with Abraham during a ritual. Some believe that the cup which they used is the artifact that later became known as the Holy Grail. Some have also identified Melchizedek with another king of Jerusalem, Adonizedek, and with Shem, Noah’s son. Nobody knows what his ancestry is, who his descendants might have been, or why, thousands of years later, Jesus Christ was referred to in the scriptures as “a priest according to the Order of Melchizedek.” Of his significance, René Guenon writes:

“Melchizedek, or more precisely, Melki-Tsedeq, is none other than the title used by Judeo-Christian tradition to denote the function of ‘The Lord of the World’… Melki-Tsedeq is thus both king and priest. His name means ‘King of Justice’, and he is also king of Salem, that is, of ‘Peace’, so again we find ‘Justice’ and ‘Peace’ the fundamental attributes pertaining to the ‘Lord of the World.’”

Even more pertinent information is provided by René Guenon’s colleague Julius Evola, who in his book The Mystery of the Grail wrote:

“In some Syriac texts, mention is made of a stone that is the foundation, or center of the world, hidden in the ‘primordial depths, near God’s temple.’ It is put in relation with the body of the primordial man (Adam) and, interestingly enough, with an inaccessible mountain place, the access to which must not be revealed to other people; here Melchizedek, ‘in divine and eternal service’, watches over Adam’s body. In Melchizedek we find again the representation of the supreme function of the Universal Ruler, which is simultaneously regal and priestly; here this representation is associated with some kind of guardian of Adam’s body who originally possessed the Grail and who, after losing it, no longer lives. This is found together with the motifs of a mysterious stone and an inaccessible seat.”

Clearly, that foundation stone of the world is the same as the Black Sun in the center of the Earth, or the “Grail Stone” which is said to be hidden in that location. The Grail Romances provide us with much insight into the “King of the World” concept. This figure is represented in the story by one of the supporting characters, Prester John, a king who is mentioned in passing as ruling over a spiritual domain in the faraway East, and who, quite fittingly, is said to come from Davidic descent. Evola continues:

“The Tractatus pulcherrimus referred to him as ‘king of kings’ rex regnum. He combined spiritual authority with regal power… Yet essentially, ‘Prester John’ is only a title and a name, which designates not a given individual but rather a function. Thus in Wolfram von Eschenbach and in the Titurel we find ‘Prester John’ as a title; the Grail, as we will see, indicates from time to time the person who must become Prester John. Moreover, in the legend, ‘Prester John’ designates one who keeps in check the people of Gog and Magog, who exercises a visible and invisible dominion, figuratively, dominion over both natural and invisible beings, and who defends the access of his kingdom with ‘lions’ and ‘giants.’ In this kingdom is also found the ‘fountain of youth.’

The dignity of a sacred king is often accompanied by biblical reminiscences, by presenting Prester John as the son or nephew of King David, and sometimes as King David himself… ‘David, king of the Hindus, who is called by the people ‘Prester John’ – the King (Prester John) descends from the son of King David.”

The Lord of the Earth, or the figures that represent him, are often symbolized by a victory stone, or foundation stone which is emblematic of their authority. For instance, British kings are coronated on the “Stone of Destiny”, believed to have been used as a pillow by Jacob in the Old Testament. Such a stone is often referred to in mythology as having fallen from Heaven, like the Grail Stone, which fell out of Lucifer’s crown during his war with God, and became the foundation stone for the Grail kingdom, having the power, as it is written, to “make kings.” Because it fell from Heaven, the Grail is also often associated with a falling star, like that which Lucifer is represented by, and of course the Black Sun in the center of the Earth also represents Rex Mundi‘s victory stone. It is interesting, then, that in the Babylonian tongue, the word “tsar” means “rock”, and is not only an anagram of “star”, but a word that in the Russian language refers to an imperial monarch. Sometimes the monarchial foundation stone is represented as a mountain, especially the world or primordial mountain that in mythology provides the Earth with its central axis. The Sumerians referred to this as Mount Mashu, and its twin peaks were said to reach up to Heaven, while the tunnels and caves within it reached down to the depths of Hell. Jehovah in the Bible, sometimes called El Shaddai (“the Lord of the Mountain”) had Mount Zion for a foundation stone, and some believed he actually lived inside of the mountain. Later, the kingdom of Jesus Christ was said to be “founded upon the Rock of Sion.”

The stone that fell from Heaven, the royal victory stone, is also sometimes depicted under the symbolic form of a castrated phallus, such as that of Kronos, whose disembodied penis was hurled into the ocean, and there spawned the Lady Venus. This story is a recapitulation of the Osiris story, as well as the inspiration for the Grail legends, in which the Fisher King is wounded in the genitals, causing the entire kingdom to fall under a spell of perpetual malaise. The only thing that can heal the king, and therefore the kingdom, is the Grail. This is a recurring theme in world mythology: the king and/or the kingdom that temporarily falls asleep or falls under a magic spell which renders it/him ineffectual for a time, until the stars are right, or the proper conditions are met, causing the king and his kingdom to reawaken, to rise from the ashes, from the tomb, or often, to rise out of the sea. This cycle recurs in the tales of the Lord of the Earth, who alternates between periods of death-like sleep within his tomb in the center of the Earth, and rebirth, in which he once again returns to watch over his kingdom, restore righteousness and justice to the land, and preside over a new, revitalized “Golden Age.” Julius Evola writes of the archetype:

“It is a theme that dates back to the most ancient times and that bears a certain relation to the doctrine of the ‘cyclical manifestations’ or avatars, namely, the manifestation, occurring at special times and in various forms, of a single principle, which during intermediate periods exists in an unmanifested state. Thus every time a king displayed the traits of an incarnation of such a principle, the idea arose in the legend that he has not died but has withdrawn into an inaccessible seat whence one day he will manifest or that he is asleep and will awaken one day… The image of a regality in a state of sleep or apparent death, however, is akin to that of an altered, wounded, paralyzed regality, in regard not to its intangible principle but to its external and historical representatives. Hence the theme of the wounded, mutilated or weakened king who continues to live in an inaccessible center, in which time and death are suspended…. In the Hindu tradition we encounter the theme of Mahaksyapa, who sleeps in a mountain but will awaken at the sound of shells at the time of the new manifestation of the principle that previously manifested itself in the form of Buddha. Such a period is also that of the coming of a Universal Ruler (cakravartin) by the name of Samkha. Since samkha means ‘shells’, this verbal assimilation expresses the idea of the awakening from sleep of the new manifestation of the King of the World and of the same primordial tradition that the above-mentioned legend conceives to be enclosed (during the intermediate period of crisis) in a shell. When the right time comes, in conformity with the cyclical laws, a new manifestation from above will occur (Kalki-avatara) in the form of a sacred king who will triumph over the Dark Age. Kalki is symbolically thought to be born in Sambhala, one of the names that in the Hindu and Tibetan traditions designated the sacred Hyperborean center.

…many people thought that the Roman world, in its imperial and pagan phase, signified the beginning of a new Golden Age, the king of which, Kronos, was believed to be living in a state of slumber in the Hyperborean region. During Augustus’ reign, the Sibylline prophecies announced the advent of a ‘solar’ king, a rex a coelo, or ex sole missus, to which Horace seems to refer when he invokes the advent of Apollo, the Hyperborean god of the Golden Age. Virgil too seems to refer to this rex when he proclaims the imminent advent of a new Golden Age, of Apollo, and of heroes. Thus Augustus conceived this symbolic ‘filiation’ from Apollo; the phoenix, which is found in the figurations of Hadrian and of Antonius, is in strict relation to this idea of a resurrection of the primordial age through the Roman Empire… During the Byzantine age, the imperial myth received from Methodius a formulation that revived, in relation to the legend of Alexander the Great, some of the themes already considered. Here again, we find the theme of a king believed to have died, who awakens from his sleep to create a new Rome; after a short reign, the people of Gog and Magog, to whom Alexander had blocked the path, rise up again, and the ‘last battle’ takes place.”

René Guenon believed in this concept literally, and believed that the periods of slumber for the Lord of the Earth have been cyclically brought to a close by apocalypses, after which “le Roi du Monde” would return again to clean up the wreckage and once more look after his faithful flock. In The Revelation of St. John the Divine, three kings actually return from periods of slumber, death, or prolonged absence: Jesus, Satan, and Jehovah, and naturally, the governmental entity that God chooses for this utopian world is the one which has always been associated with holiness and righteousness: monarchy.

Monarchy was the first form of government observed by man, and it was, according to almost every culture, created by God himself. It is the primordial, archetypal form of government, the most natural – that which all other forms of government vainly try to mimic, while at the same time violating its most basic tenets. Monarchy was, for thousands of years, all mankind knew, and the idea of not having a monarch, a father figure to watch over them, to maintain the community’s relationship with the divine, represented to them not freedom, but chaos, uncertainty, and within a short time, death. The common people did not jealously vie for positions of power, nor did they desire to have any say in the decision of who would be king. In fact, most of them preferred that there be no decision to make at all: most monarchies functioned on the principle of primogeniture, passing the scepter and crown down from father to son, or in some cases, through the matrilineal line. The decision was up to nature or God, and therefore just and righteous in itself. Furthermore, the people knew they could count on their monarch to watch over them as he would his own children, to be fair and honest, to protect them from invasion, and to maintain the proper relationship between God and the kingdom. They desired to make their kingdom on Earth reflect the order and perfection that existed in God’s kingdom in Heaven. For thousands of years before the modern era, when 90% of the population was not intellectually capable of participating in government or making electoral decisions, monarchy stood as a bulwark against the disintegration of the societal unit, providing a stability that otherwise could not be achieved. If monarchy had not been invented, human history could never have happened. Richard Cassaro, in A Deeper Truth, said it best:

“Since the obligation of every king… is to maintain law, order, morality, spirituality, and religion within his kingdom, then the very design of a monarchy itself was probably conceived by the superior intelligence called God so as to endow mankind with a sound system of government. In other words, the concept of kingship was designed for, and delivered to, the peoples of earth by God to teach mankind to live in a humanized social environment… Human history, with its past and present kingdoms and kings – Egypt, Assyria, Persia, Babylon, Sumer, Aztec, Inca, Jordan, Saudi Arabia, Great Britain, to name a few – stands as a testimony to the fact that the monarchial form of government has been the basis for almost every civilization.”

If monarchy is the most perfect form of government, and if it has been responsible for providing us with at least 6,000 years of human history, why now does it seem to be only an ancient pretension? Why is the concept of having a monarchy actually function in government considered to be a quaint but laughable thing of the past? Have we really moved beyond monarchy?

Hardly. If you were to graph the entire 6000 years of known human history and isolate the period in which civilized nations have been without monarchs, it would be merely a blip on the spectrum. In fact, of the civilized Western nations, few do not have a monarch reigning either de jure or de facto (although they continue to elect Presidents from royal European lineage). Most nations that maintain representational government still have a monarch either recognized by the government, or by the people at large, and though essentially powerless, these monarchs maintain a symbolic link between a nation and its heritage – its most sacred, most ancient traditions. They also constitute a government-in-waiting, should the thin veneer of illusory “freedom” and “equality” that maintains democracy break down. The modern system of Republican government is based not so much on the freedom of the individual, but on the free flow of money, on debt, usury, inflation, and on a monetary house of cards known as “Fractional Reserve Lending.” It would only take a major and slightly prolonged collapse of the monetary system to eliminate this governmental system. At that point, civilized man will have essentially two choices: anarchy or monarchy, and if people have any sense at all they will choose the latter, rather than subjecting themselves to a chaotic succession of despots interspersed with periods of violence and rioting, and the poverty that comes with the lack of a stable state. It would be the most natural thing in the world for the royal families of Earth, and the monarchial system which they have maintained, to just slide right into place. The kingdom of the gods, who once ruled during man’s Golden Age, would then awaken from their slumber and heed the call to duty, like Kronos, their Forgotten Father, and monarch of all, who soundly sleeps within his tomb in the primordial mountain, waiting for his chance to once again hold dominion over the Earth.

Endnote:

(1) Otto is still, to this day, the titular King of Jerusalem.

………………………………………………………………………………………………………………………….

EXCLUSIVE: New transcript of Rand at West Point in ’74 enthusiastically defends extermination of Native Americans

Ben Norton
October 15, 2015 12:01AM (UTC)

Ayn Rand is the patron saint of the libertarian Right. Her writings are quoted in a quasi-religious manner by American reactionaries, cited like Biblical codices that offer profound answers to all of life’s complex problems (namely, just “Free the Market”). Yet, despite her impeccable libertarian bona fides, Rand defended the colonization and genocide of what she called the “savage” Native Americans — one of the most authoritarian campaigns of death and suffering ever orchestrated.

“Any white person who brings the elements of civilization had the right to take over this continent,” Ayn Rand proclaimed, “and it is great that some people did, and discovered here what they couldn’t do anywhere else in the world and what the Indians, if there are any racist Indians today, do not believe to this day: respect for individual rights.”

Rand made these remarks before the graduating class of the U.S. Military Academy at West Point on March 6, 1974, in a little-known Q&A session. Rand’s comments in this obscure Q&A are appearing in full for the first time, here in Salon.

“Philosophy: Who Needs It” remains one of Ayn Rand’s most popular and influential speeches. The capitalist superstar delivered the talk at West Point 41 years ago. In the definitive collection of Rand’s thoughts on philosophy, Philosophy: Who Needs It, the lecture was chosen as the lead and eponymous essay. This was the last book Rand worked on before she died; that this piece, ergo, was selected as the title and premise of her final work attests to its significance as a cornerstone of her entire worldview.

The Q&A session that followed this talk, however, has gone largely unremembered — and most conveniently for the fervent Rand aficionado, at that. For it is in this largely unknown Q&A that Rand enthusiastically defended the extermination of the indigenous peoples of the Americas.

In the Q&A, a man asked Rand:

At the risk of stating an unpopular view, when you were speaking of America, I couldn’t help but think of the cultural genocide of Native Americans, the enslavement of Black men in this country, and the relocation of Japanese-Americans during World War II. How do you account for all of this in your view of America?

(A transcript of Ayn Rand’s full answer is included at the bottom of this article.)

Rand replied by insisting that she did not believe “the issue of racism, or even the persecution of a particular race, is as important as the persecution of individuals.” “If you are concerned with minorities, the smallest minority on Earth is an individual,” she added, before proceeding to blame racism and the mass internment of Japanese-Americans on “liberals.” “Racism didn’t exist in this country until the liberals brought it up,” Rand maintained. And those who defend “racist” affirmative action, she insisted, “are the ones who are institutionalizing racism today.”

Although the libertarian luminary expressed firm opposition to slavery, she rationalized it by saying “black slaves were sold into slavery, in many cases, by other black tribes.” She then, ahistorically, insisted that slavery “is something which only the United States of America abolished.”

Massive applause followed Rand’s comments, which clearly strongly resonated with the graduating class of the U.S. military. Rand’s most extreme and opprobrious remarks, nevertheless, were saved for her subsequent discussion of Native Americans.

“Savages” who deserved to be conquered

In a logical sleight of hand that would confound and bewilder even Lewis Carroll, Ayn Rand proclaimed in the 1974 Q&A that it was in fact indigenous Americans who were the racists, not the white settlers who were ethnically cleansing them. The laissez-faire leader declared that Native Americans did not “have any right to live in a country merely because they were born here and acted and lived like savages.”

“Americans didn’t conquer” this land, Rand asserted, and “you are a racist if you object to that.” Since “the Indians did not have any property rights — they didn’t have the concept of property,” she said, “they didn’t have any rights to the land.”

If “a country does not protect rights,” Rand asked — referring specifically to property rights — “why should you respect the rights they do not have?” She took the thought to its logical conclusion, contending that anyone “has the right to invade it, because rights are not recognized in this country.”

Rand then blamed Native Americans for breaking the agreements they made with the Euro-American colonialists. The historical reality, though, was exactly the contrary: white settlers constantly broke the treaties they made with the indigenous, and regularly attacked them.

“Let’s suppose they were all beautifully innocent savages, which they certainly were not,” Rand persisted. “What was it that they were fighting for, if they opposed white men on this continent? For their wish to continue a primitive existence, their right to keep part of the earth untouched, unused, and not even as property, but just keep everybody out so that you will live practically like an animal?” she asked.

“Any white person who brings the elements of civilization had the right to take over this continent,” Rand said, “and it is great that some people did, and discovered here what they couldn’t do anywhere else in the world and what the Indians, if there are any racist Indians today, do not believe to this day: respect for individual rights.”

Rand’s rosy portrayal of the colonization of the modern-day Americas is in direct conflict with historical reality. In his book American Holocaust: Columbus and the Conquest of the New World, American historian David Stannard estimates that approximately 95 percent of indigenous Americans died after the beginning of European settler colonialism. “The destruction of the Indians of the Americas was, far and away, the most massive act of genocide in the history of the world,” writes Prof. Stannard. “Within no more than a handful of generations following their first encounters with Europeans, the vast majority of the Western Hemisphere’s native peoples had been exterminated.”

Nevertheless, West Point appeared to express no concern about Rand’s extreme, white supremacist views. A West Point official offered final remarks after her speech, quipping: “Ms. Rand, you have certainly given us a delighted example of a major engagement in philosophy, in the wake of which you have left a long list of casualties” — to which the audience laughed and applauded. “And have tossed and gored several sacred cows,” he added. “I hope so,” Rand replied.

More than just seemingly condoning Rand’s comments, the U.S. Military Academy also admiringly echoed Ayn Rand’s views. “Ms. Rand, in writing Atlas Shrugged,” the West Point official continued at the graduation ceremony, “made one remark that I thought was important to us when she said that the only proper purpose of a government is to protect Man’s rights, and the only proper functions of the government are the police, to protect our property at home; the law, to protect our rights and contracts; and the army, to protect us from foreign threats. And we appreciate your coming to the home of the Army tonight to address us.” More thunderous applause followed.

The U.S. Military Academy later republished the lecture — but not the Q&A — in a philosophy textbook, giving it the government’s seal of approval.

Tracking down the evidence

The book Ayn Rand Answers: The Best of Her Q & A includes Rand’s Manifest Destiny-esque defense of settler colonialism among some of the “best of her” public statements. Ayn Rand Answers was edited by philosophy professor Robert Mayhew, whom the Ayn Rand Institute describes as an “Objectivist scholar,” referring to the libertarian ideology created by Rand. ARI lists Prof. Mayhew as one of its Ayn Rand experts, and notes that he serves on the board of the Anthem Foundation for Objectivist Scholarship. The transcript included in Prof. Mayhew’s collection is full of errors, however, and reorders her remarks.

A recording of the West Point speech was available for free on the ARI website as early as April 2009. Up until around October 18, 2013, separate recordings of the speech and Q&A were still freely accessible. By October 22, however, ARI had removed the recordings from its website and put them up for sale.

Some copies of the 1974 recording have circulated the Internet, but in order to verify the quotes and authenticate the transcript, I ordered an official MP3 recording of the event from the Ayn Rand Institute eStore. (After all, I was working on a piece involving Ayn Rand, so I figured it was only natural that I had to buy something.) The quotes in this piece are directly transcribed from the official recording of Rand’s West Point speech and Q&A.

ARI created an entire course devoted to the single lecture in its online education program. ARI implores readers, “Come hear Rand enlighten and entertain the West Point cadets (laughter can be heard at various points in the audio).” The laughter often followed Rand’s most egregious remarks. Defending one of human history’s most horrific genocides can apparently be quite comical.

Ayn Rand speaking about racism, slavery, and Native Americans, at West Point in 1974 (TRANSCRIPT)

To begin with, there is much more to America than the issue of racism. I do not believe that the issue of racism, or even the persecution of a particular race, is as important as the persecution of individuals, because when you deprive individuals of rights, if you deprive any small group, all individuals lose their rights. Therefore, look at this fundamentally: If you are concerned with minorities, the smallest minority on Earth is an individual. If you do not respect individual rights, you will sacrifice or persecute all minorities, and then you get the same treatment given to a majority, which you can observe today in Soviet Russia.

But if you ask me well, now, should America have tolerated slavery? I would say certainly not. And why did they? Well, at the time of the Constitutional Convention, or the debates about the Constitution, the best theoreticians at the time wanted to abolish slavery right then and there—and they should have. The fact is that they compromised with other members of the debate and their compromise has caused this country a dreadful catastrophe which had to happen, and that is the Civil War. You could not have slavery existing in a country which proclaims the inalienable rights of Man. If you believe in the rights and the institution of slavery, it’s an enormous contradiction. It is to the honor of this country, which the haters of America never mention, that people died giving their lives in order to abolish slavery. There was that much strong philosophical feeling about it.

Certainly slavery was a contradiction. But before you criticize this country, remember that that is a remnant of the politics and the philosophies of Europe and of the rest of the world. The black slaves were sold into slavery, in many cases, by other black tribes. Slavery is something which only the United States of America abolished. Historically, there was no such concept as the right of the individual. The United States is based on that concept. So that so as long as men held to the American political philosophy, they had to come to the point, even of a civil war, but of eliminating the contradiction with which they could not live—namely, the institution of slavery.

Incidentally, if you study history following America’s example, slavery or serfdom was abolished in the whole civilized world during the 19th century. What abolished it? Not altruism. Not any kind of collectivism. Capitalism. The world of free trade could not coexist with slave labor. And countries like Russia, which was the most backward and had serfs liberated them, without any pressure from anyone, by economic necessity. Nobody could compete with America economically so long as they attempted to use slave labor. Now that was the liberating influence of America.

That’s in regard to the slavery of Black people. But as to the example of the Japanese people—you mean the labor camps in California? Well, that was certainly not put over by any sort of defender of capitalism or Americanism. That was done by the left-wing progressive liberal Democrats of Franklin D. Roosevelt.

[Massive applause follows, along with a minute in which the moderator asks Ayn Rand to respond to the point about the genocide of Native Americans. She continues.]

If you study reliable history, and not liberal, racist newspapers, racism didn’t exist in this country until the liberals brought it up—racism in the sense of self-consciousness and separation about races. Yes, slavery existed as a very evil institution, and there certainly was prejudice against some minorities, including the Negroes after they were liberated. But those prejudices were dying out under the pressure of free economics, because racism, in the prejudicial sense, doesn’t pay. Then, if anyone wants to be a racist, he suffers, the workings of the system is against him.

Today, it is to everyone’s advantage to form some kind of ethnic collective. The people who share your viewpoint or from whose philosophy those catchphrases come, are the ones who are institutionalizing racism today. What about the quotas in employment? The quotas in education? And I hope to God—so I am not religious, but just to express my feeling—that the Supreme Court will rule against those quotas. But if you can understand the vicious contradiction and injustice of a state establishing racism by law. Whether it’s in favor of a minority or a majority doesn’t matter. It’s more offensive when it’s in the name of a minority because it can only be done in order to disarm and destroy the majority and the whole country. It can only create more racist divisions, and backlashes, and racist feelings.

If you are opposed to racism, you should support individualism. You cannot oppose racism on one hand and want collectivism on the other.

But now, as to the Indians, I don’t even care to discuss that kind of alleged complaints that they have against this country. I do believe with serious, scientific reasons the worst kind of movie that you have probably seen—worst from the Indian viewpoint—as to what they did to the white man.

I do not think that they have any right to live in a country merely because they were born here and acted and lived like savages. Americans didn’t conquer; Americans did not conquer that country.

Whoever is making sounds there, I think is hissing, he is right, but please be consistent: you are a racist if you object to that [laughter and applause]. You are that because you believe that anything can be given to Man by his biological birth or for biological reasons.

If you are born in a magnificent country which you don’t know what to do with, you believe that it is a property right; it is not. And, since the Indians did not have any property rights—they didn’t have the concept of property; they didn’t even have a settled society, they were predominantly nomadic tribes; they were a primitive tribal culture, if you want to call it that—if so, they didn’t have any rights to the land, and there was no reason for anyone to grant them rights which they had not conceived and were not using.

It would be wrong to attack any country which does respect—or try, for that matter, to respect—individual rights, because if they do, you are an aggressor and you are morally wrong to attack them. But if a country does not protect rights—if a given tribe is the slave of its own tribal chief—why should you respect the rights they do not have?

Or any country which has a dictatorship. Government—the citizens still have individual rights—but the country does not have any rights. Anyone has the right to invade it, because rights are not recognized in this country and neither you nor a country nor anyone can have your cake and eat it too.

In other words, want respect for the rights of Indians, who, incidentally, for most cases of their tribal history, made agreements with the white man, and then when they had used up whichever they got through agreement of giving, selling certain territory, then came back and broke the agreement, and attacked white settlements.

I will go further. Let’s suppose they were all beautifully innocent savages, which they certainly were not. What was it that they were fighting for, if they opposed white men on this continent? For their wish to continue a primitive existence, their right to keep part of the earth untouched, unused, and not even as property, but just keep everybody out so that you will live practically like an animal, or maybe a few caves about.

Any white person who brings the elements of civilization had the right to take over this continent, and it is great that some people did, and discovered here what they couldn’t do anywhere else in the world and what the Indians, if there are any racist Indians today, do not believe to this day: respect for individual rights.

I am, incidentally, in favor of Israel against the Arabs for the very same reason. There you have the same issue in reverse. Israel is not a good country politically; it’s a mixed economy, leaning strongly to socialism. But why do the Arabs resent it? Because it is a wedge of civilization—an industrial wedge—in part of a continent which is totally primitive and nomadic.

Israel is being attacked for being civilized, and being specifically a technological society. It’s for that very reason that they should be supported—that they are morally right because they represent the progress of Man’s mind, just as the white settlers of America represented the progress of the mind, not centuries of brute stagnation and superstition. They represented the banner of the mind and they were in the right.

[thunderous applause]


I got the revolution blues, I see bloody fountains,
And ten million dune buggies comin’ down the mountains.
Well, I hear that Laurel Canyon is full of famous stars,
But I hate them worse than lepers and I’ll kill them in their cars.


The spaghetti theory of conspiracy is nothing if not psychedelic. In a psychedelic fashion, though, conspiracy theory loops back upon itself, paranoid and snarling. As I’ve tried to show with these posts, just as conspiracies extend and branch out from the tiniest biological organisms to the realm of the gods themselves, conspiracy theorizing is likewise diverse, contradictory, and leads a marginal existence of ceaseless vicissitude.

For this reason, one would think, conspiracy theories about the psychedelic movement especially should mirror the groundless, fluctuating nature of their subject. Not necessarily so. In very recent years there has been a fast-growing tendency to conclude that the psychedelic movement emerging out of the sixties, and continuing in fractured pieces even today, can be explained very simply: the whole tie-dyed cloth was designed and manufactured by the malefic Powers That Be.

A Deliberate Creation

This is exactly the thesis of a May, 2013 article by Joe Atwill and Jan Irvin entitled, “Manufacturing the Deadhead: A product of social engineering.” Beyond the title itself, the authors explicitly state their full thesis early on in the long essay.

Most today assume that the CIA and the other intelligence-gathering organizations of the U.S. government are controlled by the democratic process. They therefore believe that MK-ULTRA’s role in creating the psychedelic movement was accidental “blowback.” Very few have even considered the possibility that the entire “counterculture” was social engineering planned to debase America’s culture – as the name implies. The authors believe, however, that there is compelling evidence that indicates that the psychedelic movement was deliberately created. The purpose of this plan was to establish a neo-feudalism by the debasing of the intellectual abilities of young people to make them as easy to control as the serfs of the Dark Ages.

Such a thesis, denying “blowback,” accidents, spontaneity, unforeseen consequences, unpredictability, limited autonomy, and so on, is thoroughly absolutist in nature. Absolutist conspiracy theories, as explored in previous posts, however satisfying they may be in creating a comprehensive narrative that ostensibly explains the current sociopolitical reality, do not accurately reflect the complexity and nuances that make up that reality.

There is really no doubt that government and far more nefarious agencies were and are involved in promoting and “manufacturing” various aspects of the psychedelic counterculture. The name of their game, after all, is control. However, we go far astray in our analysis, I believe, when we conclude that every facet of this movement was contrived and engineered from the get-go. Such a conclusion is not only inaccurate, failing to account for obvious complexity, but it also robs us of the chance to take inspiration from, and gain knowledge of, the genuinely liberatory elements of the sixties counterculture.

It is crucial that we attempt to know precisely how we are being manipulated and hoodwinked, and in this the research of Atwill and Irvin, as well as that of others like Dave McGowan, is indispensable. We must not cling to illusions. But we must also not make the opposite mistake. The same dominant faction that gains from tweaking and prodding the counterculture in desired directions also gains from the widespread acceptance of conspiracy and revisionist theories that reject the counterculture in total. Such theories promote paralysis in the face of a seemingly omnipotent elite, and they also severely limit our own options of resistance.

The present post, then, will not try to demonstrate that the counterculture which captured the attention of the world in the sixties and onward is wholly good. Neither, though, will it conclude, along with Atwill and Irvin, that it was and is just a product of social engineering, just a colossal hoodwink. Instead I hope to show that any comprehensive theory of the psychedelic movement, and similar movements, must be psychedelic in itself — spaghetti-like. This doesn’t make for an easy-to-grasp, black-and-white, Hollywood storyline, but is reality ever really like that?

Mud humping

There is no need for a point-by-point refutation of Atwill and Irvin’s article. Much of their research appears pretty sound. Jan Irvin’s research on R. Gordon Wasson is especially revealing and alarming if accurate. The authors present a somewhat garbled grab-bag of every available anti-counterculture conspiracy theory and criticism, from Timothy Leary being a CIA spy to Woodstock being a designed spectacle to debase US culture through its images of stoned hippies humping in the mud. The John Birch Society in its heyday likely could not have produced a more damning indictment.

Unlike the more conventional right-wing attacks on the counterculture, however, which made the case that the hippies were a sort of Trojan Horse for world communism, Atwill and Irvin go much further in their conclusions. The goal of the Agenda, as we’ve seen, is not communism but a neo-feudal Dark Age featuring eugenics, depopulation and near universal, back-breaking servitude for the masses.

Where did Irvin and Atwill come up with this horrific vision of the near future? In fact, their view is not so different from that of other absolutist conspiracy theorists like Alex Jones and especially the very articulate Alan Watt. If there is a “mainstream” of absolutist conspiracy theorizing, Irvin and Atwill fall firmly within it. If anything, though, it is their emphasis that makes them unique. To bring about this New World Order of shit-kicking peasant stinkards and their transhuman lords and masters, they conclude, the psychedelic movement was absolutely essential.

As evidence for this Agenda the authors cite the work of Terence McKenna. Didn’t McKenna, the indefatigable psychedelic evangelist, constantly promote the idea of an Archaic Revival? Isn’t the Archaic Revival entirely synonymous with the Dark Age? Didn’t McKenna admit in an interview, published in The Archaic Revival and quoted by Atwill and Irvin, that he was a “soft Dark Ager”?

I guess I’m a soft Dark Ager. I think there will be a mild dark age. I don’t think it will be anything like the dark ages that lasted a thousand years…

This certainly appears to condemn McKenna. He is clearly advocating neo-feudalism! He must be an agent of the Agenda! It is worthwhile, though, to look up McKenna’s entire quote. Jan Irvin, to his credit, constantly exhorts his readers to check his facts. I’ll take his advice. McKenna is asked if he thought, in agreement with certain futurists, that humanity would have to pass through a new dark age in order to attain a higher state of collective consciousness. Here’s his full response:

I guess I’m a soft Dark Ager. I think there will be a mild Dark Age. I don’t think it will be anything like the Dark Ages which lasted a thousand years — I think it will last more like five years — and will be a time of economic retraction, religious fundamentalism, retreat into closed communities by certain segments of the society, feudal warfare among minor states, and this sort of thing. I think it will give way in the late ’90s to the actual global future that we’re all yearning for. Then there will be basically a 15-year period where all these things are drawn together with progressively greater and greater sophistication, much in the way that modern science, and philosophy has grown with greater and greater sophistication in a single direction since the Renaissance. Sometime around the end of 2012, all of this will be boiled down into a kind of alchemical distillation of the historical experience that will be a doorway into the life of the imagination.    

Terence is obviously quite off in his timing, but there is no indication that he is in any way advocating a new Dark Age as a positive end for social control — quite the contrary. He is saying that there may unfortunately be a wholly undesirable and unnecessary, yet extremely brief, period of reaction before the real goal emerges: the “doorway into the life of the imagination.”

It is readily apparent to anyone who spends any amount of time listening to or reading Terence McKenna that he is in no way an advocate for a Dark Age as he defines it — economic retraction, fundamentalism, closed communities, feudal warfare, etc. His advocacy of the Archaic Revival, on the other hand, is completely antithetical to this. And, once again, McKenna is very lucid about what he means by this term.

Terence argues that in a time of general crisis a society will naturally look back to a time in its history when possible solutions or the means of resolving the current crisis might be found. Thus, during the dissolution of the medieval worldview individual Europeans turned to the classical age of Rome and Greece to find new inspiration, resulting in the Renaissance.

McKenna, an admirer of both the Renaissance and classical Greece, concludes that the combined crises of modernity are so dire that we must look back even further to a time before the State, before organized religion, before the hierarchical stratification of society, before the severing of humanity’s link with the rest of nature — all key features of both the Dark Age and today.

This time is found in the long archaic (not ancient, and definitely not medieval or feudal) age of the paleolithic. And, to anticipate a stupid objection, McKenna is not advocating a return to the Old Stone Age. He is saying that there are many things that we urgently need to learn from our “primitive” ancestors and from still-existing hunter-gatherer tribes.

Fortunately, movements in art and in the wider culture and counterculture have from the late-19th century onward attempted to learn these lessons. Anyone who would equate the terms “Archaic Revival” with “Dark Age” in McKenna is either completely missing the point or is consciously misrepresenting his message.

The Fungal Bureau of Intoxication

Could it be, though, that McKenna is being devious in his presentation? If, as Irvin and Atwill assert, McKenna is an agent of the nefarious Agenda then isn’t it very possible that he is seducing people with his highly-cultivated charm and elocution to accept a vision of the Archaic Revival which is actually something completely opposite to what he says it is, namely a new Dark Age? If this is the case, Irvin and Atwill present no evidence of it. Irvin does claim, however, to have caught McKenna admitting that he is just this sort of agent.


Irvin presents this evidence, “an explosive audio clip,” in an article from August of last year explosively entitled, “NEW MKULTRA DISCOVERY: Terence McKenna admited that he was a “deep background” and “PR” agent (CIA or FBI).” The clip can be listened to here, but this is the damning quotation:

And certainly when I reached La Chorerra in 1971 I had a price on my head by the FBI, I was running out of money, I was at the end of my rope. And then THEY recruited me [laughter from his audience] and said, “you know, with a mouth like yours there’s a place for you in our organization.” And I’ve worked in deep background positions about which the less said the better. And then about 15 years ago THEY shifted me into public relations and I’ve been there to the present.

What is conspicuously absent from Jan Irvin’s account of this is the laughter. McKenna’s audience during the talk, and nearly all of his subsequent listeners, have realized that Terence is making a joke about being recruited by the Mushroom. Absolutist conspiracy theorists, in contrast, are notorious for not having a sense of humour. In his objection to this fairly basic interpretation of McKenna’s words, Jan Irvin reveals that he is definitely well within the absolutist camp:

1) Do mushrooms have organizations, deep background and public relations (propaganda)? Or does a spy agency?
2) What would mushrooms need with a public relations or propaganda department? Or is that something a spy agency would have?
3) Would mushrooms tell him the less said the better: “deep background positions about which the less said the better”, or is that something an agency would do?
4) Do mushrooms have “positions”? Or does an agency?
5) Are the mushrooms able to pay him because he’s out of money? Or is that something an agency could do? (remember he’s in trouble for smuggling)
6) Are mushrooms able to get him out of trouble with Interpol and the FBI for DRUG SMUGGLING? Or is that something an agency like the CIA or FBI could do?
7) Do mushrooms answer the story of what happened to him after his arrest? Or is that something that his employment as an agent would do?


Wow. Irvin does seem to have a point (or seven!) here. All those who laughed will surely not laugh last. The evidence is in! If there is anything, though, to take seriously I think it is McKenna’s confession that he was recruited by the Mushroom. He is admitting to a conspiracy here, and it is one that is far vaster in scope than anything the CIA and the FBI combined could think up. Irvin, unfortunately, does not appear to take this sort of conspiracy seriously.

The less interesting, more banal story of McKenna as FBI/CIA agent has been thoroughly “debunked” elsewhere on the web so there is no reason to go over the boring business again here. It is interesting (and funny) to hear Terence’s brother Dennis’ take on the whole thing. Here is Dennis in an interview from May, 2013 (at 35 minutes in):

I just feel kind of sorry for Jan, actually. He seems to have this need to see conspiracies where none exist…. This is the web of delusion that you can fall into if you’re not careful and I think he has. … It looks like pathology to me, and a lot of people see that. But then Jan will say, well, you won’t go through these 20 databases that I’ve sent you and these 200 links. And you’ve got to understand, no Jan I won’t, because for one thing I don’t have time and the fact there are connections does not necessarily a conspiracy make. I mean, yeah, Terence talked at Esalen and Aldous Huxley talked at Esalen that doesn’t mean that Esalen is involved in some plot for world domination. … I just don’t buy it.  It just seems like a waste of time. … I would think I would know that [Terence was an agent]. I would think he would have said something. You know, we were close. But then maybe he was but he didn’t even know he was. I don’t think so. I don’t know if you’ve seen Jan’s website? What is that? This is… like the [Terence’s] Timewave in a way — this elaborate model that you come up with that explains all and everything if you could just see it. I’m not seeing it, Jan, sorry.


Pathology or not (and, to be fair, Dennis is calling his brother similarly nuts), the obvious response for an absolutist conspiracy theorist would be to claim that Dennis is also a part of the conspiracy. This is essentially Jan’s response. A big deal will be made out of the fact that Dennis didn’t directly deny that his brother was an agent. This, according to absolutist logic, is tantamount to admitting that he was an agent.

If this was all Irvin and Atwill had on Terence McKenna it would seem like pretty flimsy stuff. Yet of course this is not their full argument. As Dennis explains, Terence is condemned for connections, real or illusory, that he had with institutions and people like Esalen, Huxley, Teilhard de Chardin, Marshall McLuhan, etc. As a lover of synchronicity I will accept all of these connections and more. I just doubt that any of these prove that McKenna was, consciously or not, working for an Agenda to enslave humanity.

For me to try to refute these assertions would involve plunging into the “20 databases” and “200 links” and that is not really my purpose here. McKenna himself is only one small facet of Atwill and Irvin’s mega-thesis and even to definitively prove that McKenna was a saint, which he by no means was, would not really shake the core of their claim. It is a good idea to look into some of this research, though, just to see if it stands up to scrutiny.


A Dose Of Disinfo

Another key player in the conspiracy, according to Atwill and Irvin, is Albert Hofmann, the inventor of LSD. If a psychedelic conspiracy really exists then Hofmann has got to be in the thick of it, right? Atwill and Irvin present their most damning evidence against Hofmann:

Though like many of those associated with the origins of the psychedelic movement, Albert Hofmann is called “divine,” evidence has come to light which exposes him as both a CIA and French Intelligence operative. Hoffman helped the agency dose the French village Pont Saint Esprit with LSD. As a result five people died and Hofmann helped to cover up the crime. The LSD event at Pont Saint Esprit led to the famous murder of Frank Olson by the CIA because he had threatened to go public.

A footnote informs us that this “evidence” is taken from journalist Hank Albarelli’s 2009 book, A Terrible Mistake: The Murder of Frank Olson and the CIA’s Secret Cold War Experiments. If we look into the mass poisoning event in Pont-Saint-Esprit in 1951, we quickly find that Albarelli is about the only person claiming that the CIA dosed the village with LSD. Steven Kaplan, a professor of history at Cornell University who also wrote a book about the events of the French village, has described Albarelli’s theory as “absurd.”

I have numerous objections to this paltry evidence against the CIA. First of all, it’s clinically incoherent: LSD takes effects in just a few hours, whereas the inhabitants showed symptoms only after 36 hours or more. Furthermore, LSD does not cause the digestive ailments or the vegetative effects described by the townspeople…

Now it could be that Kaplan is himself a conspirator assigned the task of whitewashing the odious deeds of the CIA, but oddly it is not Kaplan that Irvin and Atwill place under suspicion. It is Albarelli. Apparently it was Albarelli who attempted to thwart Irvin’s research into Gordon Wasson’s ties to the CIA:

An example of how Wasson’s activities for the CIA have been kept hidden is the work of MK-ULTRA “expert” and author Hank Albarelli, a former lawyer for the Carter administration and Whitehouse who also worked for the Treasury Department. Though Albarelli presents himself to the public as a MK-ULTRA ‘whistleblower’, he apparently attempted to derail Irvin’s investigation into Gordon Wasson.

But wait a minute. If Albarelli has been outed by Irvin and Atwill as a disinfo agent then why is he cited as the sole source of “evidence” that Albert Hofmann assisted the CIA in dosing a French village with LSD? Might not this also be disinformation? At the very least this is an example of extremely sloppy research by Irvin and Atwill. To use a source which these authors themselves go on to discredit in order to attempt to slag Hofmann is really scraping the bottom of the barrel. One wonders how much more of Irvin and Atwill’s research, if one was feeling particularly masochistic and had a ton of time to sift through it, would similarly transmute into shit.

Leveling The Playing Field For Everyone

Fortunately, though, Jan Irvin has education on his side. Real education — not the kind we plebs get from ordinary public schools and universities. Jan has rediscovered the Trivium — the ancient arts of Grammar, Logic and Rhetoric, which along with the Quadrivium make up the Seven Liberal Arts. On his website we can listen to a genuinely fascinating series of podcasts on the Trivium, largely presented by Gene Odening.

In the first interview with Odening we are told that the Trivium is the educational method, ancient in origin, which is even now taught at the boarding schools of the elite. The purpose of the Trivium is to develop critical thinking. It essentially is a tool to see through the bullshit, to expose the conditioning, propaganda and manipulation that we all face. So far so good. A foolproof methodology of critical thinking is definitely desired. The three arts are conveniently broken down as follows:

[1] General Grammar
(Answers the question of the Who, What, Where, and the When of a subject.) Discovering and ordering facts of reality comprises basic, systematic Knowledge.

[2] Formal Logic
(Answers the Why of a subject.) Developing the faculty of reason in establishing valid [i.e., non-contradictory] relationships among facts comprises systematic Understanding.

[3] Classical Rhetoric
(Provides the How of a subject.) Applying knowledge and understanding expressively comprises Wisdom or, in other words, systematically useable knowledge and understanding.


Sounds great. Comprehensive and handily applicable. It actually sounds strangely familiar. Oh, I remember where I heard something like this — in a talk by Terence McKenna:

The world is so tricky that without rules and razors you are as lambs led to the slaughter. And I’m speaking of the world as we have always found it. Add onto that the world based on techniques of mass psychology, advertising, political propaganda, image manipulation…There are many forces that seek to victimize us. And the only way through this is rational analysis of what is being presented. It amazes me that this is considered a radical position. I mean, this is what used to be called a good liberal education. And then somewhere after the sixties when the government decided that universal public education only created mobs milling in the streets calling for human rights, education ceased to serve the goal of producing an informed citizenry. And instead we took an authoritarian model: the purpose of education is to produce unquestioning consumers with an alcoholic obsession for work. And so it is. [at 12:55 minutes]

Here McKenna almost sounds as if he had listened to Jan Irvin’s podcast — except that this was recorded way back in 1994. The similarities between the two, though, are striking. By “a good liberal education” Terence is undoubtedly referring to the Seven Liberal Arts, which include the Trivium. His concerns are also identical to those of Irvin and Odening. He is advocating a “rational analysis of what is being presented,” a system of “rules and razors,” in order to deflect the “many forces that seek to victimize us.”


The one glaring difference between Irvin and McKenna on this point is their view of the sixties. According to McKenna, students and other protesters gained their critical view of the establishment through a public liberal education and the use of psychedelics. According to Irvin and Atwill, it was the use of psychedelics and the lack of a proper liberal education that so definitely duped the sixties generation. How could such divergent opinions both be generated by two seemingly sincere advocates of critical thinking and the Trivium?

But beyond this how could McKenna, that outed agent and psychedelic snake-oil salesman, be an advocate for the Trivium at all? Is he just lying? Are we to assume that every time he tells his audience to “question authority — even my own” and “try it for yourself” that he actually means “do exactly what I say”?


There may be a solution to this puzzle. As we progress through Irvin’s “Trivium Education” podcasts we come to a very fascinating interview with Kevin Cole, a Trivium Method student of Odening and Irvin. Cole relates how in his own research he discovered that the Classical Trivium and the Seven Liberal Arts were actually used as a complete system of control by the elite for centuries.

The Classical Trivium, we finally learn, is entirely different from the Trivium Method (perhaps we should start to call it the Trivium Method™?) which was developed by Odening and interpreted by Irvin in order to free minds rather than to enslave them.


It’s obvious, therefore, that McKenna is only an advocate of the Classical Trivium and not the liberating Trivium Method™. The similarity of language and purported methodology is only there to deceive. That clears up that. But hold on a sec — weren’t we told in the first of these podcasts that the Trivium Method™ was ancient and that it is still taught to the children of the elite? A confused commenter on the Cole episode, a now distraught former acolyte, expresses similar concerns:

To be honest, this upset me quite a bit. This shed light on the enormous amount of bullshit about the classical trivium that was spewed for a few years by Gnostic Media and Tragedy And Hope.
Here are some questions I have for you:
What form of education, if not the classical trivium, is taught to the “elite?” It seems that all of your previous claims about the trivium being taught to the “elite” was pure conjecture.
If we are inherently free, why do we need a “liberating” education?
Why was Gene Odening so misinformed about this? Why should I, after watching this video, continue to use the “trivium method” which is now so clearly a misunderstanding of the true classical trivium on the part of a “self-taught scholar?”
These are only some of MANY questions that need to be answered. I’m sure I’m speaking on the behalf of many others who feel the same about this issue. There’s been a lot of conjecture and bullshit, and we demand answers.


Jan Irvin, master of Rhetoric, responds with his usual balance of wisdom, subtlety and eloquence:

We have ALWAYS explained that the trivium was used for mind control. If you haven’t caught on to that, you weren’t paying attention. There was 3 years of grammar alone that had to be done to flush all of the misapplication of the trivium out. Gene has always explained from day one that it was used for control. He never said it wasn’t. That was the ENTIRE PURPOSE of releasing it! To level the playing field for EVERYONE! If you want to be controlled by those who misuse it, then don’t study it and live in ignorance. It seems you weren’t even paying attention to what this video had to say, as the video explained that what Gene has put forth is the first time it’s been used for FREEDOM. Can you show us were we haven’t said it was used for control by the elites?

Ah… so the trivium is not the trivium. There is no contradiction here. The trivium can be used both to liberate and to ensnare. Kind of like a good trip and a bad trip? If we accept, though, that Odening’s new Trivium Method™ is a way to liberate the masses while the old Classical Trivium is used for mind control, there is no need to additionally accept that the TM™ is ancient and therefore well-tested. Like any new system of thought, or any ancient system, every aspect of it must be held up to full scrutiny.

Quisquidquandoubicur

Irvin is fond of saying, for example, “do not put your Logic before your Grammar.” By this he means to not approach a situation with a ready-made theory of why it is like it is. Instead we must first compile and examine all of the available facts of who, what, where, and when (the Grammar) and only then can we attempt an explanation (the Logic). A valid explanation can only arise if the basic facts do not contradict one another.

A problem emerges, however, with determining these “facts.” If we say, for example, that who Aldous Huxley is, is an evil promoter of eugenics and world government then we already have reasons why we have concluded this. We have already put our Logic before our Grammar. Each fact is at first a theory. But a supporter of the TM™ might say that this is acceptable because our reasons for concluding that Huxley is a supporter of eugenics and world government are also based on facts — Huxley’s family ties to the Eugenics movement etc.


This might all be valid. These facts might in turn be very sound, but we would still have reasons for accepting them as facts. A pure fact, though, pure Grammar, the whats and whos and wheres, may be impossible to separate from why. This may seem like nitpicking, but over and over I’ve seen the no-logic-before-grammar clause being used by Irvin in an attempt to out-argue his opponents. It doesn’t hold water.

As an example, if we accept as fact, as Grammar, that Aldous Huxley is a tireless advocate for totalitarian rule, then the letter he wrote to George Orwell, cited by Irvin and Atwill, discussing which of their dystopic visions is more accurate, will strike us as being very sinister. If in contrast we view both Brave New World and 1984 as novels intended to warn people against creeping totalitarianism, then our reading of this letter will be very different.

Within the next generation I believe that the world’s rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience. In other words, I feel that the nightmare of Nineteen Eighty-Four is destined to modulate into the nightmare of a world having more resemblance to that which I imagined in Brave New World. The change will be brought about as a result of a felt need for increased efficiency. Meanwhile, of course, there may be a large scale biological and atomic war — in which case we shall have nightmares of other and scarcely imaginable kinds.


If we have already concluded as fact, as Grammar, that Albert Hofmann is a CIA agent then it is easy to believe that he helped poison a French village with LSD, even though our only source for this “fact” is from a writer that we have already discredited.

Like every other human theory, Irvin and Atwill’s theory on the manufacture of the counterculture is supported with cherry-picked “facts.” This is not so much a condemnation of their theory as it is to state that they are, like anyone else, all too human. The application of the Trivium Method™ no more guarantees the truth of their theory than does the application of the apologetics of Thomas Aquinas.

What happens when “facts” are encountered that don’t appear to fit this theory? What do we do, for instance, with Mae Brussell’s well-reasoned theory that the Manson murders were an Establishment psyop designed to disorientate and discredit the growing counterculture which directly threatened elite control?


If, as according to Irvin and Atwill, the hippies were “manufactured” in order to transform culture then why would TPTB try to bring down their own creation just a couple of years after it gained mainstream attention? Was Mae simply wrong? Was she also an agent?

And what about the conservative reaction in the Reagan eighties against all vestiges of the former counterculture? What about the “Moral Majority”? What about the promotion of “family values”? What about the “culture wars”?


Are Reagan and Pat Robertson the good guys here? Did the CIA’s program fail or did another phase of their manipulation kick in — the clichéd and misunderstood Hegelian dialectic, perhaps? And then there were the nineties when the psychedelic pied pipers like McKenna and others were once again set loose to dose the imaginations of a whole new generation. Did the Agenda move back on track or did it even more come off the rails?

I’m not saying that these facts cannot be worked into the theory of Irvin and Atwill. Absolutist conspiracy theories can usually absorb any fact that is thrown at them. As far as I know, however, they have not yet been shoehorned into the mix, and when they are, the resulting mess is not necessarily going to be logical.

Uncertain and Incomplete

And yet increasingly in recent years logic is equated with certainty. Debunkers and “skeptics” of every stripe are on the march. “Pseudoscience,” claims of the paranormal, conspiracy theories, spirituality, alternative medicine — the whole ball of “woo” is in the crosshairs. In the face of this, into the viper’s den of pop-up fallacies and rational wikis, steps fearless researcher and podcaster extraordinaire, James Corbett.

In a largely overlooked Aug. 2012 podcast entitled “Logic Is Not Enough,” Corbett dares to present a bit of heresy — humans are really not all that logical and logic itself can only take you so far. He illustrates this by simply showing how even a perfectly valid argument can reach a false conclusion if its premises are wrong (the classic schoolbook case: “All birds can fly; penguins are birds; therefore penguins can fly” is valid in form, yet its conclusion is false because its first premise is false).

Beyond the scope of formal logic, Corbett explains that Heisenberg’s Uncertainty Principle in physics and Gödel’s Incompleteness Theorems in mathematics both demonstrate that even within these hardboiled fields of study unpredictability and indeterminacy rear their ugly heads. With or without logic, certainty is elusive.

Buckminster Fuller, in a conversation from 1967, takes this all much further than Heisenberg (or Corbett!):

Heisenberg said that observation alters the phenomenon observed. T.S. Eliot said that studying history alters history. Ezra Pound said that thinking in general alters what is thought about. Pound’s formulation is the most general, and I think it’s the earliest. [quoted in Hugh Kenner, The Pound Era]


By studying the history of the sixties counterculture, Atwill and Irvin are altering history. By thinking and writing about their theory, I am altering it. Both alterations are fine and should be expected. The problem arises when we think that we have captured the history or the idea.

To tie a living thing down, to analyse it and to categorize it, is to change it. And by attempting to do so it changes us. It should not take a physicist or a mathematician to “prove” this. And it is, of course, the poets who would realize this first. (I’ll discuss in depth the wisdom and folly of Ezra Pound in the second part of this essay.)

In his podcast, Corbett reminds us that much of the “Agenda” aims to refashion irrational individuals into logical machines. Elite control freaks like George Bush Sr. avow that “The enemy is unpredictability. The enemy is instability.” To be truly logical is to be entirely predictable, entirely stable. A logical person, a person well-trained in the Trivium Method, let’s say, can be counted on to say and do the logical thing at every step. He or she is not overly emotional, not contradictory in his or her actions and thoughts, and is entirely stable. A clockwork orange.

The usual argument for why the CIA gave up its research on LSD and other psychedelics is precisely that they have unpredictable effects. They can be used to decondition people, but they are very poor at reliably reconditioning people. Who in the world has ever had a predictable psychedelic trip?

Irvin and Atwill are correct to warn us about how post-Freudian sorcerers of schlock like Edward Bernays use advertising and propaganda to target us emotionally, scramble our logic, and direct the course of culture. Irvin and Atwill’s attack on the state education system and the entertainment industry as instruments to “dumb down” the public is indispensable. Critical thinking and reason, more than ever, are required.

There is a broader way to look at all of this, however. In Corbett’s podcast episode we briefly hear a clip from an interview with the cognitive scientist George Lakoff. Lakoff explains that reason, contrary to what was thought in the 18th century and what is still accepted by political and social institutions even now, is not fully conscious, unemotional or subject to formal logic. Instead it is embodied, it is driven by empathy for others over “enlightened self-interest,” and it frequently perceives metaphorically, not logically.

An individual human is by no means a logical machine, nor is he or she entirely driven by irrational emotions. We are complex, even contradictory, creatures. It may be that there is no possible way, in disagreement with Huxley and Orwell, for our psyches to be fully bridled. On the other hand, it may be equally impossible to develop a foolproof method for preventing attempts to bridle them.

All methods fail for some and succeed for others. Psychedelics aren’t the whole answer, and neither is the Trivium Method™. Contradictions are out there and in here, always. As Walt Whitman wrote:

Do I contradict myself?
Very well then I contradict myself,
I am large, I contain multitudes.


The conspiracy, the conspiracies, are also contradictory. They are also embodied, emotional, metaphoric, fluid, unpredictable, multitudinous. So is the counterculture. So is a psychedelic trip. So is Jan Irvin. So is this post. The pop-up fallacy machine would likely blow a gasket processing what I’ve written here. I really don’t care.

Corbett mentions one last fallacy that might help me out: the fallacy fallacy. This is the false presumption that just because a claim is poorly argued, and/or it contains many fallacies, that the claim itself is wrong. It may just be, though, that I’m making a fallacy fallacy fallacy: the equally false presumption that the fallacy fallacy somehow excuses poor argumentation and/or the use of fallacies. There’s a conspiracy theory for you. Here’s another:

Conspiracy theory, in my humble opinion, is a kind of epistemological cartoon about reality. Isn’t it so simple to believe that things are run by the greys, and that all we have to do is trade sufficient fetal tissue to them and then we can solve our technological problems, or isn’t it comforting to believe that the Jews are behind everything, or the Communist Party, or the Catholic Church, or the Masons. Well, these are epistemological cartoons, it is kindergarten in the art of amateur historiography.

I believe that the truth of the matter is far more terrifying, that the real truth that dare not speak itself is that no one is in control, absolutely no one. This stuff is ruled by the equations of dynamics and chaos. There may be entities seeking control, but to seek control is to take enormous aggravation upon yourself. It’s like trying to control a dream.

The dream or nightmare may not be controllable, but it does have a certain structure, a patterned energy, a flux of phosphenal filaments. And it is both bound and sent spinning by spaghetti.


Chapter 4
It’s like a different world. The fenced-in apartment complex in the heart of Denver is located just a short walk from glitzy boutiques and high-end restaurants, but there is no sign of prosperity here. Homeless people are camped nearby while addicts smoke crack in the parking lot.
People are socializing in front of the building’s entrance despite the midday heat. A black man is pacing the fence trying to get someone’s attention while two younger men are carrying furniture into a neighboring building. It’s a convivial neighborhood and everyone seems to know everyone else. Everyone, that is, except the older, gray-haired man who walks up to the door with a shopping bag full of vegetables at around 4 p.m. He doesn’t even look at his neighbors before disappearing into the complex without greeting any of them.
But once you’ve seen the photos from the inside of his apartment, it immediately becomes clear why the 66-year-old seeks to limit his contact with the outside world. And it becomes even more clear when you look into his past.
James Mason at his home in Denver
James Nolan Mason was an extremist even as a teenager. He joined the American Nazi Party of George Lincoln Rockwell when he was just 14 and became involved in the National Socialist Liberation Front in the 1970s. He has served several prison terms, including one stint for attacking a group of black men together with an accomplice. On another occasion, he was charged with child abuse. During a search of his apartment, the police found naked photos of a 15-year-old girl along with swastika flags and photos of Adolf Hitler and his propaganda minister, Joseph Goebbels.
In the 1980s, Mason decided to publish his fantasies of power and violence in book form, which he called “Siege.” The tome – a collection of his bizarre newsletters, on which he collaborated with the cult leader Charles Manson – is full of Holocaust denial and ad hominem attacks on both homosexuals and Jews. Above all, however, it calls for the establishment of a network of decentralized terror cells and for taking up arms against the “system.” Mason’s goal has long been that of passing along his intolerant worldview to the next generations – and for a long time, he found no success. But that all changed in 2015.
James Mason and Atomwaffen Division
Propaganda photos from the Atomwaffen Division cell in Texas
That year, the Nazi group Atomwaffen Division (“Atomwaffen” is German for atomic weapon) was founded on the internet forum ironmarch.org, a discussion platform for neo-Nazis from around the world. The extremists discovered James Mason and were excited about his crazed, radical ideas. “Siege” became a must-read and Mason their ideological doyen. But that isn’t the only thing that makes them so dangerous, according to experts on right-wing extremism. Members are heavily armed and prepared to make use of their weapons. Indeed, they are getting ready for what they see as the coming “race war” in so-called “hate camps.” Weapons training is conducted by members of the U.S. military, who are also among the group’s members. According to one former member of Atomwaffen Division, newcomers must submit to waterboarding, in addition to other such trials. But who is behind Atomwaffen Division?
The first murder took place on May 19, 2017. That’s when Devon Arthurs, 18, shot to death his two housemates, Andrew Oneschuk, 18, and Jeremy Himmelman, 22. All three were members of Atomwaffen Division, but Arthurs would later say that the other two didn’t respect his faith. Arthurs, it turned out, had slowly become estranged from the group’s right-wing extremist ideology, converted to Islam and began sympathizing with Islamic State.
Killer Arthurs (left), victims Oneschuk and Himmelman (right)
The group’s leader, Brandon Russell, likewise lived in the shared residence and the police found firearms, ammunition and bomb-making supplies in the garage. Before the discovery, Russell had told followers in internal chats of his intention to blow up a power plant. He was sentenced to five years behind bars.
The Murders Continue
On Dec. 22, 2017, 17-year-old Nicholas Giampa shot and killed his girlfriend’s parents in Reston, Virginia. They had forbidden their daughter from associating with him because of his right-wing extremist worldview. Giampa is open about both his admiration for James Mason and his membership in Atomwaffen Division. After the two killings, he shot himself as well, but survived.
Killer Giampa
The most recent murder took place not even a month later and the investigation into the incident is ongoing. Reporters from DER SPIEGEL were able to speak with police officials in Lake Forest, California, where the killing took place, in addition to the mother of the victim. We were also able to examine the private chat messages sent between the victim and friends, allowing a detailed reconstruction of the crime.
When the rest of his fellow Atomwaffen Division members learned that Samuel Woodward had been arrested for the murder of a homosexual Jew, they began celebrating his crime, referring to him as a “gay Jew wrecking crew.” For the beginning of the trial, they even had T-shirts printed with Woodward’s image, complete with a swastika on his forehead.
Atomwaffen Division is not a group of online trolls who spread derogatory images and graphics on the internet. Rather, members share their propaganda within their own social media bubble and secret communication forums. DER SPIEGEL has gained exclusive access to internal chats from the group.
Inside Atomwaffen
Those chats quickly make it clear that the group doesn’t just have it in for homosexuals, Jews and blacks. Members also glorify all manner of right-wing extremist terror, along with mass murderers like Timothy McVeigh, Dylann Roof and the Norwegian Anders Breivik.
Letter from Theodore Kaczynski
The group is also pen pals with the three-time murderer Theodore Kaczynski, better known as the Unabomber. They have set up a thread to discuss among themselves what questions they next want to ask of the imprisoned Kaczynski.
Yet interspersed in the discussions focused on their idols, National Socialism and violent video games, sentences such as the following can be found: “Carpetbomb your local refugee center;” “Bombing police stations is artistic expression;” and “I want to bomb a federal building.”
Bomb-building instructions
It is difficult to assess whether the online posing is an immediate precursor to concrete attacks. Members share links to archives, including hundreds of documents listing the preparations necessary for armed battle and terrorist attacks. Among them are handbooks that describe in detail how to carry out attacks on power plants, electricity grids and highway bridges – and dozens of instructions for building pipe bombs, car bombs and nail bombs along with directions for manufacturing delay detonators and powerful explosives out of household items.
A Broad Swath of Hate
But Atomwaffen Division doesn’t just glorify right-wing extremist terror: Taken together, their chat messages convey a rather confusing picture. Members post images of people who have been beheaded or murdered in other ways, including execution videos made by Islamic State. They also share extremist interpretations of Koran verses. In one posting, the school shooting at Columbine High School was referred to as “a perfect act of revolt.”
The group is also extremely misogynistic. Members refer to women as “egotistical sociopaths that have no worth,” and as “whores” and “property.” One member writes that “every rape” is “deserved.” “I wouldn’t even CALL it rape,” writes another. Pedophilia isn’t a taboo either. “She bleeds she breeds” and “birth is consent” are just a couple of many such examples.
It is, in short, a broad swath of hate, from National Socialism to child abuse to Islamic State. So, how does it all fit together?
The Hate Network
At some point, it was no longer enough for Atomwaffen Division members to simply read “Siege.” They wanted to meet its author in person. And in 2017, allegedly after searching for him for years, they tracked down James Mason, who had gone into hiding. A friendship developed along with a business relationship. By then, the marginally successful phase of Mason’s Nazi career had long since passed and he was solidly on the path toward complete insignificance. But the young Nazis from Atomwaffen Division set out to advance him into the digital era. They brought out Mason’s dusty Nazi propaganda and repackaged it under the label Siege Culture. Atomwaffen Division then began publishing his articles on a new website and also recorded podcasts with him. But the focus of Siege Culture is squarely on Mason’s books. John Cameron Denton, the group’s leader, claims to own the rights to the books.
Denton: “James Mason passed the torch on to us.”
Mason with Atomwaffen Division members (left), Denton visiting Mason (right)
Members of Atomwaffen Division take care of layout, promotion and sales. Five books are currently available, including a reissue of “Siege” and an even more bizarre collection of writings in which Mason claims that both Adolf Hitler and Charles Manson are reincarnations of Jesus Christ. Seven additional books are planned. They are printed and sold using Amazon’s self-publishing platform, CreateSpace.
On a recent Sunday morning at 8:25, the man to whom young Nazis flock is shuffling down East Colfax Avenue in Denver. He makes his way past the park that is home to several homeless people and down the street to a bus stop, where he picks a waiting spot a few steps away from the others. He appears to be in a good mood. What is he thinking about? What does this man, so full of violence and hatred, have to say? James Mason doesn’t speak with journalists and has lived in hiding for more than 10 years. But he is happy to talk to an interested tourist from Germany. The following interview was conducted with a hidden camera:
For Atomwaffen Division, the cooperation with James Mason is important primarily because of the recognition it brings within the scene. It helps the group attract violence-prone young men, and not just in the U.S. The cult surrounding Mason’s “Siege” has produced a global network of fanatics. For some, contact is limited to the internet, but others travel across the globe to meet their comrades. And new chapters of Atomwaffen Division have recently begun springing up. A few examples:
Atomwaffen Division now acts as a global gathering point for violence-prone young men, and James Mason is their inspiration. These young men promote a barbarous worldview and want to be as extreme as possible. Rhetorically, the Nazis have already reached an acute level of zealotry. The only thing left is translating that hate into action.
“Many of you must step up your existential apartheid game,” one member wrote in a chat at the end of July. “The internet can only give you pointers, not experience.”
Authors, Camera, Video Editing: Alexander Epp, Roman Höfner

Yog Sothoth walks fluidly through city streets, moving against the constant stream of business dealings. . .artfully finding his path through the bustling bodies of the postmodern Metropolis.
He feels each being as it passes, hidden or forgotten. . .he knows their cycles and habits.

Most people are blind to his beauty, although some sense strangeness; the otherness is sometimes perceived with a fleeting thought or body sensation.

To the few, he is recognizable, even before the mind has a chance to remember.

He is the wind and light, controlled power with the strength of mountains and streams.
The giver, he pushes in; feeling all, remembering all, being all, encompassing all. . .phenomenal and non phenomenal alike.

He has found the balance it takes to be in all space and all time, venturing in and out of the human realms at will.

Taking on his preferred forms: bubbles, oaks, particles, gas, and fauns… he journeys.

I call out his name. . .my god, my king, my magician.
Directly in my eye he looks, the love and wisdom of a thousand eternities, moving both forward and backward, up and down; all dimensions are linked within his eyes and flow through his earthly veins.

Immortal and mortal.
Blood moves with the loud whispers of chanting and beating of hands on flesh.

All pulses. . .not just with life, but with death, with time and space, beyond anything a machine can comprehend with our gray minds and categorizing intellect.

Everything pulses, everything is constantly shifting within the space he inhabits. This beating creates the vibrations that radiate out of him, speaking for him when words make no sense.

His tentacles penetrate deeply, sexually, visually, audibly.

With his sounds, he invokes the ancient ways, thus allowing beings to shed themselves, fully to his ways.

He moves with purpose, each movement a calculated ritual of interaction. Speaking to all: living, dead and in between. Rough and smooth, loving and demanding, encompassing all dualities, holding them with tenderness and authority.

An eternity of shifting and changing has created the ultimate flexibility in muscles, bones, dimensions, and space. All flows in and out, like the ocean waves set both to fast forward and rewind, moving in both directions at the same time.

Yog Sothoth does not promise peace. Comfort and contentment are not within his realm. He encompasses the greatness and forceful oneness of times past and future, the sacred spaces in between ALL.

He holds all that is ancient, all that has existed, all that will ever exist.
Stirring inside with the energy of solar systems and galaxies beyond mortal comprehension.

This is not peace. It is an all encompassing treasure that lives in electricity and true emotions. It lives in struggle and anticipation, it lives in Work and opening.

After innumerable experiments and eons, he has found a way to penetrate and voyage through all universes. He holds this information dear, revealing his secrets to the few who devote themselves with sacrifice and blood…
Working always to understand with their beings the nature of time and the elements.

He demands servitude. And he guards well, with the invisible spears and daggers at the ready, with the iron-clad gates and poison-tipped leaves.

He holds his knowledge back, waiting for the seekers to show themselves.
Waiting for them to ask, to beg, to bleed.

He is visible in the places of this earth, gliding through these earthly spaces like water. Moving undisturbed through human artifice, moving between the spaces like smoke.

And we throw ourselves to him, not because he offers us peace, but because he straddles all realities, because he knows the gate.

To venture beyond is not for individual peace, but to Work for All.

And when he puts himself inside, he pushes with the force of all; all time, all past and present, all future. He holds me up to the edge, shows me the death below. He holds me there, alive and moving with all matter, holding all the matter and energy between us.

Some nights he pushes me over the steep edge, only to watch me flail and soar. He gathers me with kisses, then blesses me with another beating.
My god, my king.
He holds all, he knows all.
He gives all.
