Tuesday, December 30, 2008

Whither the Planet? Whither the Species?--An Introduction

Over the past few decades the mighty social engines of democracy and capitalism have lifted hundreds of millions of people out of lives of poverty and hunger. This has been most marked in Asia and Africa, but has been evident in Europe and the Americas as well. At the same time these engines have been responsible for summoning the apocalypse by feeding the twin daemons of overpopulation and technological suicide. How does this vast and complicated game sort out? What is to become of us and our only planet?

We know, for example, that--thanks to sophisticated modern farming (genetic seed modification, industrial scaling of food production and distribution, and the international spreading of financial risk through commodities trading)--there is enough food produced in the world to feed the entire human population; yet because of political and financial greed, a billion people go to bed hungry every night. We know that through the bounty of sunlight, the internal heat of the Earth, and the winds and tides, there is enough energy available--helped by the wisdom of ecology and of efficient conservation--to satisfy the energy needs of humanity a hundred times over; yet we exhaust our fossil resources, destroy our planet's beautiful and valuable diversity, and poison--perhaps irretrievably--the very air we breathe, the water we drink, and the soil that nourishes us. We know that through the miracles of science and technology, the major ills of humanity can be cured, our every travel and communication whim can be satisfied, and staggering questions about who we are and how the world works can be answered; yet those same miraculous adventures, science and technology, stand ready to mutilate, poison, burn, and irradiate catastrophically--if not to annihilate--our entire human species and our biological brethren as well.

So there is good news, and there is bad news. We have unprecedented power to manipulate and control the sources of human wealth and happiness. At the same time we must show--for our very survival--wisdom and restraint, long-range planning and philosophical perspective. Yet our society's development seems characterized by nearsightedness and hedonistic exuberance.

The signs are clear. There are barbarians at the gate. The tower guards are sounding a furious alarm. We must rise from our slumber; we must hear and heed the calls to action--now, before it is too late.

Monday, December 29, 2008

Defining Health

Is health the absence of disease, the absence of symptoms, the absence of disability? That seems like a rather negative way to go about defining something that should be a positive, enhancing aspect of life. Yet that is essentially the approach of "modern" medicine (sometimes called "Western" or "allopathic" medicine). "Treatment" consists of fighting off the offending symptoms. If the patient has a fever, give an antipyretic drug such as aspirin to bring the fever down; if nasal congestion, a decongestant such as pseudoephedrine; if an infection, an antibiotic such as penicillin; if a headache, an analgesic ("pain killer") such as acetaminophen (Tylenol).

This seems at first blush like a sensible approach. It sees a diseased human as a smoothly functioning biological machine on which some discordant process has been imposed; counteract the process, and smooth functioning will be restored. But this approach has problems, both theoretically and clinically.

From a theoretical standpoint, it basically represents a reductionist view--that a healthy person is the sum of a lot of healthy parts; if one part is disrupted, one can simply restore that part to smooth functioning and the overall person is restored to health. But the "parts" of a human being are too complicated and interdependent for this view to be workable. For example, a fever may be one of the body's orchestrated suite of ways to fight an infection: there are very few infective bacteria that can reproduce if the body has a few degrees of fever, say a core temperature of 103 degrees Fahrenheit. Or a fever can result from a systemic inflammatory response such as rheumatoid arthritis; from an environmental imbalance such as a hot room with sweating inhibited by dehydration; from a poison; or even from a cancer or other causes. So treating a fever with an antipyretic may be at best irrelevant and at worst an interference with the body's normal, healthy coping mechanisms.

For another example, treating a bacterial infection with a chemical that kills the bacteria such as an antibiotic like penicillin may, at first, seem like an obvious approach. But there are a couple of things wrong with this theoretically--and these are borne out clinically. First, an antibiotic doesn't just kill the invading organisms, it kills other bacteria as well; and a healthy human lives in dynamic equilibrium with many kinds of bacteria--most classically with the ones that make vitamin K in our intestines, but in fact with many different kinds of bacteria that interact with and are in balance with many body processes and systems. So, giving an antibiotic does not usually just kill off the offending organisms, it decimates the body's normal, health-promoting bacterial friends as well.

Second, when a bacterial species is exposed to an antibiotic, its metabolic and genetic selection processes go to work to outwit that antibiotic. Since bacteria reproduce and evolve very rapidly, over a matter of hours and days the infecting pathogen is likely to produce resistant strains so that the antibiotic is ineffective and the infective disruption of the body's processes can get back under way.

Rather than taking such a reductionist or "separate parts" view, it is wiser, from a theoretical standpoint, to see a human being or a human life as a wholistic* orchestration of a lot of different systems, subsystems, and processes on a lot of different levels. After considerable thought and observation (including clinical experience), some people have characterized the most abstract or generalized levels as these four--

physical--this is the "lowest" (least important) level; it refers to the body's machinery: from locomotion to digestion, from vision to reproduction

emotional--this is the next higher level; it refers to how one experiences life: from happiness, satisfaction, and love to boredom, depression, and hate

mental--above and more important than the "emotional" level is the "mental" level; this refers to the clarity and alacrity of cognitive functions such as memory and problem solving

spiritual--this highest, most important, level refers to the sense of purpose in life, the sense of belonging, of having meaningful goals and moving toward them

Note: Many people would feel intuitively that the "emotional" level is more important than the "mental" level--in other words, that feeling love, happiness, joy--even sadness and anger--is more important than mental clarity and sureness of memory. But this is not borne out by careful observation of "disease" and how it is experienced by the patient, nor of healing processes. Some would even say that the "physical" level ranks high--that physical pain, for example, can distract you from a full emotional life and from mental clarity. But clinical observation ranks these four levels as I have listed them, with the "spiritual" level highest and the "physical" level lowest, less important than the "emotional" or "mental."

The definition of health, then, from a wholistic perspective, is that all these levels are working well and are integrated smoothly so that the human being is physically able and active, is emotionally happy, has mental clarity, and experiences a sense of belonging and of moving toward meaningful goals.

This seems like a broader, more positive definition of "health" than "the absence of medical signs and symptoms." It is also in consonance with a range of observations of the human experience and has significant implications for healing systems--as we shall see in further essays.

* Note: Although this word is more commonly spelled "holistic," I prefer the equally correct "wholistic" because both semantically and etymologically the word is related to "whole" rather than "hole" or "holy."

Sunday, December 28, 2008

Where Do Symptoms Go To Die?

Western or allopathic medicine represents a powerful perspective on health and healing. It provides a viewpoint that is logical and objective--in a word, scientific. It has been carefully built over centuries; it has been hard won from the grip of magical and religious thinking. It dictates that we observe illnesses and their comings and goings carefully, dispassionately, with the questions ever uppermost, "What was the cause?" and "What might be the cure?" It calls dedicated and compassionate people to its study; it aims to expand our understandings and to lighten the burdens of human suffering.

But Western or allopathic medicine--what we have been taught is "normal," "usual" medical thinking; what we often consider ALL THERE IS to health and healing--actually has some significant limitations. Conceptually, it is reductionist or atomistic: it is built on the perspective that the human body is the sum of many sub-systems and parts. There is a circulatory system of blood and lymph "plumbing" that delivers nutrients to and removes waste from the various parts of the body; in addition, there is a nervous system--comprising the "central" nervous system (the brain and spinal cord) and its "peripheral" connections (the nerves and sensory organs)--that collects information about the environment and about the workings of the body parts, and directs the body's activities; further, there is a musculo-skeletal system made up of bones and muscles that physically support and protect the body and assure both the physical interrelationships among the parts and also the body's overall locomotion. Conceptually, there are many other body systems as well.

But a human being is more than the sum of all these systems; a living, functioning, loving, and hurting person is a wholistic* expression of these parts: their "super-summation," their integration and their coordinated effects on themselves and on other human beings and the surrounding world.

One of the results of this conceptual limitation is that Western medicine fails to appreciate that some aspects of being a human being are, frankly, more important than others. "Normal" medical thinking tends to focus on physical attributes more than on emotions, mental clarity, or spiritual satisfactions. In fact, these higher planes of human experience have a definite hierarchical relationship with one another. Most people would rather feel happiness and love (emotional health) than sacrifice these in order to be relieved of physical pains and limitations (physical health). Similarly, people often experience that mental clarity--faithfulness of memory and accuracy of understanding--takes precedence over diffuse emotional "warmth." And surely spiritual health--that is, a sense of meaningfulness and purpose in life and of connection with treasured, long-term goals--trumps all the other levels of "health."

This conceptual limitation of our usual medical thinking--its lack of a wholistic and hierarchical view--would not be particularly important if it did not have significant clinical effects. But careful observations indicate that it does. For example, medical treatments commonly have "side effects." Often these are trivial--a dry mouth, transient drowsiness, or loose bowels; certainly not as important or as imposing on one's overall sense of well-being as the original symptoms the medicine was prescribed to counteract. But sometimes the "side effects" are truly annoying, even debilitating or dangerous--for example, a headache, fainting spells, or intestinal bleeding. When the symptoms being treated are "traded in" for other significant symptoms, this is referred to as "symptom substitution." But without a wholistic or hierarchical perspective, there is rarely an appreciation for the extent to which interlocking body systems may be damaged when one system--with one set of symptoms--is singled out for "treatment," that is, for symptom suppression. It is rarely recognized how interfering atomistically with one or another of the body's functions--perhaps of the body's defenses or reactions to insult** or imbalances--can lead to broader decline in well-being. A physical symptom may be removed, but "disease" on the emotional, mental, or even spiritual planes may emerge and, when we use careful (wholistically sensitive) clinical observation, often does.

Where do symptoms, when treated (or suppressed) allopathically, go to die? Often they do not simply disappear; they spread to different body systems, where they become different symptoms--and often they reemerge more distressing and debilitating than they started.

Notes: * Although the more commonly accepted spelling of this word is "holistic," I prefer "wholistic" with a "w" because, after all, we are talking about "wholes" not "holes" (or "holy").

** In medicine, an "insult" is a shock or challenge to the body, such as a physical injury or an overdose of a drug.

Saturday, December 27, 2008

Paying for Health Care

Health-care costs in the U.S. have increased over the past couple of decades faster than the rate of inflation or growth of the gross domestic product (GDP)*. In other words, the U.S. is going bankrupt paying for health care. In 2007 health-care spending increased by 6.9%, twice the rate of inflation. The $2.3 trillion spent on health care in 2007 represented 16% of GDP; by 2016 this is expected to rise to $4.2 trillion, which will be 20% of GDP. The U.S. spends a higher proportion of its wealth on health care than any other industrialized nation; Switzerland, Germany, Canada, and France, for example, spend between 9.5 and 10.9% of their GDPs on health care--and those countries have universal health-care insurance coverage; in the U.S. some 46 million citizens have no health-care insurance. Moreover, there is no evidence that health-care services received in the U.S. are of better quality than in these other nations; by most measures (such as chronic disease rates and death rates) they are worse.

This lamentable situation has been attributed to inefficiencies, excessive administrative expenses, inflated prices, poor management, waste, and fraud. In other words, it is due to the federal government's misplaced passion, since Ronald Reagan, for deregulation. And as the medical resources of the country--the doctors, hospitals, pharmacies, pharmaceutical suppliers, health insurance companies, etc.--have been pretty much given their head, trusting that free-market economics with a dash of ethics and common sense would inexorably set things aright, the same thing has happened that we witnessed so graphically in the financial sector--things have run enthusiastically and irresponsibly amok; costs are up and quality of services is down.

The cure is obvious though it is not easy. The U.S. government needs to develop transparent and robust regulatory controls. This would start with the development of a universal health-care insurance system for all citizens (just as other industrialized nations have). Tens of billions of dollars would be saved by fair and consistent pricing of medical goods and services. Tens of billions more by accepting generic drugs**--the Veterans Administration and many foreign governments already do; Medicare does not. And further tens of billions of dollars by developing and installing a uniform, computerized medical records system*** worthy of our information age.

The problem is severe. The cure is clear. The missing ingredient is the political will.

There are two problems with health care, however, that this does not address. One is the incongruity, discussed in prior essays, between the flawed reductionistic approach of usual, Western (allopathic) medicine and a more integrated, wholistic perspective. This can only be addressed by education, public awareness, and gradual cultural paradigm shifts. For now and the foreseeable future we are stuck with the fragmented, "body parts" viewpoint of allopathy--with a dash of acupuncture, homeopathy, chiropractic, and nutritional approaches gratuitously thrown in.

There does seem to be a growing emphasis on "wellness" rather than on "illness," on maintenance of high-level health (through health education, healthful nutrition, and lifestyle changes) and thus on "prevention" of health problems rather than on "cure." This may go a long way towards solving this first problem.

The other unsolved--perhaps unsolvable--problem is the "myth of infinite entitlement." In theory, every single individual is entitled to "the best" medical care that money can buy--be it organ transplants, genetic manipulations, or other expensive therapeutic interventions. In fact, our society cannot afford that kind of care for every citizen. We have to turn poor folks away from some doors that are open to rich folks. There is no philosophic rationale for this; it is simple pragmatism: whether we like it or not, "ordinary people" can get just a certain amount of medical care and no more while rich folks can take advantage of every modern medical miracle.

This second problem has been true for only the past 50 to 75 years. Prior to that the "best" medical care was not so sophisticated, technological, and specialized that ordinary folks could not afford it. But increasingly in recent years the "best" medical care is simply too expensive for anyone but the very wealthy.

Notes: * Gross domestic product, or GDP, is the broadest measure of a country's economy; it amounts to the total market value of all the goods and services produced in a country during a certain period.

** Generic drugs are the same chemicals as non-generic drugs, but they are marketed without the advertising, the gifts to doctors, and the corporate profits that inflate the prices of "named" drugs from "named" companies. Arguments have been made that buying generic rather than proprietary drugs interferes with private research and development of new drugs; but such R&D funding should be done with academic, philanthropic, or governmental resources rather than by entrepreneurs and drug companies in search of private profits.

*** Although developing a huge, integrated, computerized medical-records system would cost many millions of dollars, that expense would be graciously, philanthropically borne--as Ross Perot pointed out--by any of a number of computer billionaires such as Perot himself or Bill Gates.

Friday, December 26, 2008


Philanthropy is a powerful force in our modern world. Why? Because people need people--and even the most powerful aggregates of people--governments and businesses--cannot always take care of people's needs.

Some few people live, survive--even thrive--alone, perhaps as hermits, monks, hunters and trappers following their callings in the wilderness, or the insane extruded from their homes and hearths. But most people live in groups, often in many levels of groups such as families (and extended families), communities, villages and towns, cities, regions (geographical or political), nations, even alliances of nations, and--these days--the international, global "community."

People live in groups because groups can provide services for the collective that individuals cannot provide for themselves. Defense is the first and most obvious of these. A large group can, if necessary, forcefully overcome an individual or a smaller group; and the bigger a group is, the better it can afford bigger and better weapons, and more highly trained police and military forces.

Another reason people aggregate into groups is to share specialized skills. Although a farmer or hunter/gatherer with good health and skills might survive alone, an iron smith or carpenter just can't go it alone (much less an accountant or a computer programmer).

Benjamin Franklin invented and promoted several kinds of shared, community resources: a post office system, a public library system, a patent system (so people could share and profit from their inventive ideas), and a public fire department (most prior fire services were private, like the Roman companies that rushed to the scene of a burning house and gave the owner a choice between selling it at a cut-throat price and watching it burn to the ground).

Banking, welfare, law and order--these are other traditional services better provided by a community than by an individual. Even education and health care may work better if they are "massified" (to borrow a term from Alvin Toffler).

In addition to government programs, private businesses often satisfy group needs. For example, news organizations with newspaper, radio, and TV outlets can afford to send reporters and news-gathering equipment and resources around the world (although in recent years these news-gathering and reporting functions have been increasingly taken over by the Internet). Transportation, communications, and shipping have also commonly been done not by individuals for their own needs, but by private businesses (or governments) for groups.

But there are always circumstances in which some unmet human needs fall between the cracks of government and private businesses. And this has called for the development of the third major way that resources are communally reallocated: philanthropy. The beggar by the side of the road presents a picture of a traditional human activity that is older than history itself. Religious- and state-run orphanages and poor houses also date from far back in antiquity. But the complexities of the modern world have spawned ever more grand and sophisticated philanthropic endeavors. The Red Cross and Red Crescent and hundreds of other beneficent, non-governmental organizations (NGOs) attend to overwhelmed, suffering, and forgotten peoples around the world.

Why do philanthropists give? John D. Rockefeller made a fortune developing the petroleum industry in the last few decades of the 19th century. He was the first American billionaire and the richest man in the world--he is sometimes regarded as the richest person in history. And he asked himself, what does one do when one has amassed enormous wealth? His conclusion: one gives it away--carefully, thoughtfully, supporting social activities and causes that cannot be funded in any other way. Rockefeller developed the philanthropic foundation system that has been used by many wealthy people since Rockefeller's day. Through his philanthropic foundation he had a major effect on medicine, education, and scientific research.

Henry Ford of automobile-manufacturing fame--a pioneer of assembly-line mass production--was another very wealthy man, a contemporary of Rockefeller, who turned his wealth toward philanthropy.

Consider this scenario: In 1961 India was on the brink of mass famine. But during the previous decade and a half Norman Borlaug's work developing genetically superior varieties of wheat had rescued Mexico from a similar disaster--in 1943, when Borlaug began his work there, Mexico imported half of its wheat and was facing national famine and economic collapse; by 1964 Mexico exported half a million tons of wheat annually. India needed Borlaug's help, but the government was politically prevented from soliciting it and funding his activities. The Rockefeller and Ford Foundations stepped in where government could not, and private enterprise would not, tread. They funded programs of agricultural research, extension, and infrastructure development. Over the next decade India's rice yields rose from two tons per acre to ten tons per acre; millions of people were saved from starvation.

Or this scenario: Bill Gates, the Microsoft billionaire, and his wife Melinda, through their foundation, responded to a tragic flaw in the health-care systems for the impoverished masses of Africa. Western pharmaceutical companies cannot afford to research or produce drugs for which there is no paying population. Furthermore, although in wealthy Western nations diseases like malaria and tuberculosis can be cured--even the symptoms and progression of AIDS can be significantly ameliorated--these diseases debilitate and kill millions in the less affluent developing world. The philanthropic work of the Bill and Melinda Gates Foundation funding health care in (and pharmaceutical research for) developing nations has saved millions of lives.

Philanthropy plays an important part in providing for needs to which government and business activities cannot--or do not--attend.

Water, Water Everywhere

The largest mass poisoning in history is under way right now in Bangladesh--and it is man-made; in fact, it is the result of a massive, misguided, philanthropic intervention by the United Nations, specifically the United Nations Children's Fund (UNICEF), and the World Bank along with the World Health Organization (WHO). The scope of the disaster exceeds the radioactive contamination and deaths caused by Chernobyl, the world's worst nuclear power plant catastrophe, and by most of the major wars and infectious epidemics of history.

Although the area now called Bangladesh, the homeland of the Bengali people, has been the home of evolving civilizations for more than 4,000 years, as a discrete political entity it was first partitioned from India in 1947, becoming East Pakistan. Religious and cultural unrest led to the bloody India-Pakistan War of 1971, culminating in the establishment of the independent country of Bangladesh.

A densely populated and impoverished nation, Bangladesh is flooded yearly by devastating monsoon rains. Its economy was once dependent on the export of jute plant fibers, but these have now largely been replaced worldwide by plastics. It now depends on the cultivation of rice and the production of cloth for export. The embryonic democratic government has been plagued by corruption, bureaucracy, and inefficiency.

Prior to 1970, despite limited water purification and poor sanitation, most of the population of Bangladesh depended on the surface waters of rivers and lakes for drinking. Epidemics of cholera and other infectious diseases from contaminated drinking water were frequent; with no national or other effective health services, the death rates--for example, of children--were some of the highest in the world.

But the geographic area of Bangladesh lies over a vast, alluvial water table where clear waters seep slowly, for millennia, from the Himalayan spring run-offs and monsoon rains towards the sea. And in 1971, principally under the impetus of UNICEF with the help of the World Bank, the international community embarked on an enormous project to fund the drilling of more than a million tube wells throughout Bangladesh to tap this water table and bring water which was free of bacterial contamination to tens of thousands of impoverished villages. Local populations were employed to drill the wells, and a widespread publicity campaign was implemented to introduce local peoples to this new source of clear water.

Local myths called the deep flows "devil's water," and some local legends held the deep water to be subtly poisonous. But the drilling and publicity campaigns proceeded.

Arsenic poisoning is indeed subtle, and chronic. It may take years--even a decade or more--for the toxin to build up in the human system and wreak its disastrous effects. Through the 1980s more and more cases of skin sores and cancers, of internal organ failures, and of neurological deterioration emerged. When the sufferers were seen at modern clinics and hospitals, the diagnosis of arsenic poisoning could be made. But health resources in Bangladesh, both for diagnosis and for treatment, were few and far between.

Scientists first found toxic levels of arsenic in Bangladesh drinking water in the late 1980s. These findings were first published in the scientific literature in 1990. Yet official recognition of the problem was not announced until 1993. And a significant country-wide evaluation project was not initiated until 1998. Due to bureaucratic inefficiencies, by 2001 only about one percent of the tube wells in Bangladesh had been tested for arsenic contamination. But evidence was mounting that the problem was widespread and severe.

It is now estimated that some 50 to 75 million people--up to half the population of Bangladesh--are suffering or in danger from arsenic contamination of tube-well water. And the vast majority of these people are beyond the reach of medical diagnostic or treatment services.

People ask, why wasn't the deep water tested for arsenic early on? It was well known that a lot of ground water in South-East Asia is contaminated with arsenic. And why weren't early signs heeded with alarm, and responded to with concern and urgency? And why, even now, does it seem so difficult to mobilize effective remediation--testing widely, and then sealing off wells that are contaminated; drilling deeper--or in some cases shallower--wells to access uncontaminated reservoirs; and getting effective diagnosis and treatment to these suffering and endangered poor?

Philanthropy can be a powerful force for good. And yet it must be administered very thoughtfully, very carefully.

U. S. Education

Education is the foundation of a nation's--

HEALTH, not only of its individual citizens but also of its broader institutions and of itself as a collective,

WEALTH, in our modern, post-agrarian and post-mineral-resources world, and

STRENGTH, in facing a rapidly evolving international political environment.

The U.S. education system has had a strong, storied past.

The first half of the 20th century was the time of the "high school movement" in U.S. education (actually more precisely considered 1910 to 1940). There were a great many high schools built throughout the U.S.; they were under the control of local school boards; they were funded by taxes--local, state, and federal. They were open to all and were very forgiving of the inevitable transgressions of adolescents. Their focus was academic (rather than vocational) but they were seen as "preparation for life" rather than explicitly for college.

In Europe and throughout the world, other industrialized countries did not follow this trend until the later decades. In the U.K., for example (but also throughout most of Europe), education for teenagers during this period was relatively closed to the general populace, was centrally run by the national government with uniform standards, was notoriously unforgiving, and, though it was explicitly academic for some youngsters (in preparation for college), it was designed to be trade training or preparation for industrial labor for many others.

After the Second World War a different emphasis emerged in U. S. education. The remarkable educational benefits awarded to veterans under the G. I. Bill created a wave of post-secondary education. This continued through the Korean and Vietnam Wars so that by 1975 over 16 million returning vets had used their G. I. Bill benefits to attend college. The period from 1945 to 1975 might rightly be called the "college movement" in U. S. education.

The G. I. Bill is often considered the most important single factor responsible for creating the enormous growth of the middle class in the U.S. during this period--people who owned their own homes, who had families and individual jobs (they were not hired and fired en masse), and who expected their children to finish high school if not college and to do better than their parents in pursuing the American Dream.

The last quarter of the 20th century represents yet another phase in the history of U. S. education. During this period the U.S. reigned supreme as the "graduate school of the world." In 2000 some 25% of transnational higher-education students were studying in the U.S. with the proportion tipped significantly higher than 25% in post-graduate studies.

During the last decade of the 20th century some flaws appeared in the U.S. education mystique. Elementary and high school students in Japan and several European countries were found, on standardized tests, to be outperforming U.S. school children. Then, Bush's "No Child Left Behind" initiative of 2001 largely backfired, for three reasons. First, lack of funding: although the federal government mandated standardized school-achievement goals, federal funds were not provided to help schools advance toward these goals. Second, in order to meet these standards, schools widely began to "teach to the tests," that is, to shape their curricula specifically to the areas and the kinds of questions the federal tests required. This led to less diverse curricula and less flexibility in responding to teachers' and students' personalities, special interests, and strengths. And third, the Bush administration's anti-science stance (in appointments, policies, and legislation) further undermined educational efforts.

President Obama has emphasized the importance of education--pre-school through graduate school--in strengthening the U.S.'s place in the world and in assuring our continued leadership not only in science and engineering, but in all aspects of civilized life. Let us hope and pray he is successful in realigning the U.S. with high educational goals, and let us all work with him arduously to achieve them.

Freud's Psychoanalysis

A hundred years ago, during the first decade of the 20th century, an intellectual storm arose in Vienna, Austria, and spread outward through Europe and the Western World. This started in 1900 when Sigmund Freud, a little-known, Jewish-Austrian neurologist, published The Interpretation of Dreams. In this book he proposed that each of us keeps a lot of our own thoughts away from our conscious awareness because we regard those thoughts as morally unacceptable and, most importantly, that this requires effort--it consumes mental energy. In other words, thoughts and feelings that well up naturally but are contrary to accepted societal norms "cost" some psychological energy to suppress. Freud believed--and he reported clinical cases to support his argument--that neurotic symptoms such as anxiety and depression are the result of this "cost." Anxiety is a diffuse but busy fearfulness that the forbidden thoughts might rise into consciousness--a kind of mental standing guard to squelch offensive ideas. Depression is mental exhaustion from the effort of psychological suppression.

Freud made several significant contributions to Western culture in addition to the "psychoanalytic" formulation of psychological dynamics outlined above. Perhaps foremost, Freud introduced the concept--now universally accepted--of "talking therapy," that simply "talking things through" could help one feel better. In his approach called "psychoanalysis," under the observation of a trained "psychoanalyst," the patient would "free associate," that is, report whatever came to mind. Through this process the patient would have "insights" and discover--with the therapist's help--patterns of personal self-deception. Then, armed with these "insights" and discoveries, the patient would be able to make changes in personal behavior and emotional responses and live a happier, more productive, non-neurotic life.

Freud did indeed make significant contributions to psychological theory and to the practice of psychotherapy. But perhaps Freud's greatest contributions to Western culture came because of his showmanship, his flair for drama. He coined a series of colorful terms--ego, superego, Oedipus complex, Electra complex, etc.--based on classical scholarship and Greek mythology. He built a picture of the mind divided against itself, at war with itself. He linked this to fantasies of sex and death, two of the most exciting taboos of Victorian morality. And he traveled and lectured widely to garner public acceptance of his ideas.

Freud spent several decades "selling" psychoanalysis. By the 1930s and 1940s "psychoanalysis" was a popular intellectual buzzword. Every intelligent (and reasonably well-to-do) member of society had been "in" psychoanalysis. It became a dominant perspective for discussing not only mental illness and emotional discomfort, but also music, theater, poetry, art, and literature, and even history, politics, business, and other pursuits.

Psychoanalysis changed both how artists work and how critics respond. Not only, for instance, was the "psychological novel" born--and with it, the only narrative perspective we can now consider "realistic"--but it has also become impossible to read Shakespeare, Chaucer, Arthur Conan Doyle, or Milton from the same perspective as those authors' works were written--in fact, we can't even "think" in non-psychological terms now: Hamlet, Lear, and Iago; the Pardoner and the Wife of Bath; Sherlock Holmes; and even Satan were forever recast as "case studies." So, in a nutshell, Freud rewrote the past and redirected the future. Today, even the likes of Superman have complex, tortured souls. Since millions more people are exposed to Superman--and even to Shakespeare--than are exposed to Freud's own works directly, there has clearly been a "ripple effect" throughout the culture. No one today would question, even for one moment, the reality of the "psycho-killer" personality, even though, fortunately, almost none of us have met such a person face to face. And those types we do meet--the obsessive-compulsive, the passive-aggressive, the enabler, the abuser and the abused, the person who is "in denial," and the over-compensator--even high school dropouts today know that these aren't abstract case studies: they are "us."

During the latter half of the 20th century, psychoanalysis branched and metamorphosed to take hold in a wide variety of intellectual circles. But it also came into significant dispute because, as a "therapy" or treatment for psychological disorders, it was notoriously expensive and often not very effective. In addition, the evolving neuroanatomical and neurophysiological discoveries of the late 20th century did not support Freud's theoretical formulations well. There seemed to be no identifiable brain areas that corresponded to "conscious" versus "unconscious," much less to an "ego," "id," or "superego"; nor brain processes that seemed to function like "repression," much less the more elaborate and specific defense mechanisms such as "denial" and "projection." Moreover, philosophers of science came to criticize Freud's ideas as unverifiable and therefore unscientific.

But Freud's overall contribution to Western culture was immense. Because of him the 20th century became "the age of psychology." Psychoanalysis became an important part of scholarly inquiry; it was widely discussed as part of a variety of intellectual debates. Moreover, although it faded in importance late in the 20th century, as fate would have it, discoveries in the neurosciences during the first decade of the 21st century have shown some signs of validating and resurrecting Freud's psychological theories. For example, Freud's notoriously controversial "discovery," the "unconscious," is once again a hot topic of study in the fields of experimental and social psychology (e.g., implicit attitude measures, fMRI and PET scans, studies of subliminal information transfer, and other indirect tests).

Thursday, December 25, 2008

The Essence of Psychotherapy

Sigmund Freud introduced the concept of "psychotherapy," which came to mean that by simply talking things through, one could feel and function better. Through his popular, worldwide lectures about his clinical experiences and psychological theories, Freud brought about a paradigm shift in thinking about mental illness and emotional distress. These were no longer seen as mysterious and inexplicable, perhaps as intrusions of devilish forces or the results of structural brain malformations. They came to be thought of as learned maladaptive patterns of mental activity--learned, for the most part, in early childhood so they were not experienced as "add-ons" amenable to change; but with the right kind of intensive psychological "work," they could be "cured."

During the first half of the 20th century as Freud's "psychoanalysis" became widely known (and revered), two practical objections to it emerged. First, it was terribly expensive and prolonged: a course of treatment typically cost many thousands of dollars and took many months if not years to complete. And second, often it turned out to be ineffective therapeutically; it was a wonderful technique for introspective self-study and for elucidating, for educational purposes, the complexities of mental function, but all too often the patient's original symptoms persisted or returned after psychoanalysis was completed.

As a result of these objections, a raft of alternative "psychotherapies" emerged. Wikipedia lists some 150 different kinds of "psychotherapy," over 100 of which are "talking therapies" that rely primarily on verbal interchange--the others use massage, music, dance, acting, aromas, special sounds, various kinds of meditation, etc.

One important branch of psychology, loosely derived from Pavlov's famous work on the bell-triggered salivation of dogs, was called "behavioral psychology." This claimed that emotional reactions were essentially learned, conditioned reflexes. The "behavior therapy" derived from this used, among other techniques, trained relaxation to reduce anxiety while the patient was exposed to increasingly anxiety-provoking stimuli such as pictures of snakes or of intimate social situations. Behavior therapy worked (and works) well in certain clinical situations, although suppression and substitution of symptoms were often a problem. (See my essay titled "Where Do Symptoms Go To Die?")

Another branch, the "cognitive therapies," essentially used the power of rational understanding of mental mechanisms and the symptoms they caused (based largely on Freud's theories) to gain mental leverage and release the patient from the grip of uncomfortable psychological symptoms. The so-called "cognitive therapies" also came in a variety of forms and had (and have) some successes.

Some other well-known systems of psychotherapy were Gestalt Therapy, Person-Centered Therapy, and Transactional Analysis. Each of these had powerful, charismatic founders and practitioners, and each had (and has) some successes.

Finally we should note the "group" therapies as part of the phenomenon of "talking" psychotherapies derived, directly or indirectly, from Freud's legacy. These included insight-oriented groups and devastatingly powerful "sensitivity" and "T" (for "training") groups, and such systematized derivatives as the "Square Games Club" of Synanon and the wave of enlightenment seminars such as est, Actualization, and Living Love.

Freud invented psychotherapy, the "talking" therapies, which provided the opportunity and theoretical framework for "insight," that is for understanding why one acted "neurotically" (i.e., foolishly)--and in recurrent (foolish) patterns. But insight alone was not enough to end neurotic symptoms without two additional factors: (1) the motivation to change, for example, due to emotionally painful embarrassment and (2) the hard work of practicing healthy patterns. Some alternate therapies derived from Freud's work provided these factors through charismatic leaders (which Freud would have called "transference"), special techniques (which could be seen as crassly manipulative), or group pressure (since we are all susceptible to the power of the opinions of our peers).

Insight or awareness of the problem; plus motivation to change; plus practice of healthy patterns: those are the three essential ingredients of an effective "psychotherapy." For example, I find myself in a brief, minor social situation; I feel inexplicably enraged; I feel puzzled, helpless, and foolish that this situation should upset me so; I struggle within myself to contain or undo the rage; then I feel anxious and depressed. As I think about this situation, perhaps by talking it through with a therapist, I realize that it is a recurrent pattern in my life: I have an automatic, competitive reaction even in minor encounters. I discover that I learned this reaction as a young child through oft-repeated interactions with my father in which I felt inadequate and humiliated. I understand, as an adult looking back at this, that my father was trying to teach me to be emotionally strong; it was an expression of his love for me. I decide that whenever this cycle of rage, anxiety, and depression overcomes me in the future, I will remind myself that it is because the situation (and person) in question reminds me of my father's misguided interaction with me and that, in truth, my father acted toward me the way he did because he loved me and was trying to help my emotional resilience, my emotional growth and maturation. I find, as I practice this insightful formulation and revised emotional reaction, that gradually, whenever I encounter a situation which might have enraged me, instead I feel relieved, joyous, and loved.

For another example, I notice that whenever I sit down at an ordinary meal, I overeat. This gradually makes me overweight, and I feel logy and unattractive and, as a result, socially isolated, lonely, anxious, and depressed. In discussing this pattern with a psychotherapist, I discover that when I was a child, my mother urged me to overeat because she felt anxious about my health. I understand, as an adult looking back on this, that my mother, with all her urging and psychological manipulations--including shouting at me and crying tearfully--was, in a warped way, expressing her love for me and her concern for my health. I decide that in the future whenever I am tempted to overeat, I will remind myself that it is because of my mother's absurd urgings which were actually an expression of her love for me, and that I can actually experience her love and concern for my health more if I do not overeat. As I practice this reformulation over a period of time, I come to feel relaxed and to eat healthfully, I attain a more normal weight, and I become more active in life and physically attractive in social situations.

This process of talking through to insight and practice is the essence of psychotherapy. It is the valuable legacy Freud bequeathed us.

Wednesday, December 24, 2008

Brain Glitches

Your brain is an enormously complex computer with 100 billion nerve cells of several different kinds, each cell with potentially hundreds or thousands of connections. The whole is organized--to borrow some concepts from the world of computers--into systems, routines, subroutines, and applications. This all starts out, shortly after conception, as a few cells with elaborate DNA plans. Then, over the next few years, these cells multiply and build, boot up, program, and then train and retrain the various parts of this fantastic computer.

Considering the enormous complexity of building, wiring, and programming the brain, it is no surprise that things can go wrong. In fact, almost always something or other does go wrong--usually lots of little things develop differently from the ways that our DNA has planned them. But one of the wonders of the brain and its engineering is that there is often redundancy--the brain has more than one section that can accomplish the same function; the brain has lots of checks and balances to correct or counter minor structural mistakes.

If the mistakes cannot be corrected and they are big ones, important to the functioning and survival of the organism, then the fetus dies in utero or the baby dies during the first few years of life.

That leaves the rest of us. Yes, each of us grows to adulthood with some of the many thousands of our brain functions not working as well as they should. Take, for example, the very elementary process of telling which way is left and which way is right. For most people learning this is quick and easy; they hardly notice that they have learned to tell this difference--they just always, instantly know. But for some people, this tiny, almost insignificant brain function never works--in fact, left-right confusion is actually quite common, affecting--in some degree of severity--about 10% of the general population. But most of these people learn "work-arounds," ways to help themselves know left from right by substituting other brain functions that do work. One man, when he needed to know which way was left and which was right, would take a quick pause from the conversation he was involved in--perhaps a quarter or half a second, not long enough to be noticed--and say to himself, "I write with my right"; then he would imagine writing and, sure enough, that was his right hand (or side or direction) and the other was his left. Another person with this problem, a woman, would quickly spin her wedding ring with her thumb, and that would alert her to which was her left hand, her left side.

A problem very similar to this can affect some children learning to read--these are children who cannot tell a "d" from a "b" or a "p" from a "q." You can see that this would be very debilitating in second- or third-grade reading classes. And this particular problem--of the thousands that can affect a developing brain and, of these, the dozens that can interfere with learning to read--is actually quite common. Moreover, it represents a very important initial chapter in the history of "dyslexia," that is, of the study of reading disorders and their treatment by "special education techniques," because of what was discovered: if children with this problem are given special practice books in which all the "p's" are one color, say red, and all the "q's" are a different color, say blue, the children can then readily tell the "p's" from the "q's." Most importantly, after a couple of weeks of practice, their ability to tell the difference between these two letters transfers to reading books in which all the letters are black.

Let me repeat for clarity and emphasis: a certain child cannot tell a "p" from a "q," but can tell a red letter from a blue letter, so they practice reading with red "p's" and blue "q's" for a while, and pretty soon they have learned to tell the "p's" from the "q's" even if they are the same color.

So here are the important ideas to remember about brain glitches:

First, we all have them--lots of them.

Second, sometimes they happen to be important in activities that society values--in reading, in talking, in spelling, in adding numbers, in finding our way around, in remembering names and faces, or in a thousand other ways.

Third, often the brain cleverly compensates (finds "work-arounds") so that these disabilities disappear into the fabric of our lives.

Fourth, however, sometimes a teacher with special training is needed to help a child (or an adult) analyze a specific deficit and find ways to overcome it.

Tuesday, December 23, 2008

The Limits of Perception

It is a warm autumn afternoon. I am sitting on a grassy hillside that slopes to the west, watching, on the distant horizon, the Sun begin to settle toward the ocean. There is a faint breeze, barely palpable against my cheek, but I can hear the trees behind me rustle from time to time. And there is the twitter of songbirds somewhere in the trees; and the whirr and buzz of occasional insects in the grass nearby. High above I see a red-tailed hawk circling slowly. I almost think there is the faint smell of flowers on the breeze, but perhaps it is only the warm fullness of the air, the caress of sunshine, the joy and peace of the moment.

It all fills up my senses. Surely there can be no more. And yet that hawk has vision five times more acute than mine; from high overhead it watches for the telltale quivers of a blade of grass that show it a mouse is passing--quivers I could not see if they were only ten feet away. Some of those insects have vision that extends far beyond mine in a different way, into the ultraviolet so that spider webs, pale and silky, draped among the tall grasses, light up like holiday parade banners to them. Nor do I feel the higher frequencies from the Sun that caress into life the tanning pigments of my skin or that claw at my skin's elastic tissue and sneak in their pre-cancerous attacks on my DNA. I cannot see that near the horizon the light from the Sun is polarized, although many birds and aquatic creatures navigate hundreds and thousands of miles using that polarization as a guiding beacon.

I do not hear some of the songbirds' artistry that trills and twitters too quickly for my ear to resolve it, or that soars into octaves that are too high in pitch for my ears to hear. I am not aware of the many tiny vibrations that insects in the grass are making to communicate food, aggression, sex, and the complexity of their worlds to other members of their species.

If I had a radio--or a suitable set of radios--I might become aware of some of the thousands of channels of words, music--even TV pictures and signals from car doors, gates, GPS locators, and the like--that course in and around and through me.

My senses of smell and taste are crude bludgeons compared to the rapier sensitivity of a dog's nose; how keenly and enthusiastically it samples and savors the faint breeze! Some pollinating insects can detect suitable flowers over a mile away. Some fish swarm toward an attractive taste in their water emanating from several miles away.

It is true that my sensory apparatus bombards me with information and fills me with delight, but it, in fact, notices barely one hundredth of one percent of the data flow around me.

Monday, December 22, 2008

Self-Image and Consciousness

The brain is for problem solving. By the term "problem solving," I mean something far more primitive and basic than "where did I leave the car keys?" or even "is that an itch I feel on my arm?" If one had to put it into words--and it is more primitive than words--it might be, "oops-dee-doo, keep the blood pressure up in that earlobe" or "there goes raindrop number 41,379,028."

Speaking of raindrops, there's a good example of what I mean by "problem solving." Say there are a million raindrops falling on and around you, but they're not raindrops until you "say" to yourself, "that's a raindrop" ("and that's another one, and another one..."). And "problem solving" is the process--all the steps and sortings and comparisons with prior data and all that--that the brain goes through to come to the solution, "yep, that's a raindrop all right."

Further examples of the brain's problem solving activities are those for identifying food (in other words, for distinguishing it from non-edible environment); for figuring out when it is time to (and how to) "pull up the covers" when the body feels cold; for scratching where it itches; etc. The brain represents an amazing array of adaptive, responsive, problem-solving tools--thought tools.

Some of these are relatively simple and automatic--pulling your finger back when you touch something hot; going into a "red alert" status when someone screams "HELP!"; waking up when someone calls your name. I say "relatively simple" and I should emphasize "relatively"; clearly these are complex, delicately orchestrated actions, but some of the problem-solving thought activities we mobilize are many layers, many dimensions of complexity greater. Consider the problem-solving thoughts that go into driving a car or holding an elementary conversation, much less into buying groceries or--the mind reels--buying a house or choosing a college.

How does the brain solve problems? First, it gathers sensory data and sorts them into useful, meaningful "information" by associating them with memories of earlier similar data. Then it postulates a conclusion or solution (which consists of a series of brain images). And then it imagines what steps might be taken to connect the input with the desired output. It is this intermediate process that leads to consciousness--imagining various steps and trying out this and that pathway (in one's imagination). Why does this lead to consciousness, that is, to awareness of oneself? Because it involves concocting an image of oneself trying out this and that, doing this and that. And that's what consciousness is--an image of yourself doing this and that. Or, in more highfalutin terminology, "self-awareness."

Of course, a lot of the brain's problem solving goes on outside of awareness--it is unconscious (or preconscious--which means you could pull it into consciousness if you bothered to). This is usually because the problem is too simple, or too routine, or the solving process is too quick for us to notice it much in consciousness. Consider that your mouth fills with saliva, so you swallow; or your eye feels dry or itchy, so you blink. Clearly these are rather complicated problem-solving activities, but you hardly notice them. But now consider some slightly higher-level problems: an arm starts to itch so you scratch it; or you step over a puddle while walking down a garden path. You have to try out scratching events in your (unconscious) imagination to hit the right spot on your arm with the right force and abrasive success; you have to coordinate some rather complicated muscle movements, even consider (in your unconscious imagination) jumping over the puddle or going around it. These problems require imagining yourself in action--creating a visual and kinesthetic image of yourself interacting with a 4-D image of the environment (three dimensions of space plus one dimension of time).

What about some even higher, more complex levels of problems--what to have for dinner, what to do this evening, who to call on the phone, and how to talk to them (and about what)? These require using the highest levels of brain generalizations and abstractions, levels we call "emotions." Emotions such as feeling love, hate, anger, or disgust--or, more moderately, feeling like or dislike, interested curiosity or annoyance--group together vast assortments of data; they give us tremendous power to manipulate these data in problem solving. And these complex levels of problem solving require manipulating complicated, multi-dimensional images of ourselves--images that include humor or gruffness, patience or hurriedness: all the elements of personality and personal style.

So it is out of these high-level problem-solving strategies that even the subtlest aspects of self-image and consciousness emerge.

Sunday, December 21, 2008

Dr. Taylor's Stroke

On the morning of Tuesday, December 10, 1996, Dr. Jill Bolte Taylor, a Ph.D. neuroanatomist working with the Harvard Medical School Department of Psychiatry, awoke with a headache--a severe headache, throbbing, behind her left eye. It was the result of a stroke; a blood vessel in her left temporal cortex had ruptured and the expanding blood clot was pressing on her brain.

Her experience of the stroke was remarkable. She found her internal mental business--the ongoing activity we all have that analyzes our incoming flood of sensory data, compares its details with past memories, and makes plans for how to react to it--would shut down for a while, then reawaken, then shut down again. During the periods when that analytical activity (which she came to think of as her left-brain functions) was out of operation, she experienced the flood of incoming sensory data as overwhelming and confusing, but enormously satisfying. She felt she was "in this present moment, right here, right now," "an energy being connected with all the energy of the universe," "perfect, whole, and beautiful." She looked at her arm, and "could no longer define the boundaries of my body"; the "molecules blended with the environment." She felt "enormous," "expansive," "like a genie just liberated from her bottle." Her "spirit soared free like a great whale gliding through the sea." She felt she had "found Nirvana."

This experience of cosmic consciousness (or Nirvana), which is the ultimate goal of meditation--of stilling the mind so that its chatter becomes quiet--has been reported many times. Paramahansa Yogananda wrote, "My body became immovably rooted; breath was drawn out of my lungs as if by some huge magnet. Soul and mind instantly lost their physical bondage and streamed out like a fluid-piercing light from my every pore. The flesh was as though dead; yet in my intense awareness I knew that never before had I been fully alive. My sense of identity was no longer narrowly confined to a body but embraced the circumambient atoms. . . . All objects within my panoramic gaze trembled and vibrated like quick motion pictures. My body, Master's, the pillared courtyard, the furniture and floor, the trees and sunshine, occasionally became violently agitated, until all melted into a luminescent sea; even as sugar crystals, thrown into a glass of water, dissolve after being shaken. The unifying light alternated with materializations of form, the metamorphoses revealing the law of cause and effect in creation. An oceanic joy broke upon calm endless shores of my soul."

Probably most people who gain this experience do not have the forum or the verbal skills to communicate it successfully to others. In fact, probably, since such arduous meditative discipline is very difficult, most people who experience this state do so, as Dr. Taylor did, as the result of brain injury or impending death. They die or are rendered mute by the stroke; they never "return" to even try to convey to the rest of us the wonderful magic and mystery they have encountered; we, their forlorn brethren, are left behind.

We are fortunate that some few people have the wisdom, endurance, benevolent hearts, and luck to bring back reports to us. Moreover, I believe we shall all have a chance to experience this soaring, timeless state of oneness with the universe when the time comes for each of us to "shuffle off this mortal coil."

Saturday, December 20, 2008

Kaplan's Mahler

As the 19th century drew to a triumphant and colorful close in Europe, the Classical music of Haydn grew, through Beethoven, into the Romanticism of Wagner--ever grander, with singers and instrumentalists numbering into the hundreds. But the culmination of the century's spectacular Romantic music is embodied in the symphonies of Gustav Mahler, particularly the staggeringly vast and passionate Second Symphony, the "Resurrection." With a huge orchestra, a chorus, two vocal soloists, and an invisible offstage group of brass and percussion--often with one principal and two subordinate conductors--the five movements spanning one and a half hours tell of the life, death, and personal resurrection of the individual human spirit. The melodies are beautiful; the thematic construction, complex; the harmonies, richly Romantic and even tending to presage the dissolution of tonality that would reflect the twentieth century's strange, deep journey into the savage unconscious of Freud, the astounding new cosmology of Einstein, and the disturbing unreasonableness of quantum nuclear physics.

In 1965 a performance of Mahler's Second Symphony at Carnegie Hall was attended by a young financier, the 24-year-old Gilbert Kaplan. He was overwhelmed; he was confused but inspired. He went back to another performance the next day and found himself sobbing uncontrollably. He said the work "threw its arms around me and would not let go." It changed his life. Although he continued to pursue the career in financial publishing on which he was just embarking (in 1965 he founded the magazine "Institutional Investor," served as the publisher and editor in chief, and then sold it in 1984 for $72 million), he turned his life to the music of Mahler--more specifically, to Mahler's Second Symphony.

He studied the work note by note, second by second, for months and years. He attended every performance he could find. He talked with conductors and musicians--famous and unknown. And gradually he became the planet's most knowledgeable scholar and most impassioned advocate of the staggeringly difficult composition. He found it an aesthetic, artistic, and spiritual accomplishment without peer, and he set out at last to share his revelation and understanding of it with the world.

In September 1982, Kaplan funded and conducted a performance of the American Symphony Orchestra at Lincoln Center--a private performance for his friends, for financiers and politicians. Private, yes, but the musical world discovered he had matched himself to this stupendous work and mastered it as no other conductor ever had. Over the ensuing quarter century, he was invited to conduct more than 50 orchestras in performances of the work. His recording with the London Symphony Orchestra became the best-selling, most widely known performance of the work ever offered to the public. In late 2008 his invitations for further performances extend some three years into the future.

Is this a Horatio Alger story of rags to riches? In a way, it is. Kaplan was certainly a very successful, self-made financier. But this story is more than that. Is it a personal awakening, a personal spiritual quest, a personal aesthetic triumph--an iteration of the Great American Dream? Yes, but it is more than that, too. It is a resurrection of a profound musical work and, with it, of an individual life--it is each of us reborn to a higher, more impassioned human existence.

Friday, December 19, 2008

Computer Smarts

In 1965 Gordon Moore observed that since the invention of the integrated circuit a few years earlier, computer circuitry had been miniaturized at an incredible rate: roughly every two years the number of transistors that could be put into an integrated circuit had doubled. He predicted that this trend would continue for some time in the future.

And it has, for nearly half a century now.

This is remarkable--it means computer capacity has doubled about 22 times. If you had planted a pea plant one inch high in 1965 and it had sustained this growth rate, it would now tower well above the stratosphere. If you had wisely invested one dollar at this rate in 1965, the value of your investment would now be over $4 million.
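The growth arithmetic above can be checked in a few lines of Python; the two-year doubling period comes from the post, and the inch-to-mile conversion is standard:

```python
# A quick check of the growth figures above: roughly 22 doublings
# between 1965 and 2008, assuming one doubling every two years.
doublings = (2008 - 1965 + 1) // 2    # about 22
growth = 2 ** doublings

# A $1 investment doubling at this rate:
dollars = 1 * growth                  # 4,194,304 -- over $4 million

# A 1-inch pea plant doubling at this rate, converted to miles:
miles = growth / (12 * 5280)          # ~66 miles -- well above the stratosphere

print(dollars, round(miles))
```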

In addition, computer innards have gotten more and more cleverly linked together (thanks to advances in computer design and programming), and relationships among computers have gotten faster and more extensive (thanks to networks within a room or a building, or even more widely like the Internet and the World Wide Web).

Computers have gotten quite a bit cleverer, too. They used to think algorithmically, that is, step by step like a cooking recipe. But they also learned to think heuristically, that is, using generalizations and approximations and rules of thumb. Moreover, with the development of a field called "artificial intelligence," they became increasingly able to learn from experience, that is, to try something and if that didn't work, to try something else (and to remember not to try the first method again in tackling future problems).

Chess is among the hardest, most complicated games around. In the 1960s, '70s, and '80s, chess-playing computers emerged from their kindergarten to become more and more competitive against human players. By the 1990s they were tying and sometimes beating chess grandmasters; nowadays they regularly beat the greatest grandmasters in the world.

But there are still some mental activities that the largest, best, fastest computers cannot do as well as humans. These are particularly visual processing, spatial reasoning, and some forms of problem solving. In fact, there are some websites designed to get people to help out computers (and the scientists and business folks they work for).

For example, http://www.galaxy.org gets some 160,000 home computer users to help classify images of millions of galaxies that have been photographed by giant telescopes, and to find patterns and oddities. This has been remarkably successful at spotting curiosities that computers might not have noticed.

Another website, http://fold.it, which now has over 60,000 home participants, presents difficult puzzles in 3-D folding of protein molecules. Several remarkable discoveries about the complex structure of proteins have emerged from participants' work (or "play") at this website.

Google has set up a website game, http://images.google.com/imagelabeler, to get human observers to help find the best, most recognizable names for a wide variety of pictures--of birds or wildlife, of industrial or city scenes, or of abstract or obscure objects, for example.

And finally, http://www.gwap.com provides "games with a purpose" to help train computer vision and artificial intelligence systems. It has some 120,000 user/participants.

Computers have grown up fast over the past few decades. They have revolutionized our lives and continue to rush forward, creating the future at an accelerating rate. Every generation should exceed the abilities and accomplishments of its parents; our silicon offspring have overtaken us and left us standing by the roadside in many ways. But there are still ways in which they need our help.

Thursday, December 18, 2008

Encyclopedia A. dubia of Human Knowledge

All the information needed to make a human being is contained in the human genome. These instructions are written in a kind of computer code in the sequence of three billion base pairs of DNA which make up the 46 chromosomes that appear in just about every cell of the human body. This represents about 750 megabytes of information (slightly more than the information on a typical compact disk or "CD"), divided into about 20,000 genes. Each gene produces one or more of the proteins needed for the design, growth, and repair of the human body, and for the digestive, metabolic, reproductive, and other processes that keep the body going.
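The 750-megabyte figure follows from simple arithmetic: with four possible bases (A, C, G, T), each base pair carries two bits of information:

```python
# Back-of-envelope check of the figure above: 3 billion base pairs,
# 2 bits per base (four possible bases), converted to megabytes.
base_pairs = 3_000_000_000
bits_per_base = 2
total_bytes = base_pairs * bits_per_base / 8
print(total_bytes / 1_000_000)   # 750.0 megabytes
```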

Actually only about 1.5% of the genome codes for proteins; a small part of the rest is RNA genes, regulatory sequences, and introns (spacer regions within a gene that do not provide actual code); but about 95% is so-called "junk DNA" that has no known use.

Where did all this junk come from? There are occasional mutations and errors in the gene duplication process--although these are very few and far between. Nevertheless, over hundreds and thousands of duplications, the small fraction of genetic material that is broken and useless gradually grows. The cell's chemical processes that carefully and faithfully reproduce the genetic material each time a cell divides have no way of telling the useful DNA from the broken DNA that is no longer useful, so ALL of the genome's DNA pairs are replicated--in perfect sequence--regardless of whether or not they carry useful information. If the DNA that has been damaged was essential for biological functions, the cell dies and the individual dies or has no offspring. Otherwise the individual simply carries along this extra baggage of junk DNA and passes it on to future generations.

Presumably, then, the junk DNA is left over from past genetic coding material that, through mutations and evolutionary changes, has lost its place in the puzzle of life. Some of this junk--perhaps 15% of it--still codes for proteins of no known value; the rest doesn't code for anything at all. In other words, within each human cell there is over 700 megabytes of junk computer code that doesn't appear to have any function but is faithfully carried along intact from generation to generation.

Some creatures have more genetic material than humans, some have less. The winner on the low side is the pufferfish, Takifugu rubripes, which has a genome about one-tenth the size of the human genome (although it has a comparable number of genes). On the high side, the tiny Amoeba dubia has some 670 billion base pairs of DNA in its genome, more than 200 times the number in the human genome. In fact, this little amoeba, barely visible to the naked eye, is carrying around well over 200 CDs' worth of information space that it is not using, but that it is faithfully duplicating, generation after generation, century after century, millennium after millennium.
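The same two-bits-per-base bookkeeping used for the human genome gives the scale of the amoeba's burden (the 700 MB CD capacity is an assumption; using a full byte per base instead would give a count several times larger, closer to 1,000 CDs):

```python
# Amoeba dubia's genome, in human-genome multiples and CD capacities,
# using the two-bits-per-base convention from the human figure above.
amoeba_bp = 670_000_000_000
human_bp = 3_000_000_000
cd_bytes = 700 * 1_000_000           # ~700 MB per CD (assumed)

print(amoeba_bp / human_bp)          # ~223 times the human genome
amoeba_bytes = amoeba_bp * 2 / 8     # 167.5 billion bytes
print(amoeba_bytes / cd_bytes)       # ~239 CDs' worth
```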

Suppose those CDs were put to a good use. Suppose that several encyclopedias, unabridged dictionaries (in several languages), atlases of maps, and even recordings of music and literary works were carefully encoded into that DNA. The little amoeba would go on its way, exploring, eating, and mating, but carrying an extensive record of human knowledge and accomplishments with it. Our geneticists could even give it a slight competitive genetic advantage over other amoebae--make it a little stronger, a little faster, or a little more durable--so that, over a few decades, the amoebae carrying the "Encyclopedia Amoeba dubia of Human Knowledge" would gradually spread all over the world. There would be trillions of copies.

Perhaps millions of years from now when the human race is long extinct, some creature will evolve the scientific and technical knowledge to decode the genomes of numerous species, and will find the record of what we humans have accomplished. On the other hand, before we go mucking around with the junk DNA of any species, perhaps we should look carefully and see if some past civilization has left its record there for us to find.

Wednesday, December 17, 2008

The Magic Sunstone

A thousand years ago the Vikings were the master sailors of the world. Their "longships," up to 100 feet long and carrying two or three dozen men, raided and traded several thousand miles from their homeland southward along the coast of Europe and then eastward through the full length of the Mediterranean to Constantinople; farther north, up the Volga River into Russia; and even westward across the stormy and uncharted, iceberg-laden North Atlantic Ocean.

At the turn of the millennium, Leif Eriksen (that is, "Leif, the son of Erik"--his father was Erik the Red), at the mature, experienced age of 25 (a boy became a man at age 12; a man was old if he lived to be 40) set out with a few dozen hardy men (to row if there was no wind to fill the square sail--to fight if unknown peoples were encountered) and even with a few women and some cattle (to forge a settlement if hospitable lands were found). His hope was to find the lands that had been seen on the far distant horizon by earlier intrepid, seafaring Norse explorers before they turned back homeward.

Leif's ship was not deep-keeled--it drew less than three feet of water--so it could be run up on a sandy beach. It was steered by a wooden board lashed on the right side at the rear (a "steering board"; the "star-board" side is still the right side of a ship; the other side of the ship was always the side tied to the dock in port, hence the "port side"). It had a square sail, small by any later maritime standard, but when the wind was fair and in the right direction (the ship could not tack into the wind), it helped the rowers propel the ship.

Across the sometimes stormy and often cloudy--even foggy--ocean they ventured. They had no magnetic compass--it would not be invented (or imported from China) for hundreds of years--and they often could not see the Sun or stars for days and nights on end. How did they navigate? A ship at sea without any shoreline, Sun, stars, or compass to guide it, will tend to travel in a circle; the smallest deviation from a straight course turns into a long, slow arc until the helmsman finds, to his consternation, that he is following in his own wake.

But Viking navigators had discovered a magic stone--they called it a "sunstone." It was probably a large, clear crystal of Iceland spar, a transparent form of calcite (though some scholars have suggested cordierite, a different birefringent mineral). Such a crystal is "doubly refracting": it bends light passing through it along two distinct paths, and when that light is polarized (that is, filtered so that the waves of light are aligned), the two refracted images vary in brightness with the crystal's orientation. Few materials, even in our wonderful modern age of materials science, show this double-refracting property so strongly.

The light from the Sun is partially polarized by the atmosphere. Although this polarization is not detectable by the unaided human eye (some birds and insects, however, are able to use it to navigate), the Norsemen discovered they could hold their magic sunstone toward the clouded or foggy horizon and then turn until the two refracted images were aligned, and that was the direction of the Sun--the Sun they needed to steer by but could not see.

The magic sunstone was often written of in the Norselanders' sagas, but its true nature and use have only been uncovered in the past few years.

Tuesday, December 16, 2008

A Clever Experiment

The chemistry of life is the chemistry of the carbon atom. Of the 92 naturally occurring chemical elements--from hydrogen, the smallest, to uranium, the heftiest--only carbon has the complex and delicate balance of atomic forces necessary to form the vast array of chemical compounds on which life depends.

Some elements--such as the noble gases like helium, neon, and krypton--are loners; they stand aloof from making many compounds with other elements. On the other hand, some elements--such as nitrogen, oxygen, and sulfur--are far more social; each makes hundreds or thousands of different compounds with other elements. But carbon, by contrast, makes millions. In fact, carbon makes more different chemical compounds than all the other 91 elements added together.

Next, a word about the isotopes of carbon. Most carbon has an atomic weight of 12--its nucleus has six protons and six neutrons; each proton and each neutron has an atomic weight of one. But there is a heavy form of carbon with two extra neutrons in its nucleus; it still has six protons so it has exactly the same chemical properties as c-12, but it is two units heavier, hence c-14.

An important difference between c-12 and c-14 is that c-14 is radioactive; the nucleus is unstable and decays over a period of time (a long period of time--the half-life of c-14 is about 5,730 years). As each atom of c-14 decays, it emits a beta particle, which can be readily detected. Although the concentration of c-14 is typically very low--in the atmosphere, about one trillionth the concentration of c-12--by counting the frequency of beta radiation, the amount of c-14 present in a sample of carbon (or one of its many compounds) can be measured very precisely.
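The decay described above follows a simple exponential law: after t years, the fraction of the original c-14 remaining is (1/2) raised to the power t divided by the half-life. A minimal sketch:

```python
# Sketch of c-14 decay: fraction of the original c-14 remaining
# after a given number of years.
HALF_LIFE_YEARS = 5730.0   # accepted half-life of carbon-14

def c14_fraction_remaining(years):
    return 0.5 ** (years / HALF_LIFE_YEARS)

print(c14_fraction_remaining(5730))    # 0.5 -- one half-life
print(c14_fraction_remaining(57300))   # ~0.001 -- ten half-lives
```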

There is always new c-14 being made. High in the atmosphere, nitrogen is bombarded by cosmic rays, and when a nucleus of nitrogen is hit just right, it turns into a c-14 nucleus. In the atmosphere, this c-14 combines with oxygen to form radioactive (that is, c-14-containing) carbon dioxide which mixes with regular c-12 carbon dioxide throughout the atmosphere all over the world.

To return now to the story of the chemistry of life: green plants take up the carbon dioxide from the air (both the c-12 variety of carbon dioxide and the rare c-14 form) and, through photosynthesis, make simple sugars that enter into the metabolic processes of living cells to make the vast array of carbon compounds that living organisms need.

As carbon proceeds on its metabolic journey, perhaps being passed from one to another of a thousand different organisms, perhaps being transformed from one to another of a thousand different carbon compounds, the c-14 gradually decays. Carbon sources that are hundreds of thousands of years old (like coal or oil) have practically no c-14 left; all of their carbon is the c-12 variety.

It is important, in studying atmospheric pollution and greenhouse gases, to be able to trace carbon dioxide in the atmosphere--it is the main greenhouse gas responsible for global warming--and to determine whether it is formed from new-grown plants or derived from fossil fuels such as coal and oil. One group of researchers hit on a clever way of getting plant samples from precisely known locations and times in the past. For fine wines, a careful record is kept of the precise location of the vineyard and the date of harvesting. The researchers were able to acquire 175 fine wines representing vineyards around the world and dating back several decades. By determining the precise concentration of c-14 in the wines, they were able to track, year by year, the carbon dioxide contributed to the atmosphere by fossil fuels.

How wonderful that the superficial drivel of the world of oenology should inadvertently make a contribution to the elegant, high-cultural endeavor of scientific inquiry. Never mind that the particular scientific research involved produced (like almost all scientific research) no information of value. In science it is the careful, disciplined mental exercise that really matters, not the result; just as in the "science" of fine wines, it is the debauchery that defines the ultimate success.

It is also noteworthy--when considering the cleverness of this experiment--that only a few ounces of wine were needed from each bottle for the chemical analysis, so at the end of the experiment the researchers had more than three-fourths of a bottle of each of 175 fine wines "left over."

So some data from the absurd and trivial world of wines provided clever scientists with useful information in pursuing their scholarly inquiry; and these same clever scientists, being also regular folks just like the rest of us, were able to use stuffy academic and federal grant money to juice up a few good parties.

Monday, December 15, 2008

White Bikes and Zipcars

It is 10:50 Tuesday morning. Sally Druthers is just getting out of a class on "Conservation Biology and Ecological Sustainability" at Arizona State University. Suddenly she realizes that when she left home to pedal to campus earlier this morning, she left the oven on; not only will the soy-roast she is cooking for dinner be ruined by the time she gets home this afternoon, the house will probably be filled with smoke if not consumed in flames.

Sally whips out her Blackberry telephone and makes a quick call--she completes a simple form. Then she bikes two blocks to pick up the zipcar she has just signed up for. She will drive it home to turn off the oven and save the day (and the soy-roast) and then back to campus and leave the car where she found it (where her bike awaits her). The total trip will take her less than an hour and cost her $7.00, which includes gas, insurance--in fact, that one small fee includes everything she needs for the quick drive home and back, even roadside emergency assistance if she needs it. So Sally does not need to own a car of her own; she participates in a more ecologically sensible arrangement, effectively sharing a single car with many other people.

Such is the wonder of zipcars. Sally filled out an application several months ago so they have her driver's license and credit card on file. She would have paid a $35 membership fee for the first year, but that was waived because she got a special deal (because she is a sustainability major).

Zipcar was founded in 1999 in Cambridge, Massachusetts. The company now operates in a couple of dozen cities and college campuses in the U.S. and Canada, and has some 200,000 user members.

Zipcars have an historical antecedent in the famous White Bikes of Amsterdam. In the early 1960s an ecologically minded philanthropist named Luud Schimmelpenninck put a fleet of humble, utility bikes, all painted white, out on the streets of Amsterdam in the Netherlands. People were supposed to use them when they needed them, and then leave them for someone else. However, a high rate of theft and vandalism brought the project to a halt after only a few months. Numerous subsequent attempts have been made, mostly in cities in Europe, and some have been quite successful. In some cases, special bikes have been manufactured for a project so that they could not be broken up and sold for parts. And substantial publicity campaigns have improved responsible public acceptance and use.

Today, there are several cities in Europe and a few in the United States, including Washington D.C., Chapel Hill/Carrboro, NC, and Ft. Collins, CO that have free-bike programs. But over the past ten years, many more have come to host programs offering shared zipcars.

Sunday, December 14, 2008

Covering Your Carbon Footprints

We all live on the Earth. We all eat the food, drink the water, and breathe the air. And we all contribute pollutants that will significantly affect the quality of life of generations to come.

For example:

(1) Driving a medium-size car produces about 0.75 pounds of greenhouse gases (mostly carbon dioxide) per mile. (This is regardless of the number of passengers.) For a typical amount of personal automobile travel, that comes to about 10,000 pounds of carbon dioxide per year.

(2) Flying in a commercial jet releases just over a pound of greenhouse gases (mostly carbon dioxide) per mile for each passenger. So flying between San Francisco and Boston produces about 6,000 pounds of carbon dioxide per passenger (round trip).

(3) If the typical power-utilities bill for your house (combining natural gas and electricity use) runs about $130 per month, that equates to about 12,000 pounds of carbon dioxide per year.
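The first two figures above reduce to simple multiplication. In this sketch, the annual mileage and the San Francisco-Boston distance are illustrative assumptions, not figures from the post:

```python
# Rough footprint arithmetic using the per-mile figures quoted above.
CAR_LBS_PER_MILE = 0.75        # medium-size car, any passenger count
JET_LBS_PER_MILE = 1.0         # per passenger, roughly

annual_miles = 13_000                        # assumed yearly driving
round_trip_miles = 2 * 2_700                 # SF-Boston, ~2,700 mi each way (assumed)

print(annual_miles * CAR_LBS_PER_MILE)       # 9750.0 -- about 10,000 lbs/year
print(round_trip_miles * JET_LBS_PER_MILE)   # 5400.0 -- roughly 6,000 lbs
```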

How much pollution damage each individual does is called one's "carbon footprint." Its size depends a lot on lifestyle choices. In addition, one can reduce one's net carbon footprint by buying carbon-pollution offsets that help to fund non-polluting alternative energy projects.

Businesses and industries trade carbon-pollution offsets: a factory, for example, that produces inordinate greenhouse gas emissions can balance this by paying a fee to another non-polluting production activity. In addition, a company named TerraPass (http://www.terrapass.com) sells carbon-footprint offsets for individuals. The projects funded by TerraPass reduce methane (a powerful greenhouse gas) released from landfills or from dairy operations. Some carbon-offset plan companies have come under criticism for not being very faithful (or truthful). But TerraPass assures its users that all its funding projects are carefully scrutinized, audited as to their effectiveness, and certified by third-party engineers and laboratories.

The TerraPass website also provides several fascinating calculators for figuring out the carbon expense of particular lifestyle choices--the kind of car one drives and how much one drives it, home-energy utilities (gas and electricity) patterns, etc.

Carbon-offset shares from Terrapass cost $70 per year to cover an average car, $120 per year for an average home, and $180 per year for a typical individual.

Take a look at the TerraPass website and consider making a contribution to antipollution equal to the pollution costs of your stay on Earth. Your children and grandchildren will be glad you did.

Friday, December 12, 2008

A Mini Nuclear Power Generator

For only $25 million (and a 5-year wait) you can get a sealed concrete block the size of a household refrigerator that gets buried in the ground and heats its surface to 500 degrees Fahrenheit for many years. It is sealed at the factory; there is no servicing needed or any reason to open it--ever. It has no moving parts and, if it is ever dismantled, it leaves a radioactive residue to dispose of that is about the size of a quart jar--a residue of the kind that is readily reprocessed.

This mini nuclear generator is non-polluting (it has no emissions). It never requires fuel to be added. It takes no land (a house can be built or a field of crops grown right over it). It churns out heat regardless of whether or not the Sun is shining or the wind is blowing. It cannot go supercritical (explode) or "melt down"; it has no components worth stealing--none that would be useful for making a nuclear bomb. And it is entirely autonomous--it can be located in a desolate jungle, a remote desert, or on a small ocean island far from civilization.

You can boil water (even dirty, contaminated water or seawater) by running it over the surface of the block, and then use the steam to run an electric generator or condense it to obtain clean, fresh water. If used to generate electricity, it can produce 25 MWe--enough to provide electricity for about 20,000 average-size American homes or the industrial equivalent.
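Spreading the quoted 25 MWe over 20,000 homes gives the average electrical load per home that these figures imply:

```python
# Average load per home implied by the figures above:
# 25 MWe of output shared among 20,000 homes.
output_watts = 25 * 1_000_000
homes = 20_000
print(output_watts / homes)   # 1250.0 watts average per home
```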

The science for this remarkable unit was confirmed and the engineering requirements designed at the Los Alamos National Laboratory in New Mexico. The company that markets it is named Hyperion; they call it the Hyperion Power Module.

So if you have $25 million lying around that you don't know what to do with, why not buy one of these units for a downtrodden, rural, third-world neighborhood? They'll be glad you did.

Thursday, December 11, 2008

Earth Energy

The human race uses a tremendous amount of energy--particularly with the expansion of population and the advance of living standards during the past century. The worldwide human population is estimated to have been about 1.6 billion in 1900; today it is more than four times that, at some 6.7 billion. But energy use has increased even faster--much faster--than the general population numbers. In 2005, total worldwide energy consumption was estimated to be 5 x 10^20 joules (equivalent to several million Hiroshima-size atomic bombs); this was some 25 times as much as a century earlier. How long can the planet sustain this tremendous energy usage?
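A quick comparison of the two eras, taking the 2005 estimate above; the 1900 energy figure (roughly 2 x 10^19 joules, a commonly cited estimate) is an assumption here, not a figure from the post:

```python
# Comparing world energy use in 2005 and 1900.
energy_1900 = 2e19                  # joules (assumed estimate)
energy_2005 = 5e20                  # joules (from the post)
pop_1900, pop_2005 = 1.6e9, 6.7e9

print(energy_2005 / energy_1900)    # 25.0 -- total use grew ~25x
print(pop_2005 / pop_1900)          # ~4.2x population growth
# Per-capita use therefore grew roughly sixfold over the century:
print((energy_2005 / pop_2005) / (energy_1900 / pop_1900))
```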

The two principal sources of energy on Earth are sunlight that falls on the surface of the planet and radioactive uranium-238 that heats the Earth's innards. In addition there are some other minor sources such as gravity (of the Moon and Sun--and also the gravitational collapse of the Earth itself), the rotational momentum of the Earth, other radioactive decays (such as uranium-235, potassium-40, and thorium-232), and other bombardments from outer space (like cosmic rays and meteors). But these are dwarfed by the big two--sunlight and U-238.

The total amount of energy from sunlight that falls on the Earth each day is truly staggering--some 8,000 times the total present energy used by humans. Most of it falls on the oceans where it warms the water causing evaporation (and therefore rain, and rivers, and hydroelectric power, and all that). Some of it falls on land or sea areas where it happens to hit the chlorophyll of green plants. There it can convert water and carbon dioxide into sugars that feed into food chains providing metabolic energy for all the plants and animals around us. Some of the sunlight that is converted by chlorophyll to usable biological energy winds up sequestered for millions of years in "fossil fuels"--in coal, natural gas, and oil--so even those energy sources originally came from sunlight. (And they are a major source of energy for humans: about 85% of the energy consumption of the human species is currently derived from fossil fuels.) Of course the movements of the winds and the currents of the oceans are also churned up and driven by energy from the Sun, with some contribution from the angular momentum of the rotation of the planet.
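The "8,000 times" figure can be checked against published solar numbers; the 122-petawatt value for sunlight absorbed at the surface is an assumption here (top-of-atmosphere input is about 174 PW, of which roughly 30% is reflected):

```python
# Daily absorbed sunlight versus daily human energy use.
absorbed_solar_watts = 1.22e17                  # ~122 PW (assumed)
solar_per_day = absorbed_solar_watts * 86_400   # joules per day
human_per_day = 5e20 / 365                      # from the 2005 estimate

print(solar_per_day / human_per_day)            # ~7,700 -- roughly 8,000
```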

Then there is geothermal energy, mainly derived from the radioactive decay of uranium-238. Over the past few decades we have been amazed to discover that vast colonies of exotic plants and animals live in the depths of the oceans around hydrothermal vents from which they get their energy--not from sunlight, but from heat welling up from the inner core and mantle of the Earth and from sulfide compounds this heat releases. Things are very hot down there: temperatures in the Earth's core reach some 10,000 degrees Fahrenheit, about as hot as the surface of the Sun. And even in the mantle around the core, the temperature averages about 4,000 degrees F. Water around the thermal vents is several hundred degrees F.--much hotter than it could be at surface pressures, where water boils to become steam at 212 degrees F.

Perhaps as surprising as the discovery of this exotic biomass--using energy and metabolic transformations entirely different from those we use on the surface--is its extent. Based on early deep-water surveys, some researchers estimate that there is more living material down there than there is in all the surface creatures around us combined--all the animals (including fish) and plants (including algae--which outweigh the rest of the plant kingdom) and bacteria (which outweigh all other surface life combined, including all plants and animals that live on the surface of the continents and in the Sun-fed, near-surface levels of the oceans). Clearly there is a tremendous biomass drawing on the subterranean heat, just as there is pressure of utilization on surface sunlight.

What about the question of running out of these sources of energy? On the one hand, the Sun is expected to radiate livable amounts of energy toward the Earth for several billion years. On the other, uranium-238 has a half-life of 4.47 billion years, and even though the Earth's central heating is gradually falling behind--it loses more energy outward each year than it generates inside--the cooling proceeds at an overall rate of only about one hundred degrees every billion years.

Granted, we cannot be so profligate in our use of energy as we have come to be in the recent past. Conservation of energy use and improved energy efficiency must be a major part of our future industrial and general economic development. But our basic energy supplies on Earth, both sunlight and U-238, are immense. If we learn to use them wisely, the energy future of the planet should be secure for quite some time to come (far, far longer, in fact, than human beings are likely to be around to worry about it).