
Somatic medicine abuses psychiatry

— and neglects causal research

by Per Dalén

 

 

THERE ARE MANY INDICATIONS that the popularity of modern medicine is declining. Doctors are facing various problems that seem to be growing, such as sceptical and inquisitive patients who tend to seek information and help outside conventional medicine. In Sweden at least, the professional debate shows rather plainly that many doctors are adapting less than cheerfully to what is going on, and often react with frustration when their professional authority is not fully respected. Traditional values such as "science" and "evidence-based medicine" are being defended. Alternative and complementary methods are beyond the pale. In many other Western countries physicians are now openly exploring such methods, but in Sweden this is a deviation that is not acceptable in licensed practitioners. There is hardly any opinion among medical professionals against these restrictions.[1]

On the other hand, there is a theme that not only survives inside the medical culture in spite of an almost total lack of scientific support, but actually thrives there due to the support given by leading circles. This is the use of psychological theories as a means of reclassifying bodily symptoms as mental problems in cases where conventional medicine is at a loss for an explanation, particularly patients with so-called new diagnoses. Patients often feel insulted by this act of reclassification, which is often accompanied by signs of impatience on the part of the doctor. Many health professionals tend to be provoked by patients who themselves suggest a diagnosis that lies outside the conventional medical world-view. If a doctor should begin to accept that electrosensitivity and amalgam illness do in fact "exist", this puts his/her reputation in jeopardy and may (in Sweden) even lead to suspicions of possible malpractice. In some cases the National Board of Health and Welfare may feel called upon to start an investigation.

”Since I am a psychiatrist, I have for a long time been intrigued by the extraordinary use of psychiatric causal explanations for illnesses that not only go with predominantly somatic symptoms, but also lack any basic similarity to known mental disorders.”

Since I am a psychiatrist, I have for a long time been intrigued by the extraordinary use of psychiatric causal explanations for illnesses that not only go with predominantly somatic symptoms, but also lack any basic similarity to known mental disorders. Are patients being helped by this peculiar way of interpreting their illnesses? No, it would be a gross exaggeration to maintain this, at least when there are pronounced complaints of some considerable duration. On the other hand there is no denying that certain interested parties outside medicine are being helped, for instance the electric and electronic industries, as well as those who are responsible for the continued use of mercury in dentistry. I cannot exclude the possibility that psychiatry is being abused in order to sweep certain sensitive problems under the carpet.

We have here a possible ethical problem. If physicians in general were in the habit of thinking independently and, in appropriate circumstances, were willing to show civil disobedience, problems like these would never have to arise. Earlier examples of abuse of psychiatry in Nazi Germany and the Soviet Union unfortunately show that physicians are no more upright than others in the face of signals from people they regard as their superiors. The herd instinct may even be stronger in my colleagues than among people in general.

Starting from the background outlined above I should like to discuss different aspects of medical attitudes to questions that somehow involve all those people who have been unfortunate enough to fall victim to illnesses that are officially counted as probably non-existent.

Medical science has a really weak side that is not discussed very often. Research into causes is making no progress in important areas, which gives a hollow ring to all this proud talk of preventive work. It would be at least partly correct to say that there is no real research into causes, but that this has been replaced by a search for "mechanisms". The causes are expected to fall into our laps like ripe fruits when enough details about the mechanisms of a disease have become known. Going straight for the cause is regarded as something that adventurers may dream about, but real scientists are not supposed to do things like this. From the point of view of your career it would be very unwise to mention that you hope to be able to search for unknown causes of disease. Everybody "knows" that this is far too difficult, and that you would have to be a pretentious fool to talk openly of such goals for your scientific efforts.

On closer analysis the uses of the word "cause" are far from unambiguous, even in scientific discourse. Sometimes mechanisms of disease are called causes. For example, antibodies directed against our own tissues may be said to cause autoimmune diseases, such as multiple sclerosis. Instead of attempting some kind of philosophical analysis, I shall stick to a simple definition in this article. The basis is the assumption that we are normally in a state of health. Causes of disease are external factors that either initiate, or contribute substantially to, the emergence of a state of illness, and that then maintain this abnormal state until the body manages to restore normality, with or without outside help.

Autoimmunity does not arise from nothing, but unfortunately we know very little about its causes. Prevention and treatment are poorly developed. In these days of enthusiasm for molecular biology many scientists are dreaming of drugs that are tailored for a specific purpose, such as blocking the autoimmune mechanism before it has led to manifest symptoms. This will probably be a difficult approach, fraught with failures. There is first of all a considerable risk that serious adverse effects will occur when chemical interventions are being made in central parts of this delicate apparatus which has evolved through millions of years. We should rather look for something altogether different, namely an external disturbing factor (including nutritional deficiencies) which enters the chain of events at an earlier stage, and which we might eliminate. But unfortunately very little research is going on that can be expected to achieve this goal.

If there are several possible causes, common sense will tell us that we had better concentrate on those that we can in fact hope to do something about with reasonable, and preferably harmless means. Today this is no longer self-evident, since the mapping of genetic mechanisms at the molecular level seems to open entirely new perspectives. But here we risk losing our way because of a simple misunderstanding.

Our genes have also been tested over millions of years, and with rare exceptions they serve us exceedingly well as long as the environment keeps within reasonable limits. Medical genetics used to deal with rare diseases in which a gene is in fact abnormal in such a way that a disease will arise even in a perfectly normal environment. Such "faults in the construction" are due to mutations, which may in some cases escape elimination by natural selection. It is sometimes possible to limit the transmission of such genes if those who carry them refrain from having children of their own. This approach becomes more problematic if compulsory measures are being considered, particularly if such plans are being guided by pseudo-scientific genetics, as was the case during the first half of the 20th century. During this period schizophrenia and several other mental disorders and anomalies were regarded as obviously hereditary in many countries. As we know, this belief was the "scientific" underpinning for eugenic legislation in which compulsory sterilization was an important part, not only in Nazi Germany. Today it is obvious that the "hereditary background" that was taken for granted in these cases was not at all of the kind where abnormal genes could be eliminated by sterilization. The relevant facts were known already in the 1930s, but this criticism was suppressed for political reasons, and for reasons of prestige.

The important misunderstanding that survives even today is that a "hereditary background" to a disease means an involvement of genes that are abnormal, and therefore should be "repaired" if it is impossible to eliminate them from the population. But this is rarely the case. It is quite easy to show that many important and widespread diseases are to some extent controlled by genetic factors, but it has to be assumed that this is a question of complex and diffuse contributions from several genes, each of which is in itself functioning normally. Due to chance, some of us have more than an average endowment of genes that increase the risk of developing diseases like obesity, allergy, cardiovascular disease, or diabetes. Even so, most of the facts point towards the environment as decisive in these cases. If the incidence of a disease is increasing rapidly in the population, this increase cannot be due to a sudden change in the frequency of certain genes. Blaming heredity can often become an evasive argument, especially in medical science, with its lamentably poor record of research into causes, particularly where environmental factors are concerned.

”Genetic research is now enjoying an unparalleled boom, with great expectations of future sales of knowledge. It is not in the interests of geneticists to cut the hype and the misunderstandings down to size as long as their plans for the future are supported by those popular beliefs.”

Take it with a pinch of salt if you are told that your illness may have a genetic background! Your doctor will be saying so with the best intentions, but such a piece of information will rarely be helpful in your efforts to regain your health. To the doctor this is also a way of confirming his belief that our genes are a kind of ultimate basis for all that happens in our bodies, in health as well as in disease. Thoughts like this are highly contagious. Genetic research is now enjoying an unparalleled boom, with great expectations of future sales of knowledge. It is not in the interests of geneticists to cut the hype and the misunderstandings down to size as long as their plans for the future are supported by those popular beliefs. And the less we talk about the environment, the better for the business projects of the geneticists.

In the future, laboratory methods of genetic diagnosis will become generally available for routine use. They will show whether a person carries a certain gene, and may for instance guide the choice of drug treatments in particular situations. It remains to be seen whether this will become practically useful, but today it is still in the pipeline, like so many other applications of modern genetics.

I am sure my statement above that medicine has no real research into causes will already have given rise to objections from some readers. Genetics is regarded as a solid basis for future insights into causes, but in reality it is still only of use in connection with certain rare diseases. Epidemiology is another specialty that is reputed to carry the keys to causes of disease. Originally this was the science of the occurrence of (infective) diseases in populations. Now it has developed into a versatile discipline, which uses statistics as its basis. Among other things, epidemiologists examine correlations between the incidence of diseases and various environmental factors, hoping to spot causal factors.

Epidemiology has almost acquired the status of a basic science in medicine. Much of its reputation stems from the successful solution of the problem of smoking and lung cancer by Austin Bradford Hill and Richard Doll in Britain during the 1950s. At that time 90 % of all British men were smokers. The strain of World War II certainly contributed to this extremely high prevalence of the smoking habit. Lung cancer was increasing at an alarming rate, but initially even those who solved the problem found it hard to believe that smoking was the cause: "cigarette smoking was such a normal thing", as Doll said in a 1991 interview. Doll also felt that motor exhausts, or possibly the tarring of roads, were more likely candidates — see "The Rise and Fall of Modern Medicine" by James Le Fanu.[2]

Thus the researchers were dealing with a true mass exposure, which meant that it was difficult to find sufficient numbers of non-smoking controls. Bradford Hill and Doll were successful, but their studies had to be made very large in spite of the fact that lung cancer is a distinct disease that is easily diagnosed. In many other situations of mass exposure the conditions are less favourable. The picture of the suspected adverse effects is often complicated by so-called "unspecific symptoms", which makes it harder to define and delimit the problem. In such situations epidemiology is often rendered useless because of a sharp decline in the sensitivity of the methods. Mass exposure thus reduces the chances of detecting adverse effects that are relatively weak, or occur at a low rate. This is further aggravated if there are also problems of classification or definition. Epidemiology works best under ideal circumstances. There is nothing remarkable about this, but it would be better if the weaknesses were declared openly, so that expectations might become more realistic.
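To make the loss of sensitivity under mass exposure concrete, here is a minimal sketch with purely illustrative numbers (not taken from the Bradford Hill and Doll study, or from any real data): a standard power approximation for comparing disease rates in exposed and unexposed groups, run for different shares of exposed people in a study of fixed size.

```python
# Illustrative sketch of how mass exposure erodes statistical power.
# All figures are hypothetical and chosen only to make the point visible.
from math import erf, sqrt

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_proportions(n_total, exposed_share, p_unexposed, relative_risk):
    """Approximate power of a two-sided z-test (5 % level) comparing the
    disease rate in the exposed and the unexposed part of n_total people."""
    n_exp = n_total * exposed_share
    n_unexp = n_total * (1.0 - exposed_share)
    p_exp = p_unexposed * relative_risk
    z_alpha = 1.96  # critical value for a two-sided 5 % level
    p_pooled = (p_exp * n_exp + p_unexposed * n_unexp) / (n_exp + n_unexp)
    sd_null = sqrt(p_pooled * (1 - p_pooled) * (1 / n_exp + 1 / n_unexp))
    sd_alt = sqrt(p_exp * (1 - p_exp) / n_exp
                  + p_unexposed * (1 - p_unexposed) / n_unexp)
    return normal_cdf((abs(p_exp - p_unexposed) - z_alpha * sd_null) / sd_alt)

# Same study size, same true effect -- only the share of exposed people varies.
for share in (0.5, 0.9, 0.99):
    pw = power_two_proportions(n_total=2000, exposed_share=share,
                               p_unexposed=0.05, relative_risk=1.5)
    print(f"exposed share {share:.0%}: power about {pw:.0%}")
```

With these made-up numbers the chance of detecting the effect falls from roughly two in three when half the population is exposed to only a few per cent when nearly everyone is exposed, simply because the unexposed comparison group has almost disappeared.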

After nearly 50 years epidemiology has still not surpassed its early feats in tobacco research. This is not due to a lack of resources, but the problem of the frequently low sensitivity of the methods used has become a major hindrance. Many attempts have ended in uncertainty about the meaning of the findings. What such studies find are, of course, statistical associations, not causal links. A disease may thus be somewhat more common in a group of people who are exposed, above the average level for the general population, to some factor that might be harmful. Such associations are often found, but there are almost always objections to the hypothesis that this might represent a real causal link. For various reasons the epidemiologists themselves will usually not be in a position to take the matter further by doing other kinds of research on the problem in question. As a result a great number of common factors in our food and our environment will be entered on a list of suspects. Science is then often unable to pass a final judgement in many of these cases.

An example that most people will be familiar with is that of the health effects of a small intake of alcohol, say a glass of wine per day. Scientific opinion seems to shift continually. At present the verdict is that such a dose is predominantly good for you, but this is sure to change, as it has done in the past. The claim that science is superior to politics for handling such problems will quickly lose its credibility if no reliable guidance is available in a question like this. There are unfortunately other similar examples of great importance for everybody. I am thinking of various dietary guidelines, particularly concerning fat, where different expert opinions have appeared in succession over the decades.

 Fact box: Epidemiology
  
Epidemiology was initially the study of the dissemination of epidemic infectious diseases (as opposed to endemic diseases, which were considered to occur only at a certain location). Today, however, epidemiology deals with all kinds of diseases and their distribution and causes. Epidemiology goes back to the time of Hippocrates. A Renaissance pioneer was the French physician Guillaume de Baillou, who among other things studied whooping cough.

The idea is to find patterns in how illnesses appear, either by studying existing cases and how the patients, according to their case histories, were exposed to suspected pathogens, or by monitoring the future health of groups that have been subject to different amounts of exposure. Epidemiology can also be divided into descriptive studies (when, for instance, one group is studied at one point in time) and comparative/analytic studies (when, for instance, several groups are studied at several points in time).

Through its history, epidemiology has found, for instance, the connection between having endured cowpox and protection against smallpox. The discovery of the connection between tobacco smoking and lung cancer was also an epidemiological endeavor.

The foremost working tool of epidemiology is statistics, which is best suited to the study of relatively common diseases or ailments. Sometimes the problem is to find control groups without exposure; in the case of dental amalgam, proving harmful effects would require a large enough group of people who completely lack dental fillings.

Epidemiology is not well suited for studying rarer effects of mass exposure, since the methods used are not sensitive enough to indicate a connection that is actually there and is of importance. Still, one often encounters so-called "negative studies", which are readily interpreted as evidence of how harmless some suspected factor is. Absence of evidence is, however, not evidence of absence.
 

Behind every product on the list of suspects there is at least one worried branch of industry. The tobacco industry learned early on how to handle epidemiology by sowing doubt about the meaning of the findings, which is a very useful method of "damage control". Epidemiology actually offers obvious opportunities for staving off restrictions. When modern PR consultants are trying to clear an industrial product from suspicions they take it for granted that only epidemiology can produce final proof in such matters. And the medical establishment will nod approvingly without giving much thought to the problem. Medical professionals are still impressed by the fact that smoking could be shown to be the culprit in the case of lung cancer, and they are not likely to bear in mind that many other medical problems have been handed over to epidemiology without ever being solved. Doctors are even less likely to see how industrial interests are actively influencing the course of what is ostensibly a scientific discussion.

When science, unassisted, is unable to fully answer important controversial questions, other parties will gradually take control of the situation. First of all, of course, any industries involved, then public authorities and political assemblies. This being so, it is quite natural to employ the precautionary principle in some form or other, in order to make it possible to take important decisions even before the scientists have reached consensus, which may take a very long time. It is hardly reasonable to put public health in jeopardy when a branch of industry is opposing restrictions and manages to maintain doubt and disagreement among scientists. The manipulations of the tobacco industry have been thoroughly exposed, and there is no reason whatsoever to assume that this is a unique example.

Epidemiology is thus open to criticism because its methods generate many associations that tend to worry the general public, associations that may linger indefinitely without science always being able to decide the matter one way or the other. What is often lacking is further circumstantial evidence of a kind that statistics cannot provide. Epidemiologists are first of all statisticians, and many in fact lack an ordinary medical education and experience of clinical work with patients. They can evaluate statistical associations and see whether these are being influenced by so-called confounding factors, which is often the case. But their opinions on causal connections must often be taken with a pinch of salt, particularly when it is a question of denying a connection on purely statistical or theoretical grounds.

”What makes an individual human being ill cannot be determined by statistics.”

What makes an individual human being ill cannot be determined by statistics. This has long been accepted in the study of unusual side effects of drugs, where case reports are absolutely necessary in order to acquire any knowledge. Requiring statistical proof is an absurdity if a given side effect occurs in one case per thousand treated or even more rarely. Each drug also has a number of different side effects that may be rare. Nobody will be ready to finance the huge epidemiological studies that might possibly be able to replace case reports.
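A back-of-the-envelope sketch makes the point (the one-per-thousand figure is the author's example above; the trial sizes are my own, hypothetical): even a sizeable trial has a fair chance of not containing a single affected patient.

```python
# Rough sketch: how easily a 1-in-1000 side effect is missed altogether.
# The trial sizes are hypothetical and chosen only for illustration.
risk = 1 / 1000  # one case per thousand patients treated

def prob_no_cases(risk_per_patient, n_patients):
    """Probability that not a single affected patient turns up."""
    return (1.0 - risk_per_patient) ** n_patients

for n in (200, 1000, 3000):
    print(f"{n:>5} patients: expected cases {n * risk:4.1f}, "
          f"chance of seeing none {prob_no_cases(risk, n):.0%}")
```

With a couple of hundred patients the side effect will most likely never be seen at all, and even with a few thousand patients only a handful of cases can be expected, far too few for statistics to say anything on their own; hence the continued need for case reports.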

If sponsors from industry could have their way, epidemiologists would produce even more so-called "negative studies", that is, studies which do not show any effect of the factor studied. They should of course be formally correct in all details, and will then have the great advantage of not reflecting badly on any product. It used to be common knowledge among scientists that such studies don't prove anything at all, and journal editors were not particularly keen on publishing them. This has changed, perhaps due to the lobbying efforts of the tobacco industry in this area.

There are many possible reasons why an epidemiological study yields a "negative" result. The association that was looked for might in fact be non-existent. If there is a weak or moderate causal connection, certain requirements must be fulfilled in order to get a significant statistical association. First of all, the study material must be of a certain minimum size. Even if the dimensions are sufficiently large, various confounding factors as well as weaknesses of design and defects in the observational material may still lower the sensitivity below the level at which a positive result is possible. It is as a rule expensive to perform epidemiological studies that are of high quality and large enough, and therefore a certain proportion of studies will turn out negative even though there is a causal connection. If the problem under study is not new and unknown, an experienced epidemiologist should be able to assess beforehand what the chances are of a positive outcome. Many of the people who commission studies are also likely to have a flair for this. A great deal of experience will have accumulated in the tobacco and chemical industries. It is thus quite possible to plan for a negative study, though we will rarely know whether this has actually happened in a specific case.
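One hypothetical illustration of the arithmetic involved (my own figures, not drawn from any particular study): the standard sample-size formula for comparing two proportions shows how quickly the required numbers grow when the excess risk is modest and the disease uncommon, and therefore how easy it is to commission a study that is too small to show anything.

```python
# Sketch: group size needed to detect a modest increase in risk with 80 % power.
# Standard two-proportion approximation; all figures are hypothetical.
from math import ceil

def n_per_group(p_unexposed, relative_risk, z_alpha=1.96, z_beta=0.8416):
    """Approximate size of each group for 80 % power at a two-sided 5 % level."""
    p_exposed = p_unexposed * relative_risk
    variance = p_unexposed * (1 - p_unexposed) + p_exposed * (1 - p_exposed)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_exposed - p_unexposed) ** 2)

baseline = 0.01  # assume 1 % of unexposed people develop the disease
for rr in (2.0, 1.5, 1.3, 1.1):
    print(f"relative risk {rr}: about {n_per_group(baseline, rr):,} people per group")
```

Under these assumptions a relative risk of 1.3 already calls for something like twenty thousand people in each group; a study planned with a tenth of that is close to guaranteed to come out "negative", whatever the truth of the matter.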

Epidemiology is of course risking its reputation by having too much to do with research that cannot prove anything. On the other hand there are strong expectations from many quarters that somebody should be able to tell us which things are harmful, and which are innocuous among all that we are exposed to more or less collectively in our food and our environment. At present it is generally believed that epidemiology has this competence, but it is easy to show that this is not the case. Obviously, therefore, the precautionary principle combined with common sense should guide us in any decisions in this tricky area.

 Fact box: The precautionary principle
  
The precautionary principle was adopted at the United Nations Conference on Environment and Development, UNCED, at Rio de Janeiro in 1992. The principle is number 15 of the Rio Declaration:

"In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."

The principle has become a matter of national legislation in Sweden and Germany, and it is also included in several international agreements. Its further implementation in Europe is handled by the European Commission.

The precautionary principle has been criticized for hindering scientific development and trade, and for shifting the burden of proof onto those who wish to introduce a certain new technique or activity. Others claim that, apart from protecting the environment, it also challenges scientists to think along different lines and gives them better opportunity to study the complex interactions among several systems at work in our present day society.
 

The above-mentioned book by Le Fanu[2] became a great success in Great Britain, in spite of, or perhaps rather thanks to, its highly critical treatment of several dominant tendencies in today's medicine. He particularly focuses on two phenomena: epidemiology and the over-rating of modern genetics. A quote will show how drastically the author can summarise his view of the matter:

Under the banyan tree nothing grows, and the banyan tree of genetics and epidemiology now casts such long shadows that the fresh green shoots of medical research are stifled.

Le Fanu convincingly shows how the great forward strides in medicine during a few decades after World War II arose from conditions that were radically different from those prevailing today, in an increasingly commercialized research culture that imitates industrial production. Remarkably often it was seemingly chance that showed individual researchers outside the establishment a way that led to important progress. At present there is an obvious risk that we may simply be looking in the wrong places and fail to see novel and unexplored possibilities that are coming our way. All large-scale enterprises have difficulties with adaptations at short notice. Long-term planning cannot incorporate new knowledge, only applications of things we already believe we know!

This may look like a paradox, but that is hardly a correct description of the situation. Since really new knowledge is always more or less surprising, it cannot in fact be planned into existence. A leader of research who has managed to acquire control of large economic resources is likely to be sceptical of relatively unplanned, spontaneous activities. This is not the kind of thing that will win the confidence of sponsors, nor will it guarantee a stable job situation for employees who are dependent on their principal's ability to make long-term plans. How easy, then, to adopt the view that it must be possible to run research according to the same principles as industrial development work. It may take years to discover that this method is sterile; in the meantime even those around this affluent institution will be convinced that research is a question of large investments and a rather slow "production".

Science has become something of a successor of religion to many people today thanks to the fact that it has expanded our knowledge in such a marvellous way in many important areas. Today we have highly developed theories on medical problems that were either unknown not very long ago, or were partly in the hands of charlatans, quacks, and representatives of folk medicine. For such reasons research has long been favoured in the competition for economic resources. The number of scientists active today far surpasses the accumulated number of those who have lived earlier during the history of mankind. Investments often yield a very good return, but this is true particularly of technical and applied research. This is where it is possible to solve problems by a massive mobilization of resources. Once you have found out how to make a car, or a computer, the thing is to improve the product all the time, and this does not require too many entirely new discoveries.

The situation is very different if the task is, for instance, to do something about unsolved medical problems, where so much is unknown that there is no basis for a broad development project on an industrial scale. We have seen that Richard Nixon's "declaration of war" against cancer in 1971 did not lead to any great breakthroughs. It is easier to land people on the moon than to "solve the riddle of cancer". Why? Well, the fundamentals of how to solve the problems of space flight have long been known, but when it comes to cancer we are still merely scratching the surface. One could also say that in space flight the causes behind whatever may happen are largely known; the problem is to invent technical solutions and test them. It is hardly surprising that we are better at developing or repairing machines that we have constructed ourselves!

The simplest thought model for understanding and discussing living organisms is to regard them as complicated machines. Up to a point this is an entirely natural and purposeful approach. We do in fact find it difficult to think in any other way, even when we realize that something better is needed. As a rule psychological theories also have a basically mechanistic structure, which is somewhat less evident. What might work better than this thought model? "Holistic medicine" has long been a positive catch-phrase, and can be seen as a warning against one-sidedness and "reductionism". It is never a good idea to oversimplify, acquiring only parts of available knowledge and then applying this in a cut-and-dried way. It is of course better to be able to consider several different mechanistic models simultaneously and intuitively apply the one that seems best in an individual case. This will introduce intuition as a tool, which is perhaps not regarded as "scientific" enough. The result may be excellent, but the method is not quite compatible with today's medical culture.

 The history of "the mechanics of humans" started, one might say, with Descartes, went on with La Mettrie's "L'Homme Machine", and continued up to the early psychologists, such as Wundt or Gall, the latter being the inventor of phrenology. Along this pathway we also find Gustav Theodor Fechner (1801-1887), who initiated the discipline of psychophysics, in which he among other things tried to define a physical regularity of our mental life.

The question of reductionism vs. holistic medicine would be less of a problem to a physician in clinical work if the gaps in our mechanistic knowledge were more manageable. Lack of knowledge is actually a considerable handicap, particularly in the treatment of chronic diseases. Many patients are aware of this and turn to alternative practitioners. How does medicine handle this last-mentioned problem?

Officially, established medicine is the place where only methods of proven value are used. It is sometimes obvious that a treatment works, for example when administration of an antibiotic is immediately followed by improvement of a severe infection and a laboratory test has shown that the particular infective agent is sensitive to the drug. Otherwise a method should in principle have been proven superior to placebo in trials following certain rules. Drugs are usually not difficult to test in this way, but it may be much harder with other kinds of treatment, of which acupuncture is one example. Effects that take considerable time to develop are of course also harder to test under "double-blind" conditions. An intervention should preferably have a prompt and distinct effect in order to avoid the problem of telling what is likely to be due to the treatment, and what might be a result of the natural and sometimes unpredictable course of the disease.

Alternative methods may accordingly qualify for a place among established treatments. This does not happen very often, largely due to the fact that great economic resources are required in order to perform a correct trial that is large enough to make a positive result likely. In many cases it is also impossible to devise a credible placebo, and then the patients cannot be kept in the dark about whether they are receiving active or sham treatment.

Two examples of drugs from alternative medicine that have recently become more or less accepted are St. John's Wort for depression, and glucosamine for certain joint diseases. Such drugs cannot be patented, and are therefore not particularly interesting to the pharmaceutical giants. If we are not allowed to have a "free sector", certain valuable drugs will therefore disappear from the market.

Treatments should in principle be chosen according to diagnosis, both in conventional medicine and in alternative practices, but the methods of diagnosis may differ considerably between these two. In such cases the diagnostic system of established medicine usually takes precedence, which is a further difficulty.

Alternative methods rarely carry the official stamp of approval, but in spite of this more and more people are turning to this sector for help. This is of course a matter of concern to established medicine, which has no ready explanation why its share of the market is diminishing. The idea that some alternative methods actually yield good results is mostly avoided. Instead recourse is had to a seemingly obvious explanation, namely that all the positive effects claimed to have resulted from alternative treatments are simply placebo effects.

Here certain rules that have become established since the middle of the 20th century are being exploited. It is regarded as self-evident that a method of treatment should be demonstrably better than placebo (or on a par with already accepted methods) in order to become officially approved. Clinical judgement or other informal ways of sifting evidence are not accepted.

From this point of view alternative methods that have not been subjected to the prescribed tests simply lack merits, and might hypothetically work only because patients believe in them. Whether due to prudence or not, the analysis is usually not carried further than this. It might sometimes be the case that a certain method has a positive effect of long duration, while placebo effects are by their nature short-lived. Quite often alternative treatments are given to people with chronic complaints that established medicine has totally failed to relieve. If so, it has a hollow ring, to say the least, if a favourable result of an alternative treatment is a priori attributed to some more or less undefined charismatic quality of the therapist. Still this kind of face-saving reasoning is often being used without hesitation.

”The otherwise so meticulously critical medical community has accordingly been living for decades with a picture of reality that has not at all been checked. Why? Could the reason perhaps be that this story was too useful in its original form?”

What size are placebo effects? This is a very poorly studied question, but according to a statement that has been cited innumerable times since the 1950s a placebo makes the patient feel better in about 35 % of cases (irrespective of diagnosis!). The source of this statement is a very famous survey by HK Beecher, "The powerful placebo" (1955)[3]. It would take 40 years before a German physician, Gunver Sophia Kienle, exposed this still very often quoted paper as full of careless mistakes and misinterpretations. The 35 % improvement rate is plainly a gross exaggeration.[4] The otherwise so meticulously critical medical community has accordingly been living for decades with a picture of reality that has not at all been checked. Why? Could the reason perhaps be that this story was too useful in its original form?

Today some further facts have been added to Kienle's revealing analysis, and the placebo effect has shrunk into something that can only just be shown to exist with available methods. I, for one, believe it does exist, but that its importance will have to be demonstrated in each disease, separately. It would be utterly remarkable if no great differences were to be found between different diseases and situations regarding the immediate and long-term influence of diffuse psychological factors. Lumping them all together, as has been done for nearly half a century, would simply be intellectually dishonest now that the state of the Emperor's attire has become generally known.

In May 2001 an article appeared that challenged the uncritical belief in strong placebo effects in a novel way.[5] Two researchers in Copenhagen had scrutinized all published studies where not only had a randomly selected group of patients been given placebo, but another group had been included that was given no treatment at all. They found 114 studies that satisfied their criteria. An effect of the patients' expectations should appear only in the placebo group. What the analysis found was that placebo was possibly somewhat better than no treatment when pain was studied, and also in other situations where the outcome was estimated by means of some kind of continuous scale. When improvement was measured as a simple "yes or no" there was no tendency for placebo to be more effective than no treatment.

This study carries great weight because it will be hard to find other data that contradict its results; there are few other ways to really estimate the strength of placebo effects. The method used here has long been known, but in spite of this it has not been applied in any systematic way.

Accordingly, placebo is not the strong factor it has long been believed to be, and the effects of this discovery will keep medicine busy for a number of years to come. Quite a lot of rethinking will be required in important areas.

Placebo theorizing has been built on fairly uncritical arguments like the following: "since it is possible that psychological factors can restore health (just consider the Biblical miracles!), we have to assume that this is a fairly common phenomenon". Beecher's 35 % fits this picture, and has simply been handed down, almost like an urban legend. But there is an equally unscientific mirror image of this that is very important:

"Since it is possible to develop various symptoms of disease via psychological mechanisms (just consider classical hysteria), then this phenomenon may very well be common enough to underlie all those manifestations of disease that medical science cannot at present explain in bodily terms."

The starting-point was a fairly rare and in several ways extreme illness that was earlier called Briquet's syndrome, or Briquet's Hysteria: a chronic hysterical disorder with a miscellany of predominantly somatic symptoms, which usually starts in the teens and is much more common in women. The following quote from a long article on sociopathy by the psychologist Linda Mealey tells us something about how Briquet's Hysteria was regarded before it became the object of conscious expansion:

[...] MacMillan and Kofoed (1984) presented a model of male sociopathy based on the premise that sexual opportunism and manipulation are the key features driving both the individual sociopath and the evolution of sociopathy. Harpending and Sobus (1987) posited a similar basis for the evolution and behavioral manifestations of Briquet's Hysteria in women, suggesting that this syndrome of promiscuity, fatalistic dependency, and attention-getting, is the female analogue, and homologue, of male sociopathy.[6]

Later this condition was given a new name, Somatization Disorder, and was subsequently inflated and changed into something that is now claimed to be very common, particularly in the waiting-rooms of general practitioners. Today it is common to talk about somatization as if this were something that is really understood. It is supposed to be a condition with psychological causes, where looking for somatic explanations is useless, or should rather be avoided, because it may make the patient even more preoccupied with bodily complaints.

Briquet's Hysteria is fortunately a rare destiny of sorts, where many things in life go wrong from an early age and the background is a serious personality disorder that disrupts social life. It is easy to see that this belongs to the domain of psychiatry. But the label "somatization" covers very much more, and is only remotely similar to the original. In any case, Briquet's Hysteria is the supposed "scientific" ancestor, yet it is not at all self-evident that even this disease has psychological causes, and today few psychiatrists are likely to maintain such a hypothesis.

We are consequently dealing with a concept that is a blend of old and new. It is hardly a natural category, but was pieced together and adapted by moving boundaries and stretching earlier assumptions about what is possible and conceivable. The result is a rather pretentious thing. Therefore it must be noted that there is no proof that it is justified to apply the label of somatization to such diverse conditions as electrosensitivity, amalgam illness, fibromyalgia, chronic fatigue syndrome, multiple chemical sensitivity, and several more illnesses that established medicine has so far failed to explain scientifically. There are explanations, to be sure, but none that satisfies the exacting standards officially applied to all medical problems!

Incidentally, the boundaries of the somatization syndrome largely coincide with the current limits of received medical knowledge. Physicians are being offered an easy-to-use "sink", which gives them an opportunity to appear as more knowledgeable than they are and saves them from the embarrassment of saying, "I don't know". When chain saws became common in forest work a new kind of disturbance of sensitivity and blood circulation started appearing in the hands of many lumbermen. Since the physicians didn't understand this phenomenon, they automatically started talking about psychosomatic disorders. But new insights arrived rather quickly, and soon vibration-related diseases became a well-known concept.

Today the medical lack of knowledge has come to the fore in other areas. Psychological theories now make it possible for doctors to keep their prestige intact in discussions with patients who for example insist that they get symptoms when exposed to certain electromagnetic or other electrical phenomena, or others who are convinced that their mercury-containing dental fillings have had a detrimental effect on their health.

”Somatization and placebo effect are in fact two sides of the same coin. Since the belief that placebo is a powerful factor has been found to lack support in facts, growing doubts can be expected to appear that somatization is really such a self-evident mechanism.”

Theories of somatization would probably be hard to disseminate in a population that is not already influenced by Freud's teachings on hysteria and other things, but it can be argued that placebo is at least as important as a background. Somatization and placebo effect are in fact two sides of the same coin. Since the belief that placebo is a powerful factor has been found to lack support in facts, growing doubts can be expected to appear that somatization is really such a self-evident mechanism. If more and more people start asking for scientific evidence, the inflated bladder will soon have been punctured.

Psychosomatic disease was a well-known concept long before somatization was promoted as a useful notion. For quite some time peptic ulcer, arterial hypertension, asthma, ulcerative colitis, migraine, tension headache, painful menstruation, and certain skin diseases were regarded as highly interesting from a psychosomatic point of view. For good reasons, most of the diseases enumerated are nowadays treated with predominantly somatic methods. Somatization is a somewhat diffuse offshoot from the tree of psychosomatics, a rather anomalous field where ordinary somatic diagnostics have been shelved for the time being. It should be noted that somatization is a psychiatric diagnosis, which physicians in somatic disciplines are encouraged to apply to patients with predominantly somatic complaints. Psychiatrists don't usually see "typical" cases of somatization, but are still regarded as guarantors of the concept.

It is not contradictory to say that someone has migraine and that this is due to somatization. But we hardly ever hear things like this. Somatization is a label you use when no conventional diagnosis seems to fit, and then it is usually the "final" diagnosis, no matter which somatic symptoms are involved. This greatly simplifies the discussion. Every conventional somatic diagnosis is connected to various causal theories and bodily mechanisms. By stating as an alleged fact that the causes are psychological in these cases, and making this the basis of a special kind of diagnosis, you will be spared the trouble of considering other possible explanations of those obviously somatic symptoms.

We are faced with a kind of paradox, since research into causes is one of the weakest branches in medical science, with many embarrassing gaps. It is hard to follow the steps of thought that have led to the abandonment of the highly esteemed diagnostic culture of medicine within a sector that is said to be large and important from the point of view of general practitioners. One condition is of course that the physician has tried hard, but failed to fit the patient into a conventional category. According to a logic that is not exactly crystal clear, this leads to the conclusion that there is no somatic illness in this case, since the physician has been unable to make an unexceptionable somatic diagnosis.

Diseases that are not found in today's book of somatic diagnoses will in other words have to be mental. At once the physician even "knows" what caused all the symptoms, which is more rarely the case in somatic medicine. We should bear in mind that a diagnosis is usually a classifying label for something of more or less unknown origin.

Why should somatization be a scientifically satisfying causal explanation of a great variety of symptoms? This is a very good question, which is heard all too rarely. Only a few decades ago, borreliosis (Lyme disease) was a "non-existent" disease, and of course many patients were then regarded as psychosomatic cases, just because of medical ignorance. It didn't matter that they often had acutely inflamed joints, as well as other indisputably somatic symptoms.[7]

The question is what medicine is doing in this back yard where lack of firm knowledge is converted into speculative assertions without any critical voices being heard. Many doctors would never let themselves be caught with woolly ideas about the possible causes of cancer, multiple sclerosis, or cardiovascular diseases. But just mention the word somatization, and they will feel free to engage in uncritical speculation. Provided, of course, that no conventional diagnosis is applicable.

Don't hesitate to ask questions about the scientific evidence behind this talk about somatization. Be persistent — the patient has a right to know, because a diagnosis of somatization is definitely not an innocuous label. It will close various doors and lead the planning of treatments into a track that usually gets you nowhere. But be prepared, "resistance" against the diagnosis will be taken as confirmation that it is correct! This last-mentioned oddity is a rather typical remnant from the heyday of psychoanalysis.

As a psychiatrist, I have to say it is rather distressing to witness how unconcernedly certain colleagues are abusing psychiatry, allowing other interests than those of the patients to take precedence, even though they are not actually being forced to do so. There is not even any kind of "scientific necessity" behind the whole thing, since the starting-point is an embarrassingly simple misunderstanding. If the somatic doctors feel that they cannot find any explanation or accepted diagnosis in a given case, this certainly does not mean that the causes must necessarily be psychological.

”In earlier historical cases where psychiatry has been abused it was often formally possible for those involved to excuse themselves on the grounds that they were against the abuse, but had to obey orders.”

Of course there are psychiatrists who are convinced that they are serving important social interests by making a psychiatric diagnosis in controversial cases — perhaps even more so if they have the privilege of teaching young physicians or dentists how to apply the same kind of reasoning. I know of course that many somatic doctors are looking down upon psychiatrists, but this kind of subservience at the expense of the patients will hardly raise the status of the specialty. In earlier historical cases where psychiatry has been abused it was often formally possible for those involved to excuse themselves on the grounds that they were against the abuse, but had to obey orders. This is not the case today. So far modern physicians don't see that anything questionable is going on, and nobody is giving them straightforward orders. Only a minority of psychiatrists are actively involved, and most of the patients who receive these diagnoses are seen by general practitioners.

But some people are being made aware that something must be wrong with the diffuse ideas behind so-called somatization. I am thinking first of all of the patients who have been subjected to a diagnosis of somatization. Many of them have been looking desperately for help during several years for an obscure disease that has deprived them of their working capacity and made life miserable. Such a history is particularly common among cases of so-called amalgam illness.

Mats Hanson[8] gives an excellent description of the health problems that may occur when our bodies are exposed to mercury, an extremely toxic metal for which there is no natural biological need. Until some 50 years ago physicians contributed to this exposure by using a number of drugs containing mercury. Today it is only the dentists who are actively and directly exposing our inner environments to this treacherous poison. Its many negative effects on living organisms have been known for centuries. We are also exposed to mercury from certain foodstuffs, but for bearers of dental amalgam ("silver fillings") the fillings are clearly the most important source.

Dental amalgam has been in general use for 150 years, and this fact alone may be taken as a kind of proof of its harmlessness. Dental trade organizations have been doing their best to reinforce this view. Criticism has not been lacking, but this has so far been contained within such limits that amalgam is still in use all around the world, with only limited restrictions. This filling material is cheap and easy to use, and the great expansion of dentistry since the middle of the 19th century would have been impossible without it. Today it is no longer indispensable, but many older dentists feel uncomfortable with modern materials.

It would be possible today to phase out amalgam, but the strategists within the trade fear the avalanche of litigation that would be set in motion if the side effects of amalgam were to be recognized. Above all this applies to the U.S.A., where the official attitudes to mercury-free dentists can be quite brutal. Dentists who make themselves conspicuous by removing amalgam fillings for health reasons run the risk of being de-licensed, and may in this way be forced to leave the country. American dentists are being supervised by state Dental Boards manned with dentists who usually are reliable adherents of the amalgam policy of the American Dental Association (ADA).

 Fact box: The ADA and mercury fillings
  
The ADA was founded in 1859 as an advocacy group for mercury use in dentistry, in opposition to the American Society of Dental Surgeons, which demanded that its members not use mercury fillings. The use of mercury in dental practice was at this time very much disputed (see Talbot 1882, Talbot 1883, Sheffield 1896), and members of the Society were even suspended for malpractice if they did use it. Today, with the ADA, it is the other way around: dentists who refuse to use mercury may be suspended for the same reason, malpractice.

In 1995 ADA featured a statement about amalgam on their web site, titled "Dental Amalgam: 150 years of Safety and Effectiveness", saying: "People are exposed to more total mercury from food, water and air than from the minuscule amounts of mercury vapor generated from amalgam fillings."

This is not true. The WHO estimates the average human daily dose of mercury from dental amalgam at 3.0-17.0 micrograms (mercury vapor), from fish and seafood at 2.3 micrograms (methylmercury), and from other foods at 0.3 micrograms (inorganic mercury), while the amounts from air and water are considered negligible. (The small sketch at the end of this fact box simply lines these figures up.)

In the present (October 2002) "ADA Statement on Dental Amalgam", presented on their web page, the passage quoted above has been removed, but it still appears on several dentists' sites on the web as well as on educational sites for school children. The ADA text has, however, kept most of its 1995 content. Thus, the ADA still claim amalgam to be "a mixture of metals such as silver, copper and tin, in addition to mercury, which chemically binds these components into a hard, stable and safe substance."

Stable and safe? According to the WHO the emission of mercury vapor is 3.0-17.0 micrograms per day. The Environmental Protection Agency has set the safety limit for mercury vapor exposure at 10 micrograms per day.

In a statement issued in July 2002 the ADA quote one of their research directors, Frederick Eichmiller, who says: "Similar to the way that sodium and chlorine (both hazardous in their pure state) combine to form ordinary table salt, the mercury in dental amalgam combines with other metals to form a stable dental filling." This is not true. A table salt crystal does not leak sodium or chlorine, but a piece of amalgam emits mercury vapor.

(Sources: Burton Goldberg, "Chronic Fatigue, Fibromyalgia & Environmental Illness"; the ADA web site (www.ada.org); World Health Organization, "Environmental Health Criteria 118: Inorganic Mercury", Geneva, 1991.)
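For readers who want to line the quoted figures up, here is a minimal arithmetic sketch; the numbers are those given by the WHO above, the comparison itself is mine, and air and water are set to zero since the fact box treats them as negligible.

```python
# WHO estimates quoted above, in micrograms of mercury per person per day.
# Low and high ends of the amalgam range; air and water treated as negligible.
sources_low = {"dental amalgam": 3.0, "fish and seafood": 2.3, "other foods": 0.3}
sources_high = {"dental amalgam": 17.0, "fish and seafood": 2.3, "other foods": 0.3}

for label, sources in (("low end", sources_low), ("high end", sources_high)):
    total = sum(sources.values())
    share = sources["dental amalgam"] / total
    print(f"{label}: total intake {total:.1f} ug/day, "
          f"amalgam accounts for {share:.0%} of it")
```

Even at the low end of the WHO range, amalgam accounts for over half of the estimated daily intake, and at the high end for the bulk of it, which is the opposite of what the 1995 ADA statement claimed.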
 

The dental profession has long been involved in a very serious conflict of interests by taking part in the scientific discussion of the general health risks of amalgam. It is very remarkable that this problem has not yet been recognized. This must be a knotty problem for the PR people of ADA and other trade organizations. What can be done to limit the damage once the obvious becomes generally known, namely that dentists have usually been taken for objective experts for 150 years when they have in fact been defending their own interests?

According to the ADA amalgam has absolutely no side effects apart from very rare allergies:

In 150 years of use, there have only been 100 documented cases of allergic reactions to amalgam in dental literature.[9]

It is of course absurd to claim that an implanted material is so extremely harmless to the human body. There are no pharmaceutical products with as few side effects as that, and the properties of mercury are definitely such that a whole array of adverse effects should be expected from chronic exposure. The number of people worldwide who have received amalgam fillings during 150 years probably far exceeds one billion. No reliable system for spotting and reporting side effects of amalgam has ever existed. The truth is of course that we have no idea of the extent of this problem. Patients don't consult their dentists for symptoms outside the oral area, and physicians usually pay no attention to their patients' dental fillings.

Since amalgam is continually leaking mercury and other metals, it is obvious that it will sometimes give rise to side effects. Unfortunately, it is equally obvious that dentists don't want to know more about this problem than is absolutely necessary. When criticism of the use of amalgam has become too embarrassing, various countries have appointed official expert groups, which have written more or less thorough reports. Their task has been to study whether amalgam gives rise to such risks that its use should be stopped. The answers have invariably been in the negative.

There is a gap in the logic here, at least in the way the official reports have been interpreted. Drugs have sometimes been evaluated in a corresponding way, and now and then this has resulted in withdrawals when serious side effects have occurred too often. But the fact that a drug is allowed to remain on the market does not mean that it is free from side effects. The medical profession (and as a rule also the patients) are aware that side effects do occur, and that a certain risk is being taken. Manufacturers and official agencies are responsible for the registration of those side effects. It is totally different with amalgam. The official answer to the question whether side effects occur is in principle a persistent no. It has been like this for 150 years, and therefore it is of course very important to stick to the tactics of denial even now. There is in fact no other choice if you are trying to build up a defense against litigation in the U.S.A.

”You can only acquire thorough knowledge of side effects by using case reports. If 'scientific evidence' were required, most of the observations on side effects in standard books like the Physicians' Desk Reference would have to be scrapped.”

The expert groups that are believed to have officially denied that amalgam has any side effects have as a rule had dentists among their members or staff. Not surprisingly, findings from "negative" epidemiological studies have been included as important evidence. Toxicologists have estimated whether mercury from amalgams will give rise to toxic concentrations in the body. Experts with a background in handling reports of side effects of drugs have not been involved, as far as I know. This is an obvious problem.

It is remarkable that, from a practical point of view, toxicology no longer has very much in common with the science of side effects. The methods are quite different. You can only acquire thorough knowledge of side effects by using case reports, a method that is now officially disregarded. If "scientific evidence" were required, most of the observations on side effects in standard books like the Physicians' Desk Reference would have to be scrapped. It has not always been like this. The man who has been called the father of modern toxicology, Louis Lewin (1850-1929), started his career by publishing a book on the side effects of drugs in 1881.[10] Toxicology was later to dominate his published work, but for Lewin there never developed a gap between the two branches of his subject. His famous textbook of toxicology appeared in its fourth edition in 1929, the year of his death.[11] It has been reprinted unchanged ever since, and is still available! Lewin's attitude to amalgam fillings was clearly critical; see the note for a translation.[12]

Side effects typically appear at doses and concentrations that are not toxic. Of course drugs should not be given in toxic doses if this can be avoided; otherwise a majority of patients would be made ill by the treatment. The peculiar thing about side effects is individual sensitivity, which may be rather rare but can also underlie very serious reactions. A drug may have many different, rare side effects, and when they are added together the sum total is a far from trifling problem, which will have to be mapped by means of case reports. Since individual side effects are often as rare as less than one case per thousand patients treated, it is practically useless to try to find them with epidemiological methods.
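
To see why, consider a minimal back-of-the-envelope sketch of my own (it is an illustration, not taken from any of the sources cited here): if a side effect strikes one patient in a thousand, a study of a few hundred subjects will quite often contain no affected patient at all.

# Illustration only: probability that a side effect with an assumed incidence
# of 1 per 1,000 treated patients shows up at all in a study of n patients.

def p_at_least_one_case(incidence: float, n_patients: int) -> float:
    """Chance of seeing at least one affected patient, assuming
    independent cases occurring at the given incidence."""
    return 1.0 - (1.0 - incidence) ** n_patients

for n in (200, 500, 1000, 5000):
    print(f"{n:5d} patients: P(at least one case) = "
          f"{p_at_least_one_case(1 / 1000, n):.2f}")

Even in the larger samples, the handful of expected cases is easily swamped by ordinary background morbidity, which is why systematically collected case reports have to carry the burden of detection.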

What all systems for reporting side effects have in common is that they don't fulfill the requirements of strictly "scientific" proof that are otherwise the rule in medicine. In spite of this, all the existing systems yield satisfactory results in the drug area. There are international guidelines for assessing the causal connection, and an important ingredient in these is the observations that can be made when a patient stops taking a drug. It stands to reason that disappearance of symptoms after the exposure has ceased will considerably strengthen suspicions of a causal link. In the literature the term is "dechallenge", and explicit questions about such observations are included in the forms used to report side effects.[13]
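
As a rough illustration of the role dechallenge plays in such an assessment, here is a hypothetical sketch of my own. The category labels loosely echo the WHO-UMC terminology referred to in note 13, but the decision rules are deliberately simplified and are not the official criteria.

# Hypothetical sketch (not the official WHO-UMC algorithm): how dechallenge
# and rechallenge observations might strengthen a causality assessment.

from dataclasses import dataclass

@dataclass
class CaseReport:
    plausible_time_course: bool    # symptoms began after exposure started
    improved_on_dechallenge: bool  # symptoms abated when exposure stopped
    recurred_on_rechallenge: bool  # symptoms returned on renewed exposure
    other_likely_cause: bool       # a competing explanation exists

def assess(report: CaseReport) -> str:
    """Return a simplified causality category for a suspected side effect."""
    if not report.plausible_time_course:
        return "unlikely"
    if report.improved_on_dechallenge and report.recurred_on_rechallenge:
        return "certain"
    if report.improved_on_dechallenge and not report.other_likely_cause:
        return "probable"
    return "possible"

# Example: symptoms fit the exposure in time and clear up after removal.
print(assess(CaseReport(True, True, False, False)))  # prints "probable"

A reporting form that never asks what happened after the suspected material was removed deprives the assessor of precisely the information that lifts a case above "possible", which is the point the next paragraph makes about the Scandinavian registries.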

More often than not, side effects of amalgam develop slowly and without any distinct relation to such things as dental work. In actual practice it is impossible to accumulate knowledge of these side effects without information on what happens after the amalgam has been removed, in other words after "dechallenge". But what is self-evident when reports are collected on the side effects of drugs is unknown or unthinkable when dentists undertake to build a registry for side effects of dental materials. Norway was the first country to start such a registry, followed by Sweden a few years later. This Scandinavian concept is now being promoted as a model for the European Union. But there is no question in the forms about what happened after the suspected material was removed! As far as I know, not a single case of non-allergic side effects of amalgam outside the oral region has been accepted in either country; they have all been labeled "unclassifiable".

Accordingly, the ADA on the other side of the Atlantic has so far not had to worry about any surprises from their colleagues in Scandinavia that might upset their plans for the defense in the lawsuits that are now developing. In Sweden odontology may lose control of the side effects registry after the Medical Products Agency took over the supervision of medical devices earlier this year. The financially strong ADA has every reason to give massive PR assistance in the background. The obvious goal would be to prolong the period of 150 years during which organized dentistry has had the privilege of controlling inquiries into the health effects of its own activities, including the part that lies outside its competence.

After spending my professional life in the medical culture I know perfectly well that much prestige is attached to the idea that our scientific standards are high. When an insider like myself keeps looking behind the scenes without shunning the contradictions, the glossy official picture gradually becomes cloudier and more untenable. I am sure my colleagues believe they understand science, but still, as we have seen, they may be taken in by fairly simple and superficial messages that obviously serve purely "mundane" goals. The examples discussed here seem to concern the medical profession's traditional position of respect and authority on the one hand, and our increasingly important relations to the powers that be in the economic sphere on the other. These two goals cannot both be reached at the same time, because they contradict each other, but it will take some time before a majority of physicians realize this.

I believe it is important for the health-conscious public to become aware of those doubtful points that are directly relevant to the efforts of individual people to improve and maintain their own health. Today many of us feel that it is necessary to gain knowledge that goes far beyond what is being offered in established medicine. I have tried to discuss certain areas where the official view apparently lacks a firm contact with reality. When all is said and done, reality is the final arbiter, even when we are dealing with science.

The amalgam question is something of a prime example of how a propaganda version of a problem may deceive a group of highly educated people who regard themselves as scientifically alert. At least nine out of ten doctors will believe that it has been scientifically established that you cannot become ill from amalgam. Nevertheless, every doctor knows that all drugs have side effects. When mercury was common in drugs it of course had side effects, more than most other drugs, as has repeatedly been described in the literature. Today's doctors evidently cannot draw the logical conclusion from this, but trust the absurd assertion that mercury from amalgam is completely free from side effects, apart from rare local complaints in the oral cavity.

It is useful for you and everybody else to be able to recognize when a scientific attitude is merely a thin varnish, and to make plain that you will not be impressed by specious arguments. In this way we can also help physicians to wake up and start doing something about the situation.


Notes:

1. However, an interesting theme issue of Läkartidningen (the Journal of the Swedish Medical Association) 14 Nov 2001 (vol 98, no. 46) may be a signal that the Swedish medical profession is heading towards greater openness in these matters. [Back]

2. James Le Fanu, "The Rise and Fall of Modern Medicine", Little, Brown & Company, London, 1999; page 49 of the paperback edition (Abacus, 2000). [Back]

3. Journal of the American Medical Association (JAMA) 1955; 159: 1602-6. [Back]

4. Kienle GS, Der sogenannte Placeboeffekt, Schattauer, Stuttgart and New York, 1995. [Back]

5. Hrobjartsson A, Gøtzsche PC, Is the placebo powerless? An analysis of clinical trials comparing placebo with no treatment. N Engl J Med 2001; 344: 1594-602. [Back]

6. See http://cogprints.ecs.soton.ac.uk/bbs/Archive/bbs.mealey.html [Back]

7. The following summary is taken from an article by Peter Wahlberg in Nordisk Medicin 1993; 108(5):157-8:

"The message from Lyme
The background to the discovery of Lyme disease teaches a salutary lesson. The symptoms and signs of this disease had been observed by doctors for a century, particularly in the Scandinavian countries, without anybody being able to draw the right conclusions. The first patients were identified in the USA by their relatives or by themselves. Recognition of their plight by the medical profession was chiefly due to the patients' pertinacity. We must remember to pay attention to what patients tell us; they may often be right, even when they seem to be wrong. Where fact and theory are incompatible, it is theory, not fact, that needs to be amended. In all likelihood, we all from time to time observe disorders in our patients that are inconsistent with established scientific models, but which we nevertheless attempt to squeeze into these models. Such an approach is not uncommon in the history of medicine. The message from Lyme calls for humility and reflexion."
[Back]

8. See Mats Hanson, "A hundred and fifty years of misuse of mercury and dental amalgam", The Art Bin, 2002. [Back]

9. This quote was taken from an ADA news release published on the web, probably in 1995. In May-June 2001 the ADA website was subject to a total overhaul (a major lawsuit concerning amalgam had been announced), and during this time a lot of interesting material that had been available for years was removed. See also the fact box. [Back]

10. Louis Lewin, "Die Nebenwirkungen der Arzneimittel", A. Hirschwald, Berlin, 1881. New editions 1893, 1898 and 1909. All editions are now quite expensive in the second-hand market. Translations: English 1882 and 1883, Russian 1895. In the 1893 edition more than 70 pages are devoted to the side effects of various mercury compounds! [Back]

11. Louis Lewin, "Gifte und Vergiftungen — Lehrbuch der Toxikologie", fourth edition, Stilke, Berlin, 1929. The title of the first three editions (1885, 1897, 1899) was Lehrbuch der Toxikologie. The edition now in print is the sixth, published by Karl F. Haug, Heidelberg, 1992. [Back]

12. Below is my translation from Lewin's Gifte und Vergiftungen (1992), page 255:

From amalgam fillings, especially copper amalgam, the metal may be vaporized into the oral cavity, or may in some chemical form or other pass from the dental cavity to be absorbed by the circulation, giving rise to a chronic intoxication. Apart from local oral lesions, this manifests itself in a great variety of disturbances of bodily organs, particularly as a loss of normal functions of the brain and nervous system. Such disturbances are not always due to an increased sensitivity to mercury. Ever since the beginning of this century I have not only taught this in my lectures, but I have drawn the consequences of this knowledge when amalgam-bearing people came to me with obscure symptoms of nervous illness. I then always recommended the removal of such fillings, which resulted in improvements, even in professors [footnote: I informed Prof. Stock about these things, and he regained his health.].
[Back]

13. See further details on the web: http://www.who-umc.org/defs.html, http://www.fda.gov/medwatch/report/cberguid/define.htm, http://www.medsafe.govt.nz/Profs/adverse/causality.htm. [Back]


Copyright © Per Dalén, 2003.

(Fact boxes interspersed in the main text were written or compiled by the Art Bin editor.)



