EXCERPTS FROM OF BANGS AND BRAIDS -- Cosmology's Mathematical Abstractions
1. (From AFTER THE BOMB: THE BIRTH OF THE BANG)
Gamow's Nuclear Pressure-Cooker
In 1946, Russian-born George Gamow, who had worked on the theory of nuclear synthesis in the 1930s and been involved in the Manhattan Project, conjectured that if an atomic bomb could, in a fraction of a millionth of a second, create elements detectable at the test site in the desert years later, then perhaps an explosion on a colossal scale could have produced the elements making up the universe as we know it. Given high enough temperatures, the range of atomic nuclei found in nature could be built up through a succession starting with hydrogen, the lightest, which consists of one proton. Analysis of astronomical spectra showed the universe to consist of around 75 percent hydrogen, 24 percent helium, and the rest a mix of the various heavier elements, continuing through lithium, beryllium, boron, and so on. Although all of the latter put together formed just a trace in comparison to the amount of hydrogen and helium, earlier attempts at constructing a theoretical model had predicted far less than was observed--the discrepancy being on the order of ten orders of magnitude in the case of intermediate-mass elements such as carbon, nitrogen, and oxygen, and getting rapidly worse (in fact, exponentially) beyond those.
Using point-like initial conditions of the GRT equations, Gamow, working with Ralph Alpher and Robert Herman, modeled the explosion of a titanic super-bomb in which, as the fireball expanded, the rapidly falling temperature would pass a point where the heavier nuclei formed from nuclear fusions in the first few minutes would cease being broken down again. The mix of elements that existed at that moment would thus be "locked in," providing the raw material for the subsequently evolving universe. By adjusting the parameters that determined density, Gamow and his colleagues developed a model that within the first 30 minutes of the Bang yielded a composition close to that which was observed.
Unlike Lemaître's earlier proposal, the Gamow theory was well received by the scientific community, particularly the new generation of physicists versed in nuclear technicalities, and became widely popularized. Einstein had envisaged a universe that was finite in space but curved and hence unbounded, as the surface of a sphere is in three dimensions. The prevailing model now became one that was also finite in time. Although cloaked in the language of particle physics and quantum mechanics, the return to what was essentially a medieval world-view was complete, raising again all the metaphysical questions about what had come before the Bang. If space and time themselves had come into existence along with all the matter and energy of the universe as some theorists maintained, where had it all come from? If the explosion had suddenly come about from a state that had endured for some indefinite period previously, what had triggered it? It seemed to be a one-time event. By the early 1950s, estimates of the total amount of mass in the universe appeared to rule out the solutions in which it oscillated between expansion and contraction. There wasn't enough to provide sufficient gravity to halt the expansion, which therefore seemed destined to continue forever. What the source of the energy might have been to drive such an expansion--exceeding all the gravitational energy contained in the universe-- was also an unsolved problem.
Hoyle and Supernovas as "Little Bang" Element Factories
Difficulties for the theory mounted when the British astronomer Fred Hoyle showed that the unique conditions of a Big Bang were not necessary to account for the abundance of heavy elements; processes that are observable today could do the job. It was accepted by then that stars burned by converting hydrogen to helium, which can take place at temperatures as low as 10 million degrees--attainable in a star's core. Reactions beyond helium require higher temperatures, which Gamow had believed stars couldn't achieve. However, the immense outward pressure of fusion radiation balanced the star's tendency to fall inward under its own gravity. When the hydrogen fuel was used up, its conversion to helium would cease, upsetting the balance and allowing the star to collapse. The gravitational energy released in the collapse would heat the core further, eventually reaching the billion degrees necessary to initiate the fusion of helium nuclei into carbon, with other elements appearing through neutron capture along the lines Gamow had proposed. A new phase of radiation production would ensue, arresting the collapse and bringing the star into a new equilibrium until the helium was exhausted. At that point another cycle would repeat in which oxygen could be manufactured, and so on through to iron, in the middle of the range of elements, which is as far as the fusion process can go. Elements heavier than iron would come about in the huge supernova explosions that would occur following the further collapse of highly massive stars at the end of their nuclear burning phase--"little bangs" capable of supplying all the material required for the universe without need of any primordial event to stock it up from the beginning.
The Steady-State Theory
Having dethroned the Big Bang as the only mechanism capable of producing heavy elements, Hoyle went on, with Thomas Gold and Herman Bondi, to propose an alternative that would replace it completely. The Hubble redshift was still accepted by most as showing that the universe we see is expanding away in all directions to the limits of observation. But suppose, Hoyle and his colleagues argued, that instead of this being the result of a one-time event, destined to die away into darkness and emptiness as the galaxies recede away from each other, new matter is all the time coming into existence at a sufficient rate to keep the overall density of the universe the same. Thus, as old galaxies disappear beyond the remote visibility "horizon" and are lost, new matter being created diffusely through all of space would be coming together to form new galaxies, resulting in a universe populated by a whole range of ages--analogous to a forest consisting of all forms of trees, from young saplings to aging giants.
The rate of creation of new matter necessary to sustain this situation worked out at one hydrogen atom per year in a cube of volume measuring a hundred meters along a side, which would be utterly undetectable. Hence, the theory was not based on any hard observational data. Its sole justification was philosophical. The long-accepted "cosmological principle" asserted that, taken at a large-enough scale, the universe looked the same anywhere and in any direction. The Hoyle-Bondi-Gold approach introduced a "perfect cosmological principle" extending to time also, making the universe unchanging. It became known, therefore, as the Steady State theory.
The Steady State model had its problems too. One in particular was that surveys of the more distant galaxies, and hence ones seen from an earlier epoch, showed progressively more radio sources; hence the universe hadn't looked the same at all times, and so the principle of its maintaining a steady, unvarying state was violated. But it attracted a lot of scientists away from the Big Bang fold. The two major theories continued to rival each other, each with its adherents and opponents. And so things remained through into the sixties.
Then, in 1965, two scientists at Bell Telephone Laboratories, Arno Penzias and Robert Wilson, after several months of measurement and double-checking, confirmed a faint glow of radiation emanating evenly from every direction in the heavens with a frequency spectrum corresponding to a temperature of 2.7°K. This was widely acclaimed and publicized as settling the issue in favor of the Big Bang theory.
The Cosmic Background Radiation: News but Nothing New
Big Bang had been wrestling with the problem of where the energy came from to drive the expansion of the "open" universe that earlier observations had seemed to indicate--a universe that would continue expanding indefinitely due to there being too little gravitating mass to check it. Well, suppose the estimates were light, and the universe was in fact just "closed"--meaning that the amount of mass was just enough to eventually halt the expansion, at which point everything would all start falling in on itself again, recovering the energy that had been expended in driving the expansion. This would simplify things considerably, making it possible to consider an oscillating model again, in which the current Bang figures as simply the latest of an indeterminate number of cycles. Also, it did away with all the metaphysics of asking who put the match to whatever blew up, and what had been going on before.
A group at Princeton looked into the question of whether such a universe could produce the observed amount of helium, which was still one of Big Bang's strong points. (Steady State had gotten the abundance of heavier elements about right but was still having trouble accounting for all the helium.) They found that it could. With the conditions adjusted to match the observed figure for helium, expansion would have cooled the radiation of the original fireball to a diffuse background pervading all of space that should still be detectable--at a temperature of 30°K. Gamow's collaborators, Alpher and Herman, in their original version had calculated 5°K for the temperature resulting from the expansion alone, which they stated would be increased by the energy production of stars, and a later publication of Gamow's put the figure at 50°K.
The story is generally repeated that the discovery of the 2.7°K microwave background radiation confirmed a prediction of the Big Bang theory. In fact, the figures predicted were an order of magnitude higher. We're told that those models were based on an idealized density somewhat higher than that actually reported by observation, and (mumble-mumble, shuffle-shuffle) it's not really too far off when you allow for the uncertainties. In any case, the Big Bang proponents maintained, the diffuseness of this radiation across space, emanating from no discernible source, meant that it could only be a relic of the original explosion.
It's difficult to follow the insistence on why this had to be so. A basic principle of physics is that a structure that emits wave energy at a given frequency (or wavelength) will also absorb energy at the same frequency--a tuning fork, for example, is set ringing by the same tone that it sounds when struck. An object in thermal equilibrium with--i.e. that has reached the same temperature as--its surroundings will emit the same spectrum of radiation that it absorbs. Every temperature has a characteristic spectrum, and an ideal, perfectly black body absorbing and re-radiating totally is said to be a "blackbody" radiator at that temperature. The formula relating the total radiant energy emitted by a blackbody to its temperature was found experimentally by Stefan in 1879 and derived theoretically by Boltzmann in 1884. Thus, given the energy density of a volume, it was possible to calculate its temperature.
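The Stefan-Boltzmann relation described above can be put into a few lines of code. This is a minimal sketch using the standard physical constants; the 2.7 K example temperature is taken from the measurement discussed in this chapter:

```python
# Stefan-Boltzmann relation: a blackbody's radiant energy density scales as
# the fourth power of its temperature, u = a * T**4, with a = 4*sigma/c.
SIGMA = 5.670374e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)
C = 2.99792458e8       # speed of light, m/s
A_RAD = 4 * SIGMA / C  # radiation constant, J / (m^3 K^4)

def energy_density(temp_k):
    """Blackbody radiation energy density (J/m^3) at temperature temp_k (K)."""
    return A_RAD * temp_k ** 4

def temperature(u):
    """Temperature (K) implied by a radiation energy density u (J/m^3)."""
    return (u / A_RAD) ** 0.25

u = energy_density(2.7)
print(f"energy density at 2.7 K: {u:.2e} J/m^3")   # ~4.02e-14 J/m^3
print(f"temperature recovered:   {temperature(u):.2f} K")
```

The inverse function is the direction the text describes: given the measured energy density of a volume of space, the fourth root yields its temperature.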
Many studies had applied these principles to estimating the temperature of "space." These included Guillaume (1896), who obtained a figure of 5-6°K, based on the radiative output of stars; Eddington (1926), 3.18°K; Regener (1933), 2.8°K, allowing also for the cosmic ray flux; Nernst (1938), 0.75°K; Herzberg (1941), 2.3°K; Finlay-Freundlich (1953 & 1954), using a "tired light" model for the redshift (light losing energy due to some static process not involving expansion), 1.9°K to 6°K. The significant thing about all these results is that they were based on a static, non-expanding universe, yet consistently give figures closer to the one that Penzias and Wilson eventually measured than any of the much-lauded predictions derived from Big Bang models. (It also seems to me that radiation from the Bang would long ago have escaped outward, not be coming inward, but apparently it's not done to ask such questions.)
Furthermore, the discrepancy was worse than it appeared. The amount of energy in a radiation field is proportional to the fourth power of the temperature, which means that the measured background field was thousands of times less than was required by the theory. Translated into the amount of mass implied, this measurement made the universe even more diffuse than Gamow's original, non-oscillating model, not denser, and so the problem that oscillation had been intended to solve--where the energy driving the expansion had come from--became worse instead of better.
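To see where "thousands of times less" comes from, one can apply the fourth-power law directly to the predicted and measured temperatures quoted in this chapter (5°K, 30°K, and 50°K against the measured 2.7°K):

```python
# Energy in a radiation field goes as T**4, so the ratio of a predicted
# background temperature to the measured 2.7 K gives the energy shortfall.
def energy_ratio(t_predicted, t_measured=2.7):
    return (t_predicted / t_measured) ** 4

# temperatures quoted in the text: Alpher-Herman, the Princeton group, Gamow
for t in (5, 30, 50):
    print(f"{t} K prediction: field {energy_ratio(t):,.0f}x stronger than measured")
```

The 30°K Princeton figure implies a field roughly 15,000 times more energetic than the one actually found, which is the discrepancy the paragraph above describes.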
An oscillating model was clearly ruled out. But with some modifications to the gravity equations--justified by no other reason than that they forced an agreement with the measured radiation temperature--the open-universe version could be preserved, and at the same time made to yield abundances for helium, deuterium, and lithium which again were close to those observed. The problem of what energy source propelled this endless expansion was still present--in fact exacerbated--but quietly forgotten. Excited science reporters had a story, and the New York Times carried the front-page headline SIGNALS IMPLY A BIG BANG UNIVERSE.
Resting upon three pillars of evidence--the Hubble redshifts, light-element abundance, and the existence of the cosmic background radiation--Big Bang triumphed and became what is today the accepted standard cosmological model.
Quasar and Smoothness Enigmas. Enter, the Mathematicians
At about this time, a new class of astronomical objects was discovered that came to be known as quasars, with redshifts higher than anything previously measured, which by the conventional interpretation of redshift made them the most distant objects known. To be as bright as they appeared at those distances they would also have to be astoundingly energetic, emitting up to a hundred thousand times the energy radiated by an entire galaxy. The only processes that could be envisaged as capable of pouring out such amounts of energy were ones resulting from intense gravity fields produced by the collapse of enormous amounts of mass. This was the stuff of General Relativity, and with Big Bang now the reigning cosmology, the field became dominated by mathematical theoreticians. By 1980, around ninety-five percent of papers published on the subject were devoted to mathematical models essentially sharing the same fundamental assumptions. Elegance, internal consistency, and preoccupation with technique replaced grounding in observation as modelers produced equations from which they described in detail and with confidence what had happened in the first few fractions of a millionth of a second of time, fifteen billion years ago. From an initial state of mathematical perfection and symmetry, a new version of Genesis was written, rigorously deducing the events that must have followed. That the faith might be . . . well, wrong, became simply inconceivable.
But in fact, serious disagreements were developing between these idealized realms of thought and what astronomers surveying reality were actually finding. For one thing, despite all the publicity it had been accorded as providing the "clincher," there was still a problem with the background radiation. Although the equations could be made to agree with the observed temperature, the observed value itself was just too uniform--everywhere. An exploding ball of symmetrically distributed energy and particles doesn't form itself into the grossly uneven distribution of clustered matter and empty voids that we see. It simply expands as a "gas" of separating particles becoming progressively more rarified and less likely to interact with each other to form into anything. To produce the galaxies and clusters of galaxies that are observed, some initial unevenness would have to be present in the initial fireball to provide the focal points where condensing matter clouds would gravitate together and grow. Such irregularities should have left their imprint as hot spots on the background radiation field, but it wasn't there. Observation showed the field to be smooth in every direction to less than a part in ten thousand, and every version of the theory required several times that amount.
Another way of stating this was that the universe didn't contain enough matter to have provided the gravitation for galaxies to form in the time available. There needed to be a hundred times more than observation could account for. But it couldn't simply be ordinary matter lurking among or between the galaxies in some invisible form, because the abundance of elements also depended critically on density, and increasing it a hundredfold would upset one of the other predictions that the Big Bang rested on, producing far too much helium and not enough deuterium and lithium. So another form of matter--"dark matter"--was assumed to be there with the required peculiar properties, and the cosmologists turned to the particle physicists, who had been rearing their own zoo of exotic mathematical creations, for entities that might fill the role. Candidates included heavy neutrinos, axions, a catch-all termed Weakly Interacting Massive Particles, or "WIMPS," photinos, strings, superstrings, quark nuggets, none of which had been observed, but had emerged from attempts at formulating unified field theories. The one possibility that was seemingly impermissible to consider was that the reason why the "missing mass" was missing might be that it wasn't there.
Finally, to deal with the smoothness problem and the related "flatness" problem, the notion of "inflation" was introduced, whereby the universe began in a superfast expansion phase, doubling in size every 10⁻³⁵ seconds until 10⁻³³ seconds after the beginning, at which point it consisted of regions flung far apart but identical in properties as a result of having all been born together, whereupon the inflation suddenly ceased and the relatively sluggish Big Bang rate of expansion took over and has been proceeding ever since.
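Taking the doubling figures quoted above at face value, the total growth factor implied by inflation is easy to compute:

```python
# Growth factor implied by inflation, using the figures quoted in the text:
# the universe doubles in size every 1e-35 s, from the beginning until 1e-33 s.
doubling_time = 1e-35   # seconds per doubling (figure from the text)
end_time = 1e-33        # seconds, when inflation stops
n_doublings = end_time / doubling_time   # = 100 doublings
growth = 2.0 ** n_doublings              # total expansion factor
print(f"{n_doublings:.0f} doublings -> expansion by a factor of {growth:.2e}")
```

A hundred doublings multiply the size by 2¹⁰⁰, about 1.3 × 10³⁰, all within a thousandth of a billionth of a billionth of a billionth of a second.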
Let's pause for a moment to reflect on what we're talking about here. We noted in the section on Evolution that a picosecond, 10⁻¹² seconds, is about the time light would take to cross the width of a human hair. If we represent a picosecond by the distance to the nearest star, Alpha Centauri (4.3 light-years), then, on the same scale, 10⁻³⁵ seconds would measure around half a micron, or a quarter the width of a typical bacterium--far below the resolving power of the human eye. Fine-tuning of these mathematical models reached such extremes that the value of a crucial number expressed as a part in 58 decimal places at an instant some 10⁻⁴³ seconds into the age of the universe made the difference between its collapsing or dispersing in less than a second.
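The scale-model arithmetic in this paragraph can be checked directly; the only inputs are the length of a light-year and the quantities quoted above:

```python
# Scale-model check: represent one picosecond (1e-12 s) by the distance to
# Alpha Centauri (4.3 light-years); how big is 1e-35 s on the same scale?
LY_M = 9.4607e15                 # meters in one light-year
model_length = 4.3 * LY_M        # the distance standing in for one picosecond
scale = model_length / 1e-12     # model meters per second of real time

size_1e35 = 1e-35 * scale        # model size of 1e-35 seconds
print(f"1e-35 s -> {size_1e35 * 1e6:.2f} microns on this scale")  # ~0.41 microns
```

The result comes out at about 0.4 micron, confirming the "around half a micron" figure in the text.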
But theory had already dispersed out of sight from reality anyway. By the second half of the 1980s, cosmic structures were being discovered that could never have come into being since the time of the Big Bang, whatever the inhomogeneities or fast footwork in the first few moments to smooth out the background picture. The roughly spherical, ten-million-or-so-light-year-diameter clusters of galaxies themselves turned out to be concentrated in ribbonlike agglomerations termed superclusters, snaking through space for perhaps several hundred million light-years, separated by comparatively empty voids. And then the superclusters were found to be aligned to form planes, stacked in turn as if forming parts of still larger structures--vast sheets and "walls" extending for billions of light-years, in places across a quarter of the observable universe. The problem for Big Bang is that relative to the sizes of these immense structures, their component units are moving too slowly for these regularities to have formed in the time available. In the case of the largest void and shell pattern identified, at least 150 billion years would have been needed--eight times the longest age that Big Bang allows. New ad-hoc patches made their appearance: light had slowed down, so things had progressed further than we were aware; another form of inflation had accelerated the formation of the larger, early structures, which had then been slowed down by hypothetical forces invented for the purpose. But tenacious resistance persisted to any suggestion that the theory could be in trouble.
Yet the groundwork for an alternative picture that perhaps explains all the anomalies in terms of familiar, observable processes had been laid in the 1930s. Investigations of the electrical activity of the Sun and its connection with the auroras, or "northern lights," had led a small group of scientists to develop a "plasma cosmology" that saw the universe as primarily electrical in nature, shaped at the larger scale by electromagnetic forces--a not unreasonable position to take, since 99 percent of the evident matter in it takes the form of electrically charged ions and electrons. Their views had been largely ignored or dismissed because they didn't follow the reigning theory that treats gravity as the sole influence governing the form of the cosmos--even though gravity is forty orders of magnitude weaker than the electric force between charged bodies.
2. (From REDSHIFT WITHOUT EXPANSION AT ALL)
Molecular Hydrogen -- The Invisible Energy-Absorber
The Steady State and Klein's antimatter theories both accepted the conventional interpretation of the redshift but sought causes for it other than the Big Bang. But what if it has nothing to do with expansion of the universe at all? We already saw that Finlay-Freundlich's derivation of the background temperature in the early fifties considered a "tired light" explanation that Born analyzed in terms of photon-photon interactions. More recently, the concept has featured in the work of Paul Marmet, a former physicist at the University of Ottawa, and before that, senior researcher at the Herzberg Institute of Astrophysics of the National Research Council of Canada.
It has long been known that space is permeated by hydrogen, readily detectable by its 21 centimeter emission line, or absorption at that wavelength from background sources. This signal arises from the spin of the hydrogen atom. Monatomic hydrogen, however, is extremely unstable and reacts promptly to form diatomic hydrogen molecules, H2. Molecular hydrogen is very stable, and once formed does not easily dissociate again. Hence, if space is pervaded by large amounts of atomic hydrogen, then molecular hydrogen should exist there too--according to the calculations of Marmet and his colleagues, building up to far greater amounts than the atomic kind. Molecular hydrogen, however, is extraordinarily difficult to detect--in fact, it is the most transparent of diatomic molecules. But in what seems a peculiar omission, estimates of the amount of hydrogen in the universe have traditionally failed to distinguish between the two kinds and report only the immediately detectable atomic variety. Using the European Space Agency's Infrared Space Observatory, E.A. Valentijn and P.P. van der Werf recently confirmed the existence of huge amounts of molecular hydrogen in NGC891, a galaxy seen edge-on, 30 million light-years away. This discovery was based on new techniques capable of detecting the radiation from rotational state transitions that occur in hydrogen molecules excited to relatively hot conditions. Cold molecular hydrogen is still undetectable, but predictions from observed data put it at 5 to 15 times the amount of atomic hydrogen that has long been confirmed. This amount of hitherto invisible hydrogen in the universe would have a crucial effect on the behavior of light passing through it.
Most people having a familiarity with physics have seen the demonstration of momentum transfer performed with two pendulums, each consisting of a rod weighted by a ball, suspended adjacently such that when both are at rest the balls just touch. When one pendulum is moved away and released, it stops dead on striking the other, which absorbs the momentum and flies away in the same direction as the first was moving. The collision is never perfectly "elastic," meaning that some of the impact energy is lost as heat, and the return swing of the second pendulum will not quite reverse the process totally, bringing the system eventually to rest.
Something similar happens when a photon of light collides with a molecule of a transparent medium. The energy is absorbed and re-emitted in the same, forward direction, but with a slight energy loss--about 10⁻¹³ of the energy of the incoming photon. (Note this is not the same as the transverse "Rayleigh scattering" that produces angular dispersion and gives the sky its blueness, which is far less frequent. The refractive index of a transparent medium is a measure of light's being slowed down by successive forward re-emissions. In the case of air it is 1.0003, indicating that photons traveling 100 meters are delayed by the equivalent of 3 centimeters of path, corresponding to about a billion collisions. But there is no noticeable fuzziness in images at such distances.)
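The 3-centimeter figure follows directly from the refractive index: a medium of index n stretches the effective optical path by a factor of n, so the excess is (n - 1) times the distance traveled. A quick check:

```python
# Excess optical path from refraction: in a medium of index n, light takes
# time n*d/c to cover distance d, i.e. the equivalent of (n - 1)*d extra path.
n_air = 1.0003   # refractive index of air (value quoted in the text)
d = 100.0        # path length in meters

extra_path = (n_air - 1) * d   # meters of equivalent delay
print(f"delay over {d:.0f} m of air: {extra_path * 100:.1f} cm")  # 3.0 cm
```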
What this means is that light traveling across thousands, or millions, or billions of light-years of space experiences innumerable such collisions, losing a small fraction of its energy at each one and hence undergoing a minute reddening. The spectrum of the light will thus be shifted progressively toward the red by an amount that increases with distance--a result indistinguishable from the distance relationship derived from an assumed Doppler effect. So no expansion of the universe is inferred, and hence there's no call for any Big Bang, to have caused it.
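As a rough sketch of the arithmetic (not Marmet's actual calculation), the redshift accumulated over N such collisions, each removing a fraction eps of the photon's energy, follows from compounding the losses; the collision counts below are illustrative only:

```python
import math

# Cumulative redshift after N forward collisions, each removing a fraction
# eps of the photon's energy: received energy E = E0 * (1 - eps)**N, and
# redshift is defined by 1 + z = E_emitted / E_received.
def redshift(n_collisions, eps=1e-13):
    # work in logarithms so huge N doesn't underflow the power
    log_ratio = -n_collisions * math.log1p(-eps)   # ln(E0 / E)
    return math.expm1(log_ratio)                   # z = E0/E - 1

# with eps = 1e-13, redshift grows smoothly with the number of collisions,
# and hence with path length--mimicking a distance-proportional Doppler shift
print(f"z after 1e12 collisions: {redshift(1e12):.4f}")   # ~0.1052
print(f"z after 1e13 collisions: {redshift(1e13):.4f}")   # ~1.7183
```

Since the number of collisions grows in proportion to the path length through a uniform medium, the shift increases with distance, which is the point of the paragraph above.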
Two further observations that have been known for a long time lend support to this interpretation. The Sun has a redshift not attributable to gravity, which is greater at the edges of the disk than in the center. This could be explained by sunlight from the edge having to pass through a greater thickness of lower solar atmosphere, where more electrons are concentrated. (It's the electrons in H2 molecules that do the absorbing and re-emitting.) Second, it has been known since 1911 that the spectra of hot, bright blue OB type stars in our galaxy show a slight but significant redshift. No satisfactory explanation has ever been agreed. But it was not concluded that we are located in the center of an expanding shell of OB stars.
So the redshift doesn't have to imply an expansion of the universe. An infinite, static universe is compatible with other interpretations--and ones, at that, based on solid bodies of observational data rather than deduction from assumptions. However, none of the models we've looked at so far questions the original Hubble relationship relating the amount of the shift to distance (although the value of the number relating it has been reappraised several times). But what if the redshifts are not indicators of distance at all?
3. (From THE ULTIMATE HERESY: QUESTIONING THE HUBBLE LAW)
The truly revolutionary threat--toppling the last of Big Bang's supporting pillars--came not from outside mavericks or the fringes, but from among the respected ranks of the professionals. And from its reactions, it seems that the Establishment reserves its most savage ire for insiders who dare to question the received dogma by putting observation before theory and saying the obvious when that is what the facts seem to say.
Halton Arp's Quasar Counts
Halton Arp comes from a background as one of America's most respected and productive observational astronomers, an old hand at the world-famous observatories in California and a familiar face at international conferences. Arp's Atlas of Peculiar Galaxies has become a standard reference source. Then, in the 1960s and 70s, "Chip" started finding excess densities of high-redshift quasars concentrated around low-redshift galaxies.
A large redshift is supposed to mean that an object is receding rapidly away from us: the larger the shift, the greater the recession velocity and the distance. With the largest shifts ever measured, quasars are by this reckoning the most distant objects known, located billions of light-years away. A galaxy showing a moderate shift might be thousands or millions of times less distant. But the recurring pattern of quasars lying conspicuously close to certain kinds of bright galaxies suggested an association between them. Of course, chance alignments of background objects are bound to happen from time to time in a sky containing millions of galaxies. However, calculating how frequently they should occur is a routine statistical exercise, and what Arp was saying was that they were being found in significantly greater numbers than chance could account for. In other words, these objects were associated in some kind of way. A consistently recurring pattern was that the quasars appeared as pairs straddling a galaxy.
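The "routine statistical exercise" mentioned above is essentially a Poisson calculation. The sketch below uses made-up survey numbers, not Arp's actual figures, just to show the shape of the argument:

```python
import math

# If quasars were sprinkled at random with surface density rho per square
# degree, the number landing within angular radius r of any of n_gal target
# galaxies would be Poisson-distributed with the mean below. All numbers
# here are hypothetical, chosen only to illustrate the method.
def expected_alignments(n_gal, rho_per_sq_deg, r_deg):
    return n_gal * rho_per_sq_deg * math.pi * r_deg ** 2

def poisson_tail(mean, k_obs):
    """P(X >= k_obs) for X ~ Poisson(mean): the chance of k_obs or more."""
    p_below = sum(math.exp(-mean) * mean ** k / math.factorial(k)
                  for k in range(k_obs))
    return 1.0 - p_below

mu = expected_alignments(n_gal=100, rho_per_sq_deg=5, r_deg=0.1)
print(f"expected by chance: {mu:.1f}")   # ~15.7 alignments
print(f"chance of seeing 40 or more: {poisson_tail(mu, 40):.1e}")
```

With these invented inputs, about 16 chance alignments are expected; finding 40 would have a probability of well under one in ten thousand, which is the sense in which Arp's excesses were "significantly greater than chance could account for."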
The first reactions from the orthodoxy were simply to reject the observations as being incorrect--because they had to be. Then a theoretician named Claude Canizares suggested an explanation whereby the foreground galaxy acted as a "gravitational lens," magnifying and displacing the apparent position of a background quasar. According to Einstein's theory, light rays passing close to a massive body will be bent by its gravity (although, as discussed later in the section on Relativity, other interpretations see it as regular optical refraction). So imagine a massive foreground galaxy aligned with a distant quasar as viewed from Earth. As envisaged by the lensing explanation, light from the quasar that would otherwise pass by around the galaxy is pulled inward into a cone--like light passing through a convex optical lens--and focused in our vicinity. Viewed back along the line of sight, it would be seen ideally as a magnified ring of light surrounding the galaxy. Less than ideal conditions would yield just pieces of the ring, and where these happened to be diametrically opposed they would create the illusion of two quasars straddling the intervening galaxy. In other cases, where the alignment was less than perfect, the ring becomes a segment of arc to some greater or lesser degree, offset to one side--maybe just a point. So quasar images are found close to galaxies in the sky more often than you'd expect.
But images split in that way would have the same spectra and redshift. This wasn't observed. Nor did the locations match fragmented parts of rings. So the explanation became "microlensing" by small objects such as stars and even planets within galaxies. But for that to work, the number of background quasars would need to increase sharply with faintness, whereas actual counts showed the number flattening off as they got fainter. Such a detail might sound trivial to the lay public, but it's the kind of thing that can have immense repercussions within specialist circles. When Arp submitted this fact to Astronomy and Astrophysics, the editor refused to believe it until it was substantiated by an acknowledged lens theorist. When Arp complied with that condition, he was then challenged for his prediction as to how the counts of quasars should vary as a function of their apparent brightness. By this time Arp was becoming sure that, regardless of the wrecking ball it would send through the whole cosmological edifice, the association was a real, physical one, and so the answer was pretty easy. If the quasars were associated with bright, nearby galaxies, they would be distributed in space the same way. And the fit between the curves showing quasar counts by apparent magnitude and luminous Sb spiral galaxies such as M31 and M81--galaxies resembling our own--was extraordinarily close, matching even the humps and minor nonlinearities.
Arp's paper detailing all this, giving five independent reasons why gravitational lensing could not account for the results and demonstrating that only physical association with the galaxies could explain the quasar counts, was published in 1990. It should have been decisive. But four years later, papers were still reporting statistical associations of quasars with "foreground" galaxy clusters. Arp quotes the authors of one as stating, "We interpret this observation as being due to the statistical gravitational lensing of background QSO's [Quasi-Stellar Objects, i.e. quasars] by galaxy clusters. However, this . . . overdensity . . . cannot be accounted for in any cluster lensing model . . ." You figure it out. The first part is obligatory, required by custom; the second part is unavoidable, demanded by the data. So I suppose the only answer is to acknowledge both with an Orwellian capacity to hold two contradictory statements and believe both of them. Arp's paper conclusively disproving lensing was not even referenced.
Taking on an Established Church
It's probably worth restating just what's at stake here. Since 1929, when Edwin Hubble formulated the law that redshift increases proportionally with distance, redshift has been the key to interpreting the size of the universe, as well as being the prime evidence indicating it to be expanding from an initially compact object. If the redshifts have been misunderstood, then inferred distances can be wrong by a factor of 10 to 100, and luminosities and masses--which scale as the square of the assumed distance--wrong by factors of up to 10,000. The founding premise of an academic, political, and social institution that has stood for three generations would be not just in error but catastrophically misconceived. It's not difficult to see why, to many, such a possibility would be literally inconceivable. As inconceivable as the thought once was that Ptolemy could have been wrong.
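The arithmetic behind those factors is simple to sketch using the standard relations (this is conventional bookkeeping, not Arp's own notation):

```latex
% Hubble's law: recession velocity proportional to distance,
% with cz \approx v for small redshift z.
cz \approx v = H_0\, d \quad\Rightarrow\quad d \approx \frac{cz}{H_0}
% Luminosity inferred from the measured flux F via the inverse-square law:
L = 4\pi d^{2} F
% So if the redshift-inferred distance is too large by a factor k
% (k of order 10 to 100), the inferred luminosity is too large by
% k^2 -- i.e. by a factor of 100 up to 10,000.
```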
It began when Arp was studying the evolution of galaxies and found a consistent pattern showing pairs of radio sources sitting astride energetic, disturbed galaxies. It seemed that the sources had been ejected from the galaxies, and the ejection had caused the disturbance. This was in line with accepted thinking, for it had been acknowledged since 1948 that galaxies eject radio-emitting material in opposite directions. Then came the shock that time and time again the sources turned out to be quasars, often showing other attributes of matter in an excited state, such as X-ray emissions and optical emission lines of highly energized atoms. And the galaxies they appeared to have been ejected from were not vastly distant from our own, but close by.
These associations had been accumulating since the late 60s, but over that time another kind of pattern had made itself known as well. A small group of Arp's less conformist colleagues, even if perhaps not sharing his convictions totally, remained sufficiently open-minded to be sympathetic. From time to time one of them would present observational data showing another pair of radio or X-ray sources straddling a relatively nearby low-redshift galaxy, coinciding with the optical images of Blue Stellar Objects--quasar candidates. To confirm that they were quasars required allocation of observation time to check their spectra for extreme quasar redshifts. At that point a dance of evasion would begin: refusals--literally--to look through the telescopes. The requests would be turned down or ignored, even when they came from such figures as the Director of the X-Ray Institute. When resourceful observers cut corners and made their own arrangements, and their findings were eventually submitted for publication, hostile referees would mount delaying tactics in the form of petty objections that could hold things up for years.
In the 1950s, the American astronomer Karl Seyfert had discovered a class of energetic galaxies characterized by a sharp, brilliant nucleus with an emission-line spectrum signifying that large amounts of energy were being released there. Arp found their association with quasar pairs to be so strong that it could almost be said to be a predictable attribute of Seyfert galaxies. Spectroscopically, quasars look like pieces of Seyfert nuclei. One of the most active nearby spiral galaxies, NGC4258, has a Seyfert nucleus from which the French astronomer G. Courtès, in 1961, discovered a pair of proto-spiral arms emerging, consisting of glowing gaseous matter also emitting the "synchrotron" radiation of high-energy electrons spiraling in magnetic fields. An X-ray astronomer named Wolfgang Pietsch established that the arms of gas led like rocket trails to a pair of X-ray sources coinciding perfectly with two Blue Stellar Objects. When the ritual of obstructionism over obtaining the spectra of the BSOs ensued, Margaret Burbidge, a Briton with over 50 years of observational experience, bypassed the regular channels to make the measurement herself using the relatively small 3-meter reflector telescope on Mount Hamilton outside San Jose, in California, and confirmed them to be quasars. Arp put the probability of such a pairing arising by chance at less than 1 in 2.5 million.
His paper giving all the calculations deemed to be scientifically necessary, along with four other examples each with a chance of being coincidental of less than one in a million, was not even rejected--just put on indefinite hold and never acted upon since. When the number of examples continued growing, as did Arp's persistence, his tenure was abruptly terminated and he was denied further access to the major American observatories. After facing censorship from the journals and ferocious personal attacks in public by prestigious figures at conferences, he left the U.S. in 1984 to join the Max-Planck Institut für Astrophysik in Germany, which he says has been cooperative and hospitable.
Eyes Closed and Eyes Open: Professionals and Amateurs
A new generation of high-resolution telescopes and more-sensitive instruments produced further examples of gaseous bridges emitting in the X-ray bands, connecting the quasars to their source galaxies. The configurations could be seen as a composite, physically connected object. But the response of those trained to the orthodox view was not to see them. They were dismissed as artifacts of random noise or instrument errors. I've witnessed this personally. On mentioning Arp's work to a recent astrophysics graduate, I was cut off with, "Those are just background noise," although I hadn't mentioned bridges. I asked him if he'd seen any of the pictures. He replied stonily, "I haven't read anything of Arp's, but I have read the critics." Knowing the approved answers is presumably all that is needed. Shades of the Scholastics.
In 1990, the Max Planck Institut für Extraterrestrische Physik (MPE) launched the X-ray telescope ROSAT (Röntgen Observatory Satellite Telescope), which was later used to look for a filament connecting the violently disrupted spiral galaxy NGC4319 to the quasar-like object Markarian 205, whose association had been disputed since 1971. Although the prime aim failed (Arp thinks the connection is probably too old now to show up at the energies searched for), the search did reveal two new X-ray filaments coming out of Mark205 and leading to point-like X-ray sources. So the high-redshift, quasar-like Seyfert ejected from the low-redshift spiral was itself ejecting a pair of yet-higher-redshift sources, which turned out to be quasars. The NGC4319-Mark205 connection was subsequently established by a high-school teacher, after NASA announced a program making 10 percent of the time on the orbiting Hubble Space Telescope available to the community of amateur astronomers. It seems that the amateur community--for whom Halton Arp has an extremely high regard--had taken a great interest in his work and was arranging more investigations of nearby quasar connections, drawing its subject matter mainly from Arp's 1987 book Quasars, Redshifts, and Controversies, which the NASA committees that allocated observation time had been avoiding like the plague. After another amateur used his assigned time for a spectroscopic study of an Arp connecting filament, the Space Telescope Science Institute suspended the amateur program on the grounds that it was "too great a strain on its expert personnel." No doubt.