Jack of All
Annual Review of Astronomy and Astrophysics

This article is basically a scientific autobiography from a long and very rewarding career, covering childhood, education, theoretical work, observations, instrumentation, and some social activities. It is not meant to be a review of anything except an incomplete picture of my life, and the relatively few references are to some of my work, work related to mine, and work that had a very large influence on my life and research, so apologies in advance to those I left out in subjects I discuss. I have not in any way attempted to discuss scientific results; those you can go read. I have used more words on old things than new, with the idea that most readers of this article are much more familiar with the field in the last couple of decades than before. My career spans almost six decades, and there may be things to learn from antiquity.


INTRODUCTION
I evidently have a reputation as a Jack of All Trades and even, perhaps, master of one or two. There was a welcoming article in the Princeton Alumni Weekly shortly after my wife Jill and I moved to Princeton in 1980 which dealt with my work and history as a theorist, observer, and instrument maker, the combination of which they found quite unusual; the piece was titled "Triple Threat." I will begin by telling you at some length how I got to this place, then describe my views on our beautiful subject, its relation to society, and end with views on the future of the subject, which would be mostly rosy except that it and we have to live in the society and times in which we find ourselves.

CHILDHOOD AND GROWING UP
I was born in 1938, a rather long time ago, to a mother who was an incredible artist but gave up art as soon as she married my father, though motherhood (me) came only some years after the marriage. My father was an exploration geophysicist who worked for Gulf Oil, and his work and outlook are imprinted indelibly on my work and outlook.
His work took him all over the southern United States in search of oil deposits. He led a small team whose mission was to find places where the gravitational acceleration is slightly lower than the mean, indicating that the rock below has slightly lower density than the mean, and perhaps indicating that there is a thick, low-density salt deposit called a salt dome, whose formation often creates reservoirs where oil and natural gas collect. Of course, once promising locations had been identified, it was time to move on to find more, and so we moved about over the southeast, where most of these geological formations are found. We typically stayed in a place for of the order of a year, sometimes less, very occasionally more.
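The physics behind those surveys can be sketched in a few lines. Below is a toy estimate (my illustrative numbers, not the survey crew's actual models) of the gravity dip over a salt dome, treating the dome as a buried sphere of low-density salt:

```python
from math import pi

# Toy estimate of the peak gravity anomaly over a salt dome, modeled as
# a buried sphere of low-density salt; all numbers are illustrative
# assumptions, not real survey parameters.
G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly(radius_m, depth_m, delta_rho):
    """Vertical gravity anomaly (m/s^2) directly above a buried sphere
    with density contrast delta_rho (kg/m^3, negative for a deficit)."""
    delta_mass = (4.0 / 3.0) * pi * radius_m**3 * delta_rho
    return G * delta_mass / depth_m**2

# Salt (~2200 kg/m^3) in sediment (~2500 kg/m^3): contrast of -300 kg/m^3
dg = sphere_anomaly(radius_m=2000.0, depth_m=4000.0, delta_rho=-300.0)
print(f"anomaly ≈ {dg / 1e-5:.1f} mGal")   # 1 mGal = 1e-5 m/s^2
```

A dip of a few milligals against a background field of roughly 980,000 mGal is part of why the instruments had to be so delicate and finicky.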
Since fuel was very scarce during the war, this work was deemed crucial and he was not asked/allowed(?) to serve in the military. The war, however, affected all of life, though certainly not so much so as in places in which it was actually fought. The work required vehicles to get to remote places, scientific instruments, primitive by the standards of today but delicate and very finicky, and so lots of parts and maintenance just to keep things going. None of these were available. My father met this problem by installing a small machine shop in a trailer which we carried with us when we moved. Lathe, band saw, grinder, drill press, electric and oxyacetylene welders, hand tools of all kinds. He was a self-taught master of them all. The work went ahead, often with great difficulty, but it went ahead.
My stepfather was stationed in Japan, and my mother and I moved back to Beeville, where I was to finish high school.
In high school I met a man who was one of the formative influences of my life, a math teacher called Ed Sullivan. He was a frustrated academic, and he saw in me some potential that I did not know about. Starting almost immediately upon return to Beeville and showing up in his Algebra I classroom, I was learning calculus, analytic geometry, linear algebra, and some number theory, all from impromptu after-school sessions. I became well enough prepared that when the high school physics teacher left under rather unfortunate circumstances I took over the class, and I managed to breeze through almost all of my undergraduate math classes a little later.
It was during this time that I started on my most ambitious amateur telescope project which I was not to finish until much later, an eight-inch f/5 Newtonian, for which I ground and polished the primary. It was pretty good, though the books warn against something so large and fast as a first attempt. It was a little astigmatic, and I later had it touched up by a professional. Meanwhile it had become financially necessary to sell the trailer/machine shop, which had suffered from neglect when we were away. But I bought a small lathe from part of the proceeds and made friends with a cabinetmaker who had a nice wood shop and was very indulgent with this brash young man who insisted on absurd accuracies for his wooden telescope mount parts.
It was in high school that I acquired and devoured the second astronomy book (after The Stars for Sam) that sealed my fate-Fred Hoyle's (1955) Frontiers of Astronomy. Suddenly it became clear that my mathematical skills were actually useful for understanding the Universe and the things that live in it. I built telescopes; I enjoyed and was pretty good at using them; I was facile with mathematics and mathematics was a tool with which stars and spacetime and the Universe could be understood. My "triple threat" status was already well under way.
Once graduated from high school, I was off to Rice Institute (now Rice University) as an undergrad. Finally in my element, surrounded by people motivated by the same things that drove me. An applied mathematician named Jim Douglas took me under his wing, as did a physicist, Bud Rorschach. Douglas, in particular, made opportunities for me, and introduced me in those early years to computational mathematics. He was interested in the numerical solution of elliptic partial differential equations and related parabolic equations, particularly using the then newly-invented Peaceman-Rachford alternating directions scheme. I had summer employment at Standard Oil Labs in Houston working on this and had a couple of papers before I graduated. I hesitate to tell you that the application of this work was to understand oil flow as a result of high-pressure water injection into wells. The term "fracking" had not yet been invented, and had certainly not raised any public attention, but that is what I was working on.
Physics turned out to be every bit as fascinating as I had anticipated it would be, but it became clear very early that an enormous amount of work had to be done to really understand, to do things right. That involved stealing a key to the physics building so that I could spend several hours each night to be able to do the Millikan oil-drop experiment sufficiently well that I could believe that the charge on the electron was actually close to what it was known to be. I did not get caught, but when the report was turned in it was clear to Bud Rorschach what I had done, and I dutifully gave the key back.
Meanwhile work on the eight-inch telescope at home over summers went on, and it was finished a year before I graduated from Rice. Solid-state electronics was new. I wanted to take astronomical photographs, and for that one needed a clock drive. I had built a drive into the telescope mount, but I needed to be in the country away from lights. My uncle was the agricultural county agent for Bee County and was friends with nearly all the ranchers in the area. One of them let me pour a small concrete base in the corner of one of his pastures, and I installed a welded steel column on the base to hold my telescope. There was no power. So I built a solid-state oscillator and power amplifier which ran off a car battery to run my telescope motor, learned to deal with fancy Kodak astronomical emulsions, which were available quite cheaply on 35 mm film then, played with commercial emulsions, and learned about astronomical photography. I submitted a description of the telescope, its drive and camera, and several photographs to Sky and Telescope, which resulted in my first two astronomical publications, Gunn (1961a) and Gunn (1961b).

GRADUATE SCHOOL AT CALTECH
What next? Physics? Mathematics? No, astronomy. I accepted an offer of admission from Caltech. Caltech had no real "departments," and the relationship between the astronomers and physicists was very close. It was very famous, very good, had the Palomar Observatory with the world's largest telescope, was delving into radio and infrared astronomy. And it had H.P. Robertson, with whom I wanted very badly to work on general relativity and cosmology. I had been involved since high school with a lovely young woman, Rosemary Wilson, who also came to Rice a year after I did, and we were married in the summer of 1961, just before going to California for graduate school. She was a year behind me so did not finish at Rice, but would go back to school in California.
So Pasadena, rather different from today. We found a place to live fairly close to campus, with delightful neighbors and delightful friends, with whom we were fully to partake of the incredible sixties.
Photography was the recording medium for most of astronomy in 1960, though electronic photometry with IR detectors, radio astronomy, of course, and photomultipliers for the optical were actively used. But spectroscopy, the workhorse of optical astronomy then as now, was a photographic endeavor. Since I had had a fair amount of experience with astronomical photography, I was able to talk Bob Kraft, who was then at Mount Wilson, into taking me on as a first-year graduate student (fairly unheard-of at the time) on a project using the X spectrograph on Mount Wilson to look at metallicities and gravities for a statistical sample of F stars, including some known-to-be strange beasts. A photographic spectrum is a far cry from a modern CCD one. The detector is frightfully nonlinear, nonuniform unless one REALLY works at it, and very difficult to learn much from. Bev Oke, who was my teacher, associate, and later very good friend, was very absorbed in trying to find a way to at least semiautomatically calibrate photographic spectra so that one could get an output linear in flux if not spectrophotometrically calibrated.
He had a friend working at Hewlett Packard who had developed an incredibly Rube Goldberg device which worked to do this. Caltech had a photometer which had a motorized table which could move a photographic plate at constant rate under a microscope which had a photomultiplier at its output. A good, stable light source projected a small spot or line on the plate, so one got a signal from the photomultiplier as the plate moved which was accurately proportional to the transmission through the plate-but not, of course, to the flux which darkened the plate. But one could take and develop with your plate a calibration plate which had a linear run of intensity with position, and scanning that plate gave a transmission signal which corresponded to a linear ramp in exposing intensity. So one could draw a graph showing transmission versus intensity, or intensity versus transmission. Then all one had to do was to enter this graph for 6000 or so resolution elements on the spectral plate. The HP gizmo was an X-Y plotter table which had a vertical column which was on a servo which would move to an x position linearly in response to a voltage input. The column carried a puck with a pair of coils which fed an amplifier which output a signal which changed polarity when the puck crossed a conductor which was fed a suitable AC current. A servomotor could move the puck vertically. So the procedure was to draw the aforementioned graph on a piece of vellum with conducting ink, and fasten the graph to the plotter table. The signal from the photomultiplier drove the column in x, and, if you were lucky, the puck would center itself in y on the curve you had drawn, and output a voltage proportional to its height which could be fed to a strip chart recorder. Nothing digital, not yet. So I set about using this device to reduce my F star spectra, but there had to be a better way. The device mostly worked, but it took hours to do one plate, and was not reliable enough just to go away and leave.
I knew how transistors and diodes worked from my work on telescope drives, and was taking a wonderful solid-state electronics course from Alvin Tollestrup. I was convinced I could do better, and convinced Bev that I could. He then encouraged me to build my first instrument at Caltech, a piecewise linear arbitrary function generator which took a DC input and output an arbitrary function of that input determined by 20 little line segments, which were actually nicely smoothed by the diode transitions. So now you draw the graph, read 20 numbers from it, set 20 dials, and go. Hours became minutes-the speed was limited by the chart recorder. Doing this now you would use an analog-to-digital converter and output the digital signal to some storage medium, but would not use my function generator, because if you have the digitized transmission the computer can do the arbitrary function, but then was then and now is now.
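In software, the same trick takes only a few lines; here is a sketch of the idea (mine, after the fact, with NumPy's linear interpolation standing in for the 20 diode-smoothed segments; the characteristic curve below is invented):

```python
import numpy as np

# A software analogue of the 20-segment piecewise-linear function
# generator: the plate's characteristic curve is entered as 20
# breakpoints ("20 dials"), and each scanned transmission sample is
# mapped to relative exposing intensity by linear interpolation.
# The breakpoint values here are a made-up characteristic curve.
transmission_knots = np.linspace(0.05, 1.0, 20)           # 20 dial settings
intensity_knots = (1.0 / transmission_knots - 1.0) ** 0.7 # hypothetical curve

def calibrate(transmission_samples):
    """Convert scanned plate transmission to relative exposing intensity."""
    return np.interp(transmission_samples, transmission_knots, intensity_knots)

scan = np.array([0.1, 0.3, 0.6, 0.9])   # one pass of the microphotometer
print(calibrate(scan))                   # darker plate -> more intensity
```

With the digitized transmission in hand, the computer does the arbitrary function, which is exactly the point made in the text: the hardware function generator was the right answer only for its time.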
But I had gone to Caltech to work with Robertson, who died the summer before I arrived, and there were no relativists on the faculty. Frank Estabrook, at JPL, was drafted to teach a(n excellent) relativity course, which I took and learned enough from with the differential geometry I already knew to do research, but there were no thesis possibilities.
So I kept doing instrumental things, worked with new emulsions Kodak was making which had detective quantum efficiencies approaching a whole percent and talked Guido Muench into directing a thesis in which I would take 48-inch Palomar Schmidt plates with these wonderful new emulsions, count galaxies, and attempt to measure the correlation function of galaxies in space. I did not know that Jim Peebles was pursuing the same subject. The thesis was done, accepted, but I never published it, because Jim did a much better job. I had chosen linear combinations of gaussians to represent the correlation function, which allowed trivial recovery of the 3D object, but did not realize when I was done that the function I derived was actually a quite excellent −0.8 power law in projection and −1.8 in space.
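The relation between the two slopes is just the line-of-sight projection integral, and is easy to verify numerically (a check I am adding here, not part of the thesis):

```python
import numpy as np

def w_projected(rp, gamma=1.8):
    """Projected correlation of a spatial power law xi(r) = r**-gamma:
    w(rp) = 2 * integral_0^inf xi(sqrt(rp**2 + z**2)) dz, done by
    trapezoid rule on a log-spaced line-of-sight grid."""
    z = np.concatenate([[0.0], np.logspace(-3, 4, 4000)])
    f = (rp**2 + z**2) ** (-gamma / 2.0)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

# Logarithmic slope of the projected correlation between rp = 1 and 2:
slope = np.log(w_projected(2.0) / w_projected(1.0)) / np.log(2.0)
print(f"projected slope ≈ {slope:.2f}")   # close to -0.8
```

The projection lowers the logarithmic slope by exactly one, which is why a −1.8 power law in space appears as −0.8 on the sky.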
One of the members of my exam committee was Dick Feynman, whom I had gone to for advice shortly after choosing a thesis topic and had talked with a fair amount while working on the thesis. Amazingly intelligent and gentle, he had almost thrown me out of his office at first encounter, because I arrived with an armful of computer printout paper, which I liked to use to write on because the paper was big. He thought I was there to discuss computing with him, for which he had great disdain and thought graduate students should not be allowed to use computers. . .he later became quite interested in computing algorithms and invented amazing algorithms for evaluating simple transcendental functions which, sadly, have never made it into the technology.
While working on my thesis, I did a number of other things. I was interested in gravitational optics, and published papers on the effects of inhomogeneities like galaxies and clusters on cosmological observations. This was before we knew anything about dark matter (though we should have known a lot more than we did-the evidence was all there) so the theory was good, but the conclusions were not very useful.
I was also thinking about the intergalactic medium and galaxy formation, and went to the famous colloquium by Maarten Schmidt in my last year as a graduate student, in which he announced that he had found yet higher-redshift quasars including 3C9 (Schmidt 1965), with a redshift of 2.01. One could see Lyman alpha in the spectrum, redshifted to about 3660 angstroms, and there was light beyond Lyman alpha, and both fellow student Bruce Peterson and I knew there should not be if there were any appreciable amount of neutral hydrogen in intergalactic space. We immediately wrote the paper (Gunn & Peterson 1965), which has one of the most interesting (and gratifying, if one is an author) citation histories in astronomy (Figure 1).

Figure 1: The citation history of the Gunn-Peterson paper (Gunn & Peterson 1965), which essentially traces the history of high-redshift astronomy.

BEGINNING A CAREER: PRINCETON, 1967
Just before I graduated, Jesse Greenstein called me into his office to tell me that I could expect a faculty offer from Caltech, so I suppose I had not alienated too many of the faculty. It was not obvious at all to me for several reasons that Caltech would want me back, but I was suitably thrilled at the prospect. But I had, I think rather unwisely, been enrolled in ROTC at Rice, and I had an Army Engineers obligation to satisfy. Jesse had extensive connections and managed to get me assigned on a scientific liaison position to JPL. Since I was a second lieutenant and the Vietnam War was just getting underway, I probably owe my life to Jesse. . .the mean lifetime of second lieutenants in Vietnam was pretty short. I went through some mandatory training, during which, among other things, I learned to parachute, which I found incredibly exciting but never went back to as a civilian.
Thus to JPL, where I continued to work on gravitational optics and just optics. I also worked on the Surveyor project, during which we convinced the spacecraft people to leave the camera on into the cold lunar night, and so we got wonderful images of the solar corona as the Sun set on the Moon. Also, the astronomy group was just setting up a very nice 24-inch telescope on Table Mountain in the nearby San Gabriels, and I did the electronic drive and a photometer for that telescope and also designed a very large coudé spectrograph for it for planetary spectroscopy.
In the meantime, I had gotten an offer from Lyman Spitzer for a faculty position at Princeton University. Jesse urged me to take it, saying that if I came right away to Caltech I would never do theory. He was doubtless right, but I do not think he had much doubt that I would return. So I went to Princeton to be a theorist, and met Jerry Ostriker, the most delightful collaborator of my career. Jocelyn Bell had just discovered pulsars, and after the original little green men notions had gone away, the prevailing view came to be the present one. . .they were highly magnetic rotating neutron stars. The discovery of the optical pulsations from the Crab remnant pretty well nailed that idea. Jerry and I worked on the notion that the fundamental energy loss mechanism was dipole radiation from a rotating neutron star with field strengths in the neighborhood of 10^12 gauss. I worked on particle motion in such waves, and showed that if the formal gyrofrequency of particles in the wave magnetic field was larger than the wave frequency (the wave is "nonlinear") the particles will be accelerated to quite high energies, high enough to explain the synchrotron radiation from the Crab remnant and plausibly the optical pulses, though we did not even get close to explaining the coherent pulsed radio emission. We also put forward the notion that the dipole radiation from the rapidly spinning remnant at birth could add significantly to the energetics of the supernova outburst, an idea recently revived to help explain the magnetar phenomenon (Ostriker & Gunn 1971).
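The dipole energy-loss rate at the heart of this picture is the standard one-line formula, L = B^2 R^6 Ω^4 / (6 c^3) in CGS (modulo an inclination factor), and an order-of-magnitude evaluation for Crab-like numbers (illustrative values, not our published fit) looks like:

```python
from math import pi

# Order-of-magnitude magnetic dipole spin-down luminosity (CGS) for a
# Crab-like pulsar; the standard formula L = B^2 R^6 Omega^4 / (6 c^3).
# All input numbers below are round illustrative values.
c = 3e10            # speed of light, cm/s
B = 1e12            # surface field, gauss (as in the text)
R = 1e6             # neutron-star radius, cm
P = 0.033           # Crab spin period, s
Omega = 2 * pi / P  # angular spin frequency, rad/s

L = B**2 * R**6 * Omega**4 / (6 * c**3)
print(f"L ≈ {L:.1e} erg/s")   # of order 1e37 erg/s for these round numbers
```

The steep Ω^4 dependence is the point: a freshly born, millisecond-period remnant radiates enormously more than the present-day Crab, which is what made the supernova-energetics argument work.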
Work was ongoing at Princeton while I was there on the Copernicus UV satellite, and Lyman Spitzer was busy shepherding an enormous NASA project to build a 120-inch telescope in space, the Large Space Telescope (LST), later, when somewhat reduced in size, the Space Telescope (ST), and later, the Hubble Space Telescope. More anon.
It was while I was in Princeton that I began work which was to last a long time, namely the study of fully nonlinear but approximately spherically symmetric perturbations in cosmology. With Rich Gott, then a graduate student at Princeton, we studied how a simple top-hat perturbation in an otherwise homogeneous cosmological model could collapse into a structure like the Coma cluster, and that such collapse almost inevitably is followed by the slow growth of the mass of the structure through the addition of ever more loosely bound infalling material (Gunn & Gott 1972). We put forward the idea that the known early-type galaxy populations in clusters can arise at least partially through ram-pressure stripping of galaxies falling into the hot intracluster medium, the medium itself consisting partially of gas stripped from galaxies and in part shocked material not yet incorporated into galaxies at the cluster's initial collapse.
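The top-hat model itself fits in a few lines: the parametric cycloid solution in an Einstein-de Sitter background gives the nonlinear overdensity in closed form (this little check is mine, but the formula is the standard one):

```python
from math import pi, sin, cos

# Parametric top-hat solution in an Einstein-de Sitter background:
#   R = A (1 - cos u),   t = B (u - sin u),
# which gives a nonlinear overdensity of the perturbation relative to
# the background of  1 + delta = 9 (u - sin u)^2 / (2 (1 - cos u)^3).
def overdensity(u):
    """Nonlinear overdensity 1 + delta at development angle u."""
    return 9.0 * (u - sin(u))**2 / (2.0 * (1.0 - cos(u))**3)

print(f"at turnaround (u = pi):      1+delta = {overdensity(pi):.3f}")
print(f"partway to collapse (3pi/2): 1+delta = {overdensity(1.5 * pi):.1f}")
```

The familiar numbers drop out: an overdensity of 9π²/16 ≈ 5.55 at turnaround, growing without bound as formal collapse (u = 2π) approaches, which is where the slow late accretion of loosely bound material takes over.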
Incredibly exciting times. Pulsars, quasars, the microwave background, all in a very few years. But I could not forget Palomar, and when a formal offer from Caltech came, I accepted and bounced back to California. Rosemary had stopped schooling when we went to Princeton, and she was anxious to get back.

BACK TO CALTECH, 1970
Once back at Caltech, I continued to work on perturbations in cosmology, and on optics in the inhomogeneous cosmologies that the perturbations begat. I was lucky enough to be enlisted by Kip Thorne, who had a group umbrella grant which supported almost everyone working on relativity including yours truly, to help him teach the graduate relativity course, which was an amazing refresher for me, and the students seemed to survive, and even like the course.
I acquired one of the best students of my career, Steve Shectman, who did a very inventive thesis looking for correlations in the background light imprinted by the correlations in the galaxy population, by scanning and digitizing wide-field photographic plates from the Schmidt (Shectman 1974). The signal was roughly what was expected, but the whole subject was pretty new and I thought the work did not get proper attention, though Shec has gone on to glorious things.
I was also very interested in "the cosmological problem"-the determination of the two numbers H_0 and q_0 which were supposed at the time to tell us all there is to know about the Universe we live in, supposing that it is a Robertson-Walker Universe dominated by pressureless matter, though it was already clear in 1970 that that matter might not be the ordinary stuff commonly called "baryonic" matter today.
The deceleration parameter q_0, just half the density parameter Ω_0 (which was not then commonly used) for pressureless matter, could, in principle, be measured by various tests, outlined by Allan Sandage in his famous ApJ 133 paper (Sandage 1961) on the subject. The most sensitive and favorite one was the use of some "standard candle," lately Type Ia supernovae but then the brightest galaxies in clusters (BCGs), which exhibited a very narrow luminosity function, separated by unknown astrophysical effects from the luminosity function of the rest of the cluster population. Bev Oke and I began work with his marvelous multichannel spectrometer on BCGs in clusters I was finding with my work with the new Kodak emulsions on the 48-inch Schmidt.
Supporting this involved work on the absolute spectrophotometric flux scale, a subject which had absorbed Bev for some years, and we managed finally to produce a flux scale (AB79), accurate to a few percent and suitable for extension to faint sources with large telescopes (Oke & Gunn 1983).
Securing this work required knowledge of the evolution of the luminous output of these giant galaxies and good knowledge of their spectral energy distributions (SEDs), since the redshifts (of the order of 0.5 or a little higher for the clusters we were finding) move the whole spectrum to the red. The spectrometer allowed us to measure the SEDs directly, sidestepping the common "k-correction" problem with broadband photometry, but the intrinsic evolution of the stellar population, which certainly changes the luminosity and SED of the galaxy, needed to be understood in order to interpret the data at the level of accuracy required to determine the deceleration parameter.
So Bev and I hired Beatrice Tinsley, fairly fresh out of the University of Texas, incredibly bright and inventive, who was working on the first really quantitative formalism, "evolutionary synthesis" designed to allow one to understand and, indeed, think about, the stellar populations in galaxies. There is a wonderful review of the subject as she developed it in an unfortunately defunct journal, Tinsley (1980). We understood to some extent how individual stars evolve, and the synthesis scheme essentially was to posit that stars are born at some rate with some initial luminosity function with some metallicity distribution (though that was a finer point added later) and then all one needs to know is how the individual stars evolve. It seems so obvious now, but was pretty revolutionary at the time. In a star-forming spiral galaxy, what one can do from observations of the galaxy at any epoch is limited because the population is of very mixed ages and metallicities, and extracting the statistics from imperfect observations was extremely difficult (and still is, though that does not prevent some of my colleagues from, um, some overinterpretation of their data). For old early type systems, however, it is clear that there has been little if any star formation for a long time, and the problem reduces to the rate which stars evolve off the main sequence (most of the luminosity comes from giants), which in turn is determined by the slope of the initial mass function (Tinsley & Gunn 1976). The fall of luminosity with decreasing mass on the main sequence is so rapid that the SED of light from the main sequence is also essentially determined by this slope.
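The spirit of the scheme can be caricatured in a few lines. The toy single-burst population below is my drastic simplification, not Tinsley's machinery; the power-law scalings for main-sequence lifetime and luminosity are rough textbook values:

```python
# A toy single-burst stellar population in the spirit of evolutionary
# synthesis (a drastic simplification, for illustration only).  Assumed
# scalings: IMF dN/dm ~ m**-(1+x); main-sequence lifetime ~ m**-2.5,
# normalized so a 1 Msun star leaves at 10 Gyr; luminosity ~ m**3.5.
def ms_luminosity(age_gyr, x, m_min=0.1):
    """Integrated main-sequence light (arbitrary units) at a given age."""
    m_to = (age_gyr / 10.0) ** (-1.0 / 2.5)   # turnoff mass, Msun
    p = 3.5 - x                               # from integrating m**(2.5 - x)
    return (m_to**p - m_min**p) / p

# How fast the burst fades between 8 and 12 Gyr depends on the IMF slope:
for x in (0.5, 1.35, 2.5):                    # 1.35 is the Salpeter slope
    fade = ms_luminosity(12.0, x) / ms_luminosity(8.0, x)
    print(f"x = {x:4.2f}:  L(12 Gyr) / L(8 Gyr) = {fade:.2f}")
```

The steeper the IMF (larger x), the more of the light is locked up in slowly-evolving dwarfs and the more slowly the population fades, which is exactly why the slope mattered so much for the q_0 program.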
We set about trying to determine this slope from the integrated light of elliptical galaxies, and never really succeeded. The subject, of course, is still alive and well, and Beatrice (Tinsley & Gunn 1976) presciently pointed out that the discovery and measurement of the FeH Wing-Ford band, present in M dwarfs but absent in giants, would allow a direct measurement of the slope (presuming it to be a power law, which it is not in the Galaxy). Pieter van Dokkum and Charlie Conroy and Alexa Villaume may have done it, finally (Conroy et al. 2017). My fascination with the population synthesis problem has continued for many years, and Charlie, another of my astonishingly good PhD students, caught the fever and has carried it on quite brilliantly.
But all of this came to essentially nothing, because even if we had determined the slope perfectly, we had neglected an evolutionary effect of the same order or larger than the stellar evolution, which was pointed out by Jerry Ostriker and Scott Tremaine (Ostriker & Tremaine 1975), namely that galaxies in clusters will spiral in through dynamical friction and join the central giant, so the mass grows, and with it, the luminosity. The uncertainties in this cannibalism are enormous, and the realization that the effect exists and must be playing a pivotal role in the evolution of these objects essentially ended the quest to measure q_0 using BCGs. It was especially irksome to me, not only because it killed a line of research which had occupied-let's be honest, obsessed-me for years but also because I had some years before worked with Sverre Aarseth at Cambridge University on some small n-body simulations of clusters in which the effect was obvious, and I gave talks in which I pointed out that NGC 4889 and NGC 4874, the two supergiants in Coma, would certainly merge in a billion or two years, but was too dense to see what I was saying!
The cluster work continued, now motivated by trying to understand the clusters themselves instead of the search for magic cosmological numbers. This proved to be very fruitful, as we will discuss shortly.
Cambridge. Yes, Cambridge. For several years, beginning with my time in Princeton, I spent substantial fractions of summers at the Institute of Astronomy in Cambridge. Martin Rees was there, Malcolm Longair across the street, Fred Hoyle still there at the beginning, and just across the way Roger Griffin, a very inventive instrumentalist and stellar astronomer whom I had met at Caltech while he was a postdoc and I a student.
We worked together later for a long time on radial velocities; we will get to this shortly. But mostly theory.
What is the dark matter? Lee & Weinberg (1977) invented and published the "WIMP miracle" idea-the fact that if a weakly-interacting massive particle (WIMP) of (plausible) mass somewhere in the neighborhood of 10 GeV/c^2, perhaps (the lightest) supersymmetric partner of some known particle, existed, and was present in the very early Universe in thermal numbers, the relic density expected from weak cross-sections could well dominate the mass density in the Universe and be the dark matter. The existence of dark matter was by now essentially certain and the fact that it could almost certainly not be ordinary baryonic matter almost as certain. I worked with a number of people on one of the first papers on the cosmological ramifications of the existence of such a particle (Gunn et al. 1978), in which we pointed out, among other things, that several of the most mysterious things about the growth of structure were neatly explained by such a form of dark matter. A few years later Jim Peebles (Peebles 1982), who, as usual, was miles ahead of the rest of us, along with two other groups, put this all together and gave us the Cold Dark Matter paradigm, an essentially complete picture of cosmological structure growth and CMB perturbations, which still forms the basis of how we believe the Universe works, though dark energy had not yet raised its (ugly?) head.
On that subject, though, Beatrice and I published a paper called "An Accelerating Universe" (Gunn & Tinsley 1975) in which we attempted to explain preliminary results from the multichannel observations of BCGs which indicated that BCGs at large redshift were too faint and therefore that the Universe might be accelerating as a result of the effect of a cosmological constant. The paper was emphatically not prescient; it was just wrong. We were not measuring the fluxes from distant galaxies accurately enough because we could not reliably see them on the aperture of the multichannel to center them properly with the inadequate guide camera (an SEC vidicon, which detector we will meet again later).
The issue of the density of the Universe remained one of my chief scientific concerns, and I was looking for ways to measure it without touching the deceleration parameter, preferably ways which involved physics, not magic ill-understood astronomical relations (such as the narrow luminosity function of BCGs, which I think it fair to say we still do not understand very well). Beatrice, Rich Gott, David Schramm (all postdocs at Caltech at the time) and I endeavored to sift through all the data available on the subject. From the cluster studies and preliminary work on the SEDs of cluster galaxies, we knew a fair amount about mass-to-light (M/L) ratios, and knew roughly how M/L changes as a population ages, so could get some handle on the total M/L of ordinary spiral galaxies. From the existing census data on galaxies, estimates of the fraction of galaxies in clusters and the distribution of colors of galaxies, we came to the conclusion that the matter density parameter for gravitating matter in the Universe was in the neighborhood of 0.1 (Gott et al. 1974), and was very unlikely to be as large as unity unless there is heavy stuff much more uniformly distributed than galaxies. That paper one might legitimately regard as prescient.
On this same subject, I spent a fair amount of time worrying about the fate of the Galaxy/Andromeda system, and was able to calculate rough orbits and total masses using the formalism that Rich Gott and I had used for the nonlinear perturbation work. The masses are important, because they give information about M/L for the kinds of small systems most galaxies find themselves in. We knew about flat rotation curves, but did not know how far they extend, so the Local Group was an excellent laboratory. The answer, a couple of 10^12 solar masses, is correct, but I was unaware that the problem had been solved and published previously by Kahn & Woltjer (1959). The emphasis in that work was not cosmology, but the problem was the same problem. The lesson: read widely.
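The timing argument reduces to a one-parameter root-find: the two galaxies separate with the expansion and fall back on a radial Kepler orbit, r = a(1 − cos η), t = √(a³/GM)(η − sin η), and today's separation, approach velocity, and the age of the Universe fix η and hence the total mass. A minimal sketch, using assumed round numbers for the inputs (not values from the papers cited):

```python
import math

# Timing argument for the Milky Way--Andromeda total mass (Kahn-Woltjer style).
# On a radial Kepler orbit:  r = a(1 - cos eta),  t = sqrt(a^3/GM)(eta - sin eta),
# so the dimensionless combination v*t/r depends on eta alone.

KPC_PER_KMS_GYR = 1.0227       # 1 km/s expressed in kpc/Gyr
G = 4.498e-6                   # gravitational constant, kpc^3 / (Msun Gyr^2)

r0 = 770.0                     # kpc, current M31 distance (assumed)
v0 = -110.0 * KPC_PER_KMS_GYR  # kpc/Gyr, radial velocity (negative = approaching)
t0 = 13.8                      # Gyr, age of the Universe (assumed)

target = v0 * t0 / r0          # fixes the eccentric anomaly eta

def f(eta):
    # v*t/r as a function of eta, minus the observed value
    return math.sin(eta) * (eta - math.sin(eta)) / (1.0 - math.cos(eta)) ** 2 - target

# Bisect on the collapsing branch, pi < eta < 2*pi, where f changes sign.
lo, hi = math.pi + 1e-6, 2.0 * math.pi - 1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
eta = 0.5 * (lo + hi)

a = r0 / (1.0 - math.cos(eta))                    # semi-major axis, kpc
M = a**3 * ((eta - math.sin(eta)) / t0)**2 / G    # total Local Group mass, Msun
print(f"eta = {eta:.2f}, total mass ~ {M:.2e} Msun")   # a few times 10^12
```

With these inputs the mass comes out at a few times 10^12 solar masses, consistent with the "couple of 10^12" quoted above.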
Is the rotation curve of the Galaxy flat, as was clearly emerging to be typical among large external spirals? The understanding of the parameters of the Galaxy was somewhat hampered by a completely weird dictum of the IAU, that one should describe things within the Galaxy by a standard model in which the distance R0 to the galactic center is 10 kpc, and the rotation velocity at the Sun, Θ0, is 250 km/s. This is vaguely similar to cosmological discussions in which quantities are referred to a "standard" cosmological model with H0 = 100 km/s/Mpc, but there this is just used as a parameter. The Galaxy model was much more pernicious. It was the truth, and questioning it was a bit heretical. Jill Knapp, then a postdoc at Caltech and some years later to be my wife, Scott Tremaine, and I began to look at the existing 21-cm data for the Galaxy and were persuaded to question the orthodoxy on this subject. We found an R0 of about 8.5 kpc and a Θ0 of about 220 km/s, and Oort constants A ∼ −B of about 12 km/s/kpc, consistent with a constant rotation velocity, in contrast to the fairly steeply falling rotation curve implied by the canonical model. I believe history has been kind to us in the ensuing years; the Galaxy appears to be pretty much as we described it.
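The consistency check connecting the Oort constants to a flat rotation curve is elementary: A = ½(Θ0/R0 − dΘ/dR) and B = −½(Θ0/R0 + dΘ/dR), so a constant rotation velocity (dΘ/dR = 0) forces A = −B = Θ0/(2R0). A quick sketch with the values quoted above:

```python
# Oort constants from a local rotation-curve model:
#   A = 0.5 * (Theta0/R0 - dTheta/dR)   (local shear)
#   B = -0.5 * (Theta0/R0 + dTheta/dR)  (local vorticity)
# For a flat rotation curve, dTheta/dR = 0, so A = -B = Theta0 / (2 R0).

def oort_constants(theta0, r0, dtheta_dr=0.0):
    """theta0 in km/s, r0 in kpc, dtheta_dr in km/s per kpc."""
    a = 0.5 * (theta0 / r0 - dtheta_dr)
    b = -0.5 * (theta0 / r0 + dtheta_dr)
    return a, b

A, B = oort_constants(220.0, 8.5)   # flat curve with the revised parameters
print(A, B)                         # ~ +12.9 and -12.9 km/s/kpc
```

This gives A = −B ≈ 13 km/s/kpc, consistent within the uncertainties with the "about 12" quoted above, whereas the canonical falling rotation curve breaks the A = −B symmetry.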
Most of the work I have described has been theoretical, with some observational input, but I was busy essentially the whole time with various instrumentation projects as well. It seemed not difficult at all to switch back and forth, and I have always been struck that changing one's modus operandi leads to clarity of thinking and flexibility, almost as if solving problems in one area helps in completely unrelated areas in completely unexpected ways, maybe as simple as breaking a train of thought merrily following a dead-end road. I mentioned that I had come to know Roger Griffin while I was a grad student. He had invented and reasonably perfected a method of measuring stellar radial velocities with an analog cross-correlation technique. The spectrum of some template star is obtained and scanned, and then a very high-contrast copy of the spectrum is made, opaque in the continuum and transparent below some carefully chosen level in the absorption lines, using a light source on the same machine as used to scan the original spectrum. This mask is then installed in the spectrograph in place of the original plate, and a large Fabry lens and a photomultiplier are placed behind the mask. A star image is placed on the slit, the mask is scanned in the wavelength direction, and a minimum in the transmitted signal, where the star's absorption lines are aligned with the clear areas on the mask, corresponds to a shift in the spectrum from which the geocentric radial velocity can be deduced. We decided to build such a machine for the coudé spectrograph on the Hale telescope, with the goal of being able to measure the radial velocities of globular cluster stars well enough (∼1 km/s) to be able to say something about their dynamics and M/L ratio. To get this kind of resolution required a fairly narrow slit, and seeing noise would be prohibitive unless the scanning were precisely controlled and very fast, so that one could average out the seeing noise.
This was accomplished not by moving the mask but by virtually moving the slit by means of a small tilting silica block just behind the slit, driven by a cam, tangent arm, and a stepping motor.
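Griffin's analog trick is exactly what is now done digitally: slide a template against an observed spectrum and locate the extremum of the match. A toy digital version, on a log-wavelength grid so that one pixel corresponds to a constant velocity step (all wavelengths, line depths, and the "unknown" velocity here are invented for illustration):

```python
import math

C = 299792.458                 # speed of light, km/s
DV = 1.0                       # velocity width of one pixel, km/s
STEP = DV / C                  # constant step in ln(wavelength)

def template(lam):
    """Toy stellar spectrum: unit continuum with Gaussian absorption lines."""
    flux = 1.0
    for center in (5000.0, 5050.0, 5100.0):   # line wavelengths, Angstroms
        flux -= 0.6 * math.exp(-0.5 * ((lam - center) / 0.3) ** 2)
    return flux

# On a log-wavelength grid a Doppler shift is a uniform shift in pixels.
n = int(math.log(5120.0 / 4980.0) / STEP)
lam = [4980.0 * math.exp(i * STEP) for i in range(n)]

v_true = 23.0                                            # km/s, to be recovered
tmpl = [template(w) for w in lam]
obs = [template(w / (1.0 + v_true / C)) for w in lam]    # redshifted spectrum

# Cross-correlate: find the integer pixel lag minimizing the mismatch,
# the digital analog of scanning Griffin's mask for minimum transmission.
def mismatch(lag):
    return sum((obs[i] - tmpl[i - lag]) ** 2 for i in range(60, n - 60))

best = min(range(-50, 51), key=mismatch)
v_measured = C * (math.exp(best * STEP) - 1.0)   # convert lag back to velocity
print(f"recovered {v_measured:.1f} km/s")        # ~23 km/s
```

In practice one interpolates the correlation peak to a fraction of a pixel, which is how sub-km/s precision is reached; the mask machine did the equivalent in hardware, in real time.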
The photomultiplier signal was pulse-counted and collected by a multichannel analyzer borrowed from physics (Griffin & Gunn 1974). All digital! The machine was spectacularly successful, and we achieved accuracies on brighter stars of better than 100 m/s. We were not even thinking of planets, and were not quite there yet, but in retrospect not too far away. The work on globulars went well, as did studies on several galactic clusters including M67, NGC 188, and the Hyades, and Roger, keeping the interest in the orbits of binaries that had started him down this path, pushed us to continue this work, which led to hundreds of good orbits. The globular cluster data led me to work on dynamical models of globulars, extending Ivan King's original models to accommodate anisotropic velocity distribution functions, which were clearly required to fit the velocities in the outer parts, and an approximate treatment of rotation, which was necessary for M13. Though proper three-integral models are clearly necessary for strong rotation with anisotropy, we found that the M13 models did well with the assumption that the total angular momentum is an integral (Gunn & Griffin 1979, Lupton et al. 1987). The work on the Hyades led to a moving-cluster perspective analysis using accurate radial velocities instead of proper motions, and yielded the best Hyades distance scale at the time (Gunn et al. 1988): no longer so good in the era of Hipparcos and Gaia, but quite useful then, when the Hyades distance was a major linchpin in the distance scale.
But my major interest was in detectors, for both imaging and spectroscopy. When I first came back to Caltech, there was a controversy raging about how far away the quasars are, i.e., were their redshifts due to distance and the general expansion of the Universe, which most workers accepted, mostly because it seemed impossible to explain the redshifts in any other physically acceptable way, or was there some mysterious new physics involved in objects somehow ejected from the nuclei of galaxies? One way to answer this question seemed to be to prove association with objects whose redshift distances were undoubted: galaxies and clusters of galaxies. Since I was imaging clusters, and there were a number of relatively low-redshift quasars, it seemed like a good idea to investigate the question of whether these objects were at least projected on clusters, and then obtain cluster galaxy redshifts to see if they were the same as that of the quasar. There were projections, the velocities were the same, and the answer seemed clear (Gunn 1971). Eventually the controversy evaporated, but there were, and I am sure still are, holdouts, who continue to point to unusual configurations and coincidences, generally, IMHO, without anything like adequate investigation of the parent statistical space: spectacular examples of a posteriori statistics.
But spectroscopy of faint galaxies is hard, and the work originally was done with an image-tube spectrograph using a multistage magnetically focused image tube onto a photographic plate, and Bev's multichannel spectrometer, which could go satisfactorily faint but had rather poor spectral resolution when used in its most sensitive mode. It had 32 photomultipliers, and so when "wide open" had a resolving power of about 32. I was also using image tubes in imaging, getting deeper images of the clusters found on the Schmidt plates with a large image tube at prime focus of the 200-inch. But for all the image-tube work the final detector was a photographic plate with all of its problems: limited dynamic range, nonlinearity, and nonuniformity.
Something better was clearly needed. The first step was instrumentation using a silicon vidicon, and it was through this work that I became friends with Jim Westphal, a truly amazing individual in Planetary Science at Caltech, who was to play a very large role in my scientific life. The silicon vidicon was a television tube with a silicon target; the great advantage of silicon was its near-unity quantum efficiency. But the target was read by a magnetically steered electron beam, and the effective read noise was hundreds of electrons. Westphal and his group used these tubes very effectively in planetary observations with lots of photons to get very high signal-to-noise images, but the read noise made them essentially useless for faint-object work.
Soon after came the SIT vidicon, a tube which had a photoemissive cathode feeding an image intensifier, but the electrons, instead of impinging on a phosphor screen to expose a photographic plate, fell on a silicon target-the tube after the target was a silicon vidicon. These detectors had the quantum efficiency limitation of cathodes, but the target multiplied the impinging charge by large factors, so the effective read noise was reasonably small, and the signal could be amplified and digitized, so one could get a digital image. About this time I received a Sloan fellowship, and Bev and I used the money to build a small spectrograph for the 200-inch with an SIT vidicon for a detector. It worked very well, and was used by many to obtain faint object spectra. Westphal had built a data system designed by his electronics guru, Richard Lucinio, and the images went directly to magnetic tape. There were other machines being used which were one-dimensional, usually with some kind of array for the object spectrum and another for the sky, but the SIT produced a full 2D image.
But the real breakthrough was coming. A group at JPL under Jim Janesick was working with Texas Instruments on an adaptation of a Bell Labs invention, the charge-coupled device, which combined the very high quantum efficiency of the silicon vidicon with a direct read through a very low-noise amplifier, achieving QEs of almost unity and read noise of 10-ish electrons. The development work was a NASA program for planetary probes, but the ties between JPL and Caltech were pretty strong, and Jim Westphal and I were able to convince Janesick et al. that the devices could be proven on ground-based telescopes. The rest, as they say, is history.
The first serious CCD instrument we built was given the rather jocular name Prime Focus Universal Extragalactic Instrument (PFUEI, pronounced, of course, "fooey"), which, I believe, was the first of the breed of so-called parallel-beam-box instruments (Gunn & Westphal 1981).
Building it was an introduction as well to an excellent young machinist and self-taught engineer, Mike Carr, who was to work with me on instruments for essentially my whole career thereafter. PFUEI was a reimager that had a collimator that fed either a filter or a grating, then to a camera that made an image on the CCD. With the camera tilted so that its axis was parallel to the collimator and a filter in place, one could image and place an object of interest in the center of the focal plane. Without changing the guiding, a slit could be inserted and adjusted to be in the same place in the image as the object of interest. Then the filter could be removed, a grating inserted, and voila!, a supremely sensitive low-dispersion spectrograph with the faint object of interest securely in the slit.
It worked very well, first with the early 500 × 500 TI CCDs, and later with the 800 × 800 (both 15-micron pixel) detectors destined for the NASA planetary program. We worked with many people, including my students John Hoessel and Don Schneider, and Malcolm Longair and Julia Riley (on faint radio galaxies). Hundreds of cluster redshifts from the cluster survey were obtained, first breaking the record held for so many years at z = 0.46 by 3C295, and finally reaching z = 1.
During this time, the Space Telescope project was proceeding, but NASA was worried about imaging. The camera for ST (later Hubble, but just ST for now) had been given to Princeton to develop on the basis of Lyman Spitzer's enormous contributions to the project, but there was trouble. The detector chosen was an electron-impact vidicon, the SEC, like the SIT, but with a very large and delicate target made of potassium chloride. The device had the read-noise issues of the SIT but worse, because the electron gain in the target was lower than with silicon. Even worse, the tubes could not be made reliable, and NASA knew full well that ST would not be counted a successful mission, whatever the science, without pictures. The camera was, therefore, put out for open proposals from the community, with the hope that something better than the SEC would emerge. And, of course, it did; the CCDs developed by JPL/TI were very much better detectors. Westphal and I decided we had to propose, and after much haggling and twisting of arms I persuaded him to be the PI, and I the deputy. I knew my nonexistent administrative skills would not work with the NASA bureaucracy, but I was comfortable being my share of the technical brains.
We were very worried that the CCDs had no ultraviolet response, and, of course, one of the major advantages of going to space was the availability of the UV spectrum. The SEC vidicon, with its magnesium fluoride cathode window, worked well (when working at all) in the UV, and the Princeton group had tons of UV experience with sounding rockets and the Copernicus satellite. At some point someone mentioned phosphors, and we went looking for substances which convert UV photons into visible ones. The search was quickly rewarded, with two substances, coronene and lumigen, which, when deposited on the CCD surface, convert UV photons with high efficiency into visible ones, in the case of coronene (one of the interstellar PAHs!) in the blue-green and in the case of lumigen (the chemiluminescent substance which lights fireflies) in the yellow. Coronene turned out to be the more suitable, and we proved that it worked and did not harm the CCDs. We got the contract for the original Wide Field/Planetary camera.
Meanwhile, on the ground, something bigger and better than PFUEI seemed necessary, and my colleagues were anxious to have a real user CCD instrument instead of the rather fussy jury-rigged PFUEI, which was not exactly user-friendly. NASA was persuaded to let us (and fund us to) build an instrument for the ground which was a sort of optical clone of the WF/PC. It had four reimaging cameras which viewed a four-faceted pyramidal mirror essentially in the focal plane at the Cassegrain focus of the 200-inch, giving a contiguous field four times as large as a single 800 × 800 CCD. It was called four-shooter; eventually a spectrograph was added and two more pyramids made, one with a slit and the other with an opening in which a photographic slit mask could be placed to allow acquiring multiple spectra of, for example, galaxies or stars in clusters. It was not finished until after I left Caltech in 1980 to return to Princeton. Part of this development, of course, was the operating software, and it was largely written by a marvelous programmer named Barbara Zimmerman, all in a real-time language called Forth, actually developed for instrument and telescope control. I am enormously indebted to Barbara, not only for the excellent work but also for introducing me to Forth. I later developed a Forth compiler in C which can be linked against C code for things that need to go very fast, and use it today as my primary computing tool for essentially everything I do. The four-shooter crew are shown with the instrument in the Cassegrain cage of the 200-inch telescope in Figure 2.
A development I had followed fairly closely in CCD technology, and one which was to be enormously important later, was the so-called TDI technique, in which the field moves parallel to the columns in the device and the charge is clocked in lockstep, to produce not just a picture but a continuous mosaic. It has been used extensively by the military in aerial and satellite reconnaissance by cameras looking down. But as Earth rotates, the sky goes by, and one can clearly use the same technique in astronomy. The advantages are many; data-gathering is 100% efficient, because one does not have to read the device, reposition the telescope, take another picture, etc., and the nonuniformities in the detector, a serious problem with these early detectors, are averaged fully in one dimension on the device. Maarten Schmidt was intrigued by the idea to do a survey for faint high-redshift quasars, and persuaded me to modify the PFUEI electronics to work in this mode. After some hacking on gratings that would chill the blood of most optical people, in order to get both a strong zero-order image and a strong spectrum, necessary to measure redshifts with accuracy in a single pass, we were on the air, and were almost immediately rewarded with a quasar at a redshift of 4.73 (Schneider et al. 1989). Other high-redshift ones came later, but none higher than this. Much later, it was time to use the technique on four-shooter, which, with its four cameras, was a significantly more difficult project. But it worked, and with similarly hacked custom gratings, which for four-shooter had to be made on wedges to maintain the straight-through optical path, we were under way. Again, almost immediately, Schneider, Schmidt, and I found an object at 4.89, but again it was not followed by even higher redshifts. The record held until the Sloan Survey, which we will discuss anon.

Figure 2
The four-shooter crew at the Palomar Observatory. From left to right are Ernie Lorenz, J. DeVere Smith, Barbara Zimmerman, Richard Lucinio, Mike Carr, Vic Nenow, James Gunn, Don Schneider, Ed Danielson, and Jim Westphal. Photo provided by James Gunn.
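The TDI bookkeeping described above is easy to demonstrate in a few lines: clock the charge down one row per step while the image drifts at the same rate, and each sky element keeps illuminating the same charge packet for the full transit of the device. A toy one-column simulation (the detector size and the point source are made up):

```python
# Toy TDI (drift-scan) simulation for a single CCD column.
# The image drifts one row per clock; the charge is shifted in lockstep,
# so each sky element accumulates in one charge packet for the whole
# transit and is read out exactly once -- 100% observing efficiency.

NROWS = 64            # rows in the (toy) device
NSTEPS = 200          # clock cycles to simulate

def sky(position):
    """Sky brightness along the scan: a single unit-flux point source."""
    return 1.0 if position == 100 else 0.0

charge = [0.0] * NROWS    # charge packets; index NROWS-1 is the readout row
readout = []

for t in range(NSTEPS):
    # Clock: read out the bottom row, shift every packet down one row.
    readout.append(charge[-1])
    charge = [0.0] + charge[:-1]
    # Expose: at time t, sky position (t - r) is imaged onto row r,
    # so a given sky point tracks the same packet as both drift together.
    for r in range(NROWS):
        charge[r] += sky(t - r)

peak = max(readout)
nonzero = sum(1 for x in readout if x > 0)
print(peak, nonzero)   # NROWS-fold signal, concentrated in a single sample
```

The point source emerges as a single readout sample with NROWS times its per-clock flux: the source is integrated for the full transit time, yet stays unsmeared, which is exactly why the mode suited a continuous strip survey.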

BACK TO PRINCETON, 1980
Life had changed. I had separated from Rosemary and had become involved with Jill Knapp. We had new offers from Princeton, and both of us decided it was time for a change. We left Caltech and arrived in Princeton in the fall of 1980. Jerry Ostriker was department chairman, having taken the job over from Lyman Spitzer, who had held it for decades. I was to inherit the Higgins professorship, held until very shortly before by Martin Schwarzschild, whose shoes I could not possibly fill, but I took the job; the very thought of Jerry and me being Lyman and Martin was pretty frightening. I was especially gratified by the offer, since in some very real sense I had stolen the Space Telescope camera from Lyman, but Lyman was a gentleman, and the architect of the finest, most human, most collegial, and best-working department of my experience.
My attention at first was on the WF/PC, which was essentially complete but had many team/scientific/administrative details to be worked out, and finishing four-shooter, which had run into severe delays with the delivery of satisfactory reimaging optics. I acquired two really excellent PhD students within the first year I was here, Robert Lupton, who worked on four-shooter crowded-field globular cluster photometry to complement the dynamical studies with Griffin, and Jeremy Goodman, who wrote on dynamical relaxation in stellar systems. Both are back at Princeton and have been for many years, and Robert was to work with me on software throughout SDSS, currently on PFS, and is playing a major role in LSST software. Jeremy is the theoretical genius he always was.
When four-shooter was finally working properly, and working in TDI scanning mode, we began a distant cluster survey in earnest with it (Postman et al. 1996), and, as soon as the gratings, electronics, and software were ready, the slitless quasar survey which yielded the z = 4.89 object. We (Schneider, Schmidt, and I) were able to get a reasonable luminosity function for quasars and were able to show that the comoving density peaked between redshifts 2 and 3; Maarten had already shown years before that the numbers dropped precipitously from redshift 2 to the present. We know now, of course, that the star formation rate in the Universe peaked strongly in the same period, which has come to be called the "High Noon" of the Universe.
Alan Dressler and I then did multiobject spectroscopic surveys of clusters, and confirmed that the populations of galaxies in great clusters were markedly different even at a redshift of 0.5 from those of today [Butcher & Oemler (1978) had already found this photometrically], in that the fraction of blue, still star-forming galaxies was much higher then, and also that there was a new population of galaxies we called "E+A", which had a large elliptical (E) population of old stars (Dressler & Gunn 1992), but whose blue spectrum was dominated by a moderately young (A) population which represented only a small fraction of the mass, but which indicated a truncated burst of star formation less than a gigayear earlier. These we identified with the infalling galaxies which Rich Gott and I had discussed years before, which are stripped of their low-density gas by the intracluster medium but have their high-density clouds compressed by it, triggering a star-formation burst. It is clear that the jury is still out on what causes these starbursts, and how they are related to the general quenching phenomenon.
Meanwhile the Space Telescope schedule was slipping at a frightening rate, and in the press to maintain the schedule several critical optical checks were never completed, with ghastly results later. Then in 1986 the Challenger disaster occurred, and it was unclear for a while whether the launch would ever happen, or the shuttle ever fly again. That was resolved, of course, and the launch finally happened in 1990, soon after which it was clear that there had been a terrible mistake in the optics which was finally traced to the incorrect assembly of the null test lens at Perkin Elmer. There was an essentially perfect backup mirror that had been built by Kodak, but the bad mirror was in orbit and the good one on the ground; changing mirrors in orbit was not in the cards, and it was felt that bringing the telescope back was neither safe nor feasible. All of this has been discussed ad nauseam in books and publications-just read the Wikipedia article for a good (and, I think, accurate) summary.
It appeared that years of work (not to mention a billion dollars) were simply gone. There were difficult personal issues in my life as well, and I resigned from the team; I had no confidence that the problem could be fixed in any reasonable way, though there were lots of ideas floating around. I was wrong, of course; the system was fixed by the installation of corrective optics for the other instruments and replacing the WF/PC with a completely new instrument, more-or-less identical except for redesigned relay optics to correct for the bad primary mirror (and rather drastically reduced capability). The astronauts who performed this work on orbit were properly recognized as heroes. And, of course, ST, which had been named for Edwin Hubble in 1983, came to be called just "Hubble" after it began working properly, and has been an incredible resource for astronomy. I do not, however, regret the decision to leave; I think the Sloan survey would never have happened had I stayed.

SDSS
So let us talk about the Sloan Digital Sky Survey (SDSS), which has certainly been the most important and successful thing I have been involved in in my scientific career, one for which I have been very gratifyingly recognized.
It all began in about 1986 (whether before or after Challenger I do not remember). I was in Jim Westphal's office at Caltech when Morley Blouke, the CCD guru who had been at TI and was the brains behind the development of the CCDs there, walked in. He had moved a couple of years before from TI to Tektronix, the leading oscilloscope manufacturer in the world at the time. They had an idea for superfast oscilloscopes in which the electron beam wrote on a CCD instead of a phosphor screen. This required the development of large CCDs. Morley pulled from his briefcase a wafer carrier, and on the wafer was a single CCD which occupied most of the four-inch wafer area, a device with 2400 × 2400 28-micron pixels. It did not yet work, but working devices were coming, and the needed thinning and interconnect technology were being developed. There would be working devices within a couple of years, slightly smaller and with slightly smaller pixels, but still enormous compared to anything else extant.
The Palomar Sky Survey, a photographic survey conducted with the 48-inch Schmidt telescope in the late 1940s, had been the survey tool for astronomy for years. But it was photographic, was not very deep, and consisted of "analog" images from which it was difficult to extract any quantitative photometric or astrometric information even if it was there. Technology to do a better job had to be coming, and it had just emerged from Morley Blouke's briefcase.
Thoughts of how such a device might be used occupied me for a long time thereafter, and not too much later I was at Kitt Peak headquarters for a meeting to try to help NOAO figure out what to do with a 3.5-meter mirror which had been made in a pilot project to test Roger Angel's new technique for casting borosilicate mirrors. This, of course, went on to be enormously successful for making really big mirrors. This mirror wound up in the WIYN telescope, and a twin was procured by the ARC consortium, of which Princeton was a founding member, for a budding observatory on Apache Point in New Mexico, which is an important part of this story. Figure 3 shows an aerial view of the observatory taken after the SDSS telescope was built.
It was at this KPNO meeting that the idea was born of a wide-field imaging telescope with a focal plane paved with these large CCDs working in TDI mode, and similar CCDs in a battery of fiber-fed spectrographs for the same telescope, to be used for spectroscopic work when the weather was not quite as pristine as it needs to be for imaging. I went back home and set to work to try to design an optical system to do this. I had developed a very fast vector-based optics code some years before while I was at JPL, which was a satisfactory tool for the task. It soon became clear that a somewhat smaller telescope than 3.5 meters was optimal for this project, given the parameters of the detectors, the need for wide field, and expectations for seeing in US sites. A preliminary outline emerged of a project built around a 2.5-meter telescope with a new distortion-free (necessary for TDI) Ritchey-Chrétien-like design with an almost flat field, a survey strategy which could cover the northern galactic cap in several colors with almost simultaneous photometry reaching at least two magnitudes deeper than the Palomar Survey in a few years, and fiber spectroscopy for of the order of a million galaxies and a hundred thousand quasars, all with accurate digital data for photometry, morphology, astrometry, and spectrophotometry. To me, an incredibly exciting prospect. It was to occupy most of my time and attention for the next two decades. When I was pretty sure it could all be built without enormous and expensive development effort, I had a long discussion with Jerry Ostriker, then department chair and later provost of the university. I will never forget this conversation. At the end, Jerry declared "Let's do it. You build it, and I will pay for it." Shortly thereafter, Princeton alum and financier Keith Gollust walked into Jerry's office with an offer to help fund some new, exciting astronomical project, and we were underway as an ARC project with Don York as Director and yours truly as project scientist (York et al. 2000).

Figure 3
The Astrophysical Research Consortium (ARC) observatory site at Apache Point, New Mexico. The Sloan Digital Sky Survey telescope with its rollaway enclosure is on the left. Next to it is the 0.6-m photometric telescope, used for photometric calibration of the survey, then the 0.7-m New Mexico State University telescope, and the 3.5-m ARC telescope connected by a covered walkway to the operations building at far right. Photo from the APO/SDSS collection by Dan Long (CC-BY 4.0).
I will not dwell here on the development of the project. It has been well recorded by Ann Finkbeiner in her book A Grand and Bold Thing (Finkbeiner 2012), though, IMHO, an overzealous editor removed much of the contention and controversy in the project engendered partly by style conflicts between universities and government labs; Fermilab was a major player in the project. This was fueled partly by scarcity of funding, and partly by inevitable delays and an inadequate management structure for a project of this size. We were in serious danger for a long time of going under when it was abundantly clear that the first cost estimates were a factor of several smaller than the real costs would be, and that the proposed schedule was nonsensical. I think there were important lessons to be learned from the trouble, perhaps the most important one that management skills are vastly important for projects of this size, and scientific skill and management skill are not synonymous. But Hirsh Cohen and several other folks at the Sloan Foundation, where our initial large funding (and the name) came from, believed in the project, and, in particular that it could be done by the existing team, and should be done. Jerry was very successful in keeping the universities behind us, and we were able to bring it into operation in 2000. We were aided in no small part by the management skills of Jim Crocker, who came as project manager in our darkest hours, brought in by John Peoples, Fermilab director who was by then directing the project. Its success speaks well for itself, but the road was a very tortuous one.
One of the biggest problems, which was not nearly well enough recognized (or funded) at the beginning, was the development of software. There was some vague idea at the beginning that it would be all done by faculty and graduate students. To be sure, there was important work by graduate students; Connie Rockosi, who was very much my right-hand person during the construction, did wonderful things in both software and hardware, but most of the software was developed by folks like Robert Lupton, who directed the imaging pipeline work and wrote much of the code, and was hired by the project to do so. Jill ably managed the software effort at Princeton. Figure 4 shows most of the Princeton-based camera team, just before the camera was shipped.
So at Princeton the development work on the imaging pipeline, later a second spectroscopic pipeline masterminded mostly by David Schlegel, Doug Finkbeiner, and Scott Burles, and essentially all the work on the camera and the spectrograph detectors and cryostats, was done. I had lured Mike Carr from Caltech to do the detailed engineering, so our collaboration continued.

Figure 5
The instrument rotator of the 2.5-m Sloan Digital Sky Survey (SDSS) telescope, showing the two 300-fiber spectrographs (green) and, between them, the mounted imaging camera with its rear cover off, revealing the cryogenic plumbing. There are 8 cryostats carrying 30 imaging CCDs and 22 astrometric calibration chips. The spectrographs each have a red and a blue channel, splitting the 3800 Å to 9000 Å spectral range at about 6000 Å. The camera is now in the Smithsonian Institution, and the spectrographs are still in service, winding down SDSS IV. Photo provided by James Gunn.
But perhaps as important as the science was a new way to do science, in particular astronomy, but really science of any kind. The SDSS in one way or another has produced 9299 scientific papers to date, which have been cited half a million times. Far fewer than half of these papers arose from within the collaboration, even while the original survey was active. A secret of its success, as important as or more important than the optics and detectors and software, was the openness of the data and of the open tools to access, organize, and work with the data. This was a difficult and long-fought battle within the consortium; large collaborations were a new thing in the field, and completely alien to most astronomers, who worked alone or at most in small groups of carefully chosen collaborators. The desire to carve up the output of the survey into bits of turf belonging to individuals or groups was very prevalent, but the policy that anyone could do anything, even if somebody else was already doing it, and even if some felt it was being done incorrectly, was put into place. Interestingly, the opposition to the policy melted away over time, and the richness it engendered came to be properly appreciated. Collaborations of people with vastly different skills but common interests sprang up all over the world, and I think we and the field are enormously richer for it.
I was sufficiently busy with technical matters, both directly instrumental and conduct-of-survey, that I was actually involved in very little of the science. I am asked whether I regret this, and the

Figure 6
Dramatic photo of the Sloan Digital Sky Survey telescope at Apache Point, New Mexico, at sunset. There is no dome; the cage around the telescope is a wind baffle that moves with, but does not touch, the telescope. Photo from the APO/SDSS collection by Dan Long (CC-BY 4.0).
easy answer is that I do not. The SDSS was my child, and its wonderful group of (mostly young) researchers also my children. They have done very well and have almost universally gone on to great things...again, despite worries that very talented people get lost in large collaborations. They had enough faith in the project to hang in through the very difficult times and, I think, have been rewarded for their efforts. Thank you all. I will in a bit write a short piece on where I think the field is going and what I am doing now, but first I will interject a parenthetical section on a particular facet of my life: the interaction of the field, and of science and academia in general, with a pernicious social problem in the United States.

TEACHING IN PRISON
In 2005, among the new postdocs arriving in the department was Mark Krumholz from UC Berkeley. Mark had been deeply involved in Jody Lewen's Prison University Project, in which volunteers from regional universities teach accredited college courses in San Quentin Prison. Mark, Jill Knapp, and Jenny Greene, then also a postdoc in the department, worked with the local Mercer Community College and the state Department of Corrections to set up a similar effort in the New Jersey state prisons. There were small existing programs teaching specialized courses (a small-business certificate, a certificate in religious studies) but little of a more generally academic nature. A couple of years later I was persuaded to join this effort, and it has been an enormously enlightening and rewarding endeavor. That it has accomplished good things there is no doubt whatsoever; the sadness is that it is necessary in the first place in this sick society, and that we reach so few people.
Why do this? You may know that the United States incarcerates its citizens at per-capita and absolute rates that are the highest in the world, roughly an order of magnitude higher than in any nation in Europe, and that that incarceration is so racially unbalanced as to be a blatant demonstration of racism in the United States at essentially every stage of the process: economics, education, arrests, incarceration. The vast majority of the citizens in prison, typically over 90 percent, are there as a result of plea bargaining, not the jury trial to which the Constitution assures citizens a right, and the ratio of people of color to whites in prison is close to the inverse of the ratio in the population at large.
Watching the inmates stream past during one of the endless waits in the state prisons, you see black face after black face, a few brown ones, and the occasional white one. This ghastly ratio varies from state to state, and New Jersey's is the worst in the country.
To be sure, the students we reach in prison are a self-selected group committed to making a better life for themselves. They are the intellectual equals of undergraduates at fine universities but are usually appallingly badly educated. There is a courageous program at New Jersey's state university, Rutgers, initiated by history professor Donald Roden, to admit students recently released from prison; most of these young men and women were involved in the in-prison college program. The program produced the first two Truman scholars from Rutgers after a hiatus of more than 10 years, and their general academic performance well exceeds the university average. Their career success after graduating from university is amazingly high, and we are especially proud that much of this success is in technical fields. We are all deeply grateful for the foresight and courage of Rutgers, of the Princeton department, which has strongly supported this effort every step of the way, and of our accrediting institutions, Mercer County and Raritan Valley community colleges. The effort shows what can happen when people get together for an important cause.
But while there are many prison teaching programs across the country, in many there is a striking dearth of interest in STEM fields, and the aversion to mathematics in particular (and how can we teach science without math?) is widespread and pernicious. This is despite the obvious fact that the difficult reentry into society from prison, even via a university, is vastly easier in technical fields than in others, and despite the many talented students inside.
My own experience, and that of my colleagues who participate in prison teaching, produces both hope and outrage. Seeing the power of education elicits hope, but the outrage comes from observing firsthand the social injustices that have denied whole segments of the population the education the privileged take completely for granted, an education that is supposed to be part of the fabric of the country, and from the simple realization that our efforts are too small to do anything but alleviate a tiny piece of the problem. While the roots of this social outrage run very deep and certainly begin with inadequate schooling in poor, often black areas, the racial profiling so much in the news at the arrest level is prevalent at all stages of life.
We are educators, and educators live by teaching. But one could reasonably ask why a professional scientist should care about any of this. Our science is alive, healthy, vigorous, and enormously successful as it is: can you really make a case for the enormous effort that would be required of the current scientific community to open the field to all talented and interested people? Not, I think, for the sake of the science, but for the sake of our own and the nation's soul. It could be you, me, anyone, locked up with no chance to do what we were put here to do. There is a wonderful film about the life of James Baldwin called I Am Not Your Negro, which I strongly recommend you see if you have not. Living in a country that so cruelly mistreats a large fraction of its population erodes us all.

AFTER SDSS
SDSS made a valuable census of the Universe at "present", i.e., within the last two billion years or so for galaxies, but it covers far too short a cosmic period to say much about evolution. What is clearly needed is a survey much like SDSS, with both imaging and spectroscopy, that goes deep enough to do so. We were deeply involved with the Japanese community during SDSS, and they made very large contributions to the project, and to the camera in particular. Maki Sekiguchi had pioneered multi-CCD cameras in Japan and for Subaru, and his student, Satoshi Miyazaki, was building a very large camera, Hyper Suprime-Cam (HSC) (Miyazaki et al. 2018), for Subaru. Princeton became involved with that project, and a large imaging survey with that instrument is now well underway. The spectroscopy will come later. After a long and tortuous adventure in the NSF WFMOS project, a large fiber spectrograph destined for the Subaru telescope (because nobody else in the world had had the foresight to build large telescopes with big fields), in which this country did not acquit itself very well, the Japanese secured initial funding for their own very large multifiber spectrograph for Subaru, PFS (Tamura et al. 2018). I was involved in the design of this instrument from the beginning, and it has been my primary pursuit for most of the last decade. Robert Lupton's software group, which wrote much of the HSC pipeline, is writing much of both the operations software and the reduction software for PFS, in collaboration mostly with colleagues in France, Japan, and Taiwan. We and our long-time SDSS colleagues at Johns Hopkins University are building the detectors and twelve large cryostats for the spectrograph cameras. I am a kind of architect-without-portfolio for the project and have my hands in almost everything.
The project is fully as technically challenging as SDSS was, and despite being spread all over the world (significant development work is going on in Japan, Taiwan, Hawaii, both coasts of the United States, Brazil, Britain, France, and Germany), the project is cohesive, much more collegial than SDSS was in its construction phase, and entirely a pleasure to work within. We are, of course, short of both time and money, and serious problems, traceable mostly to not having had enough time early on to fully solve the problems we did recognize and to recognize those we did not, arise all too often for comfort, but to no greater extent than in other projects in which I have been involved.
We will be on the sky in a couple of years, after which it is not clear what I will do. I "retired" in 2012, already well past normal retirement age, and will be in my middle eighties, if I am lucky enough to last that long, when the PFS survey is underway. It is clear that my ability to do the things on which my scientific life has depended, things like writing code, building electronics, and handling optics and detectors, is declining rapidly with age, and, of course, this plays poorly with the increasing complexity of those activities. I have often said, and I mean it, that I will depart the field when I am no longer useful, but what I will do then is not clear.

NOW AND THE FUTURE
The growth and maturation of the field during the almost six decades my career has spanned have been nothing short of astonishing. When I began in the 1960s, we were using photographic plates for almost all investigations; there was a raging factor-of-two controversy over the value of the Hubble constant (the competing values at least, gratifyingly, bracketed the current one, about which people now haggle at the one percent level); there were no known exoplanets, quasars, pulsars, or black holes; there were no cosmological or galaxy-formation simulations; and there was no real recognition that dark matter even existed.