
PUBLIC LECTURES BY PROFESSOR STEPHEN HAWKING

The Beginning of Time

In this lecture, I would like to discuss whether time itself has a beginning, and whether it will have an end. All the evidence seems to indicate that the universe has not existed forever, but that it had a beginning, about 15 billion years ago. This is probably the most remarkable discovery of modern cosmology. Yet it is now taken for granted. We are not yet certain whether the universe will have an end. When I gave a lecture in Japan, I was asked not to mention the possible re-collapse of the universe, because it might affect the stock market. However, I can reassure anyone who is nervous about their investments that it is a bit early to sell: even if the universe does come to an end, it won't be for at least twenty billion years. By that time, maybe the GATT trade agreement will have come into effect. The time scale of the universe is very long compared to that of a human life. It was therefore not surprising that until recently, the universe was thought to be essentially static and unchanging in time. On the other hand, it must have been obvious that society is evolving in culture and technology. This indicates that the present phase of human history cannot have been going on for more than a few thousand years. Otherwise, we would be more advanced than we are. It was therefore natural to believe that the human race, and maybe the whole universe, had a beginning in the fairly recent past. However, many people were unhappy with the idea that the universe had a beginning, because it seemed to imply the existence of a supernatural being who created the universe. They preferred to believe that the universe, and the human race, had existed forever. Their explanation for human progress was that there had been periodic floods, or other natural disasters, which repeatedly set the human race back to a primitive state.
This argument about whether or not the universe had a beginning persisted into the 19th and 20th centuries. It was conducted mainly on the basis of theology and philosophy, with little consideration of observational evidence. This may have been reasonable, given the notoriously unreliable character of cosmological observations, until fairly recently. The cosmologist, Sir Arthur Eddington, once said, 'Don't worry if your theory doesn't agree with the observations, because they are probably wrong.' But if your theory disagrees with the Second Law of Thermodynamics, it is in bad trouble. In fact, the theory that the universe has existed forever is in serious difficulty with the Second Law of Thermodynamics. The Second Law states that disorder always increases with time. Like the argument about human progress, it indicates that there must have been a beginning. Otherwise, the universe would be in a state of complete disorder by now, and everything would be at the same temperature. In an infinite and everlasting universe, every line of sight would end on the surface of a star. This would mean that the night sky would have been as bright as the surface of the Sun. The only way of avoiding this problem would be if, for some reason, the stars did not shine before a certain time. In a universe that was essentially static, there would not have been any dynamical reason why the stars should have suddenly turned on at some time. Any such "lighting up time" would have to be imposed by an intervention from outside the universe. The situation was different, however, when it was realised that the universe is not static, but expanding. Galaxies are moving steadily apart from each other. This means that they were closer together in the past. One can plot the separation of two galaxies as a function of time. If there were no acceleration due to gravity, the graph would be a straight line. It would go down to zero separation about twenty billion years ago.
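The straight-line extrapolation can be made concrete: with no gravity, the time back to zero separation is just one over the Hubble constant. A minimal Python sketch, assuming the lecture-era value of roughly 50 km/s per megaparsec for the Hubble constant (the numerical value is my assumption, not quoted in the lecture):

```python
# Age of a coasting (no-gravity) universe: t = d / v = 1 / H0.
H0 = 50.0                      # km/s per megaparsec (assumed, lecture-era value)
KM_PER_MPC = 3.086e19          # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

t_seconds = KM_PER_MPC / H0            # time for separation to extrapolate to zero
t_years = t_seconds / SECONDS_PER_YEAR
print(f"{t_years / 1e9:.1f} billion years")   # → about 19.6 billion years
```

With today's preferred value of about 70 km/s/Mpc, the same arithmetic gives roughly 14 billion years.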
One would expect gravity, to cause the galaxies to accelerate towards each other. This will mean that the graph of the separation of two galaxies will bend downwards, below the straight line. So the time of zero separation, would have been less than twenty billion years ago. At this time, the Big Bang, all the matter in the universe, would have been on top of itself. The density would have been infinite. It would have been what is called, a singularity. At a singularity, all the laws of physics would have broken down. This means that the state of the universe, after the Big Bang, will not depend on anything that may have happened before, because the deterministic laws that govern the universe will break down in the Big Bang. The universe will evolve from the Big Bang, completely independently of what it was like before. Even the amount of matter in the universe, can be different to what it was before the Big Bang, as the Law of Conservation of Matter, will break down at the Big Bang. Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory, and say that time began at the Big Bang. Events before the Big Bang, are simply not defined, because there's no way one could measure what happened at them. This kind of beginning to the universe, and of time itself, is very different to the beginnings that had been considered earlier. These had to be imposed on the universe by some external agency. There is no dynamical reason why the motion of bodies in the solar system can not be extrapolated back in time, far beyond four thousand and four BC, the date for the creation of the universe, according to the book of Genesis. Thus it would require the direct intervention of God, if the universe began at that date. By contrast, the Big Bang is a beginning that is required by the dynamical laws that govern the universe. It is therefore intrinsic to the universe, and is not imposed on it from outside. 
Although the laws of science seemed to predict the universe had a beginning, they also seemed to predict that they could not determine how the universe would have begun. This was obviously very unsatisfactory. So there were a number of attempts to get round the conclusion that there was a singularity of infinite density in the past. One suggestion was to modify the law of gravity, so that it became repulsive. This could lead to the graph of the separation between two galaxies being a curve that approached zero, but didn't actually pass through it, at any finite time in the past. Instead, the idea was that, as the galaxies moved apart, new galaxies were formed in between, from matter that was supposed to be continually created. This was the Steady State theory, proposed by Bondi, Gold, and Hoyle. The Steady State theory was what Karl Popper would call a good scientific theory: it made definite predictions, which could be tested by observation, and possibly falsified. Unfortunately for the theory, they were falsified. The first trouble came with the Cambridge observations of the number of radio sources of different strengths. On average, one would expect that the fainter sources would also be the more distant. One would therefore expect them to be more numerous than bright sources, which would tend to be near to us. However, the graph of the number of radio sources against their strength went up much more sharply at low source strengths than the Steady State theory predicted. There were attempts to explain away this number count graph, by claiming that some of the faint radio sources were within our own galaxy, and so did not tell us anything about cosmology. This argument didn't really stand up to further observations. But the final nail in the coffin of the Steady State theory came with the discovery of the microwave background radiation, in 1965. This radiation is the same in all directions.
It has the spectrum of radiation in thermal equilibrium at a temperature of 2.7 degrees above absolute zero. There doesn't seem any way to explain this radiation in the Steady State theory. Another attempt to avoid a beginning to time was the suggestion that maybe all the galaxies didn't meet up at a single point in the past. Although on average, the galaxies are moving apart from each other at a steady rate, they also have small additional velocities, relative to the uniform expansion. These so-called "peculiar velocities" of the galaxies may be directed sideways to the main expansion. It was argued that, as you plotted the position of the galaxies back in time, the sideways peculiar velocities would have meant that the galaxies wouldn't have all met up. Instead, there could have been a previous contracting phase of the universe, in which galaxies were moving towards each other. The sideways velocities could have meant that the galaxies didn't collide, but rushed past each other, and then started to move apart. There wouldn't have been any singularity of infinite density, or any breakdown of the laws of physics. Thus there would be no necessity for the universe, and time itself, to have a beginning. Indeed, one might suppose that the universe had oscillated, though that still wouldn't solve the problem with the Second Law of Thermodynamics: one would expect that the universe would become more disordered with each oscillation. It is therefore difficult to see how the universe could have been oscillating for an infinite time. This possibility, that the galaxies would have missed each other, was supported by a paper by two Russians. They claimed that there would be no singularities in a solution of the field equations of general relativity which was fully general, in the sense that it didn't have any exact symmetry. However, their claim was proved wrong by a number of theorems by Roger Penrose and myself.
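The 2.7-degree figure above fixes where that thermal spectrum peaks, via Wien's displacement law. A quick sketch using the standard value of Wien's constant (the constant itself is not quoted in the lecture):

```python
# Wien's displacement law: lambda_max = b / T.
b = 2.898e-3        # Wien's constant, metre-kelvins (standard value)
T = 2.7             # temperature of the microwave background, kelvin

lam_max = b / T     # wavelength of peak emission, metres
print(f"peak wavelength ≈ {lam_max * 1000:.2f} mm")   # → about 1.07 mm, i.e. microwaves
```

The peak falling at about a millimetre is why the radiation is called the microwave background.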
These showed that general relativity predicted singularities, whenever more than a certain amount of mass was present in a region. The first theorems were designed to show that time came to an end, inside a black hole, formed by the collapse of a star. However, the expansion of the universe, is like the time reverse of the collapse of a star. I therefore want to show you, that observational evidence indicates the universe contains sufficient matter, that it is like the time reverse of a black hole, and so contains a singularity. In order to discuss observations in cosmology, it is helpful to draw a diagram of events in space and time, with time going upward, and the space directions horizontal. To show this diagram properly, I would really need a four dimensional screen. However, because of government cuts, we could manage to provide only a two dimensional screen. I shall therefore be able to show only one of the space directions. As we look out at the universe, we are looking back in time, because light had to leave distant objects a long time ago, to reach us at the present time. This means that the events we observe lie on what is called our past light cone. The point of the cone is at our position, at the present time. As one goes back in time on the diagram, the light cone spreads out to greater distances, and its area increases. However, if there is sufficient matter on our past light cone, it will bend the rays of light towards each other. This will mean that, as one goes back into the past, the area of our past light cone will reach a maximum, and then start to decrease. It is this focussing of our past light cone, by the gravitational effect of the matter in the universe, that is the signal that the universe is within its horizon, like the time reverse of a black hole. If one can determine that there is enough matter in the universe, to focus our past light cone, one can then apply the singularity theorems, to show that time must have a beginning. 
How can we tell from the observations whether there is enough matter on our past light cone to focus it? We observe a number of galaxies, but we can not measure directly how much matter they contain. Nor can we be sure that every line of sight from us will pass through a galaxy. So I will give a different argument, to show that the universe contains enough matter to focus our past light cone. The argument is based on the spectrum of the microwave background radiation. This is characteristic of radiation that has been in thermal equilibrium with matter at the same temperature. To achieve such an equilibrium, it is necessary for the radiation to be scattered by matter many times. For example, the light that we receive from the Sun has a characteristically thermal spectrum. This is not because the nuclear reactions, which go on in the centre of the Sun, produce radiation with a thermal spectrum. Rather, it is because the radiation has been scattered by the matter in the Sun, many times on its way from the centre. In the case of the universe, the fact that the microwave background has such an exactly thermal spectrum indicates that it must have been scattered many times. The universe must therefore contain enough matter to make it opaque in every direction we look, because the microwave background is the same in every direction we look. Moreover, this opacity must occur a long way away from us, because we can see galaxies and quasars at great distances. Thus there must be a lot of matter at a great distance from us. The greatest opacity over a broad wave band, for a given density, comes from ionised hydrogen. It then follows that if there is enough matter to make the universe opaque, there is also enough matter to focus our past light cone. One can then apply the theorems of Penrose and myself, to show that time must have a beginning. The focussing of our past light cone implies that time must have a beginning, if the General Theory of Relativity is correct.
But one might raise the question, of whether General Relativity really is correct. It certainly agrees with all the observational tests that have been carried out. However these test General Relativity, only over fairly large distances. We know that General Relativity can not be quite correct on very small distances, because it is a classical theory. This means, it doesn't take into account, the Uncertainty Principle of Quantum Mechanics, which says that an object can not have both a well defined position, and a well defined speed: the more accurately one measures the position, the less accurately one can measure the speed, and vice versa. Therefore, to understand the very high-density stage, when the universe was very small, one needs a quantum theory of gravity, which will combine General Relativity with the Uncertainty Principle. Many people hoped that quantum effects, would somehow smooth out the singularity of infinite density, and allow the universe to bounce, and continue back to a previous contracting phase. This would be rather like the earlier idea of galaxies missing each other, but the bounce would occur at a much higher density. However, I think that this is not what happens: quantum effects do not remove the singularity, and allow time to be continued back indefinitely. But it seems that quantum effects can remove the most objectionable feature, of singularities in classical General Relativity. This is that the classical theory, does not enable one to calculate what would come out of a singularity, because all the Laws of Physics would break down there. This would mean that science could not predict how the universe would have begun. Instead, one would have to appeal to an agency outside the universe. This may be why many religious leaders, were ready to accept the Big Bang, and the singularity theorems. It seems that Quantum theory, on the other hand, can predict how the universe will begin. 
Quantum theory introduces a new idea, that of imaginary time. Imaginary time may sound like science fiction, and it has been brought into Doctor Who. But nevertheless, it is a genuine scientific concept. One can picture it in the following way. One can think of ordinary, real, time as a horizontal line. On the left, one has the past, and on the right, the future. But there's another kind of time in the vertical direction. This is called imaginary time, because it is not the kind of time we normally experience. But in a sense, it is just as real as what we call real time. The three directions in space, and the one direction of imaginary time, make up what is called a Euclidean space-time. I don't think anyone can picture a four dimensional curved space. But it is not too difficult to visualise a two dimensional surface, like a saddle, or the surface of a football. In fact, James Hartle of the University of California Santa Barbara, and I have proposed that space and imaginary time together are indeed finite in extent, but without boundary. They would be like the surface of the Earth, but with two more dimensions. The surface of the Earth is finite in extent, but it doesn't have any boundaries or edges. I have been round the world, and I didn't fall off. If space and imaginary time are indeed like the surface of the Earth, there wouldn't be any singularities in the imaginary time direction, at which the laws of physics would break down. And there wouldn't be any boundaries to the imaginary time space-time, just as there aren't any boundaries to the surface of the Earth. This absence of boundaries means that the laws of physics would determine the state of the universe uniquely, in imaginary time. But if one knows the state of the universe in imaginary time, one can calculate the state of the universe in real time. One would still expect some sort of Big Bang singularity in real time. So real time would still have a beginning.
But one wouldn't have to appeal to something outside the universe, to determine how the universe began. Instead, the way the universe started out at the Big Bang would be determined by the state of the universe in imaginary time. Thus, the universe would be a completely self-contained system. It would not be determined by anything outside the physical universe, that we observe. The no boundary condition, is the statement that the laws of physics hold everywhere. Clearly, this is something that one would like to believe, but it is a hypothesis. One has to test it, by comparing the state of the universe that it would predict, with observations of what the universe is actually like. If the observations disagreed with the predictions of the no boundary hypothesis, we would have to conclude the hypothesis was false. There would have to be something outside the universe, to wind up the clockwork, and set the universe going. Of course, even if the observations do agree with the predictions, that does not prove that the no boundary proposal is correct. But one's confidence in it would be increased, particularly because there doesn't seem to be any other natural proposal, for the quantum state of the universe. The no boundary proposal, predicts that the universe would start at a single point, like the North Pole of the Earth. But this point wouldn't be a singularity, like the Big Bang. Instead, it would be an ordinary point of space and time, like the North Pole is an ordinary point on the Earth, or so I'm told. I have not been there myself. According to the no boundary proposal, the universe would have expanded in a smooth way from a single point. As it expanded, it would have borrowed energy from the gravitational field, to create matter. As any economist could have predicted, the result of all that borrowing, was inflation. The universe expanded and borrowed at an ever-increasing rate. 
Fortunately, the debt of gravitational energy will not have to be repaid until the end of the universe. Eventually, the period of inflation would have ended, and the universe would have settled down to a stage of more moderate growth or expansion. However, inflation would have left its mark on the universe. The universe would have been almost completely smooth, but with very slight irregularities. These irregularities are so small, only one part in a hundred thousand, that for years people looked for them in vain. But in 1992, the Cosmic Background Explorer satellite, COBE, found these irregularities in the microwave background radiation. It was an historic moment. We saw back to the origin of the universe. The form of the fluctuations in the microwave background agrees closely with the predictions of the no boundary proposal. These very slight irregularities in the universe would have caused some regions to have expanded less quickly than others. Eventually, they would have stopped expanding, and would have collapsed in on themselves, to form stars and galaxies. Thus the no boundary proposal can explain all the rich and varied structure of the world we live in. What does the no boundary proposal predict for the future of the universe? Because it requires that the universe is finite in space, as well as in imaginary time, it implies that the universe will re-collapse eventually. However, it will not re-collapse for a very long time, much longer than the 15 billion years it has already been expanding. So, you will have time to sell your government bonds before the end of the universe is nigh. Quite what you invest in then, I don't know. Originally, I thought that the collapse would be the time reverse of the expansion. This would have meant that the arrow of time would have pointed the other way in the contracting phase. People would have gotten younger, as the universe got smaller. Eventually, they would have disappeared back into the womb.
However, I now realise I was wrong, as these solutions show. The collapse is not the time reverse of the expansion. The expansion will start with an inflationary phase, but the collapse will not in general end with an anti-inflationary phase. Moreover, the small departures from uniform density will continue to grow in the contracting phase. The universe will get more and more lumpy and irregular as it gets smaller, and disorder will increase. This means that the arrow of time will not reverse. People will continue to get older, even after the universe has begun to contract. So it is no good waiting until the universe re-collapses to return to your youth. You would be a bit past it, anyway, by then. The conclusion of this lecture is that the universe has not existed forever. Rather, the universe, and time itself, had a beginning in the Big Bang, about 15 billion years ago. The beginning of real time would have been a singularity, at which the laws of physics would have broken down. Nevertheless, the way the universe began would have been determined by the laws of physics, if the universe satisfied the no boundary condition. This says that in the imaginary time direction, space-time is finite in extent, but doesn't have any boundary or edge. The predictions of the no boundary proposal seem to agree with observation. The no boundary hypothesis also predicts that the universe will eventually collapse again. However, the contracting phase will not have the opposite arrow of time to the expanding phase. So we will keep on getting older, and we won't return to our youth. Because time is not going to go backwards, I think I had better stop now.

Space and Time Warps

This lecture is the intellectual property of Professor S.W. Hawking. You may not reproduce, edit or distribute this document in any way for monetary advantage. In science fiction, space and time warps are a commonplace. They are used for rapid journeys around the galaxy, or for travel through time.
But today's science fiction is often tomorrow's science fact. So what are the chances for space and time warps? The idea that space and time can be curved, or warped, is fairly recent. For more than two thousand years, the axioms of Euclidean geometry were considered to be self evident. As those of you that were forced to learn Euclidean geometry at school may remember, one of the consequences of these axioms is that the angles of a triangle add up to a hundred and eighty degrees. However, in the last century, people began to realize that other forms of geometry were possible, in which the angles of a triangle need not add up to a hundred and eighty degrees. Consider, for example, the surface of the Earth. The nearest thing to a straight line on the surface of the Earth is what is called a great circle. These are the shortest paths between two points, so they are the routes that airlines use. Consider now the triangle on the surface of the Earth made up of the equator, the line of 0 degrees longitude through London, and the line of 90 degrees longitude east, through Bangladesh. The two lines of longitude meet the equator at a right angle, 90 degrees. The two lines of longitude also meet each other at the north pole, at a right angle, or 90 degrees. Thus one has a triangle with three right angles. The angles of this triangle add up to two hundred and seventy degrees. This is greater than the hundred and eighty degrees for a triangle on a flat surface. If one drew a triangle on a saddle shaped surface, one would find that the angles added up to less than a hundred and eighty degrees. The surface of the Earth is what is called a two dimensional space. That is, you can move on the surface of the Earth in two directions at right angles to each other: you can move north south, or east west. But of course, there is a third direction at right angles to these two, and that is up or down. That is to say, the surface of the Earth exists in three-dimensional space.
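The 270-degree total quoted above follows from the spherical excess formula: the angle sum of a spherical triangle exceeds 180 degrees by the triangle's area divided by the square of the sphere's radius. A short sketch for the equator-and-two-meridians triangle, which covers one eighth of the sphere:

```python
import math

# Spherical excess: angle sum (radians) = pi + area / R^2.
# The equator / 0-degrees / 90-degrees-east triangle covers one octant of the sphere.
R = 1.0
area = 4 * math.pi * R**2 / 8          # one eighth of the sphere's surface area
angle_sum = math.pi + area / R**2      # radians

print(f"{math.degrees(angle_sum):.0f} degrees")   # → 270 degrees
```

The same formula, with a negative curvature term, gives the sub-180-degree sums on a saddle-shaped surface.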
The three dimensional space is flat. That is to say, it obeys Euclidean geometry. The angles of a triangle, add up to a hundred and eighty degrees. However, one could imagine a race of two dimensional creatures, who could move about on the surface of the Earth, but who couldn't experience the third direction, of up or down. They wouldn't know about the flat three-dimensional space, in which the surface of the Earth lives. For them, space would be curved, and geometry would be non-Euclidean. It would be very difficult to design a living being that could exist in only two dimensions. Food that the creature couldn't digest would have to be spat out the same way it came in. If there were a passage right the way through, like we have, the poor animal would fall apart. So three dimensions, seems to be the minimum for life. But just as one can think of two dimensional beings living on the surface of the Earth, so one could imagine that the three dimensional space in which we live, was the surface of a sphere, in another dimension that we don't see. If the sphere were very large, space would be nearly flat, and Euclidean geometry would be a very good approximation over small distances. But we would notice that Euclidean geometry broke down, over large distances. As an illustration of this, imagine a team of painters, adding paint to the surface of a large ball. As the thickness of the paint layer increased, the surface area would go up. If the ball were in a flat three-dimensional space, one could go on adding paint indefinitely, and the ball would get bigger and bigger. However, if the three-dimensional space, were really the surface of a sphere in another dimension, its volume would be large but finite. As one added more layers of paint, the ball would eventually fill half the space. After that, the painters would find that they were trapped in a region of ever decreasing size, and almost the whole of space, was occupied by the ball, and its layers of paint. 
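The painters' predicament can be put in numbers. On a three-sphere of radius R, the surface being painted at geodesic depth chi has area 4πR² sin²(χ/R): it grows at first, peaks halfway round, and then shrinks back towards a point. A sketch under that standard formula (the notation is mine, not the lecture's):

```python
import math

# Surface area of a sphere of geodesic radius chi inside a 3-sphere of radius R:
# A(chi) = 4 * pi * R^2 * sin(chi / R)^2.  It peaks at chi = pi*R/2 (half way round),
# then shrinks: the painters end up trapped in an ever smaller unpainted region.
R = 1.0

def painted_area(chi):
    return 4 * math.pi * R**2 * math.sin(chi / R) ** 2

for frac in (0.1, 0.25, 0.5, 0.75, 0.9):
    chi = frac * math.pi * R                  # fraction of the way "around" the space
    print(f"depth {frac:.2f} of maximum: area = {painted_area(chi):.3f}")
```

The printed areas rise to a maximum of 4π at the halfway point and fall off symmetrically after it.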
So they would know that they were living in a curved space, and not a flat one. This example shows that one can not deduce the geometry of the world from first principles, as the ancient Greeks thought. Instead, one has to measure the space we live in, and find out its geometry by experiment. However, although a way to describe curved spaces was developed by the German, Georg Friedrich Riemann, in 1854, it remained just a piece of mathematics for sixty years. It could describe curved spaces that existed in the abstract, but there seemed no reason why the physical space we lived in should be curved. This came only in 1915, when Einstein put forward the General Theory of Relativity. General Relativity was a major intellectual revolution that has transformed the way we think about the universe. It is a theory not only of curved space, but of curved or warped time as well. Einstein had realized in 1905 that space and time are intimately connected with each other. One can describe the location of an event by four numbers. Three numbers describe the position of the event. They could be miles north and east of Oxford Circus, and height above sea level. On a larger scale, they could be galactic latitude and longitude, and distance from the center of the galaxy. The fourth number is the time of the event. Thus one can think of space and time together as a four-dimensional entity, called space-time. Each point of space-time is labeled by four numbers that specify its position in space, and in time. Combining space and time into space-time in this way would be rather trivial, if one could disentangle them in a unique way. That is to say, if there was a unique way of defining the time and position of each event. However, in a remarkable paper written in 1905, when he was a clerk in the Swiss patent office, Einstein showed that the time and position at which one thought an event occurred depended on how one was moving.
This meant that time and space were inextricably bound up with each other. The times that different observers would assign to events would agree if the observers were not moving relative to each other. But they would disagree more, the faster their relative speed. So one can ask, how fast does one need to go, in order that the time for one observer should go backwards relative to the time of another observer? The answer is given in the following limerick:

There was a young lady of Wight,
Who traveled much faster than light,
She departed one day,
In a relative way,
And arrived on the previous night.

So all we need for time travel is a space ship that will go faster than light. Unfortunately, in the same paper, Einstein showed that the rocket power needed to accelerate a space ship got greater and greater, the nearer it got to the speed of light. So it would take an infinite amount of power to accelerate past the speed of light. Einstein's paper of 1905 seemed to rule out time travel into the past. It also indicated that space travel to other stars was going to be a very slow and tedious business. If one couldn't go faster than light, the round trip to the nearest star would take at least eight years, and to the center of the galaxy, at least eighty thousand years. If the space ship went very near the speed of light, it might seem to the people on board that the trip to the galactic center had taken only a few years. But that wouldn't be much consolation, if everyone you had known was dead and forgotten thousands of years ago when you got back. That wouldn't be much good for space Westerns. So writers of science fiction had to look for ways to get round this difficulty. In his 1915 paper, Einstein showed that the effects of gravity could be described by supposing that space-time was warped or distorted by the matter and energy in it.
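The claim above, that a near-light-speed trip to the galactic centre might seem to take only a few years on board, follows from time dilation: the on-board (proper) time is the Earth-frame time multiplied by sqrt(1 − v²/c²). A sketch using the lecture's eighty-thousand-year figure; the particular speeds are my own illustrative choices:

```python
import math

# On-board (proper) time for a journey taking t_earth years in Earth's frame,
# at speed v expressed as a fraction of the speed of light:
#   tau = t_earth * sqrt(1 - v^2)
t_earth = 80_000.0   # years: the lecture's round trip to the galactic centre

for v in (0.9, 0.99, 0.9999, 1 - 1e-9):   # illustrative speeds, fractions of c
    tau = t_earth * math.sqrt(1 - v * v)
    print(f"v = {v:.10g} c  ->  on-board time ≈ {tau:,.1f} years")
```

Only at speeds within about a part in a billion of the speed of light does the on-board time shrink to a few years.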
We can actually observe this warping of space-time, produced by the mass of the Sun, in the slight bending of light or radio waves passing close to the Sun. This causes the apparent position of the star or radio source to shift slightly, when the Sun is between the Earth and the source. The shift is very small, about a thousandth of a degree, equivalent to a movement of an inch at a distance of a mile. Nevertheless, it can be measured with great accuracy, and it agrees with the predictions of General Relativity. We have experimental evidence that space and time are warped. The amount of warping in our neighbourhood is very small, because all the gravitational fields in the solar system are weak. However, we know that very strong fields can occur, for example in the Big Bang, or in black holes. So, can space and time be warped enough to meet the demands from science fiction, for things like hyperspace drives, wormholes, or time travel? At first sight, all these seem possible. For example, in 1949, Kurt Goedel found a solution of the field equations of General Relativity which represents a universe in which all the matter was rotating. In this universe, it would be possible to go off in a space ship, and come back before you set out. Goedel was at the Institute for Advanced Study, in Princeton, where Einstein also spent his last years. He was more famous for proving you couldn't prove everything that is true, even in such an apparently simple subject as arithmetic. But what he proved about General Relativity allowing time travel really upset Einstein, who had thought it wouldn't be possible. We now know that Goedel's solution couldn't represent the universe in which we live, because it was not expanding. It also had a fairly large value for a quantity called the cosmological constant, which is generally believed to be zero. However, other apparently more reasonable solutions that allow time travel have since been found.
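The quoted shift, a thousandth of a degree or an inch at a mile, is easy to verify with small-angle arithmetic: an inch seen at a mile subtends 1/63,360 of a radian.

```python
import math

# Small-angle check: an inch at a mile subtends (1/63360) radians,
# since a mile is 63,360 inches.
angle_rad = 1.0 / 63360.0
angle_deg = math.degrees(angle_rad)
print(f"{angle_deg:.6f} degrees")   # → about 0.0009 degrees, roughly a thousandth
```

So the two descriptions of the deflection in the lecture are indeed the same angle.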
A particularly interesting one contains two cosmic strings, moving past each other at a speed very near to, but slightly less than, the speed of light. Cosmic strings are a remarkable idea of theoretical physics, which science fiction writers don't really seem to have caught on to. As their name suggests, they are like string, in that they have length, but a tiny cross section. Actually, they are more like rubber bands, because they are under enormous tension, something like a hundred billion billion billion tons. A cosmic string attached to the Sun would accelerate it from naught to sixty in a thirtieth of a second. Cosmic strings may sound far-fetched, and pure science fiction, but there are good scientific reasons to believe they could have formed in the very early universe, shortly after the Big Bang. Because they are under such great tension, one might have expected them to accelerate to almost the speed of light. What both the Goedel universe and the fast moving cosmic string space-time have in common, is that they start out so distorted and curved that travel into the past was always possible. God might have created such a warped universe, but we have no reason to think that He did. All the evidence is that the universe started out in the Big Bang without the kind of warping needed to allow travel into the past. Since we can't change the way the universe began, the question of whether time travel is possible is one of whether we can subsequently make space-time so warped that one can go back to the past. I think this is an important subject for research, but one has to be careful not to be labeled a crank. If one made a research grant application to work on time travel, it would be dismissed immediately. No government agency could afford to be seen to be spending public money on anything as way out as time travel. Instead, one has to use technical terms, like closed timelike curves, which are code for time travel.
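The "naught to sixty" claim above can be checked to order of magnitude, taking the quoted tension of a hundred billion billion billion tons at face value (read here, as an assumption, as 1e29 metric tonnes of weight):

```python
M_sun = 1.989e30     # mass of the Sun, kg
g = 9.81             # standard gravity, m/s^2

# The quoted tension, converted from 1e29 tonnes-weight to newtons:
tension_N = 1e29 * 1000 * g                    # ~1e33 N

accel = tension_N / M_sun                      # a few hundred m/s^2
sixty_mph = 26.82                              # 60 mph in m/s
t = sixty_mph / accel                          # a few hundredths of a second
```

This lands within a factor of two of the thirtieth of a second quoted in the text, which is as close as a unit-conversion estimate of this kind can be expected to get.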
Although this lecture is partly about time travel, I felt I had to give it the scientifically more respectable title, Space and Time Warps. Yet, it is a very serious question. Since General Relativity can permit time travel, does it allow it in our universe? And if not, why not? Closely related to time travel is the ability to travel rapidly from one position in space, to another. As I said earlier, Einstein showed that it would take an infinite amount of rocket power, to accelerate a space ship to beyond the speed of light. So the only way to get from one side of the galaxy to the other, in a reasonable time, would seem to be if we could warp space-time so much, that we created a little tube or wormhole. This could connect the two sides of the galaxy, and act as a short cut, to get from one to the other and back while your friends were still alive. Such wormholes have been seriously suggested, as being within the capabilities of a future civilization. But if you can travel from one side of the galaxy, to the other, in a week or two, you could go back through another wormhole, and arrive back before you set out. You could even manage to travel back in time with a single wormhole, if its two ends were moving relative to each other. One can show that to create a wormhole, one needs to warp space-time in the opposite way, to that in which normal matter warps it. Ordinary matter curves space-time back on itself, like the surface of the Earth. However, to create a wormhole, one needs matter that warps space-time in the opposite way, like the surface of a saddle. The same is true of any other way of warping space-time to allow travel to the past, if the universe didn't begin so warped, that it allowed time travel. What one would need, would be matter with negative mass, and negative energy density, to make space-time warp in the way required. Energy is rather like money. If you have a positive bank balance, you can distribute it in various ways.
But according to the classical laws that were believed until quite recently, you weren't allowed to have an energy overdraft. So these classical laws would have ruled out us being able to warp the universe, in the way required to allow time travel. However, the classical laws were overthrown by Quantum Theory, which is the other great revolution in our picture of the universe, apart from General Relativity. Quantum Theory is more relaxed, and allows you to have an overdraft on one or two accounts. If only the banks were as accommodating. In other words, Quantum Theory allows the energy density to be negative in some places, provided it is positive in others. The reason Quantum Theory can allow the energy density to be negative, is that it is based on the Uncertainty Principle. This says that certain quantities, like the position and speed of a particle, can't both have well defined values. The more accurately the position of a particle is defined, the greater is the uncertainty in its speed, and vice versa. The uncertainty principle also applies to fields, like the electro-magnetic field, or the gravitational field. It implies that these fields can't be exactly zero, even in what we think of as empty space. For if they were exactly zero, their values would have both a well-defined position at zero, and a well-defined speed, which was also zero. This would be a violation of the uncertainty principle. Instead, the fields would have to have a certain minimum amount of fluctuations. One can interpret these so called vacuum fluctuations, as pairs of particles and anti particles, that suddenly appear together, move apart, and then come back together again, and annihilate each other. These particle anti particle pairs, are said to be virtual, because one can not measure them directly with a particle detector. However, one can observe their effects indirectly. One way of doing this, is by what is called the Casimir effect.
One has two parallel metal plates, a short distance apart. The plates act like mirrors for the virtual particles and anti particles. This means that the region between the plates, is a bit like an organ pipe, and will only admit light waves of certain resonant frequencies. The result is that there are slightly fewer vacuum fluctuations, or virtual particles, between the plates, than outside them, where vacuum fluctuations can have any wavelength. The reduction in the number of virtual particles between the plates means that they don't hit the plates so often, and thus don't exert as much pressure on the plates, as the virtual particles outside. There is thus a slight force pushing the plates together. This force has been measured experimentally. So virtual particles actually exist, and produce real effects. Because there are fewer virtual particles, or vacuum fluctuations, between the plates, they have a lower energy density, than in the region outside. But the energy density of empty space far away from the plates, must be zero. Otherwise it would warp space-time, and the universe wouldn't be nearly flat. So the energy density in the region between the plates, must be negative. We thus have experimental evidence from the bending of light, that space-time is curved, and confirmation from the Casimir effect, that we can warp it in the negative direction. So it might seem possible, that as we advance in science and technology, we might be able to construct a wormhole, or warp space and time in some other way, so as to be able to travel into our past. If this were the case, it would raise a whole host of questions and problems. One of these is: if sometime in the future we learn to travel in time, why hasn't someone come back from the future to tell us how to do it?
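Returning for a moment to the Casimir effect: for ideal parallel plates the attraction follows a standard closed-form result, P = π²ħc/(240 d⁴), which the lecture does not derive. Because of the fourth power, the force grows steeply as the plates approach:

```python
import math

hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure (N/m^2) between ideal parallel plates a
    distance d metres apart: P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi**2 * hbar * c / (240 * d**4)

p_1um = casimir_pressure(1e-6)     # ~1.3e-3 Pa at one micron
p_10nm = casimir_pressure(1e-8)    # ~1.3e5 Pa at ten nanometres
```

The 1/d⁴ dependence is why the effect is negligible at everyday separations but substantial at the nanometre scale, and why careful experiments were needed to measure it.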
Even if there were sound reasons for keeping us in ignorance, human nature being what it is, it is difficult to believe that someone wouldn't show off, and tell us poor benighted peasants the secret of time travel. Of course, some people would claim that we have been visited from the future. They would say that UFOs come from the future, and that governments are engaged in a gigantic conspiracy to cover them up, and keep for themselves the scientific knowledge that these visitors bring. All I can say is, that if governments were hiding something, they are doing a pretty poor job of extracting useful information from the aliens. I'm pretty skeptical of conspiracy theories, believing the cock-up theory is more likely. The reports of sightings of UFOs can't all be caused by extraterrestrials, because they are mutually contradictory. But once you admit that some are mistakes, or hallucinations, isn't it more probable that they all are, than that we are being visited by people from the future, or the other side of the galaxy? If they really want to colonize the Earth, or warn us of some danger, they are being pretty ineffective. A possible way to reconcile time travel with the fact that we don't seem to have had any visitors from the future, would be to say that it can occur only in the future. In this view, one would say space-time in our past was fixed, because we have observed it, and seen that it is not warped enough to allow travel into the past. On the other hand, the future is open. So we might be able to warp it enough to allow time travel. But because we can warp space-time only in the future, we wouldn't be able to travel back to the present time, or earlier. This picture would explain why we haven't been overrun by tourists from the future. But it would still leave plenty of paradoxes. Suppose it were possible to go off in a rocket ship, and come back before you set off.
What would stop you blowing up the rocket on its launch pad, or otherwise preventing yourself from setting out in the first place? There are other versions of this paradox, like going back and killing your parents before you were born, but they are essentially equivalent. There seem to be two possible resolutions. One is what I shall call the consistent histories approach. It says that one has to find a consistent solution of the equations of physics, even if space-time is so warped that it is possible to travel into the past. On this view, you couldn't set out on the rocket ship to travel into the past, unless you had already come back, and failed to blow up the launch pad. It is a consistent picture, but it would imply that we were completely determined: we couldn't change our minds. So much for free will. The other possibility is what I call the alternative histories approach. It has been championed by the physicist David Deutsch, and it seems to have been what Steven Spielberg had in mind when he filmed Back to the Future. In this view, in one alternative history, there would not have been any return from the future before the rocket set off, and so no possibility of it being blown up. But when the traveler returns from the future, he enters another alternative history. In this, the human race makes a tremendous effort to build a space ship, but just before it is due to be launched, a similar space ship appears from the other side of the galaxy, and destroys it. David Deutsch claims support for the alternative histories approach from the sum over histories concept, introduced by the physicist Richard Feynman, who died a few years ago. The idea is that according to Quantum Theory, the universe doesn't have just a unique single history. Instead, the universe has every single possible history, each with its own probability. There must be a possible history in which there is a lasting peace in the Middle East, though maybe the probability is low.
In some histories space-time will be so warped, that objects like rockets will be able to travel into their pasts. But each history is complete and self contained, describing not only the curved space-time, but also the objects in it. So a rocket can not transfer to another alternative history, when it comes round again. It is still in the same history, which has to be self consistent. Thus, despite what Deutsch claims, I think the sum over histories idea, supports the consistent histories hypothesis, rather than the alternative histories idea. It thus seems that we are stuck with the consistent histories picture. However, this need not involve problems with determinism or free will, if the probabilities are very small, for histories in which space-time is so warped, that time travel is possible over a macroscopic region. This is what I call, the Chronology Protection Conjecture: the laws of physics conspire to prevent time travel, on a macroscopic scale. It seems that what happens, is that when space-time gets warped almost enough to allow travel into the past, virtual particles can almost become real particles, following closed trajectories. The density of the virtual particles, and their energy, become very large. This means that the probability of these histories is very low. Thus it seems there may be a Chronology Protection Agency at work, making the world safe for historians. But this subject of space and time warps is still in its infancy. According to string theory, which is our best hope of uniting General Relativity and Quantum Theory, into a Theory of Everything, space-time ought to have ten dimensions, not just the four that we experience. The idea is that six of these ten dimensions are curled up into a space so small, that we don't notice them. On the other hand, the remaining four directions are fairly flat, and are what we call space-time. 
If this picture is correct, it might be possible to arrange that the four flat directions got mixed up with the six highly curved or warped directions. What this would give rise to, we don't yet know. But it opens exciting possibilities. The conclusion of this lecture is that rapid space travel, or travel back in time, can't be ruled out, according to our present understanding. They would cause great logical problems, so let's hope there's a Chronology Protection Law, to prevent people going back, and killing our parents. But science fiction fans need not lose heart. There's hope in string theory. Since we haven't cracked time travel yet, I have run out of time. Thank you for listening.

Does God Play Dice?

This lecture is about whether we can predict the future, or whether it is arbitrary and random. In ancient times, the world must have seemed pretty arbitrary. Disasters such as floods or diseases must have seemed to happen without warning, or apparent reason. Primitive people attributed such natural phenomena, to a pantheon of gods and goddesses, who behaved in a capricious and whimsical way. There was no way to predict what they would do, and the only hope was to win favour by gifts or actions. Many people still partially subscribe to this belief, and try to make a pact with fortune. They offer to do certain things, if only they can get an A-grade for a course, or pass their driving test. Gradually however, people must have noticed certain regularities in the behaviour of nature. These regularities were most obvious, in the motion of the heavenly bodies across the sky. So astronomy was the first science to be developed. It was put on a firm mathematical basis by Newton, more than 300 years ago, and we still use his theory of gravity to predict the motion of almost all celestial bodies. Following the example of astronomy, it was found that other natural phenomena also obeyed definite scientific laws.
This led to the idea of scientific determinism, which seems first to have been publicly expressed by the French scientist, Laplace. I thought I would like to quote you Laplace's actual words, so I asked a friend to track them down. They are in French of course, not that I expect that would be any problem with this audience. But the trouble is, Laplace was rather like Proust, in that he wrote sentences of inordinate length and complexity. So I have decided to paraphrase the quotation. In effect what he said was that if at one time we knew the positions and speeds of all the particles in the universe, then we could calculate their behaviour at any other time, in the past or future. There is a probably apocryphal story, that when Laplace was asked by Napoleon how God fitted into this system, he replied, 'Sire, I have not needed that hypothesis.' I don't think that Laplace was claiming that God didn't exist. It is just that He doesn't intervene, to break the laws of science. That must be the position of every scientist. A scientific law is not a scientific law, if it only holds when some supernatural being decides to let things run, and not intervene. The idea that the state of the universe at one time determines the state at all other times, has been a central tenet of science, ever since Laplace's time. It implies that we can predict the future, in principle at least. In practice, however, our ability to predict the future is severely limited by the complexity of the equations, and the fact that they often have a property called chaos. As those who have seen Jurassic Park will know, this means a tiny disturbance in one place, can cause a major change in another. A butterfly flapping its wings can cause rain in Central Park, New York. The trouble is, it is not repeatable. The next time the butterfly flaps its wings, a host of other things will be different, which will also influence the weather. That is why weather forecasts are so unreliable.
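The butterfly argument can be made concrete with the logistic map, a textbook toy model of chaos (the weather itself is far more complicated, so this is only an illustration): the rule is completely deterministic, yet two starting points differing by one part in a billion soon bear no resemblance to each other.

```python
def logistic_orbit(x0: float, r: float = 4.0, n: int = 50) -> list:
    """Iterate the logistic map x -> r * x * (1 - x) n times."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)   # shifted by one part in a billion

# The tiny initial difference grows roughly exponentially, so after a
# few dozen iterations the two deterministic orbits have diverged.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

Determinism in principle, in other words, is no guarantee of predictability in practice: any error in the initial data, however small, is eventually amplified beyond recognition.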
Despite these practical difficulties, scientific determinism remained the official dogma throughout the 19th century. However, in the 20th century, there have been two developments that show that Laplace's vision, of a complete prediction of the future, can not be realised. The first of these developments was what is called quantum mechanics. This was first put forward in 1900, by the German physicist, Max Planck, as an ad hoc hypothesis to solve an outstanding paradox. According to the classical 19th century ideas, dating back to Laplace, a hot body, like a piece of red hot metal, should give off radiation. It would lose energy in radio waves, infrared, visible light, ultraviolet, x-rays, and gamma rays, all at the same rate. Not only would this mean that we would all die of skin cancer, but also everything in the universe would be at the same temperature, which clearly it isn't. However, Planck showed one could avoid this disaster, if one gave up the idea that the amount of radiation could have just any value, and said instead that radiation came only in packets or quanta of a certain size. It is a bit like saying that you can't buy sugar loose in the supermarket, but only in kilogram bags. The energy in the packets or quanta, is higher for ultraviolet and x-rays, than for infrared or visible light. This means that unless a body is very hot, like the Sun, it will not have enough energy to give off even a single quantum of ultraviolet or x-rays. That is why we don't get sunburn from a cup of coffee. Planck regarded the idea of quanta as just a mathematical trick, and not as having any physical reality, whatever that might mean. However, physicists began to find other behaviour, that could be explained only in terms of quantities having discrete, or quantised, values, rather than continuously variable ones. For example, it was found that elementary particles behaved rather like little tops, spinning about an axis.
But the amount of spin couldn't have just any value. It had to be some multiple of a basic unit. Because this unit is very small, one does not notice that a normal top really slows down in a rapid sequence of discrete steps, rather than as a continuous process. But for tops as small as atoms, the discrete nature of spin is very important. It was some time before people realised the implications of this quantum behaviour for determinism. It was not until 1927 that Werner Heisenberg, another German physicist, pointed out that you couldn't measure both the position, and the speed, of a particle exactly. To see where a particle is, one has to shine light on it. But by Planck's work, one can't use an arbitrarily small amount of light. One has to use at least one quantum. This will disturb the particle, and change its speed in a way that can't be predicted. To measure the position of the particle accurately, you will have to use light of short wavelength, like ultraviolet, x-rays, or gamma rays. But again, by Planck's work, quanta of these forms of light have higher energies than those of visible light. So they will disturb the speed of the particle more. It is a no win situation: the more accurately you try to measure the position of the particle, the less accurately you can know the speed, and vice versa. This is summed up in the Uncertainty Principle that Heisenberg formulated: the uncertainty in the position of a particle, times the uncertainty in its speed, is always greater than a quantity called Planck's constant, divided by the mass of the particle. Laplace's vision, of scientific determinism, involved knowing the positions and speeds of the particles in the universe, at one instant of time. So it was seriously undermined by Heisenberg's Uncertainty Principle. How could one predict the future, when one could not measure accurately both the positions, and the speeds, of particles at the present time?
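Rough numbers make both points above concrete: why short-wavelength quanta kick the measured particle harder, and why the uncertainty limit never bothers us at everyday scales. This sketch uses standard values of Planck's constant and the reduced constant ħ, and ignores factors of 2 in the uncertainty relation:

```python
h = 6.626e-34        # Planck's constant, J s
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s

def quantum_energy(wavelength_m: float) -> float:
    """Energy of one quantum of light: E = h * c / wavelength."""
    return h * c / wavelength_m

# A 100 nm ultraviolet quantum carries 100 times the energy of a
# 10 micron infrared one, so it disturbs the particle far more.
E_uv = quantum_energy(100e-9)
E_ir = quantum_energy(10e-6)

def min_speed_uncertainty(mass_kg: float, dx_m: float) -> float:
    """Heisenberg's relation in the rough form quoted above:
    dx * dv is at least about hbar / m."""
    return hbar / (mass_kg * dx_m)

# Electron pinned down to an atom-sized region (1e-10 m): the speed
# uncertainty is around a thousand kilometres per second.
dv_electron = min_speed_uncertainty(9.11e-31, 1e-10)

# A 1 gram bead located to within a micron: utterly negligible.
dv_bead = min_speed_uncertainty(1e-3, 1e-6)
```

Dividing by the mass is what makes the principle invisible in daily life: the same ħ that dominates an electron's behaviour is swamped by the mass of any macroscopic object.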
No matter how powerful a computer you have, if you put lousy data in, you will get lousy predictions out. Einstein was very unhappy about this apparent randomness in nature. His views were summed up in his famous phrase, 'God does not play dice'. He seemed to have felt that the uncertainty was only provisional: that there was an underlying reality, in which particles would have well defined positions and speeds, and would evolve according to deterministic laws, in the spirit of Laplace. This reality might be known to God, but the quantum nature of light would prevent us seeing it, except through a glass darkly. Einstein's view was what would now be called a hidden variable theory. Hidden variable theories might seem to be the most obvious way to incorporate the Uncertainty Principle into physics. They form the basis of the mental picture of the universe, held by many scientists, and almost all philosophers of science. But these hidden variable theories are wrong. The British physicist, John Bell, who died recently, devised an experimental test that would distinguish hidden variable theories from quantum mechanics. When the experiment was carried out carefully, the results were inconsistent with hidden variables. Thus it seems that even God is bound by the Uncertainty Principle, and can not know both the position, and the speed, of a particle. So God does play dice with the universe. All the evidence points to Him being an inveterate gambler, who throws the dice on every possible occasion. Other scientists were much more ready than Einstein to modify the classical 19th century view of determinism. A new theory, called quantum mechanics, was put forward by Heisenberg, the Austrian Erwin Schroedinger, and the British physicist Paul Dirac. Dirac was my predecessor but one, as the Lucasian Professor in Cambridge. Although quantum mechanics has been around for nearly 70 years, it is still not generally understood or appreciated, even by those that use it to do calculations.
Yet it should concern us all, because it is a completely different picture of the physical universe, and of reality itself. In quantum mechanics, particles don't have well defined positions and speeds. Instead, they are represented by what is called a wave function. This is a number at each point of space. The size of the wave function gives the probability that the particle will be found in that position. The rate, at which the wave function varies from point to point, gives the speed of the particle. One can have a wave function that is very strongly peaked in a small region. This will mean that the uncertainty in the position is small. But the wave function will vary very rapidly near the peak, up on one side, and down on the other. Thus the uncertainty in the speed will be large. Similarly, one can have wave functions where the uncertainty in the speed is small, but the uncertainty in the position is large. The wave function contains all that one can know of the particle, both its position, and its speed. If you know the wave function at one time, then its values at other times are determined by what is called the Schroedinger equation. Thus one still has a kind of determinism, but it is not the sort that Laplace envisaged. Instead of being able to predict the positions and speeds of particles, all we can predict is the wave function. This means that we can predict just half what we could, according to the classical 19th century view. Although quantum mechanics leads to uncertainty, when we try to predict both the position and the speed, it still allows us to predict, with certainty, one combination of position and speed. However, even this degree of certainty, seems to be threatened by more recent developments. The problem arises because gravity can warp space-time so much, that there can be regions that we don't observe. 
Interestingly enough, Laplace himself wrote a paper in 1799 on how some stars could have a gravitational field so strong that light could not escape, but would be dragged back onto the star. He even calculated that a star of the same density as the Earth, but two hundred and fifty times the size of the Sun, would have this property. But although Laplace may not have realised it, the same idea had been put forward 16 years earlier by a Cambridge man, John Michell, in a paper in the Philosophical Transactions of the Royal Society. Both Michell and Laplace thought of light as consisting of particles, rather like cannon balls, that could be slowed down by gravity, and made to fall back on the star. But a famous experiment, carried out by two Americans, Michelson and Morley in 1887, showed that light always travelled at a speed of one hundred and eighty six thousand miles a second, no matter where it came from. How then could gravity slow down light, and make it fall back? This was impossible, according to the then accepted ideas of space and time. But in 1915, Einstein put forward his revolutionary General Theory of Relativity. In this, space and time were no longer separate and independent entities. Instead, they were just different directions in a single object called space-time. This space-time was not flat, but was warped and curved by the matter and energy in it. In order to understand this, consider a sheet of rubber, with a weight placed on it, to represent a star. The weight will form a depression in the rubber, and will cause the sheet near the star to be curved, rather than flat. If one now rolls marbles on the rubber sheet, their paths will be curved, rather than being straight lines. In 1919, a British expedition to West Africa looked at light from distant stars that passed near the Sun during an eclipse. They found that the images of the stars were shifted slightly from their normal positions.
This indicated that the paths of the light from the stars had been bent by the curved space-time near the Sun. General Relativity was confirmed. Consider now placing heavier and heavier, and more and more concentrated weights on the rubber sheet. They will depress the sheet more and more. Eventually, at a critical weight and size, they will make a bottomless hole in the sheet, which particles can fall into, but nothing can get out of. What happens in space-time according to General Relativity is rather similar. A star will curve and distort the space-time near it, more and more, the more massive and more compact the star is. If a massive star, which has burnt up its nuclear fuel, cools and shrinks below a critical size, it will quite literally make a bottomless hole in space-time, that light can't get out of. Such objects were given the name Black Holes, by the American physicist John Wheeler, who was one of the first to recognise their importance, and the problems they pose. The name caught on quickly. To Americans, it suggested something dark and mysterious, while to the British, there was the added resonance of the Black Hole of Calcutta. But the French, being French, saw a more risqué meaning. For years, they resisted the name trou noir, claiming it was obscene. But that was a bit like trying to stand against le weekend, and other franglais. In the end, they had to give in. Who can resist a name that is such a winner? We now have observations that point to black holes in a number of objects, from binary star systems to the centres of galaxies. So it is now generally accepted that black holes exist. But, apart from their potential for science fiction, what is their significance for determinism? The answer lies in a bumper sticker that I used to have on the door of my office: Black Holes are Out of Sight.
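Both the old corpuscular estimate and the modern critical size can be put in numbers. On the Newtonian picture of Michell and Laplace, the escape speed of a uniform sphere grows linearly with its radius at fixed density, and for a star of roughly the Earth's mean density (the density Laplace's published figure assumed) it reaches the speed of light at around 250 solar radii. In General Relativity the critical size is the Schwarzschild radius, r_s = 2GM/c². A sketch, with standard constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg
R_sun = 6.957e8      # radius of the Sun, m
rho_earth = 5514.0   # Earth's mean density, kg/m^3

# Newtonian dark star: v_escape = sqrt(2GM/R) with M = (4/3)*pi*rho*R^3
# grows linearly with R, and equals c at R = c * sqrt(3 / (8*pi*G*rho)).
R_dark = c * math.sqrt(3 / (8 * math.pi * G * rho_earth))
dark_star_ratio = R_dark / R_sun       # ~250 solar radii

def schwarzschild_radius(mass_kg: float) -> float:
    """The critical size in General Relativity: r_s = 2 * G * M / c^2."""
    return 2 * G * mass_kg / c**2

r_s_sun = schwarzschild_radius(M_sun)  # ~3 km: the Sun squeezed inside
                                       # this radius would be a black hole
```

It is a well-known coincidence that the naive Newtonian condition v_escape = c gives the same critical radius as General Relativity, although the physics behind the two results is entirely different.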
Not only do the particles and unlucky astronauts that fall into a black hole, never come out again, but also the information that they carry, is lost forever, at least from our region of the universe. You can throw television sets, diamond rings, or even your worst enemies into a black hole, and all the black hole will remember, is the total mass, and the state of rotation. John Wheeler called this, 'A Black Hole Has No Hair.' To the French, this just confirmed their suspicions. As long as it was thought that black holes would continue to exist forever, this loss of information didn't seem to matter too much. One could say that the information still existed inside the black hole. It is just that one can't tell what it is, from the outside. However, the situation changed, when I discovered that black holes aren't completely black. Quantum mechanics causes them to send out particles and radiation at a steady rate. This result came as a total surprise to me, and everyone else. But with hindsight, it should have been obvious. What we think of as empty space is not really empty, but it is filled with pairs of particles and anti particles. These appear together at some point of space and time, move apart, and then come together and annihilate each other. These particles and anti particles occur because a field, such as the fields that carry light and gravity, can't be exactly zero. That would mean that the value of the field, would have both an exact position (at zero), and an exact speed or rate of change (also zero). This would be against the Uncertainty Principle, just as a particle can't have both an exact position, and an exact speed. So all fields must have what are called, vacuum fluctuations. Because of the quantum behaviour of nature, one can interpret these vacuum fluctuations, in terms of particles and anti particles, as I have described. These pairs of particles occur for all varieties of elementary particles. 
They are called virtual particles, because they occur even in the vacuum, and they can't be directly measured by particle detectors. However, the indirect effects of virtual particles, or vacuum fluctuations, have been observed in a number of experiments, and their existence confirmed. If there is a black hole around, one member of a particle anti particle pair may fall into the hole, leaving the other member without a partner, with which to annihilate. The forsaken particle may fall into the hole as well, but it may also escape to a large distance from the hole, where it will become a real particle, that can be measured by a particle detector. To someone a long way from the black hole, it will appear to have been emitted by the hole. This explanation of how black holes ain't so black, makes it clear that the emission will depend on the size of the black hole, and the rate at which it is rotating. But because black holes have no hair, in Wheeler's phrase, the radiation will be otherwise independent of what went into the hole. It doesn't matter whether you throw television sets, diamond rings, or your worst enemies, into a black hole. What comes back out will be the same. So what has all this to do with determinism, which is what this lecture is supposed to be about? What it shows is that there are many initial states, containing television sets, diamond rings, and even people, that evolve to the same final state, at least outside the black hole. But in Laplace's picture of determinism, there was a one-to-one correspondence between initial states, and final states. If you knew the state of the universe at some time in the past, you could predict it in the future. Similarly, if you knew it in the future, you could calculate what it must have been in the past. The advent of quantum theory in the 1920s reduced the amount one could predict by half, but it still left a one-to-one correspondence between the states of the universe at different times.
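The dependence of the emission on the hole's size can be made quantitative with the standard black-hole temperature formula, T = ħc³/(8πGMk_B), which is not derived in the lecture; small holes are hot and evaporate quickly, while a solar-mass hole is far colder than the sky around it:

```python
import math

hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # mass of the Sun, kg

def hawking_temperature(mass_kg: float) -> float:
    """Black hole temperature T = hbar * c^3 / (8 * pi * G * M * k_B):
    the smaller the hole, the hotter it radiates."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T_solar = hawking_temperature(M_sun)   # ~6e-8 K, utterly negligible
T_small = hawking_temperature(1e12)    # a billion-tonne hole: ~1e11 K
```

The inverse dependence on mass is why the evaporation runs away: as the hole loses mass it gets hotter, radiates faster, and shrinks faster still.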
If one knew the wave function at one time, one could calculate it at any other time. With black holes, however, the situation is rather different. One will end up with the same state outside the hole, whatever one threw in, provided it has the same mass. Thus there is not a one to one correspondence between the initial state, and the final state outside the black hole. There will be a one to one correspondence between the initial state, and the final state both outside, and inside, the black hole. But the important point is that the emission of particles, and radiation by the black hole, will cause the hole to lose mass, and get smaller. Eventually, it seems the black hole will get down to zero mass, and will disappear altogether. What then will happen to all the objects that fell into the hole, and all the people that either jumped in, or were pushed? They can't come out again, because there isn't enough mass or energy left in the black hole, to send them out again. They may pass into another universe, but that is not something that will make any difference, to those of us prudent enough not to jump into a black hole. Even the information, about what fell into the hole, could not come out again when the hole finally disappears. Information can not be carried free, as those of you with phone bills will know. Information requires energy to carry it, and there won't be enough energy left when the black hole disappears. What all this means is, that information will be lost from our region of the universe, when black holes are formed, and then evaporate. This loss of information will mean that we can predict even less than we thought, on the basis of quantum theory. In quantum theory, one may not be able to predict with certainty, both the position, and the speed of a particle. But there is still one combination of position and speed that can be predicted. In the case of a black hole, this definite prediction involves both members of a particle pair. 
But we can measure only the particle that comes out. There's no way even in principle that we can measure the particle that falls into the hole. So, for all we can tell, it could be in any state. This means we can not make any definite prediction, about the particle that escapes from the hole. We can calculate the probability that the particle has this or that position, or speed. But there's no combination of the position and speed of just one particle that we can definitely predict, because the speed and position will depend on the other particle, which we don't observe. Thus it seems Einstein was doubly wrong when he said, God does not play dice. Not only does God definitely play dice, but He sometimes confuses us by throwing them where they can't be seen. Many scientists are like Einstein, in that they have a deep emotional attachment to determinism. Unlike Einstein, they have accepted the reduction in our ability to predict, that quantum theory brought about. But that was far enough. They didn't like the further reduction, which black holes seemed to imply. They have therefore claimed that information is not really lost down black holes. But they have not managed to find any mechanism that would return the information. It is just a pious hope that the universe is deterministic, in the way that Laplace thought. I feel these scientists have not learnt the lesson of history. The universe does not behave according to our pre-conceived ideas. It continues to surprise us. One might not think it mattered very much, if determinism broke down near black holes. We are almost certainly at least a few light years, from a black hole of any size. But, the Uncertainty Principle implies that every region of space should be full of tiny virtual black holes, which appear and disappear again. One would think that particles and information could fall into these black holes, and be lost. 
Because these virtual black holes are so small, a hundred billion billion times smaller than the nucleus of an atom, the rate at which information would be lost would be very low. That is why the laws of science appear deterministic, to a very good approximation. But in extreme conditions, like in the early universe, or in high energy particle collisions, there could be significant loss of information. This would lead to unpredictability, in the evolution of the universe. To sum up, what I have been talking about, is whether the universe evolves in an arbitrary way, or whether it is deterministic. The classical view, put forward by Laplace, was that the future motion of particles was completely determined, if one knew their positions and speeds at one time. This view had to be modified, when Heisenberg put forward his Uncertainty Principle, which said that one could not know both the position, and the speed, accurately. However, it was still possible to predict one combination of position and speed. But even this limited predictability disappeared, when the effects of black holes were taken into account. The loss of particles and information down black holes meant that the particles that came out were random. One could calculate probabilities, but one could not make any definite predictions. Thus, the future of the universe is not completely determined by the laws of science, and its present state, as Laplace thought. God still has a few tricks up his sleeve. Life in the Universe In this talk, I would like to speculate a little, on the development of life in the universe, and in particular, the development of intelligent life. I shall take this to include the human race, even though much of its behaviour throughout history, has been pretty stupid, and not calculated to aid the survival of the species. Two questions I shall discuss are, 'What is the probability of life existing elsewhere in the universe?' and, 'How may life develop in the future?' 
It is a matter of common experience, that things get more disordered and chaotic with time. This observation can be elevated to the status of a law, the so-called Second Law of Thermodynamics. This says that the total amount of disorder, or entropy, in the universe, always increases with time. However, the Law refers only to the total amount of disorder. The order in one body can increase, provided that the amount of disorder in its surroundings increases by a greater amount. This is what happens in a living being. One can define Life to be an ordered system that can sustain itself against the tendency to disorder, and can reproduce itself. That is, it can make similar, but independent, ordered systems. To do these things, the system must convert energy in some ordered form, like food, sunlight, or electric power, into disordered energy, in the form of heat. In this way, the system can satisfy the requirement that the total amount of disorder increases, while, at the same time, increasing the order in itself and its offspring. A living being usually has two elements: a set of instructions that tell the system how to sustain and reproduce itself, and a mechanism to carry out the instructions. In biology, these two parts are called genes and metabolism. But it is worth emphasising that there need be nothing biological about them. For example, a computer virus is a program that will make copies of itself in the memory of a computer, and will transfer itself to other computers. Thus it fits the definition of a living system, that I have given. Like a biological virus, it is a rather degenerate form, because it contains only instructions or genes, and doesn't have any metabolism of its own. Instead, it reprograms the metabolism of the host computer, or cell. Some people have questioned whether viruses should count as life, because they are parasites, and can not exist independently of their hosts. 
But then most forms of life, ourselves included, are parasites, in that they feed off and depend for their survival on other forms of life. I think computer viruses should count as life. Maybe it says something about human nature, that the only form of life we have created so far is purely destructive. Talk about creating life in our own image. I shall return to electronic forms of life later on. What we normally think of as 'life' is based on chains of carbon atoms, with a few other atoms, such as nitrogen or phosphorus. One can speculate that one might have life with some other chemical basis, such as silicon, but carbon seems the most favourable case, because it has the richest chemistry. That carbon atoms should exist at all, with the properties that they have, requires a fine adjustment of physical constants, such as the QCD scale, the electric charge, and even the dimension of space-time. If these constants had significantly different values, either the nucleus of the carbon atom would not be stable, or the electrons would collapse in on the nucleus. At first sight, it seems remarkable that the universe is so finely tuned. Maybe this is evidence, that the universe was specially designed to produce the human race. However, one has to be careful about such arguments, because of what is known as the Anthropic Principle. This is based on the self-evident truth, that if the universe had not been suitable for life, we wouldn't be asking why it is so finely adjusted. One can apply the Anthropic Principle, in either its Strong, or Weak, versions. For the Strong Anthropic Principle, one supposes that there are many different universes, each with different values of the physical constants. In a small number, the values will allow the existence of objects like carbon atoms, which can act as the building blocks of living systems. Since we must live in one of these universes, we should not be surprised that the physical constants are finely tuned. 
If they weren't, we wouldn't be here. The strong form of the Anthropic Principle is not very satisfactory. What operational meaning can one give to the existence of all those other universes? And if they are separate from our own universe, how can what happens in them, affect our universe? Instead, I shall adopt what is known as the Weak Anthropic Principle. That is, I shall take the values of the physical constants, as given. But I shall see what conclusions can be drawn, from the fact that life exists on this planet, at this stage in the history of the universe. There was no carbon, when the universe began in the Big Bang, about 15 billion years ago. It was so hot, that all the matter would have been in the form of particles, called protons and neutrons. There would initially have been equal numbers of protons and neutrons. However, as the universe expanded, it would have cooled. About a minute after the Big Bang, the temperature would have fallen to about a billion degrees, about a hundred times the temperature in the Sun. At this temperature, the neutrons would start to decay into more protons. If this had been all that happened, all the matter in the universe would have ended up as the simplest element, hydrogen, whose nucleus consists of a single proton. However, some of the neutrons collided with protons, and stuck together to form the next simplest element, helium, whose nucleus consists of two protons and two neutrons. But no heavier elements, like carbon or oxygen, would have been formed in the early universe. It is difficult to imagine that one could build a living system, out of just hydrogen and helium, and anyway the early universe was still far too hot for atoms to combine into molecules. The universe would have continued to expand, and cool. But some regions would have had slightly higher densities than others. The gravitational attraction of the extra matter in those regions, would slow down their expansion, and eventually stop it. 
Instead, they would collapse to form galaxies and stars, starting from about two billion years after the Big Bang. Some of the early stars would have been more massive than our Sun. They would have been hotter than the Sun, and would have burnt the original hydrogen and helium, into heavier elements, such as carbon, oxygen, and iron. This could have taken only a few hundred million years. After that, some of the stars would have exploded as supernovas, and scattered the heavy elements back into space, to form the raw material for later generations of stars. Other stars are too far away, for us to be able to see directly, if they have planets going round them. But certain stars, called pulsars, give off regular pulses of radio waves. We observe a slight variation in the rate of some pulsars, and this is interpreted as indicating that they are being disturbed, by having Earth sized planets going round them. Planets going round pulsars are unlikely to have life, because any living beings would have been killed, in the supernova explosion that led to the star becoming a pulsar. But, the fact that several pulsars are observed to have planets suggests that a reasonable fraction of the hundred billion stars in our galaxy may also have planets. The necessary planetary conditions for our form of life may therefore have existed from about four billion years after the Big Bang. Our solar system was formed about four and a half billion years ago, or about ten billion years after the Big Bang, from gas contaminated with the remains of earlier stars. The Earth was formed largely out of the heavier elements, including carbon and oxygen. Somehow, some of these atoms came to be arranged in the form of molecules of DNA. This has the famous double helix form, discovered by Crick and Watson, in a hut on the New Museum site in Cambridge. Linking the two chains in the helix, are pairs of bases. There are four types of base: adenine, cytosine, guanine, and thymine. 
I'm afraid my speech synthesiser is not very good, at pronouncing their names. Obviously, it was not designed for molecular biologists. An adenine on one chain is always matched with a thymine on the other chain, and a guanine with a cytosine. Thus the sequence of bases on one chain defines a unique, complementary sequence, on the other chain. The two chains can then separate and each act as a template to build further chains. Thus DNA molecules can reproduce the genetic information, coded in their sequences of bases. Sections of the sequence can also be used to make proteins and other chemicals, which can carry out the instructions, coded in the sequence, and assemble the raw material for DNA to reproduce itself. We do not know how DNA molecules first appeared. The chances against a DNA molecule arising by random fluctuations are very small. Some people have therefore suggested that life came to Earth from elsewhere, and that there are seeds of life floating round in the galaxy. However, it seems unlikely that DNA could survive for long in the radiation in space. And even if it could, it would not really help explain the origin of life, because the time available since the formation of carbon is only just over double the age of the Earth. One possibility is that the formation of something like DNA, which could reproduce itself, is extremely unlikely. However, in a universe with a very large, or infinite, number of stars, one would expect it to occur in a few stellar systems, but they would be very widely separated. The fact that life happened to occur on Earth, is not however surprising or unlikely. It is just an application of the Weak Anthropic Principle: if life had appeared instead on another planet, we would be asking why it had occurred there. If the appearance of life on a given planet was very unlikely, one might have expected it to take a long time. 
More precisely, one might have expected life to appear just in time for the subsequent evolution to intelligent beings, like us, to have occurred before the cut off, provided by the lifetime of the Sun. This is about ten billion years, after which the Sun will swell up and engulf the Earth. An intelligent form of life, might have mastered space travel, and be able to escape to another star. But otherwise, life on Earth would be doomed. There is fossil evidence, that there was some form of life on Earth, about three and a half billion years ago. This may have been only 500 million years after the Earth became stable and cool enough, for life to develop. But life could have taken 7 billion years to develop, and still have left time to evolve to beings like us, who could ask about the origin of life. If the probability of life developing on a given planet, is very small, why did it happen on Earth, in about one fourteenth of the time available? The early appearance of life on Earth suggests that there's a good chance of the spontaneous generation of life, in suitable conditions. Maybe there was some simpler form of organisation, which built up DNA. Once DNA appeared, it would have been so successful, that it might have completely replaced the earlier forms. We don't know what these earlier forms would have been. One possibility is RNA. This is like DNA, but rather simpler, and without the double helix structure. Short lengths of RNA, could reproduce themselves like DNA, and might eventually build up to DNA. One can not make nucleic acids in the laboratory, from non-living material, let alone RNA. But given 500 million years, and oceans covering most of the Earth, there might be a reasonable probability of RNA, being made by chance. As DNA reproduced itself, there would have been random errors. Many of these errors would have been harmful, and would have died out. Some would have been neutral. That is, they would not have affected the function of the gene. 
Such errors would contribute to a gradual genetic drift, which seems to occur in all populations. And a few errors would have been favourable to the survival of the species. These would have been chosen by Darwinian natural selection. The process of biological evolution was very slow at first. It took two and a half billion years, to evolve from the earliest cells to multi-cell animals, and another billion years to evolve through fish and reptiles, to mammals. But then evolution seemed to have speeded up. It only took about a hundred million years, to develop from the early mammals to us. The reason is, fish contain most of the important human organs, and mammals, essentially all of them. All that was required to evolve from early mammals, like lemurs, to humans, was a bit of fine-tuning. But with the human race, evolution reached a critical stage, comparable in importance with the development of DNA. This was the development of language, and particularly written language. It meant that information can be passed on, from generation to generation, other than genetically, through DNA. There has been no detectable change in human DNA, brought about by biological evolution, in the ten thousand years of recorded history. But the amount of knowledge handed on from generation to generation has grown enormously. The DNA in human beings contains about three billion bases. However, much of the information coded in this sequence, is redundant, or is inactive. So the total amount of useful information in our genes, is probably something like a hundred million bits. One bit of information is the answer to a yes-or-no question. By contrast, a paperback novel might contain two million bits of information. So a human is equivalent to 50 Mills and Boon romances. A major national library can contain about five million books, or about ten trillion bits. So the amount of information handed down in books, is a hundred thousand times as much as in DNA. 
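The comparisons above are simple arithmetic on the lecture's round numbers, and can be checked directly (all figures below are the lecture's, not independent estimates):

```python
# Information bookkeeping, using the round numbers quoted in the lecture.
useful_genome_bits = 100_000_000   # useful information in the human genes
novel_bits = 2_000_000             # information in a paperback novel
library_books = 5_000_000          # books in a major national library

novels_per_human = useful_genome_bits // novel_bits   # a genome ~ 50 novels
library_bits = library_books * novel_bits             # ten trillion bits
library_vs_dna = library_bits // useful_genome_bits   # books hold 100,000x more
```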
Even more important, is the fact that the information in books, can be changed, and updated, much more rapidly. It has taken us several million years to evolve from the apes. During that time, the useful information in our DNA, has probably changed by only a few million bits. So the rate of biological evolution in humans, is about a bit a year. By contrast, there are about 50,000 new books published in the English language each year, containing of the order of a hundred billion bits of information. Of course, the great majority of this information is garbage, and no use to any form of life. But, even so, the rate at which useful information can be added is millions, if not billions, of times higher than with DNA. This has meant that we have entered a new phase of evolution. At first, evolution proceeded by natural selection, from random mutations. This Darwinian phase, lasted about three and a half billion years, and produced us, beings who developed language, to exchange information. But in the last ten thousand years or so, we have been in what might be called, an external transmission phase. In this, the internal record of information, handed down to succeeding generations in DNA, has not changed significantly. But the external record, in books, and other long lasting forms of storage, has grown enormously. Some people would use the term, evolution, only for the internally transmitted genetic material, and would object to it being applied to information handed down externally. But I think that is too narrow a view. We are more than just our genes. We may be no stronger, or inherently more intelligent, than our cave man ancestors. But what distinguishes us from them, is the knowledge that we have accumulated over the last ten thousand years, and particularly, over the last three hundred. I think it is legitimate to take a broader view, and include externally transmitted information, as well as DNA, in the evolution of the human race. 
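The two rates can be checked the same way. The "several million years" since the apes is taken here as three million, an assumed round figure used only for the arithmetic:

```python
# Rate of information change in the two channels, per the lecture's figures.
dna_bits_changed = 3_000_000     # change in useful DNA information since the apes
years_since_apes = 3_000_000     # assumed round figure for "several million years"
dna_rate = dna_bits_changed / years_since_apes   # about one bit per year

books_per_year = 50_000          # new English-language books each year
bits_per_book = 2_000_000        # information in one book
book_rate = books_per_year * bits_per_book       # a hundred billion bits per year
```

Even if only a tiny fraction of that hundred billion bits per year is useful, the external channel still outpaces DNA by many orders of magnitude.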
The time scale for evolution, in the external transmission period, is the time scale for accumulation of information. This used to be hundreds, or even thousands, of years. But now this time scale has shrunk to about 50 years, or less. On the other hand, the brains with which we process this information have evolved only on the Darwinian time scale, of hundreds of thousands of years. This is beginning to cause problems. In the 18th century, there was said to be a man who had read every book written. But nowadays, if you read one book a day, it would take you about 15,000 years to read through the books in a national library. By which time, many more books would have been written. This has meant that no one person can be the master of more than a small corner of human knowledge. People have to specialise, in narrower and narrower fields. This is likely to be a major limitation in the future. We certainly can not continue, for long, with the exponential rate of growth of knowledge that we have had in the last three hundred years. An even greater limitation and danger for future generations, is that we still have the instincts, and in particular, the aggressive impulses, that we had in cave man days. Aggression, in the form of subjugating or killing other men, and taking their women and food, has had definite survival advantage, up to the present time. But now it could destroy the entire human race, and much of the rest of life on Earth. A nuclear war, is still the most immediate danger, but there are others, such as the release of a genetically engineered virus, or the greenhouse effect becoming unstable. There is no time, to wait for Darwinian evolution, to make us more intelligent, and better natured. But we are now entering a new phase, of what might be called, self designed evolution, in which we will be able to change and improve our DNA. There is a project now on, to map the entire sequence of human DNA. 
It will cost a few billion dollars, but that is chicken feed, for a project of this importance. Once we have read the book of life, we will start writing in corrections. At first, these changes will be confined to the repair of genetic defects, like cystic fibrosis, and muscular dystrophy. These are controlled by single genes, and so are fairly easy to identify, and correct. Other qualities, such as intelligence, are probably controlled by a large number of genes. It will be much more difficult to find them, and work out the relations between them. Nevertheless, I am sure that during the next century, people will discover how to modify both intelligence, and instincts like aggression. Laws will be passed, against genetic engineering with humans. But some people won't be able to resist the temptation, to improve human characteristics, such as size of memory, resistance to disease, and length of life. Once such super humans appear, there are going to be major political problems, with the unimproved humans, who won't be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings, who are improving themselves at an ever-increasing rate. If this race manages to redesign itself, to reduce or eliminate the risk of self-destruction, it will probably spread out, and colonise other planets and stars. However, long distance space travel, will be difficult for chemically based life forms, like DNA. The natural lifetime for such beings is short, compared to the travel time. According to the theory of relativity, nothing can travel faster than light. So the round trip to the nearest star would take at least 8 years, and to the centre of the galaxy, about a hundred thousand years. In science fiction, they overcome this difficulty, by space warps, or travel through extra dimensions. But I don't think these will ever be possible, no matter how intelligent life becomes. 
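The travel times follow from the distances alone. The 4.2 light-year figure for the nearest star (Proxima Centauri) is an assumption supplied here, not stated in the lecture:

```python
# Minimum round-trip times, with light speed as the absolute floor relativity sets.
nearest_star_ly = 4.2                    # assumed distance to Proxima Centauri
round_trip_years = 2 * nearest_star_ly   # at least 8.4 years, matching "at least 8"

galaxy_diameter_ly = 100_000             # rough size of the Milky Way
# A light-speed trip on the scale of the galaxy therefore takes of the order of
# a hundred thousand years, the lecture's figure for reaching the centre.
```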
In the theory of relativity, if one can travel faster than light, one can also travel back in time. This would lead to problems with people going back, and changing the past. One would also expect to have seen large numbers of tourists from the future, curious to look at our quaint, old-fashioned ways. It might be possible to use genetic engineering, to make DNA based life survive indefinitely, or at least for a hundred thousand years. But an easier way, which is almost within our capabilities already, would be to send machines. These could be designed to last long enough for interstellar travel. When they arrived at a new star, they could land on a suitable planet, and mine material to produce more machines, which could be sent on to yet more stars. These machines would be a new form of life, based on mechanical and electronic components, rather than macromolecules. They could eventually replace DNA based life, just as DNA may have replaced an earlier form of life. This mechanical life could also be self-designing. Thus it seems that the external transmission period of evolution, will have been just a very short interlude, between the Darwinian phase, and a biological, or mechanical, self design phase. This is shown on this next diagram, which is not to scale, because there's no way one can show a period of ten thousand years, on the same scale as billions of years. How long the self-design phase will last is open to question. It may be unstable, and life may destroy itself, or get into a dead end. If it does not, it should be able to survive the death of the Sun, in about 5 billion years, by moving to planets around other stars. Most stars will have burnt out in another 15 billion years or so, and the universe will be approaching a state of complete disorder, according to the Second Law of Thermodynamics. 
But Freeman Dyson has shown that, despite this, life could adapt to the ever-decreasing supply of ordered energy, and therefore could, in principle, continue forever. What are the chances that we will encounter some alien form of life, as we explore the galaxy? If the argument about the time scale for the appearance of life on Earth is correct, there ought to be many other stars, whose planets have life on them. Some of these stellar systems could have formed 5 billion years before the Earth. So why is the galaxy not crawling with self designing mechanical or biological life forms? Why hasn't the Earth been visited, and even colonised? I discount suggestions that UFOs contain beings from outer space. I think any visits by aliens, would be much more obvious, and probably also, much more unpleasant. What is the explanation of why we have not been visited? One possibility is that the argument, about the appearance of life on Earth, is wrong. Maybe the probability of life spontaneously appearing is so low, that Earth is the only planet in the galaxy, or in the observable universe, in which it happened. Another possibility is that there was a reasonable probability of forming self reproducing systems, like cells, but that most of these forms of life did not evolve intelligence. We are used to thinking of intelligent life, as an inevitable consequence of evolution. But the Anthropic Principle should warn us to be wary of such arguments. It is more likely that evolution is a random process, with intelligence as only one of a large number of possible outcomes. It is not clear that intelligence has any long-term survival value. Bacteria, and other single cell organisms, will live on, if all other life on Earth is wiped out by our actions. There is support for the view that intelligence, was an unlikely development for life on Earth, from the chronology of evolution. 
It took a very long time, two and a half billion years, to go from single cells to multi-cell beings, which are a necessary precursor to intelligence. This is a good fraction of the total time available, before the Sun blows up. So it would be consistent with the hypothesis, that the probability for life to develop intelligence, is low. In this case, we might expect to find many other life forms in the galaxy, but we are unlikely to find intelligent life. Another way, in which life could fail to develop to an intelligent stage, would be if an asteroid or comet were to collide with the planet. We have just observed the collision of a comet, Shoemaker-Levy 9, with Jupiter. It produced a series of enormous fireballs. It is thought the collision of a rather smaller body with the Earth, about 70 million years ago, was responsible for the extinction of the dinosaurs. A few small early mammals survived, but anything as large as a human, would have almost certainly been wiped out. It is difficult to say how often such collisions occur, but a reasonable guess might be every twenty million years, on average. If this figure is correct, it would mean that intelligent life on Earth has developed only because of the lucky chance that there have been no major collisions in the last 70 million years. Other planets in the galaxy, on which life has developed, may not have had a long enough collision free period to evolve intelligent beings. A third possibility is that there is a reasonable probability for life to form, and to evolve to intelligent beings, in the external transmission phase. But at that point, the system becomes unstable, and the intelligent life destroys itself. This would be a very pessimistic conclusion. I very much hope it isn't true. I prefer a fourth possibility: there are other forms of intelligent life out there, but that we have been overlooked. There used to be a project called SETI, the search for extra-terrestrial intelligence. 
It involved scanning the radio frequencies, to see if we could pick up signals from alien civilisations. I thought this project was worth supporting, though it was cancelled due to a lack of funds. But we should be wary of answering back, until we have developed a bit further. Meeting a more advanced civilisation, at our present stage, might be a bit like the original inhabitants of America meeting Columbus. I don't think they were better off for it. That is all I have to say. Thank you for listening. ADVANCED LECTURES Inflation: An Open and Shut Case (April '98) Slides and audio for this talk can be downloaded from the ITP at UCSB. This talk will be based on joint work with Neil Turok, at Cambridge. Neil has always been interested in what might be called, alternative cosmology. He pushed the idea that topological defects like cosmic strings or textures, were the origin of the large scale structure of the universe. And he was a proponent of what is called, open inflation. This is the idea that the universe is infinitely large, and of low density, despite having been through a period of exponential expansion, in the very early stages. My opinion was, that these were all nice ideas, but that nature probably hadn't chosen to use any of them. I included open inflation in that list, because I believed strongly that the universe came into being, at a finite size, and I felt that implied that the universe now, was still of finite size, or closed. However, after Neil gave a seminar on open inflation in Cambridge, we got talking. We realized it was possible for the universe to come into existence, at a finite size, but nevertheless, be either a finite, or an infinitely large universe now. My talk will be about this idea, and new developments that have occurred since then. One of these, which is so recent that it is not yet fully worked out, is that it seems there is an observational signature, of the kind of inflation Neil and I are proposing. 
The present measurements are not sensitive enough to see this effect, but it should be possible to test for it, in the observations the Planck satellite will make. Another recent development, is that observations of supernovas, have suggested that the universe may have a small cosmological constant, at the present time. Even before these observations, Neil and I had realized that if there was a four form gauge field, one could invoke anthropic arguments to make it cancel the cosmological constant, that one would expect from symmetry breaking. But the anthropic argument would not require it to cancel exactly. So there could be a small residual cosmological constant. This is exciting. I shall describe the observational evidence later. As you probably know, the universe is remarkably isotropic on a large scale. That is to say, it looks the same in all directions, if one goes beyond such local irregularities as the Milky Way, and the Local Group of galaxies. By far the most accurate measurement of the isotropy of the universe, is the faint background of microwave radiation, first discovered in 1965. At the present time at least, the universe is transparent to microwaves, in directions out of the plane of our galaxy. Thus the microwave background, must have propagated to us, from distances of the order of the Hubble radius, or greater. It should therefore give a sensitive measurement, of any anisotropy in the universe. The remarkable fact is, that the microwave background is the same in every direction, to a high degree of accuracy. It wasn't until 1982, that differences between different directions were detected, at the level of one part in a thousand, with a dipole pattern on the sky. However, this could be interpreted as a consequence of our galaxy's motion through the universe, which blue shifted the microwave radiation in one direction, and red shifted it in the opposite direction. It need not represent any intrinsic anisotropy in the universe. 
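An editor's back-of-envelope sketch of the Doppler reading of the dipole: a one-part-in-a-thousand anisotropy (the level quoted above), interpreted as a frequency shift Delta T over T of order v/c, corresponds to a motion of a few hundred kilometres per second. The numbers here are only this simple estimate, not a fit to the data.

```python
# Sketch: read the dipole anisotropy as a Doppler shift, Delta T / T ~ v/c.
c_km_s = 299_792.458   # speed of light in km/s
dipole = 1.0e-3        # Delta T / T, the one part in a thousand quoted above
v_km_s = dipole * c_km_s
print(f"implied motion through the microwave background: ~{v_km_s:.0f} km/s")
```

So the dipole alone pins our peculiar velocity at roughly 300 km/s, without saying anything about the universe itself.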
It was not until 1992, that tiny fluctuations on angular scales of 10 degrees, were detected by the Cosmic Background Explorer satellite, COBE. Since then, similar fluctuations have been found on smaller angular scales. The shape of the spectrum of fluctuations against angular scale, is still rather uncertain, but it is clear that the general size of the departures from uniformity, is only one part in ten to the five. This uniformity of the microwave background in different directions, was very difficult to understand. It seems the microwave background, is the last remnant of the radiation, that filled the hot early universe. What we observe, would have propagated freely to us, from a time of last scattering, when the universe was a thousandth of the size now. But according to the accepted hot big bang theory, the radiation coming from directions on the sky more than a degree apart, would be coming from regions of the early universe, that hadn't been in communication since the big bang. It was therefore truly remarkable, that the microwaves we observe in different directions, are the same to one part in ten to the five. How did different regions in the early universe, know to be at almost exactly the same temperature? It is a bit like handing in home work. If the whole class produces exactly the same answers, you can be sure they have communicated with each other. But according to the hot big bang model, there wasn't time since the big bang, for signals to get from one region to another. So how did all the regions, come up with the same temperature for the microwaves? If we assume that the universe is roughly homogeneous and isotropic, it can be described by one of the Friedmann Robertson Walker models. These are characterized by a scale factor S, which gives the distance between two neighbouring points, in the expanding universe. There are three kinds of Friedmann model, according to the sign of k, the curvature of the surfaces of constant time. 
If k = +1, the surfaces of constant time, are three spheres, and the universe is closed and finite in space. If k = minus 1, the surfaces have negative curvature, like a saddle, and the universe is infinite in spatial extent. The third possibility, k = 0, a spatially flat universe, is of measure zero, but it is an important limiting case. Because the universe is expanding, the scale factor, S, is increasing with time. The second derivative of S, is given by the Einstein equation, in terms of the energy density and pressure, of matter in the universe, and the cosmological constant, lambda. For the moment, I will take lambda to be zero. For normal matter, both the energy density and pressure, will be positive. Thus the expansion of the universe, will be slowing down. In particular, for a universe dominated by radiation, like the early stages of the hot big bang model, the scale factor will go like t to the half. In such a model, one can ask how far one can see, before one sees right back to the big bang. It is easy to work out, that this is just the integral of one over the scale factor. For the hot big bang model, this integral converges. This means that a point in an early hot big bang universe, could have communicated only with a small region round it. Why then did it have almost exactly the same temperature, as regions far away? A possible explanation, was provided by the theory of inflation, which was put forward independently in the Soviet union, and the west, around 1980. The idea was to make regions able to communicate, by changing the expansion of the early universe, so that S double dot was positive, rather than negative. In other words, so that the expansion of the universe was being accelerated, rather than slowed down by gravity. As you can see from the Einstein equations, such accelerating expansion, or inflation, as it was called, required either negative energy, or negative pressure. One gets in a lot of trouble, if one allows negative energy. 
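The convergence argument just made can be written out explicitly (an editor's sketch, with constant prefactors suppressed): the distance a signal can have travelled since the big bang is controlled by the integral of one over the scale factor.

```latex
% Comoving distance a light signal can travel from the big bang to time t:
\eta(t) = \int_0^{t} \frac{dt'}{S(t')} .
% Radiation-dominated hot big bang, S(t) \propto t^{1/2}:
\int_0^{t} t'^{-1/2}\, dt' = 2\,t^{1/2}
  \quad \text{(convergent: only a small region is in causal contact).}
% Accelerated, inflationary expansion, S(t) \propto e^{Ht}:
\int_{-\infty}^{t} e^{-Ht'}\, dt' \;\to\; \infty
  \quad \text{(divergent: arbitrarily large regions can communicate).}
```

The horizon problem is just the finiteness of the first integral; inflation removes it by making the integral diverge.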
One would get runaway creation of particle pairs, one with positive energy, and the other with negative. But there is no reason to rule out negative pressure. That is just tension, which is a very common condition in the modern world. The original idea for inflation, was that in some way, the universe got trapped in what was called, a false vacuum state. A false vacuum state, is a Lorentz invariant meta-stable state, that has more energy than the true vacuum, which is taken to have zero energy density. Because a false vacuum is Lorentz invariant, its energy momentum tensor must be proportional to the metric. Since the false vacuum has positive energy density, the coefficient of proportionality must be negative. This means that the pressure in the false vacuum, is minus the energy density. The Einstein equations, then imply that the scale factor, increases exponentially with time. In such a universe, the integral of one over the scale factor, diverges as one goes back in time. This means that different regions in the early universe, could have communicated with each other, and come to equilibrium at a common state, explaining why the microwaves, look the same in different directions. The original model of inflation, which came to be known as old inflation, had various problems. How did the universe get into a false vacuum state in the first place, and how did it get out again? Various modifications were proposed, that went under the names of new inflation, or extended inflation. I won't describe them, because I have got into trouble in the past, about who should have credit for what, and because I now consider them irrelevant. As Linde first pointed out, it is not necessary for the universe to be in a false vacuum, to get inflation. A scalar field with a potential V, will have the energy momentum tensor shown on the screen. 
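The chain from Lorentz invariance to exponential expansion, described in the paragraph above, can be sketched in the standard Friedmann equations (an editor's summary; signs follow the usual conventions):

```latex
% Lorentz invariance forces the false-vacuum stress tensor to be
% proportional to the metric:
T_{\mu\nu} = -\rho_v\, g_{\mu\nu}
  \quad\Longrightarrow\quad p = -\rho_v .
% The acceleration equation then gives repulsion rather than attraction:
\frac{\ddot S}{S} = -\frac{4\pi G}{3}\,(\rho + 3p)
  = \frac{8\pi G}{3}\,\rho_v > 0 ,
% with the exponentially expanding solution
S(t) \propto e^{Ht}, \qquad H = \sqrt{\frac{8\pi G}{3}\,\rho_v} .
```

For normal matter, with positive energy density and pressure, the same acceleration equation gives S double dot negative, which is why inflation needs the negative pressure of a vacuum-like state.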
If the field is nearly constant in a region, the gradient terms will be small, and the energy momentum tensor, will be minus V, times the metric. This is just what one needs for inflation. In the false vacuum case, the scalar field sits in a local minimum of the potential, V. In that case, the field equation allows the scalar field, to remain constant in space and time. If the scalar field is not at a local minimum, it can not remain constant in time, even if it is initially constant in space. However, Linde pointed out that if the potential is not too steep, the expansion of the universe, will slow down the rate at which the field rolls down the potential, to the minimum. The gradient terms in the energy momentum tensor, will remain small, and the scale factor will increase almost exponentially. One can get inflation with any reasonable potential V, even if it doesn't have local minima, corresponding to false vacua. The work that Neil and I have done, is a logical extension of Andrei's idea. But I'm not sure if Andrei agrees with it, though I think he's coming round. Andrei's idea removed the need to believe that the universe began in a false vacuum. However, one still needed to explain, why the field should have been nearly constant over a region, with a value that was not at the minimum of the potential. To do this, one has to have a theory of the initial conditions of the universe. There are three main candidates. They are, the so called pre-big bang scenario, the tunneling hypothesis, and the no boundary proposal. In my opinion, the pre-big bang scenario is misguided, and without predictive power. And I feel the tunneling hypothesis, is either not well defined, or gives the wrong answers. But then I'm biased, for it was Jim Hartle and I, who were responsible for the no boundary proposal. This says that the quantum state of the universe, is defined by a Euclidean path integral over compact metrics, without boundary. 
One can picture these metrics, as being like the surface of the Earth, with degrees of latitude, playing the role of imaginary time. One starts at the north pole, with the universe as a single point. As one goes south, the spatial size of the universe, increases like the lengths of the circles of latitude. The spatial size of the universe, reaches a maximum size at the equator, and then shrinks again to a point at the south pole. Of course, spacetime is four dimensional, not two dimensional, like the surface of the Earth, but the idea is much the same. I shall go through it in detail, because it is basic to the work I'm going to describe. The simplest compact four dimensional metric that might represent the universe, is the four sphere. One can give its metric in terms of coordinates, sigma, chi, theta and phi. One can think of sigma, as an imaginary time coordinate, and chi, theta and phi, as coordinates on a three sphere, that represents the spatial size of the universe. Again, one starts at the north pole, sigma = 0, with a universe of zero spatial size, and expands up to a maximum size at the equator, sigma = pi, over 2H. But we live in a universe with a Lorentzian metric, like Minkowski space, not a Euclidean, positive definite metric. One therefore has to analytically continue, the Euclidean metrics used in the path integral, for the no boundary proposal. There are several ways one can analytically continue, the metric of the four sphere, to a Lorentzian spacetime metric. The most obvious is to follow the Euclidean time variable, sigma, from the north pole to the equator, and then go in the imaginary sigma direction, and call that real Lorentzian time, t. Instead of the size of the three spheres going as the sine of H sigma, they now go as the cosh of H t. This gives a closed universe, that expands exponentially with real time. At late times, the expansion will change from being exponential, to being slowed down by matter in the normal way. 
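The continuation just described can be written down explicitly for the round four sphere (an editor's sketch in the coordinates named above):

```latex
% Euclidean round four-sphere of radius 1/H:
ds^2 = d\sigma^2 + \frac{1}{H^2}\sin^2(H\sigma)
       \left[d\chi^2 + \sin^2\chi\,(d\theta^2 + \sin^2\theta\, d\phi^2)\right] .
% Continue through the equator, \sigma = \pi/2H + i\,t:
\sin(H\sigma) = \cos(iHt) = \cosh(Ht), \qquad d\sigma^2 = -dt^2 ,
% giving the closed, exponentially expanding Lorentzian de Sitter universe
ds^2 = -dt^2 + \frac{1}{H^2}\cosh^2(Ht)\, d\Omega_3^2 .
```

The three spheres of the Euclidean geometry become the spatial sections of a closed universe whose radius grows as cosh of H t.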
This departure of the scale factor from a cosh behavior, will occur because the original Euclidean four sphere, was not perfectly round. But the universe would still be closed, however deformed the four sphere. For nearly 15 years, I believed that the no boundary proposal, predicted that the universe was spatially closed. I also believed that the cosmological constant was zero, because it seemed unreasonable to suppose that it was less than the observational limit of 10 to the minus 120 Planck units, unless it were exactly zero. But the Einstein equations, relate the energy density in the universe, plus lambda, to the rate of expansion, and the curvature, k, of the surfaces of constant time. Define omega matter and omega lambda, to be the density and lambda, divided by the critical value. If the universe is closed, that is, k = +1, omega matter plus omega lambda, must be greater than one. Observations of luminous matter, like stars and gas clouds give an omega matter of about 0 point 02. We know that galaxies and clusters of galaxies, must contain non luminous, or dark matter, but the best estimate of this, is that it contributes at most 0 point 2 of the critical density. Still, Eddington once said, if your theory doesn't agree with the observations, don't worry. The observations are probably wrong. But if your theory doesn't agree with the second law of thermodynamics, forget it. I firmly believed in the no boundary proposal, and I thought it implied that the universe had to be closed. Since a closed universe, is not incompatible with the second law of thermodynamics, I was sure the observers had missed something, and there really was enough matter to close the universe. At that time, I didn't take seriously the possibility of a small cosmological constant. The observations do not yet indicate that the universe is definitely open, or that lambda is non zero, but it is beginning to look like one or the other, if not both. 
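The relation between the density parameters and the curvature, invoked above, follows from the Friedmann constraint equation (an editor's sketch, standard conventions):

```latex
% Friedmann constraint equation:
H^2 \equiv \left(\frac{\dot S}{S}\right)^2
  = \frac{8\pi G}{3}\,\rho - \frac{k}{S^2} + \frac{\Lambda}{3} .
% Divide by H^2 and write
% \Omega_m = 8\pi G\rho / 3H^2, \quad \Omega_\Lambda = \Lambda / 3H^2:
\Omega_m + \Omega_\Lambda - 1 = \frac{k}{S^2 H^2} ,
% so k = +1 (a closed universe) exactly when
% \Omega_m + \Omega_\Lambda > 1.
```

This is why the observed shortfall in omega matter, unless made up by a cosmological constant, points towards an open universe.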
I won't go through all the observations, but shall just show what I consider to be the most significant pieces of evidence. The first is the distribution of large scale inhomogeneities in the universe. On the very largest scales, this can be measured by fluctuations in the microwave background, and on smaller scales by the galaxy galaxy correlation function. One can then try and fit these observations, with the predictions of inflationary theory. If one assumes the universe is filled with cold dark matter, the predicted spectrum of irregularities, depends on a quantity gamma. This is the product of omega matter, with the Hubble constant, or rate of expansion, in units of a hundred kilometers per second, per Megaparsec. (Astronomers use funny units). It is generally believed that the Hubble constant, is somewhere between 50 and 100 of those funny units. Thus if omega matter is one, gamma must be at least 0 point 5. As you can see, a gamma of 0 point 5, would predict much less irregularity on large angles, than is observed. One can get a reasonable fit to the observations, with a gamma of 0 point 2. If omega matter were one, this would imply a Hubble constant of only 20. As a theorist, I would be happy with such a figure, because it would make the universe older, and remove a possible conflict with the ages of some stars. But the observers claim the Hubble constant, has to be in the range, 50 to 100. This would imply that omega matter is at most 0 point 4. Thus dynamical measurements, give us a vertical strip in the omega matter, omega lambda plane. One can obtain further limits in this plane, from observations of supernovas. Type Ia supernovas, are standard candles. That is, the total energy in the explosion, is always the same, within a factor close to one. One can thus use their observed brightness, as a distance measurement, and compare it with their red shifts. This gives the limits shown on the diagram, for which I'm grateful to Ned Wright and Sean Carroll. 
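The arithmetic in the gamma argument above is worth laying out explicitly; a minimal sketch using only the numbers quoted in the lecture (gamma equals omega matter times h, with h the Hubble constant in units of 100 kilometers per second per Megaparsec):

```python
# Sketch of the shape-parameter arithmetic: Gamma = Omega_matter * h.
gamma_fit = 0.2        # value of Gamma that fits the observed clustering

# If Omega_matter were 1, fitting Gamma = 0.2 would force h down to 0.2,
# i.e. a Hubble constant of only 20 km/s/Mpc.
h_needed = gamma_fit / 1.0
print(f"H0 if Omega_matter = 1: {100 * h_needed:.0f} km/s/Mpc")

# Conversely, if the observers are right that h is at least 0.5
# (H0 >= 50), then Omega_matter can be at most Gamma / h = 0.4.
omega_max = gamma_fit / 0.5
print(f"Omega_matter at most {omega_max:.1f} if H0 >= 50")
```

Either the Hubble constant is far lower than the observers allow, or omega matter falls well short of the critical density: the vertical strip in the omega matter, omega lambda plane.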
The yellow, red and green areas represent the formal errors, and the large pink area, other possible errors. Also shown in blue, are the limits set by the position of a peak in the angular spectrum, of the variations of the microwave background. As you can see, the observations suggest that the universe is close to the open closed divide, but with a non zero lambda. Despite these indications of a low density lambda universe, I continued to believe that the cosmological constant was zero, and the no boundary proposal, implied that the universe must be closed. Then in conversations with Neil Turok, I realized there was another way of looking at the no boundary universe, that made it appear open. One starts with the point that Andrei Linde made, that inflation doesn't need a false vacuum, a local minimum of the potential. But if the scalar field is not at a stationary point of the potential, then it can not be constant on an instanton, a Euclidean solution of the field equations. In turn, this implies that the instanton can't be a perfectly round four sphere. A perfectly round four sphere, would have the symmetry group, O(5). But with a non constant scalar field, the largest symmetry group that an instanton can have, is O(4). In other words, the instanton is a deformed four sphere. One can write the metric of an O(4) instanton, in terms of a function, b of sigma. Here b is the radius of a three sphere of constant distance, sigma, from the north pole of the instanton. If the instanton were a perfectly round four sphere, b would be a sine function of sigma. It would have one zero at the north pole, and a second at the south pole, which would also be a regular point of the geometry. However, if the scalar field at the north pole, is not at a stationary point of the potential, it will be almost constant over most of the four sphere, but will diverge near the south pole. This behavior is independent of the precise shape of the potential. 
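The O(4) ansatz just described can be made concrete (an editor's sketch; conventions for the constraint vary between authors):

```latex
% O(4) symmetric Euclidean instanton, with b the three-sphere radius:
ds^2 = d\sigma^2 + b(\sigma)^2\, d\Omega_3^2 .
% Euclidean field equations for the scalar and for b:
\varphi'' + 3\,\frac{b'}{b}\,\varphi' = V'(\varphi), \qquad
b'^2 = 1 + \frac{8\pi G}{3}\, b^2\left(\tfrac{1}{2}\varphi'^2 - V\right) .
% For \varphi constant at a stationary point of V, the solution is
% b = \sin(H\sigma)/H with H^2 = 8\pi G V/3: the round four sphere.
```

A scalar field that is not at a stationary point must roll, so phi prime is non zero, and b is dragged away from the pure sine function: the deformed four sphere of the text.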
The non constant scalar field, will cause the instanton not to be a perfectly round four sphere, and in fact there will be a singularity at the south pole. But it will be a very mild singularity, and the Euclidean action of the instanton will be finite. This Euclidean instanton, has been described as the universe beginning as a pea. In fact, a pea is quite a good image for a deformed sphere. Its size of a few thousand Planck lengths, makes it a very petty pea. But the mass of the matter it contains, is about half a gram, which is about right for a pea. I actually discovered this pea instanton in 1983, but I thought it could describe the birth of closed universes only. To get a closed universe, one starts with sigma = 0 at the north pole, and proceeds to the equator, or rather the value of sigma at which the radius, b, of the three sphere is maximum. One then analytically continues sigma in the imaginary direction, as Lorentzian time. As I described earlier, this gives a closed universe with a scale factor that initially goes like cosh t. The scalar field, will have a small imaginary part, but that can be corrected by giving the initial value of the scalar field at the north pole, a small imaginary part. According to the no boundary proposal, the relative probability of such a closed universe, is e to minus twice the action of the part of the pea instanton, between the north pole, and the equator. Notice that as this part, doesn't contain the singularity at the south pole, there is no ambiguity about the action of a singular metric. The action of this part of the instanton, is negative, and is more negative, the bigger the pea. Thus the probability of the pea, is bigger, the bigger the pea. The negative sign of the action, may look counter intuitive, but it leads to physically reasonable consequences. As I said, I thought the no boundary proposal, implied that the universe had to be spatially closed, and finite in size. 
But Neil Turok and I, realized his ideas on open inflation, could be fitted in with the no boundary proposal. The universe would still be closed and finite, in one way of looking at it. But in another, it would appear open and infinite. Let's go back to the metric for the pea instanton, and analytically continue it in a different way. As before, one analytically continues the Euclidean latitude coordinate, in the imaginary direction, to become a Lorentzian time, t. The difference is that one goes in the imaginary sigma direction at the north pole, rather than the equator. One also continues the coordinate, chi, in the imaginary direction, as a coordinate, psi. This changes the three sphere, into a hyperbolic space. One therefore gets an exponentially expanding open universe. One can think of this open universe, as a bubble in a closed, de Sitter like universe. In this way, it is similar to the single bubble inflationary universes, that have been proposed by a number of authors. The difference is, the previous models all required carefully adjusted potentials, with false vacuum local minima. But the pea instanton, will work for any reasonable potential. The price one pays for a general potential, is a singularity at the south pole. In the analytically continued Lorentzian spacetime, this singularity would be time like, and naked. One might think that this naked singularity, would mean one couldn't evaluate the action of the instanton, or of perturbations about it. This would mean that one couldn't predict the quantum fluctuations, or what would happen in the universe. However, the singularity at the south pole, the stalk of the pea, is so mild, that the actions of the instanton, and of perturbations around it, are well defined. This means one can determine the relative probabilities of the instanton, and of perturbations around it. 
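The continuation through the north pole can be written down for the round case (an editor's sketch; the pea instanton itself is a deformation of this):

```latex
% Continue at the north pole instead: \sigma = i\,t, \ \chi = i\,\psi.
% For the round case, b = \sin(H\sigma)/H, the spatial part becomes
b^2\left[d\chi^2 + \sin^2\chi\, d\Omega_2^2\right]
  \;\longrightarrow\;
  \frac{\sinh^2(Ht)}{H^2}\left[d\psi^2 + \sinh^2\psi\, d\Omega_2^2\right] ,
% i.e. the three sphere turns into an infinite hyperbolic space, and
ds^2 = -dt^2 + \frac{\sinh^2(Ht)}{H^2}\, dH_3^2
% is an open universe expanding exponentially at late times.
```

The same compact Euclidean geometry thus continues either to a closed universe, through the equator, or to an open one, through the pole.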
The action of the instanton itself, is negative, but the effect of perturbations around the instanton, is to increase the action, that is, to make the action less negative. According to the no boundary proposal, the probability of a field configuration, is e to minus its action. Thus perturbations around the instanton, have a lower probability, than the unperturbed background. This means that quantum fluctuations are suppressed, the bigger the fluctuation, as one would hope. On the other hand, according to the tunneling hypothesis, favored by Vilenkin and Linde, probabilities are proportional to e to the plus action. This would mean that quantum fluctuations would be enhanced, the bigger the fluctuation. There is no way this could lead to a sensible description of the universe. Linde therefore proposes to take e to the plus action, for the probability of the background universe, but e to the minus action, for the perturbations. However, there is no invariant way, in which one can divide the action, into a background part, and a part due to fluctuations. So Linde's proposal, does not seem well defined in general. By contrast, the no boundary proposal, is well defined. Its predictions may be surprising, but they are not obviously wrong. To recapitulate. A general potential, without false vacua, or local minima, leads to the pea instanton. This can be analytically continued, to either an open, or a closed universe. The no boundary proposal, then allows one to calculate the relative probabilities of different backgrounds, and the quantum fluctuations about them. There isn't just a single pea instanton, but a whole family of them, labeled by different values of the scalar field at the north pole. The higher the value of the potential at the north pole, the smaller the instanton, and the less negative the value of the action. Thus the no boundary proposal, predicts that large instantons, are more probable than small ones. 
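For the perfectly round limiting case, the statement that bigger instantons have more negative action can be made quantitative (an editor's sketch; the deformed pea instanton shifts the numbers but not the trend):

```latex
% Euclidean action of a round four sphere, with the potential at the
% north pole acting as an effective cosmological constant
% \Lambda = 8\pi G V = 3H^2:
I = -\frac{1}{16\pi G}\int_{S^4} (R - 2\Lambda)\sqrt{g}\; d^4x
  = -\frac{3\pi}{G\Lambda} = -\frac{\pi}{G H^2} .
% The no boundary weighting P \propto e^{-I} = e^{+\pi/GH^2} is therefore
% larger for smaller H, that is, for a lower potential and a bigger pea.
```

A lower starting value of the potential means a bigger, more probable pea, which is exactly what creates the problem discussed next.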
This is a problem, because large instantons, will lead to a shorter period of exponential expansion or inflation, than small ones. In the closed universe case, a short period of inflation, would mean the universe would recollapse before it reached the present size and density. On the other hand, an open universe with a short period of inflation, would become almost empty early on. Clearly, the universe we live in, didn't collapse early on, or become almost empty. So we have to take account of the anthropic principle, that if the universe hadn't been suitable for our existence, we wouldn't be asking why it is, the way it is. Many physicists don't like the anthropic principle, but I think some version of it is essential, in quantum cosmology. M theory, or whatever the ultimate theory is, seems to allow a very large number of possible solutions, and compactifications. One has to have some criterion, for discarding most of them. Otherwise, why isn't the universe, eleven dimensional Minkowski space? The approach Neil Turok and I took, was to invoke the weakest version of the anthropic principle. We adopted Bayesian statistics. In this, one starts with an a priori probability distribution, and then modifies it in light of one's knowledge of the system. In this case, we took the a priori distribution, to be the e to the minus action, predicted by the no boundary proposal. We then modified it, by the probability that the model contained galaxies, which are presumably a necessary condition, for the existence of intelligent observers. An open universe, has an infinite spatial volume. Thus the total number of galaxies in an open universe, would always be infinite, no matter how low the probability of finding a galaxy, in a given comoving volume. One therefore can not weight the a priori probability, given by the no boundary proposal, by the total number of galaxies in the universe. 
Instead, we weighted by the comoving density of galaxies, predicted from the growth of quantum fluctuations, about the pea instanton. This gives a modified probability distribution for omega, the present density, divided by the critical density. For the open models, this probability distribution, is sharply peaked at an omega of about zero point zero one. This is lower than is compatible with the observations, but it is not such a bad miss. As far as I'm aware, this is the first attempt to predict a value of omega for an open universe, rather than fine tune a false vacuum potential, to obtain a value in the range indicated by observation. The anthropic arguments we have used, are fairly crude, and could be refined. But the best hope of getting a more realistic omega, seems to be to include other fields. Eleven dimensional super gravity, which is the best candidate we have for a theory of everything, has a three form potential, with a four form field strength. When dimensionally reduced to four dimensions, this can act as a cosmological constant. For a real four form in four dimensions, the contribution to the cosmological constant is negative. It can therefore cancel the positive contribution to the cosmological constant, that must arise because super symmetry is broken, in the universe we live in. Indeed, super symmetry breaking, is a necessary condition for life. But galaxies will not form, unless the total cosmological constant, is almost zero. Thus the anthropic principle fixes the value of the four form field strength, which is a free parameter of the theory, so it almost cancels the positive contribution from symmetry breaking. But it need not cancel it exactly. The anthropic requirement, can probably be satisfied by any omega lambda between about minus one point five, and plus one point five, with a fairly flat probability distribution. This is consistent with the observations. 
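The Bayesian weighting described above, an a priori distribution multiplied by a galaxy-formation weight, can be caricatured in a few lines. Both functions below are hypothetical stand-ins, chosen only so that the toy peaks at a low omega; they are not the real no boundary or galaxy calculations, and only the procedure is the point.

```python
import math

def prior(omega):
    # hypothetical stand-in for e^(-action): strongly favours low density
    return math.exp(1.0 / omega)

def galaxy_weight(omega):
    # hypothetical stand-in for the comoving galaxy density:
    # galaxy formation is suppressed in very low density universes
    return math.exp(-0.005 / omega**2)

# Posterior = prior times selection weight, over a grid of omega values.
omegas = [i / 1000 for i in range(5, 200)]   # omega from 0.005 to 0.199
posterior = {w: prior(w) * galaxy_weight(w) for w in omegas}
peak = max(posterior, key=posterior.get)
print(f"toy posterior peaks at omega ~ {peak:.2f}")
```

With these stand-ins the toy posterior peaks at an omega of about 0.01, the same order as the result quoted above, but the numbers themselves carry no physical content.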
My student, Harvey Reall and I, are now working on an eleven dimensional supergravity version of the pea instanton. One gets a reduced action in four dimensions with a four form, and two scalar fields, which describe the size, and the squashing of a seven sphere. The squashing scalar field, phi, has a potential with a minimum at the round metric, and a maximum at the squashed sphere with an Einstein metric. One can get a pea instanton, by starting phi on the exponential wall on the right. This would produce an inflationary universe, in which the squashing ran down to the round seven sphere. The scalar field that represents the size of the seven sphere, has a potential that looks unstable. However, if one takes into account the back reaction of the scalar field on the four form, the effective potential becomes stable. This looks good, but the potentials are too steep to give enough inflation. Maybe if we can include the dynamical effects of symmetry breaking, we can get something more reasonable. The aim is to find a description of the origin of the universe, on the basis of fundamental theory. Assuming that one can find a model that predicts a reasonable omega, how can we test it by observation? The best way is by observing the spectrum of fluctuations, in the microwave background. This is a very clean measurement of the quantum fluctuations, about the initial instanton. However, there is an important difference between our instanton, and previous proposals for open inflation. They have all assumed false vacuum potentials, and have used the Coleman-De Luccia instanton, which is non singular. However, our instanton has a singularity at the south pole. There has been a lot of discussion of this singularity. In the Lorentzian analytically continued spacetime, the singularity is time like and naked. People have worried about this singularity, because it seemed to make the spacetime non predictable. Anything could come out of the singularity. 
However, perturbations of the Euclidean instanton, have finite action if and only if they obey a Dirichlet boundary condition at the singularity. Perturbation modes that don't obey this boundary condition, will have infinite action, and will be suppressed. Support for this boundary condition, has come from the work of Garriga, who has shown that in some cases at least, the singularity in the instanton, is just an artefact of Kaluza Klein reduction from higher dimensional spacetimes. In these situations, perturbations would obey this Dirichlet boundary condition. When one analytically continues to Lorentzian spacetime, the Dirichlet boundary condition, implies that perturbations reflect at the time like singularity. This has a significant effect on the two point correlation function of the perturbations. I show preliminary calculations that Neil and a student have made, for the case of omega equals zero point three. The first shows the two point correlation function of the microwave background, as a function of angle, for our instanton, and for false vacuum open inflation. The difference, which is plotted on a magnified scale, is like a step function at 30 degrees, the angle subtended by the curvature radius, on the surface of last scattering. The next graph, shows the power spectrum of this correlation function. You see it has small oscillations, that come from the Fourier transform, of the step function. The present observations of the microwave fluctuations, are not sensitive enough to detect this effect. But it may be possible with the new observations that will be coming in, from the MAP satellite in two thousand and one, and the Planck satellite in two thousand and six. Thus the no boundary proposal, and the pea instanton, are real science. They can be falsified by observation. I will finish on that note. Gravitational Entropy (June '98) The slides for this talk make up a PowerPoint presentation. They can be downloaded as a zip file. 
The first indication of a connection between black holes and entropy, came in 1970, with my discovery that the area of the horizon of a black hole, always increased. There was an obvious analogy with the Second Law of Thermodynamics, which states that entropy always increases. But it was Jacob Bekenstein, who took the bold step, of suggesting the area actually was the physical entropy, and that it counted the internal states of the black hole. I was very much against this idea at first, because I felt it was a misuse of my horizon area result. If a black hole had a physical entropy, it would also have a physical temperature. If a black hole was in contact with thermal radiation, it would absorb some of the radiation, but it would not give off any radiation, since by definition, a black hole was a region from which nothing could escape. If the thermal radiation was at a lower temperature than the black hole, the loss of entropy down the black hole, would be greater than the increase of horizon area. This would be a violation of the generalized Second Law, that Bekenstein proposed. With hindsight, this should have suggested that black holes radiate. But no one, including Bekenstein and myself, thought anything could get out of a non rotating black hole. On the other hand, Penrose had shown that energy could be extracted from a rotating black hole, by a classical process. This indicated that there should be a spontaneous emission in the superradiant modes, that would be the quantum counterpart of the Penrose process. In trying to understand this emission in the superradiant modes, in terms of quantum field theory in curved spacetime, I stumbled across the fact that even non rotating black holes, would radiate. Moreover, the radiation would be exactly what was required, to prevent a violation of the generalized second law. Bekenstein was right after all, but in a way he hadn't anticipated. 
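The resolution sketched above can be summarised in the standard formulas (an editor's recap, in units with G, c, h bar and Boltzmann's constant set to one):

```latex
% A black hole with surface gravity \kappa radiates at the temperature
T = \frac{\kappa}{2\pi}, \qquad
% and carries the Bekenstein entropy
S_{BH} = \frac{A}{4} .
% For a Schwarzschild hole of mass M, \kappa = 1/4M, so T = 1/8\pi M.
% The generalized Second Law that the radiation rescues is then
\delta\!\left(S_{\text{outside}} + \frac{A}{4}\right) \geq 0 .
```

The emission supplies exactly the entropy flux needed to keep the sum of the outside entropy and a quarter of the horizon area from decreasing.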
In this talk, I want to discuss the deep reason, for the existence of such gravitational entropy. In my opinion, it is that general relativity, and extensions like supergravity, allow spacetime to have more than one topology. By topology, I mean topology in the Euclidean regime. The topology of a Lorentzian spacetime can change with time, only if there is some pathology, such as a singularity, or closed timelike curves. In either of these cases, one would expect the theory to break down. It was Paul Dirac, my predecessor at Cambridge, who first realized that time evolution in quantum theory, could be formulated as a unitary transformation, generated by the Hamiltonian. This worked well in non relativistic quantum theory, in which the Hamiltonian was just the total energy. It also worked in special relativity, where the Hamiltonian, could be taken to be the time component of the four momentum. But there were problems in general relativity, where neither energy, nor linear momentum, are local quantities. Energy and momentum can only be defined globally, and only for suitable asymptotic behavior. Dirac himself, developed the Hamiltonian treatment for general relativity. In d dimensions, one can write the metric in the ADM form. That is, one introduces a time coordinate, tau, which I take to be Euclidean, and shift and lapse functions. The Hamiltonian, can then be expressed as an integral, over a surface of constant time. However, the difference from special relativity, was that all the terms in this volume integral, vanished for configurations that satisfied the field equations. I must admit that when I came across the Hamiltonian formulation as a student, I thought, why should one bother working out the volume terms, since they are zero. The answer is, of course, that although the volume terms are zero on solutions, they have non zero Dirac brackets, which become commutators in the quantum theory.
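For reference, the Euclidean ADM decomposition alluded to here can be sketched, in one common (though not unique) choice of notation, as:

```latex
ds^2 = N^2\, d\tau^2 + g_{ij}\,\bigl(dx^i + N^i\, d\tau\bigr)\bigl(dx^j + N^j\, d\tau\bigr),
\qquad
H = \int_{\Sigma_\tau} d^{\,d-1}x \,\bigl( N\,\mathcal{H} + N^i\,\mathcal{H}_i \bigr) \;+\; \text{surface terms}
```

Here $N$ is the lapse, $N^i$ the shift, and $\mathcal{H}$, $\mathcal{H}_i$ are the Hamiltonian and momentum constraints, which vanish on solutions of the field equations. That is exactly the point made above: the volume integral vanishes on shell, so the numerical value of the Hamiltonian must come from the surface terms.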
Because the volume contributions to the Hamiltonian are zero, its numerical value has to come from surface integrals, at the boundaries of the volume. Such surface terms arise in any gauge theory, including Maxwell theory, when one integrates the constraint equations by parts. In the gravitational case, the Hamiltonian also gets a contribution to the surface term, from the trace K surface term in the action, that is required to cancel the variation in the Einstein Hilbert action, capital R. The surface term in general, makes both the action, and the Hamiltonian, infinite. It is therefore sensible to consider only the difference between the action or Hamiltonian, and those of some reference background solution, that the solutions approach at infinity. This reference background acts as the vacuum, for that sector of the quantum theory. It is normally taken to be flat space, or anti de Sitter space, but I will consider other possibilities. In asymptotically flat space, if the surfaces of constant tau at infinity, are related by just a time translation, the shift is zero, and the lapse is one. The Hamiltonian surface term at infinity, is then just the mass, plus the electric charge, Q, times the electrostatic potential, Phi. In a topologically trivial spacetime, one could make the electrostatic potential zero, by a gauge transformation. However, this will not be possible in non trivial topologies. As I will explain later, magnetic charges do not contribute to the surface term in the Hamiltonian. If the surfaces of constant tau at infinity, are related by a time translation, plus a rotation in phi with angular velocity omega, the surface term at infinity picks up an extra omega J term. Normally, one considers solutions, which can be foliated by surfaces of constant time, that have boundaries only at infinity, in asymptotically flat space.
In such situations, the total Hamiltonian, that is the volume integral, plus the surface terms, will generate unitary transformations, that map the Hilbert space of initial states, into the final ones. All the quantum states concerned, can be taken to be pure states. There are no mixed states, or gravitational entropy. However, solutions like black holes, have a Euclidean geometry with non trivial topology. This means that they can't be foliated by a family of time surfaces, that agree with the usual notion of time. If you try, the family of surfaces will necessarily have intersections, or other singularities, on surfaces of codimension two or more. In fact Euclidean black holes, are the simplest examples. So I will show how the breakdown of unitary Hamiltonian evolution, gives rise to black hole entropy. I will then go on to more exotic possibilities, like Taub Nut, and Taub bolt. In these cases, the entropy is not necessarily a quarter the area, of a codimension two surface. As Jim Hartle and I first discovered in 1975, black holes, have a regular Euclidean analytical continuation, if and only if the Euclidean time, tau, is treated like an angular coordinate. It has to be identified with a period, beta = 2 pi over kappa, where kappa is the surface gravity of the horizon. This means that the surfaces of constant tau, all intersect on the horizon, and the concept of a unitary Hamiltonian evolution, will break down there. The surfaces of constant tau, will therefore have an inner boundary at the horizon, and the Hamiltonian will also have contributions from surface terms, at this boundary. If one takes the Hamiltonian vector, to be the combination of the tau and phi Killing vectors that vanishes on the horizon, then the lapse and shift vanish on the horizon. This means that the gravitational part of the surface term, is zero. If the vector potential is also regular on the horizon, the gauge field surface term is also zero.
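As a concrete illustration (not part of the lecture itself), the identification beta = 2 pi over kappa translates, for a Schwarzschild black hole, into the familiar Hawking temperature. A quick numerical sketch in Python, assuming standard SI values for the constants:

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34  # J s
c    = 2.99792458e8     # m / s
G    = 6.67430e-11      # m^3 / (kg s^2)
k_B  = 1.380649e-23     # J / K

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg).

    Follows from the periodicity beta = 2*pi/kappa of Euclidean time,
    with surface gravity kappa = c^4 / (4 G M) for Schwarzschild,
    so that T = hbar * c^3 / (8 * pi * G * M * k_B)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

M_sun = 1.989e30  # kg, solar mass
T = hawking_temperature(M_sun)
print(f"T_Hawking(solar mass) = {T:.3e} K")  # of order 1e-7 K
```

For a solar mass black hole the temperature comes out around sixty nanokelvin, far below the cosmic microwave background, which is why the radiation was not anticipated observationally.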
The thermodynamic partition function, Z, for a system at temperature, beta to the minus one, is the expectation value of e to the minus beta, times the Hamiltonian, summed over all states. As is now well known, this can be represented by a Euclidean path integral, over all fields that are periodic in Euclidean time, with period beta at infinity. Similarly, the partition function for a system with angular velocity, omega, will be given by a path integral over all fields, that are periodic under the combination of a Euclidean time translation, beta, and a rotation, omega beta. One can also specify the gauge potential at infinity. This gives the partition function for a thermodynamic ensemble, with electric and magnetic type charges. The mass, angular momentum, and electric charges of the configurations in the path integral, are not determined by the boundary conditions at infinity. They can be different for different configurations. Each configuration, will therefore be weighted in the partition function, by a factor of e to the minus the charge, times the corresponding potential. On the other hand, the magnetic type charges, are uniquely determined by the boundary conditions at infinity, and are the same for all field configurations in the path integral. The path integral therefore gives the partition function, for a given magnetic charge sector. The lowest order contribution to the partition function, will be e to the minus I, where I, is the action of the Euclidean black hole solution. The action can be related to the Hamiltonian, as the integral of H minus p q dot. In a stationary black hole metric, all the q dots will be zero. Thus the action, I, will be the time period, beta, times the value of the Hamiltonian. As I said earlier, the Hamiltonian surface term at infinity, is mass, plus omega J, plus Phi Q, and the Hamiltonian surface term on the horizon, is zero.
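Schematically, and in the notation of the talk, the chain of relations being used here can be summarized as follows (a sketch; sign and normalization conventions vary between treatments):

```latex
Z \;=\; \mathrm{Tr}\; e^{-\beta H}
\;=\; \int \mathcal{D}[g,\phi]\; e^{-I[g,\phi]}
\;\approx\; e^{-I_{\rm cl}},
\qquad
I \;=\; \int_0^\beta d\tau\,\bigl(H - p\,\dot q\bigr)
\;\xrightarrow{\ \dot q \,=\, 0\ }\;
I \;=\; \beta\, H \;=\; \beta\,\bigl(M + \Omega J + \Phi Q\bigr)
```

The last equality uses the statement in the text that, for a stationary black hole, the only contribution to the Hamiltonian is the surface term at infinity, the horizon surface term being zero.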
If one uses the contribution from this action to the partition function, and uses the standard formula, one finds the entropy is zero. However, because the surfaces of constant Euclidean time, all intersected at the horizon, one had to introduce an inner boundary there. The action, I, = beta times Hamiltonian, is the action for the region between the boundary at infinity, and a small tubular neighbourhood of the horizon. But the partition function, is given by a path integral over all metrics with the required behavior at infinity, and no internal boundaries or infinities. One therefore has to add the action of the tubular neighbourhood of the horizon. Whatever supergravity theory one is using, and whatever dimension one is in, one can make a conformal transformation of the metric to the Einstein frame, in which the coefficient of the Einstein Hilbert action, capital R, is one over 16 pi G, where G is Newton's constant in the dimension of the theory. The surface term associated with the Einstein Hilbert action, is one over 8 pi G, times the trace of the second fundamental form. This gives the tubular neighbourhood of the horizon, an action of minus one over 4 G, times the codimension two area of the horizon. If one adds this action to beta times the Hamiltonian, one gets a contribution to the entropy, of area over 4 G, independent of dimension, or of the particular supergravity theory. Higher order curvature terms in the action, would give the tubular neighbourhood an action, that was small compared to area over 4 G, for large black holes. Thus the quarter area law, is universal for black holes. It can be traced to the non trivial topology of Euclidean black holes, which provides an obstruction to foliating them by a family of time surfaces, and using the Hamiltonian to generate a unitary evolution of quantum states.
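The budget of terms described above can be checked numerically for the Schwarzschild case. The following sketch, in natural units with G = c = hbar = k_B = 1, assembles the action as beta times the Hamiltonian minus the horizon tube contribution, and recovers the quarter-area law (an illustrative consistency check, not a derivation):

```python
import math

# Schwarzschild black hole in natural units (G = c = hbar = k_B = 1).
M = 3.0                      # arbitrary mass
beta = 8 * math.pi * M       # inverse Hawking temperature, 2*pi/kappa
A = 16 * math.pi * M**2      # horizon area, 4*pi*(2M)^2

# Total action = beta * (Hamiltonian surface term at infinity, here just M)
# minus the action of the tubular neighbourhood of the horizon, A/4G.
I = beta * M - A / 4

# Standard thermodynamics: log Z = -I, and S = beta*M + log Z.
S = beta * M - I
assert math.isclose(S, A / 4)
print(f"S = {S:.4f}, A/4G = {A / 4:.4f}")
```

Without the tube term one would have I = beta M, giving S = 0, which is the zero-entropy result mentioned at the start of the paragraph; the quarter-area entropy comes entirely from the horizon neighbourhood.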
Because the entropy is given by the horizon area in Planck units, one might think that it corresponded to microstates, that are localized near the horizon. However, gravitational entropy, like gravitational energy, cannot be localized, but can only be defined globally. This can be seen most clearly in the case of the three dimensional BTZ black hole, to which all four or five dimensional black holes, can be related by a series of U dualities, which preserve the horizon area. The BTZ black hole, is a solution of the 2+1 Einstein equations, with a negative cosmological constant. Locally, the only solution of these equations, is anti de Sitter space, but the global structure can be different. To see this, one can picture anti de Sitter space, as conformal to the interior of a cylinder in 2+1 Minkowski space. The surface of the cylinder, represents the time like infinity of anti de Sitter space. Similarly, Euclidean anti de Sitter space, is conformal to the interior of a cylinder in three dimensional Euclidean space. If one now identifies this cylinder periodically along its axis, one obtains the background geometry for quantum fields in anti de Sitter space, at a finite temperature. One could identify under the combination of a translation along the axis and a rotation, but I shall consider only a translation, for simplicity. The surface of the identified cylinder, representing the boundary at infinity, will be a two torus, with periodically identified tau and phi, as the two coordinates. However, given a two torus at infinity, there are two topologically distinct ways, that one can fill it in with a two disk, cross a circle. These two ways, are shown as the two anchor rings, or solid tori, on the screen. The circle can be in either the tau or phi directions. The first corresponds to anti de Sitter at a finite temperature, and the second to the BTZ black hole.
In the BTZ black hole, the orbits of the phi Killing vector, do not shrink to zero, because the black hole has no center in the Euclidean region. On the other hand, the orbits of the tau Killing vector, shrink to zero at the horizon, which is the center of the disk, cross the circle. One gets a non zero action and Hamiltonian for the BTZ black hole, by taking the reference background, to be anti de Sitter space at a finite temperature. As before, this leads to an entropy of a quarter the area of the horizon, which in this case, is the length of the phi orbit at the center of the disk. But the BTZ black hole, is completely homogeneous. Thus one can not localize the entropy on the horizon, which is just like the axis in ordinary three dimensional space. It arises from the global mismatch of Euclidean BTZ, with the reference background, which is Euclidean anti de Sitter, periodically identified in the tau direction. This analysis of the mismatch between the topology of a reference background, and other Euclidean solutions with the same asymptotic behavior, can be extended to other situations. The intersection of surfaces of constant time on the horizon, is only the simplest way in which unitary Hamiltonian evolution can break down. Chris Hunter and I, have been investigating more complicated topologies, in which there are other possible singularities in the foliation of spacetime. One might expect that these singularities, would also have entropy associated with them. Since thermal ensembles are periodic in the Euclidean time direction, we considered reference backgrounds, and other Euclidean solutions, which have a U1 isometry group, with Killing vector, K. The isometry group, will have fixed points where K vanishes. Gary Gibbons and I, classified the possible fixed point sets in four dimensions, into two dimensional surfaces we called bolts, and isolated points that we called nuts. However, one can extend this classification scheme, to Euclidean metrics of any dimension.
The fixed point sets will then lie on totally geodesic sub manifolds, of even codimension. Let tau be the parameter of the U1 isometry group. Then the metric can be written in the Kaluza Klein form, with tau as the coordinate on the internal U1. Here V, omega i, and gamma i j, are fields on the d minus one dimensional space, B, of orbits of the isometry group. B would be singular at the fixed points, so one has to leave them out of B, and introduce d minus two dimensional boundaries to B. The coordinate tau can be changed by a Kaluza Klein gauge transformation, that is, by the addition of a function, lambda on B. This changes the one form, omega, by d lambda, but leaves the field strength, F = d omega, unchanged. If the orbit space, B, has non trivial homology in dimension two, the two form, F, can have non zero integrals over two cycles in B. In this case the potential one form, omega, will have Dirac like string singularities, on surfaces of dimension d minus three in B. The foliation of the spacetime by surfaces of constant tau, will break down both at the fixed points of the isometry, and on the Kaluza Klein string singularities of omega, which I will call Misner strings, after Charles Misner who first realized their nature in the Taub nut solution. Misner strings are surfaces of dimension d minus two in the spacetime. In order to do a Hamiltonian treatment using surfaces of constant tau, one has to cut out small neighbourhoods of the fixed point sets, and of any Misner strings. The action given by beta times the value of the Hamiltonian, will then be the action of the spacetime, with the neighbourhoods removed. Putting back the neighbourhoods, the Einstein Hilbert term will give a contribution of minus a quarter area, for the Misner strings and the d minus two dimensional fixed point sets. But the contribution to the action from lower dimensional fixed points, will be zero.
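The Kaluza Klein form of the metric referred to in this paragraph can be written schematically as (a sketch, up to sign conventions):

```latex
ds^2 \;=\; V\,\bigl(d\tau + \omega_i\, dx^i\bigr)^2 \;+\; V^{-1}\,\gamma_{ij}\, dx^i\, dx^j,
\qquad
\tau \to \tau + \lambda,\quad \omega \to \omega - d\lambda,\quad F = d\omega \ \ \text{invariant}
```

Here $V$, $\omega_i$, and $\gamma_{ij}$ live on the $(d-1)$-dimensional orbit space $B$. Fixed points of the isometry are where $V$ degenerates, and the Dirac-like string singularities of $\omega$, promoted to the full spacetime, are the Misner strings.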
As before, the Hamiltonian surface terms at the fixed points, will be zero, because the lapse and shift vanish there. But the shift won't vanish on the Misner string, so there will be a Hamiltonian surface term on a Misner string, given by the shift, times a component of the second fundamental form, of the constant tau surfaces. Thus the action will be made up of several contributions. First, there will be beta times the Hamiltonian surface terms at infinity, and on the Misner strings. Then one has to subtract one over 4 G, times the sum of the areas of the bolts, plus the Misner strings. Finally, one has to subtract the same quantities for the reference background. Some or all of these quantities may diverge, but the differences from the reference background will have finite limits, as the boundary is taken to infinity. From now on, I shall mean these finite differences, when I refer to any of these contributions to the action. The partition function, Z, can be related by thermodynamics, to the entropy, and the conserved quantities like energy, angular momentum, and electric charge, whose values are not fixed by the boundary conditions. One has log Z, = the entropy, minus the sum of the conserved quantities, each weighted by its thermodynamic potential. But the Hamiltonian surface term at infinity, multiplied by beta, is by definition, the sum of the conserved quantities, weighted by their thermodynamic potentials. Thus taking the action to be minus log Z, one gets that the entropy is a quarter the area of the bolts, and Misner strings, minus beta times the Hamiltonian surface term on the Misner strings. One can make a Kaluza Klein gauge transformation, by changing tau by a function, lambda, on the orbit space. This will change the position and area of the Misner strings, but the combination, a quarter string area, minus beta times the Hamiltonian surface term on the string, will be gauge invariant. Again, this shows that entropy is a global property.
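Collecting the pieces, the argument of this paragraph can be summarized in a schematic notation (with $\mathcal{A}$ for areas, $H_\infty$ and $H_{\rm MS}$ for the Hamiltonian surface terms at infinity and on the Misner strings, all understood as differences from the reference background):

```latex
I \;=\; \beta\, H_\infty \;+\; \beta\, H_{\rm MS} \;-\; \frac{1}{4G}\bigl(\mathcal{A}_{\rm bolt} + \mathcal{A}_{\rm MS}\bigr),
\qquad
\log Z \;=\; -\,I \;=\; S \;-\; \beta\, H_\infty
\;\;\Longrightarrow\;\;
S \;=\; \frac{1}{4G}\bigl(\mathcal{A}_{\rm bolt} + \mathcal{A}_{\rm MS}\bigr) \;-\; \beta\, H_{\rm MS}
```

The gauge-invariant combination is the last expression: a Kaluza Klein gauge transformation moves $\mathcal{A}_{\rm MS}$ and $H_{\rm MS}$ around individually, but leaves $\mathcal{A}_{\rm MS}/4G - \beta H_{\rm MS}$, and hence the entropy, unchanged.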
It cannot be localized in microstates on the Misner string. The discussion I have given, applies to any gravitational theory in any dimension, which has the Einstein Hilbert action, as the leading term. I shall illustrate it, however, with some examples in four dimensions, that have been worked out by Chris Hunter. Four dimensional metrics, can have several different asymptotic behaviors. In this talk, I shall concentrate on the asymptotically locally flat, and asymptotically locally Euclidean cases, because entropy has not been defined for these spaces previously. Asymptotically locally flat solutions, have a Nut charge, or magnetic type mass, N, as well as the ordinary electric type mass, M. The Nut charge is beta over 8 pi, times the first Chern number of the U1 bundle, over the sphere at infinity, in the orbit space, B. The natural reference backgrounds for solutions with Nut charge, are the self dual multi Taub Nut solutions, which have M=N. When written in Kaluza Klein form, the multi Taub Nut solutions have no bolts, or fixed point sets of dimension two. They do however, have a number of Nuts, or fixed point sets of dimension zero. From each Nut, there is a Misner string, leading to either another Nut, or infinity. The positions and areas of the Misner strings, are gauge dependent. However, a quarter the area of the Misner strings, minus beta times the Hamiltonian surface term on the strings, is gauge invariant, and is independent of the position of the Nuts in the three dimensional flat orbit space. Thus the entropy of the multi Taub Nuts, is zero, as one would expect, since they define the vacuum for that sector of the theory. There are, however, asymptotically locally flat solutions, that are not multi Taub Nut. The prime example, is the so called Taub bolt solution, discovered by Don Page. As its name suggests, this solution has a bolt, with area 12 pi N squared. It also has a Misner string, stretching from the bolt to infinity.
The Chern number at infinity, is one. Thus the appropriate reference background, is the single self dual Taub Nut. When one calculates the entropy according to the prescription I have given, the result is, pi N squared. Note that this is not a quarter the area of the bolt, which would have given 3 pi N squared. The difference comes from the different Misner string area and Hamiltonian contributions, in Taub Nut and Taub bolt. In the asymptotically locally Euclidean case, the appropriate reference backgrounds, are the orbifolds obtained by identifying Euclidean flat space, under a discrete sub group, Gamma, of SU2, that acts freely on the three sphere. The ALE self dual instantons, have these boundary conditions. They can be written in Kaluza Klein form, with bolts, nuts, and Misner strings. The reference backgrounds, can also be written in Kaluza Klein form, with a nut at the orbifold point, and a Misner string from the orbifold point, to infinity. However, the entropy, calculated according to the prescription I have given, turns out to be zero. This is what one would expect, because the ALE instantons, have the same super symmetry as the reference backgrounds. It is only when one has solutions with less super symmetry than the background, that one gets entropy. Examples are non extreme black holes, which have no super symmetry, or extreme black holes with central charges, which have reduced super symmetry. One can show that there are no vacuum ALE metrics, with less super symmetry than the reference background. The Israel Wilson family of Einstein Maxwell solutions, however, contains non self dual ALE, and even asymptotically Euclidean solutions, with an asymptotically constant self dual Maxwell field at infinity. Since these solutions are not super symmetric, and have different topology to the reference background, one would expect them to have entropy, and this is confirmed by calculation of examples. 
One might think instantons with a self dual Maxwell field at infinity, were not physically relevant. However, one can promote them to being Einstein Yang Mills solutions, with a constant self dual Yang Mills field at infinity. One could then match them to Yang Mills instantons in flat space, with large winding numbers, which can have regions where the Yang Mills field, is almost constant. Finally, to show that the expression we propose for the entropy, can be applied in more than four dimensions, consider the five sphere, of radius, R. This can be regarded as a solution of a five dimensional theory, with cosmological constant. One can take the U1 isometry group, to have a fixed point set on a three sphere of radius R. In this case there are no Misner strings. So the formula gives an entropy of, pi squared, R cubed, over 2G. However, one can choose a different U1 isometry, whose orbits are the Hopf fibration of the five sphere. This isometry group, has no fixed points. So the usual connection between entropy, and the fixed points, does not apply. But the orbit space of the Hopf fibration, is CP2. The Kaluza Klein two form, F, is the harmonic two form on CP2. The one form potential, omega, for this has a Dirac string on a two surface in the orbit space. When promoted to the full spacetime, this becomes a three dimensional Misner string. The area of the Misner string divided by 4G, minus beta times the Hamiltonian surface term on the Misner string, is again the entropy, pi squared R cubed, over 2G. This example is fairly trivial, but it shows that the method can be extended to higher dimensions. I think there are three morals that can be drawn from this work. The first is that gravitational entropy just depends on the Einstein Hilbert action. It doesn't require super symmetry, string theory, or p-branes. The second is that entropy is a global quantity, like energy or angular momentum, and shouldn't be localized on the horizon.
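The arithmetic in the five-sphere example can be checked directly: the fixed point set of the first U(1) isometry is a three-sphere of radius R, whose "area" (3-volume) is 2 pi squared R cubed, and a quarter of that reproduces the quoted entropy. A small Python sketch, in units with G = 1:

```python
import math

# Five-sphere example (units with G = 1): the bolt of the first U(1)
# isometry is a round S^3 of radius R, with 3-volume 2*pi^2*R^3.
# The quarter-area law should then give the quoted entropy pi^2 R^3 / 2G.
R = 2.0                            # arbitrary radius
area_S3 = 2 * math.pi**2 * R**3    # volume of a round three-sphere
entropy = area_S3 / 4              # quarter-area law with G = 1
assert math.isclose(entropy, math.pi**2 * R**3 / 2)
print(f"entropy = {entropy:.4f} = pi^2 R^3 / 2G (G = 1)")
```

The point of the example in the text is that the Hopf-fibration isometry has no fixed points at all, yet the Misner string contribution reproduces this same value, confirming that the entropy is a property of the solution, not of a particular foliation.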
The various attempts to identify the microstates responsible for black hole entropy, are in fact constructions of dual theories, that live in separate spacetimes. The third moral, is that entropy arises from a failure to foliate the Euclidean regime, with a family of time surfaces. This would suggest that there would not be a unitary S matrix, for particle scattering described by a Euclidean section, with non trivial topology. No particle scattering situation, with non trivial Euclidean topology, has definitely been shown to exist, but the asymptotically Euclidean solutions with a constant Maxwell field at infinity, are very suggestive. They would seem to point to loss of quantum coherence and information, in black holes. This is the major unresolved question, in the quantum theory of black holes. Let's hope this meeting, gives us new insights into the problems.

Quantum Cosmology, M-theory and the Anthropic Principle (January '99)

Please note that bullet marks indicate progression of slide show. The slide show is a PowerPoint97 file. It is viewable on machines without PowerPoint97. You need to download the zip file. This lecture is also available for download as a postscript file. To download the file, right click on the link and select 'Save Link As...' from the menu. This lecture is the intellectual property of Professor S.W. Hawking. You may not reproduce, edit or distribute this document in any way for monetary advantage. This talk will be based on work with Neil Turok and Harvey Reall. I will describe what I see as the framework for quantum cosmology, on the basis of M theory. I shall adopt the no boundary proposal, and shall argue that the Anthropic Principle is essential, if one is to pick out a solution to represent our universe, from the whole zoo of solutions allowed by M theory. Cosmology used to be regarded as a pseudo science, an area where wild speculation, was unconstrained by any reliable observations.
We now have lots and lots of observational data, and a generally agreed picture of how the universe is evolving. But cosmology is still not a proper science, in the sense that as usually practiced, it has no predictive power. Our observations tell us the present state of the universe, and we can run the equations backward, to calculate what the universe was like at earlier times. But all that tells us is that the universe is as it is now, because it was as it was then. To go further, and be a real science, cosmology would have to predict how the universe should be. We could then test its predictions against observation, like in any other science. The task of making predictions in cosmology is made more difficult by the singularity theorems, that Roger Penrose and I proved. These showed that if General Relativity were correct, the universe would have begun with a singularity. Of course, we would expect classical General Relativity to break down near a singularity, when quantum gravitational effects have to be taken into account. So what the singularity theorems are really telling us, is that the universe had a quantum origin, and that we need a theory of quantum cosmology, if we are to predict the present state of the universe. A theory of quantum cosmology has three aspects. The first, is the local theory that the fields in space-time obey. The second, is the boundary conditions for the fields. And I shall argue that the anthropic principle, is an essential third element. As far as the local theory is concerned, the best, and indeed the only consistent way we know, to describe gravitational forces, is curved space-time. And the theory has to incorporate super symmetry, because otherwise the uncancelled vacuum energies of all the modes would curl space-time into a tiny ball. These two requirements, seemed to point to supergravity theories, at least until 1985. But then the fashion changed suddenly.
People declared that supergravity was only a low energy effective theory, because the higher loops probably diverged, though no one was brave, or foolhardy enough to calculate an eight-loop diagram. Instead, the fundamental theory was claimed to be super strings, which were thought to be finite to all loops. But it was discovered that strings were just one member, of a wider class of extended objects, called p-branes. It seems natural to adopt the principle of p-brane democracy. All p-branes are created equal. Yet for p greater than one, the quantum theory of p-branes, diverges for higher loops. I think we should interpret these loop divergences, not as a break down of the supergravity theories, but as a break down of naive perturbation theory. In gauge theories, we know that perturbation theory breaks down at strong coupling. In quantum gravity, the role of the gauge coupling, is played by the energy of a particle. In a quantum loop one integrates over… So one would expect perturbation theory, to break down. In gauge theories, one can often use duality, to relate a strongly coupled theory, where perturbation theory is bad, to a weakly coupled one, in which it is good. The situation seems to be similar in gravity, with the relation between ultra violet and infrared cut-offs, in the anti de Sitter, conformal field theory, correspondence. I shall therefore not worry about the higher loop divergences, and use eleven-dimensional supergravity, as the local description of the universe. This also goes under the name of M theory, for those that rubbished supergravity in the 80s, and don't want to admit it was basically correct. In fact, as I shall show, it seems the origin of the universe, is in a regime in which first order perturbation theory, is a good approximation. The second pillar of quantum cosmology, are boundary conditions for the local theory. There are three candidates, the pre big bang scenario, the tunneling hypothesis, and the no boundary proposal.
The pre big bang scenario claims that the boundary condition, is some vacuum state in the infinite past. But if this vacuum state develops into the universe we have now, it must be unstable. And if it is unstable, it wouldn't be a vacuum state, and it wouldn't have lasted an infinite time before becoming unstable. The quantum-tunneling hypothesis, is not actually a boundary condition on the space-time fields, but on the Wheeler Dewitt equation. However, the Wheeler Dewitt equation, acts on the infinite dimensional space of all fields on a hyper surface, and is not well defined. Also, the 3+1, or 10+1 split, is putting apart that which God, or Einstein, has joined together. In my opinion therefore, neither the pre big bang scenario, nor the quantum-tunneling hypothesis, are viable. To determine what happens in the universe, we need to specify the boundary conditions, on the field configurations, that are summed over in the path integral. One natural choice, would be metrics that are asymptotically Euclidean, or asymptotically anti de Sitter. These would be the relevant boundary conditions for scattering calculations, where one sends particles in from infinity, and measures what comes back out. However, they are not the appropriate boundary conditions for cosmology. We have no reason to believe the universe is asymptotically Euclidean, or anti de Sitter. Even if it were, we are not concerned about measurements at infinity, but in a finite region in the interior. For such measurements, there will be a contribution from metrics that are compact, without boundary. The action of a compact metric is given by integrating the Lagrangian. Thus its contribution to the path integral is well defined. By contrast, the action of a non-compact or singular metric involves a surface term at infinity, or at the singularity. One can add an arbitrary quantity to this surface term. It therefore seems more natural to adopt what Jim Hartle and I called the no boundary proposal.
The quantum state of the universe is defined by a Euclidean path integral over compact metrics. In other words, the boundary condition of the universe is that it has no boundary. There are compact Ricci flat metrics of any dimension, many with high dimensional moduli spaces. Thus eleven-dimensional supergravity, or M theory, admits a very large number of solutions and compactifications. There may be some principle that we haven't yet thought of, that restricts the possible models to a small sub class, but it seems unlikely. Thus I believe that we have to invoke the Anthropic Principle. Many physicists dislike the Anthropic Principle. They feel it is messy and vague, it can be used to explain almost anything, and it has little predictive power. I sympathize with these feelings, but the Anthropic Principle seems essential in quantum cosmology. Otherwise, why should we live in a four dimensional world, and not eleven, or some other number of dimensions. The anthropic answer is that two spatial dimensions, are not enough for complicated structures, like intelligent beings. On the other hand, four or more spatial dimensions would mean that gravitational and electric forces would fall off faster than the inverse square law. In this situation, planets would not have stable orbits around their star, nor electrons have stable orbits around the nucleus of an atom. Thus intelligent life, at least as we know it, could exist only in four dimensions. I very much doubt we will find a non anthropic explanation. The Anthropic Principle is usually said to have weak and strong versions. According to the strong Anthropic Principle, there are millions of different universes, each with different values of the physical constants. Only those universes with suitable physical constants will contain intelligent life. With the weak Anthropic Principle, there is only a single universe.
But the effective couplings are supposed to vary with position, and intelligent life occurs only in those regions, in which the couplings have the right values. However, quantum cosmology, and the no boundary proposal, remove the distinction between the weak and strong Anthropic Principles. The different physical constants are just different moduli of the internal space, in the compactification of M theory, or eleven-dimensional supergravity. All possible moduli will occur in the path integral over compact metrics. By contrast, if the path integral were over non compact metrics, one would have to specify the values of the moduli at infinity. But why should the moduli at infinity, have those particular values, like four uncompactified dimensions, that allow intelligent life? In fact, the Anthropic Principle, really requires the no boundary proposal, and vice versa. One can make the Anthropic Principle precise, by using Bayes statistics. One takes the a-priori probability of a class of histories, to be e to the minus the Euclidean action, given by the no boundary proposal. One then weights this a-priori probability, with the probability that the class of histories contain intelligent life. As physicists, we don't want to be drawn into the fine details of chemistry and biology, but we can reckon certain features, as essential prerequisites of life as we know it. Among these are the existence of galaxies and stars, and physical constants near what we observe. There may be some other region of moduli space, that allows some different form of intelligent life, but it is likely to be an isolated island. I shall therefore ignore this possibility, and just weight the a-priori probability, with the probability to contain galaxies. The simplest compact metric that could represent a four dimensional universe, would be the product of a four sphere, with a compact internal space.
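The Bayesian weighting can be made concrete in a toy calculation. A minimal sketch, with an entirely made-up Euclidean action S(b) = minus b squared for an instanton "size" b, and a made-up Gaussian weight for the probability of containing galaxies, just to show how the e-to-the-minus-action prior and the anthropic weight combine into a normalized posterior:

```python
import numpy as np

# Hypothetical toy model, not the actual action of any instanton:
# S(b) = -b**2, so larger instantons have more negative action and a
# higher a-priori probability, as in the lecture's argument.
b = np.linspace(0.0, 2.0, 2001)
prior = np.exp(b**2)                      # e to the minus the action
weight = np.exp(-((b - 1.0) / 0.2) ** 2)  # toy "contains galaxies" weight
posterior = prior * weight
posterior /= posterior.sum()              # normalize on the grid

b_peak = b[np.argmax(posterior)]
# The prior drags the peak slightly above the bare anthropic peak at b = 1,
# illustrating how the no boundary prior biases towards larger instantons.
```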
But the world we live in has a metric with Lorentzian signature, rather than a positive definite Euclidean one. So one has to analytically continue the four-sphere metric, to complex values of the coordinates. There are several ways of doing this. One can analytically continue the coordinate, sigma, as its value at the equator, plus i t. One obtains a Lorentzian metric, which is a closed Friedmann solution, with a scale factor that goes like cosh Ht. So this is a closed universe that collapses to a minimum size, and then expands exponentially again. However, one can analytically continue the four-sphere in another way. Define t = i sigma, and chi = i psi. This gives an open Friedmann universe, with a scale factor like sinh Ht. Thus one can get an apparently spatially infinite universe, from the no boundary proposal. The reason is that one is using as a time coordinate, the hyperboloids of constant distance, inside the light cone of a point in de Sitter space. The point itself, and its light cone, are the big bang of the Friedmann model, where the scale factor goes to zero. But they are not singular. Instead, the spacetime continues through the light cone to a region beyond. It is this region that deserves the name, the pre big bang scenario, rather than the misguided model that commonly bears that title. If the Euclidean four-sphere were perfectly round, both the closed and open analytical continuations, would inflate for ever. This would mean they would never form galaxies. A perfectly round four sphere has a lower action, and hence a higher a-priori probability, than any other four metric of the same volume. However, one has to weight this probability, with the probability of intelligent life, which is zero. Thus we can forget about round four spheres. On the other hand, if the four sphere is not perfectly round, the analytical continuation will start out expanding exponentially, but it can change over later to radiation or matter dominated, and can become very large and flat.
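The two continuations just described can be written out explicitly. This is a standard calculation, with conventions chosen to match the lecture's t = i sigma, chi = i psi. Starting from the round Euclidean four-sphere of radius 1 over H,

```latex
% Round Euclidean four-sphere:
ds^2 = d\sigma^2 + H^{-2}\sin^2(H\sigma)
       \left[d\psi^2 + \sin^2\!\psi\, d\Omega_2^2\right].

% Continuing about the equator, \sigma = \pi/(2H) + it, gives the
% closed Friedmann solution:
ds^2 = -dt^2 + H^{-2}\cosh^2(Ht)\, d\Omega_3^2 .

% Continuing instead with t = i\sigma and \chi = i\psi gives the
% open Friedmann solution:
ds^2 = -dt^2 + H^{-2}\sinh^2(Ht)
       \left[d\chi^2 + \sinh^2\!\chi\, d\Omega_2^2\right].
```

In the first case the scale factor is H to the minus one, cosh Ht; in the second it is H to the minus one, sinh Ht, with infinite hyperbolic spatial sections.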
This provides a mechanism whereby all eleven dimensions can have similar curvatures, in the compact Euclidean metric, but four dimensions can be much flatter than the other seven, in the Lorentzian analytical continuation. But the mechanism doesn't seem specific to four large dimensions. So we will still need the Anthropic Principle, to explain why the world is four-dimensional. In the semi classical approximation, which turns out to be very good, the dominant contribution, comes from metrics near solutions of the Euclidean field equations. So we need to study deformed four spheres, in the effective theory obtained by dimensional reduction of eleven dimensional supergravity, to four dimensions. These Kaluza Klein theories, contain various scalar fields, that come from the three index field, and the moduli of the internal space. For simplicity, I will describe only the single scalar field case. The scalar field, phi, will have a potential, V of phi. In regions where the gradients of phi are small, the energy momentum tensor will act like a cosmological constant, Lambda = 8 pi G V, where G is Newton's constant in four dimensions. Thus it will curve the Euclidean metric, like a four-sphere. However, if the field phi is not at a stationary point of V, it can not have zero gradient everywhere. This means that the solution can not have O5 symmetry, like the round four sphere. The most it can have, is O4 symmetry. In other words, the solution is a deformed four sphere. One can write the metric of an O4 instanton, in terms of a function, b of sigma. Here b is the radius of a three sphere of constant distance, sigma, from the north pole of the instanton. If the instanton were a perfectly round four-sphere, b would be a sine function of sigma. It would have one zero at the north pole, and a second at the south pole, which would also be a regular point of the geometry.
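In this notation, the Euclidean field equations reduce to a standard pair of coupled ordinary differential equations for b and phi (the Coleman-De Luccia form, with primes denoting derivatives with respect to sigma):

```latex
% O(4) ansatz:  ds^2 = d\sigma^2 + b(\sigma)^2\, d\Omega_3^2
\phi'' + 3\,\frac{b'}{b}\,\phi' = \frac{dV}{d\phi},
\qquad
b'^2 = 1 + \frac{8\pi G}{3}\, b^2\!\left(\tfrac{1}{2}\phi'^2 - V\right).
```

For phi constant at a stationary point of V, these are solved by b = H to the minus one, sine of H sigma, with H squared = 8 pi G V over 3, which is the round four-sphere just described.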
However, if the scalar field at the north pole, is not at a stationary point of the potential, it will vary over the four sphere. If the potential is carefully adjusted, and has a false vacuum local minimum, it is possible to obtain a solution that is non-singular over the whole four-sphere. This is known as the Coleman-De Luccia instanton. However, for general potentials without a false vacuum, the behavior is different. The scalar field will be almost constant over most of the four-sphere, but will diverge near the south pole. This behavior is independent of the precise shape of the potential, and holds for any polynomial potential, and for any exponential potential, with an exponent, a, less than 2. The scale factor, b, will go to zero at the south pole, like the one-third power of the distance. This means the south pole is actually a singularity of the four dimensional geometry. However, it is a very mild singularity, with a finite value of the trace K surface term, on a boundary around the singularity at the south pole. This means the actions of perturbations of the four dimensional geometry, are well defined, despite the singularity. One can therefore calculate the fluctuations in the microwave background, as I shall describe later. The deep reason, behind this good behavior of the singularity, was first seen by Garriga. He pointed out that if one dimensionally reduced five dimensional Euclidean Schwarzschild, along the tau direction, one would get a four-dimensional geometry, and a scalar field. These were singular at the horizon, in the same manner as at the south pole of the instanton. In other words, the singularity at the south pole, can be just an artifact of dimensional reduction, and the higher dimensional space, can be non singular. This is true quite generally. The scale factor, b, will go like the one-third power of the distance, when the internal space, collapses to zero size in one direction.
When one analytically continues the deformed sphere to a Lorentzian metric, one obtains an open universe, which is inflating initially. One can think of this as a bubble in a closed de Sitter like universe. In this way, it is similar to the single bubble inflationary universes that one obtains from Coleman-De Luccia instantons. The difference is that the Coleman-De Luccia instantons required carefully adjusted potentials, with false vacuum local minima. But the singular Hawking-Turok instanton, will work for any reasonable potential. The price one pays for a general potential, is a singularity at the south pole. In the analytically continued Lorentzian space-time, this singularity would be time like, and naked. One might think that anything could come out of this naked singularity, and propagate through the big bang light cone, into the open inflating region. Thus one would not be able to predict what would happen. However, as I already said, the singularity at the south pole of the four sphere, is so mild, that the actions of the instanton, and of perturbations around it, are well defined. This behavior of the singularity means one can determine the relative probabilities of the instanton, and of perturbations around it. The action of the instanton itself is negative, but the effect of perturbations around the instanton, is to increase the action, that is, to make the action less negative. According to the no boundary proposal, the probability of a field configuration, is e to the minus its action. Thus perturbations around the instanton have a lower probability, than the unperturbed background. This means that quantum fluctuations are more suppressed, the bigger the fluctuation, as one would hope. This is not the case with some versions of the tunneling boundary condition. How well do these singular instantons, account for the universe we live in? The hot big bang model seems to describe the universe very well, but it leaves unexplained a number of features.
First is the isotropy. Why are different regions of the microwave sky, at very nearly the same temperature, if those regions have not communicated in the past? Second, despite this overall isotropy, why are there fluctuations of order one part in ten to the five, with a fairly flat spectrum? Third, why is the density of matter, still so near the critical value, when any departure would grow rapidly with time? Fourth, why is the vacuum energy, or effective cosmological constant, so small, when symmetry breaking might lead one to expect a value ten to the 80 times higher? In fact, the present matter and vacuum energy densities can be regarded as two axes in a plane of possibilities. For some purposes, it is better to deal with the linear combinations, matter plus vacuum energy, which is related to the curvature of space. And matter minus twice vacuum energy, which gives the deceleration of the universe. Inflation was supposed to solve the problems of the hot big bang model. It does a good job with problem one, the isotropy of the universe. If the inflation continues for long enough, the universe would now be spatially flat, which would imply that the sum of the matter and vacuum energies had the critical value. But inflation by itself, places no limits on the other linear combination of matter and vacuum energies, and does not give an answer to problem two, the amplitude of the fluctuations. These have to be fed in, as fine tunings of the scalar potential, V. Also, without a theory of initial conditions, it is not clear why the universe should start out inflating in the first place. The instantons I have described predict that the universe starts out in an inflating, de Sitter like state. Thus they solve the first problem, the fact that the universe is isotropic. However, there are difficulties with the other three problems. According to the no boundary proposal, the a-priori probability of an instanton, is e to the minus the Euclidean action.
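The flatness statement can be quantified in a couple of lines. The curvature term, omega minus one, is proportional to one over a squared H squared; with H roughly constant during inflation, the scale factor grows by e to the N after N e-foldings, so any initial curvature is suppressed by e to the minus 2N. A rough sketch (N = 60 is a conventional ballpark, not a number from the lecture):

```python
import math

def curvature_suppression(n_efolds):
    """|Omega - 1| is proportional to 1/(a*H)**2.  With H roughly
    constant during inflation, a grows by e**N over N e-foldings,
    so the curvature term shrinks by e**(-2N)."""
    return math.exp(-2.0 * n_efolds)

# Sixty e-foldings suppress any initial curvature by roughly 10**-52,
# which is why sustained inflation drives omega towards one.
```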
But if the Ricci scalar is positive, as is likely for a compact instanton with an isometry group, the Euclidean action will be negative. The larger the instanton, the more negative will be the action, and so the higher the a-priori probability. Thus the no boundary proposal, favors large instantons. In a way, this is a good thing, because it means that the instantons are likely to be in the regime, where the semi classical approximation is good. However, a larger instanton, means starting at the north pole, with a lower value of the scalar potential, V. If the form of V is given, this in turn means a shorter period of inflation. Thus the universe may not achieve the number of e-foldings, needed to ensure omega matter, plus omega lambda, is near to one now. In the case of the open Lorentzian analytical continuation considered here, the no boundary a-priori probabilities, would be heavily weighted towards omega matter, plus omega lambda, equals zero. Obviously, in such an empty universe, galaxies would not form, and intelligent life would not develop. So one has to invoke the anthropic principle. If one is going to have to appeal to the anthropic principle, one may as well use it also for the other fine tuning problems of the hot big bang. These are the amplitude of the fluctuations, and the fact that the vacuum energy now, is incredibly near zero. The amplitude of the scalar perturbations depends on both the potential, and its derivative. But in most potentials, the scalar perturbations are of the same form as the tensor perturbations, but are larger by a factor of about ten. For simplicity, I shall consider just the tensor perturbations. They arise from quantum fluctuations of the metric, which freeze in amplitude when their co-moving wavelength, leaves the horizon during inflation. The amplitude of the tensor perturbation, will thus be roughly one over the horizon size, in Planck units. Longer co-moving wavelengths, leave the horizon first during inflation.
Thus the spectrum of the tensor perturbations, at the time they re-enter the horizon, will slowly increase with wavelength, up to a maximum of one over the size of the instanton. The time, at which the maximum amplitude re-enters the horizon, is also the time at which omega begins to drop below one. One has two competing effects. The a-priori probability from the no boundary proposal, which wants to make the instantons large, and the probability of the formation of galaxies, which requires that both omega, and the amplitude of the fluctuations, not be too small. This would give a sharp peak in the probability distribution for omega, of about ten to the minus three. The probability for the tensor perturbations will peak at order ten to the minus eight. Both these values, are much less than what is observed. So what went wrong? We haven't yet taken into account the anthropic requirement, that the cosmological constant is very small now. Eleven dimensional supergravity contains a three-form gauge field, with a four-form field strength. When reduced to four dimensions, this acts as a cosmological constant. For real components in the Lorentzian four-dimensional space, this cosmological constant is negative. Thus it can cancel the positive cosmological constant, that arises from supersymmetry breaking. Supersymmetry breaking is an anthropic requirement. One could not build intelligent beings from massless particles. They would fly apart. Unless the positive contribution from symmetry breaking cancels almost exactly with the negative four form, galaxies wouldn't form, and again, intelligent life wouldn't develop. I very much doubt we will find a non anthropic explanation for the cosmological constant. In the eleven dimensional geometry, the integral of the four-form over any four cycle, or of its dual over any seven cycle, has to be an integer. This means that the four-form is quantized, and can not be adjusted to cancel the symmetry breaking exactly.
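The effect of the quantization can be illustrated with a toy calculation, using made-up numbers. Assume a positive vacuum energy from symmetry breaking, cancelled against an integer multiple of a fixed, hypothetical four-form step; the best achievable cancellation is then never better than half a step:

```python
# Toy numbers, purely illustrative (arbitrary units): a positive vacuum
# energy from symmetry breaking, against a quantized four-form step.
lambda_break = 1.0
step = 0.013                     # hypothetical quantum step size

n_best = round(lambda_break / step)            # best integer flux value
residual = abs(lambda_break - n_best * step)   # leftover vacuum energy

# The residual can never be tuned below step/2.  So if the step greatly
# exceeds the observational bound on the cosmological constant, at most
# one flux value (if any) lands in the anthropically allowed window.
```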
In fact, for reasonable sizes of the internal dimensions, the quantum steps in the cosmological constant, would be much larger than the observational limits. At first, I thought this was a setback for the idea that there was an anthropically controlled cancellation of the cosmological constant. But then, I realized that it was positively in favor. The fact that we exist shows that there must be a solution to the anthropic constraints. But the fact that the quantum steps in the cosmological constant are so large, means that this solution is probably unique. This helps with the problem of low omega I described earlier. If there were several discrete solutions, or a continuous family of them, the strong dependence of the Euclidean action on the size of the instanton, would bias the probability to the lowest omega and fluctuation amplitude possible. This would give a single galaxy in an otherwise empty universe, not the billions we observe. But if there is only one instanton in the anthropically allowed range, the biasing towards large instantons, has no effect. Thus omega matter and omega lambda, could be somewhere in the anthropically allowed region, though it would be below the omega matter plus omega lambda = 1 line, if the universe is one of these open analytical continuations. This is consistent with the observations. The red elliptic region, is the three sigma limits of the supernova observations. The blue region is from clustering observations, and the purple is from the Doppler peak in the microwave background. They seem to have a common intersection, on or below the omega total = 1 line. Assuming that one can find a model that predicts a reasonable omega, how can we test it by observation? The best way is by observing the spectrum of fluctuations, in the microwave background. This is a very clean measurement of the quantum fluctuations, about the initial instanton.
However, there is an important difference between the non-singular Coleman-De Luccia instantons, and the singular instantons I have described. As I said, quantum fluctuations around the instanton are well defined, despite the singularity. Perturbations of the Euclidean instanton, have finite action if and only if, they obey a Dirichlet boundary condition at the singularity. Perturbation modes that don't obey this boundary condition, will have infinite action, and will be suppressed. The Dirichlet boundary condition also arises, if the singularity is resolved in higher dimensions. When one analytically continues to Lorentzian space-time, the Dirichlet boundary condition implies that perturbations reflect at the time like singularity. This has an effect on the two-point correlation function of the perturbations, but it seems to be quite small. The present observations of the microwave fluctuations are certainly not sensitive enough to detect this effect. But it may be possible with the new observations that will be coming in, from the MAP satellite in two thousand and one, and the Planck satellite in two thousand and six. Thus the no boundary proposal, and the pea instanton, are real science. They can be falsified by observation. I will finish on that note. Rotation, Nut charge and Anti de Sitter space Please note that (·) marks indicate progression of slide show. This lecture is the intellectual property of Professor S.W. Hawking. You may not reproduce, edit or distribute this document in any way for monetary advantage. Title slide (reveal title) The work I'm going to talk about has been carried out with Chris Hunter and Marika Taylor-Robinson at Cambridge, and Don Page at Alberta. References It is described in the first three papers shown on the screen. Related work has been carried out by Chamblin, Emparan, Johnson, and Myers.
However, they seemed a bit uncertain what reference background to use. I have also shown a reference to Dowker which is relevant. It has been known for quite a time, that black holes behave like they have entropy. The entropy is the area of the horizon, divided by 4 G, where G is Newton's constant. Black Hole Entropy The idea is that the Euclidean sections of black hole metrics are periodic in the imaginary time coordinate. Thus they represent black holes in equilibrium with thermal radiation. However there are problems with this interpretation. Problems with thermodynamic interpretation (first problem appear) First, one can not have thermal radiation in asymptotically flat space, all the way to infinity, because the energy density would curve the space, and make it an expanding or collapsing Friedmann universe. Thus, if you want a static situation, you have to resort to the dubious Gedanken experiment, of putting the black hole in a box. But you don't find black hole proof boxes, advertised on the Internet. (second problem appear) The second difficulty with black holes in equilibrium with thermal radiation is that black holes have negative specific heat. In many cases, when they absorb energy, they get larger and colder. This reduces the radiation they give off, and so they absorb faster than they radiate, and the equilibrium is unstable. This is closely related to the fact that the Euclidean metric has a negative mode. Thus it seems that asymptotically flat Euclidean black holes, describe the decay of hot flat space, rather than a black hole in equilibrium with thermal radiation. (third problem appear) The third difficulty with the idea of equilibrium is that if the black hole is rotating, the thermal radiation should be co-rotating with it. But far away from the black hole, the radiation would be co-rotating faster than light, which is impossible. Thus, again one has to use the artificial expedient, of a box of finite size. 
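The quarter-area law and the negative specific heat just mentioned can both be checked with textbook formulas. A minimal sketch in SI units, using the standard Bekenstein-Hawking expressions for a Schwarzschild black hole, S over k = 4 pi G M squared over h-bar c, and T = h-bar c cubed over 8 pi G M k:

```python
import math

G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

def bh_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def bh_entropy_over_k(M):
    """Horizon area over 4G in Planck units: S/k = 4 pi G M^2 / (hbar c)."""
    return 4 * math.pi * G * M**2 / (hbar * c)

# A solar-mass hole: T is about 6e-8 K, and S/k is about 1e77.
# Negative specific heat: doubling the mass *lowers* the temperature,
# which is why the equilibrium with thermal radiation is unstable.
```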
Way back in pre-history, Don Page and I, realized one could avoid the first two difficulties, if one considered black holes in anti de Sitter space, rather than asymptotically flat space. In anti de Sitter space, the gravitational potential increases as one goes to infinity. This red shifts the thermal radiation, and means that it has finite energy. Thus anti de Sitter space can exist at finite temperature, without collapsing. In a sense, the gravitational potential in anti de Sitter space, acts like a confining box. Anti de Sitter space can also help with the second problem, that the equilibrium between black holes and thermal radiation, will be unstable. Small black holes in anti de Sitter space, have negative specific heat, like in asymptotically flat space, and are unstable. But black holes larger than the curvature radius of anti de Sitter space, have positive specific heat, and are presumably stable. At the time, Don Page and I, did not think about rotating black holes. But I recently came back to the problem, along with Chris Hunter, and Marika Taylor-Robinson. We realized that thermal radiation in anti de Sitter space could co-rotate with a black hole, up to some limiting angular velocity, without having to travel faster than light. Thus anti de Sitter boundary conditions, can solve all three problems, in the interpretation of Euclidean black holes, as equilibria of black holes, with thermal radiation. Anti de Sitter black holes may not seem of much interest, because we can be fairly sure, that the universe is not asymptotically anti de Sitter. However, they seem worth studying, both for the reasons I have just given, and because of the Maldacena conjecture, relating asymptotically anti de Sitter spaces, to conformal field theories on their boundary. I shall report on two pieces of work in relation to this conjecture. One is a study of rotating black holes in anti de Sitter space. We have found Kerr anti de Sitter metrics in four and five dimensions.
As they approach the critical angular velocity in anti de Sitter space, their entropy, as measured by the horizon area, diverges. We compare this entropy, with that of a conformal field theory on the boundary of anti de Sitter space. This also diverges at the critical angular velocity, when the rotational velocity, approaches the speed of light. We show that the two divergences are similar. The other piece of work, is a study of gravitational entropy, in a more general setting. The quarter area law, holds for black holes or black branes in any dimension, d, that have a horizon, which is a d minus 2 dimensional fixed point set, of a U1 isometry group. However, Chris Hunter and I, have recently shown that entropy can be associated with a more general class of space-times. In these, the U1 isometry group can have fixed points on surfaces of any even co-dimension, and the space-time need not be asymptotically flat, or asymptotically anti de Sitter. In this more general class, the entropy is not just a quarter the area, of the d minus two fixed point set. Among the more general class of space-times for which entropy can be defined, an interesting case is those with Nut charge. Nut charge can be defined in four dimensions, and can be regarded as a magnetic type of mass. Solutions with Nut charge are not asymptotically flat in the usual sense. Instead, they are said to be asymptotically locally flat, or ALF. In the Euclidean regime, in which I shall be working, the difference can be described as follows. An asymptotically flat metric, like Euclidean Schwarzschild, has a boundary at infinity, that is a two-sphere of radius r, cross a circle, whose radius is asymptotically constant. To get finite values for the action and Hamiltonian, one subtracts the values for flat space, periodically identified. In asymptotically locally flat metrics, on the other hand, the boundary at infinity, is an S1 bundle over S2.
These bundles are labeled by their first Chern number, which is proportional to the Nut charge. If the first Chern number is zero, the boundary is the product, S2 cross S1, and the metric is asymptotically flat. However, if the first Chern number is k, the boundary is a squashed three sphere, with mod k points identified around the S1 fibers. Such asymptotically locally flat metrics, can not be matched to flat space at infinity, to give a finite action and Hamiltonian, despite a number of papers that claim it can be done. The best that one can do, is match to the self-dual multi Taub-NUT solutions. These can be regarded as defining the vacua for ALF metrics. In the self-dual Taub-NUT solution, the U1 isometry group, has a zero dimensional fixed point set at the center, called a nut. However, the same ALF boundary conditions, admit another Euclidean solution, called the Taub bolt metric, in which the nut is replaced by a two dimensional bolt. The interesting feature, is that according to the new definition of entropy, the entropy of Taub bolt, is not equal to a quarter the area of the bolt, in Planck units. The reason is that there is a contribution to the entropy from the Misner string, the gravitational counterpart of a Dirac string for a gauge field. The fact that black hole entropy is proportional to the area of the horizon has led people to try and identify the microstates, with states on the horizon. After years of failure, success seemed to come in 1996, with the paper of Strominger and Vafa, which connected the entropy of certain black holes, with a system of D-branes. With hindsight, this can now be seen as an example of a duality, between a gravitational theory in asymptotically anti de Sitter space, and a conformal field theory on its boundary.
It would be interesting if similar dualities could be found for solutions with Nut charge, so that one could verify that the contribution of the Misner string was reflected in the entropy of a conformal field theory. This would be particularly significant for solutions like Taub bolt, which don't have a spin structure. It would show whether the duality between anti de Sitter space, and conformal field theories on its boundary, depends on supersymmetry. In fact, I will present evidence, that the duality requires supersymmetry. To investigate the effect of Nut charge, we have found a family of Taub bolt anti de Sitter solutions. These Euclidean metrics are characterized by an integer, k, and a positive parameter, s. The boundary at large distances is an S1 bundle over S2, with first Chern number, k. If k = 0, the boundary is a product, S1 cross S2, and the space is asymptotically anti de Sitter, in the usual sense. But if k is not zero, they are what may be called, asymptotically locally anti de Sitter, or ALADS. The boundary is a squashed three sphere, with k points identified around the U1 direction. This is just like the asymptotically locally flat, or ALF metrics. But unlike the ALF case, the squashing of the three-sphere, tends to a finite limit, as one approaches infinity. This means that the boundary has a well-defined conformal structure. One can then ask whether the partition function and entropy, of a conformal field theory on the boundary, is related to the action and entropy, of these asymptotically locally anti de Sitter solutions. To make the AdS/CFT correspondence well posed, we have to specify the reference backgrounds, with respect to which the actions and Hamiltonians are defined. For Kerr anti de Sitter, the reference background is just identified anti de Sitter space. However, as in the asymptotically locally flat case, a squashed three sphere, can not be embedded in Euclidean anti de Sitter.
One therefore can not use it as a reference background, to make the action and Hamiltonian finite. Instead, one has to use Taub-NUT anti de Sitter, which is a limiting case of our family. If mod k is greater than one, there is an orbifold singularity in the reference backgrounds, but not in the Taub bolt anti de Sitter solutions. These orbifold singularities in the backgrounds could be resolved, by replacing a small neighbourhood of the nut, by an ALE metric. We shall take it, that the orbifold singularities are harmless. Another issue that has to be resolved, is what conformal field theory to use, on the boundary of the anti de Sitter space. For five dimensional Kerr anti de Sitter space, there are good reasons to believe the boundary theory is large N Yang Mills. But for four-dimensional Kerr anti de Sitter, or Taub bolt anti de Sitter, we are on shakier ground. On the three dimensional boundaries of four dimensional anti de Sitter spaces, Yang Mills theory is not conformally invariant. The folklore is that one takes the infrared fixed point, of three-dimensional Yang Mills, but no one knows what this is. The best we can do, is calculate the determinants of free fields on the squashed three sphere, and see if they have the same dependence on the squashing, as the action. Note that as the boundary is odd dimensional, there is no conformal anomaly. The determinant of a conformally invariant operator, will just be a function of the squashing. We can then interpret the squashing, as the inverse temperature, and get the number of degrees of freedom, from a comparison with the entropy of ordinary black holes, in four dimensional anti de Sitter. I now turn to the question, of how one can define the entropy, of a space-time. A thermodynamic ensemble, is a collection of systems, whose charges are constrained by Lagrange multipliers. Partition function One such charge, is the energy or mass, with the Lagrange multiplier being the inverse temperature, beta.
But one can also constrain the angular momentum, and gauge charges. The partition function for the ensemble, is the sum over all states, of e to the minus, Lagrange multipliers, times associated charges. Thus it can be written as, trace of e to the minus Q. Here Q is the operator that generates a Euclidean time translation, beta, a rotation, delta phi, and a gauge transformation, alpha. In other words, Q is the Hamiltonian operator, for a lapse that is beta at infinity, and a shift that is a rotation through delta phi. This means that the partition function can be represented by a Euclidean path integral. The path integral is over all metrics which at infinity, are periodic under the combination of a Euclidean time translation, beta, a rotation through delta phi, and a gauge rotation, alpha. The lowest order contributions to the path integral for the partition function, will come from Euclidean solutions with a U1 isometry, that agree with the periodic boundary conditions at infinity. The Hamiltonian in general relativity or supergravity, can be written as a volume integral over a surface of constant tau, plus surface integrals over its boundaries. Gravitational Hamiltonian The volume integral vanishes by the constraint equations. Thus the numerical value of the Hamiltonian, comes entirely from the surface terms. The action can be related to the Hamiltonian in the usual way. Because the metric has a time translation isometry, all dotted quantities vanish. Thus the action is just beta times the Hamiltonian. If the solution can be foliated by a family of surfaces, that agree with Euclidean time at infinity, the only surface terms will be at infinity. Family of time surfaces In this case, a solution can be identified under any time translation, rotation, or gauge transformation at infinity. This means that the action will be linear in beta, delta phi, and alpha.
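This linearity has a direct thermodynamic consequence, which can be checked with a toy canonical ensemble. A minimal sketch, assuming only the standard relation S = ln Z + beta times the mean energy (the discrete spectrum here is made up for illustration, not anything from the lecture):

```python
import numpy as np

def entropy_from_Z(energies, beta):
    """Entropy from the standard thermodynamic relations,
    S = ln Z + beta * <E>, for a toy canonical ensemble."""
    e = np.asarray(energies, dtype=float)
    w = np.exp(-beta * e)
    Z = w.sum()                 # partition function
    p = w / Z                   # Boltzmann probabilities
    return float(np.log(Z) + beta * (p * e).sum())

# With a single configuration, ln Z = -beta * E is linear in beta,
# and the entropy vanishes.  With two or more states, S is positive,
# and agrees with the Gibbs entropy, minus sum of p log p.
```

This mirrors the gravitational case: an action that is strictly linear in beta gives zero entropy, so any entropy must come from terms that spoil the linearity.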
If one takes such a linear action, for the partition function, and applies the standard thermodynamic relations, one finds the entropy is zero. The situation is very different however, if the solution can't be foliated by surfaces of constant tau, where tau is the parameter of the U1 isometry group, which agrees with the periodic identification at infinity.

Breakdown of foliation

The breakdown of foliation can occur in two ways. The first is at fixed points of the U1 isometry group. These occur on surfaces of even co-dimension. Fixed-point sets of co-dimension two play a special role. I shall refer to them as bolts. Examples include the horizons of non-extreme black holes and p-branes, but there can be more complicated cases, like Taub bolt. The other way the foliation by surfaces of constant tau, can break down, is if there are what are called, Misner strings.

Kaluza Klein metric

To explain what they are, write the metric in the Kaluza Klein form, with respect to the U1 isometry group. The one form, omega, the scalar, V, and the metric, gamma, can be regarded as fields on B, the space of orbits of the isometry group. If B has homology in dimension two, the Kaluza Klein field strength, F, can have non-zero integrals over two cycles. This means that the one form, omega, will have Dirac strings in B. In turn, this will mean that the foliation of the spacetime, M, by surfaces of constant tau, will break down on surfaces of co-dimension two, called Misner strings. In order to do a Hamiltonian treatment using surfaces of constant tau, one has to cut out small neighbourhoods of the fixed point sets, and the Misner strings. This modifies the treatment, in two ways. First, the surfaces of constant tau now have boundaries at the fixed-point sets, and Misner strings, as well as the boundary at infinity. This means there can be additional surface terms in the Hamiltonian. In fact, the surface terms at the fixed-point sets are zero, because the shift and lapse vanish there.
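The Kaluza Klein form of the metric just referred to can be written schematically as follows; the precise factors of V multiplying gamma are a matter of convention, and are not fixed by the lecture:

```latex
ds^2 \;=\; V\,\bigl(d\tau + \omega_i\,dx^i\bigr)^2 \;+\; \gamma_{ij}\,dx^i\,dx^j,
\qquad
F \;=\; d\omega .
```

If F has a non-zero integral over some two cycle of B, the one form omega cannot be globally defined, and must have Dirac string singularities in B; their lifts to the spacetime M are the Misner strings.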
On the other hand, at a Misner string, the lapse vanishes, but the shift is non-zero. The Hamiltonian can therefore have a surface term on the Misner string, which is the shift, times a component of the second fundamental form, of the constant tau surfaces. The total Hamiltonian, will be the sum of this Misner string Hamiltonian, and the Hamiltonian surface term at infinity.

Consequences of non-foliation

As before, the action will be beta times the Hamiltonian. However, this will be the action of the space-time, with the neighbourhoods of the fixed-point sets and Misner strings removed. To get the action of the full space-time, one has to put back the neighbourhoods. When one does so, the surface term associated with the Einstein Hilbert action, will give a contribution to the action, of minus area over 4G, for the bolts and Misner strings. Here G is Newton's constant in the dimension one is considering. The surface terms around lower dimensional fixed-point sets make no contribution to the action. The action of the space-time, will be the lowest order contribution to minus log Z, where Z is the partition function. But log Z is equal to the entropy, minus beta times the Hamiltonian at infinity. So the entropy is a quarter the area of the bolts and Misner strings, minus beta times the Hamiltonian on the Misner strings. In other words, the entropy is the amount by which the action is less than the value, beta times the Hamiltonian at infinity, that it would have if the surfaces of constant tau, foliated the space-time. This formula for the entropy applies in any dimension and for any class of boundary condition at infinity. Thus one can use it for rotating black holes, in anti de Sitter space. In this case, the reference background is just Euclidean anti de Sitter space, identified with imaginary time period, beta, and appropriate rotation.

Four-dimensional Kerr-AdS

The four-dimensional Kerr anti de Sitter solution, was found by Carter, and is shown on the slide.
The parameter, a, determines the rate of rotation. When a l approaches one, the co-rotation velocity approaches the speed of light at infinity. It is therefore interesting to examine the behavior of the black hole action, and the conformal field theory partition function, in this limit. To calculate the action of the black hole is quite delicate, because one has to match it to rotating anti de Sitter space, and subtract one infinite quantity, from another.

Euclidean action

Nevertheless, this can be done in a well-defined way, and the result is shown on the slide. As you might expect, it diverges at the critical angular velocity, at which the co-rotating velocity, approaches the speed of light. The boundary of rotating anti de Sitter, is a rotating Einstein universe, of one dimension lower. Thus it is straightforward in principle, to calculate the partition function for a free conformal field on the boundary. Someone like Dowker might have calculated the result exactly. However, as we are only human, we looked only at the divergence in the partition function, as one approaches the critical angular velocity. This divergence arises because in the mode sum for the partition function, one has Bose-Einstein factors with a correction because of the rotation. As one approaches the critical angular velocity, this causes a Bose-Einstein condensation in modes with the maximum axial quantum number, m.

Conformal field theory

The conformal field theory partition function has the same divergence as the black hole action, at the critical angular velocity. I haven't compared the residues. This is difficult, because it is not clear what three-dimensional conformal field theory one should use on the boundary of four dimensional anti de Sitter.

Five-dimensional Kerr-AdS

The case of rotating black holes in anti de Sitter five, is broadly similar, but with some differences.
One of these is that, because the spatial rotation group, O(4), is of rank 2, there are two rotation parameters, a and b. Each of these must have absolute value less than l to the minus one, for the co-rotation velocity to be less than the speed of light, all the way out to infinity. If just one of a and b, approaches the limiting value, the action of the black hole, and the partition function of the conformal field theory, both diverge in a manner similar to the four dimensional case.

Action of five-dimensional Kerr-AdS

But if a = b, and they approach the limit together, the action and the partition function, both have the same stronger divergence. Again, I haven't compared residues, but this might be worth doing. It may be that in the critical angular velocity limit, the interactions between the particles of super Yang Mills theory, become unimportant. If this is the case, one would expect the action and partition function to agree, rather than differ by a factor of four thirds, as in the non rotating case.

Asymptotically locally flat

I now turn to the case of Nut charge. For asymptotically locally flat metrics in four dimensions, the reference background is the self-dual Taub Nut solution. The Taub bolt solution, has the same asymptotic behavior, but with the zero-dimensional nut fixed point, replaced by a two-dimensional bolt. The area of the bolt is 12 pi N squared, where N is the Nut charge. The area of the Misner string is minus 6 pi N squared. That is to say, the area of the Misner string in Taub bolt, is infinite, but it is less than the area of the Misner string in Taub nut, in a well-defined sense. The Hamiltonian on the Misner string, is N over 8. Again the Misner string Hamiltonian is infinite, but the difference from Taub nut, is finite. And the period, beta, is 8 pi N. Thus the entropy, is pi N squared. Note that this is less than a quarter the area of the bolt, which would give 3 pi N squared. It is the effect of the Misner string that reduces the entropy.
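In terms of the general formula stated earlier, these statements combine into one equation; here A_bolt and A_MS are the regularized areas of the bolts and Misner strings, H_MS is the Misner string Hamiltonian, and H at infinity is the surface term there, which equals the mass:

```latex
S \;=\; \frac{1}{4G}\,\bigl(A_{\mathrm{bolt}} + A_{\mathrm{MS}}\bigr)
\;-\; \beta\, H_{\mathrm{MS}}
\;=\; \beta\, H_{\infty} \;-\; I ,
```

with the negative Misner string contributions responsible for bringing the entropy below a quarter of the bolt area.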
Taub Nut Anti de Sitter

We would like to confirm the effect of Misner strings on entropy, by seeing what effect they have on conformal field theories, on the boundary of anti de Sitter space. For this purpose we constructed versions of Taub nut and Taub bolt, with a negative cosmological constant. The Taub nut anti de Sitter metric is shown on the transparency. The parameter E, is the squashing of the three-sphere at infinity. If E = 1, the three spheres are round, and the metric is Euclidean anti de Sitter space. However, if E is not equal to one, the metric cannot be matched to anti de Sitter space at large distance. Each value of E, therefore, defines a different sector of ALADS metrics. This is an important point, which did not seem to have been realized by Chamblin et al.

Taub Bolt Anti de Sitter

One can also find a family of Taub bolt anti de Sitter metrics, with the same asymptotic behavior. These are characterized by an integer, k, and a positive quantity, s. These determine the asymptotic squashing parameter, E, and the area of the bolt. k is the self-intersection number of the bolt. Thus the spaces do not have spin structure if k is odd. At infinity, the squashed three sphere has k points identified around the U1 fiber. This means that the reference background, is Taub nut anti de Sitter, with k points identified. If k is greater than one, the reference background will have an orbifold singularity at the nut. However, as I said earlier, I shall take it that such singularities are harmless.

Action

To calculate the action, one matches the Taub bolt solution on a squashed three sphere, to a Taub nut solution. To do this, one has to re-scale the squashing parameter, E, as a function of radius. The surface term in the action, is the same asymptotically for Taub nut and Taub bolt. Thus the action comes entirely from the difference in volumes.
Action for k = 1

For k greater than one, the action is always negative, while for k = 1, it is positive for small areas of the bolt, and negative for large areas. This behavior is similar to that for Schwarzschild anti de Sitter space, and might indicate a phase transition in the corresponding conformal field theory. However, as I will argue later, there are problems with the ADS, CFT duality, if k = 1. On the other hand, our results seem to indicate that there will be no phase transition, if more than one point is identified around the fiber. It will be interesting to see if this is indeed the case, for a conformal field theory on an identified squashed three-sphere. In these Taub bolt anti de Sitter metrics, one can calculate the area of the Misner string, and the Hamiltonian surface term. Both will be infinite, but if one matches to Taub nut anti de Sitter on a large squashed three-sphere, the differences will tend to finite limits. As in the asymptotically locally flat case, the entropy is less than a quarter the area of the bolt, because of the effect of the Misner string.

Entropy

One can also calculate the entropy from the partition function, by the usual thermodynamic relations. The mass will be given by taking the derivative of the action with respect to beta. This is equal to the Hamiltonian surface term at infinity. The mass or energy, is the only charge that is constrained in the ensemble. The nut charge is fixed by the boundary conditions, and so doesn't need a Lagrange multiplier. Thus the entropy is beta M, minus I. This agrees with the entropy calculated from the bolts and Misner strings, showing our definition, is consistent. Formally at least, one can regard Euclidean conformal field theory on the squashed three sphere, as a twisted 2+1 theory, at a temperature, beta to the minus one. Thus one would expect the entropy to be proportional to beta to the minus two, at least for small beta.
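The expected temperature dependence follows from dimensional analysis alone: a conformal theory in 2+1 dimensions at temperature T has entropy density proportional to T squared, so schematically

```latex
S \;\sim\; c\, A\, T^{2} \;=\; c\, A\, \beta^{-2} \qquad (\text{small } \beta),
```

where A is the two dimensional spatial volume and c measures the effective number of degrees of freedom; both are illustrative labels, not quantities quoted in the lecture.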
This has been confirmed by calculations by Dowker, of the determinants of scalar and fermion operators on the squashed three sphere, for k = 1. Dowker has not so far calculated the higher k cases, but one would expect that these would have similar leading terms, but with beta replaced by beta over k. The next leading order terms in the determinant, are beta to the minus one, log beta. No terms like this appear in the bulk theory, so if there really is an ADS, CFT duality in this situation, the log beta terms have to cancel between the different spins. In fact, the scalar and fermion log beta terms will cancel each other, if there are twice as many scalars as fermions. This would be implied by supersymmetry, suggesting that supersymmetry is indeed necessary for the ADS, CFT duality. The Misner string contributions to the entropy are of order beta squared. Thus Dowker's calculations will have to be extended to this order, to k greater than one, to fermion fields with anti periodic boundary conditions, and to spin one fields. All this is quite possible, but it will probably require Dowker to do it. One might ask, how can a conformal field theory on the Euclidean squashed three sphere, correspond to a theory in a spacetime of Lorentzian signature? The answer is that, unlike the Schwarzschild anti de Sitter case, one has to continue the period, beta, to imaginary values. This makes the spacetime periodic in real time, rather than imaginary time. One gets a 2+1 rotating spacetime, rather like the Gödel universe, with closed timelike curves. Although field theory in such a spacetime, may seem pathological, it can be obtained by analytic continuation, and is well defined despite the lack of causality. It is interesting that the analytically continued entropy, is negative, suggesting that causality violating spacetimes, are quantum suppressed. However, it is probably a mistake, to attach physical significance, to the Lorentzian conformal field theory.
To sum up, I discussed the ADS, CFT duality in two new contexts: that of rotating black holes, and that of solutions with nut charge. I showed how gravitational entropy can be defined in general. The partition function for a thermodynamic ensemble can be defined by a path integral over periodic metrics. The lowest order contributions to the partition function will come from metrics with a U1 isometry, and given behavior at infinity. The entropy of such metrics will receive contributions from horizons or bolts, and from Misner strings, which are the Dirac strings of the U1 isometry, under Kaluza Klein reduction. One would like to relate this gravitational entropy, to the entropy of a conformal field theory on the boundary. For this reason, we considered a new class of asymptotically locally anti de Sitter spaces. Other people have investigated the Maldacena conjecture, by deforming the compact part of the metric, but this is the first time deformed anti de Sitter boundary conditions, have been considered. We studied Taub bolt anti de Sitter solutions, with Taub nut anti de Sitter, as the reference background. The entropy we obtained obeyed the right thermodynamic relations, and had the right temperature dependence, to be the entropy of a conformal field theory, on the squashed three sphere. Because the Taub bolt solutions for odd k, do not have spin structures, this may indicate that the anti de Sitter, conformal field theory correspondence, does not depend on supersymmetry. I will end by saying that gravitational entropy, is alive and well, 34 years on. But there's more to entropy, than just horizon area. We need to look at the nuts and bolts.