

Contents - Light

Light
I INTRODUCTION

Light, form of electromagnetic radiation similar to radiant heat, radio waves, and X-rays. Light consists of extremely fast oscillations of an electromagnetic field in a particular range of frequencies that can be detected by the human eye. Different colour sensations are produced by light vibrating at different frequencies, ranging from about 4 × 10¹⁴ vibrations per second for red light to about 7.5 × 10¹⁴ vibrations per second for violet light. The visible spectrum of light is usually defined by its wavelength, ranging from the smallest visible wavelength for violet, about 40 millionths of a centimetre (16 millionths of an inch), to 75 millionths of a centimetre (about 30 millionths of an inch) for red. Higher frequencies, corresponding to shorter wavelengths, comprise ultraviolet radiation, and still higher frequencies are associated with X-rays. Lower frequencies, which are at longer wavelengths, are called infrared radiation, and still lower frequencies are characteristic of radio waves. Most light comes from electrons that vibrate at these high frequencies when heated to a high temperature. The higher the temperature, the greater the frequency of vibration and the bluer the light produced.
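As a rough check, the quoted frequencies and wavelengths are linked by the relation speed = frequency × wavelength. The short Python sketch below assumes a rounded value for the speed of light in a vacuum and converts the quoted frequencies into the corresponding wavelengths.

# Illustrative sketch: relate the quoted frequencies to wavelengths via c = f * wavelength.
c = 2.998e8               # speed of light in a vacuum, m/s (rounded)
for name, f in [("red", 4e14), ("violet", 7.5e14)]:
    wavelength_m = c / f                  # wavelength in metres
    wavelength_cm = wavelength_m * 100    # wavelength in centimetres
    print(f"{name}: {wavelength_cm * 1e6:.0f} millionths of a centimetre")
# red comes out near 75 millionths of a centimetre and violet near 40,
# matching the figures quoted above.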

II NATURE OF LIGHT

Light is emitted from a source in straight lines and spreads out over a larger and larger area as it travels; the light per unit area diminishes as the square of the distance. When light strikes an object, it is either absorbed or reflected; light reflected from a rough surface is scattered in all directions. Some frequencies are reflected more strongly than others, and this gives objects their characteristic colour. White surfaces scatter light of all wavelengths equally, and black surfaces absorb nearly all light. Image-forming reflection, on the other hand, requires a highly polished surface such as that of a mirror.

Defining the nature of light has always been a fundamental problem in physics. The English mathematician and physicist Sir Isaac Newton described light as an emission of particles, and the Dutch astronomer, mathematician, and physicist Christiaan Huygens developed the theory that light travels by a wave motion.

It is now believed that these two theories are essentially complementary, and the development of quantum theory has led to the recognition that in some experiments light acts like a series of particles and in other experiments it acts like a wave. In those situations in which it travels in wave motion, the wave vibrates at right angles to the direction of travel; therefore light can be polarized in two mutually perpendicular planes (see Optics).

III VELOCITY

The speed of light was first measured in a laboratory experiment by the French physicist Armand Hippolyte Louis Fizeau, although earlier astronomical observations had yielded approximately the right velocity. By the 1970s the speed of light had been measured very precisely as 299,792,458 m/sec (186,282.397 mi/sec) in a vacuum, and could be used to measure large distances by the time it took for a pulse of light or radio waves to reach a target and return. This is the principle of radar; sonar applies the same time-of-flight principle using pulses of sound. Accurate knowledge of the speed and the wavelength of light also permits accurate measurement of length. In fact, in 1983 the metre was redefined as the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second. The velocity of light in air varies slightly with wavelength, averaging about 0.03 per cent less than in a vacuum; the speed in water is about 25 per cent less, and in glass, 33 per cent less.
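Because the speed of light in a vacuum is now fixed by definition, a measured round-trip time converts directly into a distance. The sketch below is a minimal illustration of this time-of-flight principle; the echo delays used in the examples are hypothetical.

# Illustrative sketch of time-of-flight ranging: distance = speed * round-trip time / 2.
C = 299_792_458.0          # speed of light in a vacuum, m/s (exact by definition since 1983)

def range_from_echo(round_trip_seconds):
    """Return the one-way distance to a target from a radar echo delay."""
    return C * round_trip_seconds / 2.0

print(range_from_echo(1e-6))    # a 1-microsecond echo: target about 150 m away
print(range_from_echo(2.56))    # roughly the Earth-Moon distance (about 384,000 km)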

Light has an important effect on many chemicals. Sunlight, for example, is used by plants to carry out photosynthesis, and the exposure of certain silver-containing chemicals to light causes them to turn dark in the presence of other chemicals, as is the case in photography.

See Also Electric Lighting; Interference; Interferometer; Laser.


Contents - Optics

Optics
I INTRODUCTION

Optics, branch of physical science dealing with the propagation and behaviour of light. In a general sense, light is that part of the spectrum of electromagnetic radiation that extends from X-rays to microwaves and includes the radiant energy that produces the sensation of vision. The study of optics is divided into geometrical optics and physical optics, and these branches are discussed below.

II NATURE OF LIGHT

Radiant energy has a dual nature and obeys laws that may be explained either in terms of a stream of particles, or packets of energy, called photons, or in terms of a train of transverse waves (see Wave Motion). The concept of photons is used to explain the interactions of light and matter that result in a change in the form of energy, as in the case of the photoelectric effect or luminescence. The concept of waves is usually used to explain the propagation of light and some of the phenomena of image formation. In light waves, as in other types of electromagnetic wave, there are rapidly fluctuating electric and magnetic fields at each point in space. Since they have both direction and magnitude, the fields are vector quantities. The electric and magnetic fields are at right angles to each other and to the direction of movement of the wave.

The simplest sort of light wave is a pure sine wave, so called because a graph of the electric or magnetic field intensity drawn along the direction of travel at any moment would trace out a sine curve. The number of complete oscillations, or vibrations, per second of a point on the light wave is known as the frequency. The wavelength is the distance parallel to the axis between two points of the same phase—that is, points occupying equivalent positions on the wave. For example, the wavelength equals the distance from maximum to maximum or from minimum to minimum of the sine wave. In the visible spectrum differences in wavelength manifest themselves as differences in colour. The visible range extends from about 350 nanometres (violet) to 750 nanometres (red), a nanometre being equal to a billionth of a metre, or 4 × 10⁻⁸ in. White light is a mixture of the visible wavelengths. No sharp boundaries exist between wavelength regions, but 10 nanometres may be taken as the low-wavelength limit for ultraviolet radiation. Infrared radiation, which includes radiant heat energy, spans the wavelengths from about 700 nanometres to approximately 1 millimetre.

The speed of an electromagnetic wave is the product of the frequency and the wavelength. In a vacuum this speed is the same for all wavelengths. The speed of light in material substances is less than in a vacuum, and is different for different wavelengths, an effect called dispersion. The ratio of the speed of light in a vacuum to the speed of a particular wavelength of light in a substance is known as the index of refraction of that substance for the given wavelength. The index of refraction of air for all wavelengths is 1.00029, but for most applications it is sufficiently accurate to take it to be 1.
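These relationships can be put into numbers. The sketch below assumes a rounded value for the vacuum speed of light and illustrative refractive indices for air and for a typical glass.

# Illustrative sketch of the wave relations described above.
C = 2.998e8                     # speed of light in a vacuum, m/s (rounded)

def frequency(wavelength_nm):
    """Frequency (Hz) of light of the given vacuum wavelength in nanometres."""
    return C / (wavelength_nm * 1e-9)

def speed_in_medium(n):
    """Speed of light in a medium of refractive index n."""
    return C / n

print(frequency(500))                  # green light: about 6.0e14 Hz
print(speed_in_medium(1.00029))        # air: barely slower than in a vacuum
print(speed_in_medium(1.5))            # a typical glass: about two-thirds of c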

The laws of reflection and refraction of light are usually derived using the wave theory of light introduced by the 17th-century Dutch mathematician, astronomer, and physical scientist Christiaan Huygens. Huygens’ principle states that every point on an initial wave front may be considered as the source of small, secondary spherical wavelets that spread out in all directions from their centres with the same speed, frequency, and wavelength as the parent wave front. A new wave front can be defined, encompassing the wavelets. Since the light progresses at right angles to this wave front, changes in the direction of the light can be worked out using Huygens’ principle.

When the wavelets encounter another medium or object, each point on the boundary becomes a source of two new sets of waves. The reflected set travels back into the first medium, and the refracted set enters the second medium. The behaviour of the reflected and refracted rays can be explained by Huygens’ principle. It is simpler and sometimes sufficient to represent the propagation of light by rays rather than by waves. The ray is the flow line, or direction of travel, of radiant energy. In geometrical optics the wave theory of light is ignored and the assumption is made that light does not bend round corners. This approximation is valid when lenses, apertures, and so on are large in comparison with the wavelength of the light. Rays are traced through an optical system by applying the laws of reflection and refraction.

III GEOMETRICAL OPTICS

This area of optical science concerns the application of laws of reflection and refraction of light in the design of lenses (see Lenses below) and other optical components of instruments.

A Reflection and Refraction

If a light ray that is travelling through a homogeneous medium is incident on the surface of a second homogeneous medium, part of the light is reflected and part may enter the second medium as the refracted ray, and may or may not undergo absorption there. The amount of light reflected depends on the ratio of the refractive indexes for the two media. The plane of incidence is defined as the plane containing the incident ray and the normal (that is, the line perpendicular to the surface) at the point of incidence. The angle of incidence is the angle between the incident ray and this normal. The angles of reflection and refraction are defined correspondingly. The laws of reflection state that the angle of incidence is equal to the angle of reflection and that the incident ray, the reflected ray, and the normal at the point of incidence all lie in the same plane. If the surface of the second medium is smooth it may act as a mirror and produce a reflected image. A light ray from an object striking a flat, or plane, mirror will be reflected away from the surface. To an observer in front of the mirror, the reflected ray appears to have come from a point behind the mirror that is a continuation of that reflected ray. The image of the object appears to lie as far behind the mirror as the object lies in front of it.

If the surface of the second medium is rough, then normals to various points of the surface lie in random directions. In that case, rays that may lie in the same plane when they emerge from a point source nevertheless lie in random planes of incidence, and therefore of reflection, so are scattered and cannot form an image.

A1 Snell’s Law

This important law, named after the Dutch mathematician Willebrord van Roijen Snell, states that the product of the refractive index and the sine of the angle of incidence of a ray in one medium is equal to the product of the refractive index and the sine of the angle of refraction in a successive medium. Algebraically, this can be written n1 sin θ1 = n2 sin θ2, where n1, n2 are the two values of refractive index, and θ1, θ2 are the angles of incidence and refraction. The incident ray, the refracted ray, and the normal to the boundary at the point of incidence all lie in the same plane. Generally, the refractive index of a denser transparent substance is higher than that of a less dense material; that is, the speed of light is lower in the denser substance. So if a ray is incident obliquely, then a ray entering a medium with a higher refractive index is bent towards the normal, and a ray entering a medium of lower refractive index is deviated away from the normal. Rays incident along the normal are reflected and refracted along the normal.
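The sketch below is a minimal illustration of this relation, using assumed indices for air and water.

# Illustrative sketch of Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
import math

def refraction_angle(n1, n2, incidence_deg):
    """Angle of refraction in degrees, or None if the ray is totally reflected."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None                     # total internal reflection (see Critical Angle below)
    return math.degrees(math.asin(s))

# A ray passing from air (n = 1.00) into water (n = 1.33) at 30 degrees
# is bent towards the normal, to about 22 degrees.
print(refraction_angle(1.00, 1.33, 30.0))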

To an observer in a less dense medium such as air, an object in a denser medium appears to lie closer to the boundary than is the actual case. A common example is that of an object lying underwater and observed from above the water. An oblique ray from the object is bent away from the normal towards the position of the observer. The object, therefore, appears to lie slightly away from its true position and at a point where a straight line from the observer intersects a line normal to the surface of the water and passing through the object. In the case of light passing through more than two media with parallel boundaries, another effect occurs. If the refractive index of the first and last medium is the same, but different from that of the intermediate medium, the ray emerges parallel to the incident ray, but is displaced laterally.

A2 Prism

If light passes through a prism, a transparent object with flat surfaces and a uniform cross-section, the exit ray is no longer parallel to the incident ray. Because the refractive index of a substance varies for the different wavelengths, a prism can spread out the various wavelengths of light contained in an incident beam and form a spectrum. The angle between the path of the incident ray and the path of the emergent ray is called the angle of deviation. It can be shown that when the angle of incidence is equal to the angle made by the emergent ray, the deviation is at a minimum. The refractive index of the prism can be calculated by measuring the angle of minimum deviation and the angle between the faces of the prism.
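The relation referred to is the standard minimum-deviation formula, n = sin((A + D)/2) / sin(A/2), where A is the angle between the prism faces and D the angle of minimum deviation; the prism values in the sketch below are illustrative.

# Illustrative sketch: refractive index of a prism from the angle of minimum deviation.
import math

def index_from_min_deviation(prism_angle_deg, min_deviation_deg):
    """n = sin((A + D)/2) / sin(A/2), the standard minimum-deviation relation."""
    A = math.radians(prism_angle_deg)
    D = math.radians(min_deviation_deg)
    return math.sin((A + D) / 2.0) / math.sin(A / 2.0)

# A 60-degree prism showing a minimum deviation of about 37 degrees
# corresponds to a refractive index of roughly 1.5 (typical crown glass).
print(index_from_min_deviation(60.0, 37.2))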

A3 Critical Angle

Given that a ray is bent away from the normal when it enters a less dense medium, and that the deviation from the normal increases as the angle of incidence increases, an angle of incidence exists, known as the critical angle, such that the refracted ray makes an angle of 90° with the normal and travels along the boundary between the two media. If the angle of incidence is increased beyond the critical angle, the light rays will be totally reflected. Total reflection cannot occur if light is travelling from a less dense medium to a denser one. In recent years, a new, practical application of total reflection has been found in fibre optics. If light enters a solid glass or plastic tube at one end, it can be totally reflected at the boundary of the tube and, after a number of successive total reflections, emerge from the other end. Glass fibres can be drawn to a very small diameter, coated with a material of lower refractive index, and then assembled into flexible bundles or fused into plates of fibres that are used to transmit images. The flexible bundles, which can be used to provide illumination as well as to transmit images, are valuable in medical examination, as they can be passed along narrow passages or even blood vessels.
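A minimal sketch of the critical-angle relation, sin(critical angle) = n2/n1, follows; the indices assumed for glass, air, and a fibre cladding are illustrative.

# Illustrative sketch: the critical angle for total internal reflection,
# valid only when light travels from a denser towards a less dense medium.
import math

def critical_angle(n_dense, n_rare):
    """Critical angle in degrees for light going from index n_dense to n_rare."""
    if n_rare >= n_dense:
        raise ValueError("total reflection needs the light to start in the denser medium")
    return math.degrees(math.asin(n_rare / n_dense))

print(critical_angle(1.5, 1.0))     # glass to air: about 42 degrees
print(critical_angle(1.5, 1.48))    # fibre core to a lower-index cladding: about 80 degrees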

B Spherical and Aspherical Surfaces

Most of the traditional terminology of geometrical optics was developed with reference to spherical reflecting and refracting surfaces. Aspherical surfaces, however, are sometimes involved. The optic axis is a reference line that is an axis of symmetry. The optic axis passes through the centre of a spherical lens or mirror and through its centre of curvature. If a narrow beam of rays travelling along the optic axis is incident on the spherical surface of a mirror or a thin lens, the rays are reflected or refracted so that they intersect or appear to intersect at a point on the optic axis. The distance between this point and the mirror or lens is the focal length. If a lens is thick, calculations are made with reference to planes called principal planes, rather than to the surface of the lens. A lens may have two focal lengths, if the surfaces are not alike, depending on which surface the light strikes first. If an object is at the focal point, the rays emerging from it are parallel to the optic axis after reflection or refraction. If rays are converged by a lens or mirror so that they intersect in front of it, the image is real and inverted (upside down). If the rays diverge after reflection or refraction so that they only appear to come from a point through which they have not actually passed, the image is erect and is described as virtual. The ratio of the height of the image to the height of the object is the lateral magnification.

If it is understood that distances measured from the surface of a lens or mirror to objects or to real images are positive and distances measured to virtual images are negative, then, if u is the object distance, v is the image distance, and f is the focal length of a mirror or of a thin lens, the equation

1/v + 1/u = 1/f
applies to spherical mirrors and spherical lenses. If a simple lens has surfaces with radii r1 and r2, and the ratio of its refractive index to that of the medium surrounding it is n, then

1/f = (n - 1) (1/r1 + 1/r2)
The radii r1, r2 are taken to be positive or negative, depending on whether the surfaces are convex or concave, respectively.
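The two formulas above can be applied directly. The sketch below follows the sign convention stated in the text (distances to real objects and images taken as positive) and uses illustrative values for a symmetrical glass lens.

# Illustrative sketch of the two formulas above, using the text's sign convention.
def image_distance(u, f):
    """Image distance v from 1/v + 1/u = 1/f (real distances positive)."""
    return 1.0 / (1.0 / f - 1.0 / u)

def focal_length(n, r1, r2):
    """Thin-lens focal length from 1/f = (n - 1)(1/r1 + 1/r2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 + 1.0 / r2))

# A thin glass lens (n = 1.5) with two convex surfaces of radius 10 cm
# has a focal length of 10 cm; an object 30 cm away is imaged 15 cm behind the lens.
f = focal_length(1.5, 10.0, 10.0)
print(f, image_distance(30.0, f))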

The focal length of a spherical mirror is equal to half the radius of curvature. A narrow beam of rays travelling along the optic axis and incident on a concave mirror is reflected so that the rays intersect the optic axis at the focal point, or principal focus, halfway between the pole, or centre, of the mirror's surface and the mirror's centre of curvature. If the object distance is greater than the distance between the pole and the centre of curvature, the image is real, inverted, and diminished. If the object lies between the centre of curvature and the focal point, the image is real, inverted, and enlarged. If the object is located between the surface of the mirror and the focus, the image is virtual, upright, and enlarged. A convex mirror forms only virtual, erect, and diminished images, unless the mirror is used in conjunction with other optical components.

C Lenses

Lenses with surfaces of small radii have short focal lengths. A lens with two convex surfaces will always refract rays that are originally parallel to the optic axis so that they converge to a focus on the side of the lens opposite to the object. A concave lens surface will deviate incident rays that are originally parallel to the axis away from the axis. Unless the second surface of the lens is convex and more strongly curved than the first surface, the rays diverge and appear to come from a point on the same side of the lens as the object. Such lenses form only virtual, erect, and diminished images.

If the object distance is greater than the focal length, a converging lens forms a real and inverted image. If the object is sufficiently far away, the image is smaller than the object. If the object distance is smaller than the focal length of this lens, the image is virtual, upright, and larger than the object. The observer is then using the lens as a magnifier or simple microscope. The angle subtended at the eye by this virtual enlarged image (that is, its apparent angular size) is greater than would be the angle subtended by the object if it were at the normal viewing distance. The ratio of these two angles is the magnifying power of the lens. A lens with a shorter focal length would form a virtual image subtending a greater angle and would therefore have a greater magnifying power. The magnifying power of an instrument is a measure of its ability to make the object seem closer to the eye. This is distinct from the lateral magnification of a camera (see Photographic Techniques) or telescope, for example, where the ratio of the actual dimensions of a real image to those of the object increases as the focal length increases.

The amount of light a lens can admit increases with the area of its aperture, and therefore with the square of its diameter. Because the area occupied by an image is proportional to the square of the focal length of the lens, the light intensity over the image area is directly proportional to the square of the lens diameter and inversely proportional to the square of the focal length. The image produced by a lens of diameter 3 cm and focal length 20 cm would be one-quarter as bright as the image formed by a lens of the same diameter and focal length 10 cm. The ratio of the focal length to the effective diameter of a lens is its focal ratio, the so-called f-number. The reciprocal of this ratio is called the relative aperture. Lenses having the same relative aperture have the same light-gathering power, regardless of the actual diameters and focal lengths.
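The worked example in the preceding paragraph can be checked numerically; the sketch below uses the same illustrative lens dimensions.

# Illustrative sketch: image brightness varies as (diameter / focal length) squared,
# i.e. inversely as the square of the f-number.
def f_number(focal_length_cm, diameter_cm):
    """Focal ratio: focal length divided by effective diameter."""
    return focal_length_cm / diameter_cm

def relative_brightness(diameter_cm, focal_length_cm):
    """Relative image brightness, proportional to (diameter / focal length) squared."""
    return (diameter_cm / focal_length_cm) ** 2

print(f_number(20.0, 3.0))     # about f/6.7
print(f_number(10.0, 3.0))     # about f/3.3
print(relative_brightness(3.0, 20.0) / relative_brightness(3.0, 10.0))   # 0.25: one-quarter as bright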

D Aberration

Geometrical optics predicts that rays of light emanating from a point are imaged by spherical optical elements as a small blur. The outer parts of a spherical surface have a focal length different from that of the central area, and this defect causes a point to be imaged as a small circle. The difference in focal length for the various parts of the spherical section is called spherical aberration. If, instead of being a portion of a sphere, a concave mirror is a section of a paraboloid (see Parabola) of revolution, parallel rays incident on all areas of the surface are reflected to a point without spherical aberration. Combinations of convex and concave lenses can help to correct spherical aberration, but this defect cannot be eliminated from a single spherical lens for a real object and image.

The result of differences in lateral magnification for rays coming from an object point not on the optic axis is an effect called coma. If coma is present, light from a point is spread out into a family of circles that fit into a cone, and in a plane perpendicular to the optic axis the image pattern is comet-shaped. Coma may be eliminated for a single object-image point pair, but not for all such points, by a suitable choice of surfaces. Corresponding, or conjugate, object and image points, free from both spherical aberration and coma, are known as aplanatic points, and a lens having such a pair of points is called an aplanatic lens.

Astigmatism is the defect in which the light coming from an off-axis object point is spread along the direction of the optic axis. If the object is a vertical line, the cross section of the refracted beam at successively greater distances from the lens is an ellipse that collapses first into a horizontal line, spreads out again, and later becomes a vertical line. If, for a flat object, the surface of best focus is curved, the situation is described as curvature of field. Distortion arises from a variation of magnification with axial distance and is not caused by a lack of sharpness in the image.

Because the index of refraction varies with wavelength, the focal length of a lens also varies and causes longitudinal or axial chromatic aberration. Each wavelength forms an image of a slightly different size, giving rise to what is known as lateral chromatic aberration. Combinations of converging and diverging lenses, and of components made of glasses with different dispersions, help to minimize chromatic aberration. Mirrors are free of this defect. In general, achromatic lens combinations are corrected for chromatic aberration for two or three colours.

IV PHYSICAL OPTICS

This branch of optical science is concerned with such aspects of the behaviour of light as its emission, composition, and absorption, and with polarization, interference, and diffraction.

A Polarization of Light

The atoms in an ordinary light source emit pulses of radiation of extremely short duration. Each pulse from a single atom is a nearly monochromatic (single-wavelength) wave train. The electric vector corresponding to the wave does not rotate about the wave’s direction of travel, but keeps the same angle, or azimuth, with respect to it. The initial azimuth can have any value. When a large number of atoms are emitting light, these azimuths are randomly distributed, the properties of the light beam are the same in all directions, and the light is said to be unpolarized. If the electric vectors for each wave all have the same azimuth angle (that is, all the transverse waves lie in the same plane), the light is plane, or linearly, polarized.

The equations that describe the behaviour of electromagnetic waves involve two sets of waves, one in which the electric vector vibrates perpendicular to the plane of incidence and the other in which it vibrates parallel to that plane. All light can be considered as having a component of its electric vector vibrating in each of these planes. There may be a constant or continually varying phase difference between these two components of the vibration. If light is linearly polarized, for example, this phase difference becomes zero or 180°. If the phase relationship is random, but more of one component is present, the light is partially polarized. When light is scattered by dust particles, for instance, the light scattered at 90° to the original path of the beam is plane-polarized, explaining why skylight from the zenith (directly overhead) is markedly polarized.

At angles other than zero or 90° of incidence, the amount of reflection at the boundary between two media is not the same for the two components of the light. Less of the component that vibrates parallel to the plane of incidence is reflected. If light is incident on a non-absorbing medium at the so-called Brewster angle, named after the 19th-century British physicist David Brewster, the component vibrating parallel to the plane of incidence is not reflected. At this angle of incidence, the reflected ray is perpendicular to the refracted ray, and the tangent of this angle of incidence is equal to the ratio of the refractive index of the second medium to that of the first.
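A minimal sketch of the Brewster-angle relation, tan(Brewster angle) = n2/n1, follows; the indices assumed for glass and water are illustrative.

# Illustrative sketch: the Brewster angle, at which the reflected light is fully polarized.
import math

def brewster_angle(n1, n2):
    """Brewster angle in degrees for light going from index n1 to index n2."""
    return math.degrees(math.atan(n2 / n1))

print(brewster_angle(1.0, 1.5))     # air to glass: about 57 degrees
print(brewster_angle(1.0, 1.33))    # air to water: about 53 degrees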

Certain substances are anisotropic, or display properties with different values when measured along axes in different directions. The speed of light in these materials depends on the direction in which the light travels through them. Some crystals are birefringent, or exhibit double refraction. Unless light is travelling parallel to one of the crystal’s axes of symmetry (an optic axis of the crystal), it is separated into two parts that travel with different speeds. A uniaxial crystal has one such axis. The component with the electric vector vibrating perpendicular to the plane containing the optic axis is the ordinary ray; its speed is the same in all directions through the crystal, and Snell’s law of refraction holds. The component vibrating in the plane containing the optic axis forms the extraordinary ray, and the speed of this ray depends on its direction through the crystal. If the ordinary ray travels faster than the extraordinary ray, the birefringence is positive; otherwise the birefringence is negative.

If a crystal is biaxial, no component exists for which the speed is independent of the direction of travel. Birefringent materials can be cut and shaped to introduce specific phase differences between two sets of polarized waves, to separate them, or to analyse the state of polarization of any incident light. A polarizer transmits only one component of vibration, either by reflecting the other away by means of properly cut prism combinations or by absorbing it. A material that preferentially absorbs one component of vibration is said to exhibit dichroism, and Polaroid is an example of this. Polaroid consists of many small dichroic crystals embedded in plastic and identically oriented. If the incident light is unpolarized, Polaroid absorbs approximately half of it. Glare from a large flat surface such as water or a wet road consists of partially polarized light, and properly oriented Polaroid can absorb more than half of it. This explains the effectiveness of Polaroid sunglasses.

The so-called analyser may be physically the same as a polarizer. If a polarizer and analyser are crossed, the analyser is oriented to allow transmission of vibrations lying in a plane perpendicular to those transmitted by the polarizer, and therefore blocks the light passed by the polarizer.

Substances that are optically active rotate the plane of linearly polarized light. A sugar crystal or a solution of sugar, for example, may be optically active. If a solution of sugar is placed between a crossed polarizer and analyser, some of the light is able to pass through. The amount of rotation of the analyser required to restore extinction of the light determines the concentration of the solution. The polarimeter is based on this principle.

Some substances, such as glass and plastic, that are not normally doubly refracting may become so if subjected to stress. If such stressed materials are placed between a polarizer and analyser, the bright and dark coloured areas that are seen give information about the strains. The technology of photoelasticity is based on double refraction produced by stresses.

Birefringence can also be introduced in otherwise homogeneous materials by magnetic and electric fields. A strong electric field across a liquid may cause it to become doubly refracting, a phenomenon known as the Kerr effect, after the 19th-century Scottish physicist John Kerr. If an appropriate material is placed between a crossed polarizer and analyser, light may be transmitted, depending on whether the electric field is on or off. This can act as a very rapid light switch or modulator.

B Interference and Diffraction

When two light beams cross, they may interfere in such a way that the resultant intensity pattern is affected (see Interference). The coherence of two beams is the extent to which their waves are in phase. If the phase relationship changes rapidly and randomly, the beams are incoherent. If two wave trains are coherent and the maximum of one wave coincides with the maximum of another, the two waves combine to produce a greater intensity in that place than if the two beams were present but not coherent. If they are coherent and the maximum of one wave coincides with the minimum of the other, the two waves will cancel each other in part or completely, thus decreasing the intensity. An interference pattern consisting of dark and bright fringes may be formed. To produce a steady interference pattern the two wave trains must be polarized in the same plane.

Atoms in an ordinary light source radiate independently, so a large light source usually emits incoherent radiation. To obtain coherent light from such a source, a small portion of the light is selected by means of a pinhole or slit. If this portion is then again split by double slits, double mirrors, or double prisms, and the two parts are made to travel along paths that differ in length (though not by too much) before they are combined again, an interference pattern results. Devices that do this are called interferometers; they are used in measuring small angles, such as the apparent diameters of stars, or small distances, such as the deviations of an optical surface from the required shape, in terms of numbers of wavelengths of light.

Such an interference pattern was first demonstrated by the British physicist Thomas Young. In his experiment, light that had passed through one pinhole illuminated an opaque surface containing two further pinholes. The light that passed through these two pinholes formed a pattern of alternately bright and dark fringes on a screen. At points on the screen where the waves from the two pinholes arrive in phase, they combine to increase the intensity; at points where they arrive 180° out of phase, they cancel each other.
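For two slits or pinholes a distance d apart and a screen at a distance L, the bright fringes are separated by approximately λL/d; this small-angle result is standard but not spelled out above, and the values in the sketch below are illustrative.

# Illustrative sketch: fringe spacing in Young's double-slit experiment,
# spacing ~ wavelength * screen_distance / slit_separation (small-angle approximation).
def fringe_spacing(wavelength_m, screen_distance_m, slit_separation_m):
    """Approximate distance between adjacent bright fringes, in metres."""
    return wavelength_m * screen_distance_m / slit_separation_m

# Green light (550 nm), slits 0.5 mm apart, screen 1 m away:
# fringes about 1.1 mm apart, easily visible to the eye.
print(fringe_spacing(550e-9, 1.0, 0.5e-3))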

Light waves reflected from the two surfaces of an extremely thin transparent film on a smooth surface can interfere with each other. The rainbow colours of a film of oil on water are a result of interference, and they demonstrate the importance of the ratio of film thickness to wavelength. A single film or several films of different material can be used to increase or decrease the reflectance of a surface. Dichroic beam splitters are stacks of films of more than one material, controlled in thickness so that one band of wavelengths is reflected and another is transmitted. An interference filter made of such films transmits an extremely narrow band of wavelengths and reflects the remainder. The shape of the surface of an optical element can be checked by touching it to a master lens, or flat, and observing the fringe pattern formed because of the thin layer of air remaining between the two surfaces.
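A common design of this kind is the single quarter-wave antireflection film, whose thickness equals the wavelength divided by four times the film's refractive index; this standard rule, and the magnesium fluoride example below, are given here only as an illustration.

# Illustrative sketch: a single antireflection film works best when its optical
# thickness is a quarter of the wavelength, so thickness = wavelength / (4 * n_film).
def quarter_wave_thickness(wavelength_nm, n_film):
    """Physical thickness (nm) of a quarter-wave film for the given wavelength."""
    return wavelength_nm / (4.0 * n_film)

# Magnesium fluoride (n about 1.38) coated for green light (550 nm)
# should be roughly 100 nm thick.
print(quarter_wave_thickness(550.0, 1.38))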

Light incident on the edge of an obstacle is bent or diffracted, and the obstacle does not form a sharp geometric shadow. The points on the edge of the obstacle act as a source of coherent waves, and a pattern of interference fringes, called a diffraction pattern, is formed. The shape of the edge of the obstacle is not exactly reproduced because part of the wave front is cut off.

Because light passes through a finite aperture when it goes through a lens, a diffraction pattern is formed around the image of an object. If the object is extremely small, the diffraction pattern appears as a series of concentric bright and dark rings around a central disc called the Airy disc, after the 19th-century English astronomer George Biddell Airy. This is true even for an aberration-free lens. If two particles are so close together that the two diffraction patterns overlap and the bright rings of one fall on the dark rings of the second, the two particles cannot be resolved (distinguished). The 19th-century German physicist Ernst Karl Abbe first explained image formation by a microscope with a theory based on the interference of diffraction patterns of various points on the object.

Fourier analysis is a mathematical treatment, named after the French mathematician Jean Fourier, that represents an optical object as a sum of simple sine waves, called components. Optical systems are sometimes evaluated by choosing an object of known Fourier components and evaluating the Fourier components present in the image. Such procedures measure what is called the optical transfer function. Extrapolations of these techniques sometimes allow extraction of information from poor images. Statistical theories have also been included in analyses of the recording of images.

A diffraction grating consists of several thousand slits that are equal in width and equally spaced (formed by ruling lines on glass or metal with a fine diamond point). Each slit gives rise to a diffraction pattern, and the many diffraction patterns interfere. Bright fringes are formed in different places for different wavelengths. If white light is incident, a continuous spectrum is formed. Prisms and gratings are used in instruments such as monochromators, spectrographs, or spectrophotometers to provide nearly monochromatic light or to analyse the wavelengths present in the incident light (see Spectroscopy; Spectroheliograph).
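The angles of the bright fringes follow the standard grating relation d sin θ = mλ, where d is the slit spacing and m the order of the fringe; the grating assumed in the sketch below is illustrative.

# Illustrative sketch: bright fringes of a diffraction grating, d * sin(theta) = m * wavelength.
import math

def diffraction_angle(lines_per_mm, wavelength_nm, order):
    """Angle (degrees) of the given order for the given wavelength, or None if absent."""
    d_nm = 1e6 / lines_per_mm                 # slit spacing in nanometres
    s = order * wavelength_nm / d_nm
    if s > 1.0:
        return None                            # this order does not appear
    return math.degrees(math.asin(s))

# A 600 line/mm grating sends first-order red (700 nm) and violet (400 nm) light
# to noticeably different angles, spreading white light into a spectrum.
print(diffraction_angle(600, 700, 1), diffraction_angle(600, 400, 1))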

C Stimulated Emission

The atoms in common light sources, such as the incandescent lamp, fluorescent lamp, and neon lamp, produce light by spontaneous emission, and the radiation is incoherent. If a sufficient number of atoms absorb energy so that they are excited into appropriate states of higher energy, stimulated emission can occur. Light of a certain wavelength can produce additional light that has the same phase and direction of travel as the original light, and it will be coherent. Stimulated emission amplifies the amount of radiation having a given wavelength, and this radiation has a very narrow beam spread. The material that is excited may be a gas, a solid, or a liquid, but it must be contained or shaped to form an interferometer in which the wavelength being amplified is reflected back and forth many times. A small fraction of the excited radiation is transmitted by one of the mirrors of the interferometer. This device is called a laser, an acronym for “light amplification by stimulated emission of radiation”. Energizing a large number of atoms to be in the appropriate upper state is called pumping. Pumping may be optical or electrical. Because lasers can be made to emit pulses of extremely high energy that have a very narrow beam spread, laser light sent to the Moon and reflected back to the Earth can be detected. The intense narrow beam of the laser has found practical application in surgery and in the cutting of metals.

Holography, the technique of producing three-dimensional images, was made possible by a technique pioneered by the Hungarian-born British physicist and electrical engineer Dennis Gabor. He noted that if the diffraction pattern of an object could be recorded and the phase information also retained, the image of the object could be reconstructed by coherent illumination of the recorded diffraction pattern. Illuminating it with a wavelength longer than that used to produce the diffraction pattern would result in magnification. Because the absolute phase of a light wave cannot be directly detected physically, it was necessary to provide a reference beam coherent with the beam illuminating the object to interfere with the diffraction pattern and provide phase information. Before the development of the laser, the Gabor scheme was limited by the lack of sufficiently intense coherent light sources.

A hologram is a photographic record of the interference between a reference beam and the diffraction pattern of the object. Light from a single laser is separated into two beams. The reference beam illuminates the photographic plate, perhaps via a lens and mirror, and the second beam illuminates the object. The reference beam and the light reflected from the object jointly form a diffraction pattern on the photographic plate. If the processed hologram is illuminated by coherent light, not necessarily of the same wavelength that was used to make the hologram, a three-dimensional image of the object can be obtained. Holograms of a theoretical object can be produced by computer, and the images of these objects can then be reconstructed.

Intense, coherent laser beams permit the study of new optical effects that are produced by the interaction of certain substances with electric fields and that depend on the square or the third power of the field strength. This is called non-linear optics, and the interactions being studied affect the refractive index of the substances. The Kerr effect, mentioned earlier, belongs to this group of phenomena.

Harmonic generation of light has been observed. Infrared laser light of wavelength 1.06 micrometres, for example, can be changed to green light with a wavelength of 0.53 micrometres in a crystal of barium sodium niobate. Broadly tunable sources of coherent light in the visible and near infrared ranges can be produced by pumping suitable media with light of shorter wavelengths. A lithium niobate crystal can be made to fluoresce in red, yellow, and green by pumping it with greenish-blue laser light having a wavelength of 488 nanometres (19.2 millionths of an inch). Certain scattering phenomena can be stimulated by a single laser to produce intense, pulsed light at a wide range of monochromatic wavelengths. One of the phenomena observed in high-power optical experiments is a self-focusing effect that produces extremely short-lived filaments as small as 5 micrometres (200 millionths of an inch) in diameter. Non-linear optical effects are applied in developing efficient broadband modulators for communication systems (see Radio: Modulation).


Contents - Descartes, René

Descartes, René
I INTRODUCTION

Descartes, René (1596-1650), French philosopher, scientist, and mathematician, often called the founder of modern philosophy. Born in La Haye, Touraine (a region and former province of France), Descartes was the son of a minor nobleman and belonged to a family that had produced a number of learned men. At the age of eight he was enrolled in the Jesuit school of La Flèche in Anjou, where he spent the rest of his schooldays. Besides the usual classical studies, Descartes received instruction in mathematics and scholasticism, which attempted to use human reason to understand Christian doctrine. Roman Catholicism exerted a strong influence on Descartes throughout his life. Upon finishing school, he studied law at the University of Poitiers, graduating in 1616. He never practised law, however; in 1618 he entered the service of Prince Maurice of Nassau, leader of the United Provinces of the Netherlands, with the intention of following a military career. In succeeding years Descartes served in other armies, but his attention had already been attracted to the problems of mathematics and philosophy, to which he was to devote the rest of his life.

Descartes made a pilgrimage to Italy between 1623 and 1624, then spent the years from 1624 to 1628 in France, where he devoted himself to the study of philosophy and also experimented in the science of optics. In 1628, having sold his properties in France, he moved to the Netherlands, where he spent most of the rest of his life, living in a number of different cities, including Amsterdam, Deventer, Utrecht, and Leiden.

It was probably during the first years of his residence in the Netherlands that Descartes wrote his first major work, Essais Philosophiques (Philosophical Essays), published in 1637. The work contained four parts: an essay on geometry, another on optics, a third on meteors, and, lastly, Discours de la Méthode (Discourse on Method), which described his philosophical speculations. This was followed by other philosophical works, among them Meditationes de Prima Philosophia (Meditations on First Philosophy, 1641; revised 1642) and Principia Philosophiae (The Principles of Philosophy, 1644). The latter volume was dedicated to Princess Elizabeth of Bohemia, who lived in the Netherlands and with whom Descartes had formed a deep friendship. In 1649 Descartes was invited to the court of Queen Christina of Sweden in Stockholm to give the Queen instruction in philosophy. However, the rigours of the northern winter brought on the pneumonia that caused his death in 1650.

II PHILOSOPHY

Descartes attempted to apply the rational deductive methods of science, and particularly of mathematics, to philosophy. Before his time, philosophy had been dominated by the method of scholasticism, which was entirely based on comparing and contrasting the views of recognized authorities. Rejecting this method, Descartes stated: “In our search for the direct road to truth, we should busy ourselves with no object about which we cannot attain a certitude equal to that of the demonstration of arithmetic and geometry.” He therefore determined to hold nothing true until he could be absolutely certain of it. His method for discovering a truth of which he could be absolutely certain was to use scepticism: he attempted to doubt everything that he believed to be true and investigate if it was indeed possible to doubt it. Using this “method of doubt” he found that he could doubt whether he was in fact awake, since it was always possible that he was dreaming. He could also doubt whether the physical world and his own body existed, since it was always possible that a powerful and malicious demon was creating the illusion of these things in his mind. However, try as he might, he could not doubt that he himself existed, since the very act of doubting required a doubter, namely himself. In order to doubt, he had to exist. Descartes expressed this conclusion in the famous words “Cogito, ergo sum” (“I think, therefore I am”). He used it as the foundation stone on which to build a complete system of indubitable knowledge. From the principle that thinking proved his own existence, he argued that his essential characteristic was thinking.

Descartes then went on to argue for the existence of God, and to claim that God must have created two kinds of substance that make up the whole of reality. One kind was thinking substance, or minds, entities such as himself whose essential characteristic was thinking, and the other was extended substance, or bodies, for example, rocks or trees or his own body, whose essential characteristic was being extended over a certain amount of physical space. While thinking substances acted in accordance with the laws of thinking, extended substances acted in accordance with the mechanical laws of physics. This division of reality into two kinds of substance, one physical and one mental, has become known as Cartesian dualism. In one form or another it has been extraordinarily influential on Western philosophy ever since Descartes's time.

III SCIENCE

Descartes's philosophy carried him into elaborate and erroneous explanations of a number of physical phenomena. These explanations, however, were valuable, in that he substituted a system of mechanical interpretation of physical phenomena for the vague spiritual concepts of most earlier writers.

Although Descartes had at first been inclined to accept the new Copernican theory of the universe, with its concept of a system of spinning planets revolving around the Sun, he abandoned this theory when it was pronounced heretical by the Roman Catholic Church. In its place he devised a theory of vortices in which space was entirely filled with matter, in various states, whirling about the Sun.

In the field of physiology, Descartes held that part of the blood was a subtle fluid, which he called “animal spirits”. The animal spirits, he believed, came into contact with the thinking substance at a point in the brain and flowed out along the channels of the nerves to animate the muscles and other parts of the body.

Descartes's study of optics led him to the independent discovery of the fundamental law of refraction: that the sine of the angle of incidence bears a constant ratio to the sine of the angle of refraction. His essay on optics contained the first published statement of this law. Descartes's treatment of light as a type of pressure in a solid medium paved the way for the undulatory, or wave, theory of light.

IV MATHEMATICS

The most notable contribution that Descartes made to mathematics was his systematization of analytic geometry. This is a method for translating any point, line, or curve on a plane into numerical form. If the plane is marked off into a grid based on a horizontal and a vertical axis, then every point on the plane can be identified by two numbers that give its distances from the two axes. These numbers are known as the point's Cartesian coordinates. It is then possible, for a given line or curve, to find an equation relating the two Cartesian coordinates that holds true for all points on the curve. This equation provides an exact translation of the curve into numerical form. Descartes was the first mathematician to attempt to classify curves according to the types of equations that produce them, as well as contributing to the theory of equations. He was the originator of the use of the last letters of the alphabet to designate unknown quantities and first letters to designate known ones. He also invented the method of indices (as in x²) to express the powers of numbers. In addition, he formulated the rule, which is known as Descartes's rule of signs, for finding the number of positive and negative roots for any algebraic equation.
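The rule of signs can be stated briefly: the number of positive real roots of a polynomial is at most the number of sign changes between successive non-zero coefficients, and differs from it by an even number. The sketch below is a minimal illustration, using a hypothetical cubic as the example.

# Illustrative sketch of Descartes's rule of signs: count the sign changes
# in the coefficients, listed from the highest power down to the constant term.
def sign_changes(coefficients):
    """Number of sign changes between successive non-zero coefficients."""
    signs = [c for c in coefficients if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3): three sign changes, three positive roots.
print(sign_changes([1, -6, 11, -6]))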


Contents - Newton, Sir Isaac

Newton, Sir Isaac
I INTRODUCTION

Newton, Sir Isaac (1642-1727), mathematician and physicist, one of the foremost scientific intellects of all time. Born at Woolsthorpe, near Grantham in Lincolnshire, where he attended school, he entered Cambridge University in 1661; he was elected a Fellow of Trinity College in 1667, and Lucasian Professor of Mathematics in 1669. He remained at the university, lecturing in most years, until 1696. Of these Cambridge years, in which Newton was at the height of his creative power, he singled out 1665-1666 (spent largely in Lincolnshire because of plague in Cambridge) as “the prime of my age for invention”. During two to three years of intense mental effort he prepared Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), commonly known as the Principia, although this was not published until 1687.

As a firm opponent of the attempt by King James II to make the universities into Catholic institutions, Newton was elected Member of Parliament for the University of Cambridge to the Convention Parliament of 1689, and sat again in 1701-1702. Meanwhile, in 1696 he had moved to London as Warden of the Royal Mint. He became Master of the Mint in 1699, an office he retained to his death. He was elected a Fellow of the Royal Society of London in 1671, and in 1703 he became President, being annually re-elected for the rest of his life. His major work, Opticks, appeared the next year; he was knighted in Cambridge in 1705.

As Newtonian science became increasingly accepted on the Continent, and especially after a general peace was restored in 1714, following the War of the Spanish Succession, Newton became the most highly esteemed natural philosopher in Europe. His last decades were passed in revising his major works, polishing his studies of ancient history, and defending himself against critics, as well as carrying out his official duties. Newton was modest, diffident, and a man of simple tastes. He was angered by criticism or opposition, and harboured resentment; he was harsh towards enemies but generous to friends. In government, and at the Royal Society, he proved an able administrator. He never married and lived modestly, but was buried with great pomp in Westminster Abbey.

Newton has been regarded for almost 300 years as the founding exemplar of modern physical science, his achievements in experimental investigation being as innovative as those in mathematical research. With equal, if not greater, energy and originality he also plunged into chemistry, the early history of Western civilization, and theology; among his special studies was an investigation of the form and dimensions, as described in the Bible, of Solomon's Temple in Jerusalem.

II OPTICS

In 1664, while still a student, Newton read recent work on optics and light by the English physicists Robert Boyle and Robert Hooke; he also studied both the mathematics and the physics of the French philosopher and scientist René Descartes. He investigated the refraction of light by a glass prism; developing over a few years a series of increasingly elaborate, refined, and exact experiments, Newton discovered measurable, mathematical patterns in the phenomenon of colour. He found white light to be a mixture of infinitely varied coloured rays (manifest in the rainbow and the spectrum), each ray definable by the angle through which it is refracted on entering or leaving a given transparent medium. He correlated this notion with his study of the interference colours of thin films (for example, of oil on water, or soap bubbles), using a simple technique of extreme acuity to measure the thickness of such films. He held that light consisted of streams of minute particles. From his experiments he could infer the magnitudes of the transparent “corpuscles” forming the surfaces of bodies, which, according to their dimensions, so interacted with white light as to reflect, selectively, the different observed colours of those surfaces.

The roots of these unconventional ideas were with Newton by about 1668; when first expressed (tersely and partially) in public in 1672 and 1675, they provoked hostile criticism, mainly because colours were thought to be modified forms of homogeneous white light. Doubts, and Newton's rejoinders, were printed in the learned journals. Notably, the scepticism of Christiaan Huygens and the failure of the French physicist Edmé Mariotte to duplicate Newton's refraction experiments in 1681 set scientists on the Continent against him for a generation. The publication of Opticks, largely written by 1692, was delayed by Newton until the critics were dead. The book was still imperfect: the colours of diffraction defeated Newton. Nevertheless, Opticks established itself, from about 1715, as a model of the interweaving of theory with quantitative experimentation.

III MATHEMATICS

In mathematics too, early brilliance appeared in Newton's student notes. He may have learnt geometry at school, though he always spoke of himself as self-taught; certainly he advanced through studying the writings of his compatriots William Oughtred and John Wallis, and of Descartes and the Dutch school. Newton made contributions to all branches of mathematics then studied, but is especially famous for his solutions to the contemporary problems in analytical geometry of drawing tangents to curves (differentiation) and defining areas bounded by curves (integration). Not only did Newton discover that these problems were inverse to each other, but he discovered general methods of resolving problems of curvature, embraced in his “method of fluxions” and “inverse method of fluxions”, respectively equivalent to Leibniz's later differential and integral calculus. Newton used the term “fluxion” (from Latin meaning “flow”) because he imagined a quantity “flowing” from one magnitude to another. Fluxions were expressed algebraically, as Leibniz's differentials were, but Newton made extensive use also (especially in the Principia) of analogous geometrical arguments. Late in life, Newton expressed regret for the algebraic style of recent mathematical progress, preferring the geometrical method of the Classical Greeks, which he regarded as clearer and more rigorous.

Newton's work on pure mathematics was virtually hidden from all but his correspondents until 1704, when he published, with Opticks, a tract on the quadrature of curves (integration) and another on the classification of the cubic curves. His Cambridge lectures, delivered from about 1673 to 1683, were published in 1707.

A The Calculus Priority Dispute

Newton had the essence of the methods of fluxions by 1666. The first to become known, privately, to other mathematicians, in 1668, was his method of integration by infinite series. In Paris in 1675 Gottfried Wilhelm Leibniz independently evolved the first ideas of his differential calculus, outlined to Newton in 1677. Newton had already described some of his mathematical discoveries to Leibniz, not including his method of fluxions. In 1684 Leibniz published his first paper on calculus; a small group of mathematicians took up his ideas.

In the 1690s Newton's friends proclaimed the priority of Newton's methods of fluxions. Supporters of Leibniz asserted that he had communicated the differential method to Newton, although Leibniz had claimed no such thing. Newtonians then asserted, rightly, that Leibniz had seen papers of Newton's during a London visit in 1676; in reality, Leibniz had taken no notice of material on fluxions. A violent dispute sprang up, part public, part private, extended by Leibniz to attacks on Newton's theory of gravitation and his ideas about God and creation; it was not ended even by Leibniz's death in 1716. The dispute delayed the reception of Newtonian science on the Continent, and dissuaded British mathematicians from sharing the researches of Continental colleagues for a century.

IV MECHANICS AND GRAVITATION

According to the well-known story, it was on seeing an apple fall in his orchard at some time during 1665 or 1666 that Newton conceived that the same force governed the motion of the Moon and the apple. He calculated the force needed to hold the Moon in its orbit, as compared with the force pulling an object to the ground. He also calculated the centripetal force needed to hold a stone in a sling, and the relation between the length of a pendulum and the time of its swing. These early explorations were not soon exploited by Newton, though he studied astronomy and the problems of planetary motion.

Correspondence with Hooke (1679-1680) redirected Newton to the problem of the path of a body subjected to a centrally directed force that varies as the inverse square of the distance; he determined it to be an ellipse, so informing Edmond Halley in August 1684. Halley's interest led Newton to demonstrate the relationship afresh, to compose a brief tract on mechanics, and finally to write the Principia.

Book I of the Principia states the foundations of the science of mechanics, developing upon them the mathematics of orbital motion round centres of force. Newton identified gravitation as the fundamental force controlling the motions of the celestial bodies. He never found its cause. To contemporaries who found the idea of attractions across empty space unintelligible, he conceded that they might prove to be caused by the impacts of unseen particles.

Book II inaugurates the theory of fluids: Newton solves problems of fluids in movement and of motion through fluids. From the density of air he calculated the speed of sound waves.

Book III shows the law of gravitation at work in the universe: Newton demonstrates it from the revolutions of the six known planets, including the Earth, and their satellites. However, he could never quite perfect the difficult theory of the Moon's motion. Comets were shown to obey the same law; in later editions, Newton added conjectures on the possibility of their return. He calculated the relative masses of heavenly bodies from their gravitational forces, and the oblateness of Earth and Jupiter, already observed. He explained tidal ebb and flow and the precession of the equinoxes from the forces exerted by the Sun and Moon. All this was done by exact computation.

Newton's work in mechanics was accepted at once in Britain, and universally after half a century. Since then it has been ranked among humanity's greatest achievements in abstract thought. It was extended and perfected by others, notably Pierre Simon de Laplace, without changing its basis, and it survived into the late 19th century before it began to show signs of failing. See Quantum Theory; Relativity.

V ALCHEMY AND CHEMISTRY

Newton left a mass of manuscripts on the subjects of alchemy and chemistry, then closely related topics. Most of these were extracts from books, bibliographies, dictionaries, and so on, but a few are original. He began intensive experimentation in 1669, continuing till he left Cambridge, seeking to unravel the meaning that he hoped was hidden in alchemical obscurity and mysticism. He sought understanding of the nature and structure of all matter, formed from the “solid, massy, hard, impenetrable, movable particles” that he believed God had created. Most importantly, in the “Queries” appended to “Opticks” and in the essay “On the Nature of Acids” (1710), Newton published an incomplete theory of chemical force; his exploration of the alchemists remained concealed, becoming known only a century after his death.

VI HISTORICAL AND CHRONOLOGICAL STUDIES

Newton owned more books on humanistic learning than on mathematics and science; all his life he studied them deeply. His unpublished “classical scholia”—explanatory notes intended for use in a future edition of the Principia—reveal his knowledge of pre-Socratic philosophy; he read the Fathers of the Church even more deeply. Newton sought to reconcile Greek mythology and record with the Bible, considered the prime authority on the early history of mankind. In his work on chronology he undertook to make Jewish and pagan dates compatible, and to fix them absolutely from an astronomical argument about the earliest constellation figures devised by the Greeks. He put the fall of Troy at 904 bc, about 500 years later than other scholars; this was not well received.

VII RELIGIOUS CONVICTIONS AND PERSONALITY

Newton also wrote on Judaeo-Christian prophecy, whose decipherment was essential, he thought, to the understanding of God. His book on the subject, which was reprinted well into the Victorian Age, represented lifelong study. Its message was that Christianity went astray in the 4th century ad, when the first Council of Nicaea propounded erroneous doctrines of the nature of Christ. The full extent of Newton's unorthodoxy was recognized only in the 20th century; but although a critic of accepted Trinitarian dogmas and the Council of Nicaea, he possessed a deep religious sense, venerated the Bible, and accepted its account of creation. In late editions of his scientific works he expressed a strong sense of God's providential role in nature.

VIII PUBLICATIONS

Newton published an edition of Geographia generalis by the German geographer Varenius in 1672. His own letters on optics appeared in print from 1672 to 1676. Then he published nothing until the Principia (published in Latin in 1687; revised in 1713 and 1726; and translated into English in 1729). This was followed by Opticks in 1704; a revised edition in Latin appeared in 1706. Posthumously published writings include The Chronology of Ancient Kingdoms Amended (1728), The System of the World (1728), the first draft of Book III of the Principia, and Observations upon the Prophecies of Daniel and the Apocalypse of St John (1733).


Contributed By:
Alfred Rupert Hall

Article - Alhazen

Alhazen

Alhazen (965-c.1040), Arab scientist and natural philosopher, who made important contributions in optics, astronomy, and mathematics. His Arab name is Abu Ali al-Hasan ibn al-Haytham. His major work, Optics, included valuable analyses and explanations of light and vision.

Alhazen was born in Basra, in what is now Iraq. He was invited to Cairo by the Muslim ruler al-Hakim. After failing in an attempt to regulate the flow of the Nile, Alhazen feared that al-Hakim would punish him. To avoid punishment, he pretended to be insane until al-Hakim's death. He devoted the rest of his life to scientific study.

Alhazen's most important and original contributions were in optics. He developed a broad theory that explained vision, using geometry and anatomy. According to this theory, each point on a lighted area or object radiates light rays in every direction, but only one ray from each point, which strikes the eye perpendicularly, can be seen. The other rays strike at different angles and are not seen.

In astronomy, Alhazen added to the theories of the 2nd-century astronomer Ptolemy. He also summarized or explained some of the difficult mathematical theorems of the Greek mathematician Euclid.

Article - Fibre Optics

Fibre Optics

Fibre Optics, branch of optics dealing with the transmission of light through fibres or thin rods of glass or some other transparent material of high refractive index. If light is admitted at one end of a fibre, it can travel with very low loss, even if the fibre is curved.

The principle on which this transmission of light depends is that of total internal reflection: light travelling within the fibre's centre, or core, strikes the outside surface at an angle of incidence greater than the critical angle (see Optics), so that all the light is reflected into the fibre without loss. Thus light can be transmitted over long distances by being reflected thousands of times. In order to avoid losses through the scattering of light by impurities on the surface of the fibre, the optical-fibre core is clad with a glass layer of much lower refractive index; the reflections occur at the interface of the glass fibre and the cladding.
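
As a rough numerical sketch of that condition (the refractive indices below are assumed, typical values for a glass core and cladding, not figures from this article), the critical angle follows from Snell's law:

import math

n_core = 1.48       # assumed refractive index of the core
n_cladding = 1.46   # assumed refractive index of the cladding (lower, as described above)

critical_angle = math.degrees(math.asin(n_cladding / n_core))
print(critical_angle)   # about 80.6 degrees; rays striking the core-cladding interface
                        # at a larger angle of incidence are totally internally reflected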

The simplest application of optical fibres is the transmission of light to locations otherwise hard to reach, such as the bore of a dentist's drill. Also, bundles of several thousand very thin fibres, assembled precisely side by side and optically polished at their ends, can be used to transmit images. Each point of the image projected on one face of the bundle is reproduced at the other end of the bundle, reconstituting the image, which can be observed through a magnifier. Image transmission by optical fibres is widely used in medical instruments for viewing the interior of the human body and for laser surgery, in facsimile systems, in phototypesetting, in computer graphics, and in many other applications.

Optical fibres are also used in a wide variety of sensing devices, ranging from thermometers to gyroscopes. The potential of their applications in this field is nearly unlimited, because the light sent through them is sensitive to many environmental changes, including pressure, sound waves, and strain, as well as heat and motion. The fibres can be especially useful where electrical effects could make ordinary wiring useless, inaccurate, or even hazardous. Fibres have also been developed to carry high-power laser beams for cutting and drilling.

One growing application of optical fibres is in communication, because light waves have high frequencies and the information-carrying capacity of a signal increases with frequency. Fibre-optic laser systems are used in communications networks. Many long-haul fibre communications networks providing both transcontinental and transoceanic connections are in operation. One advantage of optical-fibre systems is the great distances that a signal can travel before a repeater is needed to regenerate it. Fibre-optic repeaters are currently separated by about 100 km (about 60 mi), compared to about 1.5 km (1 mi) for electrical systems. Newly developed optical-fibre amplifiers can extend this distance even farther.

Local area networks (LANs) are another growing application for fibre optics. Unlike long-haul communications, these systems connect local subscribers to centralized equipment such as computers and printers. This system increases the utilization of equipment and can easily accommodate new users on a network. Development of new electro-optic and integrated-optic components will further expand the capability of fibre systems.

Contents - Physics

Physics
I INTRODUCTION

Physics, major science dealing with the fundamental constituents of the universe, the forces they exert on one another, and the effects of these forces. Sometimes in modern physics a more sophisticated approach is taken that incorporates elements of the three areas listed above; it relates to symmetry and conservation laws, such as those pertaining to energy, momentum, charge, and parity. See Atom; Energy.

See also separate articles on the different aspects of physics and the various sciences mentioned in this article.

II SCOPE OF PHYSICS

Physics is closely related to the other natural sciences and, in a sense, encompasses them. Chemistry, for example, deals with the interaction of atoms to form molecules; much of modern geology is largely a study of the physics of the Earth and is known as geophysics; and astronomy deals with the physics of the stars and outer space. Even living systems are made up of fundamental particles and, as studied in biophysics and biochemistry, they follow the same types of laws as the simpler particles traditionally studied by a physicist.

The emphasis on the interaction between particles in modern physics, known as the microscopic approach, must often be supplemented by a macroscopic approach that deals with larger elements or systems of particles. This macroscopic approach is indispensable to the application of physics to much of modern technology. Thermodynamics, for example, a branch of physics developed during the 19th century, deals with defining and measuring properties of a system as a whole and is useful in other fields of physics; it also forms the basis of much of chemical and mechanical engineering. Such properties as the temperature, pressure, and volume of a gas have no meaning for an individual atom or molecule; these thermodynamic concepts can only be applied directly to a very large system of such particles. A bridge exists, however, between the microscopic and macroscopic approach; another branch of physics, known as statistical mechanics, indicates how pressure and temperature can be related to the motion of atoms and molecules on a statistical basis (see Statistics).

Even into the 19th century a physicist was often also a mathematician, philosopher, chemist, biologist, or engineer. Today the field has grown to such an extent that with few exceptions modern physicists have to limit their attention to one or two branches of the science. Once the fundamental aspects of a new field are discovered and understood, they become of interest to engineers and other applied scientists. The 19th-century discoveries in electricity and magnetism, for example, are now the province of electrical and communication engineers; the properties of matter discovered at the beginning of the 20th century have been applied in electronics; and the discoveries of nuclear physics, most of them not yet 40 years old, have passed into the hands of nuclear engineers for applications to peaceful or military uses.

III EARLY HISTORY OF PHYSICS

Although ideas about the physical world date from antiquity, physics did not emerge as a well-defined field of study until early in the 19th century.

A Antiquity

The Chinese, Babylonians, Egyptians, and early Mesoamericans observed the motions of the planets and succeeded in predicting eclipses, but they failed to find an underlying system governing planetary motion. The speculations of Greek philosophers introduced two major rival ideas about the fundamental constituents of the universe: atomism, proposed by Leucippus in the 4th century bc, and the theory of the elements, which had been proposed in the 5th century bc. See Philosophy, Greek; Philosophy, Western.

Notable progress was made in Alexandria, the scientific centre of Western civilization during the Hellenistic Age. There, the Greek mathematician and inventor Archimedes designed various practical mechanical devices involving levers and screws, and measured the density of solid bodies by submerging them in a liquid. Other important Greek scientists of this time were the astronomer Aristarchus of Samos, who measured the ratio of the distances from the Earth to the Sun and to the Moon; the mathematician, astronomer, and geographer Eratosthenes, who determined the circumference of the Earth and drew up a catalogue of stars; and the astronomer Hipparchus, who discovered the precession of the equinoxes (see Ecliptic). In the 2nd century ad the astronomer, mathematician, and geographer Ptolemy proposed the system of planetary motion that was named after him, in which the Earth was at the centre and the Sun, Moon, and stars moved around it in circular orbits (see Ptolemaic System).

B Middle Ages

Little advance was made in physics, or in any other science, during the Middle Ages. However, many Classical Greek scientific treatises were preserved by such Arab scholars as Averroës and Al-Quarashi (also known as Ibn al-Nafis). The founding of the great medieval universities by monastic orders in Europe, from the 13th century onward, generally failed to advance physics or any experimental investigations. The Italian Scholastic philosopher and theologian St Thomas Aquinas, for instance, attempted to demonstrate that the works of Plato and Aristotle were consistent with the Scriptures. The English Scholastic philosopher and scientist Roger Bacon was one of the few philosophers who advocated the experimental method as the true foundation of scientific knowledge; he also did some work in astronomy, chemistry, optics, and machine design.

C 16th and 17th Centuries

The advent of modern science followed the Renaissance and was ushered in by highly successful attempts by four outstanding individuals to interpret the behaviour of the heavenly bodies during the 16th and early 17th centuries. The Polish natural philosopher Nicolaus Copernicus propounded the heliocentric system in which the planets move around the Sun. He was convinced, however, that the planetary orbits were circular, and therefore his system required almost as many complicated elaborations as the Ptolemaic system it was intended to replace (see Copernican System). The Danish astronomer Tycho Brahe adopted a compromise between the Copernican and Ptolemaic systems; according to him, the planets went around the Sun, while the Sun went around the Earth. Brahe was a great observer, who made a series of remarkably accurate measurements. These provided his assistant, the German astronomer Johannes Kepler, with data to attack the Ptolemaic system and led to the discovery of three laws that conformed with a modified heliocentric theory. Galileo, having heard of the invention of the telescope, constructed one of his own and from 1609 was able to confirm the heliocentric system by observing the phases of the planet Venus. He also discovered the surface irregularities of the Moon, the four brightest satellites of Jupiter, sunspots, and many stars in the Milky Way. Galileo’s interests were not limited to astronomy; by using inclined planes and an improved water clock, he had earlier demonstrated that bodies of different weight fall at the same rate (thus overturning Aristotle’s idea), and that their speed increases uniformly with the time of fall. Galileo’s astronomical discoveries and his work in mechanics foreshadowed the work of the 17th-century English mathematician and physicist Isaac Newton, one of the greatest scientists who ever lived.

IV NEWTON AND MECHANICS

From about 1665, at the age of 23, Newton developed the principles of mechanics, formulated the law of universal gravitation, separated white light into colours, proposed a theory of the propagation of light, and invented differential and integral calculus. Newton’s contributions covered an enormous range of natural phenomena. He was able to show that Kepler’s laws of planetary motion and Galileo’s discoveries concerning falling bodies follow from Newton’s own second law of motion combined with his law of gravitation. Newton was able to explain the effect of the Moon in producing the tides, and the precession of the equinoxes.

A The Development of Mechanics

The subsequent development of physics owes much to Newton’s laws of motion (see Mechanics), notably the second, which states that the force needed to accelerate an object is proportional to its mass times its acceleration. If the force and the initial position and velocity of a body are given, subsequent positions and velocities can be calculated, although the force may vary with time or position (in which case, Newton’s calculus must be applied). This simple law contained another important aspect: each body has an inherent property, its inertial mass, which influences its motion. The greater this mass, the slower the change of velocity when a given force is applied. Even today, the law retains its practical value, as long as the body is not very small, not very massive, and not moving extremely rapidly. Newton’s third law, expressed simply as “for every action there is an equal and opposite reaction”, recognizes, in modern terms, that all forces between particles come in oppositely directed pairs.
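
A minimal sketch of the law in this step-by-step form (the mass, force, and time step are assumed values; the simple numerical integration below merely stands in for Newton's calculus):

# Integrate Newton's second law, F = m * a, for a constant force (assumed values)
mass = 2.0      # kg
force = 10.0    # N
velocity = 0.0  # m/s
position = 0.0  # m
dt = 0.01       # time step, s

for _ in range(100):                 # advance one second in total
    acceleration = force / mass      # the second law
    velocity += acceleration * dt
    position += velocity * dt

print(velocity, position)            # about 5 m/s and about 2.5 m after one second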

B Gravity

Newton’s more specific contribution to the description of the forces in nature was the discovery of the law of gravity. Today scientists know that in addition to gravity only three other fundamental forces give rise to all observed properties and activities in the universe: electromagnetism; the so-called strong nuclear interaction, which binds together the neutrons and protons within atomic nuclei; and the weak interaction between some of the elementary particles that accounts for the phenomenon of radioactivity. Understanding of the force concept, however, dates from the universal law of gravitation, which recognizes that all material particles, and the bodies that are composed of them, have a property called gravitational mass. This property causes any two particles to exert attractive forces on each other (along the line joining them) that are directly proportional to the product of the masses, and inversely proportional to the square of the distance between the particles. This force of gravity governs the motion of the planets about the Sun and of the objects in the Earth’s own gravitational field, and is also responsible for gravitational collapse, which is believed to underlie many astrophysical phenomena, and to be the final stage in the life cycle of massive stars. See Black Hole; Gravitation; Star.
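
Written as a formula, the law is F = G m1 m2 / r². As a sketch (the gravitational constant and the masses below are modern measured values, not figures given in this article), the attraction between the Earth and the Moon comes out at roughly 2 × 10²⁰ newtons:

G = 6.674e-11        # gravitational constant, N m^2 / kg^2 (modern value)
m_earth = 5.972e24   # mass of the Earth, kg (modern value)
m_moon = 7.35e22     # mass of the Moon, kg (modern value)
r = 3.844e8          # mean Earth-Moon distance, m

force = G * m_earth * m_moon / r**2   # attraction along the line joining the two bodies
print(force)                          # roughly 2e20 N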

One of the most important observations of physics is that the gravitational mass of a body (which is the source of the gravitational force between it and another particle) is effectively the same as its inertial mass, the property that determines the body’s motion in response to any force exerted on it (see Inertia). This equivalence, now confirmed experimentally to within one part in 10¹³, holds in the sense of proportionality—that is, when one body has twice the gravitational mass of another, it also has twice the inertial mass. Thus, Galileo’s demonstrations, which preceded Newton’s laws, that bodies fall to the ground with the same acceleration can be explained by the fact that the gravitational mass of a body, which determines the forces exerted on it, and the inertial mass, which determines the response to that force, cancel out.

The full significance of this equivalence between gravitational and inertial masses, however, was not appreciated until Albert Einstein devised the general theory of relativity. Einstein saw that this equivalence led to a further implication: the equivalence of a gravitational field and an accelerated frame of reference (see the section Modern Physics: Relativity in this article, below).

The force of gravity is the weakest of the four forces of nature when elementary particles are considered. The gravitational force between two protons, for example, which are among the heaviest elementary particles, is at any given distance only 10⁻³⁶ the magnitude of the electrostatic forces between them, and for two such protons in the nucleus of an atom, this force in turn is many times smaller than the strong nuclear interaction. The dominance of gravity on a macroscopic scale is due to two facts: (1) There is only one type of mass, as far as is known, which leads to only one kind of gravitational force, which is attractive. The many elementary particles that make up a large body, such as the Earth, therefore exhibit an additive effect of their gravitational forces, which thus become very large. (2) The gravitational forces act over a large range, and decrease only as the square of the distance between two bodies.
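
The figure of about 10⁻³⁶ can be checked in a few lines (the constants below are modern values; the electrostatic force is taken from Coulomb's inverse-square law, discussed later in this article, so the separation cancels out):

G = 6.674e-11     # gravitational constant, N m^2 / kg^2
k = 8.988e9       # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27   # proton mass, kg
e = 1.602e-19     # proton charge, C

ratio = (G * m_p**2) / (k * e**2)   # gravitational force divided by electrostatic force
print(ratio)                        # about 8e-37, i.e. of the order of 10^-36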

By contrast, the electric charges of elementary particles, which give rise to electrostatic and magnetic forces, are either positive or negative, or absent altogether. Only particles with opposite charges attract one another, and large composite bodies therefore tend to be electrically neutral and inactive.

On the other hand, the nuclear forces, both strong and weak, are extremely short-range and become hardly noticeable at distances greater than 1 million-millionth of a centimetre.

Despite its macroscopic importance, the force of gravity remains so weak that a body must be very massive before its influence is noticed by another. Thus, the law of universal gravitation was deduced from observations of the motions of the planets long before it could be checked experimentally. Not until 1798 did the British physicist and chemist Henry Cavendish confirm it, by using large spheres of lead to attract small masses attached to a torsion pendulum; from these measurements he also deduced the mass and density of the Earth.

In the two centuries after Newton, although mechanics was analysed, reformulated, and applied to complex systems, no new physical ideas were added. The Swiss mathematician Leonhard Euler first formulated the equations of motion for rigid bodies, whereas Newton had dealt only with masses concentrated at a point, or with bodies that were equivalent to point masses and thus acted like particles. Various mathematical physicists, among them Joseph Louis Lagrange and William Rowan Hamilton, extended Newton’s second law with more sophisticated and elegant reformulations. Over the same period, Euler, the Dutch-born scientist Daniel Bernoulli, and other scientists also extended Newtonian mechanics to lay the foundation of fluid mechanics.

C Electricity and Magnetism

Although the ancient Greeks were aware of the electrostatic properties of amber, and the Chinese as early as 2700 bc made magnets from lodestone, experimentation with and the understanding and use of electric and magnetic phenomena did not occur until the end of the 18th century. In 1785 the French physicist Charles Augustin de Coulomb first confirmed experimentally that electrical charges attract or repel one another according to an inverse square law, similar to that of gravitation. A powerful theory to calculate the effect of any number of static electric charges arbitrarily distributed was subsequently developed by the French mathematician Siméon Denis Poisson and the German mathematician Carl Friedrich Gauss.

A positively charged particle attracts a negatively charged one, and they tend to accelerate towards each other. If the medium through which the particles move offers resistance, they may be reduced to a constant-velocity (rather than accelerated) motion, and the medium will be heated up and may also be otherwise affected. The ability to maintain an electromotive force that could continue to drive electrically charged particles had to await the development of the chemical (cell) battery by the Italian physicist Alessandro Volta in 1800. The classical theory of a simple electric circuit assumes that the two terminals of a cell are maintained positively and negatively charged as a result of its internal properties. When the terminals are connected by a wire, negatively charged particles are simultaneously pushed away from the negative terminal and attracted to the positive one, and in the process heat up the wire that offers resistance to the motion. Upon their arrival at the positive terminal, the particles are forced through the interior of the cell towards the negative terminal, overcoming the opposing forces of Coulomb’s law. The German physicist Georg Simon Ohm first discovered the existence of a simple proportionality constant, known as the resistance of the circuit, relating the current flowing and the electromotive force supplied by a battery. Ohm’s law, which states that the current is proportional to the electromotive force (that is, that the resistance is constant), is not a fundamental and universally applicable law of physics, but rather describes the behaviour of a limited class of solid materials. See Electric Circuit.
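
A minimal sketch of Ohm's law for such a circuit (the electromotive force and resistance below are assumed values):

emf = 12.0        # electromotive force of the cell, volts (assumed)
resistance = 6.0  # resistance of the wire, ohms (assumed)

current = emf / resistance        # Ohm's law: current proportional to electromotive force
power = current**2 * resistance   # rate at which the resisting wire is heated, watts
print(current, power)             # 2 A and 24 W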

The elementary concepts of magnetism, based on the existence of pairs of oppositely charged poles, date from the 17th century. They were developed in the work of Coulomb. The first connection between magnetism and electricity, however, was made through the pioneering experiments of the Danish physicist and chemist Hans Christian Oersted, who in 1819 discovered that a magnetic needle could be deflected by a wire nearby carrying an electric current. Within one week of learning of Oersted’s discovery, the French scientist André Marie Ampère showed experimentally that two current-carrying wires affect each other like poles of magnets. In 1831 the British physicist and chemist Michael Faraday discovered that an electric current could be induced (made to flow) in a loop of wire not connected to a battery, either by moving a magnet nearby or by placing a wire carrying a varying current nearby. The intimate connection between electricity and magnetism, now established, can best be stated in terms of electric or magnetic fields. The strength and direction of a field at any point is a measure of the force that will act on a unit charge or unit current, respectively, placed at that point. Stationary electric charges produce electric fields; currents—that is, moving electric charges—produce magnetic fields. Electric fields are also produced by changing magnetic fields, and vice versa. Electric fields exert forces on charged particles as a function of their charge alone; magnetic fields exert a force on charges in motion.

These qualitative findings were put into a precise mathematical form by the British physicist James Clerk Maxwell, who, in developing the partial differential equations that bear his name, related the space and time changes of electric and magnetic fields at a point to the charge and current densities at that point. In principle, they permit the calculation of the fields everywhere and at any time from a knowledge of the charges and currents. An unexpected result arising from the solution of these equations was the prediction of a new kind of electromagnetic field, one produced by accelerating charges. It propagated through space with the speed of light in the form of an electromagnetic wave, and decreased in strength with the inverse square of the distance from the source. In 1887 the German physicist Heinrich Hertz succeeded in generating such waves by electrical means, thereby laying the foundations for radio, radar, television, and other forms of telecommunications. See Electromagnetic Radiation.
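
One standard way of stating Maxwell's prediction, offered here only as a hedged sketch since the formula is not quoted in this article, is that the speed of the wave is fixed by the measured electric and magnetic constants of free space, and that this speed indeed comes out at the speed of light:

import math

mu_0 = 4 * math.pi * 1e-7   # magnetic constant of free space, SI units
epsilon_0 = 8.854e-12       # electric constant of free space, SI units

c = 1 / math.sqrt(mu_0 * epsilon_0)   # predicted speed of electromagnetic waves
print(c)                              # about 2.998e8 m/s, the measured speed of light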

The behaviour of electric and magnetic fields in these waves is quite similar to that of a very long taut string, one end of which is rapidly moved up and down in a periodic fashion. Any point along the string will be observed to move up and down, or oscillate, with the same frequency as the source. Points along the string at different distances from the source will reach the maximum vertical displacements at different times. Each point along the string will do what its neighbour did, but a little later, if it is further removed from the vibrating source (see Oscillation). The speed with which the disturbance, or “message”, is transmitted along the string is called the wave velocity (see Wave Motion). This is a function of the string’s mass per unit length and its tension. An instantaneous snapshot of the string (after it had been in motion for a while) would show that points having the same displacement were separated by a distance known as the wavelength, which is equal to the wave velocity divided by the frequency. In the case of the electromagnetic field one can think of the electric field strength as taking the place of the up-and-down motion of each piece of the string, with the magnetic field acting similarly in a direction at right angles to that of the electric field. The electromagnetic wave velocity away from the source is the speed of light.

D Light

The apparently linear propagation of light has been known since antiquity. The ancient Greeks believed that light consisted of a stream of corpuscles. They were, however, quite confused as to whether these corpuscles originated in the eye or in the object viewed. Any satisfactory theory of light must explain its origin and disappearance and its changes in speed and direction while it passes through various media. Partial answers to these questions were proposed in the 17th century by Newton, who based them on the assumptions of a corpuscular theory, and by the English scientist Robert Hooke and the Dutch astronomer, mathematician, and physicist Christiaan Huygens, who both proposed wave theories. No experiment could be performed that distinguished between the two theories (as the wavelength of light is very small) until the demonstration of interference in the early 19th century by the British physicist and physician Thomas Young. The French physicist Augustin Jean Fresnel decisively favoured the wave theory.

Interference can be demonstrated by placing a thin slit in front of a light source, stationing a double slit further away, and looking at a screen spaced some distance beyond the double slit. Instead of showing a uniformly illuminated image of the slits, the screen will show equally spaced light and dark bands. Further detailed assumptions would have to be added to explain how particles coming from the same source and arriving at the screen via the two slits could produce different light intensities at different points and even cancel each other to yield dark spots. Light waves, however, can quite easily produce such an effect. Assuming, as did Huygens, that each of the double slits acts as a new source, emitting light in all directions, the two wave trains arriving at the screen at the same point will not generally arrive in phase, though they will have left the two slits in phase. (Two vibrations at a given point are said to be in phase when they are at the same stage of the oscillation at each moment—thus their maxima coincide at one moment, their minima at another, and so on.) Depending on the difference in their paths, “positive” displacements of one wave train arriving at the same time as “negative” displacements of the other will tend to cancel out the latter and produce darkness, while the simultaneous arrival of either positive or negative displacements from both sources will lead to reinforcement, or brightness. At each bright spot the light intensity undergoes a time-wise variation as successive in-phase waves go from maximum positive through zero to maximum negative displacement and back. Neither the eye nor any classical instrument, however, can determine this rapid “flicker”, which in the visible-light range has a frequency from 4 × 10¹⁴ to 7.5 × 10¹⁴ hertz, or cycles per second. Although it cannot be measured directly, the frequency can be inferred from wavelength and velocity measurements. The wavelength can be determined from simple measurements of the distance between the two slits and of the distance between adjacent bright bands on the screen. The wavelength ranges from 4 × 10⁻⁵ cm (1.6 × 10⁻⁵ in) for violet light to 7.5 × 10⁻⁵ cm (3 × 10⁻⁵ in) for red light, with intermediate wavelengths for the other colours.
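
As a sketch of that wavelength measurement (the slit separation, screen distance, and fringe spacing below are assumed laboratory values, and the small-angle relation used is the standard one, not quoted in this article):

slit_separation = 0.5e-3   # distance between the two slits, m (assumed)
screen_distance = 1.0      # distance from the slits to the screen, m (assumed)
fringe_spacing = 1.0e-3    # distance between adjacent bright bands, m (assumed measurement)

# Standard small-angle result: wavelength = slit separation x fringe spacing / screen distance
wavelength = slit_separation * fringe_spacing / screen_distance
print(wavelength)          # 5e-7 m, i.e. 5e-5 cm, within the visible range quoted above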

The first measurement of the velocity of light was carried out by the Danish astronomer Olaus Roemer in 1676. He noted an apparent time variation between successive eclipses of Jupiter’s moons, which he ascribed to changes in the distance between Earth and Jupiter, and to the corresponding differences in the time required for the light to reach the Earth. His measurement was in fair agreement with the improved 19th-century observations of the French physicist Armand Hippolyte Louis Fizeau, and with the work of the American physicist Albert Abraham Michelson and his co-workers, which extended into the 20th century. Today the velocity of light is known very accurately as 299,792.46 km/sec (186,282.4 mi/sec) in vacuum. In matter, the speed is less and varies with frequency, a phenomenon known as dispersion. See also Optics; Spectrum.

Maxwell’s work contributed several important results to the understanding of light by showing that it is electromagnetic in origin and that in a light wave electric and magnetic fields oscillate. His work predicted the existence of non-visible light, and today electromagnetic waves or radiations are known to cover the spectrum from gamma rays (see Radioactivity), with wavelengths of 10⁻¹² cm (4 × 10⁻¹³ in) and less, through X-rays, visible light, microwaves, and radio waves, to long waves of hundreds of kilometres and more in length. It also related the velocity of light in vacuum and in media to other observed properties of space and matter on which electrical and magnetic effects depend. Maxwell’s discoveries, however, did not provide any insight into the mysterious medium, corresponding to the string, through which light and electromagnetic waves supposedly had to travel (see the section Electricity and Magnetism above). From their experience with water, sound, and elastic waves, scientists assumed a similar medium to exist, a “luminiferous ether” without mass, which existed everywhere (because light can travel through space). The ether had to act like a solid, because electromagnetic waves were known to be transverse, while gases and liquids can only sustain longitudinal waves, such as sound waves. The search for the ether occupied physicists’ attention for much of the last part of the 19th century.

The problem was further compounded by an extension of a simple problem. A person walking forwards with a speed of 32 km/h (20 mph) in a train travelling at 644 km/h (400 mph) appears to an observer on the ground to move at 676 km/h (420 mph). In relation to the speed of light the question that now arose was: If light travels at about 300,000 km/sec (about 186,000 mi/sec) through the ether, at what velocity should it travel relative to an observer on Earth, since the Earth also moves through the ether? Or, alternatively, what is the Earth’s velocity through the ether, as indicated by its effects on light waves? The famous Michelson-Morley experiment, first performed in 1887 by Michelson and the American chemist Edward Williams Morley using an interferometer, was an attempt to measure this velocity. If the Earth were travelling through a stationary ether, a difference should be apparent in the time taken by light to traverse a given distance, depending on whether it travels in the direction of or perpendicular to the Earth’s motion. The experiment was sensitive enough to detect even a very slight difference by interference; the results were negative. Physics was now in a profound quandary from which it was not rescued until Einstein formulated his theory of relativity in 1905.

E Thermodynamics

A branch of physics that assumed major stature during the 19th century was thermodynamics. It began by disentangling the previously confused concepts of heat and temperature, by arriving at meaningful definitions, and by showing how they could be related to the previously purely mechanical concepts of work and energy. See also Heat Transfer.

E1 Heat and Temperature

Different sensations are experienced when hot and cold bodies are touched, leading to the qualitative and subjective concept of temperature. The transfer of energy to a body generally leads to an increase in temperature when no melting or boiling occurs, and in the case of two bodies at different temperatures brought into contact, energy flows from one to the other until their temperatures become the same and thermal equilibrium is reached. Energy that flows from one body to another as a consequence of temperature differences is called heat. To arrive at a scientific measure of temperature, scientists used the observation that the addition or subtraction of heat produced a change in at least one well-defined property of a body. For example, heating a column of liquid maintained at constant pressure increased the length of the column, while heating a gas confined in a container raised its pressure. Temperature, therefore, can invariably be measured by one other physical property, as in the length of the mercury column in an ordinary thermometer, provided the other relevant properties remain unchanged. The mathematical relationship between the relevant physical properties of a body or system and its temperature is known as the equation of state. Thus, for a so-called ideal gas, a simple relationship exists between the pressure, p, volume V, number of moles n, and the absolute temperature T, given by pV = nRT, where R is the same constant for all ideal gases. Boyle’s law, named after the British physicist and chemist Robert Boyle, and Gay-Lussac’s, or Charles’s, law, named after the French physicists and chemists Joseph Louis Gay-Lussac and Jacques Alexandre César Charles, are both contained in this equation of state (see Gases).
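
As a sketch of this equation of state (the amount of gas, temperature, and volume are assumed values; R is the modern value of the gas constant), one mole at 273.15 K in 22.4 litres exerts a pressure of about one atmosphere:

R = 8.314      # gas constant, J / (mol K) (modern value)
n = 1.0        # number of moles (assumed)
T = 273.15     # absolute temperature, K (assumed)
V = 22.4e-3    # volume, m^3, i.e. 22.4 litres (assumed)

p = n * R * T / V     # the ideal-gas equation of state, pV = nRT, solved for p
print(p)              # about 1.01e5 Pa, roughly atmospheric pressure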

Until well into the 19th century, heat was considered to be a massless fluid called caloric, contained in matter and capable of being squeezed out of or into it. Although the so-called caloric theory answered most early questions on thermometry and calorimetry, it failed to provide a sound explanation of many early 19th-century observations. The first true connection between heat and other forms of energy was observed in 1798 by the Anglo-American physicist and statesman Benjamin Thompson, Count von Rumford, who noted that the heat produced in the boring of cannon was roughly proportional to the amount of work done. (In mechanics, work is the product of a force on a body and the distance through which the body moves in the direction of the force during its application.)

E2 The First Law of Thermodynamics

The equivalence of heat and work was explained by the German physicist Hermann Ludwig Ferdinand von Helmholtz and the British mathematician and physicist Lord Kelvin by the middle of the 19th century. This equivalence means that, for example, the same temperature rise can be achieved in a liquid contained in a vessel by heating it or by doing an appropriate amount of work stirring a paddle wheel in the container. The numerical value of this equivalent was first demonstrated by the British physicist James Prescott Joule in experiments carried out between 1840 and 1849.

It was thus recognized that performing work on a system or heating are both means of transferring energy to the system. Therefore, the amount of energy added via heat or work has to increase the internal energy of the system, which in turn determines the temperature. If the internal energy remains unchanged, the amount of work done on a system must equal the heat given up by it. This is the first law of thermodynamics, a statement of the conservation of energy. Not until the activity of molecules in a system was better understood by the development of the kinetic theory could this internal energy be related to the sum of the kinetic energies of all the molecules making up the system.
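
A small numerical sketch of that equivalence (the mass of water, its specific heat, and the temperature rise are assumed values, not figures from this article):

mass = 1.0              # kg of water (assumed)
specific_heat = 4186.0  # J / (kg K), approximate specific heat of water (assumed)
delta_T = 1.0           # temperature rise, K (assumed)

energy_needed = mass * specific_heat * delta_T   # increase in internal energy
print(energy_needed)    # about 4.2e3 J, whether supplied as heat or as stirring work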

E3 The Second Law of Thermodynamics

While the first law indicates that energy must be conserved in any interactions between a system and its surroundings, it gives no indication whether all forms of mechanical and thermal energy exchange are possible. That overall changes in energy proceed in one direction was first formulated by the French physicist and military engineer Nicolas Léonard Sadi Carnot, who in 1824 pointed out that a heat engine (a device that can produce work continuously by exchanging heat with its surroundings) requires both a hot body as a source of heat and a cold body to absorb heat that must be discharged. When the engine performs work, heat must be transferred from the hotter to the colder body; to have the reverse take place requires the expenditure of mechanical (or electrical) work. Thus, in a continuously working refrigerator, the absorption of heat from the low-temperature source (the cold space) requires the performance of work (usually as electrical power), and the discharge of heat (usually via finned coils in the rear) to the surroundings (see Refrigeration). These ideas, based on Carnot’s concepts, were eventually formulated rigorously as the second law of thermodynamics by the German mathematical physicist Rudolf Julius Emanuel Clausius and by Lord Kelvin in various alternative, although equivalent, ways. One such formulation is that heat cannot flow from a colder to a hotter body without the expenditure of work.

From the second law, it follows that in an isolated system (one that has no interactions with the surroundings) internal portions at different temperatures will always tend towards a single uniform temperature and thus produce equilibrium. This can also be applied to other internal properties that may be non-uniform initially. If milk is poured into a cup of coffee, for example, the two substances will continue to mix until they are inseparable and can no longer be differentiated. Thus, an initial ordered state, with distinct components, is turned into a mixed or disordered state. These ideas can be expressed by a thermodynamic property called entropy (first formulated by Clausius), which serves as a measure of how close a system is to equilibrium—that is, to perfect internal disorder. The entropy of an isolated system, and of the universe as a whole, can only increase, and when equilibrium is eventually reached, no more internal change of any form is possible. Applied to the universe as a whole, this principle suggests that eventually temperature throughout the cosmos will become uniform, resulting in the so-called heat death of the universe.
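
The bookkeeping can be sketched for heat passing between two bodies (temperatures and heat transferred are assumed values; the relation used, entropy change equals heat exchanged divided by absolute temperature, is the standard one introduced by Clausius):

Q = 1000.0      # heat transferred, J (assumed)
T_hot = 373.0   # temperature of the hotter body, K (assumed)
T_cold = 273.0  # temperature of the colder body, K (assumed)

delta_S = Q / T_cold - Q / T_hot   # entropy gained by the cold body minus entropy lost by the hot one
print(delta_S)                     # about +0.98 J/K: positive, as the second law requires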

However, the entropy can be lowered locally by external action. This applies to machines, such as a refrigerator, in which the entropy of the cold chamber is reduced, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings, where more disorder must be created.

This continued increase in entropy is related to the observed non-reversibility of macroscopic processes. If a process were spontaneously reversible—that is, if, after undergoing a process, both it and all the surroundings could be brought back to their initial state—the entropy would remain constant, in violation of the second law. While this is true for macroscopic processes, and therefore corresponds to daily experience, it does not apply to microscopic processes, which are believed to be reversible. Thus, chemical reactions between individual molecules are not governed by the second law, which applies only to macroscopic ensembles.

From the formulation of the second law, thermodynamics went on to other advances and to applications in physics, chemistry, and engineering. Most chemical engineering, all power-plant engineering, air-conditioning technology, and low-temperature physics are just a few of the fields that owe their theoretical basis to thermodynamics and to the subsequent achievements of such scientists as Maxwell, the American physicist Willard Gibbs, the German physical chemist Walther Hermann Nernst, and the Norwegian-born American chemist Lars Onsager.

F Kinetic Theory and Statistical Mechanics

The modern concept of the atom was first proposed by the British chemist and physicist John Dalton in 1808 and was based on his studies showing that chemical elements enter into combinations based on fixed ratios of their weights. The concept of molecules as the smallest particles of a substance that can exist in the free—that is, gaseous—state while still possessing the properties of any larger amount of the substance was first proposed by the Italian physicist and chemist Amedeo Avogadro in 1811. It did not find general acceptance until about 50 years later, when it also came to form the basis of the kinetic theory of gases (see Avogadro’s Law). As developed by Maxwell, the Austrian physicist Ludwig Boltzmann, and other physicists, it enabled the laws of mechanics and probability to be applied to the behaviour of individual molecules, leading to statistical inferences about the properties of the gas as a whole.

A typical but important problem solved in this manner was the determination of the range of speeds of molecules in the gas, and from this the average kinetic energy of the molecules. The kinetic energy of a body, as a simple consequence of Newton’s second law, is ½mv², where m is the mass of the body and v its velocity. One of the achievements of kinetic theory was to show that temperature, the macroscopic thermodynamic property describing the system as a whole, was directly related to the average kinetic energy of the molecules. Another was the identification of the entropy of a system with the logarithm of the statistical probability of the energy distribution. This led to the demonstration that the state of thermodynamic equilibrium of highest probability is also the state of maximum entropy. Following these successes in the case of gases, kinetic theory and statistical mechanics were subsequently applied to other systems, a process that is still continuing.
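
The standard result of that analysis, used below as a hedged sketch rather than a statement from this article, is that the average kinetic energy per molecule is (3/2)kT; for nitrogen at room temperature the corresponding molecular speed is several hundred metres per second:

import math

k = 1.381e-23       # Boltzmann's constant, J/K
T = 300.0           # absolute temperature, K (assumed: roughly room temperature)
m = 28 * 1.66e-27   # mass of a nitrogen molecule, kg (assumed value)

mean_kinetic_energy = 1.5 * k * T    # standard kinetic-theory result, (3/2)kT
v_rms = math.sqrt(3 * k * T / m)     # root-mean-square molecular speed
print(mean_kinetic_energy, v_rms)    # about 6.2e-21 J and about 517 m/s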

G Early Atomic and Molecular Theories

The development of Dalton’s atomic theory and Avogadro’s law had overriding influence on the development of chemistry, in addition to their importance in physics.

G1 Avogadro’s Law

Avogadro’s law, which was easily proved by kinetic theory, indicated that a specified volume of a gas at a given temperature and pressure always contained the same number of molecules, irrespective of the gas selected. This number, however, could not be accurately determined, and physicists therefore had no sound knowledge of molecular or atomic mass and size until the turn of the 20th century. After the discovery of the electron, the American physicist Robert Andrews Millikan carefully determined its charge. This finally permitted accurate determination of Avogadro’s number, which is the number of molecules in an amount of material whose mass in grams is exactly equal to its molecular weight (see Molecule).
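
The route from the electron's charge to Avogadro's number can be retraced as follows (the Faraday constant, the charge carried by one mole of singly charged ions in electrolysis, is an assumed modern value not quoted in this article):

faraday_constant = 96485.0    # C per mole of singly charged ions (assumed modern value)
electron_charge = 1.602e-19   # C, the elementary charge measured by Millikan

avogadro_number = faraday_constant / electron_charge
print(avogadro_number)        # about 6.02e23 particles per mole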

Besides the mass of an atom, another quantity of interest was its size. Various and only partly successful attempts at finding the size of an atom were made during the latter part of the 19th century; the most successful applied the results of kinetic theory to non-ideal gases—that is, gases whose molecules were not points but had finite volumes. Later experiments involving the scattering of X-rays, alpha particles, and other atomic and subatomic particles by atoms led to more precise measurements of their size; they proved to be between 10⁻⁸ and 10⁻⁷ cm (4 × 10⁻⁹ and 4 × 10⁻⁸ in) in diameter. A precise statement about the size of an atom, however, requires some explicit definition of what is meant by size, since most atoms are not exactly spherical and can exist in various states with different distances between the nucleus and the electrons.

G2 Spectroscopy

One of the most important developments leading to the exploration of the interior of the atom, and to the eventual overthrow of the classical theories of physics, was spectroscopy; the other was the discovery of the subatomic particles themselves.

In 1823 the British astronomer and chemist John Herschel suggested that a chemical substance might be identified by examining its spectrum—that is, the pattern of discrete wavelengths in which light from a gaseous substance is emitted. In the years that followed, the spectra of a great many substances were catalogued by two Germans, the chemist Robert Wilhelm Bunsen and the physicist Gustav Robert Kirchhoff. Helium was discovered following the observation of an unexplained line in the Sun’s spectrum by the British astronomer Joseph Norman Lockyer in 1868. From the standpoint of atomic theory, however, the most important contributions were made by the study of the spectra of simple atoms, such as hydrogen, which showed few spectral lines. See Chemical Analysis.

Discrete line spectra originate from gaseous substances in which, in terms of modern knowledge, the electrons have been excited by heat or by bombardment with subatomic particles. In contrast, a heated solid has a continuous spectrum over the full visible range and into the infrared and ultraviolet regions. The total amount of energy emitted depends strongly on the temperature, as does the relative intensity of the different wavelength components. As a piece of iron is heated, for example, its radiation is first in the infrared spectrum and cannot be seen; the radiation then extends into the visible spectrum, where the glow shifts from red to white as the peak of its radiant spectrum shifts towards the middle of the visible range. Attempts to explain the radiation characteristics of solids, using the tools of theoretical physics available at the end of the 19th century, led to the prediction that at any given temperature the amount of radiation increased with frequency and without limit. This calculation, in which no error was found, was in disagreement with experiment and also led to an absurd conclusion: that a body at a finite temperature could radiate an infinite amount of energy. This required a new way of thinking about radiation and, indirectly, about the atom. See Infrared Radiation; Ultraviolet Radiation.

H The Breakdown of Classical Physics

By about 1880 physics was serene; most phenomena could be explained by Newtonian mechanics, Maxwell’s electromagnetic theory, thermodynamics, and Boltzmann’s statistical mechanics. It seemed that only a few problems, such as the determination of the properties of the ether and the explanation of the radiation spectra from solids and gases, were unsolved. These unexplained phenomena, however, formed the seeds of revolution, a revolution that was augmented by a series of remarkable discoveries within the last decade of the 19th century: of X-rays by Wilhelm Conrad Roentgen in 1895; of the electron by J. J. Thomson in 1897; of radioactivity by Antoine Henri Becquerel in 1896; and of the photoelectric effect by Heinrich Hertz, Wilhelm Hallwachs, and Philipp Lenard during the period from 1887 to 1899. Coupled with the disturbing results of the Michelson-Morley experiments and the discovery of cathode rays, which are electron streams, the experimental evidence in physics now outstripped all theories available to explain it.

V MODERN PHYSICS

Two major new developments during the first third of the 20th century, the quantum theory and the theory of relativity, explained these findings, yielded new discoveries, and changed the understanding of physics as it is known today.

A Relativity

To extend the example of relative velocity introduced with the Michelson-Morley experiment, two situations can be compared. One consists of a person, A, walking forward with a velocity v in a train moving at velocity u. The velocity of A with regard to an observer B stationary on the ground is then simply V = u + v. If, however, the train were at rest in the station and A was moving forward with velocity v while observer B walked the other way with velocity u, the relative speed of A and B would be exactly the same as in the first case. In more general terms, if two frames of reference are moving relative to each other at constant velocity, observations of any phenomena made by observers in either frame will be physically equivalent. As already mentioned, the Michelson-Morley experiment failed to confirm this simple addition of velocities in the case of light beams: two observers, one at rest and the other moving towards a light source with velocity u, observe the same light velocity commonly denoted by the symbol c.
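
The resolution, anticipated here as a hedged sketch (the composition rule below is the standard special-relativity result, which this article introduces only qualitatively), replaces V = u + v with V = (u + v)/(1 + uv/c²): for walking and train speeds the correction is immeasurably small, but whenever one of the velocities is c the result is exactly c.

c = 299_792_458.0   # speed of light, m/s

def compose(u, v):
    # Standard relativistic composition of two parallel velocities
    return (u + v) / (1 + u * v / c**2)

train, walker = 644 / 3.6, 32 / 3.6   # the train and the walker of the example, in m/s
print(compose(train, walker) * 3.6)   # indistinguishable from the simple sum, 676 km/h
print(compose(train, c))              # exactly c: the same light velocity for every observer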

Einstein incorporated the invariance of c into his theory of relativity. He also demanded a very careful rethinking of the concepts of space and time, showing the imperfection of intuitive notions about them. As a consequence of his theory, it is known that two clocks that keep identical time when at rest relative to each other must run at different speeds when they are in relative motion, and two rods that are identical in length when at rest will differ in length when they are in relative motion. Space and time must be treated as a closely linked four-dimensional continuum, in which the three space dimensions are joined by an interrelated time dimension.

Two important consequences of Einstein’s relativity theory are the equivalence of mass and energy and the limiting velocity of the speed of light for material objects. Relativistic mechanics describes the motion of objects with velocities that are appreciable fractions of the speed of light, while Newtonian mechanics remains useful for velocities typical of the macroscopic motion of objects on Earth. No material object, however, can have a speed equal to or greater than the speed of light.

Even more important is the relation between the mass m and energy E. They are coupled by the relation E = mc², and, because c is very large, the energy equivalence of a given mass is enormous. The change of mass that accompanies a change of energy is significant in nuclear reactions, as in reactors or nuclear weapons, and in the stars, where a significant loss of mass accompanies the huge energy release.
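
As a sketch of how large this equivalence is (the one-gram mass is an assumed example):

c = 299_792_458.0   # speed of light, m/s
mass = 1.0e-3       # one gram, kg (assumed example)

energy = mass * c**2   # E = mc^2
print(energy)          # about 9e13 J, roughly the daily output of a large power station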

Einstein’s original theory, formulated in 1905 and known as the special theory of relativity, was limited to frames of reference moving at constant velocity relative to each other. In 1915, he generalized his hypothesis to formulate the general theory of relativity, which applies to systems that accelerate with reference to each other. This extension showed gravitation to be a consequence of the geometry of space-time and predicted the bending of a light ray if it passed close to a massive body, such as a star, an effect first observed in 1919. General relativity has deep significance for an understanding of the structure of the universe and its evolution. See also Cosmology.

B Quantum Theory

The puzzle posed by the observed spectra emitted by solid bodies was first explained by the German physicist Max Planck. According to classical physics, all molecules in a solid can vibrate, with the amplitude of the vibrations directly related to the temperature. All vibration frequencies should be possible and the thermal energy of the solid should be continuously convertible into electromagnetic radiation as long as energy is supplied. Planck made a radical assumption by suggesting that the molecular oscillator could emit electromagnetic waves only in discrete bundles, now called quanta, or photons (see Quantum Theory). Each photon has a characteristic wavelength and an energy E given by E = hf, where f is the frequency of the wave. The wavelength λ is related to the frequency by λf = c, where c is the speed of light. With the frequency specified in hertz (Hz), or cycles per second, h, now known as Planck’s constant, is extremely small (6.626 × 10⁻³⁴ joule-seconds). With his theory, Planck introduced a wave-particle duality into the theory of light, which for nearly a century had been considered to be wave-like only.
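
As a sketch of the sizes involved (the frequency chosen is an assumed value near the middle of the visible range):

h = 6.626e-34   # Planck's constant, joule-seconds
c = 2.998e8     # speed of light, m/s
f = 5.0e14      # frequency of a visible-light photon, Hz (assumed)

energy = h * f       # energy of a single quantum, E = hf
wavelength = c / f   # wavelength, from the relation (wavelength)(frequency) = c
print(energy, wavelength)   # about 3.3e-19 J and about 6e-7 m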

C Photoelectricity

If electromagnetic radiation of appropriate wavelength falls upon suitable metals, negative electric charges, now known to be electrons, are ejected from the metal surface. The important aspects of this phenomenon are the following: (1) the energy of each photoelectron depends only on the frequency of the illumination and not on its intensity; (2) the rate of electron emission depends only on the illuminating intensity and not on the frequency (provided that the minimum frequency capable of causing emission is exceeded); and (3) the photoelectrons emerge as soon as the illumination hits the surface. These observations, which could not be explained by Maxwell’s electromagnetic theory of light, led Einstein to assume in 1905 that light can be absorbed only in quanta, or photons, and that the photon completely vanishes in the absorption process, with all of its energy E (=hf) going to one electron in the metal. With this simple assumption Einstein extended Planck’s quantum theory to the absorption of electromagnetic radiation, giving additional importance to the wave-particle duality of light. It was for this work that Einstein was awarded the 1921 Nobel Prize for Physics.

D X-Rays

These very penetrating rays, first discovered by Roentgen, were shown to be electromagnetic radiation of very short wavelength in 1912 by the German physicist Max von Laue and his co-workers. The precise mechanism of X-ray production was shown to be a quantum effect, and in 1914 the British physicist Henry Gwyn-Jeffreys Moseley used his X-ray spectrograms to prove that the number of positive charges in an atom is the same as its atomic number, its position in the periodic table. The photon theory of electromagnetic radiation was further strengthened and developed by the prediction and observation of the so-called Compton effect by the American physicist Arthur Holly Compton in 1923.

E Electron Physics

That electric charges were carried by extremely small particles had already been suspected in the 19th century, and electrochemical experiments had indicated that the charge on these elementary particles was a definite, invariant quantity. Experiments on the conduction of electricity through low-pressure gases led to the discovery of two kinds of rays: cathode rays, coming from the negative electrode in a gas discharge tube, and positive or canal rays from the positive electrode. J. J. Thomson’s 1895 experiment measured the ratio of the charge q to the mass m of the cathode-ray particles. Lenard in 1899 confirmed that the ratio of q to m for particles emitted in the photoelectric effect was identical to that of cathode rays. The American inventor Thomas Alva Edison had noted in 1883 that very hot wires emit electricity, an effect then known as the Edison effect and now called thermionic emission, and in 1899 Thomson showed that this form of electricity also consisted of particles with the same q to m ratio as the others. About 1911 Millikan finally determined that electric charge always arises in multiples of a basic unit e and measured its value, now known to be 1.602 × 10⁻¹⁹ coulombs. From the measured value of q/m, with q set equal to e, the mass of the carrier, called the electron, could now be determined as 9.109 × 10⁻³¹ kg.
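
The arithmetic of that last step is simple: dividing the elementary charge by the measured charge-to-mass ratio gives the electron mass. The ratio used below, about 1.76 × 10¹¹ coulombs per kilogram, is the modern value and is assumed here for illustration.

```python
# Electron mass from Millikan's e and the measured charge-to-mass ratio q/m.
e = 1.602e-19        # elementary charge, coulombs
q_over_m = 1.759e11  # charge-to-mass ratio of the electron, C/kg (assumed modern value)

m_electron = e / q_over_m
print(m_electron)    # ~9.11e-31 kg, the figure quoted in the text
```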

Finally, Thomson and others showed that the positive rays also consisted of particles, each carrying a charge e, but of the positive variety. These particles, however, now recognized as positive ions resulting from the removal of an electron from a neutral atom, are much more massive than the electron. The smallest, the hydrogen ion, is a single proton with a mass of 1.673 × 10⁻²⁷ kg, about 1837 times more massive than the electron (see Ionization). The “quantized” nature of electric charge was now firmly established and, at the same time, two of the fundamental subatomic particles had been identified.

F Atomic Models

In 1911 the New Zealand-born British physicist Ernest Rutherford, making use of the newly discovered radiations from radioactive nuclei, found Thomson’s earlier model of an atom with uniformly distributed positive and negative charged particles to be untenable. The very fast, positively charged alpha particles that he employed were found to be deflected sharply in their passage through matter. This effect required an atomic model with a heavy positive scattering centre. Rutherford then suggested that the positive charge of an atom was concentrated in a massive stationary nucleus, with the negative electrons moving in orbits about it, held in the atom by the electric attraction between opposite charges. This “solar system” model, however, could not persist: according to Maxwell’s theory, the revolving electrons should emit electromagnetic radiation, leading to a total collapse of the system in a very short time.

Another sharp break with classical physics was required at this point. It was provided by the Danish physicist Niels Bohr, who suggested that within atoms there were certain specified orbits in which electrons could revolve without emission of electromagnetic radiation. These allowed orbits, or so-called stationary states, are determined by the condition that the angular momentum J of the orbiting electron must be a positive integral multiple of Planck’s constant divided by 2π, that is, J = nh/2π, where the quantum number n may have any positive integer value. This extended “quantization” to dynamics, fixed the possible orbits, and allowed Bohr to calculate their radii and the corresponding energy levels. In 1913, the year in which Bohr’s first work on this subject appeared, the model was confirmed experimentally by the German-born American physicist James Franck and the German physicist Gustav Hertz.
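
Applied to an electron bound to a proton by the Coulomb force, this quantization condition leads to the standard Bohr formulae for the allowed orbit radii and energies of hydrogen (here m and e are the electron’s mass and charge, and ε₀ is the permittivity of free space):

```latex
r_n = n^2 a_0, \qquad
a_0 = \frac{\varepsilon_0 h^2}{\pi m e^2} \approx 5.3 \times 10^{-11}\ \text{m}, \qquad
E_n = -\frac{m e^4}{8 \varepsilon_0^2 h^2}\,\frac{1}{n^2} \approx -\frac{13.6\ \text{eV}}{n^2}
```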

Bohr developed his model much further. He explained how atoms radiate light and other electromagnetic waves, and also proposed that an electron “lifted” by a sufficient disturbance of the atom from the orbit of smallest radius and least energy (the ground state) into another orbit would soon “fall” back to the ground state. This falling back is accompanied by the emission of a single photon of energy E = hf, where E is the difference in energy between the higher and lower orbits. Each shift between orbits emits a characteristic photon of sharply defined frequency and wavelength; thus the single photon emitted in a direct shift from the n = 3 orbit to the n = 1 orbit is quite different from the two photons emitted in a sequential shift from the n = 3 orbit to the n = 2 orbit and then from there to the n = 1 orbit. This model now allowed Bohr to account with great accuracy for the simplest atomic spectrum, that of hydrogen, which had defied classical physics.
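
For instance, the shift from the n = 3 orbit to the n = 2 orbit releases a photon of about 1.9 electronvolts, corresponding to a wavelength of roughly 656 nanometres, the familiar red line of the hydrogen spectrum. A minimal check of that arithmetic, using E_n ≈ -13.6 eV/n², is sketched below.

```python
# Wavelength of the photon emitted in the n = 3 -> n = 2 shift in hydrogen,
# using E_n = -13.6 eV / n^2 and E = hf = hc / wavelength.
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19

E_photon = 13.6 * (1/2**2 - 1/3**2) * eV   # ~1.9 eV
wavelength = h * c / E_photon
print(wavelength)                          # ~6.6e-7 m, the red Balmer line
```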

Although Bohr’s model was extended and refined, it could not explain observations for atoms with more than one electron. It could not even account for the intensity of the spectral colours of the simple hydrogen atom. Because it had no more than a limited ability to predict experimental results, it remained unsatisfactory for theoretical physicists.

G Quantum Mechanics

Within a few years, roughly between 1924 and 1930, an entirely new theoretical approach to dynamics was developed to account for subatomic behaviour. Named quantum mechanics or wave mechanics, it started with the suggestion in 1924 by the French physicist Louis de Broglie that not only electromagnetic radiation but also matter could have wave as well as particle aspects. The wavelength of the so-called matter waves associated with a particle is given by the equation λ = h/mv, where m is the particle mass and v its velocity. Matter waves were conceived of as pilot waves guiding the particle motion, a property that should result in diffraction under suitable conditions. This was confirmed in 1927 by experiments on electron-crystal interactions by the American physicists Clinton Joseph Davisson and Lester Halbert Germer and the British physicist George Paget Thomson. Subsequently, Werner Heisenberg, Max Born, and Ernst Pascual Jordan of Germany and the Austrian physicist Erwin Schrödinger developed de Broglie’s idea into a mathematical form capable of dealing with a number of physical phenomena and with problems that could not be handled by classical physics. In addition to confirming Bohr’s idea regarding the quantization of energy levels in atoms, quantum mechanics now provides an understanding of the most complex atoms, and has also been a guiding spirit in nuclear physics. Although quantum mechanics is usually needed only on the microscopic level (with Newtonian mechanics still satisfactory for macroscopic systems), certain macroscopic effects, such as the properties of crystalline solids, can be satisfactorily explained only by principles of quantum mechanics.
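
The de Broglie relation explains why electron diffraction is observable with crystals: for an electron moving at, say, a million metres per second (an illustrative speed), the matter wavelength is comparable to the spacing between atoms in a crystal lattice.

```python
# De Broglie wavelength of an electron, lambda = h / (m * v).
h = 6.626e-34        # joule-seconds
m = 9.109e-31        # electron mass, kg
v = 1.0e6            # illustrative speed, m/s

wavelength = h / (m * v)
print(wavelength)    # ~7e-10 m, of the order of atomic spacings in crystals
```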

Going beyond de Broglie’s notion of the wave-particle duality of matter, additional important concepts have since been incorporated into the quantum-mechanical picture. These include the discovery that electrons must have some permanent magnetism and, with it, an intrinsic angular momentum, or spin, as a fundamental property. Spin was subsequently found in almost all other elementary particles. In 1925 the Austrian physicist Wolfgang Pauli discovered the exclusion principle, which states that in an atom no two electrons can have precisely the same set of quantum numbers. (Four quantum numbers are needed to specify completely the state of an electron in an atom.) The exclusion principle is vital for an understanding of the structure of the elements and of the periodic table. Heisenberg in 1927 put forward the uncertainty principle, which asserted the existence of a natural limit to the precision with which certain pairs of physical quantities, such as a particle’s position and momentum, can be known simultaneously.
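
In its most common form the uncertainty principle states that the product of the uncertainties in position and momentum cannot be smaller than h/4π. One rough consequence, sketched below, is that an electron confined to a region the size of an atom (an illustrative confinement size) must have a momentum spread corresponding to speeds of several hundred kilometres per second.

```python
# Minimum momentum spread for an electron confined to an atom-sized region,
# from (delta_x)(delta_p) >= h / (4 * pi).
import math

h = 6.626e-34
m = 9.109e-31
delta_x = 1.0e-10                      # roughly the size of an atom, metres (illustrative)

delta_p = h / (4 * math.pi * delta_x)  # ~5e-25 kg m/s
print(delta_p / m)                     # corresponding speed spread, ~6e5 m/s
```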

Finally, a synthesis of quantum mechanics and relativity was made in 1928 by the British mathematical physicist P. A. M. Dirac, leading to the prediction of the existence of the positron and bringing the development of quantum mechanics to a culmination.

Largely as a result of Bohr’s ideas, a statistical approach developed in modern physics. The fully deterministic cause-and-effect relations of Newtonian mechanics were replaced by predictions of future events in terms of statistical probabilities only. The wave properties of matter imply that, in accordance with the uncertainty principle, the motion of a particle can never be predicted with absolute certainty, even if all the forces acting are known. Although this statistical aspect plays no detectable role in macroscopic motions, it is dominant on the molecular, atomic, and subatomic scales.

H Nuclear Physics

In 1896, Becquerel discovered radioactivity in uranium ore. Within a few years radiation from radioactive materials was found to consist of three types of emissions: alpha rays, later found by Rutherford to be the nuclei of helium atoms; beta rays, shown by Becquerel to be very fast electrons; and gamma rays, identified later as very short-wavelength electromagnetic radiation. In 1898 the French physicists Marie Curie and Pierre Curie separated two highly radioactive elements, radium and polonium, from uranium ore, thus showing that radiations could be identified with particular elements. By 1903 Rutherford and the British physical chemist Frederick Soddy had shown that after the emission of alpha or beta rays the emitting element had changed into a different one.

Radioactive processes were shortly thereafter found to be completely statistical; no method exists that could indicate which atom in a radioactive material will decay at any one time. These developments, in addition to leading to Rutherford’s and Bohr’s model of the atom, also suggested that alpha, beta, and gamma rays could only come from the nuclei of very heavy atoms. In 1919 Rutherford bombarded nitrogen with alpha particles and converted it to hydrogen and oxygen, so producing the first artificial transmutation of elements.

Meanwhile, a knowledge of the nature and abundance of isotopes was growing, largely through the development of the mass spectrometer. A model emerged in which the nucleus contained all the positive charge and almost all the mass of the atom. The nuclear-charge carriers were identified as protons, but the nuclear mass could be accounted for only if some additional uncharged particles were present (except in hydrogen). In 1932 the British physicist James Chadwick discovered the neutron, an electrically neutral particle of mass 1.675 × 10⁻²⁷ kg, slightly more than that of the proton. Now nuclei could be understood as consisting of protons and neutrons, collectively called nucleons, and the atomic number of the element was simply the number of protons in the nucleus. On the other hand, the isotope number, also called the atomic mass number, was the total number of neutrons and protons present. Thus, all atoms of oxygen (atomic number 8) have eight protons, but the three isotopes of oxygen, O¹⁶, O¹⁷, and O¹⁸, also contain within their respective nuclei eight, nine, or ten neutrons.
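
Put another way, the neutron count of any isotope is simply the mass number minus the atomic number, as the short sketch below illustrates for the three oxygen isotopes.

```python
# Neutron count = mass number (A) - atomic number (Z).
def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

for a in (16, 17, 18):                               # the three oxygen isotopes, Z = 8
    print(f"O-{a}: {neutron_count(a, 8)} neutrons")  # 8, 9, 10
```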

Positive electric charges repel each other, and because atomic nuclei (except for hydrogen) have more than one proton, they would fly apart except for a strong attractive force, called the nuclear force or strong interaction, that binds the nucleons to each other. The energy associated with this strong force is very great, millions of times greater than the energies characteristic of electrons in their orbits, that is, chemical binding energies. An escaping alpha particle (consisting of two protons and two neutrons) would therefore have to overcome this strong interaction force to escape from a radioactive nucleus such as uranium, yet its measured energy is far too small to surmount the barrier. This apparent paradox was explained by the physicists Edward U. Condon, George Gamow, and Ronald Wilfred Gurney, who applied quantum mechanics to the problem of alpha emission in 1928 and showed that the statistical nature of nuclear processes allowed alpha particles to “leak” out of radioactive nuclei, even though their average energy was insufficient to overcome the nuclear force. Beta decay was explained as the result of the disruption of a neutron within the nucleus: the neutron changes into a proton, which remains in the nucleus, and an electron (the beta particle), which is promptly ejected. The “daughter” nucleus is left with one more proton than its “parent” and thus its atomic number and position in the periodic table are increased by 1. Alpha or beta emission usually leaves the nucleus with excess energy, which it unloads by emitting a gamma-ray photon.

In all these nuclear processes a large amount of energy, given by Einstein’s equation E = mc², is released. When the process is over, the total mass of the product is less than that of the parent, with the mass difference appearing as energy. See Nuclear Energy.

VI DEVELOPMENTS IN PHYSICS SINCE 1930

 The rapid expansion of physics in the past few decades was made possible by the fundamental developments of the first third of the 20th century, coupled with recent technological advances, particularly in computer technology, electronics, nuclear-energy applications, and high-energy particle accelerators.

A Accelerators

Rutherford and other early investigators of nuclear properties were limited to the use of high-energy emissions from naturally radioactive substances to probe the atom. The first artificial high-energy emissions were produced in 1932 by the British physicist John Cockcroft and the Irish physicist Ernest Walton, who used high-voltage generators to accelerate protons to about 700,000 eV and bombarded lithium with them, changing it into helium. One electronvolt is the energy gained by an electron when the accelerating voltage is 1 volt; it is equivalent to about 1.6 × 10⁻¹⁹ joule. Modern accelerators produce energies measured in million electronvolts (usually written mega-electronvolts, or MeV), billion electronvolts (giga-electronvolts, or GeV), or trillion electronvolts (tera-electronvolts, or TeV). Higher-voltage sources were first made possible by the invention, also in 1932, of the Van de Graaff generator by the American physicist Robert J. Van de Graaff.
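
The definition makes the conversion to ordinary energy units straightforward; the sketch below expresses both Cockcroft and Walton’s 700,000 eV protons and a modern 1 TeV beam in joules.

```python
# Converting accelerator energies from electronvolts to joules.
eV = 1.602e-19        # joules per electronvolt

print(7.0e5 * eV)     # 700,000 eV  -> ~1.1e-13 J
print(1.0e12 * eV)    # 1 TeV       -> ~1.6e-7 J
```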

This was followed almost immediately by the invention of the cyclotron by the American physicists Ernest Orlando Lawrence and Milton Stanley Livingston. The cyclotron uses a magnetic field to bend the paths of charged particles into circles, and during each half-revolution the particles are given a small electric “kick” until they accumulate the high energy desired. Protons could be accelerated to about 10 MeV by a cyclotron, but higher energies had to await the development of the synchrotron after the end of World War II, based on the ideas of the American physicist Edwin Mattison McMillan and the Soviet physicist Vladimir I. Veksler. After World War II, accelerator design made rapid progress, and accelerators of many types were built, producing high-energy beams of electrons, protons, deuterons, heavier ions, and X-rays. For example, the accelerator at the Stanford Linear Accelerator Center (SLAC) in Stanford, California, accelerates electrons down a straight “runway”, 3.2 km (2 mi) long, by the end of which they have attained an energy of more than 20 GeV.
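
The timing of those electric “kicks” is set by the rate at which the particles circulate. For a non-relativistic particle this cyclotron frequency is qB/2πm; the sketch below evaluates it for a proton in a magnetic field of 1 tesla (an assumed, illustrative field strength).

```python
# Non-relativistic cyclotron frequency, f = q * B / (2 * pi * m).
import math

q = 1.602e-19        # proton charge, coulombs
m = 1.673e-27        # proton mass, kg
B = 1.0              # assumed magnetic field, tesla

f = q * B / (2 * math.pi * m)
print(f)             # ~1.5e7 Hz, i.e. about 15 MHz
```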

While lower-energy accelerators are used in various applications in industry and laboratories, the most powerful ones are used in studying the structure of elementary particles, the fundamental building blocks of nature. In such studies elementary particles are broken up by hitting them with beams of projectiles, which are usually protons or electrons. The distribution of the fragments yields information on the structure of the elementary particles.

To obtain more detailed information in this manner, the use of more energetic projectiles is necessary. Since the acceleration of a projectile is achieved by “pushing” it from behind, to obtain more energetic projectiles it is necessary to keep pushing for a longer time. Thus, high-energy accelerators are generally larger. The highest beam energy reached at the end of World War II was less than 100 MeV. A bigger accelerator, reaching 3 GeV, was built in the early 1950s at the Brookhaven National Laboratory at Upton, New York. A breakthrough in accelerator design occurred with the introduction of the strong focusing principle in 1952 by the American physicists Ernest D. Courant, Livingston, and Hartland S. Snyder. Today the world’s largest accelerators are built to produce beams of protons beyond 1 TeV. See Particle Accelerators.

B Particle Detectors

Detection and analysis of elementary particles were first accomplished through the ability of these particles to affect photographic emulsions and to energize fluorescent materials. The paths of ionized particles were first observed by the British physicist C. T. R. Wilson in a cloud chamber, where water droplets condensed on the ions produced by the particles during their passage. Electric or magnetic fields could be used to bend the particle paths, yielding information about their momentum and electric charges. A significant advance on the cloud chamber was the bubble chamber, first constructed by the American physicist Donald Arthur Glaser in 1952. It uses a liquid, usually hydrogen, instead of air, and the ions produced by a fast particle become centres of boiling, leaving an observable bubble track. Because the density of the liquid is much higher than that of air, more interactions take place in a bubble chamber than in a cloud chamber. Furthermore, the bubbles clear out faster than water droplets, allowing more frequent cycling of the bubble chamber. A third development, the spark chamber, evolved in the 1950s. In this device, many parallel plates are kept at a high voltage in a suitable gas atmosphere. An ionizing particle passing between the plates breaks down the gas, forming sparks that delineate its path.

A different type of detector, the discharge counter, was developed early in the 20th century, largely by the German physicist Hans Geiger, and was later improved by the German-American physicist Walther Müller. It is now commonly known as the Geiger-Müller counter, or simply as the Geiger counter, and although small and convenient, it has been largely replaced by faster solid-state counting devices, such as the scintillation counter, developed about 1947 by the German-American physicist Hartmut Paul Kallmann and others. It uses the ability of ionizing particles to produce a flash of light as they pass through certain organic crystals and liquids. See Particle Detectors.

C Cosmic Rays

About 1911 the Austrian-American physicist Victor Franz Hess discovered cosmic rays. Primary cosmic rays consist of particles originating outside the Earth’s atmosphere. Secondary rays consist of particles and radiation produced by the collision of primary cosmic-ray particles with atoms in the atmosphere. Cosmic rays were later found to arrive in a pattern determined by the Earth’s magnetic field, to be positively charged, and to consist mostly of protons with energies ranging from about 1 GeV to 10¹¹ GeV. Cosmic rays trapped in orbits around the Earth account for the Van Allen radiation belts discovered by the first United States artificial satellite, launched in 1958.

When a very energetic primary proton smashes into the atmosphere and collides with the nitrogen and oxygen nuclei present, it produces large numbers of different secondary particles that spread towards the Earth as a cosmic-ray shower. The origin of the primary cosmic-ray protons is not yet fully understood. Some undoubtedly come from the Sun and the other stars, but it is difficult to account for the highest energies: the likelihood is that weak galactic fields operate over very long periods to accelerate interstellar protons (see Galaxy; Milky Way).

D Elementary Particles

To the electron, proton, neutron, and photon have been added a number of other fundamental particles. In 1932 the American physicist Carl David Anderson discovered the anti-electron, or positron, predicted in 1928 by Dirac. Anderson found that an energetic cosmic gamma ray could disappear near a heavy nucleus, creating an electron-positron pair out of pure energy. When a positron subsequently meets an electron, they annihilate each other in a burst of photons.

D1 Discovery of the Muon

In 1935 the Japanese physicist Yukawa Hideki developed a theory explaining how a nucleus is held together, despite the mutual repulsion of its protons, by postulating the existence of a particle intermediate in mass between the electron and the proton. In 1936 Anderson and his co-workers discovered a new particle of 207 electron masses in secondary cosmic radiation; now called the muon, it was at first thought to be Yukawa’s nuclear “glue”. Subsequent experiments by the British physicist Cecil Frank Powell and others led to the discovery of a somewhat heavier particle of 270 electron masses, the pi-meson or pion (also obtained from secondary cosmic radiation), which was eventually identified as the missing link in Yukawa’s theory.
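
Expressed in the energy units used by particle physicists, these masses follow directly from the electron’s rest energy of about 0.511 MeV: the muon comes out near 106 MeV and the pion near 138 MeV.

```python
# Converting masses quoted in electron masses into rest energies (MeV).
electron_rest_energy = 0.511        # MeV

print(207 * electron_rest_energy)   # muon: ~106 MeV
print(270 * electron_rest_energy)   # pion: ~138 MeV
```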

Many additional particles have since been found in secondary cosmic radiation and through the use of large accelerators. They include numerous massive particles, classed as hadrons (particles that take part in the strong nuclear interaction, which binds atomic nuclei together), including hyperons and various heavy mesons with masses ranging from about one to three proton masses; and so-called intermediate vector bosons such as the W and Z⁰ particles, the carriers of the weak nuclear force. They may be electrically neutral, positive, or negative, but never have more than one elementary electric charge e. Enduring from 10⁻⁸ to 10⁻¹⁴ sec, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws, involving quantum numbers such as baryon number, strangeness, and isotopic spin.

In 1931 Pauli, in order to explain the apparent failure of some conservation laws in certain radioactive processes, proposed the existence of electrically neutral particles of zero or near-zero mass that could carry away energy and momentum. This idea was further developed by the Italian-born American physicist Enrico Fermi, who named the missing particle the neutrino. Uncharged and highly unreactive, it is elusive, easily able to penetrate the entire Earth with only a small likelihood of capture. Nevertheless, it was eventually discovered in a difficult experiment performed by the Americans Frederick Reines and Clyde Lorrain Cowan, Jr. Understanding of the internal structure of protons and neutrons has also been derived from the experiments of the American physicist Robert Hofstadter, using fast electrons from linear accelerators.

In the late 1940s a number of experiments with cosmic rays revealed new types of particles, the existence of which had not been anticipated. They were called strange particles, and their properties were studied intensively in the 1950s. Then, in the 1960s, many new particles were found in experiments with the large accelerators. The electron, proton, neutron, photon, and all the particles discovered since 1932 are collectively called elementary particles. However, the term is actually a misnomer, for most of the particles have been found to have a very complicated internal structure.

Elementary particle physics is concerned with (1) the internal structure of these building blocks and (2) how they interact with one another to form nuclei. The physical principles that explain how atoms and molecules are built from nuclei and electrons are already known. At present, vigorous research is being conducted on both fronts in order to learn the physical principles upon which all matter is built.

The dominant theory of the internal structure of hadrons involves quarks, which are subparticles of fractional charge; a proton, for example, is made up of three quarks. This theory was first proposed in 1964 by the American physicists Murray Gell-Mann and George Zweig. Nucleons consist of triplets of quarks, while mesons consist of quark-antiquark pairs. Isolated quarks cannot be produced by any known process in the modern universe, but they are believed to have existed singly in the extreme conditions found during the very creation of the universe. The theory originally needed three kinds of quarks, but later experiments, especially the discovery of the J/psi particle in 1974 by the American physicists Samuel C. C. Ting and Burton Richter, called for the introduction of three additional kinds.
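
The fractional charges add up neatly to the observed nucleon charges. In the now-standard naming (which the passage above does not use), a proton is two “up” quarks of charge +2/3 e and one “down” quark of charge -1/3 e, while a neutron is one up quark and two down quarks.

```python
# Quark charges, in units of the elementary charge e; the "up"/"down" labels
# are the now-standard names, introduced here for illustration.
up, down = 2/3, -1/3

print(2*up + down)   # proton  (up, up, down):   +1
print(up + 2*down)   # neutron (up, down, down):  0
```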

D2 Unified Field Theories

The most successful theories of interactions between elementary particles, thus far, are called gauge theories. In these, the interaction between two kinds of particles is characterized by symmetry. The symmetry between neutrons and protons, for example, is such that if the identities of the particles are interchanged, nothing changes as far as the strong force is concerned. The first of the gauge theories applied to the electric and magnetic interactions between charged particles. Here, the symmetry consists in the fact that changes in the combination of electric and magnetic potentials have no effect on the results. A powerful gauge theory, which has since been verified, was that proposed independently by the American physicist Steven Weinberg and the Pakistani physicist Abdus Salam in 1967 and 1968. Their model linked intermediate vector bosons with the photon, thus uniting the electromagnetic and weak interactions, although only for leptons (particles that do not “feel” the strong force). Later work by Sheldon Lee Glashow, J. Iliopoulos, and L. Maiani showed how the model could be applied to hadrons (the strongly interacting particles) as well.

Gauge theory can in principle be applied to any force field, holding out the possibility that all the interactions, or forces, can be brought together into a single unified field theory. Such efforts inevitably involve the concept of symmetry. Generalized symmetries extend to particle interchanges that vary from point to point in space and time. The difficulty for physicists is that such symmetries, while mathematically elegant, do not extend scientific understanding of the underlying nature of matter. For this reason, many physicists are exploring the possibilities of so-called supersymmetry theories, which would directly relate fermions and bosons. The theory involves further particle “twins” to those now known, differing only in spin. Doubts have been expressed about such efforts, but another approach known as superstring theory is attracting a good deal of interest. In such theories, fundamental particles are considered not as point objects but as “strings” that extend one-dimensionally to lengths of no more than 10⁻³⁵ metres. Such theories solve a number of problems for the physicists who are working on unified field theories, but they are still only highly theoretical constructs.

E Nuclear Physics

In 1931 the American physicist Harold Clayton Urey discovered the hydrogen isotope deuterium and made heavy water from it. The deuterium nucleus, or deuteron (one proton plus one neutron), makes an excellent bombarding particle for inducing nuclear reactions. The French physicists Irène and Frédéric Joliot-Curie produced the first artificially radioactive nucleus in 1933-1934, leading to the production of radioisotopes for use in archaeology, biology, medicine, chemistry, and other sciences.

Fermi and many collaborators attempted a series of experiments to produce elements beyond uranium by bombarding uranium with neutrons. They succeeded, and now at least a dozen such transuranic elements have been made. As their work continued, an even more important discovery was made. Irène Joliot-Curie, the German physicists Otto Hahn and Fritz Strassmann, the Austrian physicist Lise Meitner, and the British physicist Otto Robert Frisch found that some uranium nuclei broke into two parts, a phenomenon called nuclear fission. At the same time, a huge amount of energy was released by mass conversion, as well as some neutrons. These results suggested the possibility of a self-sustained chain reaction, and this was achieved by Fermi and his group in 1942, when the first nuclear reactor went into operation. Technological developments followed rapidly; the first atomic bomb was produced in 1945 as a result of a massive programme under the direction of the American physicist J. Robert Oppenheimer, and the first nuclear power reactor for the production of electricity went into operation in Britain in 1956, yielding 78 megawatts. See Nuclear Weapons.

Further developments were based on the investigation of the energy source of the stars, which the German-American physicist Hans Bethe showed to be a series of nuclear reactions occurring at temperatures of millions of degrees. In these reactions, four hydrogen nuclei are converted into a helium nucleus, with two positrons and massive amounts of energy forming the by-products. This nuclear-fusion process was adopted in modified form, largely based on ideas developed by the Hungarian-American physicist Edward Teller, as the basis of the fusion or hydrogen bomb. First detonated in 1952, it is a weapon much more powerful than the fission bomb, a small fission bomb providing the necessary high triggering temperature.
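
The energy yield of this process can be estimated directly from the masses involved: four hydrogen nuclei are about 0.7 per cent heavier than the helium nucleus they form, and that small mass difference corresponds to roughly 27 MeV per helium nucleus. The sketch below uses standard atomic mass values, which are assumed rather than taken from the text.

```python
# Energy released when four hydrogen nuclei fuse into one helium nucleus,
# from the mass difference and E = mc^2 (standard atomic masses assumed).
u_in_MeV = 931.5           # rest energy of one atomic mass unit, in MeV
m_hydrogen = 1.00783       # atomic mass units
m_helium = 4.00260         # atomic mass units

mass_loss = 4 * m_hydrogen - m_helium   # ~0.029 u, about 0.7% of the original mass
print(mass_loss * u_in_MeV)             # ~27 MeV per helium nucleus formed
```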

Much current research is devoted to producing a controlled, rather than an explosive, fusion device, which would be less radioactive than a fission reactor and would provide an almost limitless source of energy. In December 1993 significant progress was made towards this goal when researchers at Princeton University used the Tokamak Fusion Test Reactor to produce a controlled fusion reaction that output 5.6 megawatts of power. However, the tokamak consumed more power than it produced during its operation.

F Solid-State Physics

In solids, the atoms are closely packed, leading to strong interactive forces and numerous interrelated effects that are not observed in gases, where the molecules largely act independently. These interaction effects give rise to the mechanical, thermal, electrical, magnetic, and optical properties of solids, an area that remains difficult to handle theoretically, although much progress has been made.

A principal characteristic of most solids is their crystalline structure, with the atoms arranged in regular and geometrically repeating arrays (see Crystal). The specific arrangement of the atoms may arise from a variety of forces. Some solids, such as sodium chloride, or common salt, are held together by ionic bonds originating in the electrical attraction between the ions of which the materials are composed. In others, such as diamond, atoms share electrons, giving rise to covalent bonding. Inert substances, such as neon, exhibit neither of these bonds; their atoms are held together in the solid by the so-called van der Waals forces, named after the Dutch physicist Johannes Diderik van der Waals. These forces exist between neutral molecules or atoms as a result of electric polarization. Metals, on the other hand, are bonded by a so-called electron gas, or electrons that are freed from the outer atomic shell and shared by all atoms, and that define most properties of the metal (see Metallography).

The sharp, discrete energy levels permitted to the electrons in individual atoms become broadened into energy bands when the atoms are closely packed in a solid. The width and separation of these bands define many of the material’s properties. For example, a so-called forbidden band, in which no electrons may exist, restricts the motion of electrons and makes the material a good electrical and thermal insulator, while the overlapping of energy bands, with the associated ease of electron motion, makes a metal a good conductor of electricity and heat. If the forbidden band is narrow, a few fast electrons may be able to jump across it, yielding a semiconductor. In this case the energy-band spacing may be greatly affected by minute amounts of impurities, such as arsenic in silicon. The lowering of a high-energy band by the impurity produces a so-called donor of electrons, giving an n-type semiconductor. The raising of a low-energy band by an impurity such as gallium produces an acceptor, in which the vacancies, or “holes”, in the electron structure act like mobile positive charges and are characteristic of p-type semiconductors. A number of modern electronic devices, notably the transistor, developed by the American physicists John Bardeen, Walter Houser Brattain, and William Bradford Shockley, are based on these semiconductor properties.

Magnetic properties in a solid arise because the electrons act like tiny magnetic dipoles. Almost all solid properties depend on temperature. Thus, ferromagnetic materials, including iron and nickel, lose their normal strong residual magnetism at a characteristic high temperature, called the Curie temperature. Electrical resistance usually decreases with decreasing temperature, and for certain materials, called superconductors, it becomes extremely low near absolute zero (see Superconductivity). These and many other phenomena observed in solids depend on energy quantization and can best be described in terms of effective “particles” with such names as phonons, polarons, and magnons.

G Cryogenics

At very low temperatures (near absolute zero), many materials exhibit strikingly novel characteristics (see Cryogenics). At the beginning of the 20th century the Dutch physicist Heike Kamerlingh Onnes developed techniques for producing these low temperatures and discovered the superconductivity of mercury, which loses all electrical resistance at about 4 kelvins. Many other elements, alloys, and compounds do the same at their characteristic near-zero temperature, with originally magnetic materials becoming magnetic insulators. Since 1986, a number of materials have been made that are superconductive at higher temperatures. The theory of superconductivity, developed largely by John Bardeen and two other American physicists, Leon N. Cooper and John Robert Schrieffer, is extremely complicated, involving the pairing of electrons in the crystal lattice.

Another fascinating discovery was that helium does not freeze but changes at about 2 kelvins from an ordinary liquid, He I, to the superfluid He II, which has no viscosity and has a thermal conductivity about 1,000 times greater than that of silver. Films of He II can creep up the walls of their containing vessels and He II can readily permeate some materials like platinum. No fully satisfactory theory is yet available for this behaviour.

H Plasma Physics

A plasma is any substance (usually a gas) from whose atoms one or more electrons have become detached and that has therefore become ionized. The detached electrons remain, however, in the gas volume, which overall remains electrically neutral. The ionization can be effected by the introduction of large concentrations of energy, such as bombardment with fast external electrons, irradiation with laser light, or heating to very high temperatures. The individually charged plasma particles respond to electric and magnetic fields and can therefore be manipulated and contained.

Plasmas are found in gas-filled light sources such as a neon lamp, in interstellar space where residual hydrogen is ionized by radiation, and in stars whose high interior temperatures produce a high degree of ionization, a process closely connected with the nuclear fusion that supplies the energy of stars. For the hydrogen nuclei to fuse into heavier nuclei, they must be fast enough to overcome their mutual electrical repulsion. This implies a high temperature (millions of degrees). In order to produce a controlled fusion, or thermonuclear reaction, it is necessary to generate and contain plasmas magnetically; this is an important but difficult problem that falls in the field of magnetohydrodynamics.

I Lasers

An important recent development is that of the laser, an acronym for light amplification by stimulated emission of radiation. In lasers, which may have gases, liquids, or solids as the working substance, a large number of atoms are raised to a high energy level and caused to release this energy simultaneously, producing coherent light in which all waves are in phase. The coherence of the light allows for very high-intensity, sharp-wavelength light beams that remain narrow over tremendous distances. They are far more intense than light from any other source. Continuous lasers can deliver hundreds of watts, and pulsed lasers can produce millions of watts of power for very short periods. Developed during the 1950s and 1960s, largely by the American engineer and inventor Gordon Gould and the American physicists Charles Hard Townes, T. H. Maiman, Arthur Schawlow, and Ali Javan, the laser today has become an extremely powerful tool in research and technology, with applications in communications, medicine, navigation, metallurgy, fusion, and cutting materials.

J Astrophysics and Cosmology

Since World War II astronomers have made many important discoveries, such as quasars, pulsars (see Star), and the cosmic background radiation. These have challenged the ability of current physics to explain them, and have stimulated the development of theory in such areas as gravitation and elementary particle physics. It is now widely accepted that all the matter accessible to people’s observation was originally tightly packed in one location and that between 10 and 20 billion years ago it exploded in one titanic event, the big bang. The explosion has led to a universe that is still expanding. A puzzling aspect of this universe, recently revealed, is that the galaxies are not uniformly distributed. Instead, vast voids are bordered by galactic clusters shaped like filaments. The pattern of these voids and filaments is powerful evidence for the nature of the matter emerging from the big bang. It suggests the strong possibility that familiar forms of matter were outweighed by exotic dark matter. This is just one of the ways in which the physics of the very large has converged with the physics of the very small. See also Inflationary Theory.


Contents - Science

Science
I INTRODUCTION

Science (Latin, scientia, from scire, “to know”), term used in its broadest sense to denote systematized knowledge in any field, but usually applied to the organization of objectively verifiable sense experience. The pursuit of knowledge in this context is known as pure science, to distinguish it from applied science, which is the search for practical uses of scientific knowledge, and from technology, through which applications are realized. For additional information, see separate articles on most of the sciences mentioned.

II ORIGINS OF SCIENCE

Efforts to systematize knowledge can be traced back to prehistoric times, through the designs that Palaeolithic people painted on the walls of caves, through numerical records that were carved in bone or stone, and through artefacts surviving from Neolithic civilizations. The oldest written records of protoscientific investigations come from Mesopotamian cultures; lists of astronomical observations, chemical substances, and disease symptoms, as well as a variety of mathematical tables, were inscribed in cuneiform characters on clay tablets. Other tablets dating from about 2000 bc show that the Babylonians had knowledge of Pythagoras' Theorem, solved quadratic equations, and developed a sexagesimal system of measurement (based on the number 60) from which modern time and angle units stem. (see Number Systems; Numerals.)

From almost the same period, papyrus documents have been discovered in the Nile Valley, containing information on the treatment of wounds and diseases, on the distribution of bread and beer, and on working out the volume of a portion of a pyramid. Some of the present-day units of length can be traced back to Egyptian prototypes, and the calendar in common use today is the indirect result of pre-Hellenic astronomical observations.

III RISE OF SCIENTIFIC THEORY

Scientific knowledge in Egypt and Mesopotamia was chiefly of a practical nature, with little rational organization. Among the first Greek scholars to seek the fundamental causes of natural phenomena was the philosopher Thales, in the 6th century bc, who introduced the concept that the Earth was a flat disc floating on the universal element, water. The mathematician and philosopher Pythagoras, who followed him, established a movement in which mathematics became a discipline fundamental to all scientific investigation. The Pythagorean scholars postulated a spherical Earth moving in a circular orbit about a central fire. In Athens, in the 4th century bc, Ionian natural philosophy and Pythagorean mathematical science combined to produce the syntheses of the logical philosophies of Plato and Aristotle. At the Academy of Plato, deductive reasoning and mathematical representation were emphasized; at the Lyceum of Aristotle, inductive reasoning and qualitative description were stressed. The interplay between these two approaches to science has led to most subsequent advances.

During the so-called Hellenistic Age following the death of Alexander the Great, the mathematician, astronomer, and geographer Eratosthenes made a remarkably accurate measurement of the Earth. Also, the astronomer Aristarchus of Samos espoused a heliocentric (Sun-centred) planetary system, although this concept did not gain acceptance in ancient times. The mathematician and inventor Archimedes laid the foundations of mechanics and hydrostatics (part of fluid mechanics); the philosopher and scientist Theophrastus became the founder of botany; the astronomer Hipparchus developed trigonometry; and the anatomists and physicians Herophilus and Erasistratus based anatomy and physiology on dissection.

Following the destruction of Carthage and Corinth by the Romans in 146 bc, scientific inquiry lost its impetus until a brief revival took place in the 2nd century ad under the Roman emperor and philosopher Marcus Aurelius. At this time the geocentric (Earth-centred) Ptolemaic System, advanced by the astronomer Ptolemy, and the medical works of the physician and philosopher Galen became standard scientific treatises for the ensuing age. A century later the new experimental science of alchemy arose, springing from the practice of metallurgy. By 300, however, alchemy had acquired an overlay of secrecy and symbolism that obscured the advantages such experimentation might have brought to science.

IV MEDIEVAL AND RENAISSANCE SCIENCE

During the Middle Ages, six leading culture groups were in existence: the Latin West, the Greek East, the Chinese, the East Indian, the Arabic, and the Mayan. The Latin group contributed little to science before the 13th century, the Greek never rose above paraphrases of ancient learning, and the Mayan had no influence on the growth of science. In China, science enjoyed periods of progress, but no sustained drive existed. Chinese mathematics reached its zenith in the 13th century with the development of ways of solving algebraic equations by means of matrices, and with the use of the arithmetic triangle. More important, however, was the impact on Europe of several practical Chinese innovations. These included the processes for manufacturing paper and gunpowder, the use of printing, and the mariner's compass. In India, the chief contributions to science were the formulation of the so-called Hindu-Arabic numerals, which are in use today, and the conversion of trigonometry to a quasi-modern form. These advances were transmitted first to the Arabs, who combined the best elements from Babylonian, Greek, Chinese, and Hindu sources. By the 9th century, Baghdad, on the River Tigris, had become a centre for the translation of scientific works, and in the 12th century this learning was transmitted to Europe through Spain, Sicily, and Byzantium.

Recovery of ancient scientific works at European universities led, in the 13th century, to controversy over scientific method. The so-called realists espoused the Platonic approach, whereas the nominalists preferred the views of Aristotle. At the universities of Oxford and Paris, such discussions led to advances in optics and kinematics that paved the way for Galileo and the German astronomer Johannes Kepler.

The Black Death and the Hundred Years' War disrupted scientific progress for more than a century, but by the 16th century a revival was well under way. In 1543 the Polish astronomer Nicolaus Copernicus published De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Bodies), which revolutionized astronomy. Also published in 1543, De Humani Corporis Fabrica (On the Structure of the Human Body) by the Belgian anatomist Andreas Vesalius corrected and modernized the anatomical teachings of Galen and led to the discovery of the circulation of the blood. Two years later the Ars Magna (Great Art) of the Italian mathematician, physician, and astrologer Gerolamo Cardano initiated the modern period in algebra with the solution of cubic and quartic equations.

V MODERN SCIENCE

Essentially modern scientific methods and results appeared in the 17th century because of Galileo's successful combination of the functions of scholar and artisan. To the ancient methods of induction and deduction, Galileo added systematic verification through planned experiments, using newly invented scientific instruments such as the telescope, the microscope, and the thermometer. Later in the century, experimentation was widened through the use of the barometer by the Italian mathematician and physicist Evangelista Torricelli; the pendulum clock by the Dutch mathematician, physicist, and astronomer Christiaan Huygens; and the exhaust pump by the English physicist and chemist Robert Boyle and the German physicist Otto von Guericke.

The culmination of these efforts was the universal law of gravitation, published in 1687 by the English mathematician and physicist Isaac Newton in Philosophiae Naturalis Principia Mathematica. At the same time, the invention of calculus by Newton and the German philosopher and mathematician Gottfried Wilhelm Leibniz laid the foundation of today's sophisticated level of science and mathematics.

The scientific discoveries of Newton and the philosophical system of the French mathematician and philosopher René Descartes provided the background for the materialistic science of the 18th century, in which life processes were explained on a physicochemical basis. Confidence in the scientific attitude carried over to the social sciences and inspired the so-called Age of Enlightenment, which culminated in the French Revolution of 1789. The French chemist Antoine Laurent Lavoisier published Traité élémentaire de chimie (Elementary Treatise on Chemistry, 1789), with which the revolution in quantitative chemistry opened.

Scientific developments during the 18th century paved the way for the following “century of correlation”, so called for its broad generalizations in science. These included the atomic theory of matter postulated by the British chemist and physicist John Dalton; the electromagnetic theories of Michael Faraday and James Clerk Maxwell, also of the United Kingdom; and the law of the conservation of energy, enunciated by the British physicist James Prescott Joule and others.

The most comprehensive of the biological theories was that of evolution, put forward by Charles Darwin in his On the Origin of Species by Means of Natural Selection (1859), which stirred as much controversy in society at large as the work of Copernicus. By the beginning of the 20th century, however, the fact, but not the mechanism, of evolution was generally accepted, with disagreement centring on the genetic processes through which it occurs.

But as biology became more firmly based, physics was shaken by the unexpected consequences of quantum theory and relativity. In 1927 the German physicist Werner Heisenberg formulated the so-called uncertainty principle, which held that limits existed on the extent to which, on the subatomic scale, coordinates of an individual event can be determined. In other words, the principle stated the impossibility of predicting, with precision, that a particle such as an electron would be in a certain place at a certain time, moving at a certain velocity. Quantum mechanics instead dealt with statistical inferences relating to large numbers of individual events.

VI SCIENTIFIC COMMUNICATION

Throughout history, scientific knowledge has been transmitted chiefly through written documents, some of which are more than 4,000 years old. From ancient Greece, however, no substantial scientific work survives from the period before the Elements of the geometrician Euclid (c. 300 bc). Of the treatises written by leading scientists after that time, only about half still exist. Some of these are in Greek, and others were preserved through translation by Arab scholars in the Middle Ages. Medieval schools and universities were largely responsible for preserving these works and for fostering scientific activity.

Since the Renaissance, however, this work has been shared by scientific societies; the oldest such society, which still survives, is the Accademia dei Lincei (to which Galileo belonged), established in 1603 to promote the study of mathematical, physical, and natural sciences. Later in the century, governmental support of science led to the founding of the Royal Society of London (1662) and the Académie des Sciences de Paris (1666). These two organizations initiated publication of scientific journals, the former under the title Philosophical Transactions and the latter as Mémoires.

During the 18th century academies of science were established by other leading nations. In the United States, a club organized in 1727 by Benjamin Franklin became, in 1769, the American Philosophical Society for “promoting useful knowledge”. In 1780 the American Academy of Arts and Sciences was organized by John Adams, who became the second US president in 1797. In 1831 the British Association for the Advancement of Science met for the first time, followed in 1848 by the American Association for the Advancement of Science, and in 1872 by the Association Française pour l'Avancement des Sciences. These national organizations issue the journals Nature, Science, and Comptes Rendus, respectively. The number of scientific journals grew so rapidly during the early 20th century that A World List of Scientific Periodicals Published in the Years 1900-1933 contained some 36,000 entries in 18 languages. A large number of these are issued by specialized societies devoted to individual sciences, and most of them are fewer than 100 years old.

Since late in the 19th century, communication among scientists has been facilitated by the establishment of international organizations, such as the International Bureau of Weights and Measures (1873) and the International Council of Research (1919). The latter is a scientific federation subdivided into international unions for each of the various sciences. The unions hold international congresses every few years, the transactions of which are usually published. In addition to national and international scientific organizations, numerous major industrial firms have research departments; some of them regularly publish accounts of the work done or else file reports with government patent offices, which in turn print abstracts in bulletins that are published periodically.

VII FIELDS OF SCIENCE

Knowledge of nature originally was largely an undifferentiated observation and interrelation of experiences. The Pythagorean scholars distinguished only four sciences: arithmetic, geometry, music, and astronomy. By the time of Aristotle, however, other fields could also be recognized: mechanics, optics, physics, meteorology, zoology, and botany. Chemistry remained outside the mainstream of science until the time of Robert Boyle in the 17th century, and geology achieved the status of a science only in the 18th century. By that time the study of heat, magnetism, and electricity had become part of physics. During the 19th century scientists finally recognized that pure mathematics differs from the other sciences in that it is a logic of relations and does not depend for its structure on the laws of nature. Its applicability in the elaboration of scientific theories, however, has resulted in its continued classification among the sciences.

The pure natural sciences are generally divided into two classes: the physical sciences and the biological, or life, sciences. The principal branches among the former are physics, astronomy, chemistry, and geology; the chief biological sciences are botany and zoology. The physical sciences can be subdivided to identify such fields as mechanics, cosmology, physical chemistry, and meteorology; physiology, embryology, anatomy, genetics, and ecology are subdivisions of the biological sciences.

All classifications of the pure sciences, however, are arbitrary. In the formulations of general scientific laws, interlocking relationships among the sciences are recognized. These interrelationships are considered responsible for much of the progress today in several specialized fields of research, such as molecular biology and genetics. Several interdisciplinary sciences, such as biochemistry, biophysics, biomathematics, and bioengineering, have arisen, in which life processes are explained physicochemically. Biochemists, for example, synthesized deoxyribonucleic acid (DNA); and the cooperation of biologists with physicists led to the invention of the electron microscope, through which structures little larger than atoms can be studied. The application of these interdisciplinary methods is also expected to produce significant advances in the fields of social sciences and behavioural sciences.

The applied sciences include such fields as aeronautics, electronics, engineering, and metallurgy, which are applied physical sciences, and agronomy and medicine, which are applied biological sciences. In this case also, overlapping branches must be recognized. The cooperation, for example, between iatrophysics (a branch of medical research based on principles of physics) and bioengineering resulted in the development of the heart-lung machine used in open-heart surgery and in the design of artificial organs such as heart chambers and valves, kidneys, blood vessels, and inner-ear bones. Advances such as these are generally the result of research by teams of specialists representing different sciences, both pure and applied. This interrelationship between theory and practice is as important to the growth of science today as it was at the time of Galileo. (See Also Philosophy of Science.)


Article - Young, Thomas

Young, Thomas

Young, Thomas (1773-1829), British physicist, doctor, and Egyptologist, best known for his outstanding contributions in the field of optics. Young was born in Milverton, Somerset, and educated at the Universities of Edinburgh, Göttingen, and Cambridge. In 1796 he obtained a medical degree at Göttingen, and in 1799 he began to practise medicine in London. From 1802 until his death he was foreign secretary of the Royal Society. In 1811 Young was appointed to the staff of St George's Hospital, London. He served on several official scientific commissions, and after 1818 he was secretary to the Board of Longitude and editor of the Nautical Almanac.

In the science of optics, Young discovered the phenomenon of interference, which helped to establish the wave nature of light. He was the first to describe and measure astigmatism and to develop a physiological explanation of colour sensation. Young is also noted for his work on the theories of capillarity and elasticity. He assisted in deciphering the Egyptian hieroglyphs inscribed on the Rosetta Stone. Among his important writings are works on medicine, Egyptology, and physics.

9

Article - Prism (optics)

Prism (optics)

Prism (optics), block of glass or other transparent material that has the same cross-section—usually a triangle—along its length. The two commonest types of prism have cross-sections that are 60° and 45° triangles. Prisms have various effects on light passing through them.

When a ray of white light is directed on to a 60° prism, its different-coloured components are refracted, or bent, to a different extent on passing through each surface, so that a coloured band of light called a spectrum is produced. This is known as dispersion, and is caused by the fact that the different colours of light have different wavelengths, and are slowed down to different extents when they pass through the glass—red light being slowed least, and violet light most. The 17th-century English physicist Isaac Newton was the first to conclude, from experiments with prisms, that ordinary sunlight is a mixture of all the different colours.

When a ray of light is directed at a suitable angle into a prism, it strikes the face of the prism internally at an angle greater than the critical angle (see Optics; Geometrical Optics), and is therefore totally internally reflected. The prism then acts as a highly efficient mirror, and this arrangement is used in many optical instruments, such as periscopes and binoculars.
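
As a rough illustration of why such a prism works as a mirror, the sketch below compares the internal angle of incidence on the hypotenuse of a 45° prism with the critical angle. The refractive index of 1.5 and the 45° geometry are illustrative assumptions, not values stated above.

```python
import math

n_glass = 1.5           # assumed refractive index of the prism glass
angle_incidence = 45.0  # internal angle (degrees) on the hypotenuse of a 45 degree prism

# Critical angle: the internal angle whose angle of refraction would be 90 degrees
critical_angle = math.degrees(math.asin(1.0 / n_glass))

print(f"critical angle ~ {critical_angle:.1f} degrees")                    # ~41.8
print("totally internally reflected:", angle_incidence > critical_angle)   # True
```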

10

Contents - Wave Motion

Wave Motion
I INTRODUCTION

Wave Motion, mechanism by which energy is conveyed from one place to another in waves without the transfer of matter. Although matter is not moved from one place to another, many sorts of wave motion can occur only in matter. At any point along the path of transmission a periodic displacement, or oscillation, occurs about a neutral position. The oscillation may be of air molecules, as in the case of sound travelling through the atmosphere; of water molecules, as in waves occurring on the surface of the ocean; or of portions of a rope or a wire spring. In each of these cases the particles oscillate about their own equilibrium position and only the energy moves continuously in one direction. Such waves are called mechanical because the energy is transmitted through a material medium, without an overall movement of the medium itself. The only form of wave motion that requires no material medium for transmission is the electromagnetic wave; in this case the “oscillations” consist of variations in the intensity of electric and magnetic fields (see Electromagnetic Radiation).

II TYPES OF WAVES

Waves are divided into types according to the direction of the displacements in relation to the direction of the motion of the wave itself. If the vibration is parallel to the direction of motion, the wave is known as longitudinal (see Fig. 1). The longitudinal wave is always mechanical because it results from successive compressions (states of maximum density and pressure) and rarefactions (states of minimum density and pressure) of the medium. Sound waves typify this form of wave motion. Another type of wave is the transverse wave, in which the vibrations are at right angles to the direction of motion. A transverse wave may be mechanical, such as the wave projected along a taut string that is subjected to a transverse vibration (see Fig. 2); or it may be electromagnetic, such as light, X-rays, or radio waves. In these cases the directions of the electric and magnetic fields are at right angles to the direction of motion. Some mechanical wave motions, such as waves on the surface of a liquid, are combinations of both longitudinal and transverse motions, resulting in a circular motion of particles of the liquid.

For a transverse wave, the wavelength is the distance between two successive crests or troughs. For longitudinal waves, it is the distance from compression to compression or rarefaction to rarefaction. The frequency of the wave is the number of vibrations per second. The velocity of the wave, which is the speed at which it advances, is equal to the wavelength times the frequency. The maximum displacement involved in the vibration of a mechanical wave is called the amplitude of the wave. In the case of an electromagnetic wave, the amplitude is the maximum strength of the electric or magnetic field.
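
As a quick worked example of the relation just described (velocity equals wavelength times frequency), the sketch below finds the wavelength of a 440-vibrations-per-second sound wave, assuming a typical speed of sound in air of about 343 m/s; both numbers are illustrative.

```python
speed = 343.0      # assumed speed of sound in air, m/s
frequency = 440.0  # vibrations per second (Hz)

# velocity = wavelength * frequency, so wavelength = velocity / frequency
wavelength = speed / frequency
print(f"wavelength ~ {wavelength:.2f} m")  # ~0.78 m
```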

III BEHAVIOUR OF WAVES

The velocity of a wave in matter depends on the elasticity and density of the medium. In a transverse wave on a taut string, for example, the velocity depends on the tension of the string and its mass per unit length. The speed can be doubled by quadrupling the tension, or it can be reduced to one-half by quadrupling the mass of the string. The speed of electromagnetic waves in a vacuum is constant at about 300,000 km/s (186,000 mi/s), the speed of light; this velocity is reduced when the waves pass through matter.
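
The dependence described above for a taut string is usually written as the square root of the tension divided by the mass per unit length. The sketch below uses that standard formula (not quoted in the article) with made-up values to confirm the claim that quadrupling the tension doubles the speed.

```python
import math

def wave_speed(tension, mass_per_length):
    # Standard result for a transverse wave on a taut string: v = sqrt(T / mu)
    return math.sqrt(tension / mass_per_length)

mu = 0.01   # assumed mass per unit length, kg/m
T = 100.0   # assumed tension, N

print(wave_speed(T, mu))      # 100.0 m/s
print(wave_speed(4 * T, mu))  # 200.0 m/s -- quadrupling the tension doubles the speed
```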

A Refraction

In general, the alteration in a wave's speed when it moves from one medium to another causes it to change its direction. Thus when a light ray enters glass from air, it slows down to about two-thirds of its speed in air. If it is travelling at an angle to the perpendicular, its direction changes to be closer to the perpendicular. When a ray emerges from glass into air, its speed increases, and the ray is bent away from the perpendicular direction. (A ray travelling in the perpendicular direction, either into the glass or out of it, is not deviated.) This bending of a wave is called refraction.

B Reflection

Whenever a wave strikes the interface between one medium and another, it gives rise to two waves, one of which travels on into the second medium (being refracted as it does so), while the other is sent back into the first medium, or reflected. In the case of light striking ordinary window glass, the reflected light is weak compared with the transmitted light. If, however, the light strikes an opaque material, much more of the light is reflected, with the remainder travelling a very small distance into the substance before being absorbed. See Reflection.

C Polarization

The oscillations of a transverse wave may all lie in a single plane: for example, the waves on a shaken string may all be vertical. Such a wave is described as polarized. More usually, however, transverse waves oscillate in all directions. Thus the oscillations of a shaken string can in general be regarded as a combination of vertical and horizontal oscillations, or of oscillations in any two arbitrarily chosen directions that are at right angles to each other. Since light consists of transverse waves, it too can be polarized (see Optics: Polarization of Light).

D Diffraction

All waves (apart from one-dimensional ones, as on a string) spread out somewhat as they travel. Thus a sound can be heard round the corner of a building from its source, and shadows can never be perfectly sharp. This spreading, called diffraction, becomes especially great when a wave passes through some aperture that is small compared with its wavelength.

E Interference

When two waves meet at a point, the resulting displacement at that point will be the sum of the displacements produced by each of the waves. If the displacements are in the same direction, the two waves reinforce each other; if the displacements are in the opposite direction, the waves counteract each other. This phenomenon is known as interference.
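
A minimal numerical sketch of this superposition rule: two sinusoidal displacements of equal amplitude are added at a single point, once in phase (reinforcement) and once half a cycle out of phase (cancellation). The amplitude, frequency, and sample instant are arbitrary illustrative values.

```python
import math

amplitude = 1.0
frequency = 2.0  # Hz, arbitrary
t = 0.1          # sample instant, s

def displacement(t, phase):
    return amplitude * math.sin(2 * math.pi * frequency * t + phase)

in_phase = displacement(t, 0.0) + displacement(t, 0.0)          # constructive: twice one wave
out_of_phase = displacement(t, 0.0) + displacement(t, math.pi)  # destructive: essentially zero

print(round(in_phase, 6), round(out_of_phase, 6))
```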

F Standing Waves

When two waves of equal wavelength and amplitude travel in opposite directions at the same velocity through a medium, stationary, or standing, waves are formed. For example, if one end of a rope is tied to a wall and the other end is shaken up and down, waves will be reflected back along the rope from the wall. Assuming that the reflection is perfectly efficient, the reflected wave will be half a wavelength behind the initiating wave. Interference will take place, and the resultant displacement at any given point and time will be the sum of the individual displacements. No motion will take place at points where the crest of the incident wave meets the trough of the reflected one. Such points are called nodes. Halfway between the nodes, the waves meet in the same phase; that is, crest will coincide with crest and trough with trough. At these points the amplitude of the resultant wave is twice as great as that of the incident wave. Thus, the rope is divided by the nodes into sections half a wavelength long; the nodes do not progress along the rope, while the rope between them vibrates transversely.

Stationary waves are present in the vibrating strings of musical instruments. A violin string, for instance, when bowed or plucked, vibrates as a whole, with nodes at the ends, and also vibrates in halves, with a node at the centre, in thirds, with two equally spaced nodes, and in various other fractions, all simultaneously. The vibration as a whole produces the fundamental tone, and the other vibrations produce the various harmonics.
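
The pattern described above, in which the string vibrates as a whole, in halves, in thirds, and so on, corresponds to the standard result that the allowed frequencies of a string fixed at both ends are whole-number multiples of the fundamental. The string length and wave speed in the sketch below are illustrative assumptions, not figures from the article.

```python
length = 0.33   # assumed vibrating length of the string, m
speed = 290.0   # assumed transverse wave speed on the string, m/s

# Standing waves with nodes at both ends: f_n = n * v / (2 * L)
for n in range(1, 5):
    frequency = n * speed / (2 * length)
    label = "fundamental" if n == 1 else f"harmonic {n}"
    print(f"{label}: {frequency:.0f} Hz")
```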

IV QUANTUM THEORY

According to quantum theory, which underlies all modern physics, all particles behave like waves, though this behaviour can be detected only for subatomic particles. For example, because of the wave-like nature of the electron, which is one of the constituents of the atom, the structure of the atom can be explained in terms of a system of standing waves. This wave-particle duality is a profound and far-reaching aspect of the physical world. Hence, much of the development of modern physics is based on the elaboration of the theory of waves and wave motion.

See Also Earthquake; Huygens, Christiaan; Optics.

11

Article - Refraction

Refraction

Refraction, bending of waves that occurs when a wavefront passes obliquely from one medium to another. The phenomenon is most familiar with light waves. When light passes from a less dense medium (for example, air) to a denser one (for example, glass), it is refracted towards the normal (an imaginary line perpendicular to the surface). This occurs because the light waves are slowed down by the denser medium, causing them to change direction. On passing from a denser medium into a less dense one, the light is refracted away from the normal.

There are two laws of refraction:

1. The incident ray, the refracted ray, and the normal all lie in the same plane.
2. For light rays passing from one transparent medium to another, the sine of the angle of incidence, i, and the sine of the angle of refraction, r, bear a constant ratio to one another. This is most simply stated mathematically: sin i/sin r = a constant. This constant is usually given the symbol n and is called the refractive index of the material. The higher the refractive index, the greater will be the extent of the refraction. This law is known as Snell’s law.
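
A short worked example of Snell’s law as stated above: for a ray passing from air into glass, with an assumed refractive index of 1.5 and an angle of incidence of 30°, the angle of refraction follows directly from sin i/sin r = n. Both numbers are illustrative.

```python
import math

n = 1.5                 # assumed refractive index of the glass (relative to air)
angle_incidence = 30.0  # degrees

# Snell's law: sin(i) / sin(r) = n,  so  r = asin(sin(i) / n)
angle_refraction = math.degrees(math.asin(math.sin(math.radians(angle_incidence)) / n))
print(f"angle of refraction ~ {angle_refraction:.1f} degrees")  # ~19.5, bent towards the normal
```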

The laws of refraction can be used to understand how light waves behave when passing through more than two media with parallel boundaries. If the refractive index of the first and last medium is the same (they are both air, for example), but different to that of the intermediate medium (for example, glass), light will be refracted towards the normal on entering the glass, and on leaving will be refracted back away from the normal to exactly the same extent. The effect of this is that the emergent ray is parallel to the incident ray, but is “laterally displaced” from it.

When light waves pass from air through glass in the form of a 60° prism, the laws of refraction explain why the light now behaves somewhat differently than through a mere glass block. At first, as with a solid block of glass, the ray entering the prism is refracted towards the normal, and the emerging ray is refracted away from the normal. However, because of the angle between these two faces of the prism, the ray is turned through a considerable angle. In practice, the different colours (different wavelengths) of light present in white light are refracted to different extents, and using a prism with this arrangement is a convenient method for producing a spectrum.

One everyday effect of refraction is that objects seen under water appear to be at a shallower depth than they really are. The observer sees an underwater object in a higher position, because the eye cannot tell that the light has been refracted on its path from the object.

Total internal reflection is another phenomenon that can be explained by refraction. Because light travelling from a denser to a less dense medium is refracted away from the normal, some rays striking the boundary between the two media at a large angle of incidence cannot pass through it, but are totally internally reflected. Typically, when a ray of light emerges from a denser to a rarer medium, the ray is deflected away from the normal, but in practice there is always a weak reflected ray present also. If the angle of incidence increases, the refracted ray moves closer to the boundary between the two media because the angle of refraction has also increased. The refracted ray also becomes weaker, while the reflected ray within the glass becomes stronger. If a situation is reached when the angle of refraction is 90°, there is only a residual refracted ray grazing along the boundary between the two media, and most of the light is internally reflected. The angle of incidence for an angle of refraction of 90° is called the critical angle. When the angle of incidence is greater than the critical angle, the ray is totally internally reflected, as it is clearly impossible for any light to escape from inside the glass. The critical angle for a ray of light emerging from glass into air is approximately 42°.
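
The critical angle quoted above (roughly 42° for glass) follows from Snell’s law with the angle of refraction set to 90°, which gives sin(critical angle) = 1/n. The sketch below reproduces that figure for an assumed glass index of 1.5, and the diamond value mentioned in the next paragraph for an assumed index of about 2.42.

```python
import math

def critical_angle(n):
    # From Snell's law with the angle of refraction at 90 degrees: sin(c) = 1 / n
    return math.degrees(math.asin(1.0 / n))

print(f"glass (n ~ 1.5):    {critical_angle(1.5):.1f} degrees")   # ~41.8
print(f"diamond (n ~ 2.42): {critical_angle(2.42):.1f} degrees")  # ~24.4
```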

Total internal reflection has many commercial uses. A 90° prism, with light totally internally reflected off one face, can be used in prismatic binoculars. This is also the principle on which optical fibres work, since the light pulses passing along such a fibre have a high angle of incidence on the walls of the fibre, and so are unable to escape from it (see Fibre Optics). The high refractive index of diamond gives it a very low critical angle (24°). This means that light is reflected internally many times before it can escape from a well-cut diamond, and can then come out in any direction. This is why diamonds sparkle.

12

Article - Euclid (mathematician)

Euclid (mathematician)

Euclid (mathematician) (fl. 300 bc), Greek mathematician, whose chief work, Elements, is a comprehensive treatise on mathematics in 13 volumes on such subjects as plane geometry, proportion in general, the properties of numbers, incommensurable magnitudes, and solid geometry. He was probably educated at Athens by pupils of Plato. He taught geometry in Alexandria and founded a school of mathematics there. The Data, a collection of geometrical theorems; the Phenomena, a description of the heavens; the Optics; the Division of the Scale, a mathematical discussion of music; and several other books have long been attributed to Euclid; most historians believe, however, that some or all of these works (other than the Elements) have been spuriously credited to him. Historians disagree as to the originality of some of his other contributions. Probably the geometrical sections of the Elements were primarily a rearrangement of the works of previous mathematicians, such as those of Eudoxus, but Euclid himself is thought to have made several original discoveries in the theory of numbers (see Number Theory).

Euclid's Elements was used as a text for 2,000 years, and even today a modified version of its first few books forms the basis of instruction in plane geometry in secondary schools. The first printed edition of Euclid's works was a translation from Arabic to Latin, which appeared at Venice in 1482.

13

Article - Huygens, Christiaan

Huygens, Christiaan

Huygens, Christiaan (1629-1695), Dutch astronomer, mathematician, and physicist, born in The Hague. His numerous, original scientific discoveries won him wide recognition and honours among scientists of the 17th century. His discoveries include the principle (later named after him) that states that every point on an advancing wavefront is itself a source of new waves (see Optics; Huygens’ Principle). From this principle he developed the wave theory of light. In 1655 he found a new method of grinding and polishing lenses. The sharper definition obtained enabled him to discover a satellite of Saturn and to give the first accurate description of the rings of Saturn. The need for an exact measure of time for observing the heavens led to his applying the pendulum to regulate the movement of clocks. In 1656 he devised a telescope eyepiece that bears his name. In Horologium Oscillatorium (1673) he determined the true relation between the length of a pendulum and the period of oscillation and developed theories on centrifugal force in circular motion which helped the English physicist Isaac Newton to formulate the laws of gravity. In 1678 he discovered the polarization of light by double refraction in calcite.

14

Article - Interference

Interference

Interference, effect that occurs when two or more waves overlap or intersect. When waves interfere with each other, the amplitude (intensity or size) of the resulting wave depends on the frequencies, relative phases (relative positions of the crests and troughs), and amplitudes of the interfering waves (see Wave Motion). For example, constructive interference occurs at a point where two overlapping or intersecting waves of the same frequency are in phase—that is, where the crests and troughs of the two waves coincide. In this case, the two waves reinforce each other and combine to form a wave that has an amplitude equal to the sum of the individual amplitudes of the original waves. Destructive interference occurs when two intersecting waves of the same frequency are completely out of phase—that is, when the crest of one wave coincides with the trough of the other. In this case, the two waves cancel each other out. Intersecting or overlapping waves that have different frequencies or that are not entirely in or out of phase with each other have more complex interference patterns.

Visible light is made up of electromagnetic waves that can interfere with each other. For example, interfering light waves are responsible for the colours occasionally seen in soap bubbles. White light is made up of light waves of different wavelengths; the light waves reflected from the inner surface of the bubble interfere with light waves of the same wavelength reflected from the outer surface of the bubble. Some of the wavelengths interfere constructively, and other wavelengths interfere destructively. Since different wavelengths of light correspond to different colours, the light reflected from the soap bubble appears coloured. The phenomenon of interference between visible light waves is exploited in holography and in interferometry (see Interferometer).

Interference can occur with all types of waves, not only with light waves. Radio waves interfere with each other when they bounce off buildings in cities, distorting the signal. Sound-wave interference must be taken into account when constructing concert halls, so that destructive interference does not result in areas in the hall where the sounds produced on stage cannot be heard. The interference of water waves can be observed by dropping objects in a still pool of water and noting how the overlapping waves interfere constructively at some points and destructively at others.

See Also Acoustics; Electromagnetic Radiation; Optics.

15

Article - Michelson, Albert Abraham

Michelson, Albert Abraham

Michelson, Albert Abraham (1852-1931), German-born American physicist, known for his famous experiment to measure the velocity of the Earth through the ether, a substance that scientists believed filled the universe. This experiment helped prove that the ether does not exist. In 1907 he was awarded the Nobel Prize for Physics for developing extremely precise instruments and conducting important investigations with them, becoming the first American to win a Nobel Prize in the sciences.

Michelson was born in Strelno (now Strzelno, Poland), taken to the United States as a child, and educated at the United States Naval Academy and at the universities of Berlin, Heidelberg, and Paris. He was Professor of Physics at Clark University from 1889 to 1892, and from 1892 to 1929 was head of the Department of Physics at the University of Chicago. He determined the velocity of light with a high degree of accuracy, using instruments of his own design.

Michelson invented the interferometer, which he used in 1887 in the famous experiment performed with the American chemist Edward Williams Morley. At that time, most scientists believed that light travelled as waves through the ether. They also believed that the Earth travelled through the ether. The Michelson-Morley experiment showed that two beams of light, sent out and reflected back along perpendicular paths, returned at the same speed. According to the ether theory, the beams should have travelled at different speeds relative to the motion of the Earth through the ether. In this way the experiment proved that the ether did not exist. The negative results of the experiment were also useful in the development of the theory of relativity. Michelson's major works include The Velocity of Light (1902) and Studies in Optics (1927).

16

Article - Spectrum

Spectrum

Spectrum, rainbow-like series of colours, in the order violet, blue, green, yellow, orange, and red, produced by splitting a composite light, such as white light, into its component colours. Indigo was formerly recognized as a distinct spectral colour. The rainbow is a natural spectrum, produced by meteorological phenomena. A similar effect can be produced by passing sunlight through a glass prism. The first correct explanation of the phenomenon was advanced in 1666 by the English mathematician and physicist Sir Isaac Newton.

When a ray of light passes from one transparent medium, such as air, into another, such as glass or water, it is bent; upon reemerging into the air, it is bent again. This bending is called refraction; the amount of refraction depends on the wavelength of the light. Violet light, for example, is bent more than red light in passing from air to glass or from glass to air. A mixture of red and violet light is thus dispersed into the two colours when it passes through a wedge-shaped glass prism. See Optics.

A device for producing and observing a spectrum visually is called a spectroscope; a device for observing and recording a spectrum photographically is called a spectrograph; a device for measuring the brightness of the various portions of spectra is called a spectrophotometer; and the science of using spectroscopes, spectrographs, and spectrophotometers to study spectra is called spectroscopy. For extremely accurate spectroscopic measurements, an interferometer is used. During the 19th century, scientists discovered that beyond the violet end of the spectrum radiation could be detected that was invisible to the human eye but that had marked photochemical action; this was termed ultraviolet radiation. Similarly, beyond the red end of the spectrum, infrared radiation was detected that, although invisible, transmitted energy, as shown by its ability to raise the temperature of a thermometer. The definition of the term spectrum was then revised to include these invisible radiations, and has since been extended to include radio waves beyond the infrared, and X-rays and gamma rays beyond the ultraviolet.

The term spectrum is often loosely applied today to any orderly array produced by analysis of a complex phenomenon. A complex sound such as noise, for example, may be analysed into an audio spectrum of pure tones of various pitches. Similarly, a complex mixture of elements or isotopes of different atomic weights can be separated into an orderly sequence called a mass spectrum in order of their atomic weights (see Mass Spectrometer).

Spectroscopy not only has provided an important and sensitive method of chemical analysis but has also been the chief tool for discoveries in the apparently unrelated fields of astrophysics and atomic theory. In general, changes in motions of the outer electrons of atoms produce spectra in the visible, infrared, and ultraviolet regions. Changes in motions of the inner electrons of heavy atoms produce X-ray spectra. Changes in the configurations of the nucleus of an atom produce gamma-ray spectra. Changes in the configurations of molecules produce visible and infrared spectra. See Atom; Electromagnetic Radiation; Luminescence.

Different colours of light are similar in consisting of electromagnetic radiations that travel at a speed of approximately 300,000 km per sec (about 186,000 mi per sec). They differ in having different frequencies and wavelengths, the frequency being equal to the speed of light divided by the wavelength. Two rays of light having the same wavelength also have the same frequency and the same colour. The wavelength of light is so small that it is conveniently expressed in nanometres (nm), which are equal to one-billionth of a metre, or one-millionth of a millimetre (40 billionths of an inch). The wavelength of violet light varies from about 400 to about 450 nm (about 16 to about 18 millionths of an inch), and that of red light from about 620 to about 760 nm (about 25 to about 30 millionths of an inch).
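
As a worked example of "the frequency being equal to the speed of light divided by the wavelength", the snippet below converts a red wavelength of 700 nm (within the range quoted above) into a frequency.

```python
speed_of_light = 3.0e8   # m/s, approximate
wavelength_red = 700e-9  # 700 nm, within the red range quoted above

frequency = speed_of_light / wavelength_red
print(f"frequency ~ {frequency:.2e} Hz")  # ~4.3e14 vibrations per second
```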

17

Contents - Maxwell, James Clerk

Maxwell, James Clerk
I INTRODUCTION

Maxwell, James Clerk (1831-1879), British physicist, whose theory of the electromagnetic field and electromagnetic theory of light, and introduction of a statistical function in the theory of gases, revolutionized physics. These ideas led to the relativity and quantum theories of the 20th century.

Maxwell was born in Edinburgh. He wrote his first paper, on oval curves, while still at school. He entered Edinburgh University in 1847, where he wrote two substantial papers: on the geometry of curves rolling on one another, and on the properties of elastic solids. He became interested in colour theory, and in the 1850s, on the basis of experiments with tinted papers and spectral colours, he established the modern theory of colour vision. He followed Thomas Young in using red, green, and blue primaries to form colour combinations. In May 1861 he projected the first trichromatic colour photograph.

Maxwell entered Peterhouse, Cambridge, in October 1850, but moved to Trinity College after one term. He became the pupil of the famous mathematics coach William Hopkins, graduating second wrangler (that is, taking second place) in 1854. Elected a fellow of Trinity in 1855, he was appointed Professor of Natural Philosophy at Marischal College, Aberdeen, in 1856. He lost his post when the two Aberdeen colleges were joined to form the University of Aberdeen in 1860, and moved to King’s College, London, to become Professor of Natural Philosophy and Astronomy, a post he resigned in 1865.

II ELECTROMAGNETISM AND LIGHT

Rejecting explanations (modelled on the theory of gravitation) in terms of forces acting at a distance, Michael Faraday had interpreted electricity and magnetism in terms of the electromagnetic “field”, defined by imaginary lines of force. William Thomson (Lord Kelvin) had shown that these ideas could be expressed in mathematical terms.

Initially guided by Thomson, Maxwell developed Faraday’s work. He first illustrated the geometry of lines of force by the physical analogy of streamlines in a fluid (see Fluid Mechanics). Seeking a theory of the field grounded on the mechanics of an ether, a medium for transmission, he found its basis in Thomson’s 1856 proposal that the Faraday effect—the rotation of polarized light in a magnetic field—could be explained by the rotation of vortices in an ether. In his paper “On Physical Lines of Force” (1861-1862) Maxwell set out an ether model of rotating vortices (representing magnetism) separated by “idle wheel” particles (whose motion represents the flow of an electric current).

The modification of the ether model to encompass electrostatics unexpectedly led to his electromagnetic theory of light. He showed that a disturbance in the electric or magnetic field should lead to a disturbance travelling as a wave through space. He demonstrated the close agreement between the velocity of these waves and the measured velocity of light. He developed the established theory that light was propagated by an ether by asserting that this ether was electromagnetic, and he thus unified optics and electromagnetism.
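
The "close agreement" Maxwell demonstrated can be illustrated with the modern form of his result, in which the wave speed is fixed by the electric and magnetic constants of free space, c = 1/√(μ₀ε₀). The constants in the sketch below are standard modern values, not figures taken from the article.

```python
import math

mu0 = 4 * math.pi * 1e-7  # magnetic constant, H/m (conventional value)
epsilon0 = 8.854e-12      # electric constant, F/m (approximate)

c = 1.0 / math.sqrt(mu0 * epsilon0)
print(f"predicted wave speed ~ {c:.3e} m/s")  # ~2.998e8, the measured speed of light
```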

Maxwell had from the first emphasized that his “idle wheel” ether model was conjectural, and in 1864 he discarded this model as a temporary scaffolding for his theory. He achieved a more general presentation of his electromagnetic theory of light in terms of the transmission of energy through the ether. He retained mechanical foundations by grounding the general equations of the electromagnetic field (the forerunners of what are now known as the four “Maxwell equations”, as reformulated in the 1880s by Oliver Heaviside and Heinrich Hertz) on general equations of dynamics. He expounded this theory in his Treatise on Electricity and Magnetism (1873).

The production of electromagnetic waves by Hertz in 1887 led to the acceptance of Maxwell’s theory of the electromagnetic field. In the 20th century it came to be detached from its formulation in terms of ether.

III THE KINETIC THEORY OF GASES

The subject of the University of Cambridge’s Adams Prize for 1857 was a study of the motions of Saturn‘s rings, whose structure and stability were in doubt at the time. On winning the prize and revising his essay for publication in 1859, Maxwell concluded that the ring system of Saturn consists of concentric rings of particles.

Alerted to problems of particle motions, Maxwell became interested in a paper by Rudolf Clausius on the kinetic theory of gases—the theory that explains the behaviour of gases in terms of the motions of their molecules, or constituent particles. Clausius had used a probabilistic argument to calculate the motions of the gas molecules, and Maxwell advanced on his procedure by introducing a statistical function (identical in form to the distribution formula in the theory of errors) for the distribution of velocities among the gas molecules. He established results for gaseous diffusion, viscosity, and thermal conductivity.
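
The "statistical function ... for the distribution of velocities" referred to above is what is now called the Maxwell (or Maxwell-Boltzmann) speed distribution. In modern notation, which is an addition to the article's wording, the fraction of molecules of mass m with speeds between v and v + dv at absolute temperature T is:

```latex
f(v)\,dv \;=\; 4\pi \left(\frac{m}{2\pi k T}\right)^{3/2} v^{2}\,
               e^{-m v^{2}/(2 k T)}\,dv
```

where k is Boltzmann's constant; the v² factor and the exponential factor together give the characteristic peaked curve of molecular speeds.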

He turned to an experimental investigation of the viscosity of gases at different temperatures and pressures, observing the decay in the torsional oscillation of discs. He found that gas viscosity was a linear function of the absolute temperature. In his paper “On the Dynamical Theory of Gases” (1867), he suggested that gas molecules should be considered as centres of a force of repulsion that falls off in strength as the fifth power of the separation, a result in agreement with this experimental finding. He also presented a new derivation of the law of distribution of velocities.

Maxwell perceived that the kinetic theory of gases bore on wider problems in the theory of heat, and he expounded the implications of his theory in his “demon” paradox (a term coined by Thomson). According to the second law of thermodynamics, heat flows from hot to cold bodies unless work is done to force it to flow the other way. But Maxwell’s theory that the velocities of gas molecules are widely distributed suggests that individual faster-moving molecules could move from a cold to a hot body transferring heat as they did so. It would require the action of the demon to manipulate molecules in sufficient numbers to produce an observable flow of heat from the cold body to the hotter one, and thus violate the second law of thermodynamics. This law therefore applies only to large groups of molecules; it is a statistical law.

Maxwell’s ideas on gases and thermodynamics were developed by Ludwig Boltzmann in the 1870s, and became the accepted basis for these areas of physics.

IV THE CAVENDISH LABORATORY

Appointed to the new professorship of experimental physics at Cambridge University in 1871, Maxwell designed the Cavendish Laboratory. It opened in April 1874 and Maxwell, as its first director, instituted a programme of precision measurements in electricity. One of his last accomplishments was to edit Henry Cavendish’s Electrical Researches (1879).


Contributed By:
Peter M. Harman

18

Contents - Laser

Laser
I INTRODUCTION

Laser, acronym for light amplification by stimulated emission of radiation. Lasers are devices that amplify light and produce coherent light beams, ranging from infrared to ultraviolet. A light beam is coherent when its waves, or photons, propagate in step, or in phase, with one another (see Interference). Laser light, therefore, can be made extremely intense, highly directional, and very pure in colour (frequency). Laser devices now extend into the X-ray frequency range. Masers are similar devices for microwaves.

II PRINCIPLES OF OPERATION

Lasers harness atoms to store and emit light in a coherent fashion. The electrons in the atoms of a laser medium are first pumped, or energized, to an excited state by an energy source. They are then “stimulated” by external photons to emit the stored energy in the form of photons, a process known as stimulated emission. The photons emitted have a frequency characteristic of the atoms and travel in step with the stimulating photons. These photons in turn impinge on other excited atoms to release more photons. Light amplification is achieved as the photons move back and forth between two parallel mirrors, triggering further stimulated emissions. At the same time the intense, directional, and monochromatic laser light “leaks” through one of the mirrors, which is only partially silvered.
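
To make the idea of photons with "a frequency characteristic of the atoms" concrete, the sketch below computes the energy of a single photon for the familiar red helium-neon laser line at 632.8 nm, and how many such photons make up one joule. The wavelength is a standard textbook value, not one given in the article.

```python
h = 6.626e-34          # Planck's constant, J*s
c = 3.0e8              # speed of light, m/s
wavelength = 632.8e-9  # He-Ne red line, m (assumed standard value)

photon_energy = h * c / wavelength
print(f"photon energy   ~ {photon_energy:.2e} J")         # ~3.1e-19 J (about 2 eV)
print(f"photons per joule ~ {1.0 / photon_energy:.2e}")   # ~3.2e18
```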

Stimulated emission, the underlying process for laser action, was first described theoretically by Albert Einstein in 1917. The working principles of lasers were outlined by the American physicists Arthur Schawlow and Charles Hard Townes in their 1958 patent application. The patent was granted, but was later challenged by the American physicist and engineer Gordon Gould. In 1960 the American physicist Theodore Maiman observed the first laser action in solid ruby. A year later a helium-neon gas laser was built by the Iranian-born American physicist Ali Javan. Then in 1966 a liquid laser was constructed by the American physicist Peter Sorokin. In 1977 the United States Patent Office affirmed one of Gould's claims over the working principles of the laser.

III TYPES OF LASERS

According to the laser medium used, lasers are generally classified as solid state, gas, semiconductor, or liquid.

A Solid-State Lasers

The most common solid laser media are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the rod are fashioned into two parallel surfaces coated with a highly reflecting non-metallic film. Solid-state lasers offer the highest power output. They are usually operated in a pulsed manner to generate a burst of light over a short time. Bursts as short as 12 × 10⁻¹⁵ sec have been achieved, which are useful in studying physical phenomena of very brief duration. Pumping is achieved with light from xenon flash tubes, arc lamps, or metal-vapour lamps. The frequency range has been expanded from infrared (IR) to ultraviolet (UV) by multiplying the original laser frequency with crystals such as potassium dihydrogen phosphate, and even shorter X-ray wavelengths have been achieved by aiming laser beams at yttrium targets.

B Gas Lasers

The laser medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapour, and is usually contained in a cylindrical glass or quartz tube. Two mirrors are located outside the ends of the tube to form the laser cavity. Gas lasers are pumped by ultraviolet light, electron beams, electric current, or chemical reactions. The helium-neon laser is known for its high frequency stability, colour purity, and minimal beam spread. Carbon dioxide lasers are very efficient, and consequently they are the most powerful continuous wave (CW) lasers.

C Semiconductor Lasers

The most compact of lasers, the semiconductor laser usually consists of a junction between layers of semiconductors with different electrical conducting properties. The laser cavity is confined to the junction region by means of two reflective boundaries. Gallium arsenide is the semiconductor most commonly used. Semiconductor lasers are pumped by the direct application of electrical current across the junction, and they can be operated in the CW mode with better than 50 per cent efficiency. A method that permits even more efficient use of energy has been devised. It involves mounting tiny lasers vertically in such circuits, to a density of more than a million per square centimetre. Common uses for semiconductor lasers include CD players (see Sound Recording and Reproduction) and laser printers.

D Liquid Lasers

The most common liquid laser media are organic dyes contained in glass vessels. They are pumped by intense flash lamps in a pulse mode or by a gas laser in the CW mode. The frequency of a tunable dye laser can be adjusted with the help of a prism inside the laser cavity.

E Free-Electron Lasers

Lasers using beams of electrons unattached to atoms and spiralling around magnetic field lines to produce laser radiation were first developed in 1977 and are now becoming important research instruments. They are tunable, as are dye lasers, and in theory a small number could cover the entire spectrum from infrared to X-rays. Free-electron lasers should also become capable of generating very high-power radiation, which is currently too expensive to produce. See Synchrotron Radiation.

IV LASER APPLICATIONS

The use of lasers is restricted only by imagination. Lasers have become valuable tools in industry, scientific research, communication, medicine, military technology, and the arts.

A Industry

Powerful laser beams can be focused on a small spot with enormous power density. Consequently, the focused beams can readily heat, melt, or vaporize material in a precise manner. Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to trim microelectronic components, to heat-treat semiconductor chips, to cut fashion patterns, to synthesize new material, and to attempt to induce controlled nuclear fusion (see Nuclear Energy). The powerful short pulse produced by a laser also makes possible high-speed photography with an exposure time of several trillionths of a second. Highly directional laser beams are used for alignment in road and building construction.
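
To give a rough sense of the "enormous power density" mentioned above, the sketch below takes an assumed 1 W beam focused to a spot 10 micrometres across and works out the resulting intensity. Both figures are illustrative assumptions.

```python
import math

power = 1.0            # assumed beam power, W
spot_diameter = 10e-6  # assumed focused spot diameter, m

spot_area = math.pi * (spot_diameter / 2) ** 2
intensity = power / spot_area
print(f"power density ~ {intensity:.2e} W/m^2")  # ~1.3e10 W/m^2
```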

Lasers are used for monitoring crustal movements and for geodetic surveys. They are also the most effective detectors of certain types of air pollution. In addition, lasers have been used for precise determination of the Earth-Moon distance and in tests of relativity. Very fast laser-activated switches are being developed for use in particle accelerators, and techniques have been found for using laser beams to trap small numbers of atoms in a vacuum for extremely precise studies of their spectra.

B Scientific Research

Because laser light is highly directional and monochromatic, extremely small amounts of light scattering or small frequency shifts caused by matter can easily be detected. By measuring such changes, scientists have successfully studied molecular structures. With lasers, the speed of light has been determined to an unprecedented accuracy, chemical reactions can be selectively induced, and the existence of trace substances in samples can be detected. See Chemical Analysis; Photochemistry.

C Communication

Laser light can travel a large distance in outer space with little reduction in signal strength. Lasers are therefore ideal for space communications. Because of its high frequency, laser light can carry, for example, 1,000 times as many television channels as are now carried by microwaves. Low-loss optical fibres have been developed to transmit laser light for earthbound communication in telephone and computer systems (see Fibre Optics). Laser techniques have also been used for high-density information recording. For instance, laser light simplifies the recording of a hologram, from which a three-dimensional image can be reconstructed with a laser beam (see Holography).

D Medicine

Intense, narrow beams of laser light can cut and cauterize certain tissues in a small fraction of a second without damaging the surrounding healthy tissues. They have been used to “weld” the retina, bore holes in the skull, vaporize lesions, and cauterize blood vessels. Laser techniques have also been developed for lab tests of small biological samples.

E Military Technology

Laser guidance systems for missiles, aircraft, and satellites are commonplace. The use of laser beams against hostile ballistic missiles has been proposed, as in the defence system urged by US President Ronald Reagan in 1983 (see Strategic Defense Initiative). The ability of tunable dye lasers to excite selectively an atom or molecule may open up more efficient ways to separate isotopes for construction of nuclear weapons.

V LASER SAFETY

Because the eye focuses laser light as it does other light, the chief danger in working with lasers is eye damage. Therefore, laser light should not be viewed, whether it is direct or reflected. Lasers should be used only by trained personnel wearing protective goggles.

19

Contents - Quantum Theory

Quantum Theory
I INTRODUCTION

Quantum Theory, also quantum mechanics, in physics, a theory based on using the concept of the quantum unit to describe the dynamic properties of subatomic particles and the interactions of matter and radiation. The foundation was laid by the German physicist Max Planck, who postulated in 1900 that energy can be emitted or absorbed by matter only in small, discrete units called quanta. Also fundamental to the development of quantum mechanics was the uncertainty principle, formulated by the German physicist Werner Heisenberg in 1927, which states that the position and momentum of a subatomic particle cannot both be specified exactly at the same time.

II EARLY HISTORY

In the 18th and 19th centuries, Newtonian, or classical, mechanics appeared to provide a wholly accurate description of the motions of bodies—such as, for example, planetary motion. In the late 19th and early 20th centuries, however, experimental findings raised doubts about the completeness of Newtonian theory. Among the newer observations were the lines that appear in the spectra of light emitted by heated gases, or by gases in which electric discharges take place. From the model of the atom developed in the early 20th century by the New Zealand-born physicist Ernest Rutherford, in which negatively charged electrons circle a positive nucleus in orbits prescribed by Newton's laws of motion, scientists had also expected that the electrons would emit light over a broad frequency range, rather than in the narrow frequency ranges that form the lines in a spectrum.

Another puzzle for physicists was the coexistence of two theories of light: the corpuscular theory, which explains light as a stream of particles, and the wave theory, which views light as electromagnetic waves. A third problem was the absence of a molecular basis for thermodynamics. In his book Elementary Principles in Statistical Mechanics (1902), the American mathematical physicist J. Willard Gibbs conceded the impossibility of framing a theory of molecular action that embraced the phenomena of thermodynamics, radiation, and electrical phenomena as they were then understood.

III PLANCK'S INTRODUCTION OF THE QUANTUM

At the turn of the century, physicists did not yet clearly recognize that these and other difficulties in physics were in any way related. The first development that led to the solution of these difficulties was Planck's introduction of the concept of the quantum, as a result of physicists' studies of blackbody radiation during the closing years of the 19th century. (The term blackbody refers to an ideal body or surface that absorbs all radiant energy without any reflection.) A body at a moderately high temperature—a “red heat”—gives off most of its radiation in the low-frequency (red and infrared) regions; a body at a higher temperature—a “white heat”—gives off comparatively more radiation at higher frequencies (yellow, green, or blue). During the 1890s physicists conducted detailed quantitative studies of these phenomena and expressed their results in a series of curves or graphs. The classical, or prequantum, theory predicted an altogether different set of curves from those actually observed. What Planck did was to devise a mathematical formula that described the curves exactly; he then deduced a physical hypothesis that could explain the formula. His hypothesis was that energy is radiated only in quanta of energy hν, where ν is the frequency and h is the quantum of action, now known as Planck's constant.
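
A one-line worked example of Planck's hypothesis that energy is radiated in quanta of hν: for green light with a frequency of about 5.5 × 10¹⁴ Hz (an illustrative value), each quantum carries only a few tenths of a billionth of a billionth of a joule.

```python
h = 6.626e-34       # Planck's constant, J*s
frequency = 5.5e14  # assumed frequency of green light, Hz

quantum_energy = h * frequency
print(f"energy per quantum ~ {quantum_energy:.2e} J")  # ~3.6e-19 J
```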

IV EINSTEIN'S CONTRIBUTION

The next important developments in quantum mechanics were the work of Albert Einstein. He used Planck's concept of the quantum to explain certain properties of the photoelectric effect—an experimentally observed phenomenon in which electrons are emitted from metal surfaces when radiation falls on these surfaces.

According to classical theory, the energy, as measured by the voltage of the emitted electrons, should be proportional to the intensity of the radiation. Actually, however, the energy of the electrons was found to be independent of the intensity of radiation—which determined only the number of electrons emitted—and to depend solely on the frequency of the radiation. The higher the frequency of the incident radiation, the greater is the electron energy; below a certain critical frequency no electrons are emitted. These facts were explained by Einstein by assuming that a single quantum of radiant energy ejects a single electron from the metal. The energy of the quantum is proportional to the frequency, and so the energy of the electron depends on the frequency.
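
Einstein's reasoning is usually summarized by the photoelectric equation hν = W + E_max, where W is the energy needed to free an electron from the metal (the work function) and E_max is the maximum kinetic energy of the ejected electron. The equation and the numbers below (violet light at 7.5 × 10¹⁴ Hz on a metal with a work function of about 2.3 eV, roughly that of sodium) are standard illustrations rather than values from the article.

```python
h = 6.626e-34             # Planck's constant, J*s
eV = 1.602e-19            # joules per electronvolt
frequency = 7.5e14        # assumed frequency of the incident light, Hz
work_function = 2.3 * eV  # assumed work function of the metal, J

photon_energy = h * frequency
max_kinetic_energy = photon_energy - work_function

print(f"photon energy       ~ {photon_energy / eV:.2f} eV")       # ~3.10 eV
print(f"max electron energy ~ {max_kinetic_energy / eV:.2f} eV")  # ~0.80 eV
```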

V THE BOHR ATOM

In 1911 Rutherford established the existence of the atomic nucleus. He assumed, on the basis of experimental evidence obtained from the scattering of alpha particles by the nuclei of gold atoms, that every atom consists of a dense, positively charged nucleus, surrounded by negatively charged electrons revolving around the nucleus as planets revolve around the Sun. The classical electromagnetic theory developed by the British physicist James Clerk Maxwell unequivocally predicted that an electron revolving around a nucleus will continuously radiate electromagnetic energy until it has lost all its energy, and eventually will fall into the nucleus. Thus, according to classical theory, an atom, as described by Rutherford, would be unstable. This difficulty led the Danish physicist Niels Bohr, in 1913, to postulate that in an atom the classical theory does not hold, and that electrons move in fixed orbits. Every change in orbit by the electron corresponds to the absorption or emission of a quantum of radiation.
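
In Bohr's model the "fixed orbits" of hydrogen have energies of about -13.6/n² electronvolts (a standard result not quoted in the article), and the quantum emitted in a jump between orbits carries the energy difference. The sketch below reproduces the familiar red hydrogen spectral line from the jump from the third orbit to the second.

```python
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def bohr_energy(n):
    # Bohr energy levels of hydrogen (standard result), in joules
    return -13.6 * eV / n**2

photon_energy = bohr_energy(3) - bohr_energy(2)  # energy released in the 3 -> 2 jump
wavelength = h * c / photon_energy

print(f"photon energy ~ {photon_energy / eV:.2f} eV")  # ~1.89 eV
print(f"wavelength    ~ {wavelength * 1e9:.0f} nm")    # ~657 nm, the red hydrogen line
```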

The application of Bohr's theory to atoms with more than one electron proved difficult. The mathematical equations for the next simplest atom, the helium atom, were solved during the second and third decades of the 20th century, but the results were not entirely in accordance with experiment. For more complex atoms, only approximate solutions of the equations are possible, and these are only partly concordant with observations.

VI WAVE MECHANICS

The French physicist Louis Victor de Broglie suggested in 1924 that because electromagnetic waves show particle characteristics, particles should, in some cases, also exhibit wave properties. This prediction was verified experimentally within a few years by the American physicists Clinton Joseph Davisson and Lester Halbert Germer and the British physicist George Paget Thomson. They showed that a beam of electrons scattered by a crystal produces a diffraction pattern characteristic of a wave. The wave concept of a particle led the Austrian physicist Erwin Schrödinger to develop a so-called wave equation to describe the wave properties of a particle and, more specifically, the wave behaviour of the electron in the hydrogen atom.
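
De Broglie's suggestion is normally written λ = h/(mv), the wavelength associated with a particle of mass m moving at speed v. The formula is the standard one, and the electron speed below is an illustrative figure.

```python
h = 6.626e-34             # Planck's constant, J*s
electron_mass = 9.11e-31  # kg
speed = 1.0e6             # assumed electron speed, m/s

wavelength = h / (electron_mass * speed)
print(f"de Broglie wavelength ~ {wavelength:.2e} m")  # ~7.3e-10 m, comparable to atomic spacings
```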

Although this differential equation was continuous and gave solutions for all points in space, only certain of its solutions were physically permissible; these acceptable solutions, which must satisfy certain boundary conditions, are called eigenfunctions (German eigen, “own”). The Schrödinger wave equation thus had only certain discrete solutions; these solutions were mathematical expressions in which quantum numbers appeared as parameters. (Quantum numbers are integers developed in particle physics to give the magnitudes of certain characteristic quantities of particles or systems.) The Schrödinger equation was solved for the hydrogen atom and gave conclusions in substantial agreement with earlier quantum theory. Moreover, it was solvable for the helium atom, which earlier theory had failed to explain adequately, and here also it was in agreement with experimental evidence. The solutions of the Schrödinger equation also indicated that no two electrons could have the same four quantum numbers—that is, be in the same energy state. This rule, which had already been established empirically by Wolfgang Pauli in 1925, is called the exclusion principle.

VII MATRIX MECHANICS

Simultaneously with the development of wave mechanics, Heisenberg evolved a different mathematical analysis known as matrix mechanics. According to Heisenberg's theory, which was developed in collaboration with the German physicists Max Born and Ernst Pascual Jordan, the formula was not a differential equation but a matrix: an array consisting of an infinite number of rows, each row consisting of an infinite number of quantities. See Matrix Theory and Linear Algebra. Matrix mechanics introduced infinite matrices to represent the position and momentum of an electron inside an atom. Different matrices exist, one for each of the other observable physical properties associated with the motion of an electron, such as energy and angular momentum. These matrices, like Schrödinger's differential equations, could be solved; in other words, they could be manipulated to produce predictions as to the frequencies of the lines in the hydrogen spectrum and other observable quantities. Like wave mechanics, matrix mechanics was in agreement with the earlier quantum theory for processes in which the earlier quantum theory agreed with experiment; it was also useful in explaining phenomena that earlier quantum theory could not explain.
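
The algebraic core of Heisenberg's scheme, not written out in the article, is that the position and momentum matrices do not commute; in modern notation the canonical commutation relation reads:

```latex
\hat{x}\,\hat{p} \;-\; \hat{p}\,\hat{x} \;=\; i\hbar
```

where ħ is Planck's constant divided by 2π. The fact that these two matrices cannot be interchanged is what distinguishes the theory from classical mechanics.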

VIII THE MEANING OF QUANTUM MECHANICS

Schrödinger subsequently succeeded in showing that wave mechanics and matrix mechanics are different mathematical versions of the same theory, now called quantum mechanics. Even for the simple hydrogen atom, which consists of two particles, both mathematical interpretations are extremely complex. The next simplest atom, helium, has three particles, and even in the relatively simple mathematics of classical dynamics, the three-body problem (that of describing the mutual interactions of three separate bodies) is not entirely soluble. The energy levels can be calculated, however. In applying quantum-mechanical mathematics to relatively complex situations, a physicist can use one of a number of mathematical formulations. The choice depends on the convenience of the formulation for obtaining suitable approximate solutions.

Although quantum mechanics describes the atom purely in terms of mathematical interpretations of observed phenomena, a rough verbal description can be given of what the atom is now thought to be like. Surrounding the nucleus is a series of stationary waves; these waves have crests at certain points, each complete standing wave representing an orbit. The absolute square of the amplitude of the wave at any point at a given time is a measure of the probability that an electron will be found there. Thus, an electron can no longer be said to be at any precise point at any given time.
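
The statement that "the absolute square of the amplitude of the wave ... is a measure of the probability" is now known as the Born rule. In modern notation, which goes beyond the article's wording, for a one-dimensional wave function ψ:

```latex
P(x,t)\,dx \;=\; |\psi(x,t)|^{2}\,dx ,
\qquad
\int_{-\infty}^{\infty} |\psi(x,t)|^{2}\,dx \;=\; 1
```

so that the probabilities of finding the electron somewhere add up to one.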

IX THE UNCERTAINTY PRINCIPLE

The impossibility of pinpointing an electron at any precise time was analysed by Werner Heisenberg, who in 1927 formulated the uncertainty principle. This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, physicists cannot measure the position of a particle, for example, without causing a disturbance in the velocity of that particle. Knowledge about position and velocity is said to be complementary; that is, the two cannot be precise at the same time. This principle is also fundamental to the understanding of quantum mechanics as it is generally accepted today: the wave and particle characters of electromagnetic radiation can be understood as two complementary properties of radiation.
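
In its modern quantitative form, which is not given in the article, the uncertainty principle reads Δx·Δp ≥ ħ/2. The sketch below applies it to an electron confined to a region about the size of an atom, roughly 10⁻¹⁰ m, to show how large the resulting spread in velocity must be; the confinement size is an illustrative assumption.

```python
hbar = 1.055e-34          # reduced Planck's constant, J*s
electron_mass = 9.11e-31  # kg
delta_x = 1.0e-10         # assumed position uncertainty (about the size of an atom), m

# Heisenberg uncertainty principle: delta_x * delta_p >= hbar / 2
delta_p = hbar / (2 * delta_x)
delta_v = delta_p / electron_mass

print(f"minimum momentum uncertainty  ~ {delta_p:.2e} kg*m/s")  # ~5.3e-25
print(f"corresponding velocity spread ~ {delta_v:.2e} m/s")     # ~5.8e5 m/s
```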

X RESULTS OF QUANTUM THEORY

Quantum mechanics solved all of the great difficulties that troubled physicists in the early years of the 20th century. It gradually enhanced the understanding of the structure of matter, and it provided a theoretical basis for the understanding of atomic structure (see Atom) and the phenomenon of spectral lines: each spectral line corresponds to the energy of a photon emitted or absorbed when an electron makes a transition from one energy level to another. The understanding of chemical bonding was fundamentally transformed by quantum mechanics and came to be based on Schrödinger's wave equations. New fields in physics emerged—solid-state physics, condensed-matter physics, superconductivity, nuclear physics, and elementary particle physics—that all found a consistent basis in quantum mechanics.

XI FURTHER DEVELOPMENTS

In the years since 1925, no fundamental deficiencies have been found in quantum mechanics, although the question of whether the theory should be accepted as complete has come under discussion (see Bell's Inequality). In the 1930s the application of quantum mechanics and special relativity to the theory of the electron (see Quantum Electrodynamics) allowed the British physicist Paul Dirac to formulate an equation that accounted for the spin of the electron. It further led to the prediction of the existence of the positron, which was experimentally verified by the American physicist Carl David Anderson.

The application of quantum mechanics to the subject of electromagnetic radiation led to explanations of many phenomena, such as bremsstrahlung (German, “braking radiation”, the radiation emitted by electrons slowed down in matter) and pair production (the formation of a positron and an electron when electromagnetic energy interacts with matter). It also led to a grave problem, however, called the divergence difficulty: certain parameters, such as the so-called bare mass and bare charge of electrons, appear to be infinite in Dirac's equations. (The terms bare mass and bare charge refer to hypothetical electrons that do not interact with any matter or radiation; in reality, electrons interact with their own electric field.) This difficulty was partly resolved in 1947-1949 in a programme called renormalization, developed by the Japanese physicist Shin'ichiro Tomonaga, the American physicists Julian S. Schwinger and Richard Feynman, and the British-born American physicist Freeman Dyson. In this programme, the bare mass and charge of the electron are chosen to be infinite in such a way that other infinite physical quantities are cancelled out in the equations. Renormalization greatly increased the accuracy with which the structure of atoms could be calculated from first principles.

XII FUTURE PROSPECTS

Quantum mechanics underlies current attempts to account for the strong nuclear force (see Quantum Chromodynamics) and to develop a unified theory for all the fundamental interactions of matter (see Physics: Developments in Physics Since 1930: Unified Field Theories). Nevertheless, doubts exist about the completeness of quantum theory. The divergence difficulty, for example, is only partly resolved. Just as Newtonian mechanics was eventually amended by quantum mechanics and relativity, many scientists—and Einstein was among them—are convinced that quantum theory will also undergo profound changes in the future. Great theoretical difficulties exist, for example, between quantum mechanics and chaos theory, which began to develop rapidly in the 1980s. Ongoing efforts are being made by theorists such as the British physicist Stephen Hawking to develop a system that encompasses both relativity and quantum mechanics.

20

Article - Feynman, Richard Phillips

Feynman, Richard Phillips

Feynman, Richard Phillips (1918-1988), American physicist and Nobel laureate, one of the founders of the theory of quantum electrodynamics, quantitatively the most exact theory in physics. He is also remembered as a great teacher and as the author of two popular books of anecdotal reminiscences.

Born in New York on May 11, 1918, Feynman was the son of a salesman. His father was a great influence, rousing his interest in science and encouraging him to work things out for himself rather than accepting authority uncritically. Feynman graduated from the Massachusetts Institute of Technology in 1939 and received his Ph.D. from Princeton University in 1942. Even before completing his Ph.D. he had been recruited to the Manhattan Project, the effort to build a nuclear bomb. He moved to Los Alamos, where the project was based, in 1943, and for three years he was a key member of the team developing the bomb.

In 1946 Feynman joined Cornell University, where he completed his masterwork, his treatment of quantum electrodynamics (QED). This is a complete theory of electromagnetism, including electromagnetic radiation and its interactions with matter, and explains the physical behaviour of everything on the everyday scale, apart from gravitational effects. Different versions of QED were developed independently by Julian Schwinger and Sin-itiro Tomonaga, but theirs were highly abstract and mathematical. Feynman’s version took the world of physics by storm because he developed a simple approach that described interactions between particles in terms of what are called Feynman diagrams. The three physicists shared the Nobel Prize in 1965.

Feynman moved from Cornell to the California Institute of Technology in 1950. There he made significant contributions to the theory of superconductivity, the understanding of the weak nuclear interaction (the force that drives radioactive decay), early attempts to develop a quantum theory of gravity, and what became the quark theory of the strong nuclear interaction (the force that holds atomic nuclei together). He is the only physicist to have made important contributions to the understanding of all four of the fundamental forces of nature. He also dabbled in biology and the theory of computers, helping to develop one of the first parallel processing machines.

As well as all this, Feynman was an inspiring teacher. His three-volume Lectures on Physics introduced generations of young scientists to the subject, and is still in use. He was a playful personality, fond of practical jokes, who studied physics because it was fun, and drove around Pasadena in a van covered with Feynman diagrams. This side of his personality was brought out in two volumes of reminiscences, Surely You’re Joking, Mr Feynman! and What Do You Care What Other People Think?, which became surprise best-sellers in the 1980s. He only became a public figure in America, however, when asked to join the inquiry into the Challenger disaster. He created a sensation when he unexpectedly demonstrated live on television how the space shuttle’s critical “O-ring” had failed because of low temperature, using a sample of the O-ring’s material dipped in a glass of ice water.

By then, Feynman was already seriously ill from cancer, and he died in Los Angeles on February 15, 1988. He is generally regarded as the greatest theoretical physicist of the post-war era, a figure to rank alongside Paul Dirac and Albert Einstein in the annals of 20th-century science. It was Feynman who, in the words (not intended as a compliment) of the mathematically inclined Julian Schwinger, “brought computation to the masses” by making quantum physics more intelligible to the physics community.


Contributed By:
John Gribbin

21

Contents - Einstein, Albert

Einstein, Albert
I INTRODUCTION

Einstein, Albert (1879-1955), German-born American physicist and Nobel laureate, best known as the creator of the special and general theories of relativity and for his bold hypothesis concerning the particle nature of light. He is perhaps the best-known scientist of the 20th century.

Einstein was born in Ulm on March 14, 1879, and spent his youth in Munich, where his family owned a small shop that manufactured electric machinery. He did not talk until the age of three, but even as a youth he showed a brilliant curiosity about nature and an ability to understand difficult mathematical concepts. At the age of 12 he taught himself Euclidean geometry.

Einstein hated the dull regimentation and unimaginative spirit of school in Munich. When repeated business failure led the family to leave Germany for Milan, in Italy, Einstein, who was then 15 years old, used the opportunity to withdraw from the school. He spent a year with his parents in Milan, and, when it became clear that he would have to make his own way in the world, he finished secondary school in Aarau, Switzerland, and entered the Swiss National Polytechnic in Zurich. Einstein did not enjoy the methods of instruction there. He often missed classes, using the time to study physics on his own or to play his beloved violin. He passed his examinations and graduated in 1900 by studying the notes of a classmate. His professors did not think highly of him and would not recommend him for a university position.

For two years Einstein worked as a tutor and substitute teacher. In 1902 he secured a position as an examiner in the Swiss Patent Office in Bern. In 1903 he married Mileva Maric, who had been his classmate at the polytechnic. They had two sons but eventually divorced. Einstein later remarried.

II EARLY SCIENTIFIC PUBLICATIONS

In 1905 Einstein received his doctorate from the University of Zurich for a theoretical dissertation on the dimensions of molecules, and he also published three theoretical papers of central importance to the development of 20th-century physics. In the first of these papers, on Brownian motion, he made significant predictions about the motion of particles that are randomly distributed in a fluid. These predictions were later confirmed by experiment.

The second paper, on the photoelectric effect, contained a revolutionary hypothesis concerning the nature of light. Einstein not only proposed that under certain circumstances light can be considered as consisting of particles, but he also hypothesized that the energy carried by any light particle, called a photon, is proportional to the frequency of the radiation. The formula for this is E = hν, where E is the energy of the radiation, h is a universal constant known as Planck's constant, and ν is the frequency of the radiation. This proposal—that the energy contained within a light beam is transferred in individual units, or quanta—contradicted a 100-year-old tradition of considering light energy to be a manifestation of continuous processes. Virtually no one accepted Einstein's proposal. In fact, when the American physicist Robert Andrews Millikan experimentally confirmed the theory almost a decade later, he was surprised and somewhat disquieted by the outcome.
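As a rough numerical illustration of the relation E = hν, the short calculation below evaluates the energy of a single photon at one representative visible-light frequency; the value of Planck's constant and the sample frequency are standard figures assumed for the example rather than quoted from this article.

# Energy of a single photon, E = h * nu (illustrative values).
h = 6.626e-34          # Planck's constant, in joule-seconds
nu = 5.5e14            # an assumed frequency in the visible range, in hertz

E = h * nu             # photon energy, in joules
print(E)               # about 3.6e-19 J
print(E / 1.602e-19)   # the same energy in electronvolts, about 2.3 eV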

Einstein, whose prime concern was to understand the nature of electromagnetic radiation, subsequently urged the development of a theory that would be a fusion of the wave and particle models for light. Again, very few physicists understood or were sympathetic to these ideas.

III EINSTEIN'S SPECIAL THEORY OF RELATIVITY

Einstein's third major paper of 1905, “On the Electrodynamics of Moving Bodies”, contained what became known as the special theory of relativity. Since the time of the English mathematician and physicist Sir Isaac Newton, natural philosophers (as physicists and chemists were known) had been trying to understand the nature of matter and radiation, and how they interacted in some unified world picture. The position that mechanical laws are fundamental has become known as the mechanical world view, and the position that electrical laws are fundamental has become known as the electromagnetic world view. Neither approach, however, is capable of providing a consistent explanation for the way radiation (light, for example) and matter interact when viewed from different inertial frames of reference, that is, an interaction viewed simultaneously by an observer at rest and an observer moving at uniform speed.

In the spring of 1905, after considering these problems for ten years, Einstein realized that the crux of the problem lay not in a theory of matter but in a theory of measurement. At the heart of his special theory of relativity was the realization that all measurements of time and space depend on judgements as to whether two distant events occur simultaneously. This led him to develop a theory based on two postulates: the principle of relativity, that physical laws are the same in all inertial reference systems, and the principle of the invariance of the speed of light, that the speed of light in a vacuum is a universal constant. He was thus able to provide a consistent and correct description of physical events in different inertial frames of reference without making special assumptions about the nature of matter or radiation, or how they interact. Virtually no one understood Einstein's argument.

IV EARLY REACTIONS TO EINSTEIN

The difficulty that others had with Einstein's work was not because it was mathematically complex or technically obscure; the problem resulted, rather, from Einstein's beliefs about the nature of good theories and the relationship between experiment and theory. Although he maintained that the only source of knowledge is experience, he also believed that scientific theories are the free creations of a finely tuned physical intuition and that the premises on which theories are based cannot be connected logically to experiment. A good theory, therefore, is one in which a minimum number of postulates is required to account for the physical evidence. This sparseness of postulates, a feature of all Einstein's work, was what made his work so difficult for colleagues to comprehend, let alone support.

Einstein did have important supporters, however. His chief early patron was the German physicist Max Planck. Einstein remained at the Patent Office for four years after his star began to rise within the physics community. Then he moved rapidly upwards in the German-speaking academic world. His first academic appointment was in 1909 at the University of Zurich. In 1911 he moved to the German-speaking university at Prague, and in 1912 he returned to the Swiss National Polytechnic in Zurich. Finally, in 1913, he was appointed director of the Kaiser Wilhelm Institute for Physics in Berlin.

V THE GENERAL THEORY OF RELATIVITY

Even before he left the Patent Office in 1907, Einstein began work on extending and generalizing the theory of relativity to all coordinate systems. He began by enunciating the principle of equivalence, a postulate that gravitational fields are equivalent to accelerations of the frame of reference. For example, people travelling in a moving lift cannot, in principle, decide whether the force that acts on them is caused by gravitation or by a constant acceleration of the lift. The full general theory of relativity was not published until 1916. In this theory, the interactions of bodies, which heretofore had been ascribed to gravitational forces, are explained as the influence of bodies on the geometry of space-time (four-dimensional space, a mathematical abstraction, having the three dimensions of Euclidean space and time as the fourth dimension).

On the basis of the general theory of relativity, Einstein accounted for previously unexplained variations in the orbital motion of the planets and predicted the bending of starlight in the vicinity of a massive body such as the Sun. The confirmation of this latter phenomenon during an eclipse of the Sun in 1919 became a media event, and Einstein's fame spread worldwide.

For the rest of his life Einstein devoted considerable time to generalizing his theory even more. His last effort, a unified field theory, which was not entirely successful, was an attempt to understand all physical interactions—including electromagnetic interactions and weak and strong nuclear interactions—in terms of the modification of the geometry of space-time between interacting entities.

Most of Einstein's colleagues felt that these efforts were misguided. Between 1915 and 1930 the mainstream of physics was the development of a new conception of the fundamental character of matter, known as quantum theory. This theory contained the feature of wave-particle duality (light exhibits the properties of a particle, as well as of a wave) that Einstein had earlier urged as necessary, as well as the uncertainty principle, which states that precision in measuring processes is limited. Additionally, it contained a novel rejection, at a fundamental level, of the notion of strict causality. Einstein, however, would not accept such notions and remained a critic of these developments until the end of his life. “God”, Einstein once said, “does not play dice with the world”.

VI WORLD CITIZEN

After 1919 Einstein became internationally renowned. He accrued honours and awards, including the Nobel Prize for Physics in 1921, from various world scientific societies. His visit to any part of the world became a national event; photographers and reporters followed him everywhere. While regretting his loss of privacy, Einstein capitalized on his fame to further his own political and social views.

The two social movements that received his full support were pacifism and Zionism. During World War I he was one of a handful of German academics willing to publicly decry Germany's involvement in the war. After the war his continued public support of pacifist and Zionist goals made him the target of vicious attacks by anti-Semitic and right-wing elements in Germany. Even his scientific theories were publicly ridiculed, especially the theory of relativity.

When Hitler came to power in Germany in 1933, Einstein immediately decided to emigrate to the United States. He took a position at the Institute for Advanced Study at Princeton, New Jersey. While continuing his efforts on behalf of world Zionism, Einstein renounced his former pacifist stand in the face of the awesome threat to humankind posed by the Nazi regime in Germany.

In 1939 Einstein collaborated with several other physicists in writing a letter to President Franklin D. Roosevelt, pointing out the possibility of making an atomic bomb and the likelihood that the German government was embarking on such a course. The letter, which bore only Einstein's signature, helped lend urgency to efforts in the United States to build the atomic bomb, but Einstein himself played no role in the work and knew nothing about it at the time.

After the war, Einstein was active in the causes of international disarmament and world government. He continued his active support of Zionism but declined the offer made by leaders of the state of Israel to become president of that country. In the United States during the late 1940s and early 1950s he spoke out on the need for the nation's intellectuals to make any sacrifice necessary to preserve political freedom. Einstein died in Princeton on April 18, 1955.

Einstein's efforts on behalf of social causes have sometimes been viewed as unrealistic. In fact, his proposals were always carefully thought out. Like his scientific theories, they were motivated by sound intuition based on a shrewd and careful assessment of evidence and observation. Although Einstein gave much of himself to political and social causes, science always came first, because, he often said, only the discovery of the nature of the universe would have lasting meaning. His writings include Relativity: the Special and General Theory (1916); About Zionism (1931); Builders of the Universe (1932); Why War? (1933), with Sigmund Freud; The World as I See It (1934); The Evolution of Physics (1938), with the Polish physicist Leopold Infeld; and Out of My Later Years (1950). Einstein's collected papers are published in a multivolume work that began publication in 1987.

22

Article - Bohr, Niels Henrik David

Bohr, Niels Henrik David

Bohr, Niels Henrik David (1885-1962), Danish physicist and Nobel laureate, who made basic contributions to nuclear physics and the understanding of atomic structure.

Bohr was born in Copenhagen on October 7, 1885, the son of a physiology professor, and was educated at the University of Copenhagen, where he earned his doctorate in 1911. That same year he went to the University of Cambridge in England to study nuclear physics under J. J. Thomson, but he soon moved to the University of Manchester to work with Ernest Rutherford.

Bohr's theory of atomic structure, for which he received the Nobel Prize for Physics in 1922, was published in papers between 1913 and 1915. His work drew on Rutherford's nuclear model of the atom, in which the atom is seen as a compact nucleus surrounded by a swarm of much lighter electrons. Bohr's atomic model made use of quantum theory and the Planck constant (the ratio between quantum size and radiation frequency). The model posits that an atom emits electromagnetic radiation only when an electron in the atom jumps from one quantum level to another. This model contributed enormously to future developments of theoretical atomic physics.
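To make the jump-between-levels picture concrete, the sketch below uses the standard Bohr-model energies for hydrogen, E_n = -13.6 eV / n^2, to estimate the wavelength of the photon emitted when an electron drops from the third to the second level. The level formula and the physical constants are textbook values assumed for illustration; they are not given in this article.

# Bohr model of hydrogen: photon emitted when an electron drops from n = 3 to n = 2.
h = 6.626e-34        # Planck's constant, J s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # one electronvolt, in joules

def level_energy(n):
    return -13.6 * eV / n**2     # assumed Bohr-model energy of level n in hydrogen

delta_E = level_energy(3) - level_energy(2)   # energy released in the jump
wavelength = h * c / delta_E                  # photon wavelength, lambda = h * c / E
print(wavelength)                             # about 6.6e-7 m, the red H-alpha line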

In 1916 Bohr returned to the University of Copenhagen as a Professor of Physics, and in 1920 he was made Director of the university's newly formed Institute for Theoretical Physics. There Bohr developed a theory relating quantum numbers to large systems that follow classical laws, and made other major contributions to theoretical physics. His work helped lead to the concept that electrons exist in shells and that the electrons in the outermost shell determine an atom's chemical properties. He also served as a visiting professor at many universities.

In 1939, recognizing the significance of the fission experiments (see Nuclear Energy: Nuclear Energy from Fission) of the German scientists Otto Hahn and Fritz Strassmann, Bohr convinced physicists at a scientific conference in the United States of the importance of those experiments. He later demonstrated that uranium-235 is the particular isotope of uranium that undergoes nuclear fission. Bohr then returned to Denmark, where he was forced to remain after the German occupation of the country in 1940. Eventually, however, he escaped to Sweden, in peril of his life and that of his family. From Sweden the Bohrs travelled to England and eventually to the United States, where Bohr joined in the effort to develop the first atomic bomb, working at Los Alamos, New Mexico, until the first bomb's detonation in 1945. He opposed complete secrecy of the project, however, and, fearing the consequences of this ominous new development, argued that atomic energy should be placed under international control.

In 1945 Bohr returned to the University of Copenhagen, where he immediately began working to develop peaceful uses for atomic energy. He organized the first Atoms for Peace Conference in Geneva, held in 1955, and two years later he received the first Atoms for Peace Award. Bohr died in Copenhagen on November 18, 1962.

23

Article - Chaos Theory

Chaos Theory

Chaos Theory, mathematical theory dealing with systems displaying unpredictable and seemingly random behaviour, even though the components of the system are governed by strictly deterministic laws. Since its inception in the 1970s, chaos theory has become one of the fastest-growing areas of mathematical research. Physics, even including the advanced ramifications of quantum theory, has hitherto dealt primarily with systems that are in principle predictable, at least on the large scale, but the natural world exhibits a tendency towards chaotic behaviour. For example, large-scale weather systems tend to develop random patterns as they interact with more complex local systems. Other examples include the ever-changing rate of the dripping of a tap, the turbulence in a column of rising smoke, and the human heartbeat.

Scientists long lacked the mathematical means to deal with chaotic systems, however familiar, and the tendency had been to avoid them in theoretical work. From the 1970s, however, a number of physicists began to seek ways of coming to grips with chaos. One of the principal theorists was the American physicist Mitchell Feigenbaum, who determined certain consistent patterns of behaviour in systems tending towards chaos, involving quantities now known as Feigenbaum numbers. The patterns of chaos are linked to those observed in fractal geometry, and the study of chaotic systems has affinities with catastrophe theory.
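A minimal computational sketch of this behaviour is the logistic map, x(n+1) = r * x(n) * (1 - x(n)), a standard example of the period-doubling route to chaos analysed by Feigenbaum; the map itself is not mentioned in the article and is used here only to show how a strictly deterministic rule can produce effectively unpredictable results.

# Logistic map: a deterministic rule whose long-run behaviour is unpredictable
# in practice for r = 4 (an assumed, standard illustration).
def iterate(r, x0, steps):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

r = 4.0
# Two starting values that differ by one part in a million end up far apart:
print(iterate(r, 0.200000, 50))
print(iterate(r, 0.200001, 50))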

24

Contents - Nuclear Physics

Nuclear Physics
I INTRODUCTION

Nuclear Physics, the study of atomic nuclei, and of their interactions with other nuclei and with individual elementary particles (see Atom).

II DECAY OF NUCLIDES

Atomic nuclei consist of positively charged protons and neutral, or uncharged, neutrons. The number of protons in a nucleus is the atomic number, which defines the chemical element. Nuclei with 11 protons, for example, are nuclei of sodium (Na) atoms. An element can have various isotopes, the nuclei of which have differing numbers of neutrons. For example, stable sodium nuclei contain 12 neutrons, whereas those with 13 are radioactive. These isotopes are notated as ²³₁₁Na and ²⁴₁₁Na, where the left-hand subscript indicates the atomic number and the superscript represents the total number of nucleons, or neutrons and protons. Any species of nucleus designated by certain atomic and neutron numbers is called a nuclide.

Radioactive nuclides are unstable: they undergo spontaneous transformation into nuclides of other elements, releasing energy in the process. These transformations include alpha (α) decay (the emission of a helium nucleus, ⁴₂He²⁺), and beta (β) decay or positron (β⁺) decay. In β decay a neutron is transformed into a proton with the simultaneous emission of a high-energy electron. In β⁺ decay a nuclear proton turns into a neutron with the emission of a high-energy positron. For example, ²⁴Na undergoes β decay to form the next higher element, magnesium:

²⁴₁₁Na → ²⁴₁₂Mg + β⁻

Gamma (γ) radiation, like light, is electromagnetic radiation, but by virtue of their much higher frequency, γ rays have far more energy. When α or β decay occurs, the resulting nucleus is often left in an excited (higher energy) state. Gamma rays are emitted as the nucleus drops to a lower energy state.

Any characterization of radioactive nuclide decay must include a determination of the half-life of the nuclide, that is, the time it takes for half of a sample to decay. The half-life of ²⁴Na, for example, is 15 hours. The types and energies of radiation emitted by the nuclide are also important in characterizing the decay.
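The meaning of the half-life can be shown with a short calculation: after each 15-hour period half of the remaining ²⁴Na has decayed, so the surviving fraction after a time t is (1/2) raised to the power t divided by 15 hours. The 15-hour figure comes from the text; the sample times below are arbitrary.

# Fraction of a radioactive sample remaining after time t, using the 15-hour
# half-life of sodium-24 quoted in the text (sample times chosen arbitrarily).
half_life = 15.0   # hours

def fraction_remaining(t_hours):
    return 0.5 ** (t_hours / half_life)

for t in (15, 30, 60):
    print(t, fraction_remaining(t))
# 15 h -> 0.5, 30 h -> 0.25, 60 h -> 0.0625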

III EARLY EXPERIMENTS

Radioactivity emitted by uranium salts was discovered by the French physicist Henri Becquerel in 1896. In 1898 the French scientists Marie Curie and Pierre Curie discovered the naturally occurring radioactive elements polonium (₈₄Po) and radium (₈₈Ra). During the 1930s, Irène and Frédéric Joliot-Curie made the first artificial radioactive nuclides by bombarding boron (₅B) and aluminium (₁₃Al) with α particles to form radioactive isotopes of nitrogen (₇N) and phosphorus (₁₅P). Naturally occurring isotopes of these elements are stable.

The German nuclear scientists Otto Hahn and Fritz Strassmann discovered nuclear fission in 1938. When uranium is irradiated with neutrons, some uranium nuclei split into two nuclei, each with about half the atomic number of uranium. Fission releases enormous energy and is used in nuclear fission weapons and reactors (see Nuclear Energy).

IV NUCLEAR REACTIONS

Nuclear physics also involves the study of nuclear reactions: the use of nuclear projectiles to convert one species of nucleus into another. If, for example, sodium is bombarded with neutrons, some of the stable ²³Na nuclei capture neutrons to form radioactive ²⁴Na nuclei:

²³₁₁Na + ¹₀n → ²⁴₁₁Na + γ rays
Neutron reactions are studied by placing samples inside nuclear reactors, which produce enormous numbers of neutrons.

Nuclei can also react with each other, but being positively charged, they repel each other with great force. The projectile nucleus must have a high energy to overcome the repulsion and to react with target nuclei. High-energy nuclei are produced in cyclotrons, Van de Graaff generators, or other particle accelerators.

A typical nuclear reaction is the one that was used to produce artificially the next heavier element above uranium (₉₂U), the heaviest element that occurs in nature (see Periodic Law). Neptunium (₉₃Np) was made by bombarding uranium (mostly ²³⁸U) with deuterons (nuclei of the heavy hydrogen isotope, ²₁H) to knock out two neutrons, forming ²³⁸Np:

²³⁸₉₂U + ²₁H → ²³⁸₉₃Np + 2¹₀n

V RADIOCHEMICAL ANALYSIS

Alpha particles, most of which are emitted by elements with atomic numbers above 83, have discrete energies characteristic of the emitting nuclide. Thus, α emitters can be identified by measuring the energies of the α particles. The samples being measured must be very thin, as α particles lose energy rapidly on passing through matter. Gamma rays also have discrete energies characteristic of the decaying nuclide, so γ-ray energies can also be used to identify nuclides. Because γ rays can pass through considerable amounts of matter without losing energy, samples need not be thin. Beta-particle (and positron) energy spectra are not useful for identifying nuclides; they are spread over all energies up to a maximum for each β emitter. See Particle Detectors.

Nuclear physics techniques are frequently used to analyse materials for trace elements—elements that occur in minute amounts. The technique used is called activation analysis. A sample is irradiated with nuclear projectiles, usually neutrons, to convert stable nuclides into radioactive ones, the activity of which is then measured with nuclear radiation detectors. For example, any sodium in a sample can be detected by irradiating the sample with neutrons, thereby converting some of the stable ²³Na nuclei into radioactive ²⁴Na and measuring the amount of ²⁴Na by counting the β particles and γ rays emitted.

Activation analysis can (without chemical separation) measure quantities as small as a nanogram (a billionth of a gram or 0.03 billionth of an ounce) of about 35 elements in materials such as soil, rocks, meteorites, and lunar samples. Activation analysis can be used on biological samples, such as human blood and tissue; however, fewer elements can be observed in biological materials without chemical separations.

Desired radioactive isotopes can be produced for medical diagnoses and treatments, and for use as radioactive isotopic tracers. These are valuable in studies of the chemical behaviour of elements, in the measurement of wear in car engines, and in other studies involving extremely small amounts of material.

See Also Physics; Quantum Theory.

25

Article - Planck, Max Karl Ernst Ludwig

Planck, Max Karl Ernst Ludwig

Planck, Max Karl Ernst Ludwig (1858-1947), German physicist and Nobel laureate, who was the originator of quantum theory.

Planck was born in Kiel on April 23, 1858, and educated at the universities of Munich and Berlin. He was appointed Professor of Physics at the University of Kiel in 1885, and from 1889 until 1928 filled the same position at the University of Berlin. In 1900 Planck postulated that energy is radiated in small, discrete units, which he called quanta. Developing this theory further, he discovered a universal constant of nature, which came to be known as Planck's constant. Planck's law states that the energy of each quantum is equal to the frequency of the radiation multiplied by the universal constant. His discoveries did not, however, supersede the theory that radiation is propagated in waves. Physicists now believe that electromagnetic radiation combines the properties of both waves and particles. Planck's discoveries, which were later verified by other scientists, were the basis of an entirely new field of physics, known as quantum mechanics, and provided a foundation for research in such fields as atomic energy. See Atom.

Planck received many honours for his work, notably the 1918 Nobel Prize for Physics. In 1930 Planck was elected president of the Kaiser Wilhelm Society for the Advancement of Science, the leading association of German scientists, which was later renamed the Max Planck Society. He endangered himself by openly criticizing the Nazi regime that came to power in Germany in 1933 and was forced out of the society, but became president again after World War II. He died at Göttingen on October 4, 1947. Among his writings that have been translated into English are Introduction to Theoretical Physics (5 vols., 1932-1933) and Philosophy of Physics (1936). See also Planck's Radiation Law.

26

Contents - Matrix Theory

Matrix Theory
I INTRODUCTION

Matrix Theory, a branch of pure mathematics, introduced by Arthur Cayley in 1858, concerned with the solution of systems of linear equations, which arise naturally in science, engineering, and the social sciences.

An m × n matrix is an array of mn numbers arranged in m rows and n columns, and enclosed in brackets. For example, the two arrays in the sketch below are 2 × 3 and 3 × 2 matrices. The entries in a matrix can belong to various mathematical systems, such as the integers or the rational, real, or complex numbers. The entry in the i-th row and j-th column of a matrix A is denoted by aij or (A)ij.
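For instance, the following sketch builds a 2 × 3 and a 3 × 2 matrix; the particular numbers are arbitrary, and the numpy library is assumed purely for convenience (note that Python indexes rows and columns from 0 rather than 1).

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2 rows and 3 columns: a 2 x 3 matrix
B = np.array([[1, 4],
              [2, 5],
              [3, 6]])       # 3 rows and 2 columns: a 3 x 2 matrix

print(A.shape, B.shape)      # (2, 3) (3, 2)
print(A[0, 1])               # the entry a12 in the article's notation, here 2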

An m × n matrix stores mn pieces of information aij, indexed by two parameters i, j. For instance, if m countries each export n commodities, then aij could be the amount of the j-th commodity exported by the i-th country in a given year, so each row or column of A represents a particular country or commodity.

The need to manipulate this information leads to an algebraic theory in which the basic operations of arithmetic are applied to matrices. If A and B are both m × n matrices, their sum A + B is obtained by adding their corresponding entries, that is, (A + B)ij = aij + bij; a worked example is given below. The difference A - B is defined similarly by (A - B)ij = aij - bij. (Matrices of different shapes cannot be added or subtracted.) Thus if A and B represent exports for consecutive years, then A + B represents exports over the two-year period, and if C represents imports during the first year, then A - C represents net exports for that year.
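A worked sum and difference, continuing with arbitrary numbers and the assumed numpy library:

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[6, 5, 4],
              [3, 2, 1]])

print(A + B)   # [[7 7 7] [7 7 7]]: each entry is a_ij + b_ij
print(A - B)   # [[-5 -3 -1] [1 3 5]]: each entry is a_ij - b_ij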

If A is an m × n matrix and B is an n × s matrix, their product AB is an m × s matrix with (AB)ij formed from the i-th row of A and the j-th column of B by (AB)ij = ai1b1j + ai2b2j + ... + ainbnj; an instance is worked out below.
In our export example, if D is an n × 1 matrix (or column vector) whose entries are the costs per unit amount of the n commodities, then AD is an m × 1 matrix whose entries are the values of the exports of the m countries.
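A worked instance of the product rule, with a 2 × 3 matrix multiplied by a 3 × 2 matrix to give a 2 × 2 matrix (arbitrary numbers, numpy assumed):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])     # 2 x 3
B = np.array([[1, 4],
              [2, 5],
              [3, 6]])        # 3 x 2

print(A @ B)                  # a 2 x 2 matrix
# [[14 32]
#  [32 77]]
# e.g. the top-left entry is 1*1 + 2*2 + 3*3 = 14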

A square matrix is an n × n matrix for some n. If A and B are both n × n matrices, then A + B, A - B, AB, and BA all exist and are also n × n matrices. The algebra of square matrices resembles the algebra of numbers in many ways (though AB may differ from BA). For instance, the n × n identity matrix I, which has 1s along its main diagonal and 0s everywhere else, satisfies IA = AI = A for all n × n matrices A, so it behaves like the number 1. Each n × n matrix A has a number called its determinant, det(A): if n = 1 then det(A) = a11, and if n > 1 then

det(A) = a11D1 - a12D2 + a13D3 - ... ± a1nDn,

where Dj (called a minor of A) is the determinant of the (n - 1) × (n - 1) matrix formed by deleting the first row and the j-th column of A. If det(A) ≠ 0 then A has an inverse matrix A-1 satisfying AA-1 = A-1A = I.
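A small worked case, using an arbitrary 2 × 2 matrix (for which det(A) = a11a22 - a12a21) and the assumed numpy routines:

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

print(np.linalg.det(A))     # 2*3 - 1*5 = 1 (up to rounding)
A_inv = np.linalg.inv(A)
print(A_inv)                # [[ 3. -1.] [-5.  2.]]
print(A @ A_inv)            # the 2 x 2 identity matrix I (up to rounding)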

II SIMULTANEOUS EQUATIONS

An important application of matrices is in the solution of simultaneous linear equations. Given m equations in n unknowns x1, ..., xn, say

ai1x1 + ai2x2 + ... + ainxn = bi (i = 1, ..., m),

let A be the m × n matrix with (A)ij = aij (i = 1, ..., m, j = 1, ..., n), let X be the n × 1 column vector with entries x1, ..., xn, and let B be the m × 1 column vector with entries b1, ..., bm. The equations may be written in matrix form as AX = B, and solved (where possible) by manipulating this equation. For instance, if m = n and det(A) ≠ 0 there is a unique solution X = A-1B.
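As a sketch of the case m = n with det(A) ≠ 0, the example below solves the made-up pair of equations 2x1 + x2 = 5 and 5x1 + 3x2 = 13 (numpy assumed):

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([[5.0],
              [13.0]])

X = np.linalg.solve(A, B)   # equivalent to multiplying B by the inverse of A
print(X)                    # [[2.] [1.]], i.e. x1 = 2 and x2 = 1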

III GEOMETRY

Matrices have important applications in geometry. A point in ordinary 3-dimensional space can be specified by 3 numbers: the x, y, and z coordinates. This means that a point can be represented by a simple column vector, and a set of points as a set of column vectors. Transformations such as rotation around a point, reflection in a plane, and scaling can all be performed by the multiplication and addition of matrices; a simple rotation is worked out below. These procedures can be generalized to more abstract cases of n-dimensional space by increasing the size of the matrices involved.
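The sketch below rotates a point in the plane through 90 degrees about the origin using the standard 2 × 2 rotation matrix (a construction assumed here rather than quoted from the article); the same matrix also happens to be orthogonal in the sense defined in the following section.

import numpy as np

theta = np.pi / 2                                   # a 90-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([[1.0],
              [0.0]])                               # the point (1, 0) as a column vector
print(R @ p)                                        # approximately [[0.] [1.]], the point (0, 1)
print(R.T @ R)                                      # close to the identity: R is orthogonal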

IV FURTHER MATRIX NOTATION

The transpose, At, of matrix A is formed by interchanging its rows and columns, that is (At)ij = aji for all i, j. A square matrix is orthogonal if AtA = I.

The adjoint, A*, of a matrix A is formed by taking the complex conjugate of each entry of At, that is, by reversing the sign of any imaginary parts. A matrix is unitary if A*A = I. Unitary matrices are important in physics, particularly in quantum theory, where they are associated with conservation laws.

27

Article - Blackbody

Blackbody

Blackbody, idealized object in theoretical physics that absorbs all the radiation that strikes its surface, regardless of its wavelength. It is called a blackbody because it reflects no light. No such object is known to exist, although a surface consisting of carbon black may absorb all but about 3 per cent of incident radiation. In theory, a blackbody is therefore a perfect emitter of radiation also, and at any specific temperature it emits the maximum amount of energy available from a radiating body through temperature alone. At a given temperature, a blackbody emits a definite amount of energy at each wavelength, but the energy carried by the radiation is not distributed evenly across the wavelength range; the proportion of energy carried by shorter wavelengths increases as the temperature of the source increases. This is in accordance with Wien's Law.

λmax T = constant,
where λmax is the wavelength at which the emitted energy is greatest, and T is the surface temperature in kelvins (K).

The total energy emitted per second increases with the surface temperature of the source, and the relationship linking these quantities is known as Stefan's Law:

Q/t = σAT⁴
where Q/t is the energy radiated per second, A is the surface area of the blackbody, and T is the surface temperature in kelvins (K). σ is known as Stefan's constant, and has the value 5.7 × 10⁻⁸ J s⁻¹ m⁻² K⁻⁴.

For a non-blackbody, Stefan's Law can be used in the form Q/t = eσAT⁴, where e is called the total emissivity of the body and has a value between 0 and 1.

The laws have many uses in physics, for example to determine the temperature of stars. The peak wavelength emitted by a star is determined spectroscopically, and this enables its surface temperature to be calculated using Wien's Law.
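A rough worked example of this procedure: given a measured peak wavelength, Wien's Law yields the surface temperature, and Stefan's Law then gives the total power radiated. The value of Wien's constant (about 2.9 × 10⁻³ m K) and the Sun-like peak wavelength and surface area are standard figures assumed for the illustration; only Stefan's constant is quoted above.

# Estimate a star's surface temperature and radiated power (illustrative values).
wien_constant = 2.9e-3        # m K (assumed standard value of the Wien's Law constant)
sigma = 5.7e-8                # Stefan's constant from the text, J s^-1 m^-2 K^-4

peak_wavelength = 5.0e-7      # 500 nm, roughly the Sun's peak emission
T = wien_constant / peak_wavelength      # from lambda_max * T = constant
print(T)                                 # about 5800 K

area = 6.1e18                 # approximate surface area of a Sun-sized star, m^2
print(sigma * area * T**4)               # Q/t = sigma * A * T^4, about 4e26 J per second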

It was through the failure of attempts to explain Wien's and Stefan's laws in terms of classical physics that the basic concepts of quantum theory were first developed.

See also Ludwig Boltzmann; Max Planck; Joseph Stefan; Wilhelm Wien.

28

Article - Democritus

Democritus

Democritus (c. 460-c. 370 bc), Greek philosopher, who developed the atomic theory of the universe that had been originated by his mentor, the philosopher Leucippus. Democritus was born in Abdera, Thrace. He wrote extensively, but only fragments of his works remain.

According to Democritus's exposition of the atomic theory of matter, all things are composed of minute, invisible, indestructible particles of pure matter (atoma, “indivisibles”), which move about eternally in infinite empty space (kenon, “the void”). Although atoms are made up of precisely the same matter, they differ in shape, size, weight, sequence, and position. Qualitative differences in what the senses perceive and the birth, decay, and disappearance of things are the results not of characteristics inherent in atoms but of quantitative arrangements of atoms. Democritus viewed the creation of worlds as the natural consequence of the ceaseless whirling motion of atoms in space. Atoms collide and spin, forming larger aggregations of matter.

Democritus also wrote on ethics, proposing happiness, or “cheerfulness”, as the highest good—a condition to be achieved through moderation, tranquillity, and freedom from fear. In later histories, Democritus was known as the Laughing Philosopher, in contrast to the more sombre and pessimistic Heraclitus, the Weeping Philosopher. His atomic theory anticipated the modern principles of the conservation of energy and the irreducibility of matter.

29

Contents - Galileo (scientist)

Galileo (scientist)
I INTRODUCTION

Galileo (scientist) (1564-1642), Italian physicist and astronomer, who pioneered the scientific revolution that flowered in the work of the English physicist Isaac Newton. His main contributions to astronomy were the use of the telescope in observation, and the discovery of lunar mountains and valleys, the four largest satellites of Jupiter, the phases of Venus, and sunspots. In physics, he discovered the laws governing falling bodies and projectiles. In the history of culture, Galileo stands as a symbol of the battle against authority for freedom of inquiry.

Galileo, whose full name was Galileo Galilei, was born near Pisa, in Tuscany, on February 15, 1564. His father, Vincenzio Galilei, played an important role in the musical revolution from medieval polyphony to harmonic modulation. Just as Vincenzio saw that rigid theory stifled new forms in music, so his eldest son came to see both the currently dominant physics of the Greek philosopher Aristotle and the Roman Catholic theology influenced by it as limiting physical inquiry. Galileo was taught by monks at Vallombrosa and then entered the University of Pisa in 1580 or 1581 to study medicine. Although the syllabus was uncongenial to him, it did give him a useful introduction to current versions of Aristotelian physics.

Aristotelians made a sharp division between the Earth and the heavens. In the heavens there could be no change except the recurring patterns produced by the circular motions of the perfectly spherical heavenly bodies. The sublunar world (the universe below the Moon) was the region of the four elements—earth, water, air, and fire—and subject to its own distinct laws of natural motion. Fire, for instance, had lightness, which made it rise vertically, away from the centre of the Earth. Earthy objects fell naturally downward towards the centre of the fixed Earth: the heavier the object, the faster its fall. “Natural” motions of the elements took them to their natural place, where they rested. Rest was the natural state of an element; it was motion that needed explaining, since every motion must have a cause. This common-sense physics held sway until Galileo began to undermine it. See Chemistry: Greek Natural Philosophy; Philosophy, Greek: Plato and Aristotle.

II GALILEO’S WORK IN PHYSICS

The key to Galileo’s new physics lay in mathematics. Although he was still registered as a medical student, he increasingly devoted his time to the extra-curricular study of mathematics, with the encouragement of the court mathematician Ostilio Ricci. He left the university without a degree in 1585. For a time he tutored privately and wrote on hydrostatics, but he did not publish anything. In 1589 he became Professor of Mathematics at the University of Pisa.

The celebrated story of Galileo dropping objects from the Leaning Tower of Pisa to demonstrate to assembled professors that Aristotle was fundamentally mistaken about motion comes from his last pupil and first biographer, Vincenzo Viviani. Though Viviani’s account is sometimes dismissed as legend, it is more probably an exaggerated version of an actual event. Viviani has Galileo simultaneously dropping two objects of the same material but different weights to refute the Aristotelian belief that speed of fall is proportional to weight. That much Galileo could show even at this early stage of his career. However, his manuscript works show that he was still unclear about acceleration in free fall and that he thought more in terms of the characteristic speed of a body of a given material in a given medium.

Yet Galileo could already improve on Aristotle. He considered himself a follower of the ancient Greek scientist Archimedes and abandoned Aristotelian notions of heaviness and lightness in favour of the more useful notion of density. He made his first attempts at producing simple mathematical comparisons of how bodies of varying densities fall in various media and he was willing to ignore minor discrepancies, leaving them to be explained by further investigation. He even toyed with the idea of a body resting on a perfectly smooth surface being movable by the slightest of forces—a hint of his later approximation to inertial motion and a measure of how he was distancing himself from Aristotelian ideas of natural and forced motions.

Galileo’s contract was not renewed in 1592, probably because he contradicted Aristotelian professors. In the same year he was appointed to the chair of mathematics at the University of Padua in the republic of Venice, where he remained until 1610.

At Padua, Galileo invented a calculating “compass” for the practical solution of mathematical problems. He was much impressed by the practical knowledge of mechanics displayed by the foremen of the world-famous shipyard, the Arsenal of Venice. In his own work he combined an ability to discern simple mathematical patterns underlying familiar occurrences, such as the free fall of objects to the ground, with a knack of devising controlled observations in which the looked-for mathematical relationships presented themselves as obvious and measurable with precision. His fundamental conviction was that the universe is an open book but, as he wrote later in The Assayer, “one cannot understand it unless one first learns to understand the language and recognize the characters in which it is written. It is written in mathematical language ... .”

A Projectiles and Pendulums

This conviction led to important discoveries in the first decade of the 17th century. Galileo not only recognized that the acceleration of any body in free fall was uniform but he expressed this in a simple law: the distance travelled in free fall is proportional to the square of the time elapsed; that is, in 2 seconds a body will fall 4 times as far as it will in 1 second; in 3 seconds it will fall 9 times as far; and so on. Alternatively expressed: the distances moved in successive equal intervals of time are as the odd numbers: 1, 3, 5 ... .

This same law led to an understanding of the motion of projectiles. Galileo could look at the fall of an arrow or cannon ball and see it as made up of two independent motions: the vertical component was uniformly accelerated and conformed to his law of falling bodies; the horizontal motion imparted to the body by the bowman or gunner was at constant speed. When the horizontal and vertical components were combined, the resultant path was parabolic. The practical consequences for efficient gunnery were deduced from this seemingly abstract geometrical account.
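The two results can be checked with a short calculation. Under an assumed uniform acceleration (the modern value g = 9.8 m/s², used here only for convenience), the total distance fallen grows as the square of the time, the distances covered in successive equal intervals follow the odd numbers 1, 3, 5, 7, and adding a constant horizontal speed produces a parabolic path.

g = 9.8   # an assumed uniform acceleration, m/s^2

falls = [0.5 * g * t**2 for t in range(5)]                 # total fall after 0..4 seconds
print([d / falls[1] for d in falls[1:]])                   # [1.0, 4.0, 9.0, 16.0]: distance grows as time squared

steps = [falls[t] - falls[t - 1] for t in range(1, 5)]     # distance covered in each successive second
print([d / steps[0] for d in steps])                       # [1.0, 3.0, 5.0, 7.0]: the odd-number rule

vx = 20.0                                                  # a constant horizontal speed, m/s
path = [(vx * t, -0.5 * g * t**2) for t in (0, 1, 2, 3)]   # horizontal and vertical positions of a projectile
print(path)                                                # the drop grows with the square of the distance: a parabola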

In similar vein, Galileo investigated mechanics and the strength of materials. In his studies of pendulums he discovered that for a given pendulum the swing of the bob takes the same time for arcs of different sizes, though others soon pointed out that this was true only provided that the swings did not become too large.

One of the greatest contrasts between Galileo’s ideas and Aristotle’s is in their underlying models of motion. Galileo considered that an object moving uniformly on the Earth’s surface without meeting any resistance would continue to do so without needing to be kept moving by any force, whereas Aristotelians would look for a force to cause the continuing motion. It is true that the surface of the Earth is a spherical surface, but it is reasonable to see Galileo’s ideas as approximating to Newton’s first law of motion, according to which a body will continue in its state of rest or uniform motion in a straight line unless interfered with (see Mechanics: Newton’s Three Laws of Motion). At the least, Galileo made the advance of not treating rest as a state more natural or privileged than motion.

III ASTRONOMICAL RESEARCH

During most of his Paduan period Galileo showed only occasional interest in astronomy, although in 1597 he declared in private correspondence that he preferred the Copernican theory that the Earth revolves around the Sun to the Aristotelian and Ptolemaic assumption that the planets, the Moon, and the Sun circle a fixed Earth (see Ptolemaic System). Only the Copernican model supported Galileo’s ingenious but mistaken theory of the tides: according to this theory Earth’s rotatory motion is alternately added to the orbital motion and subtracted from it, with the effect that the seas are set sloshing backwards and forwards. To this simple mechanism, which provided one tide every 24 hours, Galileo had to add further factors, such as the orientation and configuration of seabeds and shores, to make a reasonable approximation to the variety of tidal phenomena actually observed at different places and seasons.

A Discoveries with the Telescope

In 1609 Galileo heard that a telescope had been invented in the Netherlands. In August of that year he constructed a telescope that magnified about 10 times and presented it to the doge of Venice. Its value for naval and maritime operations resulted in the doubling of his salary and the assurance of lifelong tenure as a professor.

By December 1609 Galileo had built a telescope of 20 times magnification, with which he discovered mountains and craters on the Moon. Not only did this contradict the Aristotelian idea that heavenly bodies must be perfectly spherical; it also indicated that a heavenly body could be much more like the Earth than had hitherto been imagined. Galileo also saw that the Milky Way was composed of stars, and he discovered four satellites circling Jupiter. It was therefore undeniable that at least some heavenly bodies move round a centre other than the Earth, a finding that did not prove that Copernicus had been right, but did fit in well with the Copernican system of the universe. Galileo published these findings in March 1610 in a book called The Starry Messenger. He astutely used his new fame to secure an appointment for which he had been angling for some time, that of court mathematician and philosopher at Florence; he was thereby freed from teaching duties and had time for research and writing. By December 1610 he had observed the phases of Venus, which are a natural consequence of the Copernican system, which has Venus circling the Sun within Earth’s own orbit. The Ptolemaic arrangement, by contrast, had Venus moving on an epicycle, a circle whose centre moved around the Earth but was tied to the Earth-Sun line, and it could not reproduce the phases. Ptolemaic astronomers had to concede that Venus orbits the Sun rather than the Earth, while still insisting that the Sun moves around the Earth. Galileo naturally took the discovery to be a strong confirmation of Copernicanism.

Traditionalist professors of philosophy scorned Galileo’s discoveries because Aristotle had held that only perfectly spherical bodies could exist in the heavens and that nothing new could ever appear there. So comets, for instance, had to be assigned to the world of change below the Moon and treated as meteorological phenomena. (It is a curiosity that, in a controversy over the comets of 1618, Galileo, who did as much as anyone to bridge the artificial gap between Earth and the heavens, was nevertheless willing to treat the comets as sublunar.)

Galileo also disagreed with professors at Florence and Pisa about hydrostatics, and he published a book on floating bodies in 1612. Four printed attacks on this book followed, rejecting Galileo’s physics. Aristotelians took shape to be the key to explaining why bodies float, whereas Galileo relied on the relative densities of the floating object and the medium in which it floated. Despite some embarrassment caused by the fact that he did not understand surface tension any more than his opponents, Galileo had the better of the argument, an argument he considered it useless to pursue with adversaries who were ignorant of elementary mathematics. In 1613 he published a work on sunspots (see Sun: Sunspots) and predicted victory for the Copernican theory.

IV CONFLICT WITH THE CHURCH

A Pisan professor, in Galileo’s absence but in the presence of his pupil Castelli, told the Medici (the ruling family of Florence as well as Galileo’s employers) that belief in a moving Earth was contrary to Scripture. Galileo immediately wrote a pamphlet for private circulation, Letter to Castelli, sketching his views on the relation of Scripture and science. In December 1614 a Florentine Dominican denounced “Galileists” from the pulpit, and early in 1615 the Florentine Dominican convent of San Marco sent criticisms of Galileists to the Inquisition in Rome. Galileo enlarged his Letter to Castelli into a Letter to the Grand Duchess Cristina on the correct use of biblical passages in scientific arguments, holding that the interpretation of the Bible should be adapted to increasing knowledge and warning against the danger of treating any scientific opinion as an article of Roman Catholic faith. This remarkable work of amateur theology was not published in Italy in his lifetime and had little influence on the course of events.

Early in 1616 Copernican books were subjected to censorship by the Roman Congregation of the Index of Forbidden Books, after the Jesuit cardinal Robert Bellarmine had instructed Galileo that he must no longer hold or defend the opinion that the Earth moves. Following a long tradition that hypotheses in astronomy were merely instruments or calculating devices, Cardinal Bellarmine had previously advised him to treat this subject only hypothetically and for scientific purposes, without taking Copernican concepts as literally true or attempting to reconcile them with the Bible. The public ruling of 1616 similarly laid down that Catholics could use Copernicanism as a calculating device, but could not say that it was the true system of the universe. Galileo remained silent on the subject for years, working on a method for determining longitude at sea by using his predictions of the motions of Jupiter’s satellites, resuming his earlier studies of falling bodies, and skilfully setting forth his views on scientific reasoning in a book on comets, The Assayer (1623), which is a classic of polemical writing.

A The Trial of Galileo

In 1624 Galileo began a book he wished to call Dialogue on the Tides, in which he discussed the Ptolemaic and Copernican hypotheses in relation to the physics of tides. In 1630 the book was licensed for printing by Roman Catholic censors at Rome, but they altered the title to Dialogue on the Two Chief World Systems. Because of the prevalence of plague in central Italy, it was published at Florence in 1632. Despite the book’s having two official licences, Galileo was summoned to Rome by the Inquisition to stand trial for “grave suspicion of heresy”. Although he had made considerable efforts to conform to the letter of the ruling of 1616, Galileo had clearly written a pro-Copernican book. He had occasionally also slipped up by explicitly treating Copernicanism as “probable”, meaning that, though it was yet unproven, sooner or later it could well be shown to be true. Such a position was incompatible with the ruling of 1616, as was pointed out at his trial: Catholics were allowed to use Copernicanism as a helpful calculating device, provided that they did not treat it as having any truth in it.

Galileo’s legal position was worsened by the presence in his file of a written but unsigned report that in 1616 he had been personally ordered not to discuss Copernicanism either orally or in writing. Cardinal Bellarmine had died, but Galileo produced a certificate signed by the cardinal, which gave no indication that Galileo had been subjected to any greater restriction than applied to any Roman Catholic under the 1616 edict. No signed document contradicting this was ever found. Galileo was compelled in 1633 to abjure and was sentenced to life imprisonment (swiftly commuted to permanent house arrest). The Dialogue was prohibited and the sentence against him was to be read publicly in every university.

V GALILEO’S IMPACT ON THOUGHT

The condemnation of Galileo did have some effect on universities and colleges in those countries where the Catholic Church was able to exercise control of teaching and publication, though the permission to treat Copernicanism as a useful, though false, calculating device meant that heliocentric ideas could always be made familiar to students. The ideas contained in the Dialogue could not be repressed and Galileo’s own scientific reputation remained high, both in Italy and abroad, especially after the publication of his final and greatest work.

This was the Discourses Concerning Two New Sciences, published at Leiden in 1638. It reviews and refines Galileo’s earlier studies of motion and, in general, the principles of mechanics. The book opened a road that was to lead Newton to the law of universal gravitation, which linked the planetary laws discovered by Galileo’s contemporary Johannes Kepler with Galileo’s mathematical physics. Galileo became blind before it was published, and he died at Arcetri, near Florence, on January 8, 1642.

Galileo’s most valuable scientific contribution was his part in transforming physics from a plausible framework erected on casual observations of complex everyday experiences into a method whereby selected experiences were so simplified that their underlying structures or patterns became tractable in geometrical terms and so susceptible to precise measurement (see Experiment). So, for instance, the law of falling bodies disregards the resistance of the medium and concentrates solely on the relationship between distance fallen and time elapsed in a vacuum. If this simplified law proves to be only approximate, then the approach is repeated to find what refinement is needed to account for how an actual body falls—for example, through air.

Galileo abandoned the key Aristotelian ideas according to which rest is a natural state and only motion needs explanation, and got so near to understanding the nature of inertial motion that Newton credited him with the discovery. More widely influential, however, were The Starry Messenger and the Dialogue on the Two Chief World Systems, which opened new vistas in astronomy. He was an outstanding popularizer of his own work and is recognized as a master of Italian prose.

Galileo’s lifelong struggle to free scientific inquiry from restriction by philosophical and theological interference is also remembered as a major contribution to the development of science. Since the full publication of Galileo’s trial documents in the 1870s, entire responsibility for Galileo’s condemnation has customarily been placed on the officials of the Roman Catholic Church. A fuller picture would include the role of the professors of philosophy who first persuaded theologians to link Galileo’s science with heresy, though the responsibility for the ruling of 1616 and for the condemnation of Galileo must remain with the officials of the Church and their advisers.

An investigation into the astronomer’s condemnation was opened in 1979 by Pope John Paul II. A papal commission, set up in 1982, produced several scholarly publications related to the trial. In October 1992 the commission acknowledged the error of the Church’s officials. In a speech accepting the report Pope John Paul, alluding to Galileo’s views on Scripture and science, said that Galileo, “a sincere believer, showed himself to be more perceptive in this regard than the theologians who opposed him”.


Contributed By:
Michael Sharratt

30

Contents - Electromagnetic Radiation

Electromagnetic Radiation
I INTRODUCTION

Electromagnetic Radiation, waves produced by the oscillation or acceleration of an electric charge. Electromagnetic waves have both electric and magnetic components. Electromagnetic radiation can be arranged in a spectrum that extends from waves of extremely high frequency and short wavelength to extremely low frequency and long wavelength. Visible light is only a small part of the electromagnetic spectrum. In order of decreasing frequency, the electromagnetic spectrum consists of gamma rays, hard and soft X-rays, ultraviolet radiation, visible light, infrared radiation, microwaves, and radio waves.

II PROPERTIES

Electromagnetic waves need no material medium for their transmission. Thus, light and radio waves can travel through interplanetary and interstellar space from the Sun and stars to the Earth. Regardless of their frequency and wavelength, electromagnetic waves travel at the same speed in a vacuum. The value of the metre has been defined so that the speed of light is exactly 299,792.458 km (approximately 186,282 mi) per second in a vacuum. All the components of the electromagnetic spectrum also show the typical properties of wave motion, including diffraction and interference. The wavelengths range from billionths of a centimetre to many kilometres. The wavelength and frequency of electromagnetic waves are important in determining their heating effect, visibility, penetration, and other characteristics.
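
Because all electromagnetic waves travel at this same speed c in a vacuum, frequency and wavelength are tied together by c = f × λ. The short Python sketch below simply evaluates that relation for a few illustrative frequencies; the example frequencies are assumptions chosen for the sketch, not values quoted in this article.

# Relation between frequency and wavelength for electromagnetic waves:
# c = frequency * wavelength.  The example frequencies are illustrative.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def wavelength(frequency_hz):
    """Vacuum wavelength in metres for a given frequency in hertz."""
    return C / frequency_hz

for name, f in [
    ("radio (100 MHz)", 1e8),
    ("microwave (10 GHz)", 1e10),
    ("red light", 4e14),
    ("X-ray", 1e19),
]:
    print(f"{name}: wavelength = {wavelength(f):.3e} m")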

III THEORY

The British physicist James Clerk Maxwell laid out the theory of electromagnetism in a series of papers published in the 1860s. He deduced that electromagnetic waves must exist and stated that visible light consisted of such waves.

Physicists had known since the early 19th century that light travels as a transverse wave (a wave in which the vibrations move in a direction perpendicular to the direction of the advancing wave front). They assumed, however, that the wave required some material medium for its transmission, and so postulated an extremely diffuse, unobserved substance, called the ether, to serve as that medium. Maxwell's theory made such an assumption unnecessary, but the ether concept was not abandoned immediately, because it fitted in with the Newtonian concept of an absolute space-time frame for the universe. A famous experiment conducted by the American physicist Albert Abraham Michelson and the American chemist Edward Williams Morley in the late 19th century undermined the ether concept and was important in the development of the theory of relativity. This work led to the realization that the speed of electromagnetic radiation in a vacuum is the same, regardless of the velocity of the source or the observer.

IV QUANTA OF RADIATION

At the beginning of the 20th century, however, physicists found that the wave theory did not account for all the properties of radiation. In 1900 the German physicist Max Planck demonstrated that the emission and absorption of radiation occur in finite units of energy, known as quanta. In 1905, Albert Einstein was able to explain some puzzling experimental results concerning the photoelectric effect by suggesting that electromagnetic radiation can behave like a particle.

Other phenomena that occur in the interaction between radiation and matter can also be explained only by the quantum theory. Thus, modern physicists were forced to recognize that electromagnetic radiation can behave sometimes like a particle and sometimes like a wave. The parallel concept—that matter also exhibits particle-like and wave-like characteristics—was developed in 1924 by the French physicist Louis de Broglie. See Wave-Particle Duality.

31

Article - Hertz, Heinrich Rudolf

Hertz, Heinrich Rudolf

Hertz, Heinrich Rudolf (1857-1894), German physicist, born in Hamburg and educated at the University of Berlin. From 1885 to 1889 he was a Professor of Physics at the technical school in Karlsruhe, and after 1889 a Professor of Physics at the university in Bonn. Hertz clarified and expanded the electromagnetic theory of light which had been proposed by the British physicist James Clerk Maxwell in the 1860s. Hertz proved that electricity can be transmitted in electromagnetic waves, which travel at the speed of light and which possess many other properties of light. His experiments with these waves led to the development of the wireless telegraph and radio. The unit of frequency, one cycle per second, was renamed the hertz; it is commonly abbreviated Hz.

32

Article - Michelson-Morley Experiment

Michelson-Morley Experiment

Michelson-Morley Experiment, experiment attempting to measure the velocity at which the Earth was travelling through the ether. The experiment was conducted in 1887 by the German-born American physicist, Albert Michelson, and the American chemist, Edward Morley. It had the specific intention of finding out more about the ether, the supposed medium through which all electromagnetic waves (including light) were believed to be propagated. Ether was thought to permeate all space, it being inconceivable at that time that a wave motion could travel through a vacuum.

Were the Earth moving through a stationary ether, light travelling in a path parallel to the Earth's direction of motion would take a different time to pass through a given distance than would light travelling the same distance in a path perpendicular to the Earth's motion. Their interferometer was arranged so that a beam of light was divided along two paths at right angles to each other; the rays were then reflected and recombined, producing interference fringes where the two beams met. If the hypothesis of the ether were correct, the two beams of light would interchange their roles as the apparatus was rotated (the one that travelled more rapidly in the first position would travel more slowly in the second position), and a shift of interference fringes would occur. Despite careful calculations to be sure that their experiment was sufficiently sensitive to detect the predicted time differences, and despite checking and rechecking the measurements, Michelson and Morley could find no difference between the speeds of light in the two directions, and therefore concluded that the ether did not exist.
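
To get a feel for the size of the effect being sought, the classical ether hypothesis predicts that rotating the apparatus through 90 degrees should shift the fringes by roughly 2Lv²/(λc²). The Python sketch below evaluates this with illustrative values (an effective arm length of about 11 m, obtained by multiple reflections, the Earth's orbital speed of about 30 km/s, and yellow light); these figures are assumptions for the illustration, not quantities stated in the article.

# Fringe shift the stationary-ether hypothesis predicts for an
# interferometer of this kind.  All numerical values are illustrative
# assumptions, not figures from the article above.
c = 3.0e8      # speed of light, m/s
v = 3.0e4      # Earth's orbital speed, m/s (about 30 km/s)
L = 11.0       # effective optical path length of each arm, m
lam = 5.9e-7   # wavelength of yellow light, m

# Rotating the apparatus through 90 degrees swaps the roles of the arms,
# so the classical prediction for the fringe shift is about 2*L*(v/c)^2 / lam.
shift = 2 * L * (v / c) ** 2 / lam
print(f"Predicted shift: about {shift:.2f} of a fringe")
# No shift of anything like this size was observed.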

The result of this experiment created great confusion in physicists' minds, as it made the nature of light even more difficult to understand. It did, however, create the intellectual climate in which it was possible for Einstein to put forward his theory of relativity, and its null result is precisely what that theory requires.

33

Article - Morley, Edward Williams

Morley, Edward Williams

Morley, Edward Williams (1838-1923), American chemist and physicist, who is best known for his collaboration with A. A. Michelson in their famous optical interferometer experiment, which contributed to the realization that ether, the hypothetical medium supposed to carry light waves, did not exist. Their “ether-drift” experiment was an important step towards the eventual formulation of the special theory of relativity by Albert Einstein. But this work began about 1887, towards the end of Morley’s career. Throughout his working life he had a passionate interest in precise measurements. He studied for the Congregational ministry and, while a minister at the Congregational Church in Twinsburg, Ohio, taught at Western Reserve College, later part of Western Reserve University, where he was Professor of Chemistry and Natural History until his retirement in 1906. From 1873 until 1888 he also held a second position: the Professorship of Chemistry and Toxicology at the Cleveland Medical School. Morley analysed the oxygen content of the atmosphere to within 0.0025 per cent, and measured the atomic weight of oxygen relative to hydrogen as 15.879, with an uncertainty of the order of 1 part in 10,000. He did this to test Prout’s hypothesis of 1815, according to which the atomic weights of all elements are whole-number multiples of the atomic weight of hydrogen. His accurate determinations were in fact weighted averages over all the stable isotopes of the elements concerned, but this could not be understood until the discovery of isotopes, after his retirement. See Atom: Atomic Weight.

34

Article - Planck's Constant

Planck's Constant

Planck's Constant, fundamental physical constant, symbol h. It was first discovered (1900) by the German physicist Max Planck. Until that time, all forms of electromagnetic radiation, including light, had been thought to behave only as waves. Planck noticed certain deviations from the wave theory on the part of radiations emitted by so-called blackbodies, which are perfect absorbers and emitters of radiation. He came to the conclusion that electromagnetic radiation is emitted in discrete units of energy, called quanta. This conclusion was the first enunciation of the quantum theory. According to Planck, the energy of a quantum of electromagnetic radiation is equal to the frequency of the radiation multiplied by a constant. In mathematical terms, this can be expressed:

E = h f,
where E is the energy of a single quantum, h is Planck's constant, the value of which is presently accepted as 6.626 × 10⁻³⁴ joule-seconds, and f is the frequency of the radiation. Using this equation, the energies associated with quanta of different frequencies of electromagnetic radiation can be calculated, for example:

Typical energy of quantum of X-rays = h·f ≈ 6.6 × 10⁻³⁴ × 10¹⁹ = 6.6 × 10⁻¹⁵ J
Typical energy of quantum of ultraviolet = h·f ≈ 6.6 × 10⁻³⁴ × 10¹⁵ = 6.6 × 10⁻¹⁹ J
Typical energy of quantum of infrared = h·f ≈ 6.6 × 10⁻³⁴ × 10¹³ = 6.6 × 10⁻²¹ J
It is interesting to note that the energy associated with a quantum of X-rays is a million times greater than that associated with a quantum of infrared radiation.
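
The three figures above follow directly from E = hf; the short Python sketch below simply re-does that arithmetic.

# Reproduce the order-of-magnitude photon energies quoted above using E = h * f.
h = 6.626e-34  # Planck's constant, joule-seconds

for name, f in [("X-rays", 1e19), ("ultraviolet", 1e15), ("infrared", 1e13)]:
    print(f"{name:11s} f = {f:.0e} Hz  ->  E = {h * f:.1e} J")

# The X-ray quantum (about 6.6e-15 J) carries roughly a million times
# the energy of the infrared quantum (about 6.6e-21 J), as noted above.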

Planck's original theory has since had abundant experimental verification, and the growth of the quantum theory has brought about a fundamental change in the physicist's concept of light and matter, both of which are now thought to combine the properties of waves and particles. Thus, Planck's constant has become as important to the investigation of particles of matter as to quanta of light, now called photons. The first successful measurement (1916) of Planck's constant was made by the American physicist Robert Millikan.

35

Article - Quantum Electrodynamics

Quantum Electrodynamics

Quantum Electrodynamics or QED, a set of equations that accounts theoretically for the interactions of electromagnetic radiation with atoms and their electrons. QED appears to underlie the chemical and readily observable behaviour of matter and to encompass classical electromagnetic theory. The equations, which explain electromagnetism in terms of the quantum nature of the photon, the carrier of the force, were first formulated by Paul Dirac, Werner Heisenberg, and Wolfgang Pauli in the 1920s and 1930s. After World War II the theory was perfected by Julian Schwinger, Shin'ichiro Tomonaga, and Richard Feynman. See Physics: Modern Physics; Quantum Theory: Further Developments.

36

Article - Huygens' Principle

Huygens' Principle

Huygens' Principle, also known as Huygens’ construction (named after the Dutch scientist, Christiaan Huygens), principle that every point on an initial wavefront may be considered as the source of small, secondary wavelets that spread out in all directions from their centres, with the same frequency as the parent wavefront. A new wavefront can be defined, encompassing the wavelets. Since the light progresses at right angles to this wavefront, changes in the direction of the light can be worked out using Huygens’ principle.
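
One familiar direction change that can be worked out this way is refraction: the secondary wavelets travel more slowly in the denser medium, so the envelope that forms the new wavefront bends towards the normal, and the geometry gives sin θ1 / sin θ2 = v1 / v2 (Snell's law expressed in terms of wave speeds). The Python sketch below evaluates this for light passing from air into water; the wave speeds are illustrative assumptions.

# Direction change of a plane wavefront crossing from air into water,
# as given by the wavelet geometry of Huygens' construction:
#     sin(theta1) / sin(theta2) = v1 / v2
# The wave speeds below are illustrative assumptions.
import math

v_air = 3.00e8    # wave speed in air, m/s (very close to the vacuum value)
v_water = 2.25e8  # wave speed in water, m/s (roughly 25 per cent slower)

theta1 = math.radians(45.0)  # angle of incidence, measured from the normal
theta2 = math.asin(math.sin(theta1) * v_water / v_air)
print(f"Incident at 45.0 deg, refracted at {math.degrees(theta2):.1f} deg")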

Although Huygens' principle is mainly concerned with understanding simple wavefronts, refracted wavefronts, and diffracted wavefronts, several other wave phenomena, such as reflection and interference, can also be explained using this construction.

Prior to Huygens, light had been considered to consist of minute particles called corpuscles, and Huygens' great contemporary, Isaac Newton, took this view. It was the effective application of Huygens' principle that eventually convinced physicists that light could be considered to be a wave, although later discoveries, such as the photoelectric effect, made it clear that even the wave theory could not explain all of light's properties. These discoveries led to the idea of wave-particle duality.

37

Article - Diffraction

Diffraction

Diffraction, in physics, term used to describe the interaction between waves and solid objects in which a wave of any type spreads out after passing the edge of a solid object or after passing through a narrow aperture, instead of continuing to travel in a straight line. The degree of diffraction depends on the relationship between the size of the aperture or object and the wavelength of the wave: diffraction is most pronounced when the two are comparable in size.

The diffraction of light at a single aperture leads to an interference pattern (see Interference) in which there is a central bright area. On either side of this there is a dark band, with successive light and dark bands following it as the distance from the centre of the pattern increases. The width of the central light band is twice that of the other light bands. The bands become progressively less intense as the distance from the centre of the pattern increases.
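
The pattern just described follows from the standard single-slit result I ∝ (sin β / β)², where β = π a sin θ / λ, a is the slit width, and λ the wavelength; the dark bands fall where a sin θ is a whole number of wavelengths. The Python sketch below locates the first few dark bands for an illustrative slit width and wavelength (both assumptions) and confirms that the central bright band is twice as wide as the others.

# Single-slit diffraction: dark bands occur where a * sin(theta) = m * lam.
# Slit width and wavelength are illustrative assumptions.
import math

lam = 5.0e-7  # wavelength, m (green light)
a = 1.0e-4    # slit width, m (0.1 mm)

minima = [math.degrees(math.asin(m * lam / a)) for m in (1, 2, 3)]
print("First three dark bands at (degrees):",
      ", ".join(f"{t:.3f}" for t in minima))

central = 2 * minima[0]        # central bright band spans -theta1 .. +theta1
side = minima[1] - minima[0]   # each side band spans one gap between minima
print(f"Central band {central:.3f} deg wide; side bands {side:.3f} deg wide")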

A diffraction grating consists of a set of many evenly spaced slits, in which the slit separation is very small. Each slit in the grating diffracts light, and the diffracted waves then interfere constructively in certain directions only. This means that a beam of monochromatic light passing through the grating is split into a series of very narrow maxima. In general, the nth maximum will occur at an angle θn such that:

sin θn = nλ / d

where λ is the wavelength of the light and d is the distance between adjacent slits. The spacing of the slits in a diffraction grating is usually expressed in terms of the number of slits per metre. For a grating with N slits per metre, the slit spacing is 1/N.
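
As a quick check of the grating formula, the Python sketch below lists the maxima for an illustrative grating and wavelength (600 lines per millimetre and sodium yellow light at 589 nm, both assumptions).

# Maxima of a diffraction grating from sin(theta_n) = n * lam / d.
# Grating pitch and wavelength are illustrative assumptions.
import math

N = 600_000    # slits per metre (600 lines per millimetre)
d = 1 / N      # slit spacing, m
lam = 5.89e-7  # sodium yellow light, m

n = 1
while n * lam / d <= 1:  # maxima exist only while sin(theta) <= 1
    theta = math.degrees(math.asin(n * lam / d))
    print(f"order {n}: {theta:.1f} degrees")
    n += 1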

The spreading out and blurring of light by diffraction limits the useful magnification of a microscope or telescope, so that details smaller than about a two-thousandth of a millimetre cannot be seen in most optical microscopes. Only a near-field scanning optical microscope can evade the diffraction limit and resolve details slightly smaller than the wavelength of light (see Microscope: Special-Purpose Optical Microscopes).

The spacing of atoms in crystal lattices is of the same order as the wavelength of X-radiation, which means that X-rays passing through a crystal lattice will be diffracted, producing an interference pattern. (Effectively the crystal lattice acts as a diffraction grating.) X-ray diffraction is an important tool in exploring the structure of crystalline substances, and has been used to determine the structure of many important biological macromolecules (very large molecules), including deoxyribonucleic acid (DNA) and enzymes.
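
The condition for these strong reflections is usually written as Bragg's law, nλ = 2d sin θ, where d is the spacing of the lattice planes and θ the glancing angle. The lattice spacing and X-ray wavelength in the sketch below are typical orders of magnitude chosen for the illustration, not values from the article.

# Bragg's law for X-ray diffraction from a crystal: n * lam = 2 * d * sin(theta).
# Lattice spacing and wavelength are illustrative assumptions.
import math

d = 2.8e-10    # spacing of lattice planes, m (a few tenths of a nanometre)
lam = 1.5e-10  # X-ray wavelength, m

for n in (1, 2, 3):
    s = n * lam / (2 * d)
    if s <= 1:
        print(f"order {n}: strong reflection at {math.degrees(math.asin(s)):.1f} degrees")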

38

Article - Shadow

Shadow

Shadow, area of a surface that is darkened because an object lies between a light source and the surface. When the light source is very small (a point source), the shadow has sharp, clearly defined edges, but when the light source is extended there is an area, lying between the darkest shadow (the umbra) and the fully illuminated part of the surface, where some light can fall, and which is therefore only in partial shadow (the penumbra).

The shadows of the Earth and Moon are involved in eclipses. We see the Moon only because sunlight is reflected from it, so when the Earth passes exactly between the Moon and Sun, Earth's shadow falls on the Moon, which is therefore obscured. This is an eclipse of the Moon, and is said to be “total” if the whole Moon is obscured, and “partial” when only part of it is darkened. An eclipse of the Sun is caused by the Moon passing exactly between the Sun and Earth, so that the Moon's shadow falls on part of the Earth, where the eclipse is seen.
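
The umbra and penumbra follow from simple similar-triangle geometry. As an illustration, the Python sketch below estimates how far the Earth's umbra stretches behind it; the solar and terrestrial radii and the Sun-Earth distance are standard round figures used here as assumptions.

# Length of the Earth's umbra estimated by similar triangles.
# The astronomical figures are standard round values, used as assumptions.
R_sun = 6.96e8     # radius of the Sun, m
R_earth = 6.371e6  # radius of the Earth, m
D = 1.496e11       # mean Sun-Earth distance, m

# The umbra is a cone whose apex lies where the Earth just covers the Sun;
# similar triangles give its length behind the Earth as D * r / (R - r).
umbra = D * R_earth / (R_sun - R_earth)
print(f"The umbra extends about {umbra / 1e9:.2f} million km behind the Earth,")
print("well beyond the Moon's orbit, which is why total lunar eclipses occur.")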

39

Article - Polarization of Light

Polarization of Light

Polarization of Light, the plane in which the oscillations of a light wave take place. Light is a transverse wave consisting of oscillating electric and magnetic fields at right angles to each other and to the direction of travel. The fact that the fields are at right angles to the direction of travel is what makes the wave transverse. A sound wave, by contrast, is longitudinal, meaning that the vibrations are backwards and forwards, parallel to the direction of motion. Although in a light wave the electric and magnetic fields are always at right angles to each other, in combination they can be at any orientation. Either the electric or the magnetic field could be selected to define the polarization; by convention, the electric field is used.

Every type of transverse wave has a polarization. The waves that travel along a stretched rope when it is whipped up and down are transverse. If instead the rope were whipped from side to side, the waves would also be transverse. In the first case the wave is vertically polarized (the rope moves up and down as the wave moves along); in the second case the wave is horizontally polarized (the rope moves from side to side as the wave moves along). These are not the only possibilities. There are many different directions in which the rope could move, and so many different polarizations. One way to specify the polarization is in terms of the angle between the direction of the rope’s movement and the horizontal. See Wave Motion.

Light emitted from almost all natural and artificial sources consists of individual bursts of electromagnetic radiation. Each burst has a specific polarization. However, because the bursts are produced independently of each other their polarizations are unrelated. As a whole the light is unpolarized.

Certain materials are polarizers: they will let light of only one polarization pass through. The material called Polaroid contains long, thin molecules lined up parallel to each other. When light polarized parallel to the molecules arrives, the electric field drives electrons up and down the molecules, and energy is absorbed from the wave. If the polarization is perpendicular to the molecules, the electrons cannot be driven far and little energy is absorbed; light of that polarization passes through the material. If the polarization of the light is at some intermediate angle, then only part of the wave will pass. The first commercial Polaroid sheet was invented in 1928 by a 19-year-old American college student, Edwin H. Land, who went on to found the Polaroid company.
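
The fraction of the wave that passes at an intermediate angle is given by Malus's law, I = I₀ cos²θ, where θ is the angle between the light's polarization and the direction the polarizer transmits. The Python sketch below simply evaluates this for a few angles.

# Transmitted intensity through an ideal polarizer (Malus's law: I = I0 * cos^2 theta).
import math

I0 = 1.0  # incident intensity, arbitrary units

for theta in (0, 30, 45, 60, 90):
    I = I0 * math.cos(math.radians(theta)) ** 2
    print(f"{theta:3d} degrees: {I:.2f} of the incident intensity is transmitted")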

Most of the light reflected from a surface, such as a body of water, is polarized parallel to the surface. Light waves of other polarizations are reflected much less strongly. In Polaroid sunglasses, a Polaroid material in the lenses lets only vertically polarized light through—so glare from the surface of water, for example, is eliminated.

Some crystals, such as calcite, tourmaline, and quartz, exhibit the property of double refraction, or birefringence. A light ray entering the crystal is split into two rays, refracted (bent) at different angles. Looking at an object through such a crystal, one sees two images in slightly different places. This effect occurs because the speed of light through the crystal depends on the polarization of the light. Vertically polarized light has a different speed to horizontally polarized light when in the crystal, so it is refracted by a different angle and produces a separate image. When the two rays emerge from the doubly refracting crystal, they are refracted back to their original direction, forming one combined ray. If the crystal is a thin plate, the emerging ray may have circular or elliptical polarization—that is, the direction of the electric field rotates around the direction of travel of the ray, possibly changing in strength as it does so.

40

Article - Ether (physics and astronomy)

Ether (physics and astronomy)

Ether (physics and astronomy), hypothetical substance believed by 19th-century physicists to be universal and to be the necessary medium for the propagation of electromagnetic radiation. The ether theory was abandoned after 1905, when Albert Einstein's special theory of relativity gained acceptance.

41

Article - Young's Slits

Young's Slits

Young's Slits, experimental arrangement of slits enabling the interference of light to be observed. The phenomenon was first described by British physicist, Thomas Young, in 1801. Interference occurs when light from two coherent sources falls on a screen. Coherent means, in practice, that the light is from the same source originally, but has been allowed to pass through two slits, so that it appears to be coming from two different sources. The experiment does not work with two separate sources, as the light does not then start off with its waves perfectly in phase (in step) with one another.

Call the two slits S1 and S2, let O be the point on the screen equidistant from both, and let P be some other point on the screen. Light passing through the two slits and arriving at O has travelled exactly the same distance, and so the two waves, having started in phase, are still in phase, and therefore reinforce one another, producing a bright "fringe". However, light arriving at P will have travelled further from S1 than from S2, and will therefore be out of phase to some extent, depending on the path difference (the extra distance travelled from S1). If the path difference is a whole wavelength, the waves will still be in step, and again reinforce one another. However, if the path difference is exactly half a wavelength, the waves will be out of phase, and the crest of one will arrive at P at the same time as the trough of the other. Destructive interference will occur, so no light energy arrives at P, and a dark fringe appears. This effect is repeated on both sides of O, and for a considerable distance on either side, giving a series of dark and bright fringes.

The wave nature of light is demonstrated in a variation of Thomas Young’s two-slit experiment. Pure yellow sodium light passes through a single slit, spreads out, and then falls on a screen with two narrow slits. The two beams produced spread out and overlap as they fall on a second screen, producing a pattern of fringes. Bright fringes are seen where the waves reinforce each other: this occurs at places where wave crests of one beam always arrive at the same time as those of the other beam. Dark fringes are seen in places where the waves cancel each other out: this occurs where the wave crests in one beam always arrive at the same time as the wave troughs of the other beam.
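
For slits a distance s apart and a screen a distance D away, the small-angle geometry of the path difference gives bright fringes spaced roughly λD/s apart. The figures below (sodium light, a 0.5 mm slit separation, and a screen 1 m away) are assumptions chosen for the illustration.

# Fringe spacing in a Young's-slits arrangement: approximately lam * D / s.
# The experimental values are illustrative assumptions.
lam = 5.89e-7  # sodium yellow light, m
s = 5.0e-4     # slit separation, m (0.5 mm)
D = 1.0        # slit-to-screen distance, m

spacing = lam * D / s
print(f"Bright fringes roughly every {spacing * 1e3:.2f} mm on the screen")
for m in range(1, 4):
    print(f"bright fringe {m}: about {m * spacing * 1e3:.2f} mm from the centre")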

42

Article - Momentum

Momentum

Momentum, also linear momentum, in physics, fundamental quantity characterizing the motion of any object (see Mechanics). It is the product of the mass of a moving particle and its linear velocity. Momentum is a vector quantity, which means that it has both magnitude and direction. The total momentum of a system made up of a collection of objects is the vector sum of all the individual objects' momenta. For an isolated system, total momentum remains unchanged over time; this is called conservation of momentum. For example, when a tennis player hits a ball, the momentum of the racquet just before it strikes the ball plus the momentum of the ball at that moment is equal to the momentum of the racquet after it strikes the ball plus the momentum of the struck ball. As another example, imagine a swimmer jumping off a stationary raft that is floating on water. Before the jump, the raft and the swimmer are not moving, so the total momentum is zero. Upon jumping, the swimmer acquires forward momentum, and at the same time the raft moves in the other direction with an equal and opposite momentum; the total momentum of the swimmer and the raft remains at zero.
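
A minimal numerical version of the swimmer-and-raft example, with the masses and the swimmer's speed invented purely for the illustration: the total momentum is zero before the jump, so it must still be zero afterwards, which fixes the raft's recoil velocity.

# Conservation of momentum for the swimmer-and-raft example above.
# The masses and the swimmer's speed are invented for the illustration.
m_swimmer, v_swimmer = 60.0, 2.0  # kg, m/s (taken as the positive direction)
m_raft = 90.0                     # kg

# Zero total momentum before the jump implies zero afterwards:
#   m_swimmer * v_swimmer + m_raft * v_raft = 0
v_raft = -m_swimmer * v_swimmer / m_raft
print(f"Raft recoils at {v_raft:.2f} m/s (opposite to the swimmer)")
print(f"Total momentum afterwards: {m_swimmer * v_swimmer + m_raft * v_raft:.1f} kg m/s")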

Conservation of momentum is a universal law of present-day physics; it holds true even in extreme situations where classical theories of physics break down. In particular, conservation of momentum is valid in quantum theory, which describes atomic and nuclear phenomena, and in relativity, which must be used when systems move with speeds that approach the speed of light.

According to Newton's second law of motion—named after the English astronomer, mathematician, and physicist Sir Isaac Newton—the force acting on a body in motion must be equal to its rate of change of momentum. Another way of stating Newton's second law is that the impulse—that is, the product of the force multiplied by the time over which it acts on a body—equals the change of momentum of the body.

43


Sources

1.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

2.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

3.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

4.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

5.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

6.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

7.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

8.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

9.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

10.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

11.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

12.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

13.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

14.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

15.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

16.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

17.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

18.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

19.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

20.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

21.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

22.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

23.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

24.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

25.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

26.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

27.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

28.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

29.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

30.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

31.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

32.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

33.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

34.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

35.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

36.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

37.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

38.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

39.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

40.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

41.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

42.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

43.   Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.

 




Bibliography


Microsoft® Encarta® Premium Suite 2003. © 1993-2002 Microsoft Corporation. All rights reserved.