
Hard Disk Data Encoding and Decoding

Digital information is a stream of ones and zeros. Hard disks store information in the form of magnetic pulses. In order for the PC's data to be stored on the hard disk, therefore, it must be converted to magnetic information. When it is read from the disk, it must be converted back to digital information. This work is done by the integrated controller built into the hard drive, in combination with sense and amplification circuits that are used to interpret the weak signals read from the platters themselves.

Magnetic information on the disk consists of a stream of (very, very small) magnetic fields. As you know, a magnet has two poles, north and south, and magnetic energy (called flux) flows from the north pole to the south pole. Information is stored on the hard disk by encoding information into a series of magnetic fields. This is done by placing the magnetic fields in one of two polarities: either so the north pole arrives before the south pole as the disk spins (N-S), or so the south pole arrives before the north (S-N).


Technical Requirements for Encoding and Decoding

You might think that since there are two magnetic polarities, N-S and S-N, they could be used nicely to represent a "one" or a "zero" to allow easy encoding of digital information. That would be nice, but it doesn't work that way. The main reason is that the read/write heads are designed not to measure the polarity of magnetic fields, but rather flux reversals, which occur when the head moves from an area that has north-south polarity to one that has south-north polarity, or vice-versa. What this means is that the encoding of data must be done based on these reversals, and not the contents of the individual fields.

There is also another consideration in the encoding of data, and that is the necessity of using some sort of method of indicating where one bit ends and another begins. Even if we could use one polarity to represent a "one" and another to represent a "zero", what would happen if we needed to encode on the disk a stream of 1,000 consecutive zeros? It would be very difficult to tell where, say, bit 787 ended and bit 788 began. Also, since adjacent magnetic fields of the same polarity combine to form a larger one, this would, in layman's terms, create a mess.

To keep track of which bit is where, some sort of clock synchronization must be added to the encoding sequence. The flux reversals are considered to be written at a clock frequency, and the higher the frequency of reversals, the more data that can be stored in a given space.

Different encoding methods have been developed to allow data to be stored effectively on hard disks (and other media). These have been refined over time to allow for more efficient, closely packed storage. It's important to understand what density means in this context. Hardware technology strives to allow more bits to be stored in the same area by allowing more flux reversals per linear inch of track. Encoding methods strive to allow more bits to be stored by allowing more bits to be encoded (on average) per flux reversal.


Modified Frequency Modulation (MFM)

The first encoding system for recording digital data on magnetic media was frequency modulation, abbreviated FM. (This has nothing whatever to do with FM radio, of course, except for a similarity in the concept of how the data is encoded.) This is a simple scheme, where a one is recorded as two consecutive flux reversals, and a zero is recorded as a flux reversal followed by no flux reversal. This can also be thought of as follows: a one is a reversal to represent the clock followed by a reversal to represent the "one", and a zero is a reversal to represent the clock followed by "no reversal" to represent the "zero".

The name "frequency modulation" can be seen in the patterns that are created if you look at the encoding pattern of a stream of ones or zeros. If we designate "R" to represent a flux reversal and "N" to represent no flux reversal, a byte of zeroes would be encoded as "RNRNRNRNRNRNRNRN", while a byte of all ones would be "RRRRRRRRRRRRRRRR". As you can see, the ones have double the frequency of reversals compared to the zeros; hence frequency modulation.
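To make the scheme concrete, here is a small sketch (in Python, purely as illustration--nothing like this runs on a real drive) that encodes a bit string the way FM does, using the same "R"/"N" notation as above:

```python
def fm_encode(bits):
    """Encode a bit string using frequency modulation (FM).

    Each data bit occupies two flux-reversal positions: a clock
    reversal ('R'), then 'R' for a one or 'N' (no reversal) for a zero.
    """
    return "".join("RR" if b == "1" else "RN" for b in bits)

print(fm_encode("00000000"))  # RNRNRNRNRNRNRNRN -- a byte of zeros
print(fm_encode("11111111"))  # RRRRRRRRRRRRRRRR -- a byte of ones
```

Note that every bit costs two reversal positions regardless of its value, which is exactly the 100% overhead discussed below.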

The problem with FM is that it is wasteful: each bit requires two flux reversal positions, resulting in a theoretical overhead of 100% compared to the ideal case (one reversal per bit). FM is obsolete and is no longer used. In fact, it was obsolete before the PC was really invented; it was originally used in floppy disks of older machines.

The replacement for FM is modified frequency modulation, or MFM. MFM improves on FM by reducing the number of flux reversals inserted just for the clock. Instead of inserting a clock reversal before each bit, one is inserted only between consecutive zeroes. This means far fewer reversals are needed on average per bit. This allows the clock frequency to be doubled, allowing for approximately double the storage capacity of FM.
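The MFM rules can be sketched the same way (again, just an illustrative model): a one is always written as a data reversal with no clock reversal; a zero that follows another zero gets a clock reversal; a zero that follows a one gets no reversal at all.

```python
def mfm_encode(bits, prev="0"):
    """Encode a bit string using modified frequency modulation (MFM).

    `prev` is the last bit written before this group (assume '0' at the
    start of a fresh stream).
    """
    out = []
    for b in bits:
        if b == "1":
            out.append("NR")   # data reversal, no clock reversal
        elif prev == "0":
            out.append("RN")   # clock reversal between consecutive zeros
        else:
            out.append("NN")   # zero after a one: no reversal at all
        prev = b
    return "".join(out)

print(mfm_encode("11111111"))  # NRNRNRNRNRNRNRNR
print(mfm_encode("00000000"))  # RNRNRNRNRNRNRNRN
```

A byte of ones now needs only 8 reversals instead of FM's 16, which is why the clock rate (and hence the density) can be doubled.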

MFM encoding was used on the earliest hard disks, and also on floppy disks. In fact, MFM is still the standard that is used for floppy disks today. For hard disks it was replaced by the more efficient RLL and its variants. Presumably this did not happen for floppy disks because the need for more efficiency was not nearly so great, compared to the need for backward compatibility.


Run Length Limited (RLL)

An improvement on MFM--the encoding technique used in earlier hard disks and still used on all floppies--is run length limited, or RLL. This is a more sophisticated coding technique, or more correctly stated, "family" of techniques. I say that RLL is a family of techniques because there are two parameters that define how RLL works, and therefore, there are several different variations. (Of course, you don't need to know which one your disk is using, since this is all internal to the drive anyway.)

RLL works by looking at groups of bits instead of encoding one bit at a time. The idea is to mix clock and data flux reversals to allow for even denser packing of encoded data, to improve efficiency. The two parameters that define RLL are the run length and the run limit (and hence the name). The run length is the minimum spacing between flux reversals, and the run limit is the maximum spacing between them. As mentioned before, the amount of time between reversals cannot be too large or the read head can become out of sync and lose track of which bit is where.

The type of RLL used is expressed as "X,Y RLL" where X is the run length and Y is the run limit. The most commonly used type of RLL is 1,7 RLL. Describing how RLL encodes the data would be too involved to include here because it uses specific patterns of reversals to encode patterns of bits, and I've probably gotten more detailed here than I wanted to anyway. One or another variant of RLL is now used on most hard disk drives.
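While the encoding tables themselves are beyond our scope, the constraint the two parameters impose is easy to state and check. Here is a small sketch (illustrative only) that verifies a flux pattern obeys the spacing rules as described above--at least X and at most Y non-reversal positions between any two reversals:

```python
def check_rll(pattern, run_length, run_limit):
    """Check that an 'R'/'N' flux pattern obeys the RLL constraint:
    between any two reversals there are at least `run_length` and at
    most `run_limit` non-reversal ('N') positions."""
    positions = [i for i, c in enumerate(pattern) if c == "R"]
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return all(run_length <= g <= run_limit for g in gaps)

print(check_rll("RNRNNNR", 1, 7))      # True: gaps of 1 and 3 are in range
print(check_rll("RRN", 1, 7))          # False: adjacent reversals (gap 0)
print(check_rll("RNNNNNNNNR", 1, 7))   # False: gap of 8 exceeds the limit
```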

 

Hard Disk Tracks, Cylinders and Sectors

All information stored on a hard disk is recorded in tracks, which are concentric circles placed on the surface of each platter, much like the annual rings of a tree. The tracks are numbered, starting from zero, starting at the outside of the platter and increasing as you go in. A modern hard disk has several thousand tracks on each platter.

Data is accessed by moving the heads from the inner to the outer part of the disk, driven by the head actuator. This organization of data allows for easy access to any part of the disk, which is why disks are called random access storage devices.

Each track can hold many thousands of bytes of data. It would be wasteful to make a track the smallest unit of storage on the disk, since this would mean small files wasted a large amount of space. Therefore, each track is broken into smaller units called sectors. Each sector holds 512 bytes of user data, plus a few additional bytes used for internal drive control and for error detection and correction.


The Difference Between Tracks and Cylinders

A hard disk is usually made up of multiple platters, each of which uses two heads to record and read data (one for the top of the platter and one for the bottom). The heads that access the platters are locked together on an assembly of head arms. This means that all the heads move in and out together, so each head is always physically located at the same track number. It is not possible to have one head at track 0 and another at track 1,000.

Because of this arrangement, often the track location of the heads is not referred to as a track number but rather as a cylinder number. A cylinder is basically the set of all tracks that all the heads are currently located at. So if a disk had four platters, it would (normally) have eight heads, and cylinder number 720 (for example) would be made up of the set of eight tracks, one per platter surface, at track number 720. The name comes from the fact that if you mentally visualize these tracks, they (sort of) form a cylinder because they are equal-sized circles stacked one on top of the other in space.

For most purposes, there really isn't much difference between tracks and cylinders--it's basically a different way of thinking about the same thing. The addressing of individual sectors of the disk is normally done by referring to cylinders, heads and sectors (CHS). Since a cylinder is the collection of track numbers located at all of the heads of the disk, the specification "cylinder number plus head number" is identical to "track number plus head number".
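CHS addressing maps naturally onto a linear sector count. A quick sketch (the 8-head, 63-sector geometry here is just an example, not any particular drive):

```python
def chs_to_sector_index(cyl, head, sector, heads_per_cyl, sectors_per_track):
    """Convert a CHS address to a zero-based linear sector index.

    Sectors are conventionally numbered from 1 within a track, so we
    subtract 1; cylinders and heads are numbered from 0.
    """
    return (cyl * heads_per_cyl + head) * sectors_per_track + (sector - 1)

# Example geometry: 8 heads, 63 sectors per track
print(chs_to_sector_index(0, 0, 1, 8, 63))     # 0 -- the very first sector
print(chs_to_sector_index(720, 0, 1, 8, 63))   # 362880 -- start of cylinder 720
```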


Track Density and Areal Density

Areal density refers to the number of bits that can be packed into each unit of area on the surface of the disk. More is better--both in terms of capacity and performance. The track density of the disk is one factor that influences areal density. The chief obstacle to increasing track density is making sure that the tracks don't get close enough together that reading one track causes the heads to pick up data from adjacent tracks.


Zoned Bit Recording

One way that capacity and speed have been improved on hard disks over time is by improving the utilization of the larger, outer tracks of the disk. The first hard disks were rather primitive affairs and their controllers couldn't handle complicated arrangements that changed between tracks. As a result, every track had the same number of sectors. The standard for the first hard disks was 17 sectors per track.

Of course, the tracks are concentric circles, and the ones on the outside of the disk are much larger than the ones on the inside--typically double the circumference or more. Since there is a constraint on how tightly the inner circles can be packed with bits, they were packed as tightly as was practically possible given the state of technology, and then the outer circles were set to use the same number of sectors by reducing their bit density. This means that the outer tracks were greatly underutilized, because in theory they could hold many more sectors given the same linear bit density.

To eliminate this wasted space, modern hard disks employ a technique called zoned bit recording (ZBR), also sometimes called multiple zone recording or even just zone recording. With this technique, tracks are grouped into zones based on their physical location on the disk, and each zone is assigned a number of sectors per track. As you move from the innermost part of the disk to the outer edge, you move through different zones, each containing more sectors per track than the one before. This allows for more efficient use of the larger tracks on the outside of the disk.

One interesting side effect of this design is that the raw data transfer rate of the disk when reading the outside cylinders is much higher than reading the inside ones. Sometimes people benchmark their disks when new, and then many months later, and are surprised to find that the disk is getting slower! In fact, the disk most likely has not changed at all, but the second benchmark may have been run on tracks closer to the middle of the disk. (Fragmentation of the file system can have an impact as well in some cases.)

As an example, the table below shows the zones used by a 3.8 GB Quantum Fireball TM hard disk, which has a total of 6,810 user data tracks on each platter surface. Also included is the raw data transfer rate for each zone; notice how it decreases as you move from the outer edge of the disk (zone 0) to the hub of the disk (zone 14)--the data transfer rate at the edge is almost double what it is in the middle:

Zone    Tracks in Zone    Sectors Per Track    Data Transfer Rate (Mbits/s)

  0          454                 232                      92.9
  1          454                 229                      91.7
  2          454                 225                      90.4
  3          454                 225                      89.2
  4          454                 214                      85.8
  5          454                 205                      82.1
  6          454                 195                      77.9
  7          454                 185                      74.4
  8          454                 180                      71.4
  9          454                 170                      68.2
 10          454                 162                      65.2
 11          454                 153                      61.7
 12          454                 142                      57.4
 13          454                 135                      53.7
 14          454                 122                      49.5

(From Quantum Fireball TM Product Manual, 1996 Quantum Corporation.)

A couple of additional thoughts on this data. First, having the same number of tracks per zone is not a requirement; that is just how Quantum set up this disk family. Second, notice how much larger the sectors-per-track numbers are, compared to the 17 of the earliest disks! Modern drives can pack a lot of storage into a track.
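In fact, the table above is enough to estimate the drive's capacity. A quick sketch (the figures come straight from the table; the six-surface count is my inference from the stated 3.8 GB total, not from the manual):

```python
# Sectors per track for zones 0..14, 454 tracks per zone (from the table)
SECTORS_PER_TRACK = [232, 229, 225, 225, 214, 205, 195, 185,
                     180, 170, 162, 153, 142, 135, 122]
TRACKS_PER_ZONE = 454
BYTES_PER_SECTOR = 512

sectors_per_surface = TRACKS_PER_ZONE * sum(SECTORS_PER_TRACK)
bytes_per_surface = sectors_per_surface * BYTES_PER_SECTOR

print(sectors_per_surface)           # 1259396 sectors per platter surface
print(bytes_per_surface / 10**9)     # ~0.645 decimal GB per surface

# Six surfaces (three platters) would give roughly the advertised 3.8 GB:
print(6 * bytes_per_surface / 10**9)
```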

Since the BIOS settings for IDE/ATA hard disks only allow the specification of a single number for "sectors per track", all modern hard disks use logical geometry for the BIOS setup, usually telling the BIOS 63 sectors per track and then translating to the real geometry internally.


Write Precompensation

As discussed in the section on zoned bit recording, older hard disks used the same number of sectors per track. This meant that older disks had a varying bit density as you moved from the outside edge to the inner part of the platter. Many of these older disks required that an adjustment be made when writing the inside tracks, and a setting was placed in the BIOS to allow the user to specify at what track number this compensation was to begin.

This entire matter is no longer relevant to modern hard disks, but the BIOS setting remains for compatibility reasons.


Interleaving

One factor that used to be a big issue for hard disks but is no longer, is interleaving, or setting the disk's interleave factor. The sectors on a disk are numbered logically, so that they can be addressed when data needs to be read or written. However, even though sector 2 comes after sector 1 logically, it doesn't have to be physically located next to sector 1 on the track. The distance in physical sectors between logical sector numbers is the interleave factor.

In the ideal case, sectors would be numbered consecutively, so that logical sector 2 was the next physical sector after logical sector 1. Let's suppose that we arranged them that way, and we wanted to read the disk in sequence: sector 1, 2, 3, etc. The problem is that the disk is spinning very quickly. Older controllers were not fast enough to finish processing the data they read from sector 1 until part of sector 2 had already begun to go past the head. Then, they would be forced to wait for the disk to spin all the way around to the beginning of sector 2. They'd read sector 2, and then the same problem would occur with sector 3.

The standard for older hard disks was 17 sectors per track. Instead of using an interleave factor of 1:1, where the sectors would be numbered 1, 2, 3, .. , 17, an interleave factor of 2:1 could be used instead. With this arrangement, the sectors on a 17-sector track would be numbered as follows: 1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8, 17, 9. Using this interleave factor means that while sector 1 is being processed, sector 10 is passing under the read head, and so when the controller is ready, sector 2 is just arriving at the head. This results in a rather startling reduction of over 90% in the wait time, compared to the 1:1 factor, with a dramatic improvement in performance as a result.
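The numbering scheme above is mechanical enough to generate for any track size and interleave factor. A small sketch (illustration only--the layout was really chosen by the low-level format):

```python
def interleave_order(sectors_per_track, factor):
    """Return the logical sector numbers in the physical order they
    appear around the track, for a given interleave factor.

    Each logical sector is placed `factor` physical slots after the
    previous one, skipping forward past any slot already taken.
    """
    order = [0] * sectors_per_track
    pos = 0
    for logical in range(1, sectors_per_track + 1):
        while order[pos % sectors_per_track]:
            pos += 1                  # slot taken: slide to the next free one
        order[pos % sectors_per_track] = logical
        pos += factor
    return order

print(interleave_order(17, 2))
# [1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8, 17, 9]
```

A factor of 1 reproduces plain consecutive numbering; a factor of 2 reproduces exactly the 17-sector layout given in the text.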

On older hard disks, the interleave factor was one parameter that could be tinkered with to improve performance. On modern hard disks, interleaving is not a factor, as all modern controllers are fast enough to handle a 1:1 interleave factor without any problem. The processing capabilities of hard disk drive controllers have gone up by many multiples since the early days of PCs, but in most cases the rotation speed of the platters has only gone up by 50-100%. Understanding interleaving is still important because the concept forms the basis for more advanced techniques such as head and cylinder skew, which are used on modern drives.

Warning: Some systems have BIOS settings for doing "media analysis" or setting interleave factors for hard disks. These are for older style drives that do not have an integrated controller. They should not be used with modern IDE or SCSI disks.

Note: This is a totally different concept than memory interleaving.


Head and Cylinder Skew

Sector interleaving was once used on older hard disks to ensure that the sectors were efficiently spaced on the track. This was needed to ensure that sector 2 didn't rotate past the head while sector 1 was being processed. While this is not an issue for modern, high-speed disk controllers, there are other delay issues within the drive that require spacing considerations be examined in other ways.

The first is the delay in time incurred when switching between tracks (cylinders) on the hard disk, called appropriately enough, track switch time. Let's imagine that we "lined up" all of the tracks on a platter so that the first sector on each track started at the same position on the disk. Now let's say that we want to read the entire contents of two consecutive tracks, a fairly common thing to need to do. We read all the sectors of track 1 and then switch to track 2 to start reading it at its first sector.

The problem is that it takes time to physically move the heads to track 2, and since we read all of track 1, the platter has rotated around so the first sector was basically on top of us already. A disk spinning at 3600 RPM takes around 17 ms to rotate, and the track switch time of a typical drive is around 3 ms. So while the head is moving to track 2, the first few sectors of track 2 go whizzing by, and we have to wait for an entire revolution for the first sector to return to where the head is. This problem isn't as bad as the interleave problem was, because it occurs only when changing tracks, and not every sector. But it's still avoidable.

The problem is avoided by offsetting the start sector of adjacent tracks to minimize the likely wait time (rotational latency) when switching tracks. This is called cylinder skew. By placing the first sector of track 2 adjacent to, say, sector 10 of track 1, we make it likely that performance will be improved on sequential reads.

The same problem, only to a lesser degree, occurs when we change heads within a cylinder. Here there is no physical movement, but it still takes time for the switch to be made from reading one head to reading another, so it makes sense to offset the start sector of tracks within the same cylinder so that after reading from the first head/track in the cylinder, we can switch to the next one without losing our "pace". This is called head skew. Since switching heads takes less time than switching cylinders, head skew usually means a smaller number of sectors being offset than cylinder skew does.
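The amount of skew needed follows directly from the switch time and the rotation speed: the next track must be offset by at least as many sectors as pass under the head during the switch. A sketch using the illustrative figures from the text (3,600 RPM, 3 ms track switch, 17 sectors per track--real drives would plug in their own measured values):

```python
import math

def skew_sectors(switch_time_ms, rpm, sectors_per_track):
    """Number of sectors to offset the start of the next track so the
    head arrives just before its first sector comes around."""
    ms_per_rev = 60_000 / rpm                      # one full rotation
    ms_per_sector = ms_per_rev / sectors_per_track
    return math.ceil(switch_time_ms / ms_per_sector)

# 3600 RPM is ~16.7 ms per revolution, so ~0.98 ms per 17-sector track slot;
# a 3 ms track switch therefore needs a skew of 4 sectors.
print(skew_sectors(3.0, 3600, 17))   # 4
```

Head skew would use the same formula with the (smaller) head switch time, which is why it comes out to fewer sectors than cylinder skew.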

The internal design of the drive is optimized based on the actual specific timing characteristics of the disk, with the layout of the tracks adjusted to account for cylinder skew and head skew, based on the likelihood of various access patterns. All of the details are taken care of by the controller. This is one reason why having integrated, dedicated drive electronics on the disk itself is such a good idea.

 

Hard Disk Formatting and Capacity

Most PC users are familiar with the concept that a hard disk--in fact, all storage media--must be formatted before it can be used. There is usually some confusion, however, regarding exactly what formatting means and what it does. This is exacerbated by the fact that modern hard disks are not formatted in the same way that older ones were. This section takes a look at issues surrounding disk formatting and capacity.


Two Formatting Steps

Formatting a hard disk is actually a two-step process, which is what leads to much of the confusion in this area. The first step, low-level formatting, is "true" formatting. This is the step that actually creates the physical structures on the hard disk. The second step, high-level formatting, is an operating-system-level command that defines the logical structures on the disk. This is what you are doing when you use the DOS "FORMAT" command on a hard disk (but not on a floppy, where it will do the low-level formatting as well if that has not yet been done on the disk). Partitioning is done between the two formatting steps.


Low-Level Formatting

Low-level formatting is the process of outlining the positions of the tracks and sectors on the hard disk and writing the control structures that define where the tracks and sectors are. This is often called a "true" formatting operation, because it really creates the physical format that defines where the data is stored on the disk. Performing a low-level format (LLF) on a disk permanently and completely erases it and is the closest thing to "starting fresh" with a new hard disk. Unfortunately, if done incorrectly, it can cause the drive to become unreliable or even inoperable.

If you've explored other areas of this chapter describing hard disks, you have learned that modern hard disks are much more precisely designed and built, and much more complicated than older disks. Older disks had the same number of sectors per track, and did not use dedicated controllers. It was necessary for the controller to do the low-level format. Newer disks use many complex internal structures, including zoned bit recording to put more sectors on the outer tracks than the inner ones, and embedded servo data to control the head actuator. Due to this complexity, all modern hard disks are low-level formatted at the factory.

Older drives needed to be re-low-level-formatted occasionally because of the thermal expansion problems associated with using stepper motor actuators. Over time the tracks would move relative to where the heads expected them to be, and errors would result. These could be corrected by doing a low-level format, rewriting the tracks in the new positions that the stepper motor moved the heads to. This is totally unnecessary with modern voice-coil-actuated hard disks.

Warning: You should never attempt to do a low-level format on an IDE/ATA or SCSI hard disk unless you are instructed to do so by the manufacturer during a technical support session, and if so, only using software tools that they authorize. Do not try to use BIOS-based low-level formatting tools on these newer drives.

High-Level Formatting

After low-level formatting is complete we have a disk with tracks and sectors, but nothing written on them. High-level formatting is the process of writing the file system structures on the disk that let the disk be used for storing programs and data. If you are using DOS, for example, the DOS FORMAT command performs this work, writing such structures as the master boot record and file allocation tables to the disk. High-level formatting is done after the hard disk has been partitioned, even if only one partition is to be used.

The distinction between high-level formatting and low-level formatting is important. It is not necessary to low-level format a disk to erase it: a high-level format will suffice. By wiping out the control structures and writing new ones, the old information is lost and the disk appears as new. (Much of the old data is still on the disk, but the access paths to it have been wiped out.) Also, different operating systems use different high-level format programs, because they use different file systems. However, the low-level format, which is the real place where tracks and sectors are recorded, is the same.


Defect Mapping and Spare Sectoring

Despite the precision manufacturing processes used to create hard disks, it is virtually impossible to create a disk with several million sectors and not have some errors show up. Imperfections in the media coating on the platter or other problems can make a sector inoperable. This usually shows up as errors attempting to read the sector, but there are other error types as well.

Modern disks use ECC to help identify when errors occur and in some cases correct them; however, there will still be physical flaws that prevent parts of a disk from being used. Usually these are individual sectors that don't work, and they are appropriately enough called bad sectors. If you've ever used a disk information utility on an older hard disk, or on a floppy disk, you've likely seen a report showing a few kilobytes worth of bad sectors. However, if you run such a utility on a modern hard disk, you will basically never see any reports of bad sectors on the disk. Why is this?

When the disk drive is manufactured, it is thoroughly tested for any areas that might have errors. All the sectors that have problems or are thought to be unreliable are recorded in a special table. This is called defect mapping. On older hard disks, these were actually recorded--usually in handwriting by the technician testing the drive--right on the top cover of the disk! This information was necessary because low-level formatting was often done by the end user, and it told the controller which areas of the disk to avoid when formatting the disk. In addition to the physical table on the outside of the disk, each defect is marked inside the drive as well, to tell any high-level format program not to try to use that part of the disk. These markings are what cause the "bad sectors" to show up when examining the disk.

While early PC users accepted that a few bad sectors on a drive was normal, there was something distasteful about plopping down $1,000 for a new hard disk and having it report "bad sectors" as soon as you turned it on. There is no way to produce 100% perfect hard disks without them costing a small fortune, so hard disk manufacturers devised an interesting compromise.

On modern hard disks, a small number of sectors are reserved as substitutes for any bad sectors discovered in the main data storage area. During testing, any bad sectors that are found on the disk are programmed into the controller. When the controller receives a read or write for one of these sectors, it uses its designated substitute instead, taken from the pool of extra reserves. This is called spare sectoring. In fact, some drives have entire spare tracks available, if they are needed. This is all done completely transparently to the user, and the net effect is that all of the drives of a given model have the exact same capacity and there are no visible errors.
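Conceptually, the controller's remap is just a lookup table consulted on every access. A toy model (the sector numbers are invented for illustration; real firmware is far more involved):

```python
class SpareSectoring:
    """Toy model of a controller remap table: reads and writes aimed at
    a known-bad sector are silently redirected to a reserved spare."""

    def __init__(self, bad_sectors, first_spare):
        # Map each factory-detected bad sector to its own spare sector
        self.remap = {bad: first_spare + i
                      for i, bad in enumerate(sorted(bad_sectors))}

    def physical(self, logical_sector):
        """Return the sector actually used for this logical address."""
        return self.remap.get(logical_sector, logical_sector)

# Hypothetical drive: two factory defects, spares starting at 2,000,000
drive = SpareSectoring(bad_sectors={1_042, 77_310}, first_spare=2_000_000)
print(drive.physical(1_042))   # 2000000 -- transparently substituted
print(drive.physical(5))       # 5       -- normal sector, unchanged
```

The user (and the operating system) only ever sees the logical side, which is why every drive of a model reports the identical, "perfect" capacity.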

Really, when you think about it, the hard disk companies are sacrificing a small amount of storage for "good looks". It would be more efficient to use all of the sectors on the disk and just map out the few bad ones. However, sometimes marketing wins out over engineering, and it seems that more people want the warm feeling of thinking they have a perfect drive, even if it costs them theoretical storage in the process.

Because of spare sectoring, a brand new disk should not have any bad sectors. It is possible for a modern IDE/ATA or SCSI hard disk to develop new bad sectors over time. This is usually an indication of a greater problem with the disk. The vast majority of the time, a single bad sector showing up will soon be accompanied by more of its friends. Make sure to contact the vendor if you see bad sectors appearing during scans, and make sure the data is backed up as well.


Unformatted and Formatted Capacity

A small amount of the space on a hard disk is taken up by the formatting information that marks the start and end of sectors, and other control structures. For this reason, a hard disk's storage total depends on if you are looking at the formatted or unformatted capacity. The difference can be quite significant: 20% or even more.

Let's be honest: stating the capacity of the hard disk in its unformatted size is kind of lame; since nobody can use a drive that is unformatted, the only thing that matters is the formatted capacity. (It's not quite as lame as how tape drive manufacturers always report the size of their drives assuming 2:1 compression, of course.) Fortunately, most hard disk manufacturers no longer report unformatted capacity, or at least if they do they just put it in the specifications area instead of in the primary advertising. It still pays to make sure you really know what you are reading.

 

Binary vs. Decimal Capacity Measurements

One of the most confusing problems regarding capacity measurements is the fact that the computing world has two different definitions for most of its measurement terms. Capacity measurements are usually expressed in kilobytes (thousands of bytes), in megabytes (millions of bytes), or gigabytes (billions of bytes). Due to a mathematical coincidence, there are two similar but different ways to express a megabyte or a gigabyte.

Computers are digital and store data using binary numbers, or powers of two. As it turns out, two to the tenth power, 2^10, is 1,024, which is approximately 1,000 (10^3). For this reason, 1,024 bytes is also called a kilobyte. Similarly, 2^20 is 1,048,576, which is called a megabyte (as is 1,000,000), and 2^30 is 1,073,741,824, called a gigabyte (as is 1,000,000,000). There's a table in the Technical Resource Guide that summarizes this information.

The numbers expressed as powers of two are called binary kilobytes, megabytes, and gigabytes, while the conventional powers of ten are called decimal kilobytes, megabytes and gigabytes. In many areas of the PC, only binary measures are used. For example, 64 MB of system RAM always means 64 times 1,048,576 bytes of RAM. In other areas, only decimal measures are found--a 28.8K modem works at 28,800 bits per second, never 29,491. With disks, however, some companies and software packages use binary megabytes and gigabytes, and some use decimal. Notice that the difference in size between binary and decimal measures gets larger as the numbers get larger. There is only a 2.4% difference between a decimal and a binary kilobyte, but this increases to around a 5% difference for megabytes, and around 7.5% for gigabytes, which is actually fairly significant.

One of the biggest problems is that hard disk manufacturers almost always state capacities in decimal megabytes and gigabytes, while most software uses binary numbers. (Why do the drive manufacturers do this? Because it makes the drives seem larger when using decimal numbers, of course.) As a result, people get confused and wonder "what happened" to some of their hard disk's capacity.

This is also much of the source of confusion surrounding 2.1 GB hard disks and the 2 GB DOS limit on partition size. Since DOS uses binary gigabytes, and 2.1 GB hard disks are expressed in decimal terms, a 2.1 GB hard disk can in fact normally be entirely placed within a single DOS partition. 2.1 decimal gigabytes is actually 1.96 binary gigabytes. Another example is the BIOS limit on regular IDE/ATA hard disks, which is either 504 MB or 528 MB, depending on which type you are talking about.

Another thing to be careful of is converting between binary gigabytes and binary megabytes. Decimal gigabytes and megabytes differ by a factor of 1,000, but of course the binary measures differ by 1,024. So this same 2.1 GB hard disk is 2,100 MB in decimal terms. But its 1.96 binary gigabytes are equal to 2,003 binary megabytes (2,100,000,000 bytes divided by 1,048,576).
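These conversions are simple enough to sketch, using the 2.1 GB drive from the text as the example:

```python
def capacity_units(size_bytes):
    """Express a byte count in decimal and binary megabytes/gigabytes."""
    return {
        "decimal GB": size_bytes / 10**9,
        "binary GB":  size_bytes / 2**30,
        "decimal MB": size_bytes / 10**6,
        "binary MB":  size_bytes / 2**20,
    }

d = capacity_units(2.1 * 10**9)        # the "2.1 GB" drive from the text
print(round(d["binary GB"], 2))        # 1.96 -- fits in a 2 GB DOS partition
print(round(d["binary MB"]))           # 2003
print(round(d["decimal MB"]))          # 2100
```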

One final "gotcha" in this area is related to arithmetic done between units that have different definitions of "mega" or "giga". For example: most people would say that the PCI bus has a maximum theoretical bandwidth of 133.3 MBytes/second, because it is 4 bytes wide and runs at 33.3 MHz. The problem is that the "M" in "MHz" is 1,000,000; the "M" in "MBytes/second" is 1,048,576. So the bandwidth of the PCI bus is more properly stated as 127.2 MBytes/second.
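The PCI figure can be reproduced the same way; this sketch assumes the "33.3 MHz" clock is exactly 100/3 MHz:

```python
clock_hz = 100 * 10**6 / 3          # "33.3 MHz" is really 100/3 MHz
bandwidth = 4 * clock_hz            # the bus is 4 bytes wide
print(round(bandwidth / 10**6, 1))  # 133.3 decimal MBytes/second
print(round(bandwidth / 2**20, 1))  # 127.2 binary MBytes/second
```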

There's potential good news regarding this whole binary/decimal conundrum. The IEEE has proposed a new naming convention for the binary numbers, to hopefully eliminate some of the confusion. Under this proposal, for binary numbers the third and fourth letters in the prefix are changed to "bi", so "mega" becomes "mebi" for example. Thus, one megabyte would be 10^6 bytes, but one mebibyte would be 2^20 bytes. The abbreviation would become "1 MiB" instead of "1 MB". Only time will tell if this standard will catch on; old habits die hard.
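For illustration, the gap between the two units under the proposed naming is easy to compute:

```python
megabyte = 10**6   # "1 MB": a decimal megabyte under the proposal
mebibyte = 2**20   # "1 MiB": a binary mebibyte under the proposal
print(mebibyte - megabyte)   # 48576 bytes of difference
```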

 

Hard Disk Geometry Specifications and Translation

The generic term used to refer to the way the disk structures its data into platters, tracks, and sectors is its geometry. In the early days this was a relatively simple concept: the disk had a certain number of heads, tracks per surface, and sectors per track. These were entered into the BIOS setup so the PC knew how to access the drive, and that was basically that.

With newer drives the situation is more complicated. The simplistic limits placed in the older BIOSes have persisted to this day, but the disks themselves have moved on to more complicated ways of storing data, and much larger capacities. The result is that tricks must be employed to ensure compatibility between old BIOS standards and newer hard disks.

Note: These issues relate to IDE/ATA hard disks, not SCSI, which uses a different addressing methodology.

Physical Geometry

The physical geometry of a hard disk is the actual physical number of heads, cylinders and sectors used by the disk. On older disks this is the only type of geometry that is ever used. The original setup parameters in the system BIOS are designed to support the geometries of these older drives, in particular the fact that every track has the same number of sectors.

All newer drives that use zoned bit recording must hide the internal physical geometry from the rest of the system, because the BIOS can only handle one number for sectors per track. These drives use logical geometry figures, with the physical geometry hidden behind routines inside the drive controller. Usually you have to request detailed specifications for the drive to find out the true physical geometry. Even then you might have problems--I called one major drive manufacturer and the technician had no idea what I was talking about. He kept giving me the logical parameters and insisting they were the physical ones. I asked him how his drive could have 16 heads when it had only 3 platters, and he got very confused.


Logical Geometry

When you perform a drive parameter autodetection in your system BIOS setup or look in your new hard disk's setup manual to see what the drive parameters are, you are seeing the logical geometry parameters that the hard disk manufacturer has specified for the drive. Since newer drives use zoned bit recording and hence have ten or more values for sectors per track depending on which region of the disk is being examined, it is not possible to set up the disk in the BIOS using the physical geometry. Also, the BIOS has a limit of 63 sectors per track, and all newer hard disks average more than 100 sectors per track, so even without zoned bit recording, there would be a problem.

To get around this issue, the BIOS is given bogus parameters that give the approximate capacity of the disk, and the hard disk controller is given intelligence so that it can do automatic translation between the logical and physical geometry. Virtually all modern disks use a logical geometry with 16 heads and 63 sectors, since these are the largest values allowed by the BIOS. The actual physical geometry is totally different, but the BIOS (and your system) knows nothing about this. As far as the system is concerned, the disk has 16 heads and 63 sectors on every track, and the hard disk itself takes care of all the "dirty work". The physical geometry is totally hidden from view. In fact, it is hidden so well that you will have a hard time finding out the physical geometry of your drive. If you call up tech support and ask for the geometry, they will always give you the logical geometry.
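One way to picture the drive's side of this translation: given its true total sector count, the drive can derive a logical cylinder count that fits the BIOS's fixed 16-head, 63-sector scheme. A simplified sketch (the sector total here is just an example, and real firmware is considerably more involved):

```python
# Divide the drive's real total sector count by the fixed
# 16 heads x 63 sectors the BIOS expects, giving the logical
# cylinder count reported to the system.
physical_total_sectors = 7_539_840
logical_cylinders = physical_total_sectors // (16 * 63)
print(logical_cylinders)   # 7480
```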

Here's an example showing the difference between the physical and logical geometry for a sample drive, a 3.8 GB Quantum Fireball TM:

Specification                     Physical Geometry    Logical Geometry
--------------------------------  -------------------  ----------------
Read/Write Heads                  6                    16
Cylinders (Tracks per Surface)    6,810                7,480
Sectors Per Track                 122 to 232           63
Total Sectors                     7,539,840            7,539,840

(From Quantum Fireball TM Product Manual, 1996 Quantum Corporation.)

The fact that both geometries equate to the same number of total sectors is not a coincidence. The purpose of the logical geometry is to enable access to the entire disk using terms that the BIOS can handle. The logical geometry could theoretically end up with a smaller number of sectors than the physical, but this would mean wasted space on the disk. It can never specify more sectors than physically exist, of course.
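The equality is easy to verify from the logical figures in the table above:

```python
heads, sectors_per_track, cylinders = 16, 63, 7480   # logical geometry
total_sectors = heads * sectors_per_track * cylinders
print(f"{total_sectors:,}")                    # 7,539,840 sectors
print(round(total_sectors * 512 / 10**9, 2))   # 3.86 (decimal GB)
```

At 512 bytes per sector, this also confirms the drive's roughly 3.8 GB decimal capacity.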

The translation between logical and physical geometry is the lowest level of translation that occurs when using a modern hard disk. It is different from BIOS translation, which occurs at a higher level.


BIOS Geometry Translation

While the use of logical hard disk geometry gets around the problem that physical hard disk geometries cannot be properly expressed using standard BIOS settings, it doesn't go far enough. In most cases, higher levels of translation are needed as well, because other problems relating to old design decisions make it impossible for even the logical geometry to be used with modern large hard disks. These are the infamous BIOS capacity barriers, such as the 504 MB limit on standard IDE/ATA hard disks, and other similar issues.

In order to get around these barriers, another layer of translation is often applied on top of the geometry translation that occurs inside the hard disk. This translation is performed by the BIOS. There are many issues involved in BIOS-level translation.

 

Hard Disk Error Management and Recovery

Many people don't realize that it is normal for a hard disk to produce errors during reading, as part of its regular operation. As hard disks are pushed to the limits of technology, with tracks and sectors spaced closer together, weaker signals used to prevent interference, and faster spin rates produced by the spindle motor, the chances of an error occurring while reading the disk go up dramatically.

Of course having actual errors while reading the hard disk is unacceptable, since most people count on their disks reproducing the data they store on them reliably, for a period of years. Hard disk manufacturers know how important this is, and so incorporate special techniques that allow them to detect and correct hard disk errors. This allows them to make faster, higher-capacity drives that appear to the user to be error-free. The more the technology for storing data is pushed, the more sophisticated the error correction protocols must be to maintain the same level of reliability.


Error Correcting Code (ECC)

The basis for modern hard disk error detection and correction is the use of error correcting code or ECC (sometimes also called error correction code or error correcting circuits). There are several different types of error correcting codes that can be used, but the type commonly used on PCs is the Reed-Solomon algorithm. This technique is also used for error detection and correction on CD-ROM media, and is also used on some systems to detect and correct errors in the system memory.

The basis of all error detection and correction is redundant information and special software to use it. Each sector of data on the hard disk contains 512 bytes, or 4,096 bits, of user data. In addition to these bits, an additional number of bits is reserved for use by ECC. The number of ECC bits used per sector is a design decision: the more bits used, the more robust the error detection and correction, but the fewer sectors that can fit on each track (since less of the disk's space is available for user data). Many hard disks now use over 200 bits of ECC code per sector.
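To get a rough sense of the overhead, 200 ECC bits against 4,096 data bits works out to under 5% of each sector:

```python
user_bits = 512 * 8   # 4,096 bits of user data per sector
ecc_bits = 200        # "over 200 bits" of ECC, per the text
overhead = ecc_bits / (user_bits + ecc_bits) * 100
print(round(overhead, 1))   # 4.7 (percent of the sector's bits)
```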

When a sector is written to the hard disk, ECC codes are generated and stored in their reserved bits. When the sector is read back, the user data read, combined with the ECC bits, tell the controller if any errors occurred during the read. Not every error can be detected and corrected, but the vast majority of the most common ones can. Sophisticated drive firmware uses the ECC as part of its overall error management protocols.


Read Error Severities and Error Management Logic

The hard disk's controller employs sophisticated techniques to manage read errors. In a way, the system is kind of like a problem diagnosis flowchart. When a problem occurs, the simplest techniques are tried first, and if they don't work, the problem is escalated to a higher level. Every manufacturer uses different techniques, so this is just a rough guideline of what happens.

Here is an example of how a hard disk will approach error management:

  1. ECC Error Detection: The sector is read, and error detection is applied to check for any read errors. If there are no errors, the sector is passed on to the interface and the read is concluded successfully.
  2. ECC Error Correction: The controller will attempt to correct the error using the ECC codes read for the sector. In many cases the data can be corrected very quickly using these codes. If so, the data is corrected and the read considered successful. Many drive manufacturers consider this occurrence common enough that it is not even considered a "real" read error. An error corrected at this level can be considered "automatically corrected".
  3. Automatic Retry: The next step is usually to wait for the disk to come around again, and retry the read. Sometimes the first error can be caused by a stray magnetic field or other non-repeating problem, and the retry will work. If it doesn't, another retry may be done. Most controllers are pre-programmed to retry the sector a certain number of times before giving up. An error corrected after a straight retry is often considered "recovered" or "corrected after retry".
  4. Advanced Error Correction: Many drives will, on subsequent retries after the first, invoke more advanced error correction algorithms that are slower and more complex than the regular correction protocols but have an increased chance of success. These errors are "recovered after multiple reads" or "recovered after advanced correction".
  5. Failure: If the sector still cannot be read, the drive will signal a read error to the system. These are "real", unrecoverable read errors, the kind that result in a dreaded error message on the screen.
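The escalation sequence above can be sketched as a loop. All of the function names here are hypothetical stand-ins for drive firmware routines, and the retry count is arbitrary:

```python
MAX_RETRIES = 8   # arbitrary; real firmware has its own limits

def read_sector(raw_read, ecc_correct, advanced_correct):
    """Escalating read: clean read, ECC fix, retries, advanced correction."""
    data, ok = raw_read()
    if ok:
        return data, "clean read"
    data, ok = ecc_correct(data)          # step 2: on-the-fly ECC
    if ok:
        return data, "automatically corrected"
    for attempt in range(MAX_RETRIES):    # step 3: plain retries
        data, ok = raw_read()
        if ok:
            return data, "recovered after retry"
        if attempt > 0:                   # step 4: advanced correction
            data, ok = advanced_correct(data)  # on retries after the first
            if ok:
                return data, "recovered after advanced correction"
    raise IOError("unrecoverable read error")  # step 5: failure
```

For example, with a raw read that always fails, an ECC pass that fails, and an advanced pass that succeeds, the call returns the status "recovered after advanced correction".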

Every hard disk has statistical analysis done on it to determine the likelihood of these various errors, and they are usually given a rating in terms of how often the error is likely to occur. Looking again at the Quantum Fireball TM, we see the following error rate specifications:

Error Severity                                 Worst-Case Frequency of Error
                                               (Number of Bits Read Between Occurrences)
---------------------------------------------  -----------------------------------------
Automatically Corrected                        Not Specified
Recovered Read Errors                          1 billion (1 Gb)
Recovered After Multiple Reads
  (Full Error Correction)                      1 trillion (1,000 Gb)
Unrecoverable Read Errors                      100 trillion (100,000 Gb)

(From Quantum Fireball TM Product Manual, 1996 Quantum Corporation.)

I find this pretty impressive. While your hard disk does a lot of reads and writes, 100,000 gigabits is a pretty enormous number! This is why the reliability of modern hard disks is so high.

 

Error Notification and Defect Mapping

Many drives are smart enough to realize that if a sector can only be read after retries, the chances are good that something bad may be happening to that sector, and the next time it is read it might not be recoverable at all. For this reason the drive will usually do something when it has to use retries to read a sector (but usually not when ECC will correct the problem on the fly).

What the drive does depends on how it is designed. In many cases when the data has to be corrected after retries, a special message will be sent to the system indicating that the data has been corrected. Other hard disks are more sophisticated; the Fireball TM examined in this section will, if it must use its full advanced error correction protocols, automatically mark the sector bad and relocate it to one of its spare sectors.
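Defect mapping of this kind can be pictured as a small remap table. This is only an illustrative sketch (real drives keep the map in reserved areas of the platters, with firmware-specific policies, and the sector numbers here are made up):

```python
class DefectMap:
    """Toy model of sector remapping to a pool of spare sectors."""

    def __init__(self, spare_sectors):
        self.spares = list(spare_sectors)  # pool of spare sector numbers
        self.remap = {}                    # bad sector -> spare sector

    def mark_bad(self, sector):
        # Retire a sector that needed heavy correction, if a spare is left.
        if sector not in self.remap and self.spares:
            self.remap[sector] = self.spares.pop(0)
        return self.remap.get(sector)

    def resolve(self, sector):
        # Translate a requested sector to its real location.
        return self.remap.get(sector, sector)

dm = DefectMap(spare_sectors=[9000, 9001])
dm.mark_bad(1234)          # sector 1234 needed full error correction
print(dm.resolve(1234))    # 9000: future requests go to the spare
print(dm.resolve(42))      # 42: unremapped sectors are untouched
```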