6 Circles of Risk is a relatively long, detailed essay by Jay Golter and Paloma Hawry that illustrates the various risks Y2K poses to computer systems, businesses, and the economy.
It discusses the sheer complexity and magnitude of the Y2K threat. Here are brief excerpts from the article:
First Circle: Core Information Systems
Larger enterprises generally have developed or purchased automated applications to process and manage their critical business information. Traditionally, these data systems were maintained and controlled by the company's own professional information management staff operating within corporate-defined standards for updates, processing and output. Functions that are likely to be automated include payroll and benefit administration, inventory management, accounting, accounts payable and receivable processing, and scheduling (of staff, production, or deliveries). The software used to run each of these processes may have been developed by an in-house programming staff or purchased off the shelf from a vendor, or may represent a combination of off-the-shelf products and custom-developed applications. While these systems used to reside on mainframe computers, in recent years many organizations have developed other, more distributed platforms for these functions. Often, what were once considered core systems may even be managed on-site or off-site by an outsourcer. These arrangements may leave the outsourcer as the only entity with knowledge of the internal workings of the applications program, even though any Year 2000 impairments to those programs represent a serious or even fatal problem for the firm.
To date, the most intense Year 2000 compliance work has been in data processing departments ensuring that the core systems will continue to function after the millennium changes. For operations to continue unaffected, many elements of the system need to be scrutinized, modified, tested and replaced. This process must be conducted on all important components of the system in a coordinated fashion so that the replacement of one part of the system with a century-compliant version does not interfere with the functioning of another part of the system with which it interfaces.
In some cases, the hardware on which important applications run will itself be unable to process data after the end of the 20th century. For example, IBM has announced that it will not provide upgrades to System/360 mainframes (1970s technology). Some of these platforms may still be in operation at firms that were unsuccessful in converting to newer equipment. Replacing a mainframe platform is rarely an easy proposition because some of the software running on the old system may be incompatible with the new system, requiring that new application software be purchased or created. Some of the complexities of replacing applications software are described below.
Assuming that the hardware platforms are capable of operating in the next century, the next core area of concern is the operating systems. Operating systems instruct the computer on how to read and follow the instructions that comprise the application software. Elements of the operating system include media software, which enables the computer to read and write data in electronic formats such as magnetic tapes or optical disks; language compilers, which enable a computer to understand programs that have been written in different software languages; and computer job scheduling systems. The problems inherent in upgrading hardware are comparable to those in upgrading operating system software: it is necessary first to test to see whether all of the applications currently being used will work with the new operating software. Over time, some data processing centers may have deliberately chosen to forgo upgrading their operating systems, so that now, for example, only an early version of a language compiler might read some of the programs that the firm continues to use. Upgrading this firm's operating software will also require upgrading the language compiler. But upgrading the language compiler will require that the application software written in that language be converted or modified. Originally, the improvement in performance that the new operating system would have provided might not have been worthwhile. Now, however, all of these conversions must be completed, and in a fairly short period, simply for the firm to be at a point where the hardware and operating systems are capable of functioning in the year 2000.
The next layer of the system that needs to be examined is the applications software. These are the programs that, for example, run the general ledger system, track order processing, or contain the customer information files. Some of these applications may have been purchased from software vendors (Commercial Off-the-Shelf Software, or COTS) and some may have been developed by in-house programmers. Hybrid situations are also common. Often a COTS program will have been modified by the in-house staff to accommodate the firm's unique needs. At other times, the COTS developer may customize the software to match the operational needs of the firm. For example, an application may house a database that was developed in-house and that may be accessed with a COTS program that was modified by the firm. At the time this article was being written, Year 2000 patches or upgrades for many COTS applications had not yet been released. However, in the near future the software vendor should be releasing newer versions that will function during the next decade (or else go out of business).4 Once again, extensive testing must take place before the new versions can be installed. The difficulty of testing and installing will increase with the number of modifications that the in-house staff has developed. Furthermore, individual firms may have chosen not to upgrade with the previous two or three releases. In such a case, it may be necessary to install an intermediate version of the application before installing the final, compliant version.
Some of the greatest challenges will involve applications that were created in-house. These programs must be examined line by line for places in which dates are used as part of an instruction. Once these date-related processes are identified, it must be determined if the instructions represented by that section of code will properly execute when dates from the year 2000 are processed. Modifications must be made to accommodate the new century, and the modified application must be tested to ensure that the changes have not inadvertently affected another part of the program. Locating date-sensitive code may be complicated by the fact that a programmer may have used dates in ways that are not obvious. For example, a programmer may have created a temporary signal within the program to indicate that some processing routine would be necessary for certain records. The signal may actually contain date information, and the programmer may have named the signal after his or her pet at the time. A Year 2000 remediator who finds a line of code that reads "If Spot is greater than 64 then do routine X" would not immediately realize that date information was being processed.
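The failure mode being hunted for can be sketched in a few lines of Python (the language and names here are illustrative, not from the article): arithmetic on two-digit years silently breaks once "00" means 2000, and a windowing pivot is one common remediation.

```python
# Hypothetical illustration of the classic two-digit-year bug a
# Year 2000 remediator must hunt for line by line.

def years_since(two_digit_year, current_two_digit_year):
    """Naive pre-Y2K logic: assumes both years fall in the 1900s."""
    return current_two_digit_year - two_digit_year

# Works in 1998: a record stamped '65' is 33 years old.
print(years_since(65, 98))   # 33

# Fails in 2000: the same record now appears to be -65 years old,
# because '00' is treated as 1900, not 2000.
print(years_since(65, 0))    # -65

def years_since_windowed(two_digit_year, current_four_digit_year, pivot=50):
    """One common remediation: a windowing pivot. Two-digit years below
    the pivot are read as 20xx, the rest as 19xx."""
    century = 2000 if two_digit_year < pivot else 1900
    return current_four_digit_year - (century + two_digit_year)

print(years_since_windowed(65, 2000))  # 35
```

The windowing approach avoids expanding every stored date field, but it only works if no legitimate data spans more than one hundred years around the pivot.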
Repairing in-house code can be complicated by two additional factors. First, the code may be written in a very old programming language that has not been taught to new students for many years. In the past, whenever the program needed to be modified, the data-processing center might have relied on a few veteran staffers who knew the particular language or contracted the job to outside vendors. Unfortunately, while these resources might have been adequate to handle occasional jobs, they may not be sufficient for the job of combing each line of each program in that language to look for date fields and date logic. The alternative is to replace the program, either with a COTS product or with a new program written in a more common programming language. However, converting to new software can often be difficult and time-consuming; existing files must be modified to accommodate the formats used in the new software, and the software must be tested to verify that it handles all of the functions that the old system performed. Employees must be trained in the use of the new system (inputting data, reading reports, taking inquiries), and procedures must be established to replicate any important functionality that was lost when the old application was retired. Furthermore, to minimize the complexity of a systems conversion, historical information from the old system is often dropped from the new system. During the early days of operations, therefore, while the new on-line histories are being built up, employees have to use other tools, like CD-ROM files, for information that had previously been readily available on-line.
The other factor that will influence the difficulty of converting in-house applications is the extent to which those old systems are well documented. In the early days of data processing, computer code was written and implemented without much thought to how easily the code could be modified later. Over time, as data processing departments experienced the difficulty of making changes to these early programs, they established new methods and standards. For example, English-language comments or narratives should accompany the code, describing the purpose for each section of the code and the logic that the system will follow. But partly because of the difficulty of untangling the old sections of code and partly from a sense that "If it ain't broke, don't fix it," some old, undocumented programs are still being used. Now they must be examined for date sensitivity. In some cases, this task is further complicated by the lack of source code.5
Over the past couple of decades, most data-processing systems have been enhanced by the integration of different components. For example, a system used to process customer orders received by telephone may first pull up name and address information from a customer database (along with a summary of information describing past contacts with the customer), then determine if the merchandise being requested is available from an inventory system, perform a quick credit check on the customer (perhaps with an outside servicer), determine the cost of shipping the merchandise by accessing another database, initiate the processing of the order, provide data for an accounting system, and create a log of the conversation for an employee-performance evaluation system. Expanding the date fields in the file structure of the order-taking system without also modifying the programs that allow the various other systems to interface with it would be likely to cause several of these systems to stop functioning properly. Furthermore, if the information being exchanged with another system is a date, special bridge programs must be written to convert the newly formatted date to the format maintained in the other system. At some future time, the date format in the other system may itself be modified, a change requiring that the bridge programs be altered and tested before the new change is implemented.
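The bridge idea can be sketched as follows (a minimal Python illustration; the date formats and function names are assumptions, not from the article): the converted system carries four-digit years, while a bridge translates at the interface to a system that still uses two-digit years.

```python
# Hypothetical bridge between a remediated system (8-digit YYYYMMDD
# dates) and a downstream system still expecting 6-digit YYMMDD dates.

def expand_date(yymmdd, pivot=50):
    """Legacy -> expanded: infer the century with a windowing pivot."""
    yy = int(yymmdd[:2])
    century = "20" if yy < pivot else "19"
    return century + yymmdd

def truncate_date(yyyymmdd):
    """Expanded -> legacy: drop the century for the unconverted system.
    Lossy, so the bridge must be retired once that system is fixed."""
    return yyyymmdd[2:]

print(expand_date("991231"))      # 19991231
print(expand_date("000101"))      # 20000101
print(truncate_date("20000101"))  # 000101
```

As the article notes, when the other system's date format eventually changes, both conversion functions would have to be altered and retested before the change goes live.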
One recent trend in data processing at large organizations is the construction of "data warehouses." In such systems, data from various applications are defined consistently so that each element can be viewed from a variety of platforms (that is, end users from different divisions, using different computer systems, will all be able to access the same database). Such systems may involve multiple tiers of systems. For example, a processing tier, a data storage tier, and presentation software may all work together to create a comprehensive system. These components may be developed, in part, with a variety of COTS software, vendor software, and applications developed in-house. By enabling several applications and platforms to operate together seamlessly, such system designs provide many benefits to a business. However, they create special management challenges in the areas of data control and data integrity, data translation, and transaction timing. Network performance and connectivity become an integral aspect of the application design and functionality. These issues complicate the process of making an organization's critical systems ready for the Year 2000.
Second Circle: Networks, Workstations, PCs
The search for an institution's vulnerability to Year 2000 problems moves outward from the systems controlled by a data-processing department to other systems that are wholly or partly controlled by end users. Although the core functions of larger organizations usually reside on centralized systems, decentralized systems offer greater flexibility -- thus many important tasks have migrated to network systems, workstations, or stand-alone PCs. Common examples include e-mail and systems that enable files to be electronically shared (for example, between loan officers and credit review officers). In addition, many of the desktop applications (such as word processing and spreadsheet programs) used in the organization may be delivered to the user over a network.
Some firms might be able to function with these systems down for an extended period of time, but doing so would still be very disruptive and inconvenient. Other firms, however, have placed extremely critical operations on these systems and may not be able to function properly should they become dysfunctional. Early in its Year 2000 remediation project, each organization must determine how dependent it is on each of the applications that run on these platforms. A Year 2000 compliance plan should include testing the network servers, bridges, and routers; the software applications; the drivers that enable the applications to interface with the different components of the network; the central data maintained on the network; and the special programs (such as spreadsheets) that users created to analyze or manipulate data.
PC users, whether networked or stand-alone, have often created spreadsheet or database programs that help them make important decisions. These users may not be aware of the need to verify that those programs will function properly after the dates change, or they may not have the experience to design adequate tests. Indeed, the application may have been written by a former employee who had greater programming skills than the incumbent. After the century changes, these programs may appear to be still functioning even though the results they produce are faulty. Therefore, firms must identify which of these programs are critical or important, and must provide support for users trying to test and convert them.
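A toy example of such a silently faulty user program (hypothetical data; Python stands in for a spreadsheet formula): a report sorted on a two-digit year field keeps running after 1999 but orders the records incorrectly, with no error to alert the user.

```python
# Illustrative only: a user-built report that keeps "working" after the
# century change while silently producing wrong answers. Invoices are
# sorted by a two-digit year field, so year-2000 rows ("00") sort
# before 1998 and 1999 rows.

invoices = [("A-17", "99"), ("A-18", "00"), ("A-16", "98")]

# Sorting on the raw two-digit field runs without any error...
by_year = sorted(invoices, key=lambda rec: rec[1])
print(by_year)  # [('A-18', '00'), ('A-16', '98'), ('A-17', '99')]

# ...but the newest invoice (A-18, issued in 2000) now appears oldest,
# reversing the true chronological order.
```

Nothing crashes, which is exactly why such programs must be identified and tested deliberately rather than left to reveal themselves.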
Third Circle: Third-Party Data Exchanges
The next circle of vulnerability involves exchanges of data with other entities. Even though an institution may have corrected all of its own data-processing systems, it may still be vulnerable if it is not prepared to read accurately the data it receives from other sources. One of the ways in which automation has developed over the years is by increasing the use of systems that exchange data between organizations. An example would be EDI (Electronic Data Interchange) systems, in which a major business and all of its suppliers exchange important data -- from initiation of orders through invoicing and payment -- without manual intervention. The widespread use of such systems complicates the Year 2000 compliance process: all of the parties to an EDI system must agree on how the data will be modified to accommodate the new century. As each side of the data stream modifies and updates its data systems, coordinating and testing the results with the counterparties become very important. Organizations that are connected to several different EDI chains may find themselves having to comply with a number of different solutions.
Exchanges of data in electronic formats may occur when entities file reports with government agencies, including the IRS; when businesses order and receive credit reports on potential customers; and when organizations exchange a variety of data -- including ACH transactions, and cash management reports -- with their banks. If all users of a particular system cannot agree on data formats, each user will have to create programs that, upon receiving the data, determine how to modify the input so it is handled correctly. Finding and using the proper conversion program with each incoming data stream will increase the time required for processing each transaction, and the deterioration in performance could become a serious problem for high-volume, real-time systems.
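Such per-stream conversion might be organized as a dispatch table, sketched below in Python (the source names and formats are invented for illustration; real formats would come from each counterparty's specification). The extra lookup and conversion on every record is the per-transaction overhead the article warns about.

```python
# Hypothetical per-source date normalization for incoming data streams.
# Each counterparty sends dates in its own format; a dispatch table
# selects the right converter, at the cost of extra work per record.

CONVERTERS = {
    # Source already sends expanded 8-digit dates: pass through.
    "irs_filing": lambda d: d,
    # Source still sends 6-digit YYMMDD: apply a windowing pivot.
    "credit_bureau": lambda d: ("20" if int(d[:2]) < 50 else "19") + d,
}

def normalize(source, raw_date):
    """Convert an incoming date to the internal YYYYMMDD format."""
    try:
        return CONVERTERS[source](raw_date)
    except KeyError:
        raise ValueError(f"no converter registered for source {source!r}")

print(normalize("credit_bureau", "000215"))  # 20000215
print(normalize("irs_filing", "19991231"))   # 19991231
```

An organization connected to several EDI chains would need one entry per counterparty, which is how "a number of different solutions" turns into ongoing maintenance work.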
Fourth Circle: Plant and Equipment6
Although the key area of risk for some organizations will be in their data-processing systems, other organizations are exposed to more serious disruptions from other types of Year 2000 failures. Many important pieces of equipment, including telephone switchboards, security systems, HVAC, and elevators, may operate with embedded microprocessors that use calendar functions. These pieces of equipment must be tested to determine how they will behave when the century changes. Furthermore, it is not always obvious that a particular piece of machinery incorporates date functions. For example, a building's elevator system may have an embedded calendar that determines whether the system should follow a weekend pattern of responding to calls or a weekday pattern. Even more potentially disruptive may be the same elevator system's built-in maintenance schedule: if the schedule determines that too much time has elapsed since the last reported maintenance examination, the elevator may shut down until another examination takes place. Equipment used in the production process at different firms may have similar problems. Some manufacturing processes may rely on control systems that receive time-stamped data from sensors, compare the changes over time between readings, and then either signal an operator that some procedure should begin or automatically make some adjustments to the process (for example, closing a valve). If such control units incorrectly determine the sequence in which readings occurred or incorrectly calculate the time between readings, they may fail to perform properly. The failure could disrupt the manufacturing process.
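The timing failure described above can be illustrated in Python (hypothetical firmware-style logic, not any actual control system): with two-digit years pinned to the 1900s, the interval across the rollover computes as a large negative number, and a rate alarm silently fails to fire.

```python
# Hypothetical control-loop fragment: a monitor compares time-stamped
# sensor readings and raises an alarm if pressure rises too fast.

from datetime import datetime

def naive_timestamp(yy, mm, dd, hh, mi):
    """Pre-Y2K firmware style: two-digit year pinned to the 1900s."""
    return datetime(1900 + yy, mm, dd, hh, mi)

t1 = naive_timestamp(99, 12, 31, 23, 55)  # read as 1999-12-31 23:55
t2 = naive_timestamp(0, 1, 1, 0, 5)       # '00' read as 1900-01-01 00:05

# The true interval is 10 minutes; the computed one is hugely negative.
elapsed_minutes = (t2 - t1).total_seconds() / 60

def rate_alarm(delta_pressure, elapsed_minutes, limit=5.0):
    """Alarm when pressure rises faster than `limit` units per minute.
    A negative elapsed time makes the check silently fail."""
    return elapsed_minutes > 0 and delta_pressure / elapsed_minutes > limit

print(rate_alarm(100.0, elapsed_minutes))  # False -- alarm missed
print(rate_alarm(100.0, 10.0))             # True  -- correct interval fires
```

The point of the sketch is that nothing visibly breaks: the loop keeps running while the safety check quietly stops protecting the process.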
To determine in advance which machinery will be impaired, an organization has to know how the machinery was designed, and what the specifications of the embedded microprocessors are. However, many organizations are having great difficulty uncovering this important information. In some cases, the manufacturers or distributors of the equipment may no longer be in business. In other cases, manufacturers produced products using components from various sources; the result is that devices with the same make and model number will perform differently as the turn of the century nears.
The potential failure of embedded microprocessors could expose some organizations to even greater risks than those they face if data-processing systems malfunction. For example, much of the equipment used in a hospital, including patient monitors, automatic drug-dosing devices, MRI and CAT scan equipment may rely on embedded technology.7 Failure of the equipment could result in serious injury or death. But hospitals are not the only firms that may face great risks from failures of equipment. For example, in Bhopal, India, in December 1984, an estimated 6,000 people died when a valve in a Union Carbide chemical factory malfunctioned. Many plants that use or produce similarly dangerous chemicals rely on "smart technology" to monitor and control the process, and some of these may be vulnerable to Year 2000 malfunctions.8
Fifth Circle: Business Partners
The next circle of vulnerability is located outside of the firm itself. All business organizations depend on suppliers to provide essential goods and services. These suppliers include manufacturers of components and firms that deliver supplies or finished goods. Many firms, however, have become even more vulnerable to potential disruptions within their suppliers' operations, for during the past decade, seeking to reduce inventory carrying costs, they have moved to "Just-in-Time" (JIT) inventory systems. In preparation for the year 2000, some of these firms may decide to increase the levels of their raw-goods inventory. This could require them to rent storage space to accommodate the increase.
Organizations are also dependent on third parties that provide services such as equipment maintenance, building and grounds maintenance, and printing. Nearly all organizations rely on utility firms that provide telecommunications, power, and water. If any of these suppliers has difficulty providing service at the level on which the organization has depended, the organization's own operations could be adversely affected. Thus, it is important for all organizations to perform due diligence on their important suppliers to make sure that each has adequately addressed Year 2000 issues. If an organization determines that any of these suppliers is not ready for the date change, the organization must locate an alternative provider or establish work-around procedures. Many organizations are now including language in all newly written contracts requiring the supplier to warrant that it will be able to perform in the year 2000. Although this may be a wise business practice, it is important to recognize that by themselves such warranties do not offer enough assurance to the purchaser of the service. Organizations should be prepared to assess the Year 2000 programs of their key suppliers, or obtain independent analysis of the suppliers' plans. Appendix 2 describes the vendor management process in more detail.
Firms also depend on the success of their customers. Thus, it is appropriate for most businesses to engage in a dialogue with their customers about Year 2000 issues. The immediate goal of this effort is to motivate each important customer to develop and implement a Year 2000 remediation project, but at the same time the dialogue may achieve other valuable goals. For example, a business may learn that its customer has decided to shut down some production facilities in January 2000 in order to verify the Year 2000 readiness of each production process in a controlled setting before reopening the facility. In such cases, it would be important for the supplier to learn this ahead of time and adjust its business plan to reflect the modified demand for its goods or services. Businesses may also suffer if, in advance of the date change, customers lose confidence in the firm's ability to navigate the difficult waters. Continuous dialogue about Year 2000 plans with key customers may help prevent such erosion from occurring.
Sixth Circle: Macroeconomic Repercussions
The final circle of risk to which an organization is exposed because of possible date-change problems involves the economy as a whole. The sales and income of most businesses are affected by the performance of the overall economy, and Year 2000 problems could adversely affect the Gross Domestic Product.
The most immediate cause of economic disruption would be the large cost many firms will incur to fix their systems in preparation for the next decade. This cost includes the opportunity cost of not being able to undertake other projects during the remaining months of the 20th century, because resources are focused on Year 2000 conversions. In addition, some machines and equipment will have to be replaced before the end of their useful lives. Some marginal firms may in fact choose not to incur the cost of a conversion project and, as a result, may find themselves unable to remain in business.
Additional macroeconomic disruption will occur as firms plan for the date change. Because they will be unable to determine with certainty which goods and services will be available without interruption after January 1, 2000, many will build up inventories of raw materials and finished goods. This anticipatory stocking up is like the public's rush to buy batteries and various staples when a major storm is predicted. Indeed, individuals are also likely to engage in similar preparatory activities in advance of the date change. In these cases, even if no systems actually fail, simply the process of accumulating inventories and then reducing them back to normal will cause some distortions in the overall level of economic activity.
A third source of disruption will be the actual failures of some systems or enterprises. Initial failures will be caused by the inability of individual organizations to fix mission-critical systems in time. One analyst predicts that Year 2000 problems will create a 1 percent risk that any given Fortune 500 corporation fails, a 3 percent risk of failure for any given small firm (fewer than 1,000 employees), and a 5 percent to 7 percent risk for any given mid-sized firm (1,000 - 10,000 employees). The analyst notes "Mid-sized corporations ... have historically shown a distressing tendency to utilize quite a lot of software, but to be only marginally competent in how they build and maintain the software...There are about 30,000 companies in the mid-size range in the United States, and a 5% to 7% business failure rate would mean that from 1500 to about 2100 companies might close or file for bankruptcy as a result of the year 2000 problem. This is a significant number and it is an open question as to whether the impact of the year 2000 problem is severe enough to trigger a recession."9
As other firms that depend on the failed entities as key customers or suppliers are affected by the failures, a ripple effect will occur. Because most observers believe that other countries are less ready for the year 2000 than the United States, the ripple effect may especially touch firms that rely on overseas production facilities or for which exports constitute a large portion of their sales. Additional aftereffects may also be experienced by community businesses that serve large numbers of employees of failed firms. The magnitude of these repercussions will be affected by the extent to which additional disruptions take place. Any disturbances in the operations of critical infrastructure systems (for example, power generation or transportation, including air traffic control) will complicate the process of repairing damaged systems and returning the economy to normal levels of production.
Another important factor in determining the ultimate consequences of Year 2000 problems will be the ability of government entities at all levels to deliver services. For example, urban areas where traffic signals or subway systems no longer function, or where 911 operations break down, could face great difficulty. And if the delivery of some government assistance program, such as unemployment insurance benefits, is disrupted by Year 2000 problems, the economic disruptions would be exacerbated.
Dr. Edward Yardeni, chief economist at Deutsche Morgan Grenfell, has written extensively about the Year 2000 problem and its potential effects on the world economy.10 Given his analysis of the remediation efforts to date of the federal government, the electric utility industry, the transportation industry, and other components of the economy, he recently raised to 60% his estimate of the likelihood of a severe global recession related to the Year 2000 computer problem.11
On a continuum, the United States and the world could experience anything from mere disruptions (a lot of time and money spent fixing the problem or cleaning up after systems malfunction) to recession (system failures causing many business failures) or even crisis (failures severe enough to cause deaths and economic depression). Where we end up on the continuum will be determined by how successfully each firm, government entity, and key nonprofit organization handles the conversion process in the remaining months of this century -- and months are all that remain.