“Computers in the future may weigh no more than 1.5 tons.” – Popular Mechanics, forecasting the relentless march of science, 1949
When you complete this section you will be able to:
Introduction - This file contains an introduction to this lesson. You will learn the method I've used to divide the history of operating systems into four eras. You should read this first.
(before 1975) - This file contains information about operating systems as they existed before 1975.
(1975-1990) - This file contains information about operating systems as they existed between 1975 and 1990.
(1990s) - This file contains information about operating systems as they have existed since 1990.
Today and Beyond - This file contains information about operating systems as I predict they will appear in the future.
Skill Check - This set of questions will quiz your understanding of the operating system theory and practice presented in this lesson.
Challenge - This set of advanced lab exercises is designed to help you apply your understanding to new challenges.
This section of my tutorial describes some of the highlights of the history of operating systems. You cannot fully appreciate where we are (or where we are going) unless you understand where we came from.
Once upon a time people had to make their own soap. They would labor over a pot of boiling fat (and other unsavory ingredients) until it was ready to mold. They would then pour it into a large pan to let it set – then cut it into small “bars” for use. The entire process was extremely time consuming and the final product was totally dependent on the skill of the soap maker.
Soon, a small cottage industry grew up around soap making. People would hand down their soap-making skills to their children. More importantly, people would occasionally gather and discuss soap making tips. When someone came up with a good idea he would share that with his friends and neighbors and they would adopt it. Eventually the entire community was making better soap as they shared ideas with each other. It became so useful to have good soap that the community would gather in annual “soap conferences” where folks could show off their latest ideas, give away samples, and generally have a good time. Over time, this group became known as the “Soap Consortium.”
Then William the Younger came up with a new approach. He could make a superior soap that would clean any stain or dirt! He gathered a few trusted wizards and came up with a formula that would work in everyone’s home for all types of dirty problems.
When he first began to sell his soap people wondered how he would make money. After all, people could make their own soap at home. Who would buy a bar from someone else? Of course, he did make money. People found his soap’s quality was consistently high and it was much easier to just buy a bar of soap than to make it.
At about the same time, Steve the Wise decided he could manufacture a superior soap product and also make a bit of money from it. He came up with a great soap, but decided that he would only sell it if the customer bought an entire “cleaning pack” – including a washtub, washcloth, and the great soap. To be sure, Steve’s products were far superior to anything people were using, but it was expensive to buy an entire “cleaning pack” just to get some soap – no matter how good the soap.
Steve the Wise sold a few packs from time to time, but never did capture the greater soap market.
In the end, then, there were three ways people could get good soap. The Consortium was still at work. They had recipes for fantastic soaps and were willing to help people create a perfect bar of soap for their specific needs. The consortium’s advice was available for anyone who wanted it – free of any charge. They continued to have their annual conventions and worked very hard to spread the word, “Use our ideas for free and create a perfect bar of soap.” But most people thought the effort was not worth the reward.
Steve the Wise continued to sell a few “cleaning packs” from time to time. However, he never did get rich; for while his soap was great, the additional “supplies” were expensive.
William the Younger, on the other hand, couldn’t keep up with the demand. He increased production many times and occasionally introduced new varieties (such as scented soaps, decorative soaps, and such). It seemed that every time William the Younger hit the market with some new wrinkle his soap would fly off the shelf as quickly as merchants could stock it.
William the Younger became quite wealthy and famous. He was also eventually punished for unfair trade practices. He had to pay a huge fine (almost 1% of his worth) and promise to break up his monopoly. William’s soaps were eventually divided into the “William Body Soap” and the “William Clothes Soap” divisions to satisfy governmental regulators.
While the future of these three soap makers is uncertain one thing is very clear – there will always be a need for good soap.
It is difficult – perhaps impossible – to separate the development of operating systems from the development of computers and their hardware. As I wrote this chapter I was constantly bombarded with information about the rise of the personal computer, its peripherals, and the operating system that dwelt therein. I never found a case, though, where an operating system was developed independently of the hardware.
For example, the Commodore 64 (an early home computer) came with its own operating system that did not work on any other computer. Most early computer manufacturers focused on selling hardware – the operating system was just baggage that came along with it. In reality, the idea of a single operating system that could work on all personal computers (like DOS or Windows 98) is a rather recent development in personal computing.
Even today, Apple Corporation marries the operating system and computer hardware. For them, the operating system and the physical hardware are inseparable – a concept that dates back over a decade. You cannot buy the Apple operating system and load it on your home computer – it is only available with an Apple computer.
This chapter, then, is a blend of the history of the personal computer as well as the operating systems that run on those computers. For convenience, I have divided this chapter into four eras. The divisions for the eras are based on events in the Personal Computer industry as they relate to operating systems.
It is possible to trace the roots of the computer back into the early part of the 20th century, but operating systems are a more recent development. Computers built in the 1950s and 1960s were generally built for a single specific purpose and their operating system was built in. The only functions the operating system needed to handle were related to a card reader or paper-tape reader for input and some sort of Teletype for output.
In those early days, a computer scientist could re-program the computer’s operating system, but that required much time and effort. You may compare this era of operating systems to the old Model-A automobile – only the mechanically gifted would venture very far from home.
In the 1960s Digital Equipment Corporation (DEC) created a series of computers called the PDP line. The PDP series came with an operating system that was one of the first written for an entire line of computers rather than a specific machine. In 1968 DEC released OS/8 for the PDP-8.
In the late 1960s GE, MIT, and AT&T Bell Labs began to develop an operating system called Multics (Multiplexed Information and Computing Service). This system was designed to be multi-user and general enough to work on several different models of computers.
In 1969 AT&T pulled out of the Multics project. One of AT&T’s engineers, Ken Thompson, thought there was potential in Multics, but that it was too complex and unwieldy in its then-current form. Besides, he was quite fond of a game called “Space Travel,” which at the time ran on a computer using the Multics operating system. He persuaded his supervisor to let him have an old DEC PDP-7 computer that was not being used for anything else. On it he wrote a new operating system that he called Unics (Uniplexed Information and Computing Service), which is widely considered to be the first practical multi-user, multi-tasking operating system ever devised. The spelling eventually changed to Unix. (While “Unics” was intended to be a gentle poke at “Multics,” I’ll spare you the jokes that were told about computers using a “Eunuch’s” operating system.)
Also in the 1960s, researchers at Dartmouth College developed a computer programming language called BASIC (Beginner’s All-Purpose Symbolic Instruction Code). It became quite popular because people who were not computer scientists could quickly learn the language and use it to write rather sophisticated applications for their computers. While BASIC was not the first computer language, it was one of the easiest to learn and quickly became the most popular with computer hobbyists. We’ll leave BASIC alone for now – but beware, it’ll return with a vengeance in 1975 (can anyone spell B-I-L-L?).
The early 1970s was an era of change.
Computers began to be used for more generalized functions. A single computer could be used to calculate missile ballistics, payroll data, and other unrelated “number crunching” jobs in a single day. Unlike previous computers that had to be “set up” to run a single job, then “set up” again for a different job, the new computers were more versatile and adaptable.
Peripheral devices also began to proliferate and change. Teletypes were being replaced by cathode-ray-tubes (TV sets) and card readers were being replaced by magnetic tape. These devices were much faster and more reliable than the older peripherals – but also put a greater burden on the computer.
Finally, computer users began to change. Scientists and economists wanted access to the power of the computer – but they did not want to have to program an operating system every time they needed to use the computer. There was a growing need for a common interface program (an “operating system”) all computer users could rely on to do mundane tasks like read the magnetic tape, display output on a monitor, or send data to a printer. These programs were usually called “executive” programs since they were used to control all computer functions.
However, the operating systems for machines in the 1970s were still very individualized. Companies that owned computers would hire a computer scientist to create and maintain an operating system for that particular computer. Usually, the operating system was quite good, but it was expensive and forced the company to keep the creator on their payrolls (talk about job security!).
IBM also marketed its computers with operating systems. One of the earliest was called GM-NAA I/O (that’s quite a mouthful, isn’t it?) – actually written for the IBM 704 by two customers, General Motors and North American Aviation. IBM eventually came up with a transaction processing system called CICS, which is still in use today. However, our focus in this class is the personal computer, not commercial mainframes, so I will let the IBM line rest.
Up until 1973, operating systems were written in machine code, a type of programming language that uses the computer’s internal codes rather than easy-to-read instructions. A typical line of machine code would be nothing more than a series of numbers, and only highly technical (and highly paid) gurus could decipher it. Also, since the operating system was written for a specific computer with its particular types of peripherals, the operating system could be used only for that specific machine and no others. Every company that wanted to use a computer had to hire someone to create an operating system for that computer.
However, in 1973 Ken Thompson (remember the game-player from AT&T?) and Dennis Ritchie (a friend of Ken’s) used a programming language called C (which Ritchie had developed shortly before) to rewrite the Unix kernel. This became an important event: many programmers could use the C language so, in theory, anyone could adapt the Unix kernel to match a specific computer system. Creating operating systems began to move from the realm of the gods to that of mere demigods. Thompson and Ritchie freely gave away a copy of Unix to anyone who wanted it – they were interested in creating a great operating system and knew that other programmers would be able to add to their work.
In 1974 Unix was released and licensed to universities. At about this time, Thompson spent a year on sabbatical at the University of California at Berkeley. He wrote a new version of Unix there that was distributed to students free of charge, and they then further refined the code. Berkeley’s programmers added vi (a text editor), sendmail, TCP/IP (a networking protocol), the C Shell, and virtual memory. This edition eventually became known as Unix BSD (Berkeley Software Distribution). Now there were two distinct “flavors” of Unix: AT&T’s commercial Unix and the upstart Unix BSD.
Bill Joy, one of the leaders of the BSD project, later became a founder of Sun Microsystems and naturally turned to a variant of Unix BSD (which he called SunOS) to power its workstations.
Do you remember Dartmouth’s BASIC? In 1975 Bill Gates and Paul Allen wrote a version of BASIC for the Altair, a home computer made by a company called MITS (Micro Instrumentation and Telemetry Systems) in Albuquerque, NM, and licensed it to them. The Altair became one of the first home computers to be widely accepted by hobbyists. Bill Gates then dropped out of Harvard to write software.
At about this same time other computer programming languages – Pascal, C, FORTRAN, and COBOL among them – were also coming into wide use.
In the mid 1970s the personal computer was introduced and began to become popular. Computers were beginning to move from the hallowed halls of geekdom into the home.
In January 1975, the MITS (Micro Instrumentation and Telemetry Systems) Altair 8800 appeared on the cover of Popular Electronics. It was being sold as a kit, and for only a few hundred dollars (and a lot of headaches) a common sort of hobbyist with only a few hand tools could build a home computer.
Paul Allen (who at that time was employed by Honeywell) reportedly exclaimed, "This is it! – it's about to begin!" when he saw the Altair on the cover of Popular Electronics. Allen and his friend Bill Gates, a sophomore at Harvard, immediately set out to adapt BASIC for the machine, working in marathon 24-hour sessions. Allen flew to Albuquerque to demonstrate the language and, to everyone's surprise and relief, it worked perfectly the very first time (later debuts of Microsoft products were not so successful). Allen soon accepted a position with MITS as Director of Software Development, and Gates followed him later that year to form an informal partnership called Micro-soft, complete with hyphen.
BASIC allowed those intrepid early home computer builders to write their own applications (remember, this was before the days of Word). It was the beginnings of a trend to make the home computer useful (and marketable). As much as anything, the personal computer was built on the back of BASIC.
In the late 1970s, Microsoft was registered as a trademark and the corporation moved into offices in Albuquerque.
In 1977 Microsoft terminated its exclusive BASIC license with MITS, Inc. BASIC had been the subject of an extended legal dispute between the two companies for some time (see, from the beginning Bill has been involved in legal wrangling). MITS had agreed to make a "best effort" to license BASIC to other computer companies. In Bill and Paul's view, however, MITS was making less effort than it should. Arbitration decided the matter in Microsoft's favor, setting the company free to market BASIC to others. Within months, Microsoft licensed BASIC for the Commodore PET and TRS-80 computers, and began negotiating with other companies.
Seattle natives Gates and Allen announced plans to return home and set up offices in Bellevue, Washington, becoming the first microcomputer software company in the Northwest. At that time, Microsoft was still exclusively in the business of developing languages, and Microsoft BASIC was the language of choice for the entire burgeoning personal computer industry.
In 1980, Microsoft began to explore spreadsheet applications. However, the most important event of that year was a secret contract with IBM to develop computer languages for its first personal computer. IBM also discussed its need for an operating system for the personal computer with Bill Gates – but there was no firm movement in that direction yet.
In 1981, after months of maniacal hours by developers, the IBM personal computer debuted with Microsoft's Disk Operating System (MS-DOS). Other companies decided to clone the new IBM hardware standard, and negotiated with Microsoft for the rights to distribute MS-DOS (which IBM, under pressure from Bill Gates and company, authorized).
In 1983, Microsoft produced the original Word and announced Windows. These products were the beginning of the “What You See Is What You Get” era at Microsoft. However, this was certainly not the first attempt at a Graphical User Interface (GUI). As early as the 1970s, the friendly folks at the Xerox Palo Alto Research Center (PARC) were fooling around with a GUI.
By the winter of 1985, Microsoft announced the retail shipment of Microsoft Windows, an operating environment that extended the features of the DOS operating system. Windows provided users with the ability to work with several programs at the same time and easily switch between them without having to quit and restart individual applications.
In 1987, Microsoft announced the release of Operating System/2 (“MS OS/2”) – a new personal computer operating system. It had been designed and developed specifically to harness the capabilities of personal computers based upon the Intel 80286 and 80386 microprocessors. Unfortunately, the OS/2 operating system never became popular and eventually vanished.
Windows 2.0 was released in April 1987. It offered compatibility with existing Windows applications and a new visual appearance compatible with the OS/2 system. In addition to the new visual appearance, it used a system of overlapping windows, rather than tiled windows. Windows 2.0 also included significant performance enhancements and improved support for expanded memory hardware.
It’s at this point we’ll leave Bill and Paul and look at other developments in the computer industry in the late 1970s and 1980s. However, we’ll be back…
As Bill Gates and Paul Allen were beginning to market BASIC for the Altair (back in 1976), another young man was beginning to take a bite of an apple…
In April 1976, the Apple I premiered at the Homebrew Computer Club in Palo Alto, but few took it seriously. This computer was Steven Wozniak's first contribution to the personal computer field. It was designed over a period of years, and was only built in printed circuit-board form when Steve Jobs (one of Wozniak’s friends) insisted it could be sold. The Apple I was based on the MOS Technology 6502 CPU chip, whereas most other "kit" computers were built around the Intel 8080. The Apple I was sold through several small retailers and included only the circuit board. A tape interface was sold separately, but the hobbyist had to find a case. The Apple I's initial cost was $666.66.
Only 200 Apple I computers were ever manufactured. One of them hangs in Apple's offices with the label "Our Founder". Figure 1 is a print advertisement that ran for the original Apple.
Introduced in 1977, the Apple II was based on Wozniak's Apple I design, but with several additions. The first was the design of a plastic case – a rarity at the time – that was painted beige. The second was the ability to display color graphics – a “holy grail” in the industry. It had BASIC hard-coded in ROM for easier programming, and included two game paddles and a demo cassette for $1,298. In early 1978 Apple also released a disk drive for the machine, one of the least expensive available. The Apple II remained on the Apple product list until 1980. It was also repackaged in a black case and sold to educational markets by Bell & Howell.
Named for one of its designer's daughters, the Lisa was supposed to be the Next Big Thing. It was the first personal computer to use a Graphical User Interface. Aimed mainly at large businesses, Apple said the Lisa would increase productivity by making computers easier to work with. The Lisa had a Motorola 68000 processor running at 5 MHz, 1 MB of RAM, two 5.25" 871k floppy drives, an external 5 MB hard drive, and a built-in 12" 720 x 360 monochrome monitor. At $9,995 it was a plunge few businesses were willing to take.
In January of 1984, the Macintosh was released with much fanfare. It was the first affordable computer to include a Graphical User Interface. It was built around the new Motorola 68000 chip, which was significantly faster than previous processors, running at 8 MHz. The Mac came in a small beige case with a black and white monitor built in. It came with a keyboard and mouse, and had a floppy drive that took 400k 3.5" disks – the first personal computer to do so. It originally sold for $2,495.
Other manufacturers were also beginning to get into the personal computer business during the late 1970s and 1980s – in fact, it seemed like the California gold rush all over again.
The TRS-80 Model 1 was Radio Shack's first personal computer. It was developed in the late 1970s when the only home computers available to the general public were those like the Altair kits and the first Apple computers. TRSDOS (the operating system for the TRS-80) was, to be blunt, horrible. One of its tricks, for example, was to lose files on the disk drive. Most computer users who had to put up with TRSDOS affectionately called it “Trash DOS” (well, maybe it wasn’t so affectionate). Two fellows in Colorado fixed most of the problems with TRSDOS, and their company (“Apparat”) released NEWDOS as a replacement.
The first Commodore VIC-20s hit store shelves in 1981. During that time, the peak production rate was 9000 units per day – an impressive number for that era. The VIC-20 was one of the most important computers of its time because it was the first color computer to break the $300 price barrier. The VIC-20 introduced millions of people to personal computing. Figure 2 shows an early ad for the VIC-20.
Under Jack Tramiel's guidance, Commodore grew into a $1 billion company, growing sevenfold from 1981 to 1984. It was one of the largest suppliers of home computers in the world. Tramiel flew in the face of the computer industry by enlisting mass merchants (K-Mart, Toys "R" Us, Target, and others) to sell the VIC-20, and later the Commodore 64. By doing so, he proved that computer buyers didn't need to rely on the handholding of an elite class of computer-literate sales people and their specialty store prices.
By 1984, about 4 million Commodore computers were in use around the world, with 300,000 more being sold each month. Commodore's management believed that market saturation was still a long way off, since only about 6% of U.S. households owned computers – far less than the 20-25% that owned video game machines during an early peak of the home video game craze.
In January 1982, when it was presented at the Consumer Electronics Show (CES) (at a suggested retail of US $595), nobody could foresee that the Commodore 64 would become the best-selling computer in the world with over 17 million units sold before the end of 1992. (Boy, if I only had a nickel for every investment trend I’ve missed!)
In the late 1970s AT&T’s commercial Unix began to diverge from Unix BSD. The AT&T line (which culminated in System V Release 4, or SVR4) was more conservative, commercial, and well supported. Today, SVR4 and BSD are very similar; however, there are some major differences in the way they are marketed.
In the early 1980s Unix became very popular for several reasons. First, it was one of the only operating systems that was truly multi-tasking and could support multiple users. This was extremely important to businesses and colleges. While those institutions could buy single-user computers, it was much more efficient to buy a larger “mini” computer and then have several people share that resource with workstations (OK, they were called “terminals” in those days). It was essential to have an operating system that would support multi-tasking – and Unix filled the bill.
Another reason Unix was popular was its cost. Unix BSD was free (though AT&T owned the rights to its commercial version, and users had to pay for that OS). Users were finding that personal computers were becoming more affordable, and the Unix OS was available free of charge.
Finally, schools were churning out hundreds of good Unix programmers (remember, schools often used the Unix OS since it was the operating system of choice for larger computers), so a company could afford to hire a new college graduate and put that person to work immediately rather than try to train someone to use a proprietary OS.
In 1984 Richard Stallman started working on a Unix clone he called “GNU” (this stands for “GNU’s Not Unix”). He was interested in creating a new, free operating system since AT&T had become quite controlling of Unix System V (that’s System “Five,” not “Vee”). By the early 1990s GNU had released its C library and the Bourne Again Shell (bash). Everything was done except the kernel – but that would have to wait for the next era in operating systems.
The defining moment for the Modern era was when Microsoft released Windows 3.0 on May 22, 1990. This version of Windows offered dramatic performance increases for applications, advanced ease of use and aesthetic appeal, and straightforward integration into corporate computing environments. Windows 3.0 quickly became the most prevalent operating system on earth and changed the face of personal computing forever. It was during this era that Microsoft became the giant in the operating system game; and their competition slowly disappeared from the scene. Many personal computer manufacturers also folded during the 1990s, including Commodore, the Radio Shack line (TRS), and dozens of smaller companies. The 1990s could be considered the era of The Coalescing Of The Personal Computer Industry. (Well, it’s an impressive title, anyway.)
Soon after Microsoft shipped Windows 3.0 they also began shipping a new programming language: Visual Basic for Windows. This language made programming for a Windows-based computer easy – and even fun. Visual Basic soon caught fire, just as its more comely parent (Quick Basic) had in an earlier era.
In early 1992 Microsoft shipped Windows 3.1 with over 1,000 enhancements to the Windows 3.0 system. Windows 3.1 created unprecedented user demand with over one million advanced orders placed worldwide. By the fall of 1992 Microsoft released Windows for Workgroups that added intra-office networking capability to the Windows line. This meant that several computers could share files or a single printer in an office.
In early 1993 Microsoft released MS-DOS 6.0 and introduced a data compression program called DoubleSpace. In the fall of 1993 Windows for Workgroups 3.11 was released, adding enhanced support for Novell and Windows NT, along with numerous code changes improving performance significantly. Windows 3.11 became one of the best-selling operating systems of all time. You can still find Windows 3.11 being used in some offices.
On August 24, 1995 (12:01 am – to be precise), Microsoft began selling Windows 95 to great fanfare (including a plug by Tonight Show’s Jay Leno). By August 29 they had sold more than 1 million copies at retail stores – along with an unknown number of copies loaded onto new equipment. This was undoubtedly the biggest software sales event in the history of computing – over 1 million copies of a program in only five days!
In late 1995 Microsoft released Internet Explorer 2.0 and made it available for downloading via the Internet at no charge to licensed users of Windows 95.
In January 1997 Microsoft announced the release of Office 97, the new version of the world's best-selling productivity suite, which integrated their applications and the Web.
In late 1997, Microsoft's Internet Explorer 4.0 was released to enormous customer demand. Internet Explorer 4.0 introduced Active Channel "push" content and Web integration into the entire Microsoft line (both Windows and Office).
In 1991 Linus Torvalds (a computer science student at the University of Helsinki) started with a small Unix-like operating system called Minix and created a new operating system kernel he called Linux (a blend of his own name and Unix). Linus released the Linux kernel to the Internet and invited other computer scientists and students to improve on it. The response was overwhelming. Soon, Linux was married with Stallman’s GNU project to create a complete operating system. By 1994 Linux 1.0 was available with over 100,000 registered users. By 1999 version 2.2 had been released and was in use by an estimated 7.5 million people.
Which makes one ask, “Why Linux?”
Of course, there are some folks who are just plain Microsoft haters. They would use any system that did not have the Microsoft name on the box. Those ideas are beyond the scope of this book.
Linux is a true multi-tasking, multi-user system. Up until the release of Windows ME (which was not available when I wrote this chapter), no Windows product intended for the personal computer could make that claim. Linux is able to log in a specific person and keep that person from “fooling with” someone else’s files. The Microsoft folks would argue that Windows NT is a multi-tasking, multi-user system. That is true – but NT is not intended for the personal computer market; it’s a server system intended for businesses.
Linux is also available free of charge. While this may not be a great selling point, it is important in many parts of the world where resources (read “$$$”) are scarcer.
Linux also freely gives away its source code. This means that any programmer in the world can look at (and modify) the code for any Linux function. It is not possible to “hide” traps in the code – and “bugs” are quickly ferreted out and killed. Moreover, many times another programmer may be able to improve some bit of the code to enhance a program’s options or make it run more efficiently. It’s almost like having thousands of programmers available to check (and improve) any of the operating system software. Microsoft is not a believer in the concept of open source code – in fact, the Windows source code is one of the industry’s most closely guarded secrets.
While it is difficult (at best) to predict the future of operating systems, it is possible to look at current trends and project those into the future. Here’s my best shot…
One of the most prevalent current trends is toward smaller, hand-held devices. Digital phones, personal data devices, even hand-held calculators all need operating systems. The older OS that served your personal computer well is not adequate for a portable device. An OS for a digital cell phone, for example, must be very tiny and tightly focused on that one task – Windows is definitely too big. Linux may rise to meet the challenge of the tiny OS, or perhaps Microsoft will develop some version of Windows that will work in the limited environment of the portable device.
Closely related to the portable devices are new “smart” appliances. There has been some work on making a “smart” home where the toaster, refrigerator, and other appliances have a tiny micro-system embedded and are connected to an internal network. These devices can then “communicate” with each other and keep a central processor updated. This type of environment will create a need for a new type of OS. Again, perhaps the Linux community will create something appropriate – or maybe Microsoft – but this will be a new OS. (Anyone care to start writing a new operating system for a toaster?)
There is growing concern for personal security and privacy. In the near future people will not accept the lack of security in Windows 95/98. At the time I wrote this, Windows XP has not been officially released (though it is in beta release); but I understand Microsoft has significantly improved the security in that product. Windows XP appears to be something of a cross between Windows 2000 and Windows 98. It seems likely that in the future computer users will demand improved security from the operating system.
Related to security is the recent furor over the FBI’s “Carnivore” program. While this is not directly related to operating systems, it seems that the FBI can (with a court’s approval) intercept and read private e-mail. This will, without doubt, increase the demand for encryption systems. In the future you may see encryption schemes built into the operating system so personal files are automatically encrypted as they are saved.
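To make the idea concrete, here is a toy sketch (in Python) of what an “encrypt on save” path might look like. XOR with a repeating key is emphatically not a secure cipher – a real operating system would use something like AES – and all the function names here are my own invention, not any real OS interface:

```python
# Toy sketch of "encrypt on save": XOR with a repeating key.
# XOR is NOT secure - a real OS would use a proper cipher such as AES -
# but it shows the shape of the idea: the write path scrambles the bytes
# and the read path unscrambles them. All names are hypothetical.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def save_file(plaintext: bytes, key: bytes) -> bytes:
    """What the operating system would actually write to disk."""
    return xor_bytes(plaintext, key)

def load_file(stored: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so loading just re-applies the key."""
    return xor_bytes(stored, key)
```

Round-tripping a message through `save_file` and `load_file` with the same key returns the original bytes, while the on-disk form is scrambled – exactly the behavior an automatic-encryption OS would provide invisibly.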
There is an important trend toward “renting” applications. In the future you may not want to buy an expensive product like Microsoft Office, but you might be willing to rent it for $5 per month (or whatever figure). You will be able to download a special version of the application that will automatically disable itself after a certain date, and then download some sort of digital key every month to continue using the software (after you’ve paid the fee, of course). This option will become especially popular in places like colleges, where an entire lab pack can be rented for a rather small amount; when new versions are released the computers can be updated by simply renting a new pack. Students will also be able to rent the software they need for a particular course without having to spend a small fortune to buy software that will be out of date before graduation day. Of course, operating systems must be devised to handle this type of transaction – but it is possible even today.
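Here is a tiny, hypothetical sketch of how such a “digital key” check might work – just a date comparison in Python. A real scheme would cryptographically sign the key so it could not be forged, and none of these names come from any actual product:

```python
from datetime import date

# Hypothetical sketch of the "rented software" digital key - not a real
# licensing API. Here the key is simply an expiry date; a real scheme
# would sign the key so users could not forge an extension.
def key_is_valid(expiry: date, today: date) -> bool:
    """The rented copy keeps working through its expiry date."""
    return today <= expiry

def launch(app_name: str, expiry: date, today: date) -> str:
    # The application disables itself once the rented period has passed.
    if key_is_valid(expiry, today):
        return app_name + " started"
    return app_name + " is disabled - please rent a new key"
```

A lab pack rented through May would run on May 1 but refuse to start on June 1 until a new key is downloaded.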
In the near future distributed operating systems will become much more common. A distributed OS is one that can make several computers work on some job at the same time. There is an example of a distributed system in the SETI (Search for Extra-Terrestrial Intelligence) project (see http://www.seti.org/ for more information). In the future, the power of distributed computing will be used to solve complex mathematics or science problems. It takes a special type of OS to handle distributed computing – one that must still be developed for home use.
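The split-work-combine pattern behind a project like SETI can be sketched in a few lines of Python. This simulates the workers one after another in a single process – a real distributed OS would ship each chunk to a different machine over the network – and all the names are illustrative, not any real API:

```python
# Illustrative sketch of distributed computing's split-work-combine idea.
# The "workers" run sequentially in one process here; a real distributed
# system would send each chunk to a separate machine and gather the
# partial results over the network. Names are hypothetical.
def split_job(data, n_workers):
    """Cut the job into n_workers roughly equal chunks."""
    chunk = (len(data) + n_workers - 1) // n_workers
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def worker(chunk):
    # Each worker does its share of the "number crunching" - here, a sum.
    return sum(chunk)

def distributed_sum(data, n_workers=4):
    partials = [worker(c) for c in split_job(data, n_workers)]
    return sum(partials)
```

`distributed_sum(list(range(1, 101)))` produces the same answer a single machine would, but each of the four workers only touches a quarter of the data – which is the whole point when the job is too big for one computer.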
The Internet will become more ubiquitous. E-commerce will grow and create a demand for an OS that can integrate brick-and-mortar stores with online stores. Push technology will begin to bring intelligence into our homes at an ever-increasing pace. Windows is tightly bound with Internet Explorer and this trend will only continue – despite the best efforts of our federal government.
The previous paragraph notwithstanding, the breakup of Microsoft will have a profound effect on the future of operating systems. Regardless of your personal feelings concerning the Microsoft breakup, you must admit that some version of Windows is currently being used by more people than any other operating system in the history of computing – and will likely continue to lead that category far into the future. With the Microsoft breakup, the operating system may improve somewhat, but it will not be so tightly bound with the applications. This could lead to more instability and problems with a Windows-based system than we find currently. Time will tell…