Software Engineering - Introduction
Software delivers what many believe will be the most important
product of the next century - information. Whether it resides
on a desktop computer, in a cell phone, or in a supercomputer,
software transforms data and provides a means of acquiring it from a
worldwide network of information.
The role of computer software has undergone significant change
since the 1950's. Dramatic improvements in hardware
performance, profound changes in computing architectures, vast
increases in memory and storage capacity, and a wide variety of
exotic input and output options have yielded sophisticated and
complex systems. Sophistication and complexity can produce
dazzling results when everything works properly, but they pose huge
problems for those who must design, build and maintain them.
The evolution of software may be characterized by four eras [1]:
I. The Early Years (50's - mid 60's)
- Batch orientation - simple, task-oriented programs.
- Custom software - written "in-house"
- Limited distribution - maintained "in-house"
We learned much about the implementation of computer-based
systems, but little about standardization, testing or maintenance.
II. The Second Era (mid-60's - late 70's)
- Multiuser - VMS, UNIX
- Real-time - increased speed
- Database - increased storage capacity
- Product software - widespread distribution
With widespread distribution, a crisis in software maintenance
arose.
III. Third Era (mid-70's - mid-80's)
- Distributed systems - local and global networking
- Embedded "intelligence" & low cost hardware
- microprocessor based products (cars, robots, medical devices)
- Consumer impact - the personal computer
Computers became accessible to the public at large.
IV. The Fourth Era (mid-80's - present)
- Powerful desktop systems, client-server architectures
- Object-oriented technologies
- Expert systems and artificial intelligence - complex
problems
- Artificial neural networks - pattern recognition,
human-like information processing
- Parallel computing
These advances produced dramatic changes in the way that we build computer programs.
Unfortunately, a set of software-related problems has persisted
throughout the evolution of computer-based systems, and these
problems continue to intensify.
- Hardware advances continue to outpace our ability to build
software to tap hardware's potential.
- Our ability to build new programs cannot keep pace with the
demand for new programs, nor can we build programs rapidly
enough to meet business and market needs.
- The widespread use of computers has made society increasingly
dependent on the reliable operation of software. Enormous
economic damage and potential human suffering can occur when
software fails.
- We struggle to build computer software that has high
reliability and quality.
- Our ability to support and enhance existing programs is
threatened by poor design and inadequate resources.
It is for these reasons that software engineering has evolved as
a discipline.
We can find many definitions of Software Engineering. For example, the
IEEE definition is:
Software Engineering.
The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of
software; that is, the application of engineering to software.
(IEEE Std 610-1990)
- Input: a description of the problem (from a client).
- Output: a software system as a long-term solution for the problem of the client.
It is a fact that electric power generators fail, but far less
frequently than payroll products do. It is true that bridges
sometimes collapse, but considerably less often than operating
systems do. In the belief that software design,
implementation, and maintenance could be put on the same footing as
traditional engineering disciplines, a NATO study group in 1967
coined the term software engineering. The first
software engineering conference was sponsored by NATO in 1968, with
the proclamation that there was a software crisis,
namely, that the quality of the software of the day was generally
unacceptably low and that deadlines and cost limits were not
being met. The consensus was that software engineering should
use the philosophies and paradigms of established engineering
disciplines. This is easier said than done. Although there
have been success stories, a considerable amount of software is
still being delivered late, over budget, and with residual
faults.
- For every 6 large-scale projects, 2 are canceled.
- 75% of large-scale systems are "operational failures".
That the "software crisis" is still with us, 30 years
later, suggests that the software production process, while
resembling traditional engineering in many respects, has its own
unique properties and problems. There are several reasons why
software has not been "engineered" as is done in other
fields:
- Complexity. As it executes, software goes through
discrete states. The total number of such states may be
vast, such that it may be impossible to consider all possibilities.
Complexity continues to grow as computing demands evolve:
multiuser, distributed environments place considerable demands
upon the underlying software.
- Fault tolerance. The failure of a bridge has
obvious disastrous implications. The same regard is
generally not given to software systems. Faults in
software are not only common, but they are almost expected,
given their prevalence. When an operating system fails,
instead of considering redesign, it is assumed that one will
simply reboot and continue. The exception, of course, is
in safety-critical systems, where human life or substantial
economic value is at stake.
- Flexibility. It is expected that one can do
almost anything in software, since it can be modified so
easily. Changing requirements of the software as a project
progresses create problems in design, testing and
maintenance. This greatly impacts the quality, cost, and
delivery time of the product.
Industry Experts Declare a Software Crisis
- Wall Street Journal, November 1964
The problem still persists...
Software’s Chronic Crisis
- Scientific American, September 1994
Danger: Software At Work
- Report on Business Magazine, March 1995
How Software Doesn’t Work
- Byte, December 1995
Trust Me, I’m Your Software
- Discover Magazine, May 1996
Cancelled Contracts Cost Ottawa Millions
- Globe and Mail, September 1996
Software is Pervasive
- Software has entered the mainstream of society.
- It’s in our homes, our appliances, our toys, and our cars.
- It’s in our planes, our railways, our nuclear plants, and our medical devices.
- In the past, complaints about software problems usually went unreported.
- This was because software users were usually sophisticated “techies”.
- But this is no longer true today; software users come from all parts of society.
- Fortune Magazine, 1993
Some Data...
In June 1994, IBM’s Consulting Group released a survey of 24 leading companies that had developed large distributed systems. The survey reported that:
- 55% of the software developed cost more than projected.
- 68% took longer to complete than predicted.
- 88% had to be substantially redesigned.
A 1994 study by the Standish Group of 8,380 projects in the government and private sectors in the U.S. showed that:
- 31% of software projects are canceled before they are completed.
- 53% are completed but cost an average of 189% of their original estimates.
- of that 53%, only 42% have their original set of proposed features and functions.
- only 9% of the projects were completed on time and on budget.
The US government planned to spend $26.5 billion on information technology in 1996, and world-wide software costs for 1995 have been estimated at $435 billion. Software costs will continue to rise (at roughly 12%/year) even though hardware costs may decrease, because:
- a new application means new programs;
- new computers need new software or modifications of existing software;
- programming is a labour-intensive skill.
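The 12%/year growth figure compounds quickly. A minimal projection from the text's 1995 world-wide estimate (the loop is purely illustrative; only the $435 billion base and the 12% rate come from the text):

```python
# Projected world-wide software costs, compounding at roughly 12%/year.
cost = 435.0  # billions of US dollars, 1995 estimate (from the text)
for year in range(1995, 2000):
    print(f"{year}: ${cost:.0f} billion")
    cost *= 1.12
```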
Software errors result in two costs:
- the harm which ensues.
- the effort of correction.
When is a Software Project a Failure?
- If the project is terminated because of cost or schedule overruns, it is a failure.
- If the project has experienced cost or schedule overruns in excess of 50% of the original estimate, the project is a failure.
- When the software project results in client lawsuits for contractual noncompliance, the project is considered a failure.
The following are a few examples that illustrate what is going
wrong and why.
- In the early 1980's the United States Internal Revenue Service
(IRS) hired Sperry Corporation to build an automated federal
income tax form processing system. According to the Washington
Post, the "system has proved inadequate to the
workload, cost nearly twice what was expected and must be
replaced soon" [2]. In 1985, an extra $90 million was
needed to enhance the original $103 million worth of Sperry
equipment. In addition, because the problem prevented the
IRS from returning refunds to taxpayers by the deadline, the IRS
was forced to pay $40.2 million in interest and $22.3 million in
overtime wages for its employees who were trying to catch
up. By 1996, the situation had not improved, and had
become a $4 billion fiasco. Poor project planning
has been identified as the fundamental cause of the problems.
- For years, the public accepted the infusion of software in
their daily lives with little question. In the mid-1980's
however, President Reagan's Strategic Defense Initiative (SDI)
heightened the public's awareness of the difficulty of producing
a fault-free software system. The project was met with
outward skepticism from the academic community [3], stating that
there was no way to write and test the software to guarantee
adequate reliability. For example, many believe that
the SDI system would have required somewhere between 10
and 100 million lines of code; by comparison, there were
100 thousand lines of code in the space shuttle in
1985. The reliability constraints on the SDI system would
be impossible to test [4]. This is a problem in many safety-critical
systems, and is an area of considerable research effort.
- Helpful technology can become deadly when software is
improperly designed or programmed. The medical community
was aghast when the Therac-25, a radiation therapy and X-ray
machine developed by Atomic Energy of Canada Ltd. (AECL),
malfunctioned and killed several patients [5]. The
software designers had not anticipated the use of control inputs
in nonstandard ways and as a result, the machine issued a high
dose of radiation when low levels were intended. AECL was found
by the FDA to have applied little or no fault or reliability
testing to the system software.
- Real-time systems pose an additional challenge in fault
detection. An example is the embedded software in the
Ariane-5, a space rocket designed by the European Space Agency
(ESA). On June 4, 1996, on its maiden flight, the rocket
was launched and performed perfectly for approximately 40
seconds. It then began to veer off course, and had to be
destroyed by remote control. The cost of the rocket and
the four satellites onboard came to $500 million. The root
of the failure was that software modules were reused from the
previous mission (Ariane-4), but a specification error led to a
design error such that reused modules were improperly
used. It cannot be stressed strongly enough that documentation
must be complete, correct, and traceable throughout the
development lifecycle.
- In 1997, 167,000 Californians were billed $667,000 for
unwarranted local telephone calls because of a problem with
software purchased from Northern Telecom. A similar
problem was experienced by customers in New York City. The
problem stemmed from a fault in a software upgrade to the
DMS-100 telephone switch. The fault caused the billing
interface to use the wrong area code, resulting in local calls
being billed as long-distance calls. It took the local
phone companies about a month to find and fix the cause of the
problem. Had Northern Telecom performed complete regression
testing on the software upgrade, the billing problem would
not have occurred.
- A major U.S. Army initiative to modernize thousands of aging
computer systems has hit the skids, careening far beyond
schedule and well over budget. The 10-year project, known as the
Sustaining Base Information Services (SBIS) program, is supposed
to replace some 3,700 automated applications by the year 2002.
The current systems automate virtually every business
function--from payroll and personnel management to budgeting and
health care--at more than 380 installations worldwide. But after
investing almost three years and about $158 million, the army
has yet to receive a single replacement system. "Battling
the Enemy Within: A billion-dollar fiasco is just the tip of the
military's software problems", Scientific American, April
1996 .
- The opening of the Denver International Airport (DIA) had to
be delayed for 16 months due to an automated luggage handling
system that was afflicted by “serious mechanical and
software problems.” In tests, bags were misloaded, misrouted
or fell out of carts, causing the system to jam. Finally,
the DIA had to install a $51 million alternative system to get
around the problem. In 1998, it was announced that this alternative system was not year-2000 compliant, and that a fix would cost the DIA $10 million [6].
- VLSI design software cannot keep up with progress in chip
integration. As hardware advances push below 0.25 micron
technology, the software tools available to design integrated
circuits are not going to be able to keep up with the added
complexity. As a result, software may stifle progress. This
is being experienced by Texas Instruments, IBM, and Intel as
they design next-generation chips. "ONE
SMALL STEP: The next big advance in chip design arrives one year
early", Scientific American, August 1996
- Microsoft Corp. was to have released Version 5 of its Windows
NT server and workstation operating systems in 1998. The
size of the development effort has made planning and testing a
formidable task, delaying the progress of the project to the
point that Microsoft has renamed the product-to-be Windows
2000. When it's finished, Windows 2000 Professional is
likely to top out at more than 30 million lines of code.
The delays may severely compromise Microsoft's market position,
as others have introduced competitive products in the areas of
directory services (Novell's Netware 5) and operating
environments (Linux).
- In 1990, the US Federal Aviation Administration sought to
replace its aging air traffic control system. IBM's
Federal Systems Company, a known leader in software development
at the time, was given the contract. It was even agreed
that $500 per line of code would be paid for the development,
five times the industry average. In early 1993, the program
was years behind schedule and billions of dollars over
budget. In 1996, Raytheon Corp. was awarded nearly $1
billion to recover the project. The problems with the IBM
contract still haunt the FAA: they must keep 30 ancient IBM 3083
computers from suffering year 2000 failures. The FAA is
about a month away from completing its year 2000 assessment on
the IBM 3083s and the approximately 500,000 lines of code that
run on them, said Paul Takemoto, an FAA spokesman in Washington.
"We believe we have both the tools and the people to
certify [the 3083] as Y2K-compliant," he said. Fewer
than 100 of these old machines are still in use, according to
IBM. And businesses would be foolish to continue running
applications -- especially mission-critical ones -- on them,
analysts said.
These problems stem from an unrealistic view of what it takes to
construct the software to perform these tasks.
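One of the examples above, the Ariane-5 failure, is widely reported to have been triggered by a reused module converting a large floating-point value into a 16-bit signed integer without a range check - a value that could never occur on Ariane-4. A minimal sketch of that class of fault (the numbers here are illustrative, not the actual flight data):

```python
import ctypes

def store_in_int16(x: float) -> int:
    # Simulates assigning a floating-point value to a 16-bit signed
    # integer with no range check, as older numeric code often did.
    return ctypes.c_int16(int(x)).value

# A value within the old operating envelope converts correctly...
print(store_in_int16(20_000.0))   # 20000
# ...but a value outside the 16-bit range silently wraps around:
print(store_in_int16(50_000.0))   # -15536
```

Reused code is only as safe as the assumptions under which it was originally validated.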
The Worst Software Practices [9]
- No historical software-measurement data.
- Rejection of accurate cost estimates.
- Failure to use automated estimating and planning tools.
- Excessive, irrational schedule pressure and creep in user requirements.
- Failure to monitor progress and to perform risk management.
- Failure to use design reviews and code inspections.
To improve the record we must:
- better understand the development process;
- learn how to estimate and trade off time, manpower, and dollars;
- estimate and measure the quality, reliability, and cost of the end product.
There are no distinct rules
which dictate how software should be developed, but rather, best
practices. The focus here will be these best practices, and
methods to assess their effectiveness.
Management Myths
Managers are often under pressure to maintain budgets, keep
schedules from slipping, and improve quality. It is not
uncommon for mismanagement to result from the following fallacies.
Myth: "State-of-the-art tools are the solution."
Reality: Computer aided software engineering
(CASE) tools are important for achieving good quality and
productivity, yet the majority of software developers do not use
them. Even if they are used, "a fool with a tool is still
a fool."
Myth: "If we get behind schedule, we can
add more programmers and catch up."
Reality: Software development is not a
mechanistic process like manufacturing. Adding people to a
late software project makes it later [7]. This is because new
people must be brought up to speed, and communication overhead
increases.
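Brooks's observation can be quantified: the number of pairwise communication channels grows quadratically with team size, so added staff spend a growing share of their time coordinating rather than coding. A small sketch:

```python
def communication_paths(n: int) -> int:
    """Pairwise communication channels among n team members: n(n-1)/2."""
    return n * (n - 1) // 2

print(communication_paths(5))    # 10 channels
print(communication_paths(10))   # 45 channels - doubling the team
                                 # more than quadruples the overhead
```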
Customer Myths
Customer myths lead to false expectations (by the customer) and
ultimately, dissatisfaction with the developer.
Myth: "A general statement of objectives
is sufficient to begin writing programs - we can fill in the details
later."
Reality: Poor up-front definition of
requirements is the major cause of failed software efforts. A
formal and detailed description of information domain, function,
performance, interfaces, design constraints, and validation criteria
is essential. These characteristics can be determined only
after thorough communication between customer and developer.
Myth: "Project requirements continually
change, but change can be easily accommodated because software is
flexible."
Reality: The impact of a requirements change
varies with the time at which it is introduced. If serious
attention is given to up-front definition, early requests for change
can be easily accommodated. When changes are requested later
in the software development cycle, the cost impact grows
rapidly.
Practitioner's Myths
Myths that are still believed by software practitioners have been
fostered by decades of programming culture, where programming was
viewed as an art form.
Myth: "Once we write the program and get
it to work, our job is done."
Reality: Industry data indicate that between 50
and 70 percent of all effort expended on a program will be expended
after it is delivered to the customer for the first time.
Myth: "Until I get the program running,
I really have no way of assessing its quality."
Reality: One of the most effective software
quality assurance mechanisms can be applied from the inception of a
project - the formal technical review. This has been found to
be more effective than testing for finding certain classes of
software errors.
Myth: The only deliverable for a successful
project is the working program.
Reality: A working program is only one part of a
software product which includes programs, documents, and data.
Documentation forms the foundation for successful development and,
more importantly, provides guidance for the software maintenance
task.
Recognition of software realities
is the first step toward formulation of practical solutions for
software development.
A software product usually begins as a vague concept, such as
"Wouldn't it be nice if the computer could gather, process, and
plot all of our data." Once the need for a software
product has been established, the product goes through a series of
development phases. Typically, the product is specified,
designed, and then implemented. If the client is satisfied,
the product is installed, and while it is operational it is
maintained. When the product finally comes to the end of its
useful life, it is decommissioned. The series of steps through
which a product progresses is called the life-cycle model.
The best life-cycle model varies from product to product. The factors which determine the appropriate model include the size of the project, its complexity, the required development time, the degree of risk, the degree of certainty as to what the customer wants, and the degree to which the customer requirements may change.
The two most widely used life-cycle models are the waterfall model
and the prototyping model. In addition, the spiral model
is now receiving considerable attention. The strengths and
weakness of these models will be examined here.
The following activities occur during the waterfall life cycle
paradigm:
- Requirements analysis and definition. The
system's services, constraints and goals are established by
consultation with the customers and users. These are then
defined in a manner which is understandable by both
customers/users and development staff.
- Specification Phase. From the requirements, a
specifications document is produced which states exactly what
the product is to do (but not how it will be done).
- System and Software Design. The systems design
process partitions the requirements to either hardware or
software systems. Software design is actually
a multistep process that focuses on four distinct attributes of a
program: data structure, software architecture, interface
representations, and procedural (algorithmic) detail. In
contrast to the specifications document that specifies what
requirements will be met, the design documents contain
representations that describe how the product will meet
them.
- Implementation and unit testing. During this
stage, the software design is realized as a set of programs or
modules. Unit testing involves verifying that each unit
meets its specification.
- Integration and system testing. The individual
program units are integrated and tested as a complete system to
ensure that the software requirements have been met.
- Acceptance testing. The purpose of acceptance
testing is for the client to determine whether the product
satisfies its specifications as claimed by the developer. During
acceptance testing, the product is evaluated for its
correctness, robustness, performance, and documentation.
- Operations and maintenance. The operations and
maintenance phase involves the re-application of each of the
preceding activities for existing software. The re-application
may be required to correct an error in the original software, to
adapt the software to changes in its external environment (e.g.,
new hardware, operating system), or to provide enhancement to
function or performance requested by the customer. This is
generally the longest life-cycle phase.
These stages are shown diagrammatically below.
Normal development is shown by the solid green arrows.
Maintenance occurs along the path of the dashed
arrows.
Figure 1 - The Waterfall Model
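The implementation and unit-testing stage above can be sketched as checking a unit against its specification in isolation (the function and its spec here are hypothetical, chosen only to illustrate the idea):

```python
def median(values):
    # Unit under test. Hypothetical spec: "return the median of a
    # non-empty list of numbers".
    s = sorted(values)
    n = len(s)
    if n % 2:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

# Unit tests verify the unit against its specification:
assert median([7]) == 7
assert median([3, 1, 2]) == 2
assert median([4, 1, 2, 3]) == 2.5
print("all unit tests passed")
```

Each module gets such tests before integration, so integration and system testing can concentrate on the interactions between units.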
The waterfall model is the most widely used in software
engineering. It leads to systematic, rational software development,
but like any generic model, the life cycle paradigm can be problematic
for the following reasons:
- The rigid sequential flow of the model is rarely encountered
in real life. Iteration can occur causing the sequence of steps
to become muddled.
- It is often difficult for the customer to provide a detailed
specification of what is required early in the process. Yet this
model requires a definite specification as a necessary building
block for subsequent steps.
- Much time can pass before any operational elements of the
system are available for customer evaluation. If a major error
in implementation is made, it may not be uncovered until much
later.
Do these potential problems mean that the life cycle paradigm
should be avoided? Absolutely not! They do mean, however, that the
application of this software engineering paradigm must be carefully
managed to ensure successful results.
Prototyping moves the developer and customer toward a
"quick" implementation. Prototyping begins with
requirements gathering. Meetings between developer and customer are
conducted to determine overall system objectives and functional and
performance requirements. The developer then applies a set of tools
to develop a quick design and build a working model (the
"prototype") of some element(s) of the system. The
customer or user "test drives" the prototype, evaluating
its function and recommending changes to better meet customer needs.
Iteration occurs as this process is repeated, and an acceptable
model is derived. The developer then moves to "productize"
the prototype by applying many of the steps described for the
classic life cycle.
Figure 2 - The Prototyping Model
In object-oriented programming, with a library of reusable objects (data structures and associated procedures), the software engineer can rapidly create prototypes and production programs.
The benefits of prototyping are:
- a working model is provided to the customer/user early in the
process, enabling early assessment and bolstering confidence,
- the developer gains experience and insight by building the
model, thereby resulting in a more solid implementation of
"the real thing"
- the prototype serves to clarify otherwise vague requirements,
reducing ambiguity and improving communication between developer
and user.
But prototyping also has a set of inherent problems:
- The user sees what appears to be a fully working system (in
actuality, it is a partially working model) and believes that
the prototype (a model) can be easily transformed into a
production system. This is rarely the case. Yet many users have
pressured developers into releasing prototypes for production
use that have been unreliable, and worse, virtually
unmaintainable.
- The developer often makes technical compromises to build a
"quick and dirty" model. Sometimes these compromises
are propagated into the production system, resulting in
implementation and maintenance problems.
- Prototyping is applicable only to a limited
class of problems. In general, a prototype is valuable when
heavy human-machine interaction occurs, when complex output is
to be produced or when new or untested algorithms are to be
applied. It is far less beneficial for large, batch-oriented
processing or embedded process control applications.
There is almost always risk involved in the development of
software. For example,
- key personnel may resign before the product has been
adequately documented,
- the manufacturer of hardware on which the product is
critically dependent may go bankrupt,
- too little (or too much) time may be invested in
testing,
- technological breakthroughs may render the product obsolete,
- a lower-priced, functionally equivalent product may come to
market.
For obvious reasons, software developers try to minimize risks
whenever possible. A product built using the waterfall model
may be subject to substantial risk because of its linear development
cycle. The prototyping model is quite effective at minimizing
risk, allowing a periodic reassessment of the
requirements.
The idea of minimizing risks via the use of prototypes and other
means is the underlying concept of the spiral model [8].
A simplistic way of looking at the spiral model is as a series of
waterfall models, each preceded by a risk analysis. Before
commencing each phase, an attempt is made to control (or resolve)
the risks. If it is impossible to adequately resolve all the
significant risks at a given stage, the project is immediately
terminated. Prototypes can be used to provide information
about certain classes of risk. For example, timing constraints
can be tested by constructing a prototype and measuring whether the
prototype can achieve the necessary performance.
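Such a risk-resolution prototype can be as simple as timing a stand-in for the critical computation against the budget (the operation and the 50 ms budget below are hypothetical):

```python
import time

def prototype_operation():
    # Stand-in for the computation whose timing constraint is at risk.
    return sum(i * i for i in range(100_000))

budget_ms = 50.0  # hypothetical timing requirement from the spec
start = time.perf_counter()
prototype_operation()
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"elapsed: {elapsed_ms:.2f} ms, budget met: {elapsed_ms <= budget_ms}")
```

If the measurement shows the budget cannot be met, the risk is identified before full development begins, which is exactly the point of the spiral model's risk-analysis quadrant.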
The spiral model is shown in the figure below. The radial
dimension represents cumulative cost to date, the angular dimension
represents progress through the spiral. Each cycle of the
spiral corresponds to a development phase.
Figure 3 - The Spiral Model
A phase begins (in the top
left quadrant) by determining objectives of that phase, alternatives
for achieving those objectives, and constraints imposed on those
alternatives. Next, that strategy is analyzed from the
viewpoint of risk. Attempts are made to resolve every potential
risk, in some cases by building a prototype. If certain risks
cannot be resolved, the project may be terminated or scaled
down. If all risks are resolved, the next development step is
started. This quadrant of the spiral model corresponds to the
pure waterfall model. Finally, the results of that phase are
evaluated and the next phase is planned.
The chief advantage of the spiral model is this explicit, repeated attention to risk throughout development.
The weaknesses include:
- The risk-driven model is dependent on the developers' ability
to identify project risk. The entire product depends on the risk
assessment skills of the developer. If those skills are weak
then the product could be a disaster. A design produced by an
expert may be implemented by non-experts. In a case such as
this, the expert does not need a great deal of detailed
documentation, but must provide enough additional documentation
to keep the non-experts from going astray.
- The process steps need to be further elaborated to make sure that the software developers are consistent in their production.
- The model is still fairly new compared to other models, so it has not been used extensively, and the problems associated with it have not yet been widely identified and solved.
Project Phases for the Development of any Large System:
- Initial conception
- Requirements analysis
- Specification
- Initial design
- Verification and test of design
- Redesign
- Prototype manufacturing
- Assembly and system-integration tests
- Acceptance tests (validation of design)
- Production (if several systems are required)
- Field (operational) trial and debugging
- Field maintenance
- Design and installation of added features
- System discard (death of system) or complete system redesign.
Boehm [8] gives figures for several systems showing about
- 40% of effort on analysis and design
- 20% on coding and auditing (handchecking)
- 40% on testing and correcting bugs.
Documentation was not included in these figures; it is estimated at an extra 10%.
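Applied to a hypothetical 100 person-month project (the project size is an assumption for illustration; the percentages are Boehm's figures from the text), the distribution works out as:

```python
total = 100  # person-months - hypothetical project size
phases = {
    "analysis and design": 0.40,
    "coding and auditing": 0.20,
    "testing and correcting": 0.40,
}
for phase, fraction in phases.items():
    print(f"{phase}: {fraction * total:.0f} person-months")
# Documentation adds roughly 10% on top of these figures:
print(f"documentation (extra): {0.10 * total:.0f} person-months")
```

Note that coding, often assumed to dominate, accounts for only about a fifth of the effort.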
As a project progresses, the cost of changes increases dramatically. The reasons for the increase in cost are:
- Testing becomes more complex and costly;
- Documentation of changes becomes more widespread and costly;
- Communication of problems and changes involves many people;
- Repeating of previous tests (regression testing) becomes
costly;
- Once operation is begun, the development team is disbanded and
reassigned.
References:
- Pressman, R.S., Software Engineering: A
Practitioner's Approach, McGraw-Hill (4th ed.), 1997.
- Sawyer, K., "The Mess at the IRS," Washington
Post National Weekly Edition, pp. 6-7, November 11, 1985.
- Parnas, D., "Software Aspects of Strategic Defense
Systems," Datamation, 28(12), December, 1985.
- Pfleeger, S.L., Software Engineering: Theory and
Practice, Prentice-Hall, 1998.
- Leveson, N., and C. Turner, "An Investigation of
the Therac-25 Accidents," IEEE Computer, 26(7), pp.
18-41, July, 1993.
- Gibbs, W., Scientific American, pp. 86-95, Sept.
1994.
- Brooks, F., The Mythical Man-Month,
Addison-Wesley, 1975.
- Boehm, B., "A spiral model for software
development and enhancement," IEEE Computer, 21(5),
pp. 61-72, 1988.
- Jones, C., “Our
Worst Current Development Practices”, IEEE Software,
March 1996, pp. 102-104.
CMPE 3213 - Advanced Software Engineering
(Fall 1999)