SDLC Phase 4: Systems Implementation
Stages:
1. Application development, including designing, writing, testing, and documenting programs and code modules
2. Installation and evaluation, including user training, file conversion, system changeover, and evaluation of the results
Objectives
Describe the major tasks and activities that are completed during the systems implementation phase
Discuss the role of the systems analyst during application development
Explain the importance of quality assurance and the role of software engineering in software development
Describe the different types of documentation the systems analyst must prepare
Explain the different phases of testing, including unit testing, link testing, and system testing
Describe top-down design and modular design and the advantages of these approaches
Introduction
During the systems implementation phase, the development team uses the system design specification as a blueprint for constructing the new system
Analysts and programmers have different roles during application development
An analyst's main task is to deliver clear, accurate specifications to a programmer
Quality Assurance
Quality assurance is vitally important in all business areas, including IS functions
The main objective of quality assurance is to detect and avoid problems as early as possible
Quality assurance can detect
Inaccurate requirements
Design or coding errors
Faulty documentation
Ineffective testing
Software engineering
Stresses quality in software design
Solid design
Effective structure
Accurate documentation
Careful testing
Software Engineering Institute (SEI)
Mission is to improve the quality of software-based systems
The Capability Maturity Model is designed to improve quality, reduce development time, and cut costs
Application Development
Planning the overall design strategy
Use a top-down (modular) approach and partition the system into subsystems and modules
Develop programs and modules
Design, code, test, and document
Test the system
Link test
System test
Complete all documentation
Documentation review and application design
Program designs are based on
System design specification
Prior phase documentation
DFDs
Process descriptions
Screen layouts
Report layouts
Source documents
Data dictionary entries
Structure (hierarchy) charts
Show the organization of program modules and the functions they perform
Program flowcharts
Show the internal logic needed to perform program tasks and provide output
Pseudocode
Documents the program’s logical steps
Coding
Process of turning program logic into specific instructions that can be executed by the computer system
Many programming languages exist
Visual C++
Access Basic
Visual Basic
SQL
HTML
Java
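As an illustration of coding, the logical steps documented in pseudocode can be translated into executable statements. This is a minimal sketch using Python; the payroll calculation and all names are hypothetical, not taken from the text.

```python
# Hypothetical example: turning documented program logic into code.
# Pseudocode from the design documentation might read:
#   IF hours worked > 40
#       gross pay = 40 * rate + (hours - 40) * rate * 1.5
#   ELSE
#       gross pay = hours * rate

def gross_pay(hours: float, rate: float) -> float:
    """Compute gross pay with time-and-a-half overtime past 40 hours."""
    if hours > 40:
        return 40 * rate + (hours - 40) * rate * 1.5
    return hours * rate
```

The pseudocode maps line for line onto the code, which is why accurate pseudocode in the program documentation makes coding and later maintenance easier.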
Testing the application
Testing is necessary to ensure that all programs function correctly
First step is to detect syntax errors and obtain a clean compilation
Next step is to eliminate logic errors
Techniques include desk checking, structured walkthroughs, and code reviews
Final step is actual testing: unit testing, link testing, and system testing
Unit testing
Involves an individual program
Objective is to identify and eliminate execution errors and any remaining logic errors
Stub testing is a technique of using stubs to represent entry or exit points that will be linked later to another program or data file
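A minimal sketch of stub testing, using Python for illustration (the billing module and all names are hypothetical): the customer-file lookup has not been written yet, so a stub stands in for it while the program is unit tested.

```python
# Hypothetical stub test: the real customer-file module is not yet
# available, so a stub supplies predictable data at that exit point.

def lookup_customer_stub(customer_id):
    """Stub standing in for the unwritten customer-file lookup."""
    return {"id": customer_id, "name": "Test Customer", "discount": 0.10}

def compute_invoice_total(customer_id, amount, lookup=lookup_customer_stub):
    """Unit under test: applies the customer's discount to an amount."""
    customer = lookup(customer_id)
    return round(amount * (1 - customer["discount"]), 2)

# Unit test exercising the program through the stub
assert compute_invoice_total(101, 200.0) == 180.0
```

When the real customer-file module is finished, it replaces the stub, and the interface between the two programs is then exercised during link testing.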
Link testing
Involves two or more programs that depend on each other
Also called string testing, series testing, or integration testing
Link testing ensures that the job streams are correct
Test data is necessary to simulate actual conditions and test the interface between programs
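A sketch of a link (integration) test, using Python and hypothetical module names: an order module and an inventory module that depend on each other are run together with test data to verify the interface between them.

```python
# Hypothetical link test: two dependent modules exercised together.

inventory = {"widget": 5}          # shared test data both modules depend on

def reserve_stock(item, qty):
    """Inventory module: decrement stock if enough is on hand."""
    if inventory.get(item, 0) < qty:
        raise ValueError("insufficient stock")
    inventory[item] -= qty

def place_order(item, qty):
    """Order module: calls the inventory module, returns a confirmation."""
    reserve_stock(item, qty)
    return {"item": item, "qty": qty, "status": "confirmed"}

# Link test: verify that the programs work correctly together
order = place_order("widget", 3)
assert order["status"] == "confirmed"
assert inventory["widget"] == 2
```

Each module may have passed its own unit tests; the link test checks what unit tests cannot, namely that data passed across the interface is handled correctly by both sides.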
System testing
Involves the entire information system and includes all typical processing situations
Requires users to verify all processing options and outputs
Uses live data
Involves a final test of all programs
Ensures that proper documentation is ready
Verifies that all system components work correctly
Confirms that the system can handle predicted data volumes in a timely and efficient manner
TRADEOFF
How far should you go with system testing?
Tradeoff: pressure for the new system from users and managers vs. the need to avoid major errors
Typical issues to consider
What is the judgment of analysts, programmers, IS management, and the project manager?
Do potential problems exist that might affect the integrity or accuracy of data?
Can minor changes be treated as future maintenance items?
Documentation
Explains the system and helps people interact with it
Types of documentation
Program documentation
System documentation
Operations documentation
User documentation
Program documentation
Begins in the systems analysis phase and continues during systems implementation
Includes process descriptions and report layouts
Programmers provide documentation with comments that make it easier to understand and maintain the program
An analyst must verify that program documentation is accurate and complete
System documentation
System documentation describes the system’s functions and how they are implemented
Most system documentation is prepared during the systems analysis and systems design phases
Documentation consists of
Data dictionary entries
Data flow diagrams
Screen layouts
Source documents
Initial systems request
Operations documentation
Typically used in a minicomputer or mainframe environment with centralized processing and batch job scheduling
Documentation tells the IS operations group how and when to run programs
A common example is a program run sheet, which contains information needed for processing and distributing output
User documentation
Typically includes the following items
System overview
Source document description, with samples
Menu and data entry screens
Reports that are available, with samples
Security and audit trail information
Responsibility for input, output, processing
Procedures for handling changes/problems
Examples of exceptions and error situations
Frequently asked questions (FAQ)
Explanation of the Help facility and how to update the manual
Written documentation material often is provided in a user manual
Analysts prepare the material, and users review it and participate in developing the manual
Online documentation can empower users and reduce the need for direct IS support
Context-sensitive Help
Interactive tutorials
Hints and tips
Hypertext
On-screen demos
Management Approval
After system testing is complete, the results are presented to management
Test results
Status of all required documentation
Input from users who participated
Recommendations
Detailed time schedules, cost estimates, and staffing requirements
Management Options
Return to Design Phase
Retest
Proceed with Implementation
If approved, a schedule for system installation and evaluation will be established
Objectives
Discuss the main tasks in the installation and evaluation process
Explain why it is important to maintain separate operational and test environments
Develop an overall training plan with specific objectives for each group of participants
Explain three typical ways to provide training, including vendors, outside resources, and in-house staff
Describe the file conversion process
Identify four system changeover methods and discuss the advantages and disadvantages of each
Explain the purpose of a post-implementation evaluation and list the specific topics covered during the evaluation
Specify the contents of the final report to management
Specify the contents of the final report to management
Introduction
Installation and evaluation completes the systems implementation phase
The new system is now ready to be used
Remaining tasks
Prepare an operational environment and install the new system
Provide training for users, IS staff, and managers
Perform file conversion and system changeover
Carry out post-implementation evaluation
Present a final report to management
Operational and Test Environments
Test environment
Programmers and analysts use the test environment to develop and maintain programs
The test environment contains copies of
Programs
Procedures
Test data files
Operational environment
Also called the production environment
Access is limited to information system users
IS staff enter the production environment only to correct problems or perform authorized work
Live, actual data is used
All changes must be verified and user approval obtained
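The separation of test and operational environments can be sketched in code (Python used for illustration; the file names, directories, and the APP_ENV variable are hypothetical): the same program selects test copies or live data depending on which environment it runs in.

```python
# Hypothetical sketch: selecting the test vs. production (operational)
# configuration so programs never touch live data during development.
import os

CONFIGS = {
    "test": {"database": "payroll_test.db", "data_dir": "/srv/test/data"},
    "production": {"database": "payroll.db", "data_dir": "/srv/prod/data"},
}

def load_config(env=None):
    """Pick the environment explicitly, or from APP_ENV (default: test)."""
    env = env or os.environ.get("APP_ENV", "test")
    if env not in CONFIGS:
        raise ValueError(f"unknown environment: {env}")
    return CONFIGS[env]

# Programs are developed and maintained against test copies of the data
test_config = load_config("test")
assert test_config["database"] == "payroll_test.db"
```

Defaulting to the test environment is a deliberate safety choice here: a program must be explicitly promoted before it can touch production data.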
Preparation of the operational environment
Examine all system components that affect system performance
Hardware and software configurations
Operating system programs and utilities
Network resources
Check all communications features, both before and after loading programs
Include network specifications in documentation
Training
A training plan should be considered early in the systems development process
Deliver the right training to the right people at the right time
Specific training is necessary for
Users
Managers
IS department staff members
Vendor training
If hardware or software is purchased outside, vendor training should be considered
Many vendors offer free or nominal-cost training for customers
Vendor training can be performed at the vendor’s site or at the customer’s location
Vendor training often provides the best return on training dollars
Outside training resources
If vendor training or internal training is impractical, outside trainers or consultants can be used
Outside training generally is not practical for in-house developed systems
Many sources of training information exist
Consultants
Universities
Information management organizations
Industry associations
In-house training
IS staff and user departments usually share responsibility for developing and conducting training for in-house systems
Training can draw on many techniques and aids, including multimedia, demonstrations, videotapes, and charts
Some Training Guidelines to Consider
Train people in groups, with separate programs for distinct groups
Select the most effective place for training
Provide for learning by hearing, seeing, and doing
Prepare a training manual
Develop interactive tutorials and training tools
Rely on previous trainees
When training is complete, conduct a full-scale simulation for users to gain experience and confidence
File Conversion
File conversion can take place after the operational environment is established and training has been performed
Issues to consider
Automated conversion techniques
Methods of exporting data to the new system
Programs designed to extract and convert data
Controls required to protect vulnerable data
Verification of results by users
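A minimal sketch of an automated conversion program, in Python for illustration (the file layout, field names, and control check are hypothetical): it extracts records from the old system's format, converts them to the new layout, and produces a control total that users can verify against the old system.

```python
# Hypothetical file conversion: the old system exports "id|name|balance"
# lines; the new system expects records with typed fields.

def convert_record(line):
    """Extract one old-format record and convert it to the new layout."""
    cust_id, name, balance = line.strip().split("|")
    return {"id": int(cust_id), "name": name, "balance": float(balance)}

def convert_file(lines):
    """Convert all records and return a control total for verification."""
    records = [convert_record(line) for line in lines]
    control_total = round(sum(r["balance"] for r in records), 2)
    return records, control_total

old_data = ["101|Ada|250.00", "102|Grace|99.50"]
records, total = convert_file(old_data)
assert total == 349.50   # control total users can check against the old system
```

The control total is one example of the controls needed to protect vulnerable data during conversion; record counts and checksums serve the same purpose.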
System Changeover
System changeover puts the new system online and retires the old system
Four typical approaches exist
Direct cutover
Parallel operation
Pilot operation
Phased changeover
Each approach involves different cost and risk factors
Direct cutover
With direct cutover, changeover from the old system to the new system occurs immediately, as the new system becomes operational
Cost is relatively low, because only one system is in operation
Risk is relatively high, because there is no backup option
Timing is an important factor for systems that have periodic processing cycles
Parallel operation
With parallel operation, both the new and the old systems operate fully for a specified period
Data is input to both systems, and results can be verified
Cost is relatively high, because both systems operate for a period of time
Risk is relatively low, because results can be verified and a backup option exists
Method is impractical if the systems are dissimilar or cannot be supported together
Pilot operation
With pilot operation, both the new and the old systems operate, but only at a selected location, called a pilot site
The rest of the organization continues to use the old system
Cost is relatively moderate, because only one location runs both systems
Risk also is relatively moderate, because the new system is installed only at the pilot site and the risk of failure is reduced
Phased changeover
With phased changeover, the system is implemented in stages, or modules, across the organization
Phased changeover gives part of the system to the entire organization
Cost is relatively moderate, because the system is implemented in stages, rather than all at once
Risk also is relatively moderate, because the risk is limited to the module being implemented
Post-Implementation Evaluation
After the system is operational, two main tasks must be performed
Post-implementation evaluation
Final report to management
Post-implementation evaluation feedback
Includes various areas
Accuracy, completeness, and timeliness of output
User satisfaction
System reliability and maintainability
Adequacy of system controls and security
Hardware efficiency/platform performance
Effectiveness of database implementation
Performance of the IS team
Completeness and quality of documentation
Quality and effectiveness of training
Accuracy of cost-benefit estimates and development schedules
A post-implementation evaluation is based on fact-finding methods similar to techniques used during the systems analysis phase
Ideally, post-implementation evaluation should be performed by people who were not involved in the development process
Usually done by IS staff and users
Internal or external auditors often are involved
TRADEOFF
How soon after system operation begins should the post-implementation evaluation occur?
If the evaluation is delayed too long, users remember less about the development process and how it might be improved
If it occurs too soon, users have had insufficient time to assess the system's strengths and weaknesses
Six months of operation is desirable, but pressure to finish sooner often exists
Final Report to Management
Report contents
1. Final versions of all system documentation
2. Planned modifications and enhancements to the system that have been identified
3. A recap of all systems development costs and schedules
4. A comparison of actual costs and schedules to the original estimates
5. The post-implementation evaluation, if it has been performed