CHAPTER 10 System Architecture
Chapter 10 is the final chapter in the systems design phase of
the SDLC. This chapter describes system architecture, which
translates the logical design of an information system into a
physical blueprint. As you plan the system architecture, you
will learn about servers, clients, processing methods, networks,
and related issues.
OBJECTIVES
When you finish this chapter, you will be able to:
· Provide a checklist of issues to consider when selecting a
system architecture
· Trace the evolution of system architecture from mainframes to
current designs
· Explain client/server architecture, including tiers, cost-benefit
issues, and performance
· Compare in-house e-commerce development with packaged
solutions and service providers
· Discuss the impact of cloud computing and Web 2.0
· Define network topology, including hierarchical, bus, ring,
star, and mesh models
· Describe wireless networking, including wireless standards,
topologies, and trends
· Describe the system design specification
INTRODUCTION
At this point in the SDLC, your objective is to determine an
overall architecture to implement the information system. You
learned in Chapter 1 that an information system requires
hardware, software, data, procedures, and people to accomplish
a specific set of functions. An effective system combines those
elements into an architecture, or design, that is flexible, cost-
effective, technically sound, and able to support the information
needs of the business. This chapter covers a wide range of
topics that support the overall system design, just as a plan for a
new home would include a foundation plan, building methods,
wiring and plumbing diagrams, traffic flows, and costs.
System architecture translates the logical design of an
information system into a physical structure that includes
hardware, software, network support, processing methods, and
security. The end product of the systems design phase is the
system design specification. If this document is approved, the
next step is systems implementation.
PREVIEW CASE: Mountain View College Bookstore
Background: Wendy Lee, manager of college services at
Mountain View College, wants a new information system that
will improve efficiency and customer service at the three
college bookstores.
In this part of the case, Tina Allen (systems analyst) and David
Conroe (student intern) are talking about system architecture
issues.
Participants:
Tina and David
Location:
Mountain View College cafeteria, Thursday afternoon, January
9, 2014
Project status:
The team completed user interface and data design work. The
last step in the systems design phase is to consider a system
architecture for the bookstore system.
Discussion topics:
System architecture checklist, client/server architecture,
processing methods, and network issues
Tina:
Hi, David. Did you enjoy the holiday break?
David:
I sure did. Now I’m ready to get back to work.
Tina:
Good. As the last step in the systems design phase of the SDLC,
we need to study the physical structure, or architecture, of the
bookstore system. Our checklist includes our college’s
organization and culture, enterprise resource planning, total cost
of ownership, scalability, Web integration, legacy systems,
processing methods, security issues, and portals that could
affect the system design.
David:
So, where do we start?
Tina:
Well, the bookstore interfaces with many publishers and
vendors, so we’ll consider supply chain management, which is
part of enterprise resource planning, or ERP for short.
David:
What happens after we finish the checklist?
Tina:
Then we’ll define a client/server architecture. As I see it, the
bookstore client workstations will share the processing with a
server in the IT department. Also, we may need to look at
middleware software to connect the new system with existing
legacy systems, such as the college accounting system.
David:
Anything else?
Tina:
Yes. We need to select a network plan, or topology, so we’ll
know how to plan the physical cabling and connections — or
possibly use wireless technology. When we’re done, we’ll
submit a system design specification for approval.
David:
Sounds good to me.
Tina:
Good. Here’s a task list to get us started:
FIGURE 10-1 Typical system architecture tasks.
© Cengage Learning 2014
ARCHITECTURE CHECKLIST
Just as an architect begins a project with a list of the owner’s
requirements, a systems analyst must approach system
architecture with an overall checklist. Before making a
decision, the analyst must consider several issues that will
affect the architecture choice:
· Corporate organization and culture
· Enterprise resource planning (ERP)
· Initial and total cost of ownership (TCO)
· Scalability
· Web integration
· Legacy system interface requirements
· Processing options
· Security issues
· Corporate portals
Corporate Organization and Culture
To be successful, an information system must perform well in a
company’s organization and culture. For example, consider two
large bicycle brands, Green Bikes and Blue Bikes. Each firm
has three operating divisions: an Asian subsidiary that
manufactures the bicycles, a factory in Los Angeles that
produces bike accessories and clothing, and a plant in Canada
that makes bike carriers, racks, and custom trailers.
On the surface, the two firms are similar, but they have very
different organizations and corporate cultures. Green Bikes is
highly centralized and oversees day-to-day operations from its
Los Angeles office. Blue Bikes also has a Los Angeles
executive office, but allows its three business units to operate
separately, with minimal corporate oversight. Both firms are
successful, and it is unlikely that their managerial styles will
change anytime soon.
Suppose you were a consultant, and both firms asked you to
suggest an IT architecture that would boost productivity and
reduce costs. How would corporate organization and culture
issues affect your recommendation? There is no easy answer to
that question. The best approach probably would be to study
day-to-day business functions, talk to users at all levels, and
focus on operational feasibility issues, just as you did earlier in
the development process.
Enterprise Resource Planning (ERP)
Many companies use enterprise resource planning (ERP)
software, which was described in Chapter 1. The objective of
ERP is to establish a company-wide strategy for using IT that
includes a specific architecture, standards for data, processing,
network, and user interface design. A main advantage of ERP is
that it describes a specific hardware and software environment,
also called a platform, that ensures connectivity and easy
integration of future systems, including in-house software and
commercial packages.
Even though ERP has been very popular, some argue that it is
outdated because of major advances in technology. For example,
in her CIO Magazine article shown in Figure 10-2, author Karen
Goulart calls ERP an “old workhorse.” She suggests that ERP’s
future success depends on integrating new technologies such as
mobility and cloud computing, among others. In other words, an
ERP designer might have to bring enterprise data, anytime,
anywhere, to a smart phone in a sales rep’s pocket.
Many companies are extending internal ERP systems to their
suppliers and customers, using a concept called supply chain
management (SCM). For example, in a totally integrated supply
chain system, a customer order could cause a manufacturing
system to schedule a work order, which in turn triggers a call
for more parts from one or more suppliers. In a dynamic, highly
competitive economy, SCM can help companies achieve faster
response, better customer service, and lower operating costs.
FIGURE 10-2 Is ERP outdated, or will it still be around? Author
Karen Goulart says that ERP’s future success depends on
integrating new technology, such as mobility and cloud
computing.
© 2007–2012, TechTarget
Microsoft offers an enterprise solution called Microsoft
Dynamics, as shown in Figure 10-3 on the next page. The
interesting “test drive” video provides a scenario-based preview
of how the software can integrate financial management,
customer relationship management (CRM), supply chain
management (SCM), and business metrics.
CASE IN POINT 10.1: ABC SYSTEMS
You are a systems analyst at ABC Systems, a fast-growing IT
consulting firm that provides a wide range of services to
companies that want to establish e-commerce operations. During
the last 18 months, ABC acquired two smaller firms and set up a
new division that specializes in supply chain management.
Aligning ABC’s internal systems was quite a challenge, and top
management was not especially happy with the integration cost
or the timetable. To avoid future problems, you have decided to
suggest an ERP strategy, and you plan to present your views at
the staff meeting tomorrow. ABC’s management team is very
informal and prefers a loose, flexible style of management. How
will you persuade them that ERP is the way to go?
Initial Cost and TCO
You learned earlier about the importance of considering
economic feasibility and TCO during systems planning and
analysis. TCO includes tangible purchases, fees, and contracts
called hard costs. However, additional soft costs of
management, support, training, and downtime are just as
important, but more difficult to measure.
Firms such as Micromation, which is shown in Figure 10-4 on
page 409, offer specialized TCO analysis, benchmarks, and
consulting. As the Micromation chart shows, user-related costs
represent a very large slice of the total pie.
A TCO analysis should include the following questions.
· If in-house development was selected as the best alternative
initially, is it still the best choice? Is the necessary technical
expertise available, and does the original cost estimate appear
realistic?
· If a specific package was chosen initially, is it still the best
choice? Are newer versions or competitive products available?
Have any changes occurred in pricing or support?
· Have any new types of outsourcing become available?
· Have any economic, governmental, or regulatory events
occurred that could affect the proposed project?
· Have any significant technical developments occurred that
could affect the proposed project?
· Have any major assumptions changed since the company made
the build versus buy decision?
· Are there any merger or acquisition issues to consider,
whereby the company might require compatibility with a
specific environment?
· Have any new trends occurred in the marketplace? Are new
products or technologies on the verge of being introduced?
· Have you updated the original TCO estimate? If so, are there
any significant differences?
FIGURE 10-3 Microsoft invites you to watch or take a scenario-
based test drive of its Microsoft Dynamics product.
Screenshots used with permission from Microsoft.
FIGURE 10-4 The Micromation site suggests that soft costs are
very significant, but are more difficult to measure.
© 2012 Micromation
The answers to these questions might affect the initial cost and
TCO for the proposed system. You should review system
requirements and alternatives now, before proceeding to design
the system architecture.
Scalability
A network is composed of individual nodes. A node represents a
physical device, wired or wireless, that can send, receive, or
manage network data. For example, nodes can be servers,
workstations, shared printers, mass storage devices, wireless
access points, or mobile computers.
Scalability, also called extensibility, refers to a system’s ability
to expand, change, or downsize easily to meet the changing
needs of a business enterprise. Scalability is especially
important in implementing systems that are volume-related,
such as transaction processing systems. A scalable system is
necessary to support a dynamic, growing business. For example,
a scalable network could handle anywhere from a few dozen
nodes to thousands of nodes, and a scalable DBMS could
support the acquisition of an entire new sales division. When
investing large amounts of money in a project, management is
especially concerned about scalability issues that could affect
the system’s life expectancy.
Web Integration
An information system includes applications, which are
programs that handle the input, manage the processing logic,
and provide the required output. The systems analyst must know
if a new application will be part of an e-commerce strategy and
the degree of integration with other Web-based components. As
you learned earlier, a Web-centric architecture follows Internet
design protocols and enables a company to integrate the new
application into its e-commerce strategy. Even where e-
commerce is not involved, a Web-centric application can run on
the Internet or a company intranet or extranet. A Web-based
application avoids many of the connectivity and compatibility
problems that typically arise when different hardware
environments are involved. In a Web-based environment, a
firm’s external business partners can use standard Web browsers
to import and export data.
Legacy Systems
A new system might have to interface with one or more legacy
systems, which are older systems that use outdated technology,
but still are functional. For example, a new marketing
information system might need to report sales data to a server-
based accounting system and obtain product cost data from a
legacy manufacturing system.
Interfacing a new system with a legacy system involves analysis
of data formats and compatibility. In some cases, a company
will need to convert legacy file data, which can be an expensive
and time-consuming process. Middleware, which is discussed
later in this chapter, might be needed to pass data between new
systems and legacy systems. Finally, to select the best
architecture, the analyst must know if the new application
eventually will replace the legacy system.
Processing Options
In planning the architecture, designers also must consider how
the system will process data — online or in batches. For
example, a high-capacity transaction processing system, such as
an order entry system, requires more network, processing, and
data storage resources than a monthly billing system that
handles data in batches. Also, if the system must operate online,
24 hours a day and seven days a week (24/7), provision must be
made for backup and speedy recovery in the event of system
failure.
The characteristics of online and batch processing methods are
described later in this chapter, with examples of each type.
Security Issues
From the password protection shown in Figure 10-5 to complex
intrusion detection systems, security threats and defenses are a
major concern to a systems analyst. As the physical design is
translated into specific hardware and software, the analyst must
consider security issues and determine how the company will
address them. Security is especially important when data or
processing is performed at remote locations, rather than at a
centralized facility. In mission-critical systems, security issues
will have a major impact on system architecture and design.
Web-based systems introduce additional security concerns, as
critical data must be protected in the Internet environment.
Also, firms that use e-commerce applications must assure
customers that their personal data is safe and secure. System
security concepts and strategies are discussed in detail in
Chapter 12, Managing Systems Support and Security.
FIGURE 10-5 User IDs and passwords are important elements
of system security.
© JMiks / Shutterstock.com
Corporate Portals
Depending on the system, the planned architecture might
include a corporate portal. A portal is an entrance to a
multifunction Web site. After entering a portal, a user can
navigate to a destination using various tools and features
provided by the portal designer. A corporate portal can provide
access for customers, employees, suppliers, and the public. A
well-designed portal can integrate with various other systems
and provide a consistent look and feel.
SYSTEM ARCHITECTURE: THEN AND NOW
Every business information system must carry out three main
functions:
· Manage applications that perform the processing logic.
· Handle data storage and access.
· Provide an interface that allows users to interact with the
system.
Depending on the architecture, the three functions are
performed on a server, on a client, or are divided between the
server and the client. As you plan the design, you must
determine where the functions will be carried out and the
advantages and disadvantages of each design approach.
Mainframe Architecture
A server is a computer that supplies data, processing services,
or other support to one or more computers, called clients. The
earliest servers were mainframe computers, and a system design
where the server performs all the processing sometimes is
described as mainframe architecture. Although the actual server
does not have to be a mainframe, the term mainframe
architecture typically describes a multiuser environment where
the server is significantly more powerful than the clients. A
systems analyst should know the history of mainframe
architecture to understand the server’s role in modern system
design.
In the 1960s, mainframe architecture was the only choice. In
addition to centralized data processing, the earliest systems
performed all data input and output at a central location, often
called a data processing center. Physical data was delivered or
transmitted in some manner to the data processing center, where
it was entered into the system. Users in the organization had no
input or output capability, except for printed reports that were
distributed by a corporate IT department.
As network technology advanced, companies installed terminals
at remote locations, so that users could enter and access data
from anywhere in the organization, regardless of where the
centralized computer was located. A terminal included a
keyboard and display screen to handle input and output, but
lacked independent processing capability. In a centralized
design, as shown in Figure 10-6, the remote user’s keystrokes
are transmitted from his or her terminal to the mainframe, which
responds by sending screen output back to the user’s screen.
Today, mainframe architecture still is used in industries that
require large amounts of processing that can be done at a central
location. For example, a bank might use mainframe servers to
update customer balances each night. In a blend of old and new
technology, an Internet-based retail operation might use
mainframe architecture at a customer service center that fulfills
its online sales as shown in Figure 10-7 on the next page.
FIGURE 10-6 In a centralized design, the remote user’s
keystrokes are transmitted to the mainframe, which responds by
sending screen output back to the user’s screen.
© Cengage Learning 2014
Impact of the Personal Computer
When PC technology exploded in the 1990s, powerful
microcomputers quickly appeared on corporate desktops. Users
found that they could run their own word processing,
spreadsheet, and database applications, without assistance from
the IT group, in a mode called stand-alone computing. Before
long, companies linked the stand-alone computers into networks
that enabled the user clients to exchange data and perform local
processing.
When an individual user works in stand-alone mode, the
workstation performs all the functions of a server by storing,
accessing, and processing data, as well as providing a user
interface. Although stand-alone PCs improved employee
productivity and allowed users to perform tasks that previously
required IT department assistance, stand-alone computing was
inefficient and expensive. Even worse, maintaining data on
individual workstations raised major concerns about data
security, integrity, and consistency. Without a central storage
location, it was impossible to protect and back up valuable
business data, and companies were exposed to enormous risks.
In some cases, users who were frustrated by a lack of support
and services from the IT department created and managed their
own databases. In addition to security concerns, this led to data
inconsistency and unreliability.
FIGURE 10-7 Internet-based retail operations such as
Amazon.com use customer service centers to fulfill online sales.
© Bloomberg via Getty Images
Network Evolution
As technology became available, companies resolved the
problems of stand-alone computing by joining clients into a
local area network (LAN) that allows sharing of data and
hardware resources, as shown in Figure 10-8. One or more
LANs, in turn, can connect to a centralized server. Further
advances in technology made it possible to create powerful
networks that could use satellite links, high-speed fiber-optic
lines, or the Internet to share data.
A wide area network (WAN) spans long distances and can
connect LANs that are continents apart, as shown in Figure 10-
9. When a user accesses data on a LAN or WAN, the network is
transparent because a user sees the data as if it were stored on
his or her own workstation. Company-wide systems that connect
one or more LANs or WANs are called distributed systems. The
capabilities of a distributed system depend on the power and
capacity of the underlying data communication network.
Compared to mainframe architecture, distributed systems
increase concerns about data security and integrity because
many individual clients require access to perform processing.
FIGURE 10-8 A LAN allows sharing of data and hardware, such
as printers and scanners.
© Cengage Learning 2014
CLIENT/SERVER DESIGNS
Today’s interconnected world requires an information
architecture that spans the entire enterprise. Whether you are
dealing with a departmental network or a multinational
corporation, as a systems analyst you will work with a
distributed computing strategy called client/server architecture.
FIGURE 10-9 A WAN can connect many LANs and link users
who are continents apart.
© Cengage Learning 2014
Overview
Although no standard definition exists, the term client/server
architecture generally refers to systems that divide processing
between one or more networked clients and a central server. In
a typical client/server system, the client handles the entire user
interface, including data entry, data query, and screen
presentation logic. The server stores the data and provides data
access and database management functions. Application logic is
divided in some manner between the server and the clients. In a
client/server interaction, the client submits a request for
information from the server, which carries out the operation and
responds to the client. As shown in Figure 10-10, the data file is
not transferred from the server to the client — only the request
and the result are transmitted across the network. To fulfill a
request from a client, the server might contact other servers for
data or processing support, but that process is transparent to the
client. The analogy can be made to a restaurant where the
customer gives an order to a server, who relays the request to a
cook, who actually prepares the meal.
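To make the request/response idea concrete, the interaction can be sketched in a few lines of Python. This is a minimal illustration, not production code; the product numbers, prices, and single-request server are invented for the example:

```python
import json
import socket
import threading

# Hypothetical product price list held only on the server; the full
# data file never travels across the network.
PRODUCTS = {"1001": 24.95, "1002": 9.50, "1003": 131.00}

def serve_once(srv):
    """Server: accept one request, do the lookup, return only the result."""
    conn, _ = srv.accept()
    with conn:
        request = json.loads(conn.recv(1024).decode())
        price = PRODUCTS.get(request["product_id"])  # processing stays on the server
        conn.sendall(json.dumps({"price": price}).encode())

def query_price(port, product_id):
    """Client: submit a request and receive just the result."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(json.dumps({"product_id": product_id}).encode())
        return json.loads(s.recv(1024).decode())["price"]

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
worker = threading.Thread(target=serve_once, args=(srv,))
worker.start()
result = query_price(port, "1002")
worker.join()
srv.close()
print(result)  # 9.5
```

Notice that only the small request and the one-line result cross the connection; the product list itself stays on the server, just as Figure 10-10 shows.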
Figure 10-11 on the next page lists some major differences
between client/server and traditional mainframe systems. Many
early client/server systems did not produce expected savings
because few clear standards existed, and development costs
often were higher than anticipated. Implementation was
expensive because clients needed powerful hardware and
software to handle shared processing tasks. In addition, many
companies had an installed base of data, called legacy data,
which was difficult to access and transport to a client/server
environment.
FIGURE 10-10 In a client/server design, data is stored and
usually processed on the server.
© Cengage Learning 2014
As large-scale networks grew more powerful, client/server
systems became more cost-effective. Many companies invested
in client/server systems to achieve a unique combination of
computing power, flexibility, and support for changing business
operations. Today, client/server architecture is the dominant
form of systems design, using Internet protocols and network
models such as the ones described on pages 426–430. As
businesses form new alliances with customers and suppliers, the
client/server concept continues to expand to include clients and
servers outside the organization.
Cloud computing, which is discussed later in this chapter, is
seen by some observers as an entirely new concept. Others see
it as the ultimate form of client/server architecture, where
Internet-based computing becomes the server part of
client/server and handles processing tasks, while the Internet
itself becomes the platform that replaces traditional networks.
The bottom line is that it doesn’t matter whether cloud
computing is part of a client/server evolution, or a whole new
way of thinking about computing. Either way, successful
systems must support business requirements, and system
architecture is an important step in the systems development
process.
FIGURE 10-11 Comparison of the characteristics of
client/server and mainframe systems.
© Cengage Learning 2014
The Client’s Role
The client/server relationship must specify how the processing
will be divided between the client and the server. A fat client
design, also called a thick client design, locates all or most of
the application processing logic at the client. A thin client design
locates all or most of the processing logic at the server. What
are the advantages and disadvantages of each design? Most IT
experts agree that thin client designs provide better
performance, because program code resides on the server, near
the data. In contrast, a fat client handles more of the processing
and must access and update the data more often. Compared with
maintaining a central server, fat client TCO also is higher,
because of initial hardware and software requirements and the
ongoing expense of supporting and updating remote client
computers. A fat client design, however, is simpler and less
expensive to develop, because the architecture resembles
traditional file server designs where all processing is performed
at the client. Figure 10-12 compares the characteristics of fat
and thin clients.
Client/Server Tiers
Early client/server designs were called two-tier designs. In a
two-tier design, the user interface resides on the client, all data
resides on the server, and the application logic can run either on
the server or on the client, or be divided between the client and
the server.
More recently, another form of client/server design, called a
three-tier design, has become popular. In a three-tier design, the
user interface runs on the client and the data is stored on the
server, just as with a two-tier design. A three-tier design also
has a middle layer between the client and server that processes
the client requests and translates them into data access
commands that can be understood and carried out by the server,
as shown in Figure 10-13. You can think of the middle layer as
an application server, because it provides the application logic,
or business logic, required by the system. Three-tier designs
also are called n-tier designs, to indicate that some designs use
more than one intermediate layer.
FIGURE 10-12 Characteristics of fat and thin clients.
© Cengage Learning 2014
The advantage of the application logic layer is that a three-tier
design enhances overall performance by reducing the data
server’s workload. The separate application logic layer also
relieves clients of complex processing tasks. Because it can run
on a minicomputer that is much more powerful than the typical
client workstations, the middle layer is more efficient and cost-
effective in large-scale systems. Figure 10-14 on the next page
shows where the data, the application logic, and the user
interface are located on various architectures. In a client/server
system, the tiers communicate using software called
middleware, which is described in the following section.
FIGURE 10-13 Characteristics of two-tier versus three-tier
client/server design.
© Cengage Learning 2014
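The division of labor among the three tiers can be sketched as ordinary Python functions. In this hypothetical example, the order data, customer names, and the "customer total" business rule are all invented for illustration:

```python
# Data tier: stores the data and answers low-level access commands.
ORDERS = [
    {"order_id": 1, "customer": "Lee",  "amount": 120.00},
    {"order_id": 2, "customer": "Lee",  "amount": 80.00},
    {"order_id": 3, "customer": "Ruiz", "amount": 45.00},
]

def data_tier(command, customer):
    """Carry out a simple data-access command for the middle tier."""
    if command == "SELECT_ORDERS":
        return [o for o in ORDERS if o["customer"] == customer]
    raise ValueError("unknown command")

def application_tier(request):
    """Middle tier (application server): holds the business logic and
    translates client requests into data-access commands."""
    if request["action"] == "customer_total":
        rows = data_tier("SELECT_ORDERS", request["customer"])
        return {"customer": request["customer"],
                "total": sum(r["amount"] for r in rows)}
    raise ValueError("unknown action")

# Client tier: user interface only; it never touches the data directly.
response = application_tier({"action": "customer_total", "customer": "Lee"})
print(response["total"])  # 200.0
```

The client sees only the middle tier's request format; the middle tier alone knows how to translate a business request into the data-access commands the server carries out.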
Middleware
A recent Internet search for the phrase “What is Middleware”
returned 80,500 sites. Unfortunately, the term middleware
means different things to different people. So, what is
middleware?
FIGURE 10-14 The location of the data, the application logic,
and the user interface depend on the type of architecture.
© Cengage Learning 2014
In a multi-tier system, special software called middleware
enables the tiers to communicate and pass data back and forth.
Here are some other definitions you might encounter:
· Middleware offers an interface to connect software and
hardware.
· Middleware can integrate legacy systems and Web-based
applications. For example, when a user enters a customer
number on a Web form, middleware can update a legacy
accounting system.
· Middleware is like glue that holds different applications
together.
· Middleware represents the slash in the term client/server.
· Middleware resembles the plumbing system in your home: it
connects important objects in a way that requires little or no
attention.
The bottom line is that a hard and fast definition isn’t all that
important. If you grasp the concept of middleware, you’ll be
able to handle the development tools and learn the techniques
required in your IT environment.
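For instance, a middleware routine that updates a legacy system from a Web form might look like the following sketch. The fixed-width record layout and field names are invented for illustration; a real legacy interface would dictate its own format:

```python
# Hypothetical legacy accounting system that accepts only
# fixed-width 20-character text records.
legacy_ledger = []

def legacy_post(record):
    """Legacy interface: rejects anything but a 20-character record."""
    assert len(record) == 20, "legacy system requires 20-char records"
    legacy_ledger.append(record)

def middleware_update(form_data):
    """Middleware: accept a modern dictionary from a Web form and
    translate it into the fixed-width format the legacy system expects."""
    # Customer number right-aligned in 8 columns; amount in cents in 12.
    record = (f"{form_data['customer_no']:>8}"
              f"{int(round(form_data['amount'] * 100)):>12}")
    legacy_post(record)

middleware_update({"customer_no": "4471", "amount": 39.95})
print(repr(legacy_ledger[0]))  # '    4471        3995'
```

Neither side changes: the Web form keeps sending modern data structures, the legacy system keeps receiving the records it has always used, and the middleware is the glue between them.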
Cost-Benefit Issues
To support business requirements, information systems need to
be scalable, powerful, and flexible. For most companies,
client/server systems offer the best combination of features to
meet those needs. Whether a business is expanding or
downsizing, client/server systems enable the firm to scale the
system in a rapidly changing environment. As the size of the
business changes, it is easier to adjust the number of clients and
the processing functions they perform than it is to alter the
capability of a large-scale central server.
Client/server computing also allows companies to transfer
applications from expensive mainframes to less-expensive client
platforms. In addition, using common languages such as SQL,
clients and servers can communicate across multiple platforms.
That difference is important because many businesses have
substantial investments in a variety of hardware and software
environments.
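Python's built-in sqlite3 module can illustrate the point: the client issues a standard SQL statement, and the DBMS performs the processing and returns only the summarized result. The sales table and figures here are invented:

```python
import sqlite3

# In-memory database standing in for the server's DBMS.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("East", 500.0), ("West", 300.0), ("East", 200.0)])

# The client sends a standard SQL request; the DBMS does the
# processing and returns only the single summarized value.
total = con.execute(
    "SELECT SUM(amount) FROM sales WHERE region = ?", ("East",)
).fetchone()[0]
print(total)  # 700.0
con.close()
```

Because the statement is standard SQL, the same request could be sent to any DBMS that speaks the language, regardless of the hardware platform on either end.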
Finally, client/server systems reduce network load and improve
response times. For example, consider a user at a company
headquarters who wants information about total sales figures. In
a client/server system, the server locates the data, performs the
necessary processing, and responds immediately to the client’s
request. The data retrieval and processing functions are
transparent to the client because they are done on the server, not
the client.
Performance Issues
While it provides many advantages, client/server architecture
does involve performance issues that relate to the separation of
server-based data and networked clients that must access the
data.
Consider the difference between client/server design and a
centralized environment, where a server-based program issues a
command that is executed by the server’s own CPU. Processing
speed is enhanced because program instructions and data both
travel on an internal system bus, which moves data more
efficiently than an external network.
In contrast to the centralized system, a client/server design
separates applications and data. Networked clients submit data
requests to the server, which responds by sending data back to
the clients. When the number of clients and the demand for
services increases beyond a certain level, network capacity
becomes a constraint, and system performance declines
dramatically.
In the article shown in Figure 10-15, IBM states that the
performance characteristics of a client/server system are not the
same as a centralized processing environment. Client/server
response times increase gradually as more requests are made,
but then rise dramatically when the system nears its capacity.
This point is called the knee of the curve, because it marks a
sharp decline in the system’s speed and efficiency. To deliver
and maintain acceptable performance, system developers must
anticipate the number of users, network traffic, server size and
location, and design a client/server architecture that can support
current and future business needs.
What is the answer to enhancing client/server performance?
According to IBM, client/server systems must be designed so
the client contacts the server only when necessary and makes as
few trips as possible.
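The effect of minimizing trips can be demonstrated with a toy client that counts round trips: fetching every row individually takes over a hundred "trips," while asking the server to aggregate takes one (a simplified sketch, not IBM's actual example; the table and figures are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, 10.0) for i in range(100)])

trips = 0
def query(sql, args=()):
    """Each call stands for one trip across the network."""
    global trips
    trips += 1
    return conn.execute(sql, args).fetchall()

# Chatty design: one trip to list the orders, then one trip per row.
ids = [row[0] for row in query("SELECT id FROM orders")]
chatty_total = sum(query("SELECT amount FROM orders WHERE id = ?", (i,))[0][0]
                   for i in ids)
chatty_trips = trips  # 101 trips

# Server-side aggregation: a single trip returns the same answer.
trips = 0
(batched_total,) = query("SELECT SUM(amount) FROM orders")[0]

print(chatty_trips, trips)  # 101 1
```

Both designs compute the same total, but the chatty version loads the network with a hundred extra request/response cycles, which is what pushes a real system toward the knee of the curve.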
Another issue that affects client/server performance is data
storage. Just as processing can be done at various places, data
can be stored in more than one location using a distributed
database management system (DDBMS).
Using a DDBMS offers several advantages: Data stored closer
to users can reduce network traffic; the system is scalable, so
new data sites can be added without reworking the system
design; and with data stored in various locations, the system is
less likely to experience a catastrophic failure. A potential
disadvantage of distributed data storage involves data security.
It can be more difficult to maintain controls and standards when
data is stored in various locations. In addition, the architecture
of a DDBMS is more complex and difficult to manage. From a
system design standpoint, the challenge is that companies often
want it both ways — they want the control that comes with
centralization and the flexibility associated with
decentralization.
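The traffic-reduction benefit of a DDBMS can be pictured with a toy query router that sends each read to the replica nearest the requesting office (the site names and distance table below are invented for illustration):

```python
# Hypothetical replica sites and network "distance" (e.g., hop count)
# from each office to each site.
DISTANCES = {
    "Chicago": {"us-east": 2, "eu-west": 9},
    "London":  {"us-east": 8, "eu-west": 1},
}

def nearest_site(office):
    """Route a read to the closest replica to cut network traffic."""
    hops = DISTANCES[office]
    return min(hops, key=hops.get)

print(nearest_site("Chicago"))  # us-east
print(nearest_site("London"))   # eu-west
```

A real DDBMS layers replication, synchronization, and failover on top of this routing decision, which is where the added complexity and security challenges come from.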
FIGURE 10-15 According to IBM, client/server response times
increase gradually, and then rise dramatically when the system
nears its capacity. That point is referred to as the knee of the
curve.
© Copyright IBM Corporation 1994, 2012.
THE IMPACT OF THE INTERNET
The Internet has had an enormous impact on system
architecture. The Internet has become more than a
communication channel — many IT observers see it as a
fundamentally different environment for system development.
Recall that in a traditional client/server system, the client
handles the user interface, as shown in Figure 10-14 on the
previous page, and the server (or servers in a multi-tier system)
handles the data and application logic. In a sense, part of the
system runs on the client, part on the server. In contrast, in an
Internet-based architecture, in addition to data and application
logic, the entire user interface is provided by the Web server in
the form of HTML documents that are displayed by the client’s
browser. Shifting the responsibility for the interface from the
client to the server simplifies data transmission and results in
lower hardware cost and complexity.
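A minimal sketch of this idea, using Python's standard library, shows a server that delivers the entire interface as HTML so the client needs nothing but a browser (the page content is invented):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class UIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The whole user interface travels to the client as an HTML
        # document; the browser merely renders it.
        body = b"<html><body><h1>Order Entry</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def make_server(port=0):
    """Port 0 lets the OS pick a free port."""
    return HTTPServer(("localhost", port), UIHandler)
```

Calling `make_server(8080).serve_forever()` and browsing to `http://localhost:8080` would display the page; no application code is installed on the client.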
The advantages of Internet-based architecture have changed
fundamental ideas about how computer systems should be
designed, and we are moving rapidly to a total online
environment. At the same time, millions of people are using
Web-based collaboration and social networking applications to
accomplish tasks that used to be done in person, over the phone,
or by more traditional Internet channels. The following sections
examine cloud computing and Web 2.0. It is important to
understand these trends, which are shaping the IT industry’s
future.
Cloud Computing
Cloud computing takes its name from the cloud symbol that often is
used to represent the Internet. The cloud computing concept envisions a
cloud of remote computers that provide a total online software
and data environment that is hosted by third parties. For
example, a user’s computer does not perform processing or
computing tasks — the cloud does. This concept is in contrast
to today’s computing model, which is based on networks that
strategically distribute processing and data across the
enterprise. In a sense, the cloud of computers acts as one giant
computer that performs tasks for users.
Figure 10-16 shows users connected to the cloud, which
performs the computing work. Instead of requiring specific
hardware and software on the user’s computer, cloud computing
spreads the workload to powerful remote systems that are part
of the cloud. The user appears to be working on a local system,
but all computing is actually performed in the cloud. No updates
or maintenance are required of the user, and because the Internet
itself is the platform, compatibility issues are effectively
eliminated. This architecture also
provides scaling on demand, which matches resources to needs
at any given time. For example, during peak loads, additional
cloud servers might come on line automatically to support the
workload.
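Scaling on demand can be sketched as a simple rule that matches server count to the current load (the per-server capacity and headroom figures below are hypothetical):

```python
import math

CAPACITY_PER_SERVER = 100  # hypothetical requests/second one server absorbs

def servers_needed(load, head_room=0.2):
    """Bring enough servers online to cover the load plus 20% headroom."""
    return max(1, math.ceil(load * (1 + head_room) / CAPACITY_PER_SERVER))

print(servers_needed(50))   # 1 server handles a quiet period
print(servers_needed(450))  # 6 servers come on line for a peak
```

Real cloud platforms apply rules of this kind automatically, adding and releasing servers as traffic rises and falls.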
FIGURE 10-16 The explosive growth of cloud computing has
attracted many firms that fight hard for market share.
© Cengage Learning 2014
Cloud computing is an ideal platform for powerful Software as a
Service (SaaS) applications. As you learned in Chapter 7, SaaS
is a popular deployment method where software is not
purchased but is paid for as a service, much like one pays for
electricity or cable TV each month. In this architecture, updates
and changes to services can be easily made by service providers
without involving the users.
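The pay-as-you-go idea can be compared with a one-time purchase in a few lines (all prices below are hypothetical):

```python
def saas_cost(monthly_fee, months):
    """Cumulative subscription cost, much like an electric bill."""
    return monthly_fee * months

PURCHASE_PRICE = 3000  # hypothetical one-time license cost

print(saas_cost(100, 24))  # 2400 -- less than buying, over two years
print(saas_cost(100, 36))  # 3600 -- more than buying, over three
```

The crossover point is one reason a TCO analysis, rather than the sticker price alone, should drive the build-versus-subscribe decision.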
Even though cloud computing has tremendous advantages, some
concerns exist. First, cloud computing requires significantly
more bandwidth (the amount of data that can be transferred in a
fixed time period) than traditional client/server networks.
Second, because cloud computing is Internet-based, if a user’s
Internet connection becomes unavailable, he or she will be
unable to access any cloud-based services. In addition, there are
security concerns associated with sending large amounts of data
over the Internet, as well as concerns about storing it securely.
Finally, there is the issue of control. Because a service provider
hosts the resources and manages data storage and access, the
provider has complete control of the system. Many firms are
wary of handing over control of mission-critical data and
systems to a third-party provider.
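The bandwidth concern is easy to quantify: transfer time is simply data size divided by link speed. A quick sketch, with illustrative figures:

```python
def transfer_seconds(megabytes, megabits_per_second):
    """Time to move a file: convert MB to megabits, divide by link speed."""
    return megabytes * 8 / megabits_per_second

# Moving a 500 MB dataset to the cloud over a 20 Mbps uplink:
print(transfer_seconds(500, 20))    # 200.0 seconds
# The same file across a 1000 Mbps office LAN:
print(transfer_seconds(500, 1000))  # 4.0 seconds
```

A workload that shuttles large files to and from the cloud can therefore feel far slower than the same work done on a local network.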
Future technology advances will make cloud computing even
more feasible, desirable, and secure. As the IT industry moves
toward a Web-based architecture, cloud computing will be
marketed aggressively and growth will be rapid. Figure 10-16
lists some of the major players in the cloud market. Although
the list will change from month to month, or week to week, one
thing will not change: cloud computing will be a cornerstone of
IT growth in the coming decade.
Web 2.0
The shift to Internet-based collaboration has been so powerful
and compelling that it has been named Web 2.0. Web 2.0 is not
a reference to a more technically advanced version of the
current Web. Rather, Web 2.0 envisions a second generation of
the Web that will enable people to collaborate, interact, and
share information more dynamically.
Leading Web 2.0 author Tim O’Reilly has suggested that the
strong interest in Web 2.0 is driven by the concept of the
Internet as a platform. O’Reilly sees future Web 2.0
applications delivering software as a continuous service with no
limitations on the number of users that can connect or how users
can consume, modify, and exchange data.
Social networking sites such as Facebook, Twitter, and
LinkedIn are seeing explosive growth in the Web 2.0
environment. Another form of social collaboration is called a
wiki. A wiki is a Web-based repository of information that
anyone can access, contribute to, or modify. In a sense, a wiki
represents the collective knowledge of a group of people. One
of the best-known wikis is Wikipedia.org, but smaller-scale
wikis are growing rapidly at businesses, schools, and other
organizations that want to compile and share information.
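The core of a wiki, a shared page store that anyone can read or revise, fits in a few lines (a toy model for illustration, not how Wikipedia is actually implemented):

```python
from collections import defaultdict

class Wiki:
    """Each page keeps its full revision history; any user may edit."""
    def __init__(self):
        self.history = defaultdict(list)  # title -> list of (author, text)

    def edit(self, title, author, text):
        self.history[title].append((author, text))

    def read(self, title):
        return self.history[title][-1][1]  # the latest revision wins

w = Wiki()
w.edit("Compost", "betty", "Compost is decayed organic matter.")
w.edit("Compost", "sam",
       "Compost is decayed organic matter used as fertilizer.")
print(w.read("Compost"))
print(len(w.history["Compost"]))  # 2 revisions retained
```

Keeping every revision is what lets a wiki community refine an article collectively while still being able to review or undo any change.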
One of the goals of Web 2.0 is to enhance creativity,
interaction, and shared ideas. In this regard, the Web 2.0
concept resembles the agile development process and the open-
source software movement. Web 2.0 communities and services
are based on a body of data created by users. As users
collaborate, new layers of information are added in an overall
environment known as the Internet operating system. These
layers can contain text, sound bites, images, and video clips
that are shared with the user community.
E-COMMERCE ARCHITECTURE
The huge expansion of Web-based commerce is reshaping the IT
landscape. Internet business solutions must be efficient,
reliable, and cost-effective. When planning an e-commerce
architecture, analysts can examine in-house development,
packaged solutions, and service providers. The following
sections discuss these options.
In-House Solutions
In Chapter 7, you learned how to analyze advantages and
disadvantages of in-house development versus purchasing a
software package. The same basic principles apply to system
design.
If you decide to proceed with an in-house solution, you must
have an overall plan to help achieve your goals. How should
you begin? Figure 10-17 offers guidelines for companies
developing e-commerce strategies. An in-house solution usually
requires a greater initial investment, but provides more
flexibility for a company that must adapt quickly in a dynamic
e-commerce environment. By working in-house, a company has
more freedom to integrate with customers and suppliers and is
less dependent on vendor-specific solutions.
FIGURE 10-17 Guidelines for companies developing e-
commerce strategies.
© Cengage Learning 2014
For smaller companies, the decision about in-house Web
development is even more critical, because this approach will
require financial resources and management attention that many
small companies might be unable or unwilling to commit. An
in-house strategy, however, can provide valuable benefits,
including the following:
· A unique Web site, with a look and feel consistent with the
company’s other marketing efforts
· Complete control over the organization of the site, the number
of pages, and the size of the files
· A scalable structure to handle increases in sales and product
offerings in the future
· More flexibility to modify and manage the site as the company
changes
· The opportunity to integrate the firm’s Web-based business
systems with its other information systems, creating the
potential for more savings and better customer service
Whether a firm uses an in-house or a packaged design, the
decision about Web hosting is a separate issue. Although
internal hosting has some advantages, such as greater control
and security, the expense would be much greater, especially for
a small- to medium-sized firm.
CASE IN POINT 10.2: SMALL POTATOES, INC.
Small Potatoes is a family-operated seed business that has
grown rapidly. Small Potatoes specializes in supplying home
gardeners with the finest seeds and gardening supplies. Until
now, the firm has done all its business by placing ads in
gardening and health magazines, and taking orders using a toll-
free telephone number.
Now, the family has decided to establish a Web site and sell
online, but there is some disagreement about the best way to
proceed. Some say it would be better to develop the site on their
own, and Betty Lou Jones, a recent computer science graduate,
believes she can handle the task. Others, including Sam Jones,
Betty’s grandfather, feel it would be better to outsource the site
and focus on the business itself. Suppose the family asked for
your opinion. What would you say? What additional questions
would you ask?
FIGURE 10-18 Intershop offers software solutions for smaller
companies that want to get an e-business up and running
quickly.
© 2009–2012 Intershop Communications AG
Packaged Solutions
  • 10. include a corporate portal. A portal is an entrance to a multifunction Web site. After entering a portal, a user can navigate to a destination using various tools and features provided by the portal designer. A corporate portal can provide access for customers, employees, suppliers, and the public. A well-designed portal can integrate with various other systems and provide a consistent look and feel. SYSTEM ARCHITECTURE: THEN AND NOW Every business information system must carry out three main functions: · Manage applications that perform the processing logic. · Handle data storage and access. · Provide an interface that allows users to interact with the system. Depending on the architecture, the three functions are performed on a server, on a client, or are divided between the server and the client. As you plan the design, you must determine where the functions will be carried out and the advantages and disadvantages of each design approach. Mainframe Architecture A server is a computer that supplies data, processing services, or other support to one or more computers, called clients. The earliest servers were mainframe computers, and a system design where the server performs all the processing sometimes is described as mainframe architecture. Although the actual server does not have to be a mainframe, the term mainframe architecture typically describes a multiuser environment where the server is significantly more powerful than the clients. A systems analyst should know the history of mainframe architecture to understand the server’s role in modern system design. In the 1960s, mainframe architecture was the only choice. In addition to centralized data processing, the earliest systems performed all data input and output at a central location, often called a data processing center. Physical data was delivered or transmitted in some manner to the data processing center, where
  • 11. it was entered into the system. Users in the organization had no input or output capability, except for printed reports that were distributed by a corporate IT department. As network technology advanced, companies installed terminals at remote locations, so that users could enter and access data from anywhere in the organization, regardless of where the centralized computer was located. A terminal included a keyboard and display screen to handle input and output, but lacked independent processing capability. In a centralized design, as shown in Figure 10-6, the remote user’s keystrokes are transmitted from his or her terminal to the mainframe, which responds by sending screen output back to the user’s screen. Today, mainframe architecture still is used in industries that require large amounts of processing that can be done at a central location. For example, a bank might use mainframe servers to update customer balances each night. In a blend of old and new technology, an Internet-based retail operation might use mainframe architecture at a customer service center that fulfills its online sales as shown in Figure 10-7 on the next page. FIGURE 10-6 In a centralized design, the remote user’s keystrokes are transmitted to the mainframe, which responds by sending screen output back to the user’s screen. © Cengage Learning 2014 Impact of the Personal Computer When PC technology exploded in the 1990s, powerful microcomputers quickly appeared on corporate desktops. Users found that they could run their own word processing, spreadsheet, and database applications, without assistance from the IT group, in a mode called stand-alone computing. Before long, companies linked the stand-alone computers into networks that enabled the user clients to exchange data and perform local processing. When an individual user works in stand-alone mode, the workstation performs all the functions of a server by storing, accessing, and processing data, as well as providing a user
interface. Although stand-alone PCs improved employee productivity and allowed users to perform tasks that previously required IT department assistance, stand-alone computing was inefficient and expensive. Even worse, maintaining data on individual workstations raised major concerns about data security, integrity, and consistency. Without a central storage location, it was impossible to protect and back up valuable business data, and companies were exposed to enormous risks. In some cases, users who were frustrated by a lack of support and services from the IT department created and managed their own databases. In addition to security concerns, this led to data inconsistency and unreliability. FIGURE 10-7 Internet-based retail operations such as Amazon.com use customer service centers to fulfill online sales. © Bloomberg via Getty Images Network Evolution As technology became available, companies resolved the problems of stand-alone computing by joining clients into a local area network (LAN) that allows sharing of data and hardware resources, as shown in Figure 10-8. One or more LANs, in turn, can connect to a centralized server. Further advances in technology made it possible to create powerful networks that could use satellite links, high-speed fiber-optic lines, or the Internet to share data. A wide area network (WAN) spans long distances and can connect LANs that are continents apart, as shown in Figure 10-9. When a user accesses data on a LAN or WAN, the network is transparent because a user sees the data as if it were stored on his or her own workstation. Company-wide systems that connect one or more LANs or WANs are called distributed systems. The capabilities of a distributed system depend on the power and capacity of the underlying data communication network. Compared to mainframe architecture, distributed systems increase concerns about data security and integrity because many individual clients require access to perform processing.
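At the heart of any distributed system is a simple network exchange: a client sends a request to a server, the server does the work, and only the result travels back across the network. A minimal sketch of that exchange over a local TCP socket (the sales figures and region names are invented for illustration):

```python
# Minimal client/server exchange over a local TCP socket: the client
# sends a small request, the server processes it, and only the result
# crosses the network -- never the underlying data store itself.
import socket
import threading

SALES = {"east": 50000, "west": 72500}  # stands in for server-side data storage

def serve_once(server_sock):
    """Server side: accept one connection, process one request."""
    conn, _ = server_sock.accept()
    with conn:
        region = conn.recv(1024).decode()   # the request
        total = SALES.get(region, 0)        # processing happens on the server
        conn.sendall(str(total).encode())   # only the result is returned

def query_total(region: str) -> int:
    """Client side: connect, send a request, read the reply."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))           # OS picks a free local port
    server.listen(1)
    threading.Thread(target=serve_once, args=(server,), daemon=True).start()
    with socket.socket() as client:
        client.connect(server.getsockname())
        client.sendall(region.encode())
        return int(client.recv(1024).decode())

print(query_total("east"))  # prints 50000
```

The sketch runs both roles on one machine for convenience; in a real LAN or WAN the server would listen on a well-known address and serve many clients concurrently.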
  • 13. FIGURE 10-8 A LAN allows sharing of data and hardware, such as printers and scanners. © Cengage Learning 2014 CLIENT/SERVER DESIGNS Today’s interconnected world requires an information architecture that spans the entire enterprise. Whether you are dealing with a departmental network or a multinational corporation, as a systems analyst you will work with a distributed computing strategy called client/server architecture. FIGURE 10-9 A WAN can connect many LANs and link users who are continents apart. © Cengage Learning 2014 Overview Although no standard definition exists, the term client/server architecture generally refers to systems that divide processing between one or more networked clients and a central server. In a typical client/server system, the client handles the entire user interface, including data entry, data query, and screen presentation logic. The server stores the data and provides data access and database management functions. Application logic is divided in some manner between the server and the clients. In a client/server interaction, the client submits a request for information from the server, which carries out the operation and responds to the client. As shown in Figure 10-10 the data file is not transferred from the server to the client — only the request and the result are transmitted across the network. To fulfill a request from a client, the server might contact other servers for data or processing support, but that process is transparent to the client. The analogy can be made to a restaurant where the customer gives an order to a server, who relays the request to a cook, who actually prepares the meal. Figure 10-11 on the next page lists some major differences between client/server and traditional mainframe systems. Many early client/server systems did not produce expected savings
  • 14. because few clear standards existed, and development costs often were higher than anticipated. Implementation was expensive because clients needed powerful hardware and software to handle shared processing tasks. In addition, many companies had an installed base of data, called legacy data, which was difficult to access and transport to a client/server environment. FIGURE 10-10 In a client/server design, data is stored and usually processed on the server. © Cengage Learning 2014 As large-scale networks grew more powerful, client/server systems became more cost-effective. Many companies invested in client/server systems to achieve a unique combination of computing power, flexibility, and support for changing business operations. Today, client/server architecture is the dominant form of systems design, using Internet protocols and network models such as the ones described on pages 426–430. As businesses form new alliances with customers and suppliers, the client/server concept continues to expand to include clients and servers outside the organization. Cloud computing, which is discussed later in this chapter, is seen by some observers as an entirely new concept. Others see it as the ultimate form of client/server architecture, where Internet-based computing becomes the server part of client/server and handles processing tasks, while the Internet itself becomes the platform that replaces traditional networks. The bottom line is that it doesn’t matter whether cloud computing is part of a client/server evolution, or a whole new way of thinking about computing. Either way, successful systems must support business requirements, and system architecture is an important step in the systems development process. FIGURE 10-11 Comparison of the characteristics of client/server and mainframe systems.
  • 15. © Cengage Learning 2014 The Client’s Role The client/server relationship must specify how the processing will be divided between the client and the server. A fat client, also called a thick client, design locates all or most of the application processing logic at the client. A thin client design locates all or most of the processing logic at the server. What are the advantages and disadvantages of each design? Most IT experts agree that thin client designs provide better performance, because program code resides on the server, near the data. In contrast, a fat client handles more of the processing and must access and update the data more often. Compared with maintaining a central server, fat client TCO also is higher, because of initial hardware and software requirements and the ongoing expense of supporting and updating remote client computers. A fat client design, however, is simpler and less expensive to develop, because the architecture resembles traditional file server designs where all processing is performed at the client. Figure 10-12 compares the characteristics of fat and thin clients. Client/Server Tiers Early client/server designs were called two-tier designs. In a two-tier design, the user interface resides on the client, all data resides on the server, and the application logic can run either on the server or on the client, or be divided between the client and the server. More recently, another form of client/server design, called a three-tier design, has become popular. In a three-tier design, the user interface runs on the client and the data is stored on the server, just as with a two-tier design. A three-tier design also has a middle layer between the client and server that processes the client requests and translates them into data access commands that can be understood and carried out by the server, as shown in Figure 10-13. 
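The division of labor in a three-tier design can be pictured as three functions, one per tier, where each tier talks only to the layer beneath it. This is a simplified sketch with invented inventory data, not a production pattern:

```python
# Three-tier sketch: user interface -> application logic -> data access.
# Each tier communicates only with the tier directly below it.

INVENTORY = {"SKU-1": 12, "SKU-2": 0}   # data tier: illustrative stored data

def data_tier(sku):
    """Data tier: fetch the raw record from storage."""
    return INVENTORY.get(sku)

def application_tier(sku):
    """Middle tier: translate the request and apply business logic."""
    qty = data_tier(sku)
    if qty is None:
        return "unknown item"
    return "in stock" if qty > 0 else "out of stock"

def presentation_tier(sku):
    """Client tier: format the answer for display to the user."""
    return f"{sku}: {application_tier(sku)}"

print(presentation_tier("SKU-1"))  # prints SKU-1: in stock
```

Note that the presentation tier never touches INVENTORY directly; the middle tier owns the business rule (what "in stock" means), which is exactly the role the application server plays in a three-tier design.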
You can think of the middle layer as an application server, because it provides the application logic, or business logic, required by the system. Three-tier designs
  • 16. also are called n-tier designs, to indicate that some designs use more than one intermediate layer. FIGURE 10-12 Characteristics of fat and thin clients. © Cengage Learning 2014 The advantage of the application logic layer is that a three-tier design enhances overall performance by reducing the data server’s workload. The separate application logic layer also relieves clients of complex processing tasks. Because it can run on a minicomputer that is much more powerful than the typical client workstations, the middle layer is more efficient and cost- effective in large-scale systems. Figure 10-14 on the next page shows where the data, the application logic, and the user interface are located on various architectures. In a client/server system, the tiers communicate using software called middleware, which is described in the following section. FIGURE 10-13 Characteristics of two-tier versus three-tier client/server design. © Cengage Learning 2014 Middleware A recent Internet search for the phrase “What is Middleware” returned 80,500 sites. Unfortunately, the term middleware means different things to different people. So, what is middleware? FIGURE 10-14 The location of the data, the application logic, and the user interface depend on the type of architecture. © Cengage Learning 2014 In a multi-tier system, special software called middleware enables the tiers to communicate and pass data back and forth. Here are some other definitions you might encounter: · Middleware offers an interface to connect software and hardware. · Middleware can integrate legacy systems and Web-based applications. For example, when a user enters a customer
number on a Web form, middleware can update a legacy accounting system. · Middleware is like glue that holds different applications together. · Middleware represents the slash in the term client/server. · Middleware resembles the plumbing system in your home: it connects important objects in a way that requires little or no attention. The bottom line is that a hard and fast definition isn’t all that important. If you grasp the concept of middleware, you’ll be able to handle the development tools and learn the techniques required in your IT environment. Cost-Benefit Issues To support business requirements, information systems need to be scalable, powerful, and flexible. For most companies, client/server systems offer the best combination of features to meet those needs. Whether a business is expanding or downsizing, client/server systems enable the firm to scale the system in a rapidly changing environment. As the size of the business changes, it is easier to adjust the number of clients and the processing functions they perform than it is to alter the capability of a large-scale central server. Client/server computing also allows companies to transfer applications from expensive mainframes to less-expensive client platforms. In addition, using common languages such as SQL, clients and servers can communicate across multiple platforms. That difference is important because many businesses have substantial investments in a variety of hardware and software environments. Finally, client/server systems reduce network load and improve response times. For example, consider a user at a company headquarters who wants information about total sales figures. In a client/server system, the server locates the data, performs the necessary processing, and responds immediately to the client’s request. The data retrieval and processing functions are transparent to the client because they are done on the server, not
  • 18. the client. Performance Issues While it provides many advantages, client/server architecture does involve performance issues that relate to the separation of server-based data and networked clients that must access the data. Consider the difference between client/server design and a centralized environment, where a server-based program issues a command that is executed by the server’s own CPU. Processing speed is enhanced because program instructions and data both travel on an internal system bus, which moves data more efficiently than an external network. In contrast to the centralized system, a client/server design separates applications and data. Networked clients submit data requests to the server, which responds by sending data back to the clients. When the number of clients and the demand for services increases beyond a certain level, network capacity becomes a constraint, and system performance declines dramatically. In the article shown in Figure 10-15, IBM states that the performance characteristics of a client/server system are not the same as a centralized processing environment. Client/server response times increase gradually as more requests are made, but then rise dramatically when the system nears its capacity. This point is called the knee of the curve, because it marks a sharp decline in the system’s speed and efficiency. To deliver and maintain acceptable performance, system developers must anticipate the number of users, network traffic, server size and location, and design a client/server architecture that can support current and future business needs. What is the answer to enhancing client/server performance? According to IBM, client/server systems must be designed so the client contacts the server only when necessary and makes as few trips as possible. Another issue that affects client/server performance is data storage. Just as processing can be done at various places, data
  • 19. can be stored in more than one location using a distributed database management system (DDBMS). Using a DDBMS offers several advantages: Data stored closer to users can reduce network traffic; the system is scalable, so new data sites can be added without reworking the system design; and with data stored in various locations, the system is less likely to experience a catastrophic failure. A potential disadvantage of distributed data storage involves data security. It can be more difficult to maintain controls and standards when data is stored in various locations. In addition, the architecture of a DDBMS is more complex and difficult to manage. From a system design standpoint, the challenge is that companies often want it both ways — they want the control that comes with centralization and the flexibility associated with decentralization. FIGURE 10-15 According to IBM, client/server response times increase gradually, and then rise dramatically when the system nears its capacity. That point is referred to as the knee of the curve. © Copyright IBM Corporation 1994, 2012. THE IMPACT OF THE INTERNET The Internet has had an enormous impact on system architecture. The Internet has become more than a communication channel — many IT observers see it as a fundamentally different environment for system development. Recall that in a traditional client/server system, the client handles the user interface, as shown in Figure 10-14 on the previous page, and the server (or servers in a multi-tier system) handles the data and application logic. In a sense, part of the system runs on the client, part on the server. In contrast, in an Internet-based architecture, in addition to data and application logic, the entire user interface is provided by the Web server in the form of HTML documents that are displayed by the client’s browser. Shifting the responsibility for the interface from the client to the server simplifies data transmission and results in
  • 20. lower hardware cost and complexity. The advantages of Internet-based architecture have changed fundamental ideas about how computer systems should be designed, and we are moving rapidly to a total online environment. At the same time, millions of people are using Web-based collaboration and social networking applications to accomplish tasks that used to be done in person, over the phone, or by more traditional Internet channels. The following sections examine cloud computing and Web 2.0. It is important to understand these trends, which are shaping the IT industry’s future. Cloud Computing Cloud computing refers to the cloud symbol that often is used to represent the Internet. The cloud computing concept envisions a cloud of remote computers that provide a total online software and data environment that is hosted by third parties. For example, a user’s computer does not perform processing or computing tasks — the cloud does. This concept is in contrast to today’s computing model, which is based on networks that strategically distribute processing and data across the enterprise. In a sense, the cloud of computers acts as one giant computer that performs tasks for users. Figure 10-16 shows users connected to the cloud, which performs the computing work. Instead of requiring specific hardware and software on the user’s computer, cloud computing spreads the workload to powerful remote systems that are part of the cloud. The user appears to be working on a local system, but all computing is actually performed in the cloud. No updates or maintenance are required of the user, and there are no compatibility issues. Cloud computing effectively eliminates compatibility issues, because the Internet itself is the platform. This architecture also provides scaling on demand, which matches resources to needs at any given time. For example, during peak loads, additional cloud servers might come on line automatically to support the workload.
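Scaling on demand can be pictured as a simple control rule: compare the current load to the installed capacity, then add or release servers to keep utilization in a target band. A toy sketch of such a rule (the per-server capacity and the 30%/80% thresholds are invented for illustration; real cloud platforms use far more sophisticated policies):

```python
# Toy autoscaling rule: keep utilization between 30% and 80% by
# adding or releasing cloud servers. All numbers are illustrative.
REQUESTS_PER_SERVER = 1000   # assumed capacity of a single server

def servers_needed(current_servers: int, request_rate: float) -> int:
    """Return the server count after applying one scaling decision."""
    utilization = request_rate / (current_servers * REQUESTS_PER_SERVER)
    if utilization > 0.80:              # peak load: bring another server online
        return current_servers + 1
    if utilization < 0.30 and current_servers > 1:
        return current_servers - 1      # light load: release a server
    return current_servers              # load is in the target band

print(servers_needed(4, 3500))  # 3500/4000 = 87.5% busy, so scale out to 5
```

The same rule running in reverse during quiet periods is what lets a cloud customer pay only for the capacity actually in use.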
  • 21. FIGURE 10-16 The explosive growth of cloud computing has attracted many firms that fight hard for market share. © Cengage Learning 2014 Cloud computing is an ideal platform for powerful Software as a Service (SaaS) applications. As you learned in Chapter 7, SaaS is a popular deployment method where software is not purchased but is paid for as a service, much like one pays for electricity or cable TV each month. In this architecture, updates and changes to services can be easily made by service providers without involving the users. Even though cloud computing has tremendous advantages, some concerns exist. First, cloud computing requires significantly more bandwidth (the amount of data that can be transferred in a fixed time period) than traditional client/server networks. Second, because cloud computing is Internet-based, if a user’s Internet connection becomes unavailable, he or she will be unable to access any cloud-based services. In addition, there are security concerns associated with sending large amounts of data over the Internet, as well as concerns about storing it securely. Finally, there is the issue of control. Because a service provider hosts the resources and manages data storage and access, the provider has complete control of the system. Many firms are wary of handing over control of mission-critical data and systems to a third-party provider. Future technology advances will make cloud computing even more feasible, desirable, and secure. As the IT industry moves toward a Web-based architecture, cloud computing will be marketed aggressively and growth will be rapid. Figure 10-16 lists some of the major players in the cloud market. Although the list will change from month to month, or week to week, one thing will not change: cloud computing will be a cornerstone of IT growth in the coming decade. Web 2.0 The shift to Internet-based collaboration has been so powerful and compelling that it has been named Web 2.0. Web 2.0 is not
  • 22. a reference to a more technically advanced version of the current Web. Rather, Web 2.0 envisions a second generation of the Web that will enable people to collaborate, interact, and share information more dynamically. Leading Web 2.0 author Tim O’Reilly has suggested that the strong interest in Web 2.0 is driven by the concept of the Internet as a platform. O’Reilly sees future Web 2.0 applications delivering software as a continuous service with no limitations on the number of users that can connect or how users can consume, modify, and exchange data. Social networking sites, such as Facebook, Twitter, and LinkedIn are seeing explosive growth in the Web 2.0 environment. Another form of social collaboration is called a wiki. A wiki is a Web-based repository of information that anyone can access, contribute to, or modify. In a sense, a wiki represents the collective knowledge of a group of people. One of the best-known wikis is Wikipedia.org, but smaller-scale wikis are growing rapidly at businesses, schools, and other organizations that want to compile and share information. One of the goals of Web 2.0 is to enhance creativity, interaction, and shared ideas. In this regard, the Web 2.0 concept resembles the agile development process and the open- source software movement. Web 2.0 communities and services are based on a body of data created by users. As users collaborate, new layers of information are added in an overall environment known as the Internet operating system. These layers can contain text, sound bytes, images, and video clips that are shared with the user community. E-COMMERCE ARCHITECTURE The huge expansion of Web-based commerce is reshaping the IT landscape. Internet business solutions must be efficient, reliable, and cost-effective. When planning an e-commerce architecture, analysts can examine in-house development, packaged solutions, and service providers. The following sections discuss these options. In-House
Solutions In Chapter 7, you learned how to analyze advantages and disadvantages of in-house development versus purchasing a software package. The same basic principles apply to system design. If you decide to proceed with an in-house solution, you must have an overall plan to help achieve your goals. How should you begin? Figure 10-17 offers guidelines for companies developing e-commerce strategies. An in-house solution usually requires a greater initial investment, but provides more flexibility for a company that must adapt quickly in a dynamic e-commerce environment. By working in-house, a company has more freedom to integrate with customers and suppliers and is less dependent on vendor-specific solutions. FIGURE 10-17 Guidelines for companies developing e-commerce strategies. © Cengage Learning 2014 For smaller companies, the decision about in-house Web development is even more critical, because this approach will require financial resources and management attention that many
  • 24. small companies might be unable or unwilling to commit. An in-house strategy, however, can provide valuable benefits, including the following: · A unique Web site, with a look and feel consistent with the company’s other marketing efforts · Complete control over the organization of the site, the number of pages, and the size of the files · A scalable structure to handle increases in sales and product offerings in the future · More flexibility to modify and manage the site as the company changes · The opportunity to integrate the firm’s Web-based business systems with its other information systems, creating the potential for more savings and better customer service Whether a firm uses an in-house or a packaged design, the decision about Web hosting is a separate issue. Although internal hosting has some advantages, such as greater control and security, the expense would be much greater, especially for a small- to medium-sized firm. CASE IN POINT 10.2: SMALL POTATOES, INC. Small Potatoes is a family-operated seed business that has grown rapidly. Small Potatoes specializes in supplying home gardeners with the finest seeds and gardening supplies. Until now, the firm has done all its business by placing ads in gardening and health magazines, and taking orders using a toll-
  • 25. free telephone number. Now, the family has decided to establish a Web site and sell online, but there is some disagreement about the best way to proceed. Some say it would be better to develop the site on their own, and Betty Lou Jones, a recent computer science graduate, believes she can handle the task. Others, including Sam Jones, Betty’s grandfather, feel it would be better to outsource the site and focus on the business itself. Suppose the family asked for your opinion. What would you say? What additional questions would you ask? FIGURE 10-18 Intershop offers software solutions for smaller companies that want to get an e-business up and running quickly. © 2009–2012 Intershop Communications AG Packaged