Distributed Systems
Principles and Paradigms
Andrew S. Tanenbaum Maarten Van Steen
Second Edition
Pearson Education Limited
Edinburgh Gate
Harlow
Essex CM20 2JE
England and Associated Companies throughout the world
Visit us on the World Wide Web at: www.pearsoned.co.uk
© Pearson Education Limited 2014
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the
prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom
issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS.
All trademarks used herein are the property of their respective owners. The use of any trademark
in this text does not vest in the author or publisher any trademark ownership rights in such
trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this
book by such owners.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Printed in the United States of America
ISBN 10: 1-292-02552-2
ISBN 13: 978-1-292-02552-0
Table of Contents

Chapter 1. Introduction  1
Chapter 2. Architectures  33
Chapter 3. Processes  69
Chapter 4. Communication  115
Chapter 5. Naming  179
Chapter 6. Synchronization  231
Chapter 7. Consistency and Replication  273
Chapter 8. Fault Tolerance  321
Chapter 9. Security  377
Chapter 10. Distributed Object-Based Systems  443
Chapter 11. Distributed File Systems  491
Chapter 12. Distributed Web-Based Systems  545
Chapter 13. Distributed Coordination-Based Systems  589
Index  623
1
INTRODUCTION
Computer systems are undergoing a revolution. From 1945, when the modern
computer era began, until about 1985, computers were large and expensive. Even
minicomputers cost at least tens of thousands of dollars each. As a result, most
organizations had only a handful of computers, and for lack of a way to connect
them, these operated independently from one another.
Starting around the mid-1980s, however, two advances in technology
began to change that situation. The first was the development of powerful
microprocessors. Initially, these were 8-bit machines, but soon 16-, 32-, and 64-bit
CPUs became common. Many of these had the computing power of a mainframe
(i.e., large) computer, but for a fraction of the price.
The amount of improvement that has occurred in computer technology in the
past half century is truly staggering and totally unprecedented in other industries.
From a machine that cost 10 million dollars and executed 1 instruction per second,
we have come to machines that cost 1000 dollars and are able to execute 1 billion
instructions per second, a price/performance gain of 10^13. If cars had improved at
this rate in the same time period, a Rolls Royce would now cost 1 dollar and get a
billion miles per gallon. (Unfortunately, it would probably also have a 200-page
manual telling how to open the door.)
The second development was the invention of high-speed computer networks.
Local-area networks or LANs allow hundreds of machines within a building to
be connected in such a way that small amounts of information can be transferred
between machines in a few microseconds or so. Larger amounts of data can be
From Chapter 1 of Distributed Systems: Principles and Paradigms, Second Edition. Andrew S. Tanenbaum,
Maarten Van Steen. Copyright © 2007 by Pearson Education, Inc. Publishing as Prentice Hall. All rights reserved.
moved between machines at rates of 100 million to 10 billion bits/sec. Wide-area
networks or WANs allow millions of machines all over the earth to be connected
at speeds varying from 64 Kbps (kilobits per second) to gigabits per second.
The result of these technologies is that it is now not only feasible, but easy, to
put together computing systems composed of large numbers of computers con-
nected by a high-speed network. They are usually called computer networks or
distributed systems, in contrast to the previous centralized systems (or single-
processor systems) consisting of a single computer, its peripherals, and perhaps
some remote terminals.
1.1 DEFINITION OF A DISTRIBUTED SYSTEM
Various definitions of distributed systems have been given in the literature,
none of them satisfactory, and none of them in agreement with any of the others.
For our purposes it is sufficient to give a loose characterization:
A distributed system is a collection of independent computers that
appears to its users as a single coherent system.
This definition has several important aspects. The first one is that a distributed
system consists of components (i.e., computers) that are autonomous. A second
aspect is that users (be they people or programs) think they are dealing with a sin-
gle system. This means that one way or the other the autonomous components
need to collaborate. How to establish this collaboration lies at the heart of devel-
oping distributed systems. Note that no assumptions are made concerning the type
of computers. In principle, even within a single system, they could range from
high-performance mainframe computers to small nodes in sensor networks. Like-
wise, no assumptions are made on the way that computers are interconnected. We
will return to these aspects later in this chapter.
Instead of going further with definitions, it is perhaps more useful to concen-
trate on important characteristics of distributed systems. One important charac-
teristic is that differences between the various computers and the ways in which
they communicate are mostly hidden from users. The same holds for the internal
organization of the distributed system. Another important characteristic is that
users and applications can interact with a distributed system in a consistent and
uniform way, regardless of where and when interaction takes place.
In principle, distributed systems should also be relatively easy to expand or
scale. This characteristic is a direct consequence of having independent com-
puters, but at the same time, hiding how these computers actually take part in the
system as a whole. A distributed system will normally be continuously available,
although perhaps some parts may be temporarily out of order. Users and applica-
tions should not notice that parts are being replaced or fixed, or that new parts are
added to serve more users or applications.
In order to support heterogeneous computers and networks while offering a
single-system view, distributed systems are often organized by means of a layer of
software that is logically placed between a higher-level layer consisting of users
and applications, and a layer underneath consisting of operating systems and basic
communication facilities, as shown in Fig. 1-1. Accordingly, such a distributed
system is sometimes called middleware.
Figure 1-1. A distributed system organized as middleware. The middleware layer extends over multiple machines, and offers each application the same interface.
Fig. 1-1 shows four networked computers and three applications, of which
application B is distributed across computers 2 and 3. Each application is offered the
same interface. The distributed system provides the means for components of a
single distributed application to communicate with each other, but also to let
different applications communicate. At the same time, it hides, as far as reasonably
possible, the differences in hardware and operating systems from each application.
1.2 GOALS
Just because it is possible to build distributed systems does not necessarily
mean that it is a good idea. After all, with current technology it is also possible to
put four floppy disk drives on a personal computer. It is just that doing so would
be pointless. In this section we discuss four important goals that should be met to
make building a distributed system worth the effort. A distributed system should
make resources easily accessible; it should reasonably hide the fact that resources
are distributed across a network; it should be open; and it should be scalable.
1.2.1 Making Resources Accessible
The main goal of a distributed system is to make it easy for the users (and ap-
plications) to access remote resources, and to share them in a controlled and effi-
cient way. Resources can be just about anything, but typical examples include
things like printers, computers, storage facilities, data, files, Web pages, and net-
works, to name just a few. There are many reasons for wanting to share resources.
One obvious reason is that of economics. For example, it is cheaper to let a printer
be shared by several users in a small office than having to buy and maintain a sep-
arate printer for each user. Likewise, it makes economic sense to share costly re-
sources such as supercomputers, high-performance storage systems, imagesetters,
and other expensive peripherals.
Connecting users and resources also makes it easier to collaborate and ex-
change information, as is clearly illustrated by the success of the Internet with its
simple protocols for exchanging files, mail, documents, audio, and video. The
connectivity of the Internet is now leading to numerous virtual organizations in
which geographically widely-dispersed groups of people work together by means
of groupware, that is, software for collaborative editing, teleconferencing, and so
on. Likewise, the Internet connectivity has enabled electronic commerce allowing
us to buy and sell all kinds of goods without actually having to go to a store or
even leave home.
However, as connectivity and sharing increase, security is becoming increas-
ingly important. In current practice, systems provide little protection against
eavesdropping or intrusion on communication. Passwords and other sensitive in-
formation are often sent as cleartext (i.e., unencrypted) through the network, or
stored at servers that we can only hope are trustworthy. In this sense, there is
much room for improvement. For example, it is currently possible to order goods
by merely supplying a credit card number. Rarely is proof required that the custo-
mer owns the card. In the future, placing orders this way may be possible only if
you can actually prove that you physically possess the card by inserting it into a
card reader.
Another security problem is that of tracking communication to build up a
preference profile of a specific user (Wang et al., 1998). Such tracking explicitly
violates privacy, especially if it is done without notifying the user. A related prob-
lem is that increased connectivity can also lead to unwanted communication, such
as electronic junk mail, often called spam. In such cases, what we may need is to
protect ourselves using special information filters that select incoming messages
based on their content.
1.2.2 Distribution Transparency
An important goal of a distributed system is to hide the fact that its processes
and resources are physically distributed across multiple computers. A distributed
system that is able to present itself to users and applications as if it were only a
single computer system is said to be transparent. Let us first take a look at what
kinds of transparency exist in distributed systems. After that we will address the
more general question whether transparency is always required.
Types of Transparency
The concept of transparency can be applied to several aspects of a distributed
system, the most important ones shown in Fig. 1-2.
Transparency   Description
Access         Hide differences in data representation and how a resource is accessed
Location       Hide where a resource is located
Migration      Hide that a resource may move to another location
Relocation     Hide that a resource may be moved to another location while in use
Replication    Hide that a resource is replicated
Concurrency    Hide that a resource may be shared by several competitive users
Failure        Hide the failure and recovery of a resource

Figure 1-2. Different forms of transparency in a distributed system (ISO, 1995).
Access transparency deals with hiding differences in data representation and
the way that resources can be accessed by users. At a basic level, we wish to hide
differences in machine architectures, but more important is that we reach agree-
ment on how data is to be represented by different machines and operating sys-
tems. For example, a distributed system may have computer systems that run dif-
ferent operating systems, each having their own file-naming conventions. Differ-
ences in naming conventions, as well as how files can be manipulated, should all
be hidden from users and applications.
An important group of transparency types has to do with the location of a re-
source. Location transparency refers to the fact that users cannot tell where a re-
source is physically located in the system. Naming plays an important role in
achieving location transparency. In particular, location transparency can be
achieved by assigning only logical names to resources, that is, names in which the
location of a resource is not secretly encoded. An example of such a name is the
URL http://www.prenhall.com/index.html, which gives no clue about the location
of Prentice Hall’s main Web server. The URL also gives no clue as to whether
index.html has always been at its current location or was recently moved there.
Distributed systems in which resources can be moved without affecting how those
resources can be accessed are said to provide migration transparency. Even
stronger is the situation in which resources can be relocated while they are being
accessed without the user or application noticing anything. In such cases, the sys-
tem is said to support relocation transparency. An example of relocation trans-
parency is when mobile users can continue to use their wireless laptops while
moving from place to place without ever being (temporarily) disconnected.
As we shall see, replication plays a very important role in distributed systems.
For example, resources may be replicated to increase availability or to improve
performance by placing a copy close to the place where it is accessed. Replica-
tion transparency deals with hiding the fact that several copies of a resource
exist. To hide replication from users, it is necessary that all replicas have the same
name. Consequently, a system that supports replication transparency should gen-
erally support location transparency as well, because it would otherwise be impos-
sible to refer to replicas at different locations.
We already mentioned that an important goal of distributed systems is to al-
low sharing of resources. In many cases, sharing resources is done in a coopera-
tive way, as in the case of communication. However, there are also many ex-
amples of competitive sharing of resources. For example, two independent users
may each have stored their files on the same file server or may be accessing the
same tables in a shared database. In such cases, it is important that each user does
not notice that the other is making use of the same resource. This phenomenon is
called concurrency transparency. An important issue is that concurrent access
to a shared resource leaves that resource in a consistent state. Consistency can be
achieved through locking mechanisms, by which users are, in turn, given ex-
clusive access to the desired resource. A more refined mechanism is to make use
of transactions, but as we shall see in later chapters, transactions are quite difficult
to implement in distributed systems.
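
To make the role of locking concrete, here is a minimal Python sketch; the SharedAccount class and its values are invented for illustration and are not taken from any system discussed in this book. A lock gives competing users exclusive access in turn, so concurrent use leaves the resource in a consistent state.

import threading

class SharedAccount:
    """A shared resource accessed by several competing clients."""

    def __init__(self, balance=0):
        self._balance = balance
        self._lock = threading.Lock()   # grants users exclusive access in turn

    def deposit(self, amount):
        # Without the lock, two concurrent deposits could interleave their
        # read-modify-write steps and leave the balance inconsistent.
        with self._lock:
            current = self._balance
            self._balance = current + amount

    def balance(self):
        with self._lock:
            return self._balance

account = SharedAccount()
threads = [threading.Thread(target=account.deposit, args=(10,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(account.balance())   # always 1000: each user remains unaware of the others

Transactions generalize this idea to groups of operations, which, as noted above, is considerably harder to realize in a distributed setting.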
A popular alternative definition of a distributed system, due to Leslie Lam-
port, is ‘‘You know you have one when the crash of a computer you’ve never
heard of stops you from getting any work done.’’ This description puts the finger
on another important issue of distributed systems design: dealing with failures.
Making a distributed system failure transparent means that a user does not no-
tice that a resource (he has possibly never heard of) fails to work properly, and
that the system subsequently recovers from that failure. Masking failures is one of
the hardest issues in distributed systems and is even impossible when certain
apparently realistic assumptions are made, as we will discuss in Chap. 8. The
main difficulty in masking failures lies in the inability to distinguish between a
dead resource and a painfully slow resource. For example, when contacting a busy
Web server, a browser will eventually time out and report that the Web page is
unavailable. At that point, the user cannot conclude that the server is really down.
Degree of Transparency
Although distribution transparency is generally considered preferable for any
distributed system, there are situations in which attempting to completely hide all
distribution aspects from users is not a good idea. An example is requesting your
electronic newspaper to appear in your mailbox before 7 A.M. local time, as usual,
while you are currently at the other end of the world living in a different time
zone. Your morning paper will not be the morning paper you are used to.
Likewise, a wide-area distributed system that connects a process in San Fran-
cisco to a process in Amsterdam cannot be expected to hide the fact that Mother
Nature will not allow it to send a message from one process to the other in less
than about 35 milliseconds. In practice it takes several hundreds of milliseconds
using a computer network. Signal transmission is not only limited by the speed of
light, but also by limited processing capacities of the intermediate switches.
There is also a trade-off between a high degree of transparency and the per-
formance of a system. For example, many Internet applications repeatedly try to
contact a server before finally giving up. Consequently, attempting to mask a tran-
sient server failure before trying another one may slow down the system as a
whole. In such a case, it may have been better to give up earlier, or at least let the
user cancel the attempts to make contact.
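
A rough sketch of this idea follows, under the assumption of a simple request/reply service reachable over TCP; the parameters and error handling are illustrative only. The client retries a transient failure a bounded number of times and then gives up, reporting the failure to the caller instead of masking it indefinitely.

import socket

def fetch_with_bounded_retries(host, port, request, attempts=3, timeout=2.0):
    """Try a remote server a few times, then report failure to the caller."""
    last_error = None
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                conn.sendall(request)
                return conn.recv(4096)            # reply from the server
        except OSError as err:                    # timed out, refused, unreachable, ...
            last_error = err                      # possibly transient: try once more
    # Give up early and let the user (or caller) decide what to do next.
    raise RuntimeError(f"server unreachable after {attempts} attempts") from last_error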
Another example is where we need to guarantee that several replicas, located
on different continents, need to be consistent all the time. In other words, if one
copy is changed, that change should be propagated to all copies before allowing
any other operation. It is clear that a single update operation may now even take
seconds to complete, something that cannot be hidden from users.
Finally, there are situations in which it is not at all obvious that hiding distri-
bution is a good idea. As distributed systems are expanding to devices that people
carry around, and where the very notion of location and context awareness is
becoming increasingly important, it may be best to actually expose distribution
rather than trying to hide it. This distribution exposure will become more evident
when we discuss embedded and ubiquitous distributed systems later in this chap-
ter. As a simple example, consider an office worker who wants to print a file from
her notebook computer. It is better to send the print job to a busy nearby printer,
rather than to an idle one at corporate headquarters in a different country.
There are also other arguments against distribution transparency. Recognizing
that full distribution transparency is simply impossible, we should ask ourselves
whether it is even wise to pretend that we can achieve it. It may be much better to
make distribution explicit so that the user and application developer are never
tricked into believing that there is such a thing as transparency. The result will be
that users will much better understand the (sometimes unexpected) behavior of a
distributed system, and are thus much better prepared to deal with this behavior.
The conclusion is that aiming for distribution transparency may be a nice goal
when designing and implementing distributed systems, but that it should be con-
sidered together with other issues such as performance and comprehensibility.
The price for not being able to achieve full transparency may be surprisingly high.
1.2.3 Openness
Another important goal of distributed systems is openness. An open distrib-
uted system is a system that offers services according to standard rules that
describe the syntax and semantics of those services. For example, in computer
networks, standard rules govern the format, contents, and meaning of messages
sent and received. Such rules are formalized in protocols. In distributed systems,
services are generally specified through interfaces, which are often described in
an Interface Definition Language (IDL). Interface definitions written in an IDL
nearly always capture only the syntax of services. In other words, they specify
precisely the names of the functions that are available together with types of the
parameters, return values, possible exceptions that can be raised, and so on. The
hard part is specifying precisely what those services do, that is, the semantics of
interfaces. In practice, such specifications are always given in an informal way by
means of natural language.
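
The difference between syntax and semantics can be made concrete with a small, hypothetical interface sketch; an abstract Python class stands in here for an IDL definition. It fixes the operation names, parameter types, results, and exceptions, but says nothing about what the operations actually do.

from abc import ABC, abstractmethod

class FileServiceError(Exception):
    """Exception that implementations of the interface may raise."""

class FileService(ABC):
    """Syntax only: operation names, parameters, results, exceptions.
    The semantics (what read and write actually guarantee, e.g., under
    concurrent access) must still be described, usually in natural language."""

    @abstractmethod
    def read(self, name: str, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, name: str, offset: int, data: bytes) -> int: ...

Any process needing the interface can be programmed against FileService, while independent parties remain free to implement it in completely different ways.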
If properly specified, an interface definition allows an arbitrary process that
needs a certain interface to talk to another process that provides that interface. It
also allows two independent parties to build completely different implementations
of those interfaces, leading to two separate distributed systems that operate in
exactly the same way. Proper specifications are complete and neutral. Complete
means that everything that is necessary to make an implementation has indeed
been specified. However, many interface definitions are not at all complete, so
that it is necessary for a developer to add implementation-specific details. Just as
important is the fact that specifications do not prescribe what an implementation
should look like; they should be neutral. Completeness and neutrality are impor-
tant for interoperability and portability (Blair and Stefani, 1998). Interoperabil-
ity characterizes the extent by which two implementations of systems or com-
ponents from different manufacturers can co-exist and work together by merely
relying on each other’s services as specified by a common standard. Portability
characterizes to what extent an application developed for a distributed system A
can be executed, without modification, on a different distributed system B that
implements the same interfaces as A.
Another important goal for an open distributed system is that it should be easy
to configure the system out of different components (possibly from different de-
velopers). Also, it should be easy to add new components or replace existing ones
without affecting those components that stay in place. In other words, an open dis-
tributed system should also be extensible. For example, in an extensible system,
it should be relatively easy to add parts that run on a different operating system, or
even to replace an entire file system. As many of us know from daily practice,
attaining such flexibility is easier said than done.
Separating Policy from Mechanism
To achieve flexibility in open distributed systems, it is crucial that the system
is organized as a collection of relatively small and easily replaceable or adaptable
components. This implies that we should provide definitions not only for the
highest-level interfaces, that is, those seen by users and applications, but also
definitions for interfaces to internal parts of the system and describe how those
parts interact. This approach is relatively new. Many older and even contemporary
systems are constructed using a monolithic approach in which components are
only logically separated but implemented as one huge program. This approach
makes it hard to replace or adapt a component without affecting the entire system.
Monolithic systems thus tend to be closed instead of open.
The need for changing a distributed system is often caused by a component
that does not provide the optimal policy for a specific user or application. As an
example, consider caching in the World Wide Web. Browsers generally allow
users to adapt their caching policy by specifying the size of the cache, and wheth-
er a cached document should always be checked for consistency, or perhaps only
once per session. However, the user cannot influence other caching parameters,
such as how long a document may remain in the cache, or which document should
be removed when the cache fills up. Also, it is impossible to make caching deci-
sions based on the content of a document. For instance, a user may want to cache
railroad timetables, knowing that these hardly change, but never information on
current traffic conditions on the highways.
What we need is a separation between policy and mechanism. In the case of
Web caching, for example, a browser should ideally provide facilities for only
storing documents, and at the same time allow users to decide which documents
are stored and for how long. In practice, this can be implemented by offering a
rich set of parameters that the user can set (dynamically). Even better is that a
user can implement his own policy in the form of a component that can be
plugged into the browser. Of course, that component must have an interface that
the browser can understand so that it can call procedures of that interface.
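
The following minimal sketch illustrates this separation; the class names and rules are invented for this example and are not taken from any actual browser. The cache itself only stores and retrieves documents, while every decision about what to cache and for how long is delegated to a policy component that the user can replace.

import time

class CachePolicy:
    """Policy: decides which documents are cached and for how long."""
    def should_cache(self, url, document):
        return True
    def max_age(self, url):
        return 300.0                       # seconds

class TimetablePolicy(CachePolicy):
    """A user-supplied policy: cache railroad timetables for a day,
    but never cache pages about current traffic conditions."""
    def should_cache(self, url, document):
        return "traffic" not in url
    def max_age(self, url):
        return 86400.0 if "timetable" in url else 300.0

class DocumentCache:
    """Mechanism: stores documents; all decisions are left to the policy."""
    def __init__(self, policy):
        self._policy = policy
        self._store = {}                   # url -> (expiry time, document)

    def put(self, url, document):
        if self._policy.should_cache(url, document):
            self._store[url] = (time.time() + self._policy.max_age(url), document)

    def get(self, url):
        entry = self._store.get(url)
        if entry and entry[0] > time.time():
            return entry[1]
        return None                        # never cached, or expired

cache = DocumentCache(TimetablePolicy())   # a different policy can be plugged in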
1.2.4 Scalability
Worldwide connectivity through the Internet is rapidly becoming as common
as being able to send a postcard to anyone anywhere around the world. With this
in mind, scalability is one of the most important design goals for developers of
distributed systems.
Scalability of a system can be measured along at least three different dimen-
sions (Neuman, 1994). First, a system can be scalable with respect to its size,
meaning that we can easily add more users and resources to the system. Second, a
geographically scalable system is one in which the users and resources may lie far
apart. Third, a system can be administratively scalable, meaning that it can still be
easy to manage even if it spans many independent administrative organizations.
Unfortunately, a system that is scalable in one or more of these dimensions often
exhibits some loss of performance as the system scales up.
Scalability Problems
When a system needs to scale, very different types of problems need to be
solved. Let us first consider scaling with respect to size. If more users or resources
need to be supported, we are often confronted with the limitations of centralized
services, data, and algorithms (see Fig. 1-3). For example, many services are cen-
tralized in the sense that they are implemented by means of only a single server
running on a specific machine in the distributed system. The problem with this
scheme is obvious: the server can become a bottleneck as the number of users and
applications grows. Even if we have virtually unlimited processing and storage ca-
pacity, communication with that server will eventually prohibit further growth.
Unfortunately, using only a single server is sometimes unavoidable. Imagine
that we have a service for managing highly confidential information such as medi-
cal records, bank accounts, and so on. In such cases, it may be best to implement
that service by means of a single server in a highly secured separate room, and
protected from other parts of the distributed system through special network com-
ponents. Copying the server to several locations to enhance performance may be
out of the question as it would make the service less secure.
Concept                  Example
Centralized services     A single server for all users
Centralized data         A single on-line telephone book
Centralized algorithms   Doing routing based on complete information

Figure 1-3. Examples of scalability limitations.
Just as bad as centralized services are centralized data. How should we keep
track of the telephone numbers and addresses of 50 million people? Suppose that
each data record could be fit into 50 characters. A single 2.5-gigabyte disk parti-
tion would provide enough storage. But here again, having a single database
would undoubtedly saturate all the communication lines into and out of it. Like-
wise, imagine how the Internet would work if its Domain Name System (DNS)
was still implemented as a single table. DNS maintains information on millions of
computers worldwide and forms an essential service for locating Web servers. If
each request to resolve a URL had to be forwarded to that one and only DNS
server, it is clear that no one would be using the Web (which, by the way, would
solve the problem).
Finally, centralized algorithms are also a bad idea. In a large distributed sys-
tem, an enormous number of messages have to be routed over many lines. From a
theoretical point of view, the optimal way to do this is collect complete informa-
tion about the load on all machines and lines, and then run an algorithm to com-
pute all the optimal routes. This information can then be spread around the system
to improve the routing.
The trouble is that collecting and transporting all the input and output infor-
mation would again be a bad idea because these messages would overload part of
the network. In fact, any algorithm that operates by collecting information from
all the sites, sends it to a single machine for processing, and then distributes the
results should generally be avoided. Only decentralized algorithms should be
used. These algorithms generally have the following characteristics, which distin-
guish them from centralized algorithms:
1. No machine has complete information about the system state.
2. Machines make decisions based only on local information.
3. Failure of one machine does not ruin the algorithm.
4. There is no implicit assumption that a global clock exists.
The first three follow from what we have said so far. The last is perhaps less obvi-
ous but also important. Any algorithm that starts out with: ‘‘At precisely 12:00:00
all machines shall note the size of their output queue’’ will fail because it is
impossible to get all the clocks exactly synchronized. Algorithms should take into
account the lack of exact clock synchronization. The larger the system, the larger
the uncertainty. On a single LAN, with considerable effort it may be possible to
get all clocks synchronized down to a few microseconds, but doing this nationally
or internationally is tricky.
Geographical scalability has its own problems. One of the main reasons why
it is currently hard to scale existing distributed systems that were designed for
local-area networks is that they are based on synchronous communication. In
this form of communication, a party requesting service, generally referred to as a
client, blocks until a reply is sent back. This approach generally works fine in
LANs where communication between two machines is generally at worst a few
hundred microseconds. However, in a wide-area system, we need to take into ac-
count that interprocess communication may be hundreds of milliseconds, three
orders of magnitude slower. Building interactive applications using synchronous
communication in wide-area systems requires a great deal of care (and not a little
patience).
Another problem that hinders geographical scalability is that communication
in wide-area networks is inherently unreliable, and virtually always point-to-point.
In contrast, local-area networks generally provide highly reliable communication
facilities based on broadcasting, making it much easier to develop distributed sys-
tems. For example, consider the problem of locating a service. In a local-area sys-
tem, a process can simply broadcast a message to every machine, asking if it is
running the service it needs. Only those machines that have that service respond,
each providing its network address in the reply message. Such a location scheme
is unthinkable in a wide-area system: just imagine what would happen if we tried
to locate a service this way in the Internet. Instead, special location services need
to be designed, which may need to scale worldwide and be capable of servicing a
billion users. We return to such services in Chap. 5.
Geographical scalability is strongly related to the problems of centralized
solutions that hinder size scalability. If we have a system with many centralized
components, it is clear that geographical scalability will be limited due to the per-
formance and reliability problems resulting from wide-area communication. In ad-
dition, centralized components now lead to a waste of network resources. Imagine
that a single mail server is used for an entire country. This would mean that send-
ing an e-mail to your neighbor would first have to go to the central mail server,
which may be hundreds of miles away. Clearly, this is not the way to go.
Finally, a difficult, and in many cases open question is how to scale a distrib-
uted system across multiple, independent administrative domains. A major prob-
lem that needs to be solved is that of conflicting policies with respect to resource
usage (and payment), management, and security.
For example, many components of a distributed system that reside within a
single domain can often be trusted by users that operate within that same domain.
In such cases, system administration may have tested and certified applications,
and may have taken special measures to ensure that such components cannot be
tampered with. In essence, the users trust their system administrators. However,
this trust does not expand naturally across domain boundaries.
If a distributed system expands into another domain, two types of security
measures need to be taken. First of all, the distributed system has to protect itself
against malicious attacks from the new domain. For example, users from the new
domain may have only read access to the file system in its original domain. Like-
wise, facilities such as expensive image setters or high-performance computers
may not be made available to foreign users. Second, the new domain has to pro-
tect itself against malicious attacks from the distributed system. A typical example
is that of downloading programs such as applets in Web browsers. Basically, the
new domain does not know what behavior to expect from such foreign code, and
may therefore decide to severely limit the access rights for such code. The prob-
lem, as we shall see in Chap. 9, is how to enforce those limitations.
Scaling Techniques
Having discussed some of the scalability problems brings us to the question of
how those problems can generally be solved. In most cases, scalability problems
in distributed systems appear as performance problems caused by limited capacity
of servers and network. There are now basically only three techniques for scaling:
hiding communication latencies, distribution, and replication [see also Neuman
(1994)].
Hiding communication latencies is important to achieve geographical scala-
bility. The basic idea is simple: try to avoid waiting for responses to remote (and
potentially distant) service requests as much as possible. For example, when a ser-
vice has been requested at a remote machine, an alternative to waiting for a reply
from the server is to do other useful work at the requester’s side. Essentially, what
this means is constructing the requesting application in such a way that it uses
only asynchronous communication. When a reply comes in, the application is
interrupted and a special handler is called to complete the previously-issued re-
quest. Asynchronous communication can often be used in batch-processing sys-
tems and parallel applications, in which more or less independent tasks can be
scheduled for execution while another task is waiting for communication to com-
plete. Alternatively, a new thread of control can be started to perform the request.
Although it blocks waiting for the reply, other threads in the process can continue.
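
As a simple illustration of this style (a sketch only; the simulated request and delays are invented), the following Python fragment issues a request asynchronously, continues with other useful work, and lets a handler complete the request once the reply arrives.

from concurrent.futures import ThreadPoolExecutor
import time

def remote_request(x):
    """Stand-in for a request to a distant server (latency is simulated)."""
    time.sleep(0.3)                          # wide-area round trip
    return x * x

def handle_reply(future):
    """Handler called once the previously issued request completes."""
    print("reply received:", future.result())

executor = ThreadPoolExecutor(max_workers=4)

# Issue the request asynchronously; the future is a placeholder for the reply.
future = executor.submit(remote_request, 7)
future.add_done_callback(handle_reply)

# Meanwhile, the requester continues with other useful work instead of blocking.
for i in range(3):
    print("doing local work", i)
    time.sleep(0.1)

executor.shutdown(wait=True)                 # ensure the reply has been handled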
However, there are many applications that cannot make effective use of asyn-
chronous communication. For example, in interactive applications when a user
sends a request he will generally have nothing better to do than to wait for the
answer. In such cases, a much better solution is to reduce the overall communica-
tion, for example, by moving part of the computation that is normally done at the
server to the client process requesting the service. A typical case where this ap-
proach works is accessing databases using forms. Filling in forms can be done by
sending a separate message for each field, and waiting for an acknowledgment
from the server, as shown in Fig. 1-4(a). For example, the server may check for
syntactic errors before accepting an entry. A much better solution is to ship the
code for filling in the form, and possibly checking the entries, to the client, and
have the client return a completed form, as shown in Fig. 1-4(b). This approach
of shipping code is now widely supported by the Web in the form of Java applets
and Javascript.
Figure 1-4. The difference between letting (a) a server or (b) a client check forms as they are being filled.
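
A rough sketch of the second approach follows; the field names and validation rules are invented for illustration. The validation code has been shipped to the client, so only one message, the completed and already-checked form, crosses the network.

import re

FIELDS = ["first_name", "last_name", "email"]

def check_form(form):
    """Runs at the client: shipped validation code, no server round trips."""
    errors = [f"{field} is missing" for field in FIELDS if not form.get(field)]
    if form.get("email") and not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+", form["email"]):
        errors.append("email looks malformed")
    return errors

def submit(form, send_to_server):
    """Only one message crosses the network: the completed, checked form."""
    errors = check_form(form)
    if errors:
        return errors                        # fix locally; no communication needed
    send_to_server(form)                     # a single request instead of one per field
    return []

# Example use, with print standing in for the real transport to the server:
submit({"first_name": "MAARTEN", "last_name": "VAN STEEN",
        "email": "STEEN@CS.VU.NL"}, send_to_server=print)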
Another important scaling technique is distribution. Distribution involves
taking a component, splitting it into smaller parts, and subsequently spreading
those parts across the system. An excellent example of distribution is the Internet
Domain Name System (DNS). The DNS name space is hierarchically organized
into a tree of domains, which are divided into nonoverlapping zones, as shown in
Fig. 1-5. The names in each zone are handled by a single name server. Without
going into too many details, one can think of each path name being the name of a
host in the Internet, and thus associated with a network address of that host. Basi-
cally, resolving a name means returning the network address of the associated
host. Consider, for example, the name nl.vu.cs.flits. To resolve this name, it is
first passed to the server of zone Z1 (see Fig. 1-5) which returns the address of the
server for zone Z2, to which the rest of name, vu.cs.flits, can be handed. The
server for Z2 will return the address of the server for zone Z3, which is capable of
handling the last part of the name and will return the address of the associated
host.
[Figure 1-5 shows the DNS name space as a tree of generic (int, com, edu, gov, mil, org, net) and country (jp, us, nl) top-level domains and their subdomains, divided into nonoverlapping zones Z1, Z2, and Z3.]
Figure 1-5. An example of dividing the DNS name space into zones.
This example illustrates how the naming service, as provided by DNS, is dis-
tributed across several machines, thus avoiding that a single server has to deal
with all requests for name resolution.
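
The iterative resolution just described can be sketched as follows. The zone tables and the returned address are hypothetical, and the name is written here in the familiar host-first form flits.cs.vu.nl, but the control flow mirrors the example above: each zone's server either answers directly or refers the resolver to the server of the next zone.

# Hypothetical zone data: each name server knows its own hosts and which
# server handles the zones delegated below it.
ZONE_SERVERS = {
    "Z1": {"delegations": {"nl": "Z2"}, "hosts": {}},
    "Z2": {"delegations": {"vu.nl": "Z3"}, "hosts": {}},
    "Z3": {"delegations": {}, "hosts": {"flits.cs.vu.nl": "130.37.20.20"}},
}

def resolve(name, server="Z1"):
    """Iteratively resolve a host name to a network address, starting at Z1."""
    while True:
        zone = ZONE_SERVERS[server]
        if name in zone["hosts"]:
            return zone["hosts"][name]            # final answer
        # Otherwise follow the delegation for the matching suffix.
        for suffix, next_server in zone["delegations"].items():
            if name.endswith(suffix):
                server = next_server              # referred to the next zone's server
                break
        else:
            raise KeyError(f"cannot resolve {name}")

print(resolve("flits.cs.vu.nl"))   # Z1 -> Z2 -> Z3 -> 130.37.20.20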
As another example, consider the World Wide Web. To most users, the Web
appears to be an enormous document-based information system in which each
document has its own unique name in the form of a URL. Conceptually, it may
even appear as if there is only a single server. However, the Web is physically
distributed across a large number of servers, each handling a number of Web doc-
uments. The name of the server handling a document is encoded into that docu-
ment’s URL. It is only because of this distribution of documents that the Web has
been capable of scaling to its current size.
Considering that scalability problems often appear in the form of performance
degradation, it is generally a good idea to actually replicate components across a
distributed system. Replication not only increases availability, but also helps to
balance the load between components leading to better performance. Also, in geo-
graphically widely-dispersed systems, having a copy nearby can hide much of the
communication latency problems mentioned before.
Caching is a special form of replication, although the distinction between the
two is often hard to make or even artificial. As in the case of replication, caching
results in making a copy of a resource, generally in the proximity of the client ac-
cessing that resource. However, in contrast to replication, caching is a decision
made by the client of a resource, and not by the owner of a resource. Also, cach-
ing happens on demand whereas replication is often planned in advance.
There is one serious drawback to caching and replication that may adversely
affect scalability. Because we now have multiple copies of a resource, modifying
one copy makes that copy different from the others. Consequently, caching and
replication leads to consistency problems.
To what extent inconsistencies can be tolerated depends highly on the usage
of a resource. For example, many Web users find it acceptable that their browser
returns a cached document of which the validity has not been checked for the last
few minutes. However, there are also many cases in which strong consistency
guarantees need to be met, such as in the case of electronic stock exchanges and
auctions. The problem with strong consistency is that an update must be immedi-
ately propagated to all other copies. Moreover, if two updates happen concur-
rently, it is often also required that each copy is updated in the same order. Situa-
tions such as these generally require some global synchronization mechanism.
Unfortunately, such mechanisms are extremely hard or even impossible to implement
in a scalable way, as nature insists that photons and electrical signals obey a
speed limit of 187 miles/msec (the speed of light). Consequently, scaling by
replication may introduce other, inherently nonscalable solutions. We return to
replication and consistency in Chap. 7.
When considering these scaling techniques, one could argue that size scalabil-
ity is the least problematic from a technical point of view. In many cases, simply
increasing the capacity of a machine will save the day (at least temporarily
and perhaps at significant costs). Geographical scalability is a much tougher prob-
lem as Mother Nature is getting in our way. Nevertheless, practice shows that
combining distribution, replication, and caching techniques with different forms
of consistency will often prove sufficient in many cases. Finally, administrative
scalability seems to be the most difficult one, partly also because we need to solve
nontechnical problems (e.g., politics of organizations and human collaboration).
Nevertheless, progress has been made in this area, by simply ignoring administra-
tive domains. The introduction and now widespread use of peer-to-peer technol-
ogy demonstrates what can be achieved if end users simply take over control
(Aberer and Hauswirth, 2005; Lua et al., 2005; and Oram, 2001). However, let it
be clear that peer-to-peer technology can at best be only a partial solution to solv-
ing administrative scalability. Eventually, it will have to be dealt with.
1.2.5 Pitfalls
It should be clear by now that developing distributed systems can be a formid-
able task. As we will see many times throughout this book, there are so many
issues to consider at the same time that it seems that only complexity can be the
result. Nevertheless, by following a number of design principles, distributed sys-
tems can be developed that strongly adhere to the goals we set out in this chapter.
Many principles follow the basic rules of decent software engineering and will not
be repeated here.
However, distributed systems differ from traditional software because com-
ponents are dispersed across a network. Not taking this dispersion into account
during design time is what makes so many systems needlessly complex and re-
sults in mistakes that need to be patched later on. Peter Deutsch, then at Sun
Microsystems, formulated these mistakes as the following false assumptions that
everyone makes when developing a distributed application for the first time:
1. The network is reliable.
2. The network is secure.
3. The network is homogeneous.
4. The topology does not change.
5. Latency is zero.
6. Bandwidth is infinite.
7. Transport cost is zero.
8. There is one administrator.
Note how these assumptions relate to properties that are unique to distributed sys-
tems: reliability, security, heterogeneity, and topology of the network; latency and
bandwidth; transport costs; and finally administrative domains. When developing
nondistributed applications, many of these issues will most likely not show up.
Most of the principles we discuss in this book relate immediately to these
assumptions. In all cases, we will be discussing solutions to problems that are
caused by the fact that one or more assumptions are false. For example, reliable
networks simply do not exist, leading to the impossibility of achieving failure
transparency. We devote an entire chapter to deal with the fact that networked
communication is inherently insecure. We have already argued that distributed
systems need to take heterogeneity into account. In a similar vein, when discuss-
ing replication for solving scalability problems, we are essentially tackling latency
and bandwidth problems. We will also touch upon management issues at various
points throughout this book, dealing with the false assumptions of zero-cost tran-
sportation and a single administrative domain.
1.3 TYPES OF DISTRIBUTED SYSTEMS
Before starting to discuss the principles of distributed systems, let us first take
a closer look at the various types of distributed systems. In the following we make
a distinction between distributed computing systems, distributed information sys-
tems, and distributed embedded systems.
1.3.1 Distributed Computing Systems
An important class of distributed systems is the one used for high-perfor-
mance computing tasks. Roughly speaking, one can make a distinction between
two subgroups. In cluster computing the underlying hardware consists of a col-
lection of similar workstations or PCs, closely connected by means of a high-
speed local-area network. In addition, each node runs the same operating system.
The situation becomes quite different in the case of grid computing. This
subgroup consists of distributed systems that are often constructed as a federation
of computer systems, where each system may fall under a different administrative
domain, and may be very different when it comes to hardware, software, and
deployed network technology.
Cluster Computing Systems
Cluster computing systems became popular when the price/performance ratio
of personal computers and workstations improved. At a certain point, it became
financially and technically attractive to build a supercomputer using off-the-shelf
technology by simply hooking up a collection of relatively simple computers in a
high-speed network. In virtually all cases, cluster computing is used for parallel
programming in which a single (compute intensive) program is run in parallel on
multiple machines.
[Figure 1-6 layout: a master node and several compute nodes, each running a local OS, connected by a high-speed standard network; the master runs the management application and parallel libraries, each compute node runs a component of the parallel application, and the master is reachable through a remote access network.]
Figure 1-6. An example of a cluster computing system.
One well-known example of a cluster computer is formed by Linux-based
Beowulf clusters, of which the general configuration is shown in Fig. 1-6. Each
cluster consists of a collection of compute nodes that are controlled and accessed
by means of a single master node. The master typically handles the allocation of
nodes to a particular parallel program, maintains a batch queue of submitted jobs,
and provides an interface for the users of the system. As such, the master actually
runs the middleware needed for the execution of programs and management of the
cluster, while the compute nodes often need nothing else but a standard operating
system.
An important part of this middleware is formed by the libraries for executing
parallel programs. As we will discuss in Chap. 4, many of these libraries effec-
tively provide only advanced message-based communication facilities, but are not
capable of handling faulty processes, security, etc.
As an alternative to this hierarchical organization, a symmetric approach is
followed in the MOSIX system (Amar et al., 2004). MOSIX attempts to provide
a single-system image of a cluster, meaning that to a process a cluster computer
offers the ultimate distribution transparency by appearing to be a single computer.
As we mentioned, providing such an image under all circumstances is impossible.
In the case of MOSIX, the high degree of transparency is provided by allowing
processes to dynamically and preemptively migrate between the nodes that make
up the cluster. Process migration allows a user to start an application on any node
(referred to as the home node), after which it can transparently move to other
nodes, for example, to make efficient use of resources. We will return to process
migration in Chap. 3.
Grid Computing Systems
A characteristic feature of cluster computing is its homogeneity. In most
cases, the computers in a cluster are largely the same, they all have the same oper-
ating system, and are all connected through the same network. In contrast, grid
computing systems have a high degree of heterogeneity: no assumptions are made
concerning hardware, operating systems, networks, administrative domains, secu-
rity policies, etc.
A key issue in a grid computing system is that resources from different organ-
izations are brought together to allow the collaboration of a group of people or
institutions. Such a collaboration is realized in the form of a virtual organization.
The people belonging to the same virtual organization have access rights to the re-
sources that are provided to that organization. Typically, resources consist of
compute servers (including supercomputers, possibly implemented as cluster com-
puters), storage facilities, and databases. In addition, special networked devices
such as telescopes, sensors, etc., can be provided as well.
Given its nature, much of the software for realizing grid computing evolves
around providing access to resources from different administrative domains, and
to only those users and applications that belong to a specific virtual organization.
For this reason, focus is often on architectural issues. An architecture proposed by
Foster et al. (2001) is shown in Fig. 1-7.
[Figure 1-7 layers, from top to bottom: Applications; Collective layer; Resource layer; Connectivity layer; Fabric layer.]
Figure 1-7. A layered architecture for grid computing systems.
The architecture consists of four layers. The lowest fabric layer provides in-
terfaces to local resources at a specific site. Note that these interfaces are tailored
to allow sharing of resources within a virtual organization. Typically, they will
provide functions for querying the state and capabilities of a resource, along with
functions for actual resource management (e.g., locking resources).
The connectivity layer consists of communication protocols for supporting
grid transactions that span the usage of multiple resources. For example, protocols
are needed to transfer data between resources, or to simply access a resource from
a remote location. In addition, the connectivity layer will contain security proto-
cols to authenticate users and resources. Note that in many cases human users are
not authenticated; instead, programs acting on behalf of the users are authenti-
cated. In this sense, delegating rights from a user to programs is an important
function that needs to be supported in the connectivity layer. We return exten-
sively to delegation when discussing security in distributed systems.
The resource layer is responsible for managing a single resource. It uses the
functions provided by the connectivity layer and calls directly the interfaces made
available by the fabric layer. For example, this layer will offer functions for
obtaining configuration information on a specific resource, or, in general, to per-
form specific operations such as creating a process or reading data. The resource
layer is thus seen to be responsible for access control, and hence will rely on the
authentication performed as part of the connectivity layer.
The next layer in the hierarchy is the collective layer. It deals with handling
access to multiple resources and typically consists of services for resource
discovery, allocation and scheduling of tasks onto multiple resources, data repli-
cation, and so on. Unlike the connectivity and resource layer, which consist of a
relatively small, standard collection of protocols, the collective layer may consist
of many different protocols for many different purposes, reflecting the broad spec-
trum of services it may offer to a virtual organization.
Finally, the application layer consists of the applications that operate within a
virtual organization and which make use of the grid computing environment.
Typically the collective, connectivity, and resource layer form the heart of
what could be called a grid middleware layer. These layers jointly provide access
to and management of resources that are potentially dispersed across multiple
sites. An important observation from a middleware perspective is that with grid
computing the notion of a site (or administrative unit) is common. This prevalence
is emphasized by the gradual shift toward a service-oriented architecture in
which sites offer access to the various layers through a collection of Web services
(Joseph et al., 2004). This, by now, has led to the definition of an alternative ar-
chitecture known as the Open Grid Services Architecture (OGSA). This archi-
tecture consists of various layers and many components, making it rather com-
plex. Complexity seems to be the fate of any standardization process. Details on
OGSA can be found in Foster et al. (2005).
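To make this division of responsibilities more concrete, the following sketch
models the lower four layers as minimal Python classes. All names and interfaces
here are invented purely for illustration; they do not correspond to the actual
Globus or OGSA interfaces.

    class Resource:                          # a purely local compute resource at one site
        def __init__(self, name, load):
            self.name, self.load = name, load

    class FabricLayer:                       # interfaces to a local resource at a specific site
        def __init__(self, res):
            self.res = res
        def query_state(self):
            return {"name": self.res.name, "load": self.res.load}
        def run_locally(self, task):
            return "%s ran on %s" % (task, self.res.name)

    class ConnectivityLayer:                 # communication and security protocols spanning sites
        def authenticate(self, credential):
            return credential == "valid-delegated-proxy"   # stands in for real delegation

    class ResourceLayer:                     # manages a single resource; relies on connectivity-layer auth
        def __init__(self, fabric, conn):
            self.fabric, self.conn = fabric, conn
        def submit(self, credential, task):
            if not self.conn.authenticate(credential):
                raise PermissionError("not a member of this virtual organization")
            return self.fabric.run_locally(task)

    class CollectiveLayer:                   # discovery and scheduling across multiple resources
        def __init__(self, resource_layers):
            self.rls = resource_layers
        def schedule(self, credential, task):
            best = min(self.rls, key=lambda rl: rl.fabric.query_state()["load"])
            return best.submit(credential, task)

    conn = ConnectivityLayer()
    sites = [ResourceLayer(FabricLayer(Resource("site-A", 0.7)), conn),
             ResourceLayer(FabricLayer(Resource("site-B", 0.2)), conn)]
    print(CollectiveLayer(sites).schedule("valid-delegated-proxy", "analysis-job"))

Note how the collective layer never touches a resource directly: it schedules
through the resource layer, which in turn relies on the connectivity layer for
authentication and on the fabric layer for the actual local operations.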
1.3.2 Distributed Information Systems
Another important class of distributed systems is found in organizations that
were confronted with a wealth of networked applications, but for which interoper-
ability turned out to be a painful experience. Many of the existing middleware
solutions are the result of working with an infrastructure in which it was easier to
integrate applications into an enterprise-wide information system (Bernstein,
1996; and Alonso et al., 2004).
We can distinguish several levels at which integration took place. In many
cases, a networked application simply consisted of a server running that applica-
tion (often including a database) and making it available to remote programs, call-
ed clients. Such clients could send a request to the server for executing a specific
operation, after which a response would be sent back. Integration at the lowest
level would allow clients to wrap a number of requests, possibly for different ser-
vers, into a single larger request and have it executed as a distributed transac-
tion. The key idea was that all, or none of the requests would be executed.
As applications became more sophisticated and were gradually separated into
independent components (notably distinguishing database components from proc-
essing components), it became clear that integration should also take place by let-
ting applications communicate directly with each other. This has now led to a
huge industry that concentrates on enterprise application integration (EAI). In
the following, we concentrate on these two forms of distributed systems.
Transaction Processing Systems
To clarify our discussion, let us concentrate on database applications. In prac-
tice, operations on a database are usually carried out in the form of transactions.
Programming using transactions requires special primitives that must either be
supplied by the underlying distributed system or by the language runtime system.
Typical examples of transaction primitives are shown in Fig. 1-8. The exact list
of primitives depends on what kinds of objects are being used in the transaction
(Gray and Reuter, 1993). In a mail system, there might be primitives to send,
receive, and forward mail. In an accounting system, they might be quite different.
READ and WRITE are typical examples, however. Ordinary statements, procedure
calls, and so on, are also allowed inside a transaction. In particular, we mention
that remote procedure calls (RPCs), that is, procedure calls to remote servers, are
often also encapsulated in a transaction, leading to what is known as a tran-
sactional RPC. We discuss RPCs extensively in Chap. 4.
Primitive            Description
BEGIN_TRANSACTION    Mark the start of a transaction
END_TRANSACTION      Terminate the transaction and try to commit
ABORT_TRANSACTION    Kill the transaction and restore the old values
READ                 Read data from a file, a table, or otherwise
WRITE                Write data to a file, a table, or otherwise
Figure 1-8. Example primitives for transactions.
BEGIN_TRANSACTION and END_TRANSACTION are used to delimit the
scope of a transaction. The operations between them form the body of the
transaction. The characteristic feature of a transaction is that either all of these
operations are executed or none are executed. These may be system calls, library
procedures, or bracketing statements in a language, depending on the implementation.
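To make this concrete, consider the following sketch, in which the primitives of
Fig. 1-8 are modeled as hypothetical Python functions operating on a small
in-memory table. It is a toy, single-process illustration that ignores concurrency,
durability, and failures; it merely shows how a bank transfer is bracketed by the
primitives.

    accounts = {"alice": 500, "bob": 100}     # a tiny in-memory "database"
    snapshot = None                            # old values, kept for ABORT_TRANSACTION

    def begin_transaction():                   # mark the start of a transaction
        global snapshot
        snapshot = dict(accounts)

    def end_transaction():                     # try to commit: the new values stay
        global snapshot
        snapshot = None

    def abort_transaction():                   # kill the transaction, restore old values
        global snapshot
        accounts.clear()
        accounts.update(snapshot)
        snapshot = None

    def read(key):
        return accounts[key]

    def write(key, value):
        accounts[key] = value

    def transfer(src, dst, amount):
        begin_transaction()
        if read(src) < amount:                 # cannot complete: nothing must happen
            abort_transaction()
            return False
        write(src, read(src) - amount)
        write(dst, read(dst) + amount)
        end_transaction()                      # both updates become visible, or neither
        return True

    transfer("alice", "bob", 200)              # commits: alice 300, bob 300
    transfer("alice", "bob", 1000)             # aborts: balances stay 300 and 300

If the transfer cannot be completed, ABORT_TRANSACTION restores the old
values, so that the outside world observes either both updates or neither.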
This all-or-nothing property of transactions is one of the four characteristic
properties that transactions have. More specifically, transactions are:
1. Atomic: To the outside world, the transaction happens indivisibly.
2. Consistent: The transaction does not violate system invariants.
3. Isolated: Concurrent transactions do not interfere with each other.
4. Durable: Once a transaction commits, the changes are permanent.
These properties are often referred to by their initial letters: ACID.
The first key property exhibited by all transactions is that they are atomic.
This property ensures that each transaction either happens completely, or not at
all, and if it happens, it happens in a single indivisible, instantaneous action.
While a transaction is in progress, other processes (whether or not they are them-
selves involved in transactions) cannot see any of the intermediate states.
The second property says that they are consistent. What this means is that if
the system has certain invariants that must always hold, if they held before the
transaction, they will hold afterward too. For example, in a banking system, a key
invariant is the law of conservation of money. After every internal transfer, the
amount of money in the bank must be the same as it was before the transfer, but
for a brief moment during the transaction, this invariant may be violated. The vio-
lation is not visible outside the transaction, however.
The third property says that transactions are isolated or serializable. What it
means is that if two or more transactions are running at the same time, to each of
them and to other processes, the final result looks as though all transactions ran
sequentially in some (system dependent) order.
The fourth property says that transactions are durable. It refers to the fact
that once a transaction commits, no matter what happens, the transaction goes for-
ward and the results become permanent. No failure after the commit can undo the
results or cause them to be lost. (Durability is discussed extensively in Chap. 8.)
So far, transactions have been defined on a single database. A nested tran-
saction is constructed from a number of subtransactions, as shown in Fig. 1-9.
The top-level transaction may fork off children that run in parallel with one anoth-
er, on different machines, to gain performance or simplify programming. Each of
these children may also execute one or more subtransactions, or fork off its own
children.
Figure 1-9. A nested transaction, constructed from subtransactions that operate
on two different (independent) databases, such as an airline database and a hotel
database.
Subtransactions give rise to a subtle, but important, problem. Imagine that a
transaction starts several subtransactions in parallel, and one of these commits,
making its results visible to the parent transaction. After further computation, the
parent aborts, restoring the entire system to the state it had before the top-level
transaction started. Consequently, the results of the subtransaction that committed
must nevertheless be undone. Thus the permanence referred to above applies only
to top-level transactions.
Since transactions can be nested arbitrarily deeply, considerable administra-
tion is needed to get everything right. The semantics are clear, however. When
any transaction or subtransaction starts, it is conceptually given a private copy of
all data in the entire system for it to manipulate as it wishes. If it aborts, its private
universe just vanishes, as if it had never existed. If it commits, its private universe
replaces the parent’s universe. Thus if a subtransaction commits and then later a
new subtransaction is started, the second one sees the results produced by the first
one. Likewise, if an enclosing (higher-level) transaction aborts, all its underlying
subtransactions have to be aborted as well.
Nested transactions are important in distributed systems, for they provide a
natural way of distributing a transaction across multiple machines. They follow a
logical division of the work of the original transaction. For example, a transaction
for planning a trip by which three different flights need to be reserved can be logi-
cally split up into three subtransactions. Each of these subtransactions can be
managed separately and independent of the other two.
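The following sketch illustrates this structure for the reservation example. The
databases, seat counts, and undo mechanism are invented for illustration only and
sidestep the coordination machinery that a real system would need.

    class Subtransaction:
        def __init__(self, database, flight):
            self.database, self.flight = database, flight
        def run(self):
            if self.database.get(self.flight, 0) <= 0:   # no seat left: this child aborts
                raise RuntimeError("no seats on " + self.flight)
            self.database[self.flight] -= 1              # the child commits its reservation
        def undo(self):
            self.database[self.flight] += 1              # parent aborts: undo committed child

    def nested_transaction(subtransactions):
        committed = []
        try:
            for sub in subtransactions:                  # conceptually these may run in parallel
                sub.run()
                committed.append(sub)
        except RuntimeError:
            for sub in reversed(committed):              # an abort higher up forces the
                sub.undo()                               # committed children to be undone too
            return "aborted"
        return "committed"

    airline1 = {"AMS-JFK": 1}
    airline2 = {"JFK-SFO": 0}                            # this leg is fully booked
    trip = [Subtransaction(airline1, "AMS-JFK"), Subtransaction(airline2, "JFK-SFO")]
    print(nested_transaction(trip))                      # "aborted"; the AMS-JFK seat is restored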
In the early days of enterprise middleware systems, the component that hand-
led distributed (or nested) transactions formed the core for integrating applications
at the server or database level. This component was called a transaction proc-
essing monitor or TP monitor for short. Its main task was to allow an application
to access multiple servers/databases by offering it a transactional programming
model, as shown in Fig. 1-10.
Figure 1-10. The role of a TP monitor in distributed systems: a client application
sends its requests, wrapped in a single transaction, through the TP monitor to
multiple servers and receives their replies.
Enterprise Application Integration
As mentioned, the more applications became decoupled from the databases
they were built upon, the more evident it became that facilities were needed to
integrate applications independent from their databases. In particular, application
components should be able to communicate directly with each other and not mere-
ly by means of the request/reply behavior that was supported by transaction proc-
essing systems.
This need for interapplication communication led to many different communi-
cation models, which we will discuss in detail in this book (and for which reason
we shall keep it brief for now). The main idea was that existing applications could
directly exchange information, as shown in Fig. 1-11.
Figure 1-11. Middleware as a communication facilitator in enterprise application
integration: client and server-side applications exchange information through a
common communication middleware layer.
Several types of communication middleware exist. With remote procedure
calls (RPC), an application component can effectively send a request to another
application component by doing a local procedure call, which results in the re-
quest being packaged as a message and sent to the callee. Likewise, the result will
be sent back and returned to the application as the result of the procedure call.
As the popularity of object technology increased, techniques were developed
to allow calls to remote objects, leading to what is known as remote method
invocations (RMI). An RMI is essentially the same as an RPC, except that it op-
erates on objects instead of applications.
RPC and RMI have the disadvantage that the caller and callee both need to be
up and running at the time of communication. In addition, they need to know ex-
actly how to refer to each other. This tight coupling is often experienced as a seri-
ous drawback, and has led to what is known as message-oriented middleware, or
simply MOM. In this case, applications simply send messages to logical contact
points, often described by means of a subject. Likewise, applications can indicate
their interest in a specific type of message, after which the communication mid-
dleware will take care that those messages are delivered to those applications.
These so-called publish/subscribe systems form an important and expanding
class of distributed systems. We will discuss them at length in Chap. 13.
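To give a flavor of such subject-based communication, the sketch below
implements a toy in-process broker; it is not modeled on any particular MOM
product, and all names are made up.

    from collections import defaultdict

    class Broker:
        """A toy subject-based broker: publishers and subscribers never refer to
        each other directly; the middleware takes care of delivery."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, subject, callback):          # indicate interest in a subject
            self.subscribers[subject].append(callback)

        def publish(self, subject, message):              # deliver to all interested parties
            for deliver in self.subscribers[subject]:
                deliver(message)

    broker = Broker()
    broker.subscribe("orders/new", lambda m: print("billing saw:", m))
    broker.subscribe("orders/new", lambda m: print("shipping saw:", m))
    broker.publish("orders/new", {"order": 42, "item": "router"})

The publisher and the subscribers never refer to each other; they share only the
subject, which is precisely the loose coupling that makes this style attractive.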
1.3.3 Distributed Pervasive Systems
The distributed systems we have been discussing so far are largely charac-
terized by their stability: nodes are fixed and have a more or less permanent and
high-quality connection to a network. To a certain extent, this stability has been
realized through the various techniques that are discussed in this book and which
aim at achieving distribution transparency. For example, the wealth of techniques
for masking failures and recovery will give the impression that only occasionally
things may go wrong. Likewise, we have been able to hide aspects related to the
actual network location of a node, effectively allowing users and applications to
believe that nodes stay put.
However, matters have become very different with the introduction of mobile
and embedded computing devices. We are now confronted with distributed sys-
tems in which instability is the default behavior. The devices in these systems,
which we refer to as distributed pervasive systems, are often characterized by being small,
battery-powered, mobile, and having only a wireless connection, although not all
these characteristics apply to all devices. Moreover, these characteristics need not
necessarily be interpreted as restrictive, as is illustrated by the possibilities of
modern smart phones (Roussos et al., 2005).
As its name suggests, a distributed pervasive system is part of our surround-
ings (and as such, is generally inherently distributed). An important feature is the
general lack of human administrative control. At best, devices can be configured
by their owners, but otherwise they need to automatically discover their environ-
ment and ‘‘nestle in’’ as best as possible. This nestling in has been made more pre-
cise by Grimm et al. (2004) by formulating the following three requirements for
pervasive applications:
1. Embrace contextual changes.
2. Encourage ad hoc composition.
3. Recognize sharing as the default.
Embracing contextual changes means that a device must continuously be
aware of the fact that its environment may change all the time. One of the sim-
plest changes is discovering that a network is no longer available, for example,
because a user is moving between base stations. In such a case, the application
should react, possibly by automatically connecting to another network, or taking
other appropriate actions.
Encouraging ad hoc composition refers to the fact that many devices in per-
vasive systems will be used in very different ways by different users. As a result,
it should be easy to configure the suite of applications running on a device, either
by the user or through automated (but controlled) interposition.
One very important aspect of pervasive systems is that devices generally join
the system in order to access (and possibly provide) information. This calls for
means to easily read, store, manage, and share information. In light of the inter-
mittent and changing connectivity of devices, the space where accessible informa-
tion resides will most likely change all the time.
Mascolo et al. (2004) as well as Niemela and Latvakoski (2004) came to simi-
lar conclusions: in the presence of mobility, devices should support easy and ap-
plication-dependent adaptation to their local environment. They should be able to
efficiently discover services and react accordingly. It should be clear from these
requirements that distribution transparency is not really in place in pervasive sys-
tems. In fact, distribution of data, processes, and control is inherent to these sys-
tems, for which reason it may be better just to simply expose it rather than trying
to hide it. Let us now take a look at some concrete examples of pervasive systems.
Home Systems
An increasingly popular type of pervasive system, though perhaps the least
constrained, is the system built around a home network. These systems
generally consist of one or more personal computers, but more importantly inte-
grate typical consumer electronics such as TVs, audio and video equipment, gam-
ing devices, (smart) phones, PDAs, and other personal wearables into a single sys-
tem. In addition, we can expect that all kinds of devices such as kitchen appli-
ances, surveillance cameras, clocks, controllers for lighting, and so on, will all be
hooked up into a single distributed system.
From a system’s perspective there are several challenges that need to be ad-
dressed before pervasive home systems become reality. An important one is that
such a system should be completely self-configuring and self-managing. It cannot
be expected that end users are willing and able to keep a distributed home system
up and running if its components are prone to errors (as is the case with many of
today’s devices). Much has already been accomplished through the Universal
Plug and Play (UPnP) standards by which devices automatically obtain IP ad-
dresses, can discover each other, etc. (UPnP Forum, 2003). However, more is
needed. For example, it is unclear how software and firmware in devices can be
easily updated without manual intervention, or when updates do take place, that
compatibility with other devices is not violated.
Another pressing issue is managing what is known as a ‘‘personal space.’’
Recognizing that a home system consists of many shared as well as personal de-
vices, and that the data in a home system is also subject to sharing restrictions,
much attention is paid to realizing such personal spaces. For example, part of
Alice’s personal space may consist of her agenda, family photos, a diary, music
and videos that she bought, etc. These personal assets should be stored in such a
way that Alice has access to them whenever appropriate. Moreover, parts of this
personal space should be (temporarily) accessible to others, for example, when
she needs to make a business appointment.
Fortunately, things may become simpler. It has long been thought that the per-
sonal spaces related to home systems were inherently distributed across the vari-
ous devices. Obviously, such a dispersion can easily lead to significant synchroni-
zation problems. However, problems may be alleviated due to the rapid increase
in the capacity of hard disks, along with a decrease in their size. Configuring a
multi-terabyte storage unit for a personal computer is not really a problem. At the
same time, portable hard disks having a capacity of hundreds of gigabytes are
being placed inside relatively small portable media players. With these continu-
ously increasing capacities, we may see pervasive home systems adopt an archi-
tecture in which a single machine acts as a master (and is hidden away somewhere
in the basement next to the central heating), and all other fixed devices simply
provide a convenient interface for humans. Personal devices will then be cram-
med with daily needed information, but will never run out of storage.
However, having enough storage does not solve the problem of managing per-
sonal spaces. Being able to store huge amounts of data shifts the problem to stor-
ing relevant data and being able to find it later. Increasingly we will see pervasive
systems, like home networks, equipped with what are called recommenders, pro-
grams that consult what other users have stored in order to identify similar taste,
and from that subsequently derive which content to place in one’s personal space.
An interesting observation is that the amount of information that recommender
programs need to do their work is often small enough to allow them to be run on
PDAs (Miller et al., 2004).
Electronic Health Care Systems
Another important and upcoming class of pervasive systems are those related
to (personal) electronic health care. With the increasing cost of medical treatment,
new devices are being developed to monitor the well-being of individuals and to
automatically contact physicians when needed. In many of these systems, a major
goal is to prevent people from being hospitalized.
Personal health care systems are often equipped with various sensors organ-
ized in a (preferably wireless) body-area network (BAN). An important issue is
that such a network should at worst only minimally hinder a person. To this end,
the network should be able to operate while a person is moving, with no strings
(i.e., wires) attached to immobile devices.
This requirement leads to two obvious organizations, as shown in Fig. 1-12.
In the first one, a central hub is part of the BAN and collects data as needed. From
time to time, this data is then offloaded to a larger storage device. The advantage
of this scheme is that the hub can also manage the BAN. In the second scenario,
the BAN is continuously hooked up to an external network, again through a wire-
less connection, to which it sends monitored data. Separate techniques will need
to be deployed for managing the BAN. Of course, further connections to a physi-
cian or other people may exist as well.
Figure 1-12. Monitoring a person in a pervasive electronic health care system,
using (a) a local hub or (b) a continuous wireless connection.
From a distributed system’s perspective we are immediately confronted with
questions such as:
1. Where and how should monitored data be stored?
2. How can we prevent loss of crucial data?
3. What infrastructure is needed to generate and propagate alerts?
4. How can physicians provide online feedback?
5. How can extreme robustness of the monitoring system be realized?
6. What are the security issues and how can the proper policies be
enforced?
Unlike home systems, we cannot expect the architecture of pervasive health care
systems to move toward single-server systems and have the monitoring devices
operate with minimal functionality. On the contrary: for reasons of efficiency, de-
vices and body-area networks will be required to support in-network data proc-
essing, meaning that monitoring data will, for example, have to be aggregated be-
fore permanently storing it or sending it to a physician. Unlike the case for distrib-
uted information systems, there is yet no clear answer to these questions.
Sensor Networks
Our last example of pervasive systems is sensor networks. These networks in
many cases form part of the enabling technology for pervasiveness and we see
that many solutions for sensor networks return in pervasive applications. What
makes sensor networks interesting from a distributed system’s perspective is that
in virtually all cases they are used for processing information. In this sense, they
do more than just provide communication services, which is what traditional com-
puter networks are all about. Akyildiz et al. (2002) provide an overview from a
networking perspective. A more systems-oriented introduction to sensor networks
is given by Zhao and Guibas (2004). Strongly related are mesh networks which
essentially form a collection of (fixed) nodes that communicate through wireless
links. These networks may form the basis for many medium-scale distributed sys-
tems. An overview is provided in Akyildiz et al. (2005).
A sensor network typically consists of tens to hundreds or thousands of rela-
tively small nodes, each equipped with a sensing device. Most sensor networks
use wireless communication, and the nodes are often battery powered. Their lim-
ited resources, restricted communication capabilities, and constrained power con-
sumption demand that efficiency be high on the list of design criteria.
The relation with distributed systems can be made clear by considering sensor
networks as distributed databases. This view is quite common and easy to under-
stand when realizing that many sensor networks are deployed for measurement
and surveillance applications (Bonnet et al., 2002). In these cases, an operator
would like to extract information from (a part of) the network by simply issuing
queries such as ‘‘What is the northbound traffic load on Highway 1?’’ Such
queries resemble those of traditional databases. In this case, the answer will prob-
ably need to be provided through collaboration of many sensors located around
Highway 1, while leaving other sensors untouched.
To organize a sensor network as a distributed database, there are essentially
two extremes, as shown in Fig. 1-13. First, sensors do not cooperate but simply
send their data to a centralized database located at the operator’s site. The other
extreme is to forward queries to relevant sensors and to let each compute an
answer, requiring the operator to sensibly aggregate the returned answers.
Neither of these solutions is very attractive. The first one requires that sensors
send all their measured data through the network, which may waste network re-
sources and energy. The second solution may also be wasteful as it discards the
aggregation capabilities of sensors which would allow much less data to be re-
turned to the operator. What is needed are facilities for in-network data proc-
essing, as we also encountered in pervasive health care systems.
In-network processing can be done in numerous ways. One obvious one is to
forward a query to all sensor nodes along a tree encompassing all nodes and to
subsequently aggregate the results as they are propagated back to the root, where
the initiator is located. Aggregation will take place where two or more branches of
the tree come to together. As simple as this scheme may sound, it introduces diffi-
cult questions:
1. How do we (dynamically) set up an efficient tree in a sensor network?
2. How does aggregation of results take place? Can it be controlled?
3. What happens when network links fail?
These questions have been partly addressed in TinyDB, which implements a de-
clarative (database) interface to wireless sensor networks. In essence, TinyDB can
use any tree-based routing algorithm. An intermediate node will collect and ag-
gregate the results from its children, along with its own findings, and send that
toward the root. To make matters efficient, queries span a period of time allowing
for careful scheduling of operations so that network resources and energy are
optimally consumed. Details can be found in Madden et al. (2005).
Figure 1-13. Organizing a sensor network database, while storing and processing
data (a) only at the operator’s site or (b) only at the sensors.
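The following sketch shows the essence of such tree-based in-network aggregation.
The nodes, readings, and aggregate (an average) are invented for illustration; a
system such as TinyDB adds query scheduling, routing, and failure handling on
top of this idea.

    class SensorNode:
        def __init__(self, reading, children=()):
            self.reading, self.children = reading, children

        def answer_query(self):
            # Aggregate the partial results of the children with this node's own
            # reading, and forward only the aggregate toward the root.
            count, total = 1, self.reading
            for child in self.children:
                child_count, child_total = child.answer_query()
                count += child_count
                total += child_total
            return count, total                      # far less data than all raw readings

    # A small routing tree; the root is the node closest to the operator.
    leaves = (SensorNode(21.0), SensorNode(23.5))
    root = SensorNode(20.5, children=(SensorNode(21.5, leaves), SensorNode(22.0)))

    count, total = root.answer_query()
    print("average temperature:", total / count)     # a single aggregated answer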
However, when queries can be initiated from different points in the network,
using single-rooted trees such as in TinyDB may not be efficient enough. As an
alternative, sensor networks may be equipped with special nodes where results are
forwarded to, as well as the queries related to those results. To give a simple ex-
ample, queries and results related to temperature readings are collected at a different
location than those related to humidity measurements. This approach corresponds
directly to the notion of publish/subscribe systems, which we will discuss exten-
sively in Chap. 13.
1.4 SUMMARY
Distributed systems consist of autonomous computers that work together to
give the appearance of a single coherent system. One important advantage is that
they make it easier to integrate different applications running on different com-
puters into a single system. Another advantage is that when properly designed,
distributed systems scale well with respect to the size of the underlying network.
These advantages often come at the cost of more complex software, degradation
of performance, and also often weaker security. Nevertheless, there is consid-
erable interest worldwide in building and installing distributed systems.
Distributed systems often aim at hiding many of the intricacies related to the
distribution of processes, data, and control. However, this distribution transpar-
ency not only comes at a performance price, but in practical situations it can never
be fully achieved. The fact that trade-offs need to be made between achieving var-
ious forms of distribution transparency is inherent to the design of distributed sys-
tems, and can easily complicate their understanding.
Matters are further complicated by the fact that many developers initially
make assumptions about the underlying network that are fundamentally wrong.
Later, when assumptions are dropped, it may turn out to be difficult to mask
unwanted behavior. A typical example is assuming that network latency is not sig-
nificant. Later, when porting an existing system to a wide-area network, hiding
latencies may deeply affect the system’s original design. Other pitfalls include
assuming that the network is reliable, static, secure, and homogeneous.
Different types of distributed systems exist which can be classified as being
oriented toward supporting computations, information processing, and pervasive-
ness. Distributed computing systems are typically deployed for high-performance
applications often originating from the field of parallel computing. A huge class
of distributed information systems can be found in traditional office environments where we see data-
bases playing an important role. Typically, transaction processing systems are
deployed in these environments. Finally, an emerging class of distributed systems
is where components are small and the system is composed in an ad hoc fashion,
but most of all is no longer managed through a system administrator. This last
class is typically represented by ubiquitous computing environments.
PROBLEMS
1. An alternative definition for a distributed system is that of a collection of independent
computers providing the view of being a single system, that is, it is completely hidden
from users that there are even multiple computers. Give an example where this view
would come in very handy.
2. What is the role of middleware in a distributed system?
3. Many networked systems are organized in terms of a back office and a front office.
How does this organization match with the coherent view we demand for a distributed
system?
4. Explain what is meant by (distribution) transparency, and give examples of different
types of transparency.
5. Why is it sometimes so hard to hide the occurrence and recovery from failures in a
distributed system?
6. Why is it not always a good idea to aim at implementing the highest degree of trans-
parency possible?
7. What is an open distributed system and what benefits does openness provide?
8. Describe precisely what is meant by a scalable system.
9. Scalability can be achieved by applying different techniques. What are these tech-
niques?
10. Explain what is meant by a virtual organization and give a hint on how such organiza-
tions could be implemented.
11. When a transaction is aborted, we have said that the world is restored to its previous
state, as though the transaction had never happened. We lied. Give an example where
resetting the world is impossible.
12. Executing nested transactions requires some form of coordination. Explain what a
coordinator should actually do.
13. We argued that distribution transparency may not be in place for pervasive systems.
This statement is not true for all types of transparencies. Give an example.
14. We already gave some examples of distributed pervasive systems: home systems,
electronic health-care systems, and sensor networks. Extend this list with more ex-
amples.
15. (Lab assignment) Sketch a design for a home system consisting of a separate media
server that will allow for the attachment of a wireless client. The latter is connected to
(analog) audio/video equipment and transforms the digital media streams to analog
output. The server runs on a separate machine, possibly connected to the Internet, but
has no keyboard and/or monitor connected.
2
ARCHITECTURES
Distributed systems are often complex pieces of software of which the com-
ponents are by definition dispersed across multiple machines. To master their
complexity, it is crucial that these systems are properly organized. There are dif-
ferent ways to view the organization of a distributed system, but an obvious one
is to make a distinction between, on the one hand, the logical organization of the
collection of software components and, on the other hand, the actual physical realization.
The organization of distributed systems is mostly about the software com-
ponents that constitute the system. These software architectures tell us how the
various software components are to be organized and how they should interact. In
this chapter we will first pay attention to some commonly applied approaches
toward organizing (distributed) computer systems.
The actual realization of a distributed system requires that we instantiate and
place software components on real machines. There are many different choices
that can be made in doing so. The final instantiation of a software architecture is
also referred to as a system architecture. In this chapter we will look into tradi-
tional centralized architectures in which a single server implements most of the
software components (and thus functionality), while remote clients can access that
server using simple communication means. In addition, we consider decentralized
architectures in which machines more or less play equal roles, as well as hybrid
organizations.
As we explained in Chap. 1, an important goal of distributed systems is to
separate applications from underlying platforms by providing a middleware layer.
Adopting such a layer is an important architectural decision, and its main purpose
is to provide distribution transparency. However, trade-offs need to be made to
achieve transparency, which has led to various techniques to make middleware
adaptive. We discuss some of the more commonly applied ones in this chapter, as
they affect the organization of the middleware itself.
Adaptability in distributed systems can also be achieved by having the system
monitor its own behavior and taking appropriate measures when needed. This in-
sight has led to a class of what are now referred to as autonomic systems. These
distributed systems are frequently organized in the form of feedback control
loops, which form an important architectural element during a system’s design. In
this chapter, we devote a section to autonomic distributed systems.
2.1 ARCHITECTURAL STYLES
We start our discussion on architectures by first considering the logical organ-
ization of distributed systems into software components, also referred to as soft-
ware architecture (Bass et al., 2003). Research on software architectures has
matured considerably and it is now commonly accepted that designing or adopting
an architecture is crucial for the successful development of large systems.
For our discussion, the notion of an architectural style is important. Such a
style is formulated in terms of components, the way that components are con-
nected to each other, the data exchanged between components, and finally how
these elements are jointly configured into a system. A component is a modular
unit with well-defined required and provided interfaces that is replaceable within
its environment (OMG, 2004b). As we shall discuss below, the important issue
about a component for distributed systems is that it can be replaced, provided we
respect its interfaces. A somewhat more difficult concept to grasp is that of a con-
nector, which is generally described as a mechanism that mediates communica-
tion, coordination, or cooperation among components (Mehta et al., 2000; and
Shaw and Clements, 1997). For example, a connector can be formed by the facili-
ties for (remote) procedure calls, message passing, or streaming data.
Using components and connectors, we can come to various configurations,
which, in turn have been classified into architectural styles. Several styles have by
now been identified, of which the most important ones for distributed systems are:
1. Layered architectures
2. Object-based architectures
3. Data-centered architectures
4. Event-based architectures
The basic idea for the layered style is simple: components are organized in a
layered fashion where a component at layer Li is allowed to call components at
the underlying layer Li−1, but not the other way around, as shown in Fig. 2-1(a).
This model has been widely adopted by the networking community; we briefly
review it in Chap. 4. A key observation is that control generally flows from layer
to layer: requests go down the hierarchy whereas the results flow upward.
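As a minimal sketch (with three invented layers), the layered style boils down to
each component calling only the layer directly beneath it, with results traveling
back up:

    class TransportLayer:                     # layer 1 (lowest)
        def request(self, data):
            return "delivered(" + data + ")"  # the result flows back upward

    class SessionLayer:                       # layer 2: may only call layer 1
        def __init__(self):
            self.lower = TransportLayer()
        def request(self, data):
            return self.lower.request("session:" + data)

    class ApplicationLayer:                   # layer 3: may only call layer 2
        def __init__(self):
            self.lower = SessionLayer()
        def request(self, data):
            return self.lower.request("app:" + data)

    print(ApplicationLayer().request("hello"))   # the request goes down, the response comes up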
A far looser organization is followed in object-based architectures, which
are illustrated in Fig. 2-1(b). In essence, each object corresponds to what we have
defined as a component, and these components are connected through a (remote)
procedure call mechanism. Not surprisingly, this software architecture matches
the client-server system architecture we described above. The layered and object-
based architectures still form the most important styles for large software systems
(Bass et al., 2003).
Figure 2-1. The (a) layered and (b) object-based architectural style.
Data-centered architectures evolve around the idea that processes commun-
icate through a common (passive or active) repository. It can be argued that for
distributed systems these architectures are as important as the layered and object-
based architectures. For example, a wealth of networked applications have been
developed that rely on a shared distributed file system in which virtually all com-
munication takes place through files. Likewise, Web-based distributed systems,
which we discuss extensively in Chap. 12, are largely data-centric: processes
communicate through the use of shared Web-based data services.
In event-based architectures, processes essentially communicate through the
propagation of events, which optionally also carry data, as shown in Fig. 2-2(a).
For distributed systems, event propagation has generally been associated with
what are known as publish/subscribe systems (Eugster et al., 2003). The basic
idea is that processes publish events after which the middleware ensures that only
those processes that subscribed to those events will receive them. The main
advantage of event-based systems is that processes are loosely coupled. In princi-
ple, they need not explicitly refer to each other. This is also referred to as being
decoupled in space, or referentially decoupled.
Figure 2-2. The (a) event-based and (b) shared data-space architectural style.
Event-based architectures can be combined with data-centered architectures,
yielding what is also known as shared data spaces. The essence of shared data
spaces is that processes are now also decoupled in time: they need not both be ac-
tive when communication takes place. Furthermore, many shared data spaces use
a SQL-like interface to the shared repository in the sense that data can be ac-
cessed using a description rather than an explicit reference, as is the case with
files. We devote Chap. 13 to this architectural style.
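The following sketch captures this description-based access in a toy in-memory
data space, with None acting as a wildcard in a pattern; real shared data spaces,
such as tuple spaces or SQL-like repositories, are of course far richer.

    class DataSpace:
        """A toy shared data space: producers and consumers are decoupled in space
        (no references to each other) and in time (they need not be active together)."""
        def __init__(self):
            self.tuples = []

        def write(self, tup):
            self.tuples.append(tup)

        def read(self, pattern):
            # A stored tuple matches if every non-None field of the pattern is equal.
            for tup in self.tuples:
                if len(tup) == len(pattern) and all(
                        p is None or p == t for p, t in zip(pattern, tup)):
                    return tup
            return None

    space = DataSpace()
    space.write(("temperature", "room-12", 21.5))         # the producer ran earlier
    print(space.read(("temperature", "room-12", None)))   # the consumer asks by description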
What makes these software architectures important for distributed systems is
that they all aim at achieving (at a reasonable level) distribution transparency.
However, as we have argued, distribution transparency requires making trade-offs
between performance, fault tolerance, ease-of-programming, and so on. As there
is no single solution that will meet the requirements for all possible distributed ap-
plications, researchers have abandoned the idea that a single distributed system
can be used to cover 90% of all possible cases.
2.2 SYSTEM ARCHITECTURES
Now that we have briefly discussed some common architectural styles, let us
take a look at how many distributed systems are actually organized by considering
where software components are placed. Deciding on software components, their
interaction, and their placement leads to an instance of a software architecture,
also called a system architecture (Bass et al., 2003). We will discuss centralized
and decentralized organizations, as well as various hybrid forms.
2.2.1 Centralized Architectures
Despite the lack of consensus on many distributed systems issues, there is one
issue that many researchers and practitioners agree upon: thinking in terms of cli-
ents that request services from servers helps us understand and manage the com-
plexity of distributed systems and that is a good thing.
In the basic client-server model, processes in a distributed system are divided
into two (possibly overlapping) groups. A server is a process implementing a spe-
cific service, for example, a file system service or a database service. A client is a
process that requests a service from a server by sending it a request and subse-
quently waiting for the server’s reply. This client-server interaction, also known
as request-reply behavior, is shown in Fig. 2-3.
Figure 2-3. General interaction between a client and a server: the client sends a
request and waits for the result, while the server provides the service and returns
a reply.
Communication between a client and a server can be implemented by means
of a simple connectionless protocol when the underlying network is fairly reliable
as in many local-area networks. In these cases, when a client requests a service, it
simply packages a message for the server, identifying the service it wants, along
with the necessary input data. The message is then sent to the server. The latter, in
turn, will always wait for an incoming request, subsequently process it, and pack-
age the results in a reply message that is then sent to the client.
Using a connectionless protocol has the obvious advantage of being efficient.
As long as messages do not get lost or corrupted, the request/reply protocol just
sketched works fine. Unfortunately, making the protocol resistant to occasional
transmission failures is not trivial. The only thing we can do is possibly let the cli-
ent resend the request when no reply message comes in. The problem, however, is
that the client cannot detect whether the original request message was lost, or that
transmission of the reply failed. If the reply was lost, then resending a request
may result in performing the operation twice. If the operation was something like
‘‘transfer $10,000 from my bank account,’’ then clearly, it would have been better
that we simply reported an error instead. On the other hand, if the operation was
‘‘tell me how much money I have left,’’ it would be perfectly acceptable to resend
the request. When an operation can be repeated multiple times without harm, it is
said to be idempotent. Since some requests are idempotent and others are not, it
should be clear that there is no single solution for dealing with lost messages. We
defer a detailed discussion on handling transmission failures to Chap. 8.
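As an illustration, the sketch below shows what a connectionless request-reply
client might look like using UDP sockets. The server address, message format,
and retry policy are invented; the commented calls at the end indicate why
resending is only safe for idempotent operations.

    import socket

    def request(message, server=("127.0.0.1", 9999), retries=3, timeout=1.0):
        # Send a request over UDP and wait for the reply, resending on a timeout.
        # The client cannot tell whether the request or the reply was lost, so a
        # resend may cause the operation to be executed twice at the server.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for attempt in range(retries):
                sock.sendto(message.encode(), server)
                try:
                    reply, _ = sock.recvfrom(4096)
                    return reply.decode()
                except socket.timeout:
                    continue                       # no reply yet: resend the request
            raise TimeoutError("no reply from server")
        finally:
            sock.close()

    # request("how much money do I have left")     # idempotent: safe to retry
    # request("transfer $10,000 from my account")  # not idempotent: retrying may repeat it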
As an alternative, many client-server systems use a reliable connection-
oriented protocol. Although this solution is not entirely appropriate in a local-area
network due to relatively low performance, it works perfectly fine in wide-area
systems in which communication is inherently unreliable. For example, virtually
all Internet application protocols are based on reliable TCP/IP connections. In this
case, whenever a client requests a service, it first sets up a connection to the
server before sending the request. The server generally uses that same connection
to send the reply message, after which the connection is torn down. The trouble is
that setting up and tearing down a connection is relatively costly, especially when
the request and reply messages are small.
Application Layering
The client-server model has been subject to many debates and controversies
over the years. One of the main issues was how to draw a clear distinction be-
tween a client and a server. Not surprisingly, there is often no clear distinction.
For example, a server for a distributed database may continuously act as a client
because it is forwarding requests to different file servers responsible for imple-
menting the database tables. In such a case, the database server itself essentially
does no more than process queries.
However, considering that many client-server applications are targeted toward
supporting user access to databases, many people have advocated a distinction be-
tween the following three levels, essentially following the layered architectural
style we discussed previously:
1. The user-interface level
2. The processing level
3. The data level
The user-interface level contains all that is necessary to directly interface with the
user, such as display management. The processing level typically contains the ap-
plications. The data level manages the actual data that is being acted on.
Clients typically implement the user-interface level. This level consists of the
programs that allow end users to interact with applications. There is a consid-
erable difference in how sophisticated user-interface programs are.
The simplest user-interface program is nothing more than a character-based
screen. Such an interface has been typically used in mainframe environments. In
those cases where the mainframe controls all interaction, including the keyboard
and monitor, one can hardly speak of a client-server environment. However, in
many cases, the user’s terminal does some local processing such as echoing typed
keystrokes, or supporting form-like interfaces in which a complete entry is to be
edited before sending it to the main computer.
Nowadays, even in mainframe environments, we see more advanced user in-
terfaces. Typically, the client machine offers at least a graphical display in which
pop-up or pull-down menus are used, and of which many of the screen controls
are handled through a mouse instead of the keyboard. Typical examples of such
interfaces include the X-Windows interfaces as used in many UNIX environments,
and earlier interfaces developed for MS-DOS PCs and Apple Macintoshes.
Modern user interfaces offer considerably more functionality by allowing ap-
plications to share a single graphical window, and to use that window to exchange
data through user actions. For example, to delete a file, it is usually possible to
move the icon representing that file to an icon representing a trash can. Likewise,
many word processors allow a user to move text in a document to another position
by using only the mouse. We return to user interfaces in Chap. 3.
Many client-server applications can be constructed from roughly three dif-
ferent pieces: a part that handles interaction with a user, a part that operates on a
database or file system, and a middle part that generally contains the core func-
tionality of an application. This middle part is logically placed at the processing
level. In contrast to user interfaces and databases, there are not many aspects com-
mon to the processing level. Therefore, we shall give several examples to make
this level clearer.
As a first example, consider an Internet search engine. Ignoring all the
animated banners, images, and other fancy window dressing, the user interface of
a search engine is very simple: a user types in a string of keywords and is subse-
quently presented with a list of titles of Web pages. The back end is formed by a
huge database of Web pages that have been prefetched and indexed. The core of
the search engine is a program that transforms the user’s string of keywords into
one or more database queries. It subsequently ranks the results into a list, and
transforms that list into a series of HTML pages. Within the client-server model,
this information retrieval part is typically placed at the processing level. Fig. 2-4
shows this organization.
Figure 2-4. The simplified organization of an Internet search engine into three
different layers: a user-interface level that accepts a keyword expression and
returns an HTML page containing the ranked list, a processing level consisting of
a query generator, a ranking algorithm, and an HTML generator, and a data level
holding the database with Web pages.
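The sketch below mirrors this three-level split in a few lines of Python; the page
database, the query generation, and the ranking are toy stand-ins for illustration
only.

    PAGES = {   # data level: a toy database of prefetched, indexed Web pages
        "http://a.example": "distributed systems and middleware",
        "http://b.example": "cooking pasta at home",
    }

    def query_database(keywords):               # processing level: query generator
        return [(url, text) for url, text in PAGES.items()
                if any(k in text for k in keywords)]

    def rank(results, keywords):                # processing level: ranking algorithm
        return sorted(results, key=lambda r: -sum(r[1].count(k) for k in keywords))

    def render_html(ranked):                    # processing level: HTML generator
        return "<ul>" + "".join("<li>%s</li>" % url for url, _ in ranked) + "</ul>"

    def user_interface(keyword_expression):     # user-interface level
        keywords = keyword_expression.split()
        return render_html(rank(query_database(keywords), keywords))

    print(user_interface("distributed systems"))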
As a second example, consider a decision support system for a stock broker-
age. Analogous to a search engine, such a system can be divided into a front end
implementing the user interface, a back end for accessing a database with the
financial data, and the analysis programs between these two. Analysis of financial
data may require sophisticated methods and techniques from statistics and artifi-
cial intelligence. In some cases, the core of a financial decision support system
may even need to be executed on high-performance computers in order to achieve
the throughput and responsiveness that is expected from its users.
As a last example, consider a typical desktop package, consisting of a word
processor, a spreadsheet application, communication facilities, and so on. Such
‘‘office’’ suites are generally integrated through a common user interface that sup-
ports compound documents, and operates on files from the user’s home directory.
(In an office environment, this home directory is often placed on a remote file
server.) In this example, the processing level consists of a relatively large collec-
tion of programs, each having rather simple processing capabilities.
The data level in the client-server model contains the programs that maintain
the actual data on which the applications operate. An important property of this
level is that data are often persistent, that is, even if no application is running,
data will be stored somewhere for next use. In its simplest form, the data level
consists of a file system, but it is more common to use a full-fledged database. In
the client-server model, the data level is typically implemented at the server side.
Besides merely storing data, the data level is generally also responsible for
keeping data consistent across different applications. When databases are being
used, maintaining consistency means that metadata such as table descriptions,
entry constraints and application-specific metadata are also stored at this level.
For example, in the case of a bank, we may want to generate a notification when a
customer’s credit card debt reaches a certain value. This type of information can
be maintained through a database trigger that activates a handler for that trigger at
the appropriate moment.
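As an illustration of such a trigger-activated handler, the following sketch uses
SQLite from Python; the table, the threshold, and the handler are invented for the
example.

    import sqlite3

    def notify(customer, debt):                       # the handler activated by the trigger
        print("ALERT:", customer, "has a credit card debt of", debt)

    conn = sqlite3.connect(":memory:")
    conn.create_function("notify", 2, notify)         # make the handler callable from SQL
    conn.executescript("""
        CREATE TABLE accounts(customer TEXT, debt REAL);
        INSERT INTO accounts VALUES ('alice', 100.0);
        CREATE TRIGGER debt_watch AFTER UPDATE ON accounts
        WHEN NEW.debt > 1000
        BEGIN
            SELECT notify(NEW.customer, NEW.debt);
        END;
    """)
    conn.execute("UPDATE accounts SET debt = 1500 WHERE customer = 'alice'")  # fires the trigger
    conn.commit()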
In most business-oriented environments, the data level is organized as a rela-
tional database. Data independence is crucial here. The data are organized inde-
pendent of the applications in such a way that changes in that organization do not
affect applications, and neither do the applications affect the data organization.
Using relational databases in the client-server model helps separate the processing
level from the data level, as processing and data are considered independent.
However, relational databases are not always the ideal choice. A charac-
teristic feature of many applications is that they operate on complex data types
that are more easily modeled in terms of objects than in terms of relations. Exam-
ples of such data types range from simple polygons and circles to representations
of aircraft designs, as is the case with computer-aided design (CAD) systems.
In those cases where data operations are more easily expressed in terms of ob-
ject manipulations, it makes sense to implement the data level by means of an ob-
ject-oriented or object-relational database. Notably the latter type has gained
popularity as these databases build upon the widely dispersed relational data
model, while offering the advantages that object-orientation gives.
40
Exploring the Variety of Random
Documents with Different Content
Distributed systems principles and paradigms 2nd ed., New international ed Edition Tanenbaum
VIII
WARREN HASTINGS,
THE FIRST UNCROWNED KING OF INDIA
B
VIII
WARREN HASTINGS
OTH in point of time and personal capacity, Warren Hastings,
first Governor-General of the British Empire in India, was the
successor of Robert, Lord Clive. At the same time it may be as well
to point out in this connection that there might be more literal
correctness in describing Warren Hastings as an Empire-Preserver
rather than an Empire-Maker.
It was the victor of Plassey who rough-hewed the stones upon
which the now gorgeous fabric of our Indian Empire stands. It was
Hastings who, in spite of stupendous difficulties, took those stones
and laid them down according to that plan which he had formed,
and which has been followed in the main by all who have added to
the structure.
As was said in other words of William of Orange, one of the
greatest claims that the great Governor has to the interest and
admiration of those who have a share in the splendid inheritance
that he built up, lies in the fact that he did his work in the face of
everlasting hindrances and in the midst of perpetual
embarrassments, which must infallibly have discouraged and
bewildered any but a man upon whom the gods had set the stamp
of greatness, and, in their own way, crowned him one of the kings of
men. In short, like the grandson of William the Silent, Warren
Hastings was first and foremost an overcomer of difficulties.
Great and splendid and enduring as his work undoubtedly was, it
would not, after all, have been very difficult to do if he had just been
left to do it—not helped, because he wasn’t the kind of man who
wanted help, but just left alone. Instead of this, however, as though
it were not enough that his work of organising and consolidating
what the sword of Clive had won, and combating the infinity of
complications arising out of the rivalry of a dozen warring native
potentates, he was purposely surrounded in his own council-
chamber by unscrupulous enemies of his own blood and country,
whose only title to historical recognition is now the infamy that they
have earned by failing to prevent the doing of that work which
Warren Hastings saw had got to be done, and which he, with an
inflexible heroism, decided to do in spite of everything that his
enemies, white or brown, Mohammedan, Hindoo or British, could do
to cripple him.
Sir Alfred Lyall, his most recent biographer, has very happily said
of him that “perhaps no man of undisputed genius ever inherited
less in mind or money from his parents or owed them fewer
obligations of any kind.” His father, Pynaston Hastings, was the
vagrant ne’er-do-well son of a fine old family. He married when only
fifteen without any means or prospect of supporting a family. Warren
was the second son. His father was only seventeen at his birth, and
his mother died a few days later. As soon as he was old enough
Pynaston took holy orders, married again, obtained a living in the
West Indies, and there died, leaving his son to be put into a charity
school by his grandfather.
This is not much for a father to do for a son, but there was
something else that Pynaston Hastings did which was of very great
consequence, though in the nature of the case no credit is due to
him for it. He transmitted to him the blood of a long line of
ancestors, which stretched away back through one of the followers
of William the Norman to the days of those old pirate kings of the
Northland who, as I have pointed out before, were none the worse
fathers of Empire-Makers because they were pirates as well.
One of his ancestors, John Hastings, Lord of the Manors of
Yelford-Hastings in Oxfordshire, and of Dalesford in Worcestershire,
lost about half of his worldly goods, including the plate that he sent
to be coined at the Oxford Mint, in helping Charles Stuart to fight the
great Oliver, and afterwards spent most of the remainder in buying
his peace from the Parliament. It was on the ancient estate of
Dalesford, long before sold to the stranger and the alien, that
Warren Hastings was born, some two hundred years later, practically
a pauper and almost an outcast, under the shadow of his ancestral
home.
When he came to reasoning years he made a boyish resolve,
challenging fate with all the splendid insolence of a seven-year-old
dreamer, that some day he would make his fortune and buy the old
place back—which in due course he did, although in those days his
prospect of doing so was about as small as it was of reigning over
the millions of subjects whose descendants to-day revere his
memory almost as that of one of their own demigods.
When he was twelve years old Warren was taken away from the
charity school by one of his uncles and sent to Westminster, where
he distinguished himself by winning a King’s scholarship in the year
1747. Even when his poor old grandfather, the last Hastings of
Dalesford, and the miserably paid rector of the parish which his
ancestors had owned, sent Warren to sit beside the little rustics of
the village school, he immediately singled himself out from them by
the willing intelligence with which he took to his work, and
afterwards the headmaster of Westminster had high hopes of
university distinctions for him. It was indeed a somewhat curious
coincidence that Robert Clive should have been such an exceedingly
bad boy and the completer of his work such a good one.
But the Fates had already decided that Warren Hastings was to
graduate with honours in a very much bigger university than that on
the banks of the Isis or the Cam. His uncle died suddenly, and the
orphan lad was passed on to the care of a distant connection who
happened to be a director of the East India Company.
His headmaster remonstrated strongly, but happily without
effect, against his immediate removal to Christ’s Hospital to learn
account-keeping before going out to Bengal as a writer in the service
of “John Company.”
It seems as though the worthy Dr. Nichols had a very high
opinion of his intellectual abilities, for, when all his protests failed, he
actually offered to send his brilliant young pupil to Oxford at his own
expense.
Happily for the British Empire Mr. Director Chiswick, the relative
aforesaid, stuck to his selfish project of getting him off his hands as
quickly and permanently as possible by sending him out to Calcutta
to take jungle fever or make a fortune, just in the same way that
Clive’s despairing parents had done.
He sailed for Calcutta when he was seventeen, the same age as
his precious father was when he was born. He had been two years
at the desk in Calcutta when there came the news that Clive had
taken Arcot and put a very different complexion on the struggle
between the English and French Companies for the supremacy of
India.
About that time he was sent to a little town on the Hooghly
about a mile from Moorshedabad, and while he was here driving
bargains with native silk-weavers and tea merchants, Surajah
Dowlah marched into Calcutta and cast such English prisoners as he
could lay hold of into the Black Hole.
Hastings was also taken prisoner, but most fortunately did not
get into the Black Hole, and he appears to have been set at large on
the intercession of the chief of the Dutch factory. During the period
which followed his partial release—for he was still under surveillance
at Moorshedabad—he made his first essay in diplomacy, or what
would perhaps be more correctly described as political intrigue, with
the result that the city got too hot for him, and he fled to Fulda, an
island below Calcutta, where, as has been pithily said, the English
fugitives from Fort William “were encamped like a shipwrecked crew
awaiting rescue.”
The rescue came in the shape of the combined naval and
military expedition, commanded by Admiral Watson and Robert
Clive, which was destined to end in the triumph of Plassey, and
Warren Hastings, as Macaulay aptly suggests in his brilliant but
singularly misinformed essay, doubtless inspired by the example of
Clive and the similarity of their entrance on to the stage of Indian
affairs, like him exchanged the pen for the sword, and fought
through the campaign. But Clive saw “that there was more in his
head than his arm,” and after the battle of Plassey he sent him as
resident Agent of the Company to the Court of Meer Jaffier, the
puppet-nabob who had been set up in the place of Surajah Dowlah.
He held this post until he was made a Member of Council in
1761, and was obliged to remove to Calcutta. Clive was at home
now, and the interregnum of oppression, extortion, and general
mismanagement was in full swing; but the man who was afterwards
so grossly wronged and falsely impeached, and who passed through
the most celebrated trial in English history charged with just such
crimes, had so little taste for them that three years later he came
back a comparatively poor man, and the fortune he had he either
gave away to his relations or lost through the failure of a Dutch
trading-house.
After a stay of four years, during which he renewed his intimacy
with his old schoolfellow, the creator of the immortal John Gilpin,
and made the acquaintance of Johnson and Boswell, he found
himself so reduced in circumstances that he not only had to ask the
Directors of the Company to give him more employment in India, but
when he got it he was forced to borrow the money to pay his
passage out again.
It is quite impossible to form any just and reasonable judgment
of the work which Warren Hastings now went out to do unless one
first gets an adequate idea of the condition of things obtaining in
India before the English went there, and of the conditions that
would have obtained, if men like Clive, Hastings, Cornwallis, and
Wellesley had not by one means and another—some good, some
bad, but all just what were possible under the circumstances—
succeeded in imposing the Pax Britannica upon the rival and
constantly warring potentates who governed the native populations.
No doubt the war on the Rohillas, or the so-called spoliation of
the Begums of Oude, together with more or less magnified
incidentals, formed famous themes in after years for the inflated
eloquence and grandiloquent over-statements of Edmund Burke and
Sheridan, and for the far less comprehensible or excusable special
pleading of Lord Macaulay.
It was, no doubt, very affecting to see the patched and
powdered fine ladies who paid their fifty guineas a seat in
Westminster Hall to watch the men of words mangling the
reputation of the man of deeds, weeping and fainting at the
harrowing pictures they drew—mostly on their own imaginations—of
the sufferings which he had not caused; but we of to-day are
sufficiently far removed from the personal spite and the passion and
rivalry which inspired the enemies and accusers of the great
Governor to be able to look at things as they actually were, and in
doing so we shall see that, however heavy was the hand that
Warren Hastings laid upon the subject peoples, it was but as a
caress to a blow when compared with the oppression and extortion
with which conqueror after conqueror, Mohammedan and Hindoo,
Sikh, Afghan, and Mahratta, had ground down and despoiled the
helpless races which successively passed under their sway.
Order, however dearly bought, is always less expensive than
anarchy, and the impassioned periods of Burke and Sheridan look
somewhat silly when we compare them with the sober facts. It
never seems to have struck them or their audience to make any
comparison between the English gentleman and loyal servant of his
country whom they would have handed down to history as a
monster of iniquity, and those real tyrants of the type of Surajah
Dowlah, Hyder-Ali, and Nana-Sahib, whose brutal rule and ruthless
wars of conquest and extermination must have been, under the
circumstances, the only possible alternative to the strong and steady
control of the Englishman.
The first thing that Warren Hastings did on his return was to
reorganise the trade of the Province, and in this he succeeded so
well that the Directors rewarded him in 1772 with the Governorship
of Bengal; and if they could have stopped there, leaving him to do
the rest, the immediately subsequent history of India might have
been very much more creditable to the rulers and more pleasant
reading for the descendants of the ruled than it was. But unhappily a
body of traders and shareholders became possessed with the idea
that they were the proper sort of people to rule a country divided by
political and religious factions, with a history of almost constant
warfare stretching back for centuries, and situated fifteen thousand
miles away.
This, on the face of it, was an impossibility. When they had
found their Governor they should have trusted him to govern,
instead of sending out his personal enemies to sit at his council-table
to spy upon his actions and hamper and oppose him in everything
that he did.
But there was something else in its way quite as serious as this.
Practically all the charges that were brought against Warren Hastings
on his impeachment are answered and disposed of by the fact that
the only condition upon which he could retain his position and do the
work that he had set his soul upon doing was, in three words,
making India pay. John Company looked upon his new possession as
a trader on a market. With the Directors, who, after all, were
Hastings’ masters, it was business first, and policy and government a
good distance after.
Even Macaulay admits that every exhortation to govern leniently
and respect the rights of the native princes and their subjects was
accompanied by a demand for increased contributions. “The
inconsistency was at once manifest to their vice-regent at Calcutta,
who, with an empty treasury, with an unpaid army, with his own
salary often in arrear, with deficient crops, with Government tenants
daily running away, was called upon to remit home another half-
million without fail.”
There is another thing to be remembered before we can judge
Warren Hastings fairly in the matter of his forced contributions. The
tea that was flung overboard in Boston Harbour in the December of
1773 was imported by the East India Company. The connection will
appear more obvious when we look at what followed.
Great Britain was about to plunge into war, east and west, north
and south. Criminal misgovernment at home had produced revolt
abroad. Disaster after disaster and disgrace after disgrace were soon
to befall the British arms. The Anglo-Saxon race was about to be
split in two, and England herself was to fight, if not for her very
existence, at least for her honourable place among the nations.
All this Warren Hastings foresaw with that marvellous prevision
which made some of his actions look almost prophetic, and
determined that, come what might elsewhere, the Star of the East
should not be plucked from the British Crown. He was not a soldier.
He was an administrator. His task was not to increase but to hold.
He was by no means always successful in war, and in all his long rule
he never added a province or a district to the area of British India;
but what Clive won he held and strengthened during those fateful
years when the destiny of Britain as an empire was trembling in the
balances of Fate.
Now, to keep India, money was absolutely necessary, and the
getting of it was not always work that could be done with kid gloves
on, and the greatness of Warren Hastings as Empire-Maker or Holder
may be seen in the fact that he deliberately, and with his eyes open,
risked his future fortune and reputation in the doing of this work by
the only means available.
He knew that his methods would be censured by his masters
and made unscrupulous use of by his enemies, and he said so in so
many words, and, careless of criticism and undeterred by the most
virulent and treasonable opposition, he succeeded so far that he was
able to say with truth that he had rescued one province from infamy,
and two from total ruin. It is simply amazing to the dispassionate
reader of the present day to watch the needless struggles which
were imposed upon this man, already confronted by a titanic task,
by the very men who ought to have been the first, for their own
sakes and their country’s, to have made his way as smooth and his
burdens as light as possible.
The man who may be fairly described as the evil genius of
Warren Hastings’ career was that Sir Philip Francis who is generally
looked upon as the author of the far-famed Letters of Junius. He and
Sir John Clavering, both personal enemies of the Governor-General—
as he was now—were sent out as members of the Council, and to
the days of their death they never ceased to thwart and embarrass
him by every means in their power.
One reason for their enmity was undoubtedly the sordid motive
of getting him turned out of the Governor-Generalship in order that
one of them might succeed to his office, and that both might share
in the fruits of the extortions which, in him, they condemned.
This was not only unjust to Hastings, but it was also a crime
against their country, committed at a moment when she had all too
much need of such men as he was.
To my mind, at least, there is a very strong resemblance
between the savage invective of Junius and the consistent and
unscrupulous malevolence with which Sir Philip Francis tried to
wreck the life-work of a man at whose table he was not worthy to
sit.
Those were days in which political rivalry and personal enmity
entailed personal consequences if they were pushed too far.
Hastings seemed to have come at length to the conclusion that India
was not large enough to hold himself and Francis. He had submitted
to insult after insult, and he would have been something more than
human if his enemy’s unceasing efforts to make his life a misery and
his work a failure had not left some bitterness in his soul, and so one
fine day he sat down and embodied his opinion of him in a Minute to
the Council, and in this he purposely put words which meant
inevitable bloodshed:
“I do not trust to his promise of candour; convinced that he is
incapable of it, and that his sole purpose and wish are to embarrass
and defeat every measure which I may undertake or which may tend
even to promote the public interest if my credit is connected with
them.... Every disappointment and misfortune have been aggravated
by him, and every fabricated tale of armies devoted to famine and
massacre have found their first and most ready way to his office,
where it is known they would meet with most welcome reception....
I judge of his public conduct by my experience of his private, which I
have found void of truth and honour. This is a severe charge but
temperately and deliberately made.”
These were not words which a man in those days could write
without taking his chance of a bullet or the point of a small-sword,
and Hastings knew this perfectly well. Francis challenged him on the
spot, and the day but one after they confronted each other with
pistols at fourteen paces. Francis’s pistol missed fire, and Hastings
obligingly waited until he had reprimed. The second time the pistol
went off, but the ball flew wide. Hastings returned it very
deliberately and his enemy went down with a bullet in the right side.
HIS ENEMY WENT DOWN WITH A BULLET IN THE RIGHT SIDE.
The difference between the two men may be seen from what
followed. After his adversary had been carried home, the Governor-
General sent him a friendly message offering to visit him and bury
the hatchet for good, as was customary in such affairs between
gentlemen. Francis, not being a gentleman, refused, and as soon as
he was well enough to travel he came home to England to injure by
backstairs-intrigue and the most unscrupulous lying and
misrepresentation the man who, in the midst of his difficulties and
dangers, had proved all too strong for him in the open.
To use his own words, “after a service of thirty-five years from
its commencement, and almost thirteen of them passed in the
charge and exercise of the first nominal office of the government,”
Warren Hastings at last laid down his thankless task and came home
to render an account of his stewardship before a tribunal which
possessed neither adequate knowledge to judge of his actions nor
that judicial spirit of calmness and impartiality which could alone
have guaranteed him such a trial as English justice accords to the
vilest criminal.
His impeachment is not only the most notable but altogether the
most amazing trial in the history of British Law. It would be alike
superfluous and presumptuous to reproduce here an account of that
which has been described in the incomparable sentences of Lord
Macaulay. His essay on Warren Hastings has been considered by
many to be the finest of that magnificent collection of Essays and
Reviews, and the story of the Impeachment is undoubtedly the
finest portion of it. Hence those who read these lines cannot do
better than read it as well. If they have read it before they will
simply be repeating a pleasure; if they have not, then a new
pleasure awaits them.
What we are concerned with here are the bare facts of the
matter; but we may first pause for a moment to look at the man as
he was when he came across the world to face his mostly
incompetent and prejudiced judges. This is how his picture is drawn
by Wraxall, a contemporary and a personal acquaintance. The
portrait is certainly more faithful than the ridiculous caricatures
drawn by Burke and Sheridan.
“When he landed in his native country he had attained his fifty-
second year. In his person he was thin, but not tall, of a spare habit,
very bald, with a countenance placidly thoughtful, but when
animated full of intelligence. Placed in a situation where he might
have amassed immense wealth without exciting censure, he revisited
England with only a modest competence. In private life he was
playful and gay to a degree hardly conceivable; never carrying his
political vexations into the bosom of his family. Of a temper so
buoyant and elastic that the instant he quitted the council-board
where he had been assailed by every species of opposition, often
heightened by personal acrimony, he mixed in society like a youth
upon whom care had never intruded.”
Such was the man who, in a period of national dejection which
almost amounted to disgrace, came back, the one man of his
generation who had upheld the honour of the British name abroad in
a post of great difficulty and danger, to receive, not reward, but
impeachment.
He first faced his judges on February 13, 1788, “looking very
infirm and much indisposed, and dressed in a plain, poppy-coloured
suit of clothes.” He was finally acquitted on March 1, 1794! The trial
thus languished through seven sessions of Parliament, the total
hearing occupied one hundred and eighteen sittings of the Court,
and the vindication of his personal and official character from the
slanders of his enemies, who were at last refuted to their complete
discredit, cost him about £100,000, of which no less
than £75,000 were actually certified legal costs—and this was the
reward that England gave to the one man who was capable of
preserving to her the fruit of the victories of Clive and his gallant
lieutenants!
Modern opinion, endorsed by the high legal authority of the late
Sir James Stephen, has completely rejected alike the personal
vilifications of such self-interested traitors as Francis and Clavering,
and the emotional special-pleading of Burke and Sheridan.
“The impeachment of Warren Hastings,” he says, “is, I think, a
blot on the judicial history of the country. It was monstrous that a
man should be tortured at irregular intervals for seven years, in
order that a singularly incompetent tribunal might be addressed
before an excited audience by Burke and Sheridan, in language far
removed from the calmness with which an advocate for the
prosecution ought to address a criminal court.”
To some extent Hastings was recouped for the cost of his
persecution, even if he was not rewarded for his distinguished
services. He was granted a pension of £4,000 a year for twenty-eight
and a half years, part paid in advance, and a loan of £50,000 free of
interest. But meanwhile he had been fulfilling the dream of his
boyhood by buying back his ancestral estate for £60,000, and
another £60,000 was still owing to the lawyers.
Henceforth, disgusted, as he may well have been, with the
ingratitude of the country he had served so well in so difficult a time,
he retired to his old home and spent the remaining years of his life
in the calm pursuits of a country gentleman, diversified by the
cultivation of letters and the writing of verses.
It was in these days that he used to tell his friends how, as a
little lad of seven, he had lain in the long grass on the banks of a
stream that flowed through the old domain of Dalesford and dreamt
the wild dream whose fulfilment had, after all, been stranger than
the dream itself—for not even his boyish romance could be
compared with the fact that, during the winning of the means to buy
back the home of his fathers, he had risen to be the actual ruler of
something like fifty millions of people, and the dictator of terms of
peace and war to princes who governed territories half as large as
Europe and even more populous.
But in the end he outlived both his enemies and the discredit
they had tried to cast upon him. Two years before the battle of
Waterloo he was summoned before the Houses of Parliament in the
evening of his days to give evidence on the work of his manhood,
and when he retired, after nearly four hours’ examination, the whole
crowded House of Commons rose and stood uncovered and in
silence as the old Empire-Keeper walked out of the Chamber.
He lived to see that empire, for which he had striven so painfully
and so manfully, redeemed by the genius and valour of Rodney and
Nelson and Wellington from the disgrace and degradation which had
threatened it during the last decades of the eighteenth century, and
three years after Waterloo he died.
His remains lie in the family church at Dalesford, and, to once
more quote the words of Sir Alfred Lyall, “in Westminster Abbey a
bust and an inscription commemorate the name and career of a man
who, rising early to high place and power, held an office of the
greatest importance to his country for thirteen years, by sheer force
of character and tenaciousness against adversity, and who spent the
next seven years in defending himself before a nation which
accepted the benefits but disliked the ways of his too masterly
activity.”
Lord Macaulay, who throughout his famous essay does him less
than justice, concludes it by making almost generous amends. “Not
only had the poor orphan retrieved the fallen fortunes of his line—
not only had he re-purchased the old lands and rebuilt the old
dwelling—he had preserved and extended an empire.[1] He had
founded a policy. He had administered government and war with
more than the capacity of Richelieu. He patronised learning with the
judicious liberality of Cosmo. He had been attacked by the most
formidable combination of enemies that ever sought the destruction
of a single victim; and over that combination, after a struggle of ten
years, he had triumphed. He had at length gone down to his grave
in the fulness of age, in peace after so many troubles, in honour
after so much obloquy.”
[1] In the territorial sense this is hardly correct.
The great essayist probably meant extension in
the sense of increase of prestige and influence
over the still independent states of the Peninsula.
IX
NELSON
“ENGLAND EXPECTS THAT EVERY MAN WILL DO HIS DUTY.”
I am conscious of more difficulties ahead in beginning this sketch
than I have felt with regard to any other of the series, for,
while on the one hand it would be absurd to omit from the glorious
ranks of our Empire-Makers the most glorious of them all, it is at the
same time practically impossible to say anything fresh or even
anything that is not very generally known about the man who,
however much he may once have been slighted, and however
inadequately his earlier services may have been rewarded during his
life, has now come to be the idol of the country that he saved from
invasion and the Empire that he preserved from destruction.
His life has been written and re-written, his character and his
actions have been discussed and rediscussed, the most private acts
and thoughts of his life have been dragged out into the full glare of
publicity—a fate which any great man would have to be a very great
sinner to deserve—but when all this has been said and done there
remains a single, sharply-defined individuality of this incomparable
naval captain whom the whole world now acknowledges and
reveres, quite apart from all national considerations, as the greatest
sailor who ever trod a deck and the greatest naval strategist who
ever planned a battle or took a fleet into action.
It has been said that when a nation is on the brink of ruin the
Fates either hasten its end or send some great man to restore its
fortunes. It certainly was thus with the Britain of Nelson’s early
youth. On the 17th of October, 1781, Lord Hawke, the victor of
Quiberon Bay, and the last of the great line of seamen of whom
Admiral Blake was the first, died, leaving, as Horace Walpole said the
next day in the House of Commons, his mantle to nobody.
Apparently, there was no one worthy to wear it. The fortunes of
England were indeed at a low ebb. Both her naval and military
prestige had very seriously declined. The American colonies had
been lost by the worst of statesmanship at home and the worst of
bungling incompetence and cowardice abroad. We had been beaten
by the raw colonists on land and by the French and Dutch at sea.
At home the very highest circles of the realm were polluted by
such corruption and crippled by such imbecility as would be
absolutely incredible to us now. Imagine, for instance, what would
be thought to-day of the post of Secretary of State for War being
given to a man who had been explicitly declared by a court martial
to be absolutely incapable of serving his country in any military
capacity!—and yet this is only one example out of many of the
flagrant abuses of this amazingly disgraceful period.
Happily, however, for the honour of the race and the safety of
the Empire there had been born, twenty-three years before to a
country parson in Norfolk, a boy, the fifth in a family of eleven, who
forty-seven years later was destined to die in the moment of victory,
happy in the knowledge that he had not left his country a single
enemy to fight throughout the length and breadth of the High Seas.
When Horace Walpole spoke his panegyric on Lord Hawke he would
probably have been very much surprised if he had been told that it
was this then insignificant and unknown cousin of his own who was
not only to take up the mantle of the hero of Quiberon, but to
bequeath it in his turn, not to a rival or a successor, but to the
country which his last triumph left mistress of the seas.
Although there doesn’t seem to be any direct proof, it may be
admitted that there is sufficiently strong presumption to warrant us
in believing, if we choose to do so, that Horatio Nelson, son of the
Rev. Edmund Nelson, Rector of Burnham Thorpe in Norfolk, could
one way or another have traced a lineage back to the old Sea Kings
of the North.
Certainly he must have had some of the blood of those who
fought the Armada in his veins, and it is noteworthy that a Danish
poet in celebrating his valour, wisdom, and clemency during and
after the great battle of Copenhagen, attempted to soothe the
wounded pride of his countrymen by pointing out that Nelson was
indubitably a Danish name and that after all they had only been
beaten by the descendant of one of their old Sea Kings.
But however this may be, the immediate facts all show that the
man who crowned and completed the work which Francis Drake and
his brother pirates began came of a stock that seemed to promise
but little in the way of hereditary battle-winning.
Every one on his father’s side appears either to have been a
parson or to have married one. His mother’s father was a parson
too, but happily she had a brother Maurice who was a captain in the
Navy, and had done some very good work at a time when good work
was badly wanted.
This gallant sailor was a great grand-nephew of Sir John
Suckling, the poet, and it may be noticed, in passing, that on the
21st of October, 1757, the day which we now know as the
anniversary of Trafalgar—Captain Maurice Suckling in the
Dreadnought, in company with two other sixty-gun ships, attacked
seven large French men-of-war off Cape François in the West Indies,
and gave them such a hammering that they were very thankful for
the wind which enabled them to escape.
But still more noteworthy is the opinion of Captain Maurice
Suckling of his nephew when he first received his father’s request to
give him a place on board his ship.
“What,” he wrote in reply to the application, “has poor Horatio
done, who is so weak, that he above all the rest should be sent to
rough it out at sea? But let him come, and the first time we go into
action a cannon-ball may knock off his head and provide for him at
once.”
The weakness here somewhat grimly alluded to was the curse of
Nelson’s existence from the day that he first set foot on the deck of
a ship to the moment when the bullet from the mizen-top of the
Redoubtable made his almost constant bodily suffering a matter of
minutes.
His physical infirmities, or at any rate the weakness of his body
as compared with the vast strength and tireless energy of his mind,
bring him into very close relationship with William of Orange. Putting
nationality aside, he was, in fact, on the sea what William was on
land, and the central point in his policy was also the same—tireless
and unsparing hostility to France.
With Nelson, indeed, this appears to have gone very near to the
borders of fanaticism. Some of his sayings with regard to the
Frenchmen of his day are absolutely ferocious. Hatred and contempt
are about equally blended in them. “Hate a Frenchman as you would
hate the devil!” was with him an axiom and was his usual form of
advice to midshipmen on entering the service.
On one occasion in the Mediterranean he said to one of his
captains who had got into a dispute about the property which the
defeated French garrison at Gaieta were to be allowed to take away
with them:
“I am sorry that you had any altercation with them. There is no
way to deal with a Frenchman but to knock him down. To be civil to
him is only to be laughed at when they are enemies.”
The same spirit breathes through nearly all his letters. Thus, for
instance, he concluded a letter to the British Minister at Vienna with
these words: “Down, down with the French ought to be written in
the council-room of every country in the world, and may Almighty
God give right thoughts to every sovereign is my constant prayer.”
He seems to have had respect for every other enemy that he
met; but for the French he had nothing save contemptuous and
unsparing hostility. “Close with a Frenchman, but out-manœuvre a
Russian” was another of his favourite sayings. This, it is to be hoped,
is all past and gone; but it is instructive as giving us the key, not
only to Nelson’s policy, but also to that spirit which made the British
man-of-warsmen of the day absolutely prefer to fight the French at
long odds than on even terms.
It was this spirit which was embodied in another of Nelson’s pet
phrases: “Any Englishman is worth three Frenchmen.” Of course that
would be all nonsense now; but in justice to our neighbours it ought
to be remembered that the Frenchmen whom Nelson and his sailors
met and conquered were the worst and not the best of their nation.
The old navy of France, the navy which had commanded the
Eastern Seas in the days of Clive and which had with impunity
insulted the English shores and brought an invading force into
Ireland in the time of William the Third no longer existed. It had
been essentially an aristocratic service like our own, its officers were
gentlemen and thorough sailors, and its seamen were brave,
disciplined, and obedient.
But in her blood-drunkenness France had either murdered or
banished nearly every man who was fit to command a ship or who
knew how to point a gun. The fleets of revolutionary France were for
the most part commanded by ignoramuses or poltroons, or both,
and manned by a rabble who had neither stamina, training, nor
discipline.
Without the slightest wish to detract from the splendour of the
victories of Nelson or his comrades, I still think it is only fair to point
out again, as has once or twice been done before, that when we
read of French Admirals declining battle even when they had
superior force, or of running away before the battle was over, or of a
small British squadron crumpling up a whole fleet with very trifling
loss to itself, we ought to remember that the French Admirals had
little or no confidence in their officers, while the officers had still less
either in their admirals or their men.
On the other hand, such a man as Nelson, Collingwood, or
Hardy had simply to say that he was going to do a certain thing to
convince every one serving under him that it was about as good as
already done.
This brings me naturally to one of Nelson’s most striking
characteristics. No man who rose to distinction in the Navy was ever
guilty of so many barefaced acts of insubordination as he was.
Happily for him and for us his disobedience or neglect of orders was
always justified by victory. The genius for supreme command, which
was far and away the strongest point in his character, manifested
itself very early in his career. The event proved that he was the
superior of every naval officer then afloat, whether admiral or
midshipman, and he seemed instinctively to know it.
When he was commanding the old Agamemnon in the
Mediterranean, at the time when it was in dispute whether Corsica
should fall under the rule of France or Britain, he fought two French
ships, the Ça Ira and the Sans Culottes, for a whole day and beat
them. The next day a sort of general action was fought, Admiral
Hotham being in command of the British fleet. Nelson naturally
wanted a fight to a finish, but the Admiral was content with the
capture of two ships and the flight of the rest, and in reply to
Nelson’s remonstrances he said: “We must be contented. We have
done very well.”
In a letter home on the subject of this action, Nelson penned a
sentence which was at once prophetic in itself and closely
characteristic of the writer. It was this: “I wish to be an Admiral and
in command of the English fleet. I should very soon either do much
or be ruined. My disposition cannot bear tame and slow measures.
Sure I am had I commanded on the 14th, that either the whole
French fleet would have graced my triumph or I should have been in
a confounded scrape.”
That is Nelson’s mental portrait drawn by himself. No half
measures would ever do for him, and in most of the letters that he
sent home from his various scenes of action, whether they were
written to his wife, his private friends, or the Lords of the Admiralty,
we find the constant complaint, made with an insistence amounting
almost to petulance, that when he saw complete triumph within his
grasp his superiors either would not help him to secure it or forced
him to be content with a mere temporary advantage.
Under such circumstances it was only natural that such a man
should now and then break loose. He saw quite plainly that there
were confused councils at home, and timid tactics afloat. He saw
also that under Napoleon the power of France was growing every
day.
The Board of Admiralty was apparently both corrupt and
incompetent. The Mediterranean fleet had been so shamefully
neglected that after Nelson had fought an action off Toulon even he
was afraid to risk another without the certainty of victory because
there was “not so much as a mast to be had east of Gibraltar,” and
he could not possibly have re-fitted his ships. It was about this time
that he said in one of his letters home:
“I am acting, not only without the orders of my commander-in-
chief, but in some measure contrary to him.”
If the authorities at home had only had the same opinion of his
abilities as those had who were able to watch his operations on the
spot, and particularly in Italy, it is quite possible that the whole
history of Europe might have been changed and that Napoleon
would never have won that series of brilliant victories which cost
such an infinity of blood and treasure, and which bore no fruits but
such as resembled all too closely the fabled Dead Sea apples.
Nelson’s patriotism may have been of a somewhat narrow-
minded order, and his hatred of the French may have partaken
somewhat of the nature of bigotry, but there can be no doubt that
he was the one man in Europe who saw what was coming and had
the ability, if he had only had the power, to save the world from the
horrors of the Napoleonic wars.
Thus, for instance, if his advice had been taken, the splendid
victory of Aboukir Bay might have been turned into the decisive
battle of the war which only ended with Waterloo. As it was, he to
some extent took the law into his own hands. He saw perfectly well
that Napoleon’s ultimate point of attack was not Egypt but India. He
sent an officer with dispatches to the Governor of Bombay, advising
him of the defeat of the French Fleet, and in this dispatch he said:
“I know that Bombay was their first object if they could get
there, but I trust that now Almighty God will overthrow in Egypt
these pests of the human race. Buonaparte has never yet had to
contend with an English officer, and I shall endeavour to make him
respect us.”
In another dispatch to the Admiralty he taught a lesson which
we have only lately begun to learn. In those days of the old wooden-
walls the handy, light-heeled frigate was to the ships of the line what
the swift cruisers of to-day are to the big battleships. They were the
eyes and ears of the fleet, and they could be sent on errands which
were impossible to the huge three-deckers. After the battle of the
Nile was won he said in this dispatch:
“Were I to die this moment want of frigates would be found
stamped on my heart. No words of mine can express what I have
suffered, and am suffering, for want of them.”
The inner meaning of these bitter words was one of vast
importance, not only to Britain, but to all Europe. They meant really
that the most splendid victory that had so far been won at sea had
been robbed of half its results. For want of the lighter craft, even of
a few bomb-vessels and fire-ships which he had implored the
authorities to send him, Napoleon’s store-ships and transports in the
harbour of Alexandria escaped attack and certain destruction.
Their destruction would have enabled Nelson to carry out the
policy which his genius had told him was the only true one to pursue
at this momentous crisis. He would have cut off Napoleon’s
communications and deprived him of his supplies. Then he would
have blockaded the Egyptian Coast and left the future conqueror of
Austerlitz to perish amidst the sands of Egypt. As he said to himself:
“To Egypt they went with their own consent, and there they shall
remain while Nelson commands this squadron—for never, never will
he consent to the return of one ship or Frenchman. I wish them to
perish in Egypt and give an awful lesson to the world of the justice
of the Almighty.”
This was a pitiless pronouncement, but no one who has read the
history of the Napoleonic wars can doubt the accuracy of Nelson’s
foresight or the true humanity of his policy, for, if this had happened
only a few thousands out of the five million lives which these wars
are computed to have cost would have been lost. There would have
been no Austerlitz, or Wagram, or Jena for France to boast of; but,
on the other hand, there would have been no Leipsic, no Moscow,
and no Waterloo.
As usual, however, Nelson, although he had magnificently
restored the credit of the British arms at sea, was crippled by
shortness of means and baulked by the stupidity and incompetence
of his masters at home. Sir Sidney Smith’s policy was preferred to
his, with the result that Napoleon was permitted to desert his army
and live to become the curse of Europe for the next seventeen
years.
But, if he did not do all he wanted to do, when Nelson won the
battle of the Nile he completely established his claim to be
considered one of the Empire-makers of Britain, for if he had not
followed the French with that unerring judgment of his, and if he
had not, in defiance of all accepted naval tactics, attacked them in
what was considered to be an unassailable position—that is to say,
moored off shore in two lines with both ends protected by batteries
—all the work that Clive and Hastings had done in India might have
been undone, and, considering the miserable state of our national
defences, we might either have lost India or had to wage such an
exhausting war for it that we could not possibly have taken the
decisive share that we afterwards did in the overthrow of the French
power.
As he said in one of his most famous utterances while the British
fleet was streaming into the bay: “Where there is room for a
Frenchman to swing, there is room for an Englishman to get
alongside him.”
That was Nelson. His idea was always to get alongside, to get as
close as possible to the enemy and to hit him as hard as he could.
Mere defeat was not enough for him. He wanted a fight to a finish,
the finish being the absolute destruction or capture of the hostile
force.
This was not because there was anything particularly ferocious
in his nature. On the contrary, a more tender-hearted man never
lived.
Before that one defeat of his at Teneriffe when he lost his arm,
he wrote to his Commander-in-chief—this letter, by the way, was the
last he ever wrote with his right hand—expressing solicitude for
everybody but himself. None knew better than he the desperate
nature of the venture, for in this very letter he said that on the
morrow his head would probably be crowned either with laurel or
cypress, and the last thing he did before he left his ship was to call
his stepson to help him in burning his wife’s letters, and then
ordered him to remain behind, saying: “Should we both fall, what
would become of your poor mother?”
Happily Lieutenant Nisbet disobeyed the order to his face and
went. When the bullet shattered Nelson’s arm at the elbow, it was
his stepson who had the presence of mind to whip off his silk
handkerchief and bind it round above the wound. But for this,
Nelson would never have fought another battle, for he must have
bled to death before he reached his ship.
It so happened that he could have been put much sooner on
board the Sea Horse, but her commander, Captain Freemantle, was
still on shore, and, for all he knew, might be dead or alive. His wife
was on board the Sea Horse, and Nelson, wounded and bleeding as
he was, insisted on going on, saying: “I would rather suffer death
than alarm Mrs. Freemantle by letting her see me in this state when
I can give her no tidings of her husband.” Freemantle, as it turned
out, had been wounded in almost exactly the same place only a few
minutes before.
When Nelson got back to his own ship, he would not hear of
being slung or carried up on deck.
“I’ve got one arm and two legs left,” he said, “and I’ll get up by
myself.”
And so he did, and up a single rope at that. In a strong man this
would have been wonderful; in a mere weakling as Nelson physically
was, it was little short of a miracle.
This was the man who, in the Battle of Cape St. Vincent, with an
utterly disabled ship, boarded and took two Spanish men-of-war
both bigger than his own. One of them had eighty and the other a
hundred and twelve guns; his own only mounted seventy-four.
It is, of course, entirely out of the question that in such a mere
sketch as this I should attempt to follow Nelson through even a
moderate proportion of the hundred and five engagements in which
he personally fought, nor would it be fitting that I should attempt to
emulate the brilliant and detailed descriptions which have illustrated
the principal of them.
With his doings at Naples and Palermo, and his much-debated
and inexplicable attachment to Lady Hamilton which unhappily
began during this period, we have here no concern. The hero of the
Nile, like every other great man, had his faults. Those who cavil at
them are really blaming their possessors for not being perfect, for if
really great men had no faults they would be perfect, and that is
impossible, and, so much being said, the scene may now shift
forthwith from the Mediterranean to the Baltic.
The Armed Neutrality is now only a phrase in history, but in the
year 1801 it was a very serious reality. It was a league between
Russia, Sweden, and Denmark. From the English point of view it
meant this—that France, with whom we had now practically
embarked in a struggle to the death, would be able, under the
sanction of this league, to import from the shores of the Baltic the
very articles that we did not wish her to have, and which she
couldn’t get elsewhere. These were naval stores, pine-trees for
masts and spars, hemp for rigging, tar, and so on.
It was very easy to see that this Armed Neutrality meant in plain
English that these three Powers were quite agreeable to the
smashing-up of Great Britain by France provided that they were not
called upon to pay any of the expenses or suffer any of the other
losses of the war. Denmark was therefore politely but firmly
requested to detach herself from this league, the reason being that
Denmark in those days kept the key of the Baltic. Denmark refused,
and unhappily for her she did so just at the time when the Victor of
the Nile had come home for a well-earned holiday.
We are not accustomed now, in the pride of our unequalled
naval strength, to take very much account of the fleets of these
three countries, but just before the Battle of the Baltic was fought it
was a very different matter.
The Danes had twenty-three line-of-battle ships and thirty-one
frigates, not counting bomb-vessels and guard-ships. Sweden had
eighteen ships of the line, fourteen frigates and sloops and seventy-
four galleys, as well as a small swarm of gun-boats, while Russia
could put to sea eighty-two line-of-battle ships and forty-two
frigates.
Such a force within the narrow waters of the Baltic was a very
formidable one, but before we can arrive at a just appreciation of
the magnificence and importance of the service which Nelson did for
his country we must remember that of all European waters those of
the Baltic, and especially of the approaches to it, are the most
difficult and dangerous. Even with the aid of steam it would be no
light matter to take a fleet into the Baltic under the guns of Elsinore
and Kronberg were the lamps of the lighthouses extinguished and all
the buoys removed.
What then must it have been to go in with a fleet of sailing ships
utterly at the mercy of wind and current, to say nothing of the ice?
Indeed, Southey tells us that when Nelson went to Yarmouth to join
the fleet under Admiral Sir Hyde-Parker he found him a little nervous
about dark nights and ice-floes.
His own remarks on the subject are very well worthy of
remembrance: “These are not times for nervous systems,” he said.
“I hope we shall give our northern enemies that hailstorm of bullets
which gives our dear country the dominion of the sea. We have it
and all the devils in the North cannot take it from us if our wooden
walls have fair play.”
It was a most egregious mistake not to have made the Victor of
the Nile and the Conqueror of the Mediterranean commander-in-
chief of the Northern Squadron. His fame was already resounding
through the world, and every one except the Lords of the Admiralty
seems to have already recognised the fact that he was by far the
finest sailor of the age.
Here again, too, officialism at home sadly crippled the work of
valour and genius abroad. As usual Nelson had his own plans, and
as usual they were the very best possible. His idea was to attack the
Russian Squadron in Reval and the Danish in Copenhagen
simultaneously, and by preventing their coalition make it too risky for
the Swedes to join in.
Captain Mahan, who is certainly entitled to be considered one of
the foremost naval authorities of the day, describes Nelson’s plan of
attack as worthy of Napoleon himself, and says that if adopted it
“would have brought down the Baltic Confederacy with a crash that
would have resounded throughout Europe.” As it was, more timid
counsels prevailed, but thanks to Nelson the end was the same, or
nearly so.
We may gather some notion of the difficulty of getting on to the
scene of battle when we read that no less than three English line-of-
battle ships went aground before the battle began, and we also get
an interesting glimpse of that old hand-to-hand style of naval
warfare which has now passed away for ever, when we are told that
the ships opened fire at a range of two hundred yards! Nowadays
firing would begin at between three and four thousand. If two
modern fleets were to get to business at that range the said
business would probably consist of one broadside from each, one
discharge of the big guns, and after that general wreck and ruin. It
is not likely that either side would win, and it is certain that both
sides would lose.
From ten to one the battle raged fast and furious, and so much
damage had been done on the English side that Sir Hyde-Parker
made a signal to leave off action. It was at this moment that Nelson
uttered those immortal words, which were destined to be as famous
even as his signal at Trafalgar:
“What? Leave off action? No, damn me if I do! You know, Foley,
I have a right to be blind sometimes. No, I really don’t see the
signal. Fire away!”
Those were days of hard swearing as well as hard hitting, and,
considering all the circumstances, even the purest of modern purists
may forgive a little vehemence of expression to the man who that
day did such good work, not only for our grandfathers, but for us
and our children.
Distributed systems principles and paradigms 2nd ed., New international ed Edition Tanenbaum
Welcome to Our Bookstore - The Ultimate Destination for Book Lovers
Are you passionate about books and eager to explore new worlds of
knowledge? At our website, we offer a vast collection of books that
cater to every interest and age group. From classic literature to
specialized publications, self-help books, and children’s stories, we
have it all! Each book is a gateway to new adventures, helping you
expand your knowledge and nourish your soul
Experience Convenient and Enjoyable Book Shopping Our website is more
than just an online bookstore—it’s a bridge connecting readers to the
timeless values of culture and wisdom. With a sleek and user-friendly
interface and a smart search system, you can find your favorite books
quickly and easily. Enjoy special promotions, fast home delivery, and
a seamless shopping experience that saves you time and enhances your
love for reading.
Let us accompany you on the journey of exploring knowledge and
personal growth!
ebookgate.com

More Related Content

PDF
Distributed systems principles and paradigms 2nd ed., New international ed Ed...
buoteotten59
 
PDF
Distributed systems principles and paradigms 2nd ed., New international ed Edition Tanenbaum

  • 7. Pearson Education Limited Edinburgh Gate Harlow Essex CM20 2JE England and Associated Companies throughout the world Visit us on the World Wide Web at: www.pearsoned.co.uk © Pearson Education Limited 2014 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS. All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners. British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Printed in the United States of America ISBN 10: 1-292-02552-2 ISBN 13: 978-1-292-02552-0
  • 8. Table of Contents (Pearson Custom Library)
Chapter 1. Introduction (page 1)
Chapter 2. Architectures (page 33)
Chapter 3. Processes (page 69)
Chapter 4. Communication (page 115)
Chapter 5. Naming (page 179)
Chapter 6. Synchronization (page 231)
Chapter 7. Consistency and Replication (page 273)
Chapter 8. Fault Tolerance (page 321)
Chapter 9. Security (page 377)
Chapter 10. Distributed Object-Based Systems (page 443)
Chapter 11. Distributed File Systems (page 491)
Chapter 12. Distributed Web-Based Systems (page 545)
Chapter 13. Distributed Coordination-Based Systems (page 589)
Index (page 623)
(All chapters by Andrew S. Tanenbaum and Maarten Van Steen.)
  • 10. 1 INTRODUCTION
[From Chapter 1 of Distributed Systems: Principles and Paradigms, Second Edition. Andrew S. Tanenbaum, Maarten Van Steen. Copyright © 2007 by Pearson Education, Inc. Publishing as Prentice Hall. All rights reserved.]

Computer systems are undergoing a revolution. From 1945, when the modern computer era began, until about 1985, computers were large and expensive. Even minicomputers cost at least tens of thousands of dollars each. As a result, most organizations had only a handful of computers, and for lack of a way to connect them, these operated independently from one another.

Starting around the mid-1980s, however, two advances in technology began to change that situation. The first was the development of powerful microprocessors. Initially, these were 8-bit machines, but soon 16-, 32-, and 64-bit CPUs became common. Many of these had the computing power of a mainframe (i.e., large) computer, but for a fraction of the price.

The amount of improvement that has occurred in computer technology in the past half century is truly staggering and totally unprecedented in other industries. From a machine that cost 10 million dollars and executed 1 instruction per second, we have come to machines that cost 1000 dollars and are able to execute 1 billion instructions per second, a price/performance gain of 10^13. If cars had improved at this rate in the same time period, a Rolls Royce would now cost 1 dollar and get a billion miles per gallon. (Unfortunately, it would probably also have a 200-page manual telling how to open the door.)

The second development was the invention of high-speed computer networks. Local-area networks or LANs allow hundreds of machines within a building to be connected in such a way that small amounts of information can be transferred between machines in a few microseconds or so. Larger amounts of data can be
  • 11. INTRODUCTION CHAP. 1 moved between machines at rates of 100 million to 10 billion bits/sec. Wide-area networks or WANs allow millions of machines all over the earth to be connected at speeds varying from 64 Kbps (kilobits per second) to gigabits per second. The result of these technologies is that it is now not only feasible, but easy, to put together computing systems composed of large numbers of computers con- nected by a high-speed network. They are usually called computer networks or distributed systems, in contrast to the previous centralized systems (or single- processor systems) consisting of a single computer, its peripherals, and perhaps some remote terminals. 1.1 DEFINITION OF A DISTRIBUTED SYSTEM Various definitions of distributed systems have been given in the literature, none of them satisfactory, and none of them in agreement with any of the others. For our purposes it is sufficient to give a loose characterization: A distributed system is a collection of independent computers that appears to its users as a single coherent system. This definition has several important aspects. The first one is that a distributed system consists of components (i.e., computers) that are autonomous. A second aspect is that users (be they people or programs) think they are dealing with a sin- gle system. This means that one way or the other the autonomous components need to collaborate. How to establish this collaboration lies at the heart of devel- oping distributed systems. Note that no assumptions are made concerning the type of computers. In principle, even within a single system, they could range from high-performance mainframe computers to small nodes in sensor networks. Like- wise, no assumptions are made on the way that computers are interconnected. We will return to these aspects later in this chapter. Instead of going further with definitions, it is perhaps more useful to concen- trate on important characteristics of distributed systems. One important charac- teristic is that differences between the various computers and the ways in which they communicate are mostly hidden from users. The same holds for the internal organization of the distributed system. Another important characteristic is that users and applications can interact with a distributed system in a consistent and uniform way, regardless of where and when interaction takes place. In principle, distributed systems should also be relatively easy to expand or scale. This characteristic is a direct consequence of having independent com- puters, but at the same time, hiding how these computers actually take part in the system as a whole. A distributed system will normally be continuously available, although perhaps some parts may be temporarily out of order. Users and applica- tions should not notice that parts are being replaced or fixed, or that new parts are added to serve more users or applications. 2
  • 12. In order to support heterogeneous computers and networks while offering a single-system view, distributed systems are often organized by means of a layer of software that is logically placed between a higher-level layer consisting of users and applications, and a layer underneath consisting of operating systems and basic communication facilities, as shown in Fig. 1-1. Accordingly, such a distributed system is sometimes called middleware.

[Figure 1-1. A distributed system organized as middleware. The middleware layer extends over multiple machines, and offers each application the same interface. The figure shows four computers, each running its own local OS and connected by a network, with a distributed system layer (middleware) on top supporting applications A, B, and C.]

Fig. 1-1 shows four networked computers and three applications, of which application B is distributed across computers 2 and 3. Each application is offered the same interface. The distributed system provides the means for components of a single distributed application to communicate with each other, but also to let different applications communicate. At the same time, it hides, as best and reasonable as possible, the differences in hardware and operating systems from each application.

1.2 GOALS

Just because it is possible to build distributed systems does not necessarily mean that it is a good idea. After all, with current technology it is also possible to put four floppy disk drives on a personal computer. It is just that doing so would be pointless. In this section we discuss four important goals that should be met to make building a distributed system worth the effort. A distributed system should make resources easily accessible; it should reasonably hide the fact that resources are distributed across a network; it should be open; and it should be scalable.

1.2.1 Making Resources Accessible

The main goal of a distributed system is to make it easy for the users (and applications) to access remote resources, and to share them in a controlled and efficient way. Resources can be just about anything, but typical examples include
  • 13. INTRODUCTION CHAP. 1 things like printers, computers, storage facilities, data, files, Web pages, and net- works, to name just a few. There are many reasons for wanting to share resources. One obvious reason is that of economics. For example, it is cheaper to let a printer be shared by several users in a small office than having to buy and maintain a sep- arate printer for each user. Likewise, it makes economic sense to share costly re- sources such as supercomputers, high-performance storage systems, imagesetters, and other expensive peripherals. Connecting users and resources also makes it easier to collaborate and ex- change information, as is clearly illustrated by the success of the Internet with its simple protocols for exchanging files, mail, documents, audio, and video. The connectivity of the Internet is now leading to numerous virtual organizations in which geographically widely-dispersed groups of people work together by means of groupware, that is, software for collaborative editing, teleconferencing, and so on. Likewise, the Internet connectivity has enabled electronic commerce allowing us to buy and sell all kinds of goods without actually having to go to a store or even leave home. However, as connectivity and sharing increase, security is becoming increas- ingly important. In current practice, systems provide little protection against eavesdropping or intrusion on communication. Passwords and other sensitive in- formation are often sent as cleartext (i.e., unencrypted) through the network, or stored at servers that we can only hope are trustworthy. In this sense, there is much room for improvement. For example, it is currently possible to order goods by merely supplying a credit card number. Rarely is proof required that the custo- mer owns the card. In the future, placing orders this way may be possible only if you can actually prove that you physically possess the card by inserting it into a card reader. Another security problem is that of tracking communication to build up a preference profile of a specific user (Wang et al., 1998). Such tracking explicitly violates privacy, especially if it is done without notifying the user. A related prob- lem is that increased connectivity can also lead to unwanted communication, such as electronic junk mail, often called spam. In such cases, what we may need is to protect ourselves using special information filters that select incoming messages based on their content. 1.2.2 Distribution Transparency An important goal of a distributed system is to hide the fact that its processes and resources are physically distributed across multiple computers. A distributed system that is able to present itself to users and applications as if it were only a single computer system is said to be transparent. Let us first take a look at what kinds of transparency exist in distributed systems. After that we will address the more general question whether transparency is always required. 4
  • 14. Types of Transparency

The concept of transparency can be applied to several aspects of a distributed system, the most important ones shown in Fig. 1-2.

Figure 1-2. Different forms of transparency in a distributed system (ISO, 1995):
  Access: Hide differences in data representation and how a resource is accessed
  Location: Hide where a resource is located
  Migration: Hide that a resource may move to another location
  Relocation: Hide that a resource may be moved to another location while in use
  Replication: Hide that a resource is replicated
  Concurrency: Hide that a resource may be shared by several competitive users
  Failure: Hide the failure and recovery of a resource

Access transparency deals with hiding differences in data representation and the way that resources can be accessed by users. At a basic level, we wish to hide differences in machine architectures, but more important is that we reach agreement on how data is to be represented by different machines and operating systems. For example, a distributed system may have computer systems that run different operating systems, each having their own file-naming conventions. Differences in naming conventions, as well as how files can be manipulated, should all be hidden from users and applications.

An important group of transparency types has to do with the location of a resource. Location transparency refers to the fact that users cannot tell where a resource is physically located in the system. Naming plays an important role in achieving location transparency. In particular, location transparency can be achieved by assigning only logical names to resources, that is, names in which the location of a resource is not secretly encoded. An example of such a name is the URL http://www.prenhall.com/index.html, which gives no clue about the location of Prentice Hall's main Web server. The URL also gives no clue as to whether index.html has always been at its current location or was recently moved there.

Distributed systems in which resources can be moved without affecting how those resources can be accessed are said to provide migration transparency. Even stronger is the situation in which resources can be relocated while they are being accessed without the user or application noticing anything. In such cases, the system is said to support relocation transparency. An example of relocation transparency is when mobile users can continue to use their wireless laptops while moving from place to place without ever being (temporarily) disconnected.

As we shall see, replication plays a very important role in distributed systems.
For example, resources may be replicated to increase availability or to improve
  • 15. INTRODUCTION CHAP. 1 performance by placing a copy close to the place where it is accessed. Replica- tion transparency deals with hiding the fact that several copies of a resource exist. To hide replication from users, it is necessary that all replicas have the same name. Consequently, a system that supports replication transparency should gen- erally support location transparency as well, because it would otherwise be impos- sible to refer to replicas at different locations. We already mentioned that an important goal of distributed systems is to al- low sharing of resources. In many cases, sharing resources is done in a coopera- tive way, as in the case of communication. However, there are also many ex- amples of competitive sharing of resources. For example, two independent users may each have stored their files on the same file server or may be accessing the same tables in a shared database. In such cases, it is important that each user does not notice that the other is making use of the same resource. This phenomenon is called concurrency transparency. An important issue is that concurrent access to a shared resource leaves that resource in a consistent state. Consistency can be achieved through locking mechanisms, by which users are, in turn, given ex- clusive access to the desired resource. A more refined mechanism is to make use of transactions, but as we shall see in later chapters, transactions are quite difficult to implement in distributed systems. A popular alternative definition of a distributed system, due to Leslie Lam- port, is ‘‘You know you have one when the crash of a computer you’ve never heard of stops you from getting any work done.’’ This description puts the finger on another important issue of distributed systems design: dealing with failures. Making a distributed system failure transparent means that a user does not no- tice that a resource (he has possibly never heard of) fails to work properly, and that the system subsequently recovers from that failure. Masking failures is one of the hardest issues in distributed systems and is even impossible when certain apparently realistic assumptions are made, as we will discuss in Chap. 8. The main difficulty in masking failures lies in the inability to distinguish between a dead resource and a painfully slow resource. For example, when contacting a busy Web server, a browser will eventually time out and report that the Web page is unavailable. At that point, the user cannot conclude that the server is really down. Degree of Transparency Although distribution transparency is generally considered preferable for any distributed system, there are situations in which attempting to completely hide all distribution aspects from users is not a good idea. An example is requesting your electronic newspaper to appear in your mailbox before 7 A.M. local time, as usual, while you are currently at the other end of the world living in a different time zone. Your morning paper will not be the morning paper you are used to. Likewise, a wide-area distributed system that connects a process in San Fran- cisco to a process in Amsterdam cannot be expected to hide the fact that Mother 6
  • 16. SEC. 1.2 GOALS Nature will not allow it to send a message from one process to the other in less than about 35 milliseconds. In practice it takes several hundreds of milliseconds using a computer network. Signal transmission is not only limited by the speed of light, but also by limited processing capacities of the intermediate switches. There is also a trade-off between a high degree of transparency and the per- formance of a system. For example, many Internet applications repeatedly try to contact a server before finally giving up. Consequently, attempting to mask a tran- sient server failure before trying another one may slow down the system as a whole. In such a case, it may have been better to give up earlier, or at least let the user cancel the attempts to make contact. Another example is where we need to guarantee that several replicas, located on different continents, need to be consistent all the time. In other words, if one copy is changed, that change should be propagated to all copies before allowing any other operation. It is clear that a single update operation may now even take seconds to complete, something that cannot be hidden from users. Finally, there are situations in which it is not at all obvious that hiding distri- bution is a good idea. As distributed systems are expanding to devices that people carry around, and where the very notion of location and context awareness is becoming increasingly important, it may be best to actually expose distribution rather than trying to hide it. This distribution exposure will become more evident when we discuss embedded and ubiquitous distributed systems later in this chap- ter. As a simple example, consider an office worker who wants to print a file from her notebook computer. It is better to send the print job to a busy nearby printer, rather than to an idle one at corporate headquarters in a different country. There are also other arguments against distribution transparency. Recognizing that full distribution transparency is simply impossible, we should ask ourselves whether it is even wise to pretend that we can achieve it. It may be much better to make distribution explicit so that the user and application developer are never tricked into believing that there is such a thing as transparency. The result will be that users will much better understand the (sometimes unexpected) behavior of a distributed system, and are thus much better prepared to deal with this behavior. The conclusion is that aiming for distribution transparency may be a nice goal when designing and implementing distributed systems, but that it should be con- sidered together with other issues such as performance and comprehensibility. The price for not being able to achieve full transparency may be surprisingly high. 1.2.3 Openness Another important goal of distributed systems is openness. An open distrib- uted system is a system that offers services according to standard rules that describe the syntax and semantics of those services. For example, in computer networks, standard rules govern the format, contents, and meaning of messages sent and received. Such rules are formalized in protocols. In distributed systems, 7
  • 17. INTRODUCTION CHAP. 1 services are generally specified through interfaces, which are often described in an Interface Definition Language (IDL). Interface definitions written in an IDL nearly always capture only the syntax of services. In other words, they specify precisely the names of the functions that are available together with types of the parameters, return values, possible exceptions that can be raised, and so on. The hard part is specifying precisely what those services do, that is, the semantics of interfaces. In practice, such specifications are always given in an informal way by means of natural language. If properly specified, an interface definition allows an arbitrary process that needs a certain interface to talk to another process that provides that interface. It also allows two independent parties to build completely different implementations of those interfaces, leading to two separate distributed systems that operate in exactly the same way. Proper specifications are complete and neutral. Complete means that everything that is necessary to make an implementation has indeed been specified. However, many interface definitions are not at all complete, so that it is necessary for a developer to add implementation-specific details. Just as important is the fact that specifications do not prescribe what an implementation should look like; they should be neutral. Completeness and neutrality are impor- tant for interoperability and portability (Blair and Stefani, 1998). Interoperabil- ity characterizes the extent by which two implementations of systems or com- ponents from different manufacturers can co-exist and work together by merely relying on each other’s services as specified by a common standard. Portability characterizes to what extent an application developed for a distributed system A can be executed, without modification, on a different distributed system B that implements the same interfaces as A. Another important goal for an open distributed system is that it should be easy to configure the system out of different components (possibly from different de- velopers). Also, it should be easy to add new components or replace existing ones without affecting those components that stay in place. In other words, an open dis- tributed system should also be extensible. For example, in an extensible system, it should be relatively easy to add parts that run on a different operating system, or even to replace an entire file system. As many of us know from daily practice, attaining such flexibility is easier said than done. Separating Policy from Mechanism To achieve flexibility in open distributed systems, it is crucial that the system is organized as a collection of relatively small and easily replaceable or adaptable components. This implies that we should provide definitions not only for the highest-level interfaces, that is, those seen by users and applications, but also definitions for interfaces to internal parts of the system and describe how those parts interact. This approach is relatively new. Many older and even contemporary systems are constructed using a monolithic approach in which components are 8
  • 18. only logically separated but implemented as one, huge program. This approach makes it hard to replace or adapt a component without affecting the entire system. Monolithic systems thus tend to be closed instead of open.

The need for changing a distributed system is often caused by a component that does not provide the optimal policy for a specific user or application. As an example, consider caching in the World Wide Web. Browsers generally allow users to adapt their caching policy by specifying the size of the cache, and whether a cached document should always be checked for consistency, or perhaps only once per session. However, the user cannot influence other caching parameters, such as how long a document may remain in the cache, or which document should be removed when the cache fills up. Also, it is impossible to make caching decisions based on the content of a document. For instance, a user may want to cache railroad timetables, knowing that these hardly change, but never information on current traffic conditions on the highways.

What we need is a separation between policy and mechanism. In the case of Web caching, for example, a browser should ideally provide facilities for only storing documents, and at the same time allow users to decide which documents are stored and for how long. In practice, this can be implemented by offering a rich set of parameters that the user can set (dynamically). Even better is that a user can implement his own policy in the form of a component that can be plugged into the browser. Of course, that component must have an interface that the browser can understand so that it can call procedures of that interface.
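To make the policy/mechanism split concrete, here is a minimal sketch of a document cache in which the storage mechanism is fixed but the eviction policy is a pluggable component supplied by the user. This is only an illustration under invented names; it is not the browser interface discussed above.

```python
# Mechanism: storing and looking up documents. Policy: deciding which entry
# to evict when the cache is full. All class and method names are made up.

class EvictOldest:
    def choose_victim(self, entries):
        # entries maps name -> (last_access_counter, document)
        return min(entries, key=lambda name: entries[name][0])

class EvictLargest:
    def choose_victim(self, entries):
        return max(entries, key=lambda name: len(entries[name][1]))

class DocumentCache:
    def __init__(self, max_entries, policy):
        self.max_entries = max_entries
        self.policy = policy          # user-supplied policy component
        self.entries = {}
        self.counter = 0

    def put(self, name, document):
        if name not in self.entries and len(self.entries) >= self.max_entries:
            victim = self.policy.choose_victim(self.entries)
            del self.entries[victim]
        self.counter += 1
        self.entries[name] = (self.counter, document)

    def get(self, name):
        self.counter += 1
        stamp, document = self.entries[name]
        self.entries[name] = (self.counter, document)
        return document

# The same mechanism can be combined with either policy:
cache = DocumentCache(max_entries=100, policy=EvictOldest())
cache.put("timetable.html", "<html>hardly changes</html>")
print(cache.get("timetable.html"))
```

Swapping EvictOldest for EvictLargest changes the policy without touching the caching mechanism, which is exactly the kind of flexibility the text argues for.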
  • 19. 1.2.4 Scalability

Worldwide connectivity through the Internet is rapidly becoming as common as being able to send a postcard to anyone anywhere around the world. With this in mind, scalability is one of the most important design goals for developers of distributed systems.

Scalability of a system can be measured along at least three different dimensions (Neuman, 1994). First, a system can be scalable with respect to its size, meaning that we can easily add more users and resources to the system. Second, a geographically scalable system is one in which the users and resources may lie far apart. Third, a system can be administratively scalable, meaning that it can still be easy to manage even if it spans many independent administrative organizations. Unfortunately, a system that is scalable in one or more of these dimensions often exhibits some loss of performance as the system scales up.

Scalability Problems

When a system needs to scale, very different types of problems need to be solved. Let us first consider scaling with respect to size. If more users or resources need to be supported, we are often confronted with the limitations of centralized services, data, and algorithms (see Fig. 1-3). For example, many services are centralized in the sense that they are implemented by means of only a single server running on a specific machine in the distributed system. The problem with this scheme is obvious: the server can become a bottleneck as the number of users and applications grows. Even if we have virtually unlimited processing and storage capacity, communication with that server will eventually prohibit further growth.

Unfortunately, using only a single server is sometimes unavoidable. Imagine that we have a service for managing highly confidential information such as medical records, bank accounts, and so on. In such cases, it may be best to implement that service by means of a single server in a highly secured separate room, and protected from other parts of the distributed system through special network components. Copying the server to several locations to enhance performance may be out of the question as it would make the service less secure.

Figure 1-3. Examples of scalability limitations (concept and example):
  Centralized services: A single server for all users
  Centralized data: A single on-line telephone book
  Centralized algorithms: Doing routing based on complete information

Just as bad as centralized services are centralized data. How should we keep track of the telephone numbers and addresses of 50 million people? Suppose that each data record could be fit into 50 characters. A single 2.5-gigabyte disk partition would provide enough storage. But here again, having a single database would undoubtedly saturate all the communication lines into and out of it. Likewise, imagine how the Internet would work if its Domain Name System (DNS) was still implemented as a single table. DNS maintains information on millions of computers worldwide and forms an essential service for locating Web servers. If each request to resolve a URL had to be forwarded to that one and only DNS server, it is clear that no one would be using the Web (which, by the way, would solve the problem).

Finally, centralized algorithms are also a bad idea. In a large distributed system, an enormous number of messages have to be routed over many lines. From a theoretical point of view, the optimal way to do this is to collect complete information about the load on all machines and lines, and then run an algorithm to compute all the optimal routes. This information can then be spread around the system to improve the routing.

The trouble is that collecting and transporting all the input and output information would again be a bad idea because these messages would overload part of the network. In fact, any algorithm that operates by collecting information from all the sites, sends it to a single machine for processing, and then distributes the
  • 20. results should generally be avoided. Only decentralized algorithms should be used. These algorithms generally have the following characteristics, which distinguish them from centralized algorithms:

1. No machine has complete information about the system state.
2. Machines make decisions based only on local information.
3. Failure of one machine does not ruin the algorithm.
4. There is no implicit assumption that a global clock exists.

The first three follow from what we have said so far. The last is perhaps less obvious but also important. Any algorithm that starts out with: "At precisely 12:00:00 all machines shall note the size of their output queue" will fail because it is impossible to get all the clocks exactly synchronized. Algorithms should take into account the lack of exact clock synchronization. The larger the system, the larger the uncertainty. On a single LAN, with considerable effort it may be possible to get all clocks synchronized down to a few microseconds, but doing this nationally or internationally is tricky.

Geographical scalability has its own problems. One of the main reasons why it is currently hard to scale existing distributed systems that were designed for local-area networks is that they are based on synchronous communication. In this form of communication, a party requesting service, generally referred to as a client, blocks until a reply is sent back. This approach generally works fine in LANs where communication between two machines is generally at worst a few hundred microseconds. However, in a wide-area system, we need to take into account that interprocess communication may be hundreds of milliseconds, three orders of magnitude slower. Building interactive applications using synchronous communication in wide-area systems requires a great deal of care (and not a little patience).

Another problem that hinders geographical scalability is that communication in wide-area networks is inherently unreliable, and virtually always point-to-point. In contrast, local-area networks generally provide highly reliable communication facilities based on broadcasting, making it much easier to develop distributed systems. For example, consider the problem of locating a service. In a local-area system, a process can simply broadcast a message to every machine, asking if it is running the service it needs. Only those machines that have that service respond, each providing its network address in the reply message. Such a location scheme is unthinkable in a wide-area system: just imagine what would happen if we tried to locate a service this way in the Internet. Instead, special location services need to be designed, which may need to scale worldwide and be capable of servicing a billion users. We return to such services in Chap. 5.

Geographical scalability is strongly related to the problems of centralized solutions that hinder size scalability. If we have a system with many centralized
  • 21. INTRODUCTION CHAP. 1 components, it is clear that geographical scalability will be limited due to the per- formance and reliability problems resulting from wide-area communication. In ad- dition, centralized components now lead to a waste of network resources. Imagine that a single mail server is used for an entire country. This would mean that send- ing an e-mail to your neighbor would first have to go to the central mail server, which may be hundreds of miles away. Clearly, this is not the way to go. Finally, a difficult, and in many cases open question is how to scale a distrib- uted system across multiple, independent administrative domains. A major prob- lem that needs to be solved is that of conflicting policies with respect to resource usage (and payment), management, and security. For example, many components of a distributed system that reside within a single domain can often be trusted by users that operate within that same domain. In such cases, system administration may have tested and certified applications, and may have taken special measures to ensure that such components cannot be tampered with. In essence, the users trust their system administrators. However, this trust does not expand naturally across domain boundaries. If a distributed system expands into another domain, two types of security measures need to be taken. First of all, the distributed system has to protect itself against malicious attacks from the new domain. For example, users from the new domain may have only read access to the file system in its original domain. Like- wise, facilities such as expensive image setters or high-performance computers may not be made available to foreign users. Second, the new domain has to pro- tect itself against malicious attacks from the distributed system. A typical example is that of downloading programs such as applets in Web browsers. Basically, the new domain does not know behavior what to expect from such foreign code, and may therefore decide to severely limit the access rights for such code. The prob- lem, as we shall see in Chap. 9, is how to enforce those limitations. Scaling Techniques Having discussed some of the scalability problems brings us to the question of how those problems can generally be solved. In most cases, scalability problems in distributed systems appear as performance problems caused by limited capacity of servers and network. There are now basically only three techniques for scaling: hiding communication latencies, distribution, and replication [see also Neuman (1994)]. Hiding communication latencies is important to achieve geographical scala- bility. The basic idea is simple: try to avoid waiting for responses to remote (and potentially distant) service requests as much as possible. For example, when a ser- vice has been requested at a remote machine, an alternative to waiting for a reply from the server is to do other useful work at the requester’s side. Essentially, what this means is constructing the requesting application in such a way that it uses only asynchronous communication. When a reply comes in, the application is 12
  • 22. interrupted and a special handler is called to complete the previously-issued request. Asynchronous communication can often be used in batch-processing systems and parallel applications, in which more or less independent tasks can be scheduled for execution while another task is waiting for communication to complete. Alternatively, a new thread of control can be started to perform the request. Although it blocks waiting for the reply, other threads in the process can continue.

However, there are many applications that cannot make effective use of asynchronous communication. For example, in interactive applications when a user sends a request he will generally have nothing better to do than to wait for the answer. In such cases, a much better solution is to reduce the overall communication, for example, by moving part of the computation that is normally done at the server to the client process requesting the service. A typical case where this approach works is accessing databases using forms. Filling in forms can be done by sending a separate message for each field, and waiting for an acknowledgment from the server, as shown in Fig. 1-4(a). For example, the server may check for syntactic errors before accepting an entry. A much better solution is to ship the code for filling in the form, and possibly checking the entries, to the client, and have the client return a completed form, as shown in Fig. 1-4(b). This approach of shipping code is now widely supported by the Web in the form of Java applets and Javascript.

[Figure 1-4. The difference between letting (a) a server or (b) a client check forms as they are being filled: in (a) every field is sent to the server and acknowledged separately; in (b) the client checks the form locally and returns it to the server in one step.]
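The latency-hiding idea described above can be sketched with asynchronous requests: the client fires off a (simulated) remote call and keeps doing useful local work until the reply arrives. The sleep calls below merely stand in for network and server delays; nothing here is taken from the book's own examples.

```python
import asyncio

async def remote_request(payload):
    await asyncio.sleep(0.2)              # pretend wide-area round trip
    return f"server reply to {payload!r}"

async def useful_local_work():
    total = 0
    for i in range(5):
        await asyncio.sleep(0.03)         # some independent local task
        total += i
    return total

async def main():
    # Issue the request, then overlap the wait with local computation.
    pending = asyncio.create_task(remote_request("form contents"))
    local_result = await useful_local_work()
    reply = await pending                 # "handler" runs once the reply is in
    print(local_result, reply)

asyncio.run(main())
```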
  • 23. Another important scaling technique is distribution. Distribution involves taking a component, splitting it into smaller parts, and subsequently spreading those parts across the system. An excellent example of distribution is the Internet Domain Name System (DNS). The DNS name space is hierarchically organized into a tree of domains, which are divided into nonoverlapping zones, as shown in Fig. 1-5. The names in each zone are handled by a single name server. Without going into too many details, one can think of each path name being the name of a host in the Internet, and thus associated with a network address of that host. Basically, resolving a name means returning the network address of the associated host. Consider, for example, the name nl.vu.cs.flits. To resolve this name, it is first passed to the server of zone Z1 (see Fig. 1-5) which returns the address of the server for zone Z2, to which the rest of the name, vu.cs.flits, can be handed. The server for Z2 will return the address of the server for zone Z3, which is capable of handling the last part of the name and will return the address of the associated host.

[Figure 1-5. An example of dividing the DNS name space into zones: a tree rooted in generic domains (int, com, edu, gov, mil, org, net) and country domains (jp, us, nl), with zones Z1, Z2, and Z3 covering successively smaller parts of a path such as nl.vu.cs.flits.]

This example illustrates how the naming service, as provided by DNS, is distributed across several machines, thus avoiding that a single server has to deal with all requests for name resolution.

As another example, consider the World Wide Web. To most users, the Web appears to be an enormous document-based information system in which each document has its own unique name in the form of a URL. Conceptually, it may even appear as if there is only a single server. However, the Web is physically distributed across a large number of servers, each handling a number of Web documents. The name of the server handling a document is encoded into that document's URL. It is only because of this distribution of documents that the Web has been capable of scaling to its current size.
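The zone walk for nl.vu.cs.flits can be mimicked with a few dictionaries standing in for the zone servers. This is a toy sketch in the spirit of Fig. 1-5, not real DNS; the delegation structure follows the Z1/Z2/Z3 example above, and the address returned at the end is made up.

```python
# Each "server" knows only the hosts in its own zone plus its delegations.
zone_servers = {
    "Z1": {"hosts": {}, "delegations": {"nl": "Z2"}},
    "Z2": {"hosts": {}, "delegations": {"nl.vu": "Z3"}},
    "Z3": {"hosts": {"nl.vu.cs.flits": "130.37.24.11"},   # fictitious address
           "delegations": {}},
}

def resolve(name, start_zone="Z1"):
    zone = start_zone
    while True:
        server = zone_servers[zone]
        if name in server["hosts"]:
            return server["hosts"][name]
        for prefix, next_zone in server["delegations"].items():
            if name.startswith(prefix + "."):
                zone = next_zone          # hand the rest of the name over
                break
        else:
            raise KeyError(f"no server can resolve {name}")

print(resolve("nl.vu.cs.flits"))          # -> 130.37.24.11
```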
  • 24. Considering that scalability problems often appear in the form of performance degradation, it is generally a good idea to actually replicate components across a distributed system. Replication not only increases availability, but also helps to balance the load between components, leading to better performance. Also, in geographically widely-dispersed systems, having a copy nearby can hide much of the communication latency problems mentioned before.

Caching is a special form of replication, although the distinction between the two is often hard to make or even artificial. As in the case of replication, caching results in making a copy of a resource, generally in the proximity of the client accessing that resource. However, in contrast to replication, caching is a decision made by the client of a resource, and not by the owner of a resource. Also, caching happens on demand whereas replication is often planned in advance.

There is one serious drawback to caching and replication that may adversely affect scalability. Because we now have multiple copies of a resource, modifying one copy makes that copy different from the others. Consequently, caching and replication lead to consistency problems.

To what extent inconsistencies can be tolerated depends highly on the usage of a resource. For example, many Web users find it acceptable that their browser returns a cached document of which the validity has not been checked for the last few minutes. However, there are also many cases in which strong consistency guarantees need to be met, such as in the case of electronic stock exchanges and auctions. The problem with strong consistency is that an update must be immediately propagated to all other copies. Moreover, if two updates happen concurrently, it is often also required that each copy is updated in the same order. Situations such as these generally require some global synchronization mechanism. Unfortunately, such mechanisms are extremely hard or even impossible to implement in a scalable way, as Mother Nature insists that photons and electrical signals obey a speed limit of 187 miles/msec (the speed of light). Consequently, scaling by replication may introduce other, inherently nonscalable solutions. We return to replication and consistency in Chap. 7.

When considering these scaling techniques, one could argue that size scalability is the least problematic from a technical point of view. In many cases, simply increasing the capacity of a machine will save the day (at least temporarily and perhaps at significant costs). Geographical scalability is a much tougher problem as Mother Nature is getting in our way. Nevertheless, practice shows that combining distribution, replication, and caching techniques with different forms of consistency will often prove sufficient in many cases. Finally, administrative scalability seems to be the most difficult one, partly also because we need to solve nontechnical problems (e.g., politics of organizations and human collaboration). Nevertheless, progress has been made in this area, by simply ignoring administrative domains. The introduction and now widespread use of peer-to-peer technology demonstrates what can be achieved if end users simply take over control (Aberer and Hauswirth, 2005; Lua et al., 2005; and Oram, 2001). However, let it be clear that peer-to-peer technology can at best be only a partial solution to solving administrative scalability. Eventually, it will have to be dealt with.
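Returning to the caching discussion above, the trade-off between freshness and communication cost can be sketched as a client-side cache that tolerates a bounded amount of staleness. This is only an illustrative sketch; fetch_master is a placeholder for whatever would actually contact the origin server.

```python
import time

class CachedCopy:
    """Client-side copy that may be up to max_age seconds stale."""
    def __init__(self, fetch_master, max_age):
        self.fetch_master = fetch_master   # function contacting the origin
        self.max_age = max_age             # tolerated staleness in seconds
        self._value = None
        self._fetched_at = None

    def read(self):
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self.max_age:
            self._value = self.fetch_master()      # refresh the copy
            self._fetched_at = now
        return self._value                         # possibly slightly stale

# A railroad timetable can be minutes old; a stock quote should not be.
timetable = CachedCopy(fetch_master=lambda: "timetable, version 1", max_age=300)
stock_quote = CachedCopy(fetch_master=lambda: "price at the exchange", max_age=0)
print(timetable.read(), "|", stock_quote.read())
```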
  • 25. INTRODUCTION CHAP. 1 1.2.5 Pitfalls It should be clear by now that developing distributed systems can be a formid- able task. As we will see many times throughout this book, there are so many issues to consider at the same time that it seems that only complexity can be the result. Nevertheless, by following a number of design principles, distributed sys- tems can be developed that strongly adhere to the goals we set out in this chapter. Many principles follow the basic rules of decent software engineering and will not be repeated here. However, distributed systems differ from traditional software because com- ponents are dispersed across a network. Not taking this dispersion into account during design time is what makes so many systems needlessly complex and re- sults in mistakes that need to be patched later on. Peter Deutsch, then at Sun Microsystems, formulated these mistakes as the following false assumptions that everyone makes when developing a distributed application for the first time: 1. The network is reliable. 2. The network is secure. 3. The network is homogeneous. 4. The topology does not change. 5. Latency is zero. 6. Bandwidth is infinite. 7. Transport cost is zero. 8. There is one administrator. Note how these assumptions relate to properties that are unique to distributed sys- tems: reliability, security, heterogeneity, and topology of the network; latency and bandwidth; transport costs; and finally administrative domains. When developing nondistributed applications, many of these issues will most likely not show up. Most of the principles we discuss in this book relate immediately to these assumptions. In all cases, we will be discussing solutions to problems that are caused by the fact that one or more assumptions are false. For example, reliable networks simply do not exist, leading to the impossibility of achieving failure transparency. We devote an entire chapter to deal with the fact that networked communication is inherently insecure. We have already argued that distributed systems need to take heterogeneity into account. In a similar vein, when discuss- ing replication for solving scalability problems, we are essentially tackling latency and bandwidth problems. We will also touch upon management issues at various points throughout this book, dealing with the false assumptions of zero-cost tran- sportation and a single administrative domain. 16
  • 26. 1.3 TYPES OF DISTRIBUTED SYSTEMS

Before starting to discuss the principles of distributed systems, let us first take a closer look at the various types of distributed systems. In the following we make a distinction between distributed computing systems, distributed information systems, and distributed embedded systems.

1.3.1 Distributed Computing Systems

An important class of distributed systems is the one used for high-performance computing tasks. Roughly speaking, one can make a distinction between two subgroups. In cluster computing the underlying hardware consists of a collection of similar workstations or PCs, closely connected by means of a high-speed local-area network. In addition, each node runs the same operating system.

The situation becomes quite different in the case of grid computing. This subgroup consists of distributed systems that are often constructed as a federation of computer systems, where each system may fall under a different administrative domain, and may be very different when it comes to hardware, software, and deployed network technology.

Cluster Computing Systems

Cluster computing systems became popular when the price/performance ratio of personal computers and workstations improved. At a certain point, it became financially and technically attractive to build a supercomputer using off-the-shelf technology by simply hooking up a collection of relatively simple computers in a high-speed network. In virtually all cases, cluster computing is used for parallel programming in which a single (compute intensive) program is run in parallel on multiple machines.

[Figure 1-6. An example of a cluster computing system: a master node (running a management application and parallel libraries) is reachable over a remote-access network and connected by a standard network to compute nodes, each running a local OS and a component of the parallel application, linked by a high-speed network.]
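The organization shown in the figure can be made concrete with a toy simulation: a master node keeps a queue of submitted jobs and hands each one to the next available compute node. Real cluster middleware (batch schedulers, parallel runtimes) is far more involved than this; every name below is invented for the sketch.

```python
from collections import deque

class ComputeNode:
    def __init__(self, name):
        self.name = name

    def run(self, job):
        # Stand-in for executing one component of a parallel program.
        return f"{self.name} finished {job}"

class MasterNode:
    """Keeps the batch queue and allocates jobs to compute nodes."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.queue = deque()

    def submit(self, job):
        self.queue.append(job)

    def run_all(self):
        results = []
        while self.queue:
            for node in self.nodes:
                if not self.queue:
                    break
                results.append(node.run(self.queue.popleft()))
        return results

master = MasterNode([ComputeNode(f"node{i}") for i in range(3)])
for part in ("part-A", "part-B", "part-C", "part-D"):
    master.submit(part)
print(master.run_all())
```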
  • 27. INTRODUCTION CHAP. 1 One well-known example of a cluster computer is formed by Linux-based Beowulf clusters, of which the general configuration is shown in Fig. 1-6. Each cluster consists of a collection of compute nodes that are controlled and accessed by means of a single master node. The master typically handles the allocation of nodes to a particular parallel program, maintains a batch queue of submitted jobs, and provides an interface for the users of the system. As such, the master actually runs the middleware needed for the execution of programs and management of the cluster, while the compute nodes often need nothing else but a standard operating system. An important part of this middleware is formed by the libraries for executing parallel programs. As we will discuss in Chap. 4, many of these libraries effec- tively provide only advanced message-based communication facilities, but are not capable of handling faulty processes, security, etc. As an alternative to this hierarchical organization, a symmetric approach is followed in the MOSIX system (Amar et al., 2004). MOSIX attempts to provide a single-system image of a cluster, meaning that to a process a cluster computer offers the ultimate distribution transparency by appearing to be a single computer. As we mentioned, providing such an image under all circumstances is impossible. In the case of MOSIX, the high degree of transparency is provided by allowing processes to dynamically and preemptively migrate between the nodes that make up the cluster. Process migration allows a user to start an application on any node (referred to as the home node), after which it can transparently move to other nodes, for example, to make efficient use of resources. We will return to process migration in Chap. 3. Grid Computing Systems A characteristic feature of cluster computing is its homogeneity. In most cases, the computers in a cluster are largely the same, they all have the same oper- ating system, and are all connected through the same network. In contrast, grid computing systems have a high degree of heterogeneity: no assumptions are made concerning hardware, operating systems, networks, administrative domains, secu- rity policies, etc. A key issue in a grid computing system is that resources from different organ- izations are brought together to allow the collaboration of a group of people or institutions. Such a collaboration is realized in the form of a virtual organization. The people belonging to the same virtual organization have access rights to the re- sources that are provided to that organization. Typically, resources consist of compute servers (including supercomputers, possibly implemented as cluster com- puters), storage facilities, and databases. In addition, special networked devices such as telescopes, sensors, etc., can be provided as well. Given its nature, much of the software for realizing grid computing evolves around providing access to resources from different administrative domains, and 18
to only those users and applications that belong to a specific virtual organization. For this reason, focus is often on architectural issues. An architecture proposed by Foster et al. (2001) is shown in Fig. 1-7.

Figure 1-7. A layered architecture for grid computing systems.

The architecture consists of four layers. The lowest fabric layer provides interfaces to local resources at a specific site. Note that these interfaces are tailored to allow sharing of resources within a virtual organization. Typically, they will provide functions for querying the state and capabilities of a resource, along with functions for actual resource management (e.g., locking resources).

The connectivity layer consists of communication protocols for supporting grid transactions that span the usage of multiple resources. For example, protocols are needed to transfer data between resources, or to simply access a resource from a remote location. In addition, the connectivity layer will contain security protocols to authenticate users and resources. Note that in many cases human users are not authenticated; instead, programs acting on behalf of the users are authenticated. In this sense, delegating rights from a user to programs is an important function that needs to be supported in the connectivity layer. We return extensively to delegation when discussing security in distributed systems.

The resource layer is responsible for managing a single resource. It uses the functions provided by the connectivity layer and calls directly the interfaces made available by the fabric layer. For example, this layer will offer functions for obtaining configuration information on a specific resource, or, in general, to perform specific operations such as creating a process or reading data. The resource layer is thus seen to be responsible for access control, and hence will rely on the authentication performed as part of the connectivity layer.

The next layer in the hierarchy is the collective layer. It deals with handling access to multiple resources and typically consists of services for resource discovery, allocation and scheduling of tasks onto multiple resources, data replication, and so on. Unlike the connectivity and resource layer, which consist of a relatively small, standard collection of protocols, the collective layer may consist of many different protocols for many different purposes, reflecting the broad spectrum of services it may offer to a virtual organization.
Finally, the application layer consists of the applications that operate within a virtual organization and which make use of the grid computing environment.

Typically the collective, connectivity, and resource layer form the heart of what could be called a grid middleware layer. These layers jointly provide access to and management of resources that are potentially dispersed across multiple sites. An important observation from a middleware perspective is that with grid computing the notion of a site (or administrative unit) is common. This prevalence is emphasized by the gradual shift toward a service-oriented architecture in which sites offer access to the various layers through a collection of Web services (Joseph et al., 2004). This, by now, has led to the definition of an alternative architecture known as the Open Grid Services Architecture (OGSA). This architecture consists of various layers and many components, making it rather complex. Complexity seems to be the fate of any standardization process. Details on OGSA can be found in Foster et al. (2005).

1.3.2 Distributed Information Systems

Another important class of distributed systems is found in organizations that were confronted with a wealth of networked applications, but for which interoperability turned out to be a painful experience. Many of the existing middleware solutions are the result of working with an infrastructure in which it was easier to integrate applications into an enterprise-wide information system (Bernstein, 1996; and Alonso et al., 2004).

We can distinguish several levels at which integration took place. In many cases, a networked application simply consisted of a server running that application (often including a database) and making it available to remote programs, called clients. Such clients could send a request to the server for executing a specific operation, after which a response would be sent back. Integration at the lowest level would allow clients to wrap a number of requests, possibly for different servers, into a single larger request and have it executed as a distributed transaction. The key idea was that all, or none of the requests would be executed.

As applications became more sophisticated and were gradually separated into independent components (notably distinguishing database components from processing components), it became clear that integration should also take place by letting applications communicate directly with each other. This has now led to a huge industry that concentrates on enterprise application integration (EAI). In the following, we concentrate on these two forms of distributed systems.

Transaction Processing Systems

To clarify our discussion, let us concentrate on database applications. In practice, operations on a database are usually carried out in the form of transactions. Programming using transactions requires special primitives that must either be
supplied by the underlying distributed system or by the language runtime system. Typical examples of transaction primitives are shown in Fig. 1-8. The exact list of primitives depends on what kinds of objects are being used in the transaction (Gray and Reuter, 1993). In a mail system, there might be primitives to send, receive, and forward mail. In an accounting system, they might be quite different. READ and WRITE are typical examples, however. Ordinary statements, procedure calls, and so on, are also allowed inside a transaction. In particular, we mention that remote procedure calls (RPCs), that is, procedure calls to remote servers, are often also encapsulated in a transaction, leading to what is known as a transactional RPC. We discuss RPCs extensively in Chap. 4.

    Primitive            Description
    BEGIN_TRANSACTION    Mark the start of a transaction
    END_TRANSACTION      Terminate the transaction and try to commit
    ABORT_TRANSACTION    Kill the transaction and restore the old values
    READ                 Read data from a file, a table, or otherwise
    WRITE                Write data to a file, a table, or otherwise

Figure 1-8. Example primitives for transactions.

BEGIN_TRANSACTION and END_TRANSACTION are used to delimit the scope of a transaction. The operations between them form the body of the transaction. The characteristic feature of a transaction is either all of these operations are executed or none are executed. These may be system calls, library procedures, or bracketing statements in a language, depending on the implementation.

This all-or-nothing property of transactions is one of the four characteristic properties that transactions have. More specifically, transactions are:

1. Atomic: To the outside world, the transaction happens indivisibly.
2. Consistent: The transaction does not violate system invariants.
3. Isolated: Concurrent transactions do not interfere with each other.
4. Durable: Once a transaction commits, the changes are permanent.

These properties are often referred to by their initial letters: ACID.
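Before examining each of these properties in more detail below, a minimal sketch may help to show how the primitives of Fig. 1-8 surface in practice. The example is purely illustrative and not taken from the book: it uses Python's built-in sqlite3 module on a single, local database, and the accounts table and amounts are invented.

    import sqlite3

    # Autocommit mode, so that BEGIN/COMMIT/ROLLBACK are issued explicitly below.
    conn = sqlite3.connect("bank.db", isolation_level=None)
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (owner TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT OR IGNORE INTO accounts VALUES ('alice', 100), ('bob', 100)")

    try:
        conn.execute("BEGIN")      # BEGIN_TRANSACTION
        conn.execute("UPDATE accounts SET balance = balance - 10 WHERE owner = 'alice'")  # WRITE
        conn.execute("UPDATE accounts SET balance = balance + 10 WHERE owner = 'bob'")    # WRITE
        conn.execute("COMMIT")     # END_TRANSACTION: make both updates permanent...
    except sqlite3.Error:
        conn.execute("ROLLBACK")   # ABORT_TRANSACTION: ...or neither (all-or-nothing)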
The first key property exhibited by all transactions is that they are atomic. This property ensures that each transaction either happens completely, or not at all, and if it happens, it happens in a single indivisible, instantaneous action. While a transaction is in progress, other processes (whether or not they are themselves involved in transactions) cannot see any of the intermediate states.

The second property says that they are consistent. What this means is that if the system has certain invariants that must always hold, if they held before the transaction, they will hold afterward too. For example, in a banking system, a key invariant is the law of conservation of money. After every internal transfer, the amount of money in the bank must be the same as it was before the transfer, but for a brief moment during the transaction, this invariant may be violated. The violation is not visible outside the transaction, however.

The third property says that transactions are isolated or serializable. What it means is that if two or more transactions are running at the same time, to each of them and to other processes, the final result looks as though all transactions ran sequentially in some (system dependent) order.

The fourth property says that transactions are durable. It refers to the fact that once a transaction commits, no matter what happens, the transaction goes forward and the results become permanent. No failure after the commit can undo the results or cause them to be lost. (Durability is discussed extensively in Chap. 8.)

So far, transactions have been defined on a single database. A nested transaction is constructed from a number of subtransactions, as shown in Fig. 1-9. The top-level transaction may fork off children that run in parallel with one another, on different machines, to gain performance or simplify programming. Each of these children may also execute one or more subtransactions, or fork off its own children.

Figure 1-9. A nested transaction.

Subtransactions give rise to a subtle, but important, problem. Imagine that a transaction starts several subtransactions in parallel, and one of these commits, making its results visible to the parent transaction. After further computation, the parent aborts, restoring the entire system to the state it had before the top-level transaction started. Consequently, the results of the subtransaction that committed must nevertheless be undone. Thus the permanence referred to above applies only to top-level transactions.

Since transactions can be nested arbitrarily deeply, considerable administration is needed to get everything right. The semantics are clear, however. When any transaction or subtransaction starts, it is conceptually given a private copy of all data in the entire system for it to manipulate as it wishes. If it aborts, its private universe just vanishes, as if it had never existed. If it commits, its private universe replaces the parent's universe. Thus if a subtransaction commits and then later a new subtransaction is started, the second one sees the results produced by the first one. Likewise, if an enclosing (higher-level) transaction aborts, all its underlying subtransactions have to be aborted as well.
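The private-universe semantics just described can be illustrated with a small toy sketch. The class and the flight data are invented for illustration only; a real implementation would, of course, also have to deal with concurrency, logging, and failures.

    import copy

    # Toy sketch of nested transactions: every (sub)transaction works on a
    # private copy of the data; committing replaces the parent's copy,
    # aborting simply discards the private copy.
    class Transaction:
        def __init__(self, parent_data):
            self.data = copy.deepcopy(parent_data)   # private copy of all data

        def subtransaction(self):
            return Transaction(self.data)            # child copies the parent's universe

        def commit(self, into):
            into.clear()
            into.update(self.data)                   # private universe replaces the parent's

        def abort(self):
            self.data = None                         # private universe just vanishes

    database = {"flight-1": "free", "flight-2": "free"}

    top = Transaction(database)
    sub = top.subtransaction()
    sub.data["flight-1"] = "reserved"
    sub.commit(into=top.data)      # visible to the parent transaction only

    top.abort()                    # parent aborts: the committed subtransaction is undone
    print(database)                # {'flight-1': 'free', 'flight-2': 'free'}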
Nested transactions are important in distributed systems, for they provide a natural way of distributing a transaction across multiple machines. They follow a logical division of the work of the original transaction. For example, a transaction for planning a trip by which three different flights need to be reserved can be logically split up into three subtransactions. Each of these subtransactions can be managed separately and independent of the other two.

In the early days of enterprise middleware systems, the component that handled distributed (or nested) transactions formed the core for integrating applications at the server or database level. This component was called a transaction processing monitor or TP monitor for short. Its main task was to allow an application to access multiple server/databases by offering it a transactional programming model, as shown in Fig. 1-10.

Figure 1-10. The role of a TP monitor in distributed systems.

Enterprise Application Integration

As mentioned, the more applications became decoupled from the databases they were built upon, the more evident it became that facilities were needed to integrate applications independent from their databases. In particular, application components should be able to communicate directly with each other and not merely by means of the request/reply behavior that was supported by transaction processing systems.

This need for interapplication communication led to many different communication models, which we will discuss in detail in this book (and for which reason we shall keep it brief for now). The main idea was that existing applications could directly exchange information, as shown in Fig. 1-11.

Figure 1-11. Middleware as a communication facilitator in enterprise application integration.
Several types of communication middleware exist. With remote procedure calls (RPC), an application component can effectively send a request to another application component by doing a local procedure call, which results in the request being packaged as a message and sent to the callee. Likewise, the result will be sent back and returned to the application as the result of the procedure call.

As the popularity of object technology increased, techniques were developed to allow calls to remote objects, leading to what is known as remote method invocations (RMI). An RMI is essentially the same as an RPC, except that it operates on objects instead of applications.

RPC and RMI have the disadvantage that the caller and callee both need to be up and running at the time of communication. In addition, they need to know exactly how to refer to each other. This tight coupling is often experienced as a serious drawback, and has led to what is known as message-oriented middleware, or simply MOM. In this case, applications simply send messages to logical contact points, often described by means of a subject. Likewise, applications can indicate their interest for a specific type of message, after which the communication middleware will take care that those messages are delivered to those applications. These so-called publish/subscribe systems form an important and expanding class of distributed systems. We will discuss them at length in Chap. 13.
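As a rough illustration of this difference in coupling, the following sketch implements the publish/subscribe idea of message-oriented middleware within a single process. The broker class and subject names are invented for the example; a real MOM product would add queuing, persistence, and network transport.

    from collections import defaultdict

    # Toy message broker: senders publish on a subject, the middleware
    # delivers to whoever subscribed, and the two sides never refer to
    # each other directly.
    class MessageBroker:
        def __init__(self):
            self.subscribers = defaultdict(list)     # subject -> list of callbacks

        def subscribe(self, subject, callback):
            self.subscribers[subject].append(callback)

        def publish(self, subject, message):
            for deliver in self.subscribers[subject]:
                deliver(message)                      # loosely coupled delivery

    broker = MessageBroker()
    broker.subscribe("orders/new", lambda m: print("inventory saw:", m))
    broker.subscribe("orders/new", lambda m: print("billing saw:", m))
    broker.publish("orders/new", {"item": "book", "qty": 1})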
1.3.3 Distributed Pervasive Systems

The distributed systems we have been discussing so far are largely characterized by their stability: nodes are fixed and have a more or less permanent and high-quality connection to a network. To a certain extent, this stability has been realized through the various techniques that are discussed in this book and which aim at achieving distribution transparency. For example, the wealth of techniques for masking failures and recovery will give the impression that only occasionally things may go wrong. Likewise, we have been able to hide aspects related to the actual network location of a node, effectively allowing users and applications to believe that nodes stay put.

However, matters have become very different with the introduction of mobile and embedded computing devices. We are now confronted with distributed systems in which instability is the default behavior. The devices in these, what we refer to as distributed pervasive systems, are often characterized by being small, battery-powered, mobile, and having only a wireless connection, although not all these characteristics apply to all devices. Moreover, these characteristics need not necessarily be interpreted as restrictive, as is illustrated by the possibilities of modern smart phones (Roussos et al., 2005).

As its name suggests, a distributed pervasive system is part of our surroundings (and as such, is generally inherently distributed). An important feature is the general lack of human administrative control. At best, devices can be configured by their owners, but otherwise they need to automatically discover their environment and "nestle in" as best as possible. This nestling in has been made more precise by Grimm et al. (2004) by formulating the following three requirements for pervasive applications:

1. Embrace contextual changes.
2. Encourage ad hoc composition.
3. Recognize sharing as the default.

Embracing contextual changes means that a device must continuously be aware of the fact that its environment may change all the time. One of the simplest changes is discovering that a network is no longer available, for example, because a user is moving between base stations. In such a case, the application should react, possibly by automatically connecting to another network, or taking other appropriate actions.

Encouraging ad hoc composition refers to the fact that many devices in pervasive systems will be used in very different ways by different users. As a result, it should be easy to configure the suite of applications running on a device, either by the user or through automated (but controlled) interposition.

One very important aspect of pervasive systems is that devices generally join the system in order to access (and possibly provide) information. This calls for means to easily read, store, manage, and share information. In light of the intermittent and changing connectivity of devices, the space where accessible information resides will most likely change all the time.

Mascolo et al. (2004) as well as Niemela and Latvakoski (2004) came to similar conclusions: in the presence of mobility, devices should support easy and application-dependent adaptation to their local environment. They should be able to
efficiently discover services and react accordingly. It should be clear from these requirements that distribution transparency is not really in place in pervasive systems. In fact, distribution of data, processes, and control is inherent to these systems, for which reason it may be better just to simply expose it rather than trying to hide it. Let us now take a look at some concrete examples of pervasive systems.

Home Systems

An increasingly popular type of pervasive system, but which may perhaps be the least constrained, are systems built around home networks. These systems generally consist of one or more personal computers, but more importantly integrate typical consumer electronics such as TVs, audio and video equipment, gaming devices, (smart) phones, PDAs, and other personal wearables into a single system. In addition, we can expect that all kinds of devices such as kitchen appliances, surveillance cameras, clocks, controllers for lighting, and so on, will all be hooked up into a single distributed system.

From a system's perspective there are several challenges that need to be addressed before pervasive home systems become reality. An important one is that such a system should be completely self-configuring and self-managing. It cannot be expected that end users are willing and able to keep a distributed home system up and running if its components are prone to errors (as is the case with many of today's devices). Much has already been accomplished through the Universal Plug and Play (UPnP) standards by which devices automatically obtain IP addresses, can discover each other, etc. (UPnP Forum, 2003). However, more is needed. For example, it is unclear how software and firmware in devices can be easily updated without manual intervention, or when updates do take place, that compatibility with other devices is not violated.

Another pressing issue is managing what is known as a "personal space." Recognizing that a home system consists of many shared as well as personal devices, and that the data in a home system is also subject to sharing restrictions, much attention is paid to realizing such personal spaces. For example, part of Alice's personal space may consist of her agenda, family photos, a diary, music and videos that she bought, etc. These personal assets should be stored in such a way that Alice has access to them whenever appropriate. Moreover, parts of this personal space should be (temporarily) accessible to others, for example, when she needs to make a business appointment.

Fortunately, things may become simpler. It has long been thought that the personal spaces related to home systems were inherently distributed across the various devices. Obviously, such a dispersion can easily lead to significant synchronization problems. However, problems may be alleviated due to the rapid increase in the capacity of hard disks, along with a decrease in their size. Configuring a multi-terabyte storage unit for a personal computer is not really a problem. At the same time, portable hard disks having a capacity of hundreds of gigabytes are
being placed inside relatively small portable media players. With these continuously increasing capacities, we may see pervasive home systems adopt an architecture in which a single machine acts as a master (and is hidden away somewhere in the basement next to the central heating), and all other fixed devices simply provide a convenient interface for humans. Personal devices will then be crammed with daily needed information, but will never run out of storage.

However, having enough storage does not solve the problem of managing personal spaces. Being able to store huge amounts of data shifts the problem to storing relevant data and being able to find it later. Increasingly we will see pervasive systems, like home networks, equipped with what are called recommenders, programs that consult what other users have stored in order to identify similar taste, and from that subsequently derive which content to place in one's personal space. An interesting observation is that the amount of information that recommender programs need to do their work is often small enough to allow them to be run on PDAs (Miller et al., 2004).

Electronic Health Care Systems

Another important and upcoming class of pervasive systems are those related to (personal) electronic health care. With the increasing cost of medical treatment, new devices are being developed to monitor the well-being of individuals and to automatically contact physicians when needed. In many of these systems, a major goal is to prevent people from being hospitalized.

Personal health care systems are often equipped with various sensors organized in a (preferably wireless) body-area network (BAN). An important issue is that such a network should at worst only minimally hinder a person. To this end, the network should be able to operate while a person is moving, with no strings (i.e., wires) attached to immobile devices.

This requirement leads to two obvious organizations, as shown in Fig. 1-12. In the first one, a central hub is part of the BAN and collects data as needed. From time to time, this data is then offloaded to a larger storage device. The advantage of this scheme is that the hub can also manage the BAN. In the second scenario, the BAN is continuously hooked up to an external network, again through a wireless connection, to which it sends monitored data. Separate techniques will need to be deployed for managing the BAN. Of course, further connections to a physician or other people may exist as well.

Figure 1-12. Monitoring a person in a pervasive electronic health care system, using (a) a local hub or (b) a continuous wireless connection.

From a distributed system's perspective we are immediately confronted with questions such as:

1. Where and how should monitored data be stored?
2. How can we prevent loss of crucial data?
3. What infrastructure is needed to generate and propagate alerts?
4. How can physicians provide online feedback?
5. How can extreme robustness of the monitoring system be realized?
6. What are the security issues and how can the proper policies be enforced?

Unlike home systems, we cannot expect the architecture of pervasive health care systems to move toward single-server systems and have the monitoring devices operate with minimal functionality. On the contrary: for reasons of efficiency, devices and body-area networks will be required to support in-network data processing, meaning that monitoring data will, for example, have to be aggregated before permanently storing it or sending it to a physician. Unlike the case for distributed information systems, there is yet no clear answer to these questions.

Sensor Networks

Our last example of pervasive systems is sensor networks. These networks in many cases form part of the enabling technology for pervasiveness and we see that many solutions for sensor networks return in pervasive applications. What makes sensor networks interesting from a distributed system's perspective is that in virtually all cases they are used for processing information. In this sense, they do more than just provide communication services, which is what traditional computer networks are all about. Akyildiz et al. (2002) provide an overview from a networking perspective. A more systems-oriented introduction to sensor networks is given by Zhao and Guibas (2004). Strongly related are mesh networks which essentially form a collection of (fixed) nodes that communicate through wireless links. These networks may form the basis for many medium-scale distributed systems. An overview is provided in Akyildiz et al. (2005).
A sensor network typically consists of tens to hundreds or thousands of relatively small nodes, each equipped with a sensing device. Most sensor networks use wireless communication, and the nodes are often battery powered. Their limited resources, restricted communication capabilities, and constrained power consumption demand that efficiency be high on the list of design criteria.

The relation with distributed systems can be made clear by considering sensor networks as distributed databases. This view is quite common and easy to understand when realizing that many sensor networks are deployed for measurement and surveillance applications (Bonnet et al., 2002). In these cases, an operator would like to extract information from (a part of) the network by simply issuing queries such as "What is the northbound traffic load on Highway 1?" Such queries resemble those of traditional databases. In this case, the answer will probably need to be provided through collaboration of many sensors located around Highway 1, while leaving other sensors untouched.

To organize a sensor network as a distributed database, there are essentially two extremes, as shown in Fig. 1-13. First, sensors do not cooperate but simply send their data to a centralized database located at the operator's site. The other extreme is to forward queries to relevant sensors and to let each compute an answer, requiring the operator to sensibly aggregate the returned answers.

Figure 1-13. Organizing a sensor network database, while storing and processing data (a) only at the operator's site or (b) only at the sensors.

Neither of these solutions is very attractive. The first one requires that sensors send all their measured data through the network, which may waste network resources and energy. The second solution may also be wasteful as it discards the aggregation capabilities of sensors which would allow much less data to be returned to the operator. What is needed are facilities for in-network data processing, as we also encountered in pervasive health care systems.

In-network processing can be done in numerous ways. One obvious one is to forward a query to all sensor nodes along a tree encompassing all nodes and to subsequently aggregate the results as they are propagated back to the root, where the initiator is located. Aggregation will take place where two or more branches of the tree come together. As simple as this scheme may sound, it introduces difficult questions:

1. How do we (dynamically) set up an efficient tree in a sensor network?
2. How does aggregation of results take place? Can it be controlled?
3. What happens when network links fail?

These questions have been partly addressed in TinyDB, which implements a declarative (database) interface to wireless sensor networks. In essence, TinyDB can use any tree-based routing algorithm. An intermediate node will collect and aggregate the results from its children, along with its own findings, and send that toward the root. To make matters efficient, queries span a period of time allowing for careful scheduling of operations so that network resources and energy are optimally consumed. Details can be found in Madden et al. (2005).
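The following sketch, which is invented for illustration and is not TinyDB itself, shows the essence of such tree-based in-network aggregation: every node combines the aggregates of its children with its own reading and passes only the (much smaller) aggregate toward the root.

    # Toy sketch of in-network aggregation along a tree of sensor nodes.
    class SensorNode:
        def __init__(self, reading, children=()):
            self.reading = reading          # e.g., a local temperature sample
            self.children = list(children)

        def aggregate(self):
            # Combine child aggregates with the node's own finding; here a sum
            # and a count, from which the root can compute an average.
            total, count = self.reading, 1
            for child in self.children:
                t, c = child.aggregate()
                total, count = total + t, count + c
            return total, count

    leaves = [SensorNode(21.0), SensorNode(23.5), SensorNode(22.0)]
    root = SensorNode(20.5, children=[SensorNode(21.5, leaves[:2]), leaves[2]])
    total, count = root.aggregate()
    print("average temperature:", total / count)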
However, when queries can be initiated from different points in the network, using single-rooted trees such as in TinyDB may not be efficient enough. As an alternative, sensor networks may be equipped with special nodes where results are forwarded to, as well as the queries related to those results. To give a simple example, queries and results related to temperature readings are collected at a different location than those related to humidity measurements. This approach corresponds directly to the notion of publish/subscribe systems, which we will discuss extensively in Chap. 13.

1.4 SUMMARY

Distributed systems consist of autonomous computers that work together to give the appearance of a single coherent system. One important advantage is that they make it easier to integrate different applications running on different computers into a single system. Another advantage is that when properly designed,
distributed systems scale well with respect to the size of the underlying network. These advantages often come at the cost of more complex software, degradation of performance, and also often weaker security. Nevertheless, there is considerable interest worldwide in building and installing distributed systems.

Distributed systems often aim at hiding many of the intricacies related to the distribution of processes, data, and control. However, this distribution transparency not only comes at a performance price, but in practical situations it can never be fully achieved. The fact that trade-offs need to be made between achieving various forms of distribution transparency is inherent to the design of distributed systems, and can easily complicate their understanding.

Matters are further complicated by the fact that many developers initially make assumptions about the underlying network that are fundamentally wrong. Later, when assumptions are dropped, it may turn out to be difficult to mask unwanted behavior. A typical example is assuming that network latency is not significant. Later, when porting an existing system to a wide-area network, hiding latencies may deeply affect the system's original design. Other pitfalls include assuming that the network is reliable, static, secure, and homogeneous.

Different types of distributed systems exist which can be classified as being oriented toward supporting computations, information processing, and pervasiveness. Distributed computing systems are typically deployed for high-performance applications often originating from the field of parallel computing. A huge class of distributed systems can be found in traditional office environments where we see databases playing an important role. Typically, transaction processing systems are deployed in these environments. Finally, an emerging class of distributed systems is where components are small and the system is composed in an ad hoc fashion, but most of all is no longer managed through a system administrator. This last class is typically represented by ubiquitous computing environments.

PROBLEMS

1. An alternative definition for a distributed system is that of a collection of independent computers providing the view of being a single system, that is, it is completely hidden from users that there are even multiple computers. Give an example where this view would come in very handy.

2. What is the role of middleware in a distributed system?

3. Many networked systems are organized in terms of a back office and a front office. How does this organization match with the coherent view we demand for a distributed system?

4. Explain what is meant by (distribution) transparency, and give examples of different types of transparency.
5. Why is it sometimes so hard to hide the occurrence and recovery from failures in a distributed system?

6. Why is it not always a good idea to aim at implementing the highest degree of transparency possible?

7. What is an open distributed system and what benefits does openness provide?

8. Describe precisely what is meant by a scalable system.

9. Scalability can be achieved by applying different techniques. What are these techniques?

10. Explain what is meant by a virtual organization and give a hint on how such organizations could be implemented.

11. When a transaction is aborted, we have said that the world is restored to its previous state, as though the transaction had never happened. We lied. Give an example where resetting the world is impossible.

12. Executing nested transactions requires some form of coordination. Explain what a coordinator should actually do.

13. We argued that distribution transparency may not be in place for pervasive systems. This statement is not true for all types of transparencies. Give an example.

14. We already gave some examples of distributed pervasive systems: home systems, electronic health-care systems, and sensor networks. Extend this list with more examples.

15. (Lab assignment) Sketch a design for a home system consisting of a separate media server that will allow for the attachment of a wireless client. The latter is connected to (analog) audio/video equipment and transforms the digital media streams to analog output. The server runs on a separate machine, possibly connected to the Internet, but has no keyboard and/or monitor connected.
2 ARCHITECTURES

Distributed systems are often complex pieces of software of which the components are by definition dispersed across multiple machines. To master their complexity, it is crucial that these systems are properly organized. There are different ways to view the organization of a distributed system, but an obvious one is to make a distinction between the logical organization of the collection of software components on the one hand and the actual physical realization on the other.

The organization of distributed systems is mostly about the software components that constitute the system. These software architectures tell us how the various software components are to be organized and how they should interact. In this chapter we will first pay attention to some commonly applied approaches toward organizing (distributed) computer systems.

The actual realization of a distributed system requires that we instantiate and place software components on real machines. There are many different choices that can be made in doing so. The final instantiation of a software architecture is also referred to as a system architecture. In this chapter we will look into traditional centralized architectures in which a single server implements most of the software components (and thus functionality), while remote clients can access that server using simple communication means. In addition, we consider decentralized architectures in which machines more or less play equal roles, as well as hybrid organizations.

As we explained in Chap. 1, an important goal of distributed systems is to separate applications from underlying platforms by providing a middleware layer.
Adopting such a layer is an important architectural decision, and its main purpose is to provide distribution transparency. However, trade-offs need to be made to achieve transparency, which has led to various techniques to make middleware adaptive. We discuss some of the more commonly applied ones in this chapter, as they affect the organization of the middleware itself.

Adaptability in distributed systems can also be achieved by having the system monitor its own behavior and taking appropriate measures when needed. This insight has led to a class of what are now referred to as autonomic systems. These distributed systems are frequently organized in the form of feedback control loops, which form an important architectural element during a system's design. In this chapter, we devote a section to autonomic distributed systems.

2.1 ARCHITECTURAL STYLES

We start our discussion on architectures by first considering the logical organization of distributed systems into software components, also referred to as software architecture (Bass et al., 2003). Research on software architectures has matured considerably and it is now commonly accepted that designing or adopting an architecture is crucial for the successful development of large systems.

For our discussion, the notion of an architectural style is important. Such a style is formulated in terms of components, the way that components are connected to each other, the data exchanged between components, and finally how these elements are jointly configured into a system. A component is a modular unit with well-defined required and provided interfaces that is replaceable within its environment (OMG, 2004b). As we shall discuss below, the important issue about a component for distributed systems is that it can be replaced, provided we respect its interfaces. A somewhat more difficult concept to grasp is that of a connector, which is generally described as a mechanism that mediates communication, coordination, or cooperation among components (Mehta et al., 2000; and Shaw and Clements, 1997). For example, a connector can be formed by the facilities for (remote) procedure calls, message passing, or streaming data.

Using components and connectors, we can come to various configurations, which, in turn, have been classified into architectural styles. Several styles have by now been identified, of which the most important ones for distributed systems are:

1. Layered architectures
2. Object-based architectures
3. Data-centered architectures
4. Event-based architectures

The basic idea for the layered style is simple: components are organized in a layered fashion where a component at layer Li is allowed to call components at
the underlying layer Li-1, but not the other way around, as shown in Fig. 2-1(a). This model has been widely adopted by the networking community; we briefly review it in Chap. 4. A key observation is that control generally flows from layer to layer: requests go down the hierarchy whereas the results flow upward.

A far looser organization is followed in object-based architectures, which are illustrated in Fig. 2-1(b). In essence, each object corresponds to what we have defined as a component, and these components are connected through a (remote) procedure call mechanism. Not surprisingly, this software architecture matches the client-server system architecture we described above. The layered and object-based architectures still form the most important styles for large software systems (Bass et al., 2003).

Figure 2-1. The (a) layered and (b) object-based architectural style.

Data-centered architectures evolve around the idea that processes communicate through a common (passive or active) repository. It can be argued that for distributed systems these architectures are as important as the layered and object-based architectures. For example, a wealth of networked applications have been developed that rely on a shared distributed file system in which virtually all communication takes place through files. Likewise, Web-based distributed systems, which we discuss extensively in Chap. 12, are largely data-centric: processes communicate through the use of shared Web-based data services.

In event-based architectures, processes essentially communicate through the propagation of events, which optionally also carry data, as shown in Fig. 2-2(a). For distributed systems, event propagation has generally been associated with what are known as publish/subscribe systems (Eugster et al., 2003). The basic idea is that processes publish events after which the middleware ensures that only those processes that subscribed to those events will receive them. The main advantage of event-based systems is that processes are loosely coupled. In principle, they need not explicitly refer to each other. This is also referred to as being decoupled in space, or referentially decoupled.

Figure 2-2. The (a) event-based and (b) shared data-space architectural style.
Event-based architectures can be combined with data-centered architectures, yielding what is also known as shared data spaces. The essence of shared data spaces is that processes are now also decoupled in time: they need not both be active when communication takes place. Furthermore, many shared data spaces use a SQL-like interface to the shared repository in the sense that data can be accessed using a description rather than an explicit reference, as is the case with files. We devote Chap. 13 to this architectural style.

What makes these software architectures important for distributed systems is that they all aim at achieving (at a reasonable level) distribution transparency. However, as we have argued, distribution transparency requires making trade-offs between performance, fault tolerance, ease-of-programming, and so on. As there is no single solution that will meet the requirements for all possible distributed applications, researchers have abandoned the idea that a single distributed system can be used to cover 90% of all possible cases.
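Returning briefly to the shared data-space style, the following toy sketch (invented for illustration; real tuple spaces are far richer) shows the key idea of associative access: a process writes tuples into the space, and another process, possibly much later, retrieves them by description rather than by explicit reference.

    # Toy sketch of a data-centered, associatively accessed repository.
    class DataSpace:
        def __init__(self):
            self.tuples = []

        def write(self, tup):
            self.tuples.append(tup)

        def read(self, **pattern):
            # Return the first tuple whose fields match the description.
            for tup in self.tuples:
                if all(tup.get(k) == v for k, v in pattern.items()):
                    return tup
            return None

    space = DataSpace()
    space.write({"type": "temperature", "room": "lab", "value": 21.5})
    # Possibly much later, and from a different process in a real system:
    print(space.read(type="temperature", room="lab"))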
2.2 SYSTEM ARCHITECTURES

Now that we have briefly discussed some common architectural styles, let us take a look at how many distributed systems are actually organized by considering where software components are placed. Deciding on software components, their interaction, and their placement leads to an instance of a software architecture, also called a system architecture (Bass et al., 2003). We will discuss centralized and decentralized organizations, as well as various hybrid forms.

2.2.1 Centralized Architectures

Despite the lack of consensus on many distributed systems issues, there is one issue that many researchers and practitioners agree upon: thinking in terms of clients that request services from servers helps us understand and manage the complexity of distributed systems, and that is a good thing.

In the basic client-server model, processes in a distributed system are divided into two (possibly overlapping) groups. A server is a process implementing a specific service, for example, a file system service or a database service. A client is a process that requests a service from a server by sending it a request and subsequently waiting for the server's reply. This client-server interaction, also known as request-reply behavior, is shown in Fig. 2-3.

Figure 2-3. General interaction between a client and a server.

Communication between a client and a server can be implemented by means of a simple connectionless protocol when the underlying network is fairly reliable as in many local-area networks. In these cases, when a client requests a service, it simply packages a message for the server, identifying the service it wants, along with the necessary input data. The message is then sent to the server. The latter, in turn, will always wait for an incoming request, subsequently process it, and package the results in a reply message that is then sent to the client.

Using a connectionless protocol has the obvious advantage of being efficient. As long as messages do not get lost or corrupted, the request/reply protocol just sketched works fine. Unfortunately, making the protocol resistant to occasional transmission failures is not trivial. The only thing we can do is possibly let the client resend the request when no reply message comes in. The problem, however, is that the client cannot detect whether the original request message was lost, or that transmission of the reply failed. If the reply was lost, then resending a request may result in performing the operation twice. If the operation was something like "transfer $10,000 from my bank account," then clearly, it would have been better that we simply reported an error instead. On the other hand, if the operation was "tell me how much money I have left," it would be perfectly acceptable to resend the request. When an operation can be repeated multiple times without harm, it is said to be idempotent. Since some requests are idempotent and others are not it should be clear that there is no single solution for dealing with lost messages. We defer a detailed discussion on handling transmission failures to Chap. 8.

As an alternative, many client-server systems use a reliable connection-oriented protocol. Although this solution is not entirely appropriate in a local-area network due to relatively low performance, it works perfectly fine in wide-area systems in which communication is inherently unreliable. For example, virtually all Internet application protocols are based on reliable TCP/IP connections. In this case, whenever a client requests a service, it first sets up a connection to the server before sending the request. The server generally uses that same connection to send the reply message, after which the connection is torn down. The trouble is that setting up and tearing down a connection is relatively costly, especially when the request and reply messages are small.
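A bare-bones sketch of such connectionless request-reply behavior on the client side is given below. The port number, message format, and timeout are assumptions made for the example, and a matching server is assumed to be listening. Note that blindly resending is acceptable here only because the request shown is idempotent.

    import socket

    def request_reply(server, request, retries=3, timeout=1.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(request, server)            # package and send the request
            try:
                reply, _ = sock.recvfrom(4096)      # wait for the server's reply
                return reply
            except socket.timeout:
                continue                            # request or reply lost: resend
        raise RuntimeError("server not responding")

    # An idempotent operation, safe to repeat if the reply was lost:
    print(request_reply(("localhost", 9000), b"GET-BALANCE alice"))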
Application Layering

The client-server model has been subject to many debates and controversies over the years. One of the main issues was how to draw a clear distinction between a client and a server. Not surprisingly, there is often no clear distinction. For example, a server for a distributed database may continuously act as a client because it is forwarding requests to different file servers responsible for implementing the database tables. In such a case, the database server itself essentially does no more than process queries.

However, considering that many client-server applications are targeted toward supporting user access to databases, many people have advocated a distinction between the following three levels, essentially following the layered architectural style we discussed previously:

1. The user-interface level
2. The processing level
3. The data level

The user-interface level contains all that is necessary to directly interface with the user, such as display management. The processing level typically contains the applications. The data level manages the actual data that is being acted on.

Clients typically implement the user-interface level. This level consists of the programs that allow end users to interact with applications. There is a considerable difference in how sophisticated user-interface programs are. The simplest user-interface program is nothing more than a character-based screen. Such an interface has been typically used in mainframe environments. In those cases where the mainframe controls all interaction, including the keyboard and monitor, one can hardly speak of a client-server environment. However, in many cases, the user's terminal does some local processing such as echoing typed keystrokes, or supporting form-like interfaces in which a complete entry is to be edited before sending it to the main computer.

Nowadays, even in mainframe environments, we see more advanced user interfaces. Typically, the client machine offers at least a graphical display in which pop-up or pull-down menus are used, and of which many of the screen controls are handled through a mouse instead of the keyboard. Typical examples of such interfaces include the X-Windows interfaces as used in many UNIX environments, and earlier interfaces developed for MS-DOS PCs and Apple Macintoshes.
Modern user interfaces offer considerably more functionality by allowing applications to share a single graphical window, and to use that window to exchange data through user actions. For example, to delete a file, it is usually possible to move the icon representing that file to an icon representing a trash can. Likewise, many word processors allow a user to move text in a document to another position by using only the mouse. We return to user interfaces in Chap. 3.

Many client-server applications can be constructed from roughly three different pieces: a part that handles interaction with a user, a part that operates on a database or file system, and a middle part that generally contains the core functionality of an application. This middle part is logically placed at the processing level. In contrast to user interfaces and databases, there are not many aspects common to the processing level. Therefore, we shall give several examples to make this level clearer.

As a first example, consider an Internet search engine. Ignoring all the animated banners, images, and other fancy window dressing, the user interface of a search engine is very simple: a user types in a string of keywords and is subsequently presented with a list of titles of Web pages. The back end is formed by a huge database of Web pages that have been prefetched and indexed. The core of the search engine is a program that transforms the user's string of keywords into one or more database queries. It subsequently ranks the results into a list, and transforms that list into a series of HTML pages. Within the client-server model, this information retrieval part is typically placed at the processing level. Fig. 2-4 shows this organization.

Figure 2-4. The simplified organization of an Internet search engine into three different layers.
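A schematic sketch of this three-level split is given below. The function and table names are invented, and a real search engine is obviously far more sophisticated; the point is merely where the query generation, ranking, and HTML generation logically reside.

    # Illustrative sketch of the three levels of Fig. 2-4.
    def data_level(query):
        # Stand-in for the database of prefetched and indexed Web pages.
        index = {"distributed": ["page-17", "page-42"], "systems": ["page-42"]}
        return index.get(query, [])

    def processing_level(keyword_expression):
        keywords = keyword_expression.split()
        hits = [data_level(k) for k in keywords]                  # generate database queries
        ranked = sorted(set(hits[0]).intersection(*hits[1:]) if hits else [])
        return "<ul>" + "".join("<li>%s</li>" % t for t in ranked) + "</ul>"  # HTML generator

    def user_interface_level(keyword_expression):
        return processing_level(keyword_expression)               # page shown to the user

    print(user_interface_level("distributed systems"))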
As a second example, consider a decision support system for a stock brokerage. Analogous to a search engine, such a system can be divided into a front end implementing the user interface, a back end for accessing a database with the financial data, and the analysis programs between these two. Analysis of financial data may require sophisticated methods and techniques from statistics and artificial intelligence. In some cases, the core of a financial decision support system may even need to be executed on high-performance computers in order to achieve the throughput and responsiveness that is expected from its users.

As a last example, consider a typical desktop package, consisting of a word processor, a spreadsheet application, communication facilities, and so on. Such "office" suites are generally integrated through a common user interface that supports compound documents, and operates on files from the user's home directory. (In an office environment, this home directory is often placed on a remote file server.) In this example, the processing level consists of a relatively large collection of programs, each having rather simple processing capabilities.

The data level in the client-server model contains the programs that maintain the actual data on which the applications operate. An important property of this level is that data are often persistent, that is, even if no application is running, data will be stored somewhere for next use. In its simplest form, the data level consists of a file system, but it is more common to use a full-fledged database. In the client-server model, the data level is typically implemented at the server side.

Besides merely storing data, the data level is generally also responsible for keeping data consistent across different applications. When databases are being used, maintaining consistency means that metadata such as table descriptions, entry constraints and application-specific metadata are also stored at this level. For example, in the case of a bank, we may want to generate a notification when a customer's credit card debt reaches a certain value. This type of information can be maintained through a database trigger that activates a handler for that trigger at the appropriate moment.

In most business-oriented environments, the data level is organized as a relational database. Data independence is crucial here. The data are organized independent of the applications in such a way that changes in that organization do not affect applications, and neither do the applications affect the data organization. Using relational databases in the client-server model helps separate the processing level from the data level, as processing and data are considered independent.

However, relational databases are not always the ideal choice. A characteristic feature of many applications is that they operate on complex data types that are more easily modeled in terms of objects than in terms of relations. Examples of such data types range from simple polygons and circles to representations of aircraft designs, as is the case with computer-aided design (CAD) systems.

In those cases where data operations are more easily expressed in terms of object manipulations, it makes sense to implement the data level by means of an object-oriented or object-relational database. Notably the latter type has gained popularity as these databases build upon the widely dispersed relational data model, while offering the advantages that object-orientation gives.
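Returning to the banking example mentioned above, the following minimal sketch (the table, threshold, and messages are invented) shows how a database trigger lets the data level itself react when a customer's credit card debt crosses a limit, independent of whichever application performs the update. It uses SQLite through Python's built-in sqlite3 module.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE cards (customer TEXT, debt INTEGER);
        CREATE TABLE notifications (customer TEXT, message TEXT);

        CREATE TRIGGER debt_watch AFTER UPDATE ON cards
        WHEN NEW.debt >= 1000
        BEGIN
            INSERT INTO notifications VALUES (NEW.customer, 'credit limit reached');
        END;
    """)
    conn.execute("INSERT INTO cards VALUES ('alice', 200)")
    conn.execute("UPDATE cards SET debt = 1200 WHERE customer = 'alice'")
    print(conn.execute("SELECT * FROM notifications").fetchall())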
  • 50. Exploring the Variety of Random Documents with Different Content