2. Index
• Introduction
• Advantages
• Hash Function
• Hash Table
• Collision Resolution Techniques
• Separate Chaining
• Linear Probing
• Quadratic Probing
• Double Hashing
• Application
• Reference
3. Introduction
• Hashing is a technique that is used to uniquely identify
a specific object from a group of similar objects.
• Some examples of how hashing is used in our lives include:
• In universities, each student is assigned a unique roll number that can
be used to retrieve information about them.
• In libraries, each book is assigned a unique number that can be used to
determine information about the book, such as its exact position in the
library or the users to whom it has been issued.
• In both these examples the students and books were hashed to a
unique number.
4. Continue…
• Assume that you have an object and you want to assign
a key to it to make searching easy. To store the
key/value pair, you can use a simple array-like data
structure where keys (integers) can be used directly as
an index to store values. However, in cases where the
keys are large and cannot be used directly as an
index, you should use hashing.
5. Continue…
• In hashing, large keys are converted into small keys
by using hash functions.
• The values are then stored in a data structure called a hash table.
• The idea of hashing is to distribute entries (key/value pairs)
uniformly across an array.
• Each element is assigned a key (converted key).
• By using that key you can access the element in O(1) time. Using
the key, the algorithm (hash function) computes an index that
suggests where an entry can be found or inserted.
6. Continue…
• Hashing is implemented in two steps:
• An element is converted into an integer by using a hash
function. This integer can be used as an index to store the
original element in the hash table.
• The element is stored in the hash table, where it can be
quickly retrieved using the hashed key.
• hash = hashfunc(key)
index = hash % array_size
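A minimal Python sketch of these two steps (the hashfunc below, the table size of 10, and the sample key are illustrative assumptions, not part of the original):

def hashfunc(key):
    # Step 1: convert the key (a string here) into an integer
    return sum(ord(ch) for ch in key)

array_size = 10
key = "apple"
hash_value = hashfunc(key)       # 530 for "apple"
index = hash_value % array_size  # Step 2: map the integer into the table; 530 % 10 = 0

The original element would then be stored at slot 0 of a 10-slot table.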
7. Advantages
• The main advantage of hash tables over other table data structures
is speed.
• This advantage is more apparent when the number of entries is large
(thousands or more).
• Hash tables are particularly efficient when the maximum number of
entries can be predicted in advance, so that the bucket
array can be allocated once with the optimum size and never resized.
8. Hash function
• A hash function is any function that can be used to map a data set
of an arbitrary size to a data set of a fixed size, which falls into the hash
table.
• The values returned by a hash function are called hash values, hash codes,
hash sums, or simply hashes.
9. Continue…
• To achieve a good hashing mechanism, it is important to
have a good hash function with the following basic
requirements:
• Easy to compute
• Uniform distribution
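As a rough illustration of these two requirements, here is a simple polynomial string hash in Python; the multiplier 31 and the table size are common but arbitrary choices assumed for this sketch:

def string_hash(key, table_size):
    # Cheap to compute: one pass over the characters.
    # The multiply-and-add spreads similar strings across the table
    # more uniformly than, say, simply summing the characters.
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) % table_size
    return h

print(string_hash("cat", 11))   # some index in the range 0..10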
10. Hash Table
• A hash table is a data structure that is used to store key/value pairs. It
uses a hash function to compute an index into an array in which an
element will be inserted or searched.
• By using a good hash function, hashing can work well. Under reasonable
assumptions, the average time required to search for an element in a hash
table is O(1).
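In practice, most languages already ship a hash-table-backed container; the short Python example below uses the built-in dict (which is implemented as a hash table) purely to illustrate the average O(1) insert and lookup:

phone_book = {}                 # Python's dict is a hash table
phone_book["alice"] = 1234      # insert: the key "alice" is hashed to pick a slot
phone_book["bob"] = 5678
print(phone_book["alice"])      # average O(1) lookup -> 1234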
11. Collision resolution techniques
• If x1 and x2 are two different keys, it is possible that h(x1) = h(x2). This
is called a collision. Collision resolution is the most important issue in
hash table implementations.
• Choosing a hash function that minimizes the number of collisions and
also hashes uniformly is another critical issue.
• Separate chaining (open hashing)
• Linear probing (open addressing or closed hashing)
• Quadratic Probing
• Double hashing
12. Separate Chaining (Open Hashing)
• Separate chaining is one of the most commonly used collision resolution
techniques.
• It is usually implemented using linked lists. In separate chaining, each
element of the hash table is a linked list.
• To store an element in the hash table you must insert it into a specific
linked list.
• If there is any collision (i.e. two different elements have the same hash
value), then both elements are stored in the same linked list.
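A minimal sketch of separate chaining in Python (the class and method names, and the table size of 7, are assumptions made for this example):

class ChainedHashTable:
    def __init__(self, size=7):
        self.size = size
        self.buckets = [[] for _ in range(size)]   # one chain (list) per slot

    def _index(self, key):
        return hash(key) % self.size

    def insert(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                   # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))        # new key, or a collision: append to the chain

    def search(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None                        # key not in the table

table = ChainedHashTable()
table.insert("apple", 1)
table.insert("melon", 2)      # if "melon" collides with "apple", both sit in one chain
print(table.search("melon"))  # 2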
14. Linear Probing
• In open addressing, all entry records are stored in the array itself,
instead of in linked lists.
• When a new entry has to be inserted, the hash index of the
hashed value is computed and then the array is examined
(starting with the hashed index).
• If the slot at the hashed index is unoccupied, the entry record is
inserted in that slot; otherwise, the search proceeds along some probe
sequence until it finds an unoccupied slot.
15. Continue…
• The probe sequence is the sequence that is followed while
traversing through entries. In different probe sequences, you can
have different intervals between successive entry slots or probes.
• When searching for an entry, the array is scanned in the same
sequence until either the target element is found or an unused slot
is found. This indicates that there is no such key in the table. The
name "open addressing" refers to the fact that the location or
address of the item is not determined by its hash value.
16. Continue…
• Linear probing is when the interval between successive probes is
fixed (usually to 1). Let’s assume that the hashed index for a
particular entry is index. The probing sequence for linear
probing will be:
index = index % hashTableSize
index = (index + 1) % hashTableSize
index = (index + 2) % hashTableSize
index = (index + 3) % hashTableSize
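The same probe sequence, written as a small Python insertion routine (the function name and the fixed table of 8 slots are assumptions for illustration):

def insert_linear(table, key, value):
    size = len(table)
    index = hash(key) % size
    for step in range(size):                  # examine at most size slots
        probe = (index + step) % size         # interval between probes is 1
        if table[probe] is None or table[probe][0] == key:
            table[probe] = (key, value)
            return probe
    raise RuntimeError("hash table is full")

table = [None] * 8
insert_linear(table, "apple", 1)
insert_linear(table, "melon", 2)   # on a collision, it lands in the next free slot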
17. Quadratic Probing
• Quadratic probing is similar to linear probing and the only
difference is the interval between successive probes or entry
slots.
• Here, when the slot at a hashed index for an entry record is
already occupied, you must start traversing until you find an
unoccupied slot.
• The interval between slots is computed by adding the
successive value of an arbitrary polynomial in the original
hashed index.
18. Continue…
• Let us assume that the hashed index for an entry is index and
at index there is an occupied slot. The probe sequence will be as follows:
index = index % hashTableSize
index = (index + 1 * 1) % hashTableSize
index = (index + 2 * 2) % hashTableSize
index = (index + 3 * 3) % hashTableSize
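A short Python sketch of the same idea (the function name and table size are assumptions); note that, unlike linear probing, the i*i offsets do not necessarily visit every slot, which is why implementations usually keep the table size prime and the table less than half full:

def insert_quadratic(table, key, value):
    size = len(table)
    index = hash(key) % size
    for i in range(size):                     # i-th probe is offset by i*i
        probe = (index + i * i) % size
        if table[probe] is None or table[probe][0] == key:
            table[probe] = (key, value)
            return probe
    raise RuntimeError("no free slot found")

table = [None] * 11                # a prime table size
insert_quadratic(table, "apple", 1)
insert_quadratic(table, "melon", 2)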
19. Double hashing
• Double hashing is similar to linear probing and the only
difference is the interval between successive probes. Here, the
interval between probes is computed by using two hash
functions.
• Let us say that the hashed index for an entry record is index, computed
by one hash function, and the slot at that index is already occupied. You
must start traversing in a specific probing sequence to look for an
unoccupied slot. With indexH denoting the value computed by a second hash
function, the probing sequence will be:
20. Continue…
index = (index + 1 * indexH) % hashTableSize;
index = (index + 2 * indexH) % hashTableSize;
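A corresponding Python sketch (the form of the second hash function, 1 + hash(key) % (size - 1), is a common textbook choice assumed here; the important point is only that it never returns 0):

def insert_double(table, key, value):
    size = len(table)
    index = hash(key) % size                  # first hash function
    indexH = 1 + hash(key) % (size - 1)       # second hash function; never 0
    for i in range(size):
        probe = (index + i * indexH) % size   # interval between probes is indexH
        if table[probe] is None or table[probe][0] == key:
            table[probe] = (key, value)
            return probe
    raise RuntimeError("no free slot found")

table = [None] * 11
insert_double(table, "apple", 1)
insert_double(table, "melon", 2)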
21. Applications
• Associative arrays: Hash tables are commonly used to implement many
types of in-memory tables. They are used to implement associative arrays
(arrays whose indices are arbitrary strings or other complicated objects).
• Database indexing: Hash tables may also be used as disk-based data
structures and database indices (such as in dbm).
• Caches: Hash tables can be used to implement caches, i.e. auxiliary data
tables that are used to speed up access to data that is primarily stored
in slower media.
• Object representation: Several dynamic languages, such as Perl, Python,
JavaScript, and Ruby, use hash tables to implement objects.
• Hash functions are used in various algorithms to make their computations
faster.