12. Example: T1 and T2 do not follow the two-phase locking protocol
(Figure: schedules of T1 and T2. Annotation on both transactions: read_lock or write_lock should be acquired on all resources a transaction uses before it unlocks any resource.)
13. Example: T1 and T2 form a non-serializable schedule S that does not follow two-phase locking
(Figure: schedule S of T1 and T2. Annotation: read_lock or write_lock should be acquired on all resources used before any resource is unlocked.)
14. • If we enforce two-phase locking, T1’ and T2’ can be rewritten as follows (the rule is sketched in code below):
(Figure: revised schedules of T1’ and T2’. Annotation on both transactions: all required resources are locked first; unlocking is then done at different stages.)
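As a minimal sketch (not from the slides) of the rule just stated, the class below rejects any lock request that arrives after a transaction has started unlocking. The class name Transaction2PL and the item names "X" and "Y" are invented for the example.

```python
class TwoPhaseLockingError(Exception):
    pass

class Transaction2PL:
    """Enforce the two-phase rule: all locks are acquired in a growing
    phase; after the first unlock (shrinking phase) no new lock is allowed."""

    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False          # becomes True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise TwoPhaseLockingError(
                f"{self.name}: cannot lock {item} after an unlock (violates 2PL)")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True           # transaction enters its shrinking phase
        self.held.discard(item)

# A 2PL-compliant transaction: all locks first, then the unlocks.
t_ok = Transaction2PL("T1'")
t_ok.lock("Y"); t_ok.lock("X")          # growing phase
t_ok.unlock("Y"); t_ok.unlock("X")      # shrinking phase -- allowed

# A schedule that locks again after unlocking (like the original T1) is rejected.
t_bad = Transaction2PL("T1")
t_bad.lock("Y"); t_bad.unlock("Y")
try:
    t_bad.lock("X")
except TwoPhaseLockingError as err:
    print(err)
```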
16. Dealing with deadlock and starvation
Deadlock: a condition in which two or more transactions wait indefinitely for one another to give up locks.
• Deadlock is one of the most serious problems in a DBMS, since none of the involved transactions ever finishes and each waits forever.
17. • A simple example is shown in the figure, where the two transactions T1’ and T2’ are deadlocked in a partial schedule: T1’ is in the waiting queue for X, which is locked by T2’, while T2’ is in the waiting queue for Y, which is locked by T1’. Meanwhile, neither T1’ nor T2’ nor any other transaction can access items X and Y; a small illustration follows.
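The same kind of circular wait can be reproduced with two ordinary locks acquired in opposite order. This is only an illustrative sketch; the names lock_x, lock_y, t1, and t2 are invented for it.

```python
import threading
import time

# Two resources, analogous to data items X and Y in the slide.
lock_x = threading.Lock()
lock_y = threading.Lock()

def t1():
    with lock_y:                     # T1' locks Y first
        time.sleep(0.1)              # give T2' time to lock X
        print("T1' waiting for X ...")
        with lock_x:                 # blocks forever: X is held by T2'
            print("T1' got X")       # never reached

def t2():
    with lock_x:                     # T2' locks X first
        time.sleep(0.1)              # give T1' time to lock Y
        print("T2' waiting for Y ...")
        with lock_y:                 # blocks forever: Y is held by T1'
            print("T2' got Y")       # never reached

# Each thread ends up waiting for a lock the other holds: a deadlock.
threading.Thread(target=t1, daemon=True).start()
threading.Thread(target=t2, daemon=True).start()
time.sleep(1)
print("Neither transaction made progress -> deadlock")
```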
18. Deadlock prevention protocols
• One way to prevent deadlock is to use a deadlock prevention
protocol.
• Protocol 1: It requires that every transaction lock all the items it needs in advance. If any of the items cannot be obtained, none of the items are locked; instead, the transaction waits and later tries again to lock all the items it needs.
• Protocol 2: It involves ordering all the items in the database and making sure that a transaction that needs several items locks them according to that order. This requires that the programmer (or the system) is aware of the chosen order of the items. Both protocols are sketched in code after this list.
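A rough sketch of both prevention protocols, assuming a made-up lock table keyed by item name ("X", "Y", "Z"); it is not taken from any particular DBMS.

```python
import threading

# Hypothetical lock table; the item names "X", "Y", "Z" are made up.
locks = {"X": threading.Lock(), "Y": threading.Lock(), "Z": threading.Lock()}

def lock_all_or_none(items):
    """Protocol 1 (sketch): lock every needed item up front; if any item is
    unavailable, release what was acquired so the transaction can retry later."""
    acquired = []
    for item in items:
        if locks[item].acquire(blocking=False):
            acquired.append(item)
        else:
            for got in acquired:            # give back the partial set
                locks[got].release()
            return False
    return True

def lock_in_order(items):
    """Protocol 2 (sketch): always acquire locks in one agreed global order
    (alphabetical here), which rules out circular waits."""
    for item in sorted(items):
        locks[item].acquire()

if lock_all_or_none({"X", "Y"}):            # Protocol 1: all or nothing
    print("got all locks up front")
    for item in ("X", "Y"):
        locks[item].release()

lock_in_order({"Y", "X"})                   # Protocol 2: locked as X, then Y
print("locked in the global order")
for item in ("X", "Y"):
    locks[item].release()
```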
19. • Some of these techniques use the concept of transaction
timestamp TS(T), which is a unique identifier assigned to each
transaction.
• The timestamps are typically based on the order in which
transactions are started; hence, if transaction T1 starts before
transaction T2, then TS(T1) < TS(T2).
• The older transaction (which starts first) has the smaller
timestamp value.
• Protocols based on timestamps are:
1) wait-die
2) wound-wait
20. 1) Wait-die protocol
a) If the older transaction is waiting for a resource that is locked by the younger transaction, the older transaction is allowed to wait until the resource becomes available.
b) If the older transaction holds a resource that the younger transaction is waiting for, the younger transaction is killed (aborted) and restarted later after a random delay, but with the same timestamp. (A decision sketch follows.)
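A minimal sketch of the wait-die decision, assuming smaller timestamps mean older transactions; the function name wait_die and the timestamps used are illustrative.

```python
def wait_die(requester_ts, holder_ts):
    """Wait-die sketch: an older requester (smaller timestamp) is allowed to
    wait; a younger requester 'dies', i.e. is aborted and restarted later
    with the SAME timestamp."""
    if requester_ts < holder_ts:        # requester is older than the holder
        return "wait"
    return "die: abort, restart later with the same timestamp"

print(wait_die(1, 2))   # older T1 waits for younger T2
print(wait_die(2, 1))   # younger T2 dies when T1 holds the item
```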
21. 2) Wound-wait protocol
a) If the older transaction requests a resource that is held by the younger transaction, the older transaction forces the younger one to abort and release the resource. After a short delay, the younger transaction is restarted, but with the same timestamp.
b) If the older transaction holds a resource that is requested by the younger transaction, the younger transaction is asked to wait until the older one releases it. (A matching sketch follows.)
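A matching sketch of the wound-wait decision, under the same assumption that smaller timestamps mean older transactions; the names are again illustrative.

```python
def wound_wait(requester_ts, holder_ts):
    """Wound-wait sketch: an older requester 'wounds' (aborts) the younger
    holder and takes the item; a younger requester simply waits. The wounded
    transaction restarts with the SAME timestamp."""
    if requester_ts < holder_ts:        # requester is older than the holder
        return "wound the holder: holder aborts, restarts with same timestamp"
    return "wait"

print(wound_wait(1, 2))   # older T1 wounds younger T2 and takes the item
print(wound_wait(2, 1))   # younger T2 waits for older T1
```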
22. • Another group of protocols that prevent deadlock does not require timestamps. These include:
a) no waiting (NW) protocol
b) cautious waiting protocol
a) No waiting protocol
If a transaction tries to lock a data item that is already locked by another transaction, it is immediately aborted and rolled back. This prevents deadlocks because transactions never wait for each other; they either acquire the lock or are aborted and must retry later.
23. b) Cautious waiting protocol
It allows transactions to wait, but a transaction will only wait if the transaction holding the lock is not itself already waiting for another transaction. In this way it prevents deadlock by ensuring that a transaction never waits for another transaction that is blocked. Both protocols are illustrated below.
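A minimal sketch of both decisions, with made-up function names; each returns only the action a real lock manager would take.

```python
def no_waiting(item_locked):
    """No-waiting sketch: never wait; if the item is already locked,
    abort immediately and retry the whole transaction later."""
    return "abort and retry later" if item_locked else "lock granted"

def cautious_waiting(holder_is_blocked):
    """Cautious-waiting sketch: wait only if the lock holder is not itself
    blocked waiting for another lock; otherwise abort."""
    return "abort" if holder_is_blocked else "wait"

print(no_waiting(item_locked=True))               # abort and retry later
print(cautious_waiting(holder_is_blocked=False))  # wait
print(cautious_waiting(holder_is_blocked=True))   # abort
```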
24. Deadlock detection
• A simple way to detect a state of deadlock is for the system to construct and
maintain a wait-for graph.
Wait-for graph
A wait-for graph is a directed graph used for deadlock detection in database systems and operating systems.
Its nodes are transactions (or processes), and a directed edge from Ti to Tj means that Ti is waiting for an item currently held by Tj.
25. • A deadlock occurs if there is a cycle in the wait-for graph. A cycle indicates that a
group of transactions are waiting on each other in a circular chain, and none of
them can proceed.
Example: for the deadlocked schedule above, the wait-for graph has edges T1’ → T2’ and T2’ → T1’, which form a cycle (a cycle-detection sketch follows).
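A small sketch of deadlock detection on such a wait-for graph, using a plain depth-first search; the graph contents mirror the T1’/T2’ example and are otherwise illustrative.

```python
def has_cycle(wait_for):
    """Depth-first search for a cycle in a wait-for graph given as a dict
    mapping each transaction to the transactions it is waiting for."""
    visited, on_path = set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_path:                  # back edge: cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_path.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# T1' waits for T2' and T2' waits for T1': a cycle, hence deadlock.
print(has_cycle({"T1'": ["T2'"], "T2'": ["T1'"]}))   # True
print(has_cycle({"T1'": ["T2'"], "T2'": []}))        # False
```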
34. Granularity of Data Items and Multiple
Granularity Locking
• Granularity level considerations for locking refer to the different sizes or scopes of data that a single lock can control in a database system, ranging from the entire database down to a file, a page, a record, or a single field; an example follows.
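As an illustrative sketch, the usual granularity hierarchy (database, file, page, record, field) can be modelled as an ordered list, where a lock at a coarser level implicitly covers the finer levels beneath it; the helper name covers is invented here.

```python
# Standard granularity levels, coarsest to finest.
hierarchy = ["database", "file", "page", "record", "field"]

def covers(lock_level, item_level):
    """A lock taken at a coarser level implicitly covers every item at the
    finer levels beneath it."""
    return hierarchy.index(lock_level) <= hierarchy.index(item_level)

print(covers("file", "record"))   # True: locking a file covers its records
print(covers("record", "file"))   # False: a record lock does not cover the file
```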
47. What is NoSQL? Explain the CAP theorem.
• NoSQL ("not only SQL") databases are non-relational, typically distributed data stores with dynamic schemas.
• The CAP theorem states that a distributed data store can provide at most two of the three guarantees of Consistency, Availability, and Partition tolerance at the same time.
• The main types of NoSQL databases, covered below, are key-value, document-based, and column-family (wide-column) databases. SQL and NoSQL databases compare as follows:
SQL                          NoSQL
Relational DB                Distributed DB
Defined schema               Dynamic schema
Vertically scalable          Horizontally scalable
Low availability             Highly available
Supports complex queries     Complex queries not supported
54. Key-Value based database
• Key-value databases store data as a collection of key-value pairs. Each key is unique and maps directly to a value, which can be any type of data (e.g., a string, an integer, a JSON document), as in the sketch below.
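At its simplest, a key-value store behaves like a dictionary; the sketch below uses made-up keys ("user:42", "session:9f3") and trivial put/get helpers.

```python
# A key-value store at its simplest: unique keys mapping to arbitrary values.
store = {}

def put(key, value):
    store[key] = value              # keys are unique; put overwrites

def get(key):
    return store.get(key)           # direct lookup by key

put("user:42", '{"name": "Asha", "plan": "pro"}')   # value may be a JSON string
put("session:9f3", 1712345678)                      # ...or an integer, etc.
print(get("user:42"))
```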
57. Document based databases
• A document-based database is a type of NoSQL database designed to store,
retrieve, and manage data in the form of documents.
• Document-based databases use a more flexible schema, where each document
is a self-contained unit of data.
• These documents are typically stored in formats like JSON (JavaScript Object
Notation) or BSON (Binary JSON).
• MongoDB is a popular document-based database that stores data in JSON-like documents (an example follows).
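A hedged sketch using the pymongo driver: it assumes pymongo is installed, a MongoDB server is reachable at localhost:27017, and the database and collection names ("shop", "products") are invented for the example rather than taken from the slides.

```python
from pymongo import MongoClient

# Assumes a MongoDB server at localhost:27017; "shop" and "products" are
# example names only.
client = MongoClient("mongodb://localhost:27017")
products = client["shop"]["products"]

# Each document is a self-contained unit with a flexible schema (stored as BSON).
products.insert_one({
    "name": "keyboard",
    "price": 29.99,
    "tags": ["usb", "mechanical"],                     # array field
    "stock": {"warehouse_a": 12, "warehouse_b": 3},    # nested document
})

print(products.find_one({"name": "keyboard"}))
```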
61. Column-family database/wide column store database
• Imagine a traditional spreadsheet. Data is organized in rows and columns. A
relational database works similarly, storing data in rows with fixed columns.
• Column-based or wide-column databases, by contrast, store data column by column instead of row by row.
• This approach allows for high scalability and efficient read and write operations, particularly for large-scale and analytical workloads; a short illustration follows.
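A conceptual sketch (in plain Python, not a real wide-column engine) contrasting row-oriented and column-oriented layouts of the same made-up data; an analytical query over one column only needs to scan that column.

```python
# The same made-up data, stored row by row and column by column.
rows = [
    {"id": 1, "city": "Pune",   "temp": 31},
    {"id": 2, "city": "Nagpur", "temp": 35},
    {"id": 3, "city": "Mumbai", "temp": 29},
]

# Column layout: all values of one column are kept together, so an
# analytical query such as "average temp" scans only the 'temp' column.
columns = {
    "id":   [r["id"] for r in rows],
    "city": [r["city"] for r in rows],
    "temp": [r["temp"] for r in rows],
}

print(sum(columns["temp"]) / len(columns["temp"]))
```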