NOSQL: THEORY, IMPLEMENTATIONS, AN INTRODUCTION
FIRAT ATAGUN
firat@yahoo-inc.com
http://firatatagun.com
NoSQL
 What does it mean? Not Only SQL.
Use Cases
 Massive write performance.
 Fast key-value lookups.
 Flexible schema and data types.
 No single point of failure.
 Fast prototyping and development.
 Out-of-the-box scalability.
 Easy maintenance.
Motives Behind NoSQL
 Big data.
 Scalability.
 Data format.
 Manageability.
Big Data
 Collect.
 Store.
 Organize.
 Analyze.
 Share.

Data growth outruns the ability to manage it, so we need scalable solutions.
Scalability
 Scale up: vertical scalability.
 Increasing server capacity.
 Adding more CPU, RAM.
 Managing is hard.
 Possible downtime.
Scalability
 Scale out: horizontal scalability.
 Adding servers to an existing system with little effort, aka elastic scalability.
 Shared nothing.
 Use of commodity/cheap hardware.
 Heterogeneous systems.
 Controlled concurrency (avoid locks).
 Service-oriented architecture. Local states.
 Bugs, hardware errors: things fail all the time.
 It should become cheaper. Cost efficiency.
 Decentralized to reduce bottlenecks.
 Avoid single points of failure.
 Asynchrony.
 Symmetry: all nodes should be symmetric, so you don't have to know what each one is doing.
What is Wrong With RDBMS?
 Nothing. One size fits all? Not really.
 Impedance mismatch: object-relational mapping doesn't work quite well.
 Rigid schema design.
 Harder to scale.
 Replication.
 Joins across multiple nodes? Hard.
 How does an RDBMS handle data growth? Hard.
 Need for a DBA.
In its favor:
 Many programmers are already familiar with it.
 Transactions and ACID make development easy.
 Lots of tools to use.
ACID Semantics
 Atomicity: all or nothing.
 Consistency: a consistent state of data and transactions.
 Isolation: transactions are isolated from each other.
 Durability: once a transaction is committed, its state is durable.

Any data store can achieve atomicity, isolation, and durability, but do you always need consistency? No. By giving up ACID properties, one can achieve higher performance and scalability.
Enter CAP Theorem
 Also known as Brewer's Theorem, after Prof. Eric Brewer of UC Berkeley, who presented it in his 2000 PODC keynote.
 "Of three properties of a shared data system: data consistency, system availability and tolerance to network partitions, only two can be achieved at any given moment."
 Proven by Seth Gilbert and Nancy Lynch at MIT.
 http://www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf
CAP Semantics
 Consistency: clients should read the same data. There are many levels of consistency.
 Strict consistency – RDBMS.
 Tunable consistency – Cassandra.
 Eventual consistency – Amazon Dynamo.
 Availability: data is available.
 Partition Tolerance: the system keeps working even when data is partitioned across network segments due to network failures.
A Simple Proof
 No partition: consistent and available.
 (The app reads the same data from both nodes A and B.)
A Simple Proof
 Available and partitioned: not consistent.
 (Node A returns the data; node B, cut off by the partition, returns old data.)
A Simple Proof
 Consistent and partitioned: not available, waiting…
 (Node A has new data; B must wait for it before answering, so the app blocks.)
BASE, an ACID Alternative
Almost the opposite of ACID.
 Basically Available: nodes in a distributed environment can go down, but the whole system shouldn't be affected.
 Soft State (scalable): the state of the system and its data changes over time.
 Eventual Consistency: given enough time, data will be consistent across the distributed system.
A Clash of Cultures
ACID:
• Strong consistency.
• Less availability.
• Pessimistic concurrency.
• Complex.
BASE:
• Availability is the most important thing; willing to sacrifice consistency for it (CAP).
• Weaker consistency (eventual).
• Best effort.
• Simple and fast.
• Optimistic.
Distributed Transactions
 Two-phase commit.
 See "Starbucks Does Not Use Two-Phase Commit" by Gregor Hohpe.
 Possible failures:
 Network errors.
 Node errors.
 Database errors.
 Flow: the coordinator asks every node to prepare; the nodes acknowledge; the coordinator then sends commit or rollback, and the nodes complete the operation and release their locks.
 Problems:
 Locking the entire cluster if one node is down.
 Possible to implement timeouts.
 Possible to use a quorum.
 Quorum: in a distributed environment, if there is a partition, the nodes vote to commit or rollback.
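The prepare/commit/rollback flow above can be sketched as a toy coordinator. This is a minimal simulation, not a real protocol implementation; the `Participant` class and node names are invented for illustration:

```python
# Toy two-phase commit: the coordinator asks every participant to
# prepare (phase 1); only if all vote "yes" does it commit (phase 2).
class Participant:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = "idle"

    def prepare(self):
        # A node or network error surfaces here as a "no" vote.
        self.state = "prepared" if self.healthy else "failed"
        return self.healthy

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"


def two_phase_commit(participants):
    # Phase 1: every participant must vote yes.
    if all(p.prepare() for p in participants):
        # Phase 2: commit everywhere; locks are released afterwards.
        for p in participants:
            p.commit()
        return "commit"
    # Any "no" vote aborts the whole transaction.
    for p in participants:
        p.rollback()
    return "rollback"


nodes = [Participant("a"), Participant("b"), Participant("c", healthy=False)]
print(two_phase_commit(nodes))  # prints "rollback": one failed prepare aborts all
```

Note the cluster-locking problem from the slide: if a participant never answers prepare, the others sit in the "prepared" state holding locks until a timeout or quorum decision intervenes.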
Consistent Hashing
 Solves the partitioning problem.
 Naive modulo hashing, as used by simple Memcached clients:

servers = [s1, s2, s3, s4, s5]
serverToSendData = servers[hash(data) % servers.length]

A New Hope
 Continuum approach: virtual nodes on a cycle.
 Hash both objects and caches.
 Easy replication.
 Eventually consistent.
 What happens if nodes fail? How do you add nodes?
 http://www.akamai.com/dl/technical_publications/ConsistenHashingandRandomTreesDistributedCachingprotocolsforrelievingHotSpotsontheworldwideweb.pdf
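The modulo scheme above remaps almost every key whenever a server is added or removed. The continuum approach fixes this: hash both servers (as virtual nodes) and keys onto a ring, and walk clockwise from the key's hash to the first server point. A minimal sketch, with illustrative server names and an MD5-based hash helper:

```python
import bisect
import hashlib


def h(key):
    # Map any string onto a large integer ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing:
    def __init__(self, servers, vnodes=50):
        # Each server contributes many virtual nodes for an even spread.
        self.ring = sorted(
            (h(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self.points = [p for p, _ in self.ring]

    def server_for(self, key):
        # First ring point clockwise from the key's hash (wrap at the end).
        i = bisect.bisect(self.points, h(key)) % len(self.ring)
        return self.ring[i][1]


ring = HashRing(["s1", "s2", "s3", "s4", "s5"])
print(ring.server_for("user:42"))  # deterministic: same server every time
```

Removing a server only moves the keys that were owned by its points to the next point clockwise; every other key keeps its server. Under modulo hashing, changing the server count moves nearly all keys.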
Concurrency Models
 Optimistic concurrency.
 Pessimistic concurrency.
 MVCC.
Vector Clocks
 Used for conflict detection of data.
 Timestamp-based resolution of conflicts is not enough.
 Timeline: Time 1 – a value is written. Time 2 – it is replicated. Time 3 – one replica is updated. Time 4 – the other replica is updated concurrently. Time 5 – the replicas exchange data and the conflict is detected.
Vector Clocks
 Node A writes Document v1 with clock [A,1], then updates it to v2 with clock [A,2].
 The document is replicated to B and C, and each applies its own update, producing clocks [A,2],[B,1] on B and [A,2],[C,1] on C.
 Neither clock descends from the other, so the conflict is detected.
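The conflict in the diagram can be detected mechanically: one clock descends from another only if every per-node counter is at least as large as its counterpart. A small sketch using dict-based clocks (node names follow the slide):

```python
def descends(a, b):
    # True if clock a has seen everything clock b has.
    return all(a.get(node, 0) >= n for node, n in b.items())


def conflict(a, b):
    # Neither history contains the other: concurrent updates.
    return not descends(a, b) and not descends(b, a)


v2_b = {"A": 2, "B": 1}  # replica B's update on top of [A,2]
v2_c = {"A": 2, "C": 1}  # replica C's concurrent update
print(conflict(v2_b, v2_c))   # True: the conflict is detected
print(conflict({"A": 2}, {"A": 1}))  # False: [A,2] simply supersedes [A,1]
```

A plain timestamp cannot make this distinction, which is why the slide says timestamp-based resolution is not enough.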
Read Repair
 The client issues GET(K, Q=2): two replicas answer with Data.v2, while a third still holds the stale Data.v1.
 The quorum read returns Data.v2, and the stale replica is repaired with Update K = Data.v2.
Gossip Protocol & Hinted Handoffs
 The most preferred communication protocol in a distributed environment is the gossip protocol.
• All the nodes talk to each other pairwise.
• There is no global state.
• No single coordinator.
• If one node goes down and there is a quorum, that node's load is shared among the others.
• Self-managing system.
• If a new node joins, the load is also redistributed.
 Hinted handoff: requests meant for a failed node F are handled by a node that takes F's load, say C, along with a hint that it took requests intended for F. When F becomes available again, it retrieves this information from C. A self-healing property.
Data Models
 Key/value pairs.
 Tuples (rows).
 Documents.
 Columns.
 Objects.
 Graphs.

There are corresponding data stores.
Complexity (chart: the data models above, ordered by increasing complexity)
Key-Value Stores
 Memcached – key-value store.
 Membase – Memcached with persistence and improved consistent hashing.
 AppFabric Cache – multi-region cache.
 Redis – data structure server.
 Riak – based on Amazon's Dynamo.
 Project Voldemort – eventually consistent key-value store with auto-scaling.
Memcached
 Very easy to set up and use.
 Consistent hashing.
 Scales very well.
 In-memory caching, no persistence.
 LRU eviction policy.
 O(1) set/get/delete.
 Atomic set/get/delete operations.
 No iterators, or very difficult.
Membase
 Easy to manage: monitoring and management via a web console.
 Consistency and availability.
 Dynamic/linear scalability: add a node, hit "join cluster", and rebalance.
 Low latency, high throughput.
 Compatible with current Memcached clients.
 Data durability: persisted to disk asynchronously.
 Rebalancing (peer-to-peer replication).
 Failover (master/slave).
 vBuckets are used for consistent hashing.
 O(1) set/get/delete.
Redis
 Distributed data structure server.
 Consistent hashing at the client.
 Non-blocking I/O, single threaded.
 Values are binary-safe strings: byte strings.
 Strings: key/value pairs, set/get. O(1); many string operations.
 Lists: lpush, lpop, rpush, rpop. You can use a list as a stack or a queue. O(1). Publish/subscribe is available.
 Sets: collections of unique elements; add, pop, union, intersection, and other set operations.
 Sorted sets: unique elements sorted by score. O(log n). Supports range operations.
 Hashes: multiple key/value pairs under one key:
HMSET user:1 username foo password bar age 30
HGET user:1 age
Microsoft AppFabric
 Add a node to the cluster easily: elastic scalability.
 Namespaces to organize different caches.
 LRU eviction policy.
 Timeout/time-to-live defaults to 10 minutes.
 No persistence.
 O(1) set/get/delete.
 Optimistic and pessimistic concurrency.
 Supports tagging.
Document Stores
 Schema free.
 Usually a JSON-like interchange model.
 Query model: JavaScript or custom.
 Aggregations: Map/Reduce.
 Indexes are done via B-trees.
Mongodb
 Data types: bool, int, double, string, object (BSON), oid, array, null, date.
 Databases and collections are created automatically.
 Lots of language drivers.
 Capped collections are fixed-size collections: buffers, very fast, FIFO, good for logs. No indexes.
 Object ids are generated by the client: 12 bytes of packed data (a 4-byte time, a 3-byte machine id, a 2-byte pid, and a 3-byte counter).
 Possible to refer to other documents in different collections, but more efficient to embed documents.
 Replication is very easy to set up. You can read from the replicas.
Mongodb
 Connection pooling is done for you. Sweet.
 Supports aggregation: Map/Reduce with JavaScript.
 You have indexes (B-trees). Ids are always indexed.
 Updates are atomic. Low-contention locks.
 Querying mongo is done with a document:
 Lazy, returns a cursor.
 Reducible to SQL: select, insert, update, limit, sort, etc.
 Several operators: $ne, $and, $or, $lt, $gt, $inc, and so on.
 There is more: upsert (either inserts or updates).
 The Repository pattern makes development very easy.
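Query documents like the ones above can be evaluated by a tiny matcher. This is a toy re-implementation of a few operators to show the query-by-document semantics, not the pymongo API; the sample user data is invented:

```python
def matches(doc, query):
    # Supports plain equality plus a few $-operators, Mongo-style.
    for field, cond in query.items():
        value = doc.get(field)
        if isinstance(cond, dict):
            for op, arg in cond.items():
                ok = {
                    "$ne": value != arg,
                    "$lt": value is not None and value < arg,
                    "$gt": value is not None and value > arg,
                }[op]
                if not ok:
                    return False
        elif value != cond:
            return False
    return True


users = [{"name": "foo", "age": 30}, {"name": "bar", "age": 17}]
# Same shape as a Mongo query document: {"age": {"$gt": 18}}.
adults = [u for u in users if matches(u, {"age": {"$gt": 18}})]
print([u["name"] for u in adults])  # ['foo']
```

The point of the slide's "querying with a document" is visible here: the query itself is just data, which is why it composes so naturally with drivers in any language.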
Mongodb - Sharding
 Config servers: keep the mapping.
 Mongos: routing servers.
 Mongod: master-slave replicas.
Couchdb
 Availability and partition tolerance.
 Views are used to query. Map/Reduce.
 MVCC – multiple concurrent versions. No locks.
 A little overhead with this approach due to garbage collection.
 Conflict resolution.
 Very simple, REST based. Schema free.
 Shared nothing; seamless peer-based bi-directional replication.
 Auto compaction (manual with Mongodb).
 Uses B-trees.
 Documents and indexes are kept in memory and flushed to disk periodically.
 Documents have states; in case of a failure, recovery can continue from the state the documents were left in.
 No built-in auto-sharding; there are open source projects.
 You can't define your own indexes.
Object Stores
 Objectivity.
 Db4o.
Objectivity
 No need for an ORM; closer to OOP.
 Complex data modeling.
 Schema evolution.
 Scalable collections: List, Set, Map.
 Object relations: bi-directional relations.
 ACID properties.
 Blazingly fast; uses paging.
 Supports replication and clustering.
Column Stores
Row oriented:
Id | Username | Email        | Department
1  | John     | john@foo.com | Sales
2  | Mary     | mary@foo.com | Marketing
3  | Yoda     | yoda@foo.com | IT

Column oriented (the same data, stored column by column):
Id:         1, 2, 3
Username:   John, Mary, Yoda
Email:      john@foo.com, mary@foo.com, yoda@foo.com
Department: Sales, Marketing, IT
Cassandra
 Tunable consistency.
 Decentralized.
 Writes are faster than reads.
 No single point of failure.
 Incremental scalability.
 Uses consistent hashing (logical partitioning) when clustered.
 Hinted handoffs.
 Peer-to-peer routing (ring).
 Thrift API.
 Multi-data-center support.
Cassandra at Netflix
 http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
Graph Stores
 Based on graph theory.
 Scale vertically; no clustering.
 You can use graph algorithms easily.
Neo4J
 Nodes and relationships.
 Traversals.
 HTTP/REST.
 ACID; has transactions.
 Web admin.
 Limited language support.
Which one to use?
 Key-value stores: processing a constant stream of small reads and writes.
 Document databases: natural data modeling. Programmer friendly. Rapid development. Web friendly, CRUD.
 Columnar: handles size well. Massive write loads. High availability. Multiple data centers. MapReduce.
 Data structure server: quirky stuff.
 OODBMS: complex object models.
 RDBMS: OLTP. SQL. Transactions. Relations.
 Graph: graph algorithms and relations.

Want more ideas?
http://highscalability.com/blog/2011/6/20/35-use-cases-for-choosing-your-nextnosql-database.html

Thank you.


Editor's Notes

 #12: N: number of nodes with a replica of the data. W: number of nodes that must acknowledge an update. R: minimum number of nodes needed for a successful read. W + R > N: strong consistency. W + R <= N: weak consistency.
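The rule in the note can be checked directly: reads are strongly consistent exactly when the read and write quorums must overlap in at least one replica. A one-line predicate (the example N/W/R values are illustrative):

```python
def strongly_consistent(n, w, r):
    # Read quorum and write quorum overlap in at least one replica.
    return w + r > n


print(strongly_consistent(3, 2, 2))  # True: quorum reads see the latest quorum write
print(strongly_consistent(3, 1, 1))  # False: a read may hit only stale replicas
```

Dynamo-style stores expose N, W, and R as tunables, which is how "tunable consistency" in the Cassandra slide trades consistency against latency and availability.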