Riak Tutorial (Øredev)

Editor's Notes

  • #10–15: Think of it like a big hash table. (See the key/value sketch after these notes.)
  • #16–18: X = throughput, compute power for MapReduce, storage, lower latency.
  • #33–37: Consistent hashing means: 1) a large, fixed-size key space; 2) no rehashing of keys, since a key always hashes the same way. (See the consistent-hashing sketch after these notes.)
  • #103–129: 1) The client requests a key. 2) A get handler (FSM) starts up to service the request. 3) It hashes the key to its owner partitions (N=3). 4) It sends a similar “get” request to each of those partitions. 5) It waits for R replies that concur (R=2). 6) It resolves the object and replies to the client. 7) The third reply may come back at any time, but the FSM replies as soon as the quorum is satisfied or violated. (See the quorum-read sketch after these notes.)
  • #159: Make sure to talk about last-write-wins (LWW) and commit hooks; tell them to ignore the vclock business.
  • #181: “Quorums”? When I say “quora” I mean the constraints (or lack thereof) your application puts on request consistency.
  • #182–185: Remember that requests contact all participating partitions/vnodes. No computer system is 100% reliable, so there will be times when increased latency or hardware failure makes a node unavailable; by unavailable, I mean requests time out, the network partitions, or there is an actual physical outage. FT = fault tolerance, C = consistency. Strong consistency (as opposed to strict) means that the participants in each read and write quorum overlap. The typical example is N=3, R=2, W=2: in every successful read request, at least one of the read partitions is one that accepted the latest write. (See the quorum-overlap check after these notes.)
  • #186–189: However, writes are a little more complicated to track than reads. When there is a detectable node outage or partition, writes are sent to fallbacks (hinted handoff), which makes Riak HIGHLY write-available. There is also an implied R quorum, because the internal Erlang client has to fetch the object and its vclock in order to update it.
  • #190–192: Why don’t we outright reclaim the space? Ordering is hard to determine since deletes require no vclock, and we prefer not to lose data when there is contention.
  • #270: This is probably one of the easiest MapReduce queries/jobs you can submit: it simply returns the values of all the keys in the bucket, including their bucket/key/vclock and metadata. (See the MapReduce sketch after these notes.)
  • #271: Instead of specifying the function inline, you can also store it under a bucket/key, and have Riak retrieve and execute it automatically.
  • #272–273: A query that makes use of the “arg” in the map phase, named functions, and a reduce phase. Finally, here is how you can submit all of these queries: use @- to signify that your data will come on the next line and be terminated by Ctrl-D.
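A minimal sketch of the “big hash table” idea from slides 10–15, over Riak's classic HTTP interface. The host, port, and the /riak/&lt;bucket&gt;/&lt;key&gt; path are assumptions about a local development node, and the `requests` library simply stands in for whichever client you prefer.

```python
# Sketch: Riak as a big hash table over HTTP (assumes a local node on port 8098
# exposing the classic /riak/<bucket>/<key> interface).
import requests

BASE = "http://127.0.0.1:8098/riak"   # assumed local Riak node

def put(bucket, key, value, content_type="application/json"):
    """Store a value under bucket/key, like hash[bucket][key] = value."""
    resp = requests.put(f"{BASE}/{bucket}/{key}", data=value,
                        headers={"Content-Type": content_type})
    resp.raise_for_status()

def get(bucket, key):
    """Fetch the value stored under bucket/key, or None if it is absent."""
    resp = requests.get(f"{BASE}/{bucket}/{key}")
    if resp.status_code == 404:
        return None
    resp.raise_for_status()
    return resp.text

put("people", "sean", '{"name": "Sean", "city": "Malmo"}')
print(get("people", "sean"))
```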
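For slides 33–37, a toy illustration of consistent hashing with a large, fixed-size key space and no rehashing. The 160-bit SHA-1 ring and the 64-partition split mirror Riak's defaults, but this is only an illustration, not Riak's implementation.

```python
# Sketch: consistent hashing onto a fixed 160-bit ring split into partitions.
# A key always hashes the same way; adding nodes only changes which node
# claims a partition, never where a key lands on the ring.
import hashlib

RING_SIZE = 2 ** 160          # fixed-size key space (SHA-1 output range)
NUM_PARTITIONS = 64           # default partition count

def key_to_partition(bucket, key):
    """Hash <bucket>/<key> onto the ring and return its partition index."""
    digest = hashlib.sha1(f"{bucket}/{key}".encode()).digest()
    position = int.from_bytes(digest, "big")           # point on the ring
    return position // (RING_SIZE // NUM_PARTITIONS)   # partition that owns it

def preference_list(bucket, key, n=3):
    """The N partitions that replicate this key: the owner plus the next N-1."""
    first = key_to_partition(bucket, key)
    return [(first + i) % NUM_PARTITIONS for i in range(n)]

print(preference_list("people", "sean"))   # e.g. [41, 42, 43]
```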
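Slides 103–129 describe the get FSM: ask all N replicas, reply to the client as soon as R answers agree. Below is a small simulation of just that waiting logic, with made-up (vclock, value) reply tuples; the real FSM also performs vclock-based resolution and read repair, which are omitted here.

```python
# Sketch: wait for R concurring replies out of N, then answer the client.
# Replies are (vclock, value) pairs; "concur" here just means equal vclocks.
from collections import Counter

def quorum_read(replies, r=2):
    """Consume replica replies in arrival order; return a value once R agree.

    `replies` is an iterable of (vclock, value) tuples, one per replica (N in
    total). Later replies are ignored once the quorum is met, just as the FSM
    replies to the client before the last vnode has answered.
    """
    seen = Counter()
    values = {}
    for vclock, value in replies:
        seen[vclock] += 1
        values[vclock] = value
        if seen[vclock] >= r:          # R replies concur: quorum satisfied
            return values[vclock]
    raise TimeoutError("quorum not satisfied")   # fewer than R replicas agreed

# Two of the three replicas agree on vclock "b"; the stale reply is outvoted.
print(quorum_read([("b", "new"), ("a", "old"), ("b", "new")], r=2))
```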
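For slides 182–185, the “strong (not strict) consistency” claim falls out of simple arithmetic: read and write quorums overlap whenever R + W > N. A tiny helper makes the N=3, R=2, W=2 example concrete.

```python
# Sketch: when do read and write quorums overlap? Whenever R + W > N, every
# successful read includes at least one replica that accepted the latest write.
def quorums_overlap(n, r, w):
    return r + w > n

for n, r, w in [(3, 2, 2), (3, 1, 1), (3, 1, 3)]:
    print(f"N={n} R={r} W={w}: overlap={quorums_overlap(n, r, w)}")
# N=3 R=2 W=2: overlap=True   (the typical "strongly consistent" setting)
# N=3 R=1 W=1: overlap=False  (fast, but a read may miss the newest write)
# N=3 R=1 W=3: overlap=True   (write to all replicas, read from any one)
```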
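Slides 270–273 walk through MapReduce jobs. A hedged sketch of submitting two such jobs over HTTP follows: the /mapred endpoint, the JSON job shape, and the built-in Riak.mapValuesJson / Riak.reduceSum functions match the Riak HTTP API of that era as I recall it, but treat the exact names as assumptions and check your node's docs; the "goog" bucket and its "High" field are placeholders.

```python
# Sketch: POST MapReduce jobs to a Riak node's /mapred endpoint.
# Equivalent in spirit to: curl -X POST http://127.0.0.1:8098/mapred -d @- ... Ctrl-D
import json
import requests

MAPRED_URL = "http://127.0.0.1:8098/mapred"   # assumed local node

# "Easiest job you can submit": map over every object in the bucket and
# return its JSON-decoded value.
list_values_job = {
    "inputs": "goog",                                    # whole-bucket input
    "query": [
        {"map": {"language": "javascript",
                 "name": "Riak.mapValuesJson",           # built-in map function
                 "keep": True}}
    ],
}

# A job that uses an "arg" in the map phase plus a reduce phase: keep only
# rows whose High field exceeds the arg, then count them.
filter_and_count_job = {
    "inputs": "goog",
    "query": [
        {"map": {"language": "javascript", "arg": 600.0,
                 "source": "function(v, keyData, arg) {"
                           "  var row = Riak.mapValuesJson(v)[0];"
                           "  return row.High > arg ? [1] : [];"
                           "}"}},
        {"reduce": {"language": "javascript",
                    "name": "Riak.reduceSum", "keep": True}}
    ],
}

for job in (list_values_job, filter_and_count_job):
    resp = requests.post(MAPRED_URL, data=json.dumps(job),
                         headers={"Content-Type": "application/json"})
    print(resp.json())
```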