Introduction to SolrCloud
Timothy Potter, LucidWorks
Solr Exchange: Introduction to SolrCloud
My SolrCloud Experience
• Solr Committer; currently working on hardening SolrCloud
• Operated a 36-node cluster in AWS for Dachis Group (1.5 years ago, 18 shards, ~900M docs)
• Built a Fabric/boto framework for deploying and managing a cluster in the cloud
– https://github.com/LucidWorks/solr-scale-tk
• Co-author of Solr In Action; wrote chapter 13, which covers SolrCloud
What is SolrCloud?
A subset of optional features in Solr that enable and simplify horizontal scaling of a search index using sharding and replication.
Goals: performance, scalability, high-availability, simplicity, and elasticity
Terminology
• ZooKeeper: Distributed coordination service that provides centralized configuration, cluster state management, and leader election
• Node: JVM process bound to a specific port on a machine; hosts the Solr web application
• Collection: Search index distributed across multiple nodes; each collection has a name, shard count, and replication factor
• Replication Factor: Number of copies of a document in a collection
• Shard: Logical slice of a collection; each shard has a name, hash range, leader, and replication factor. Documents are assigned to one and only one shard per collection using a hash-based document routing strategy.
• Replica: Solr index that hosts a copy of a shard in a collection; behind the scenes, each replica is implemented as a Solr core
• Leader: Replica in a shard that assumes special duties needed to support distributed indexing in Solr; each shard has one and only one leader at any time, and leaders are elected using ZooKeeper
SolrCloud High-level Architecture
[Architecture diagram: two servers, each running two JVMs (J2SE v. 7) with Jetty hosting the Solr web app — node 1 (port 8984) holds the shard1 leader, node 2 (port 8985) holds the shard2 leader, node 3 (port 8984) holds a shard1 replica, and node 4 (port 8985) holds a shard2 replica. A three-node ZooKeeper ensemble provides leader election and centralized configuration management. Millions of users reach the cluster's REST web services (XML/JSON over HTTP) through a load balancer; the collection holds millions of documents.]
Collection == Distributed Index
A collection is a distributed index defined by:
– named configuration stored in ZooKeeper
– number of shards: documents are distributed across N partitions of the index
– document routing strategy: how documents get assigned to shards
– replication factor: how many copies of each document in the collection
Collections API:
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=logstash4solr&replicationFactor=2&numShards=2&collection.configName=logs"
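The same CREATE call can be issued from SolrJ, Solr's Java client. A minimal sketch, assuming a node at http://localhost:8983/solr (the class name is illustrative):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CreateCollection {
  public static void main(String[] args) throws Exception {
    // Any node can receive Collections API requests; the Overseer
    // carries out the actual collection creation
    SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "CREATE");
    params.set("name", "logstash4solr");
    params.set("numShards", 2);
    params.set("replicationFactor", 2);
    params.set("collection.configName", "logs");

    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections"); // not a query; route to the Collections API
    solr.request(request);
    solr.shutdown();
  }
}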
Demo
1. Start up a bootstrap node with embedded ZooKeeper
2. Add another shard
3. Add some replicas
4. Index some docs
5. Distributed queries
6. Knock over a node, see the cluster stay operational
Sharding
• Collection has a fixed number of shards
– existing shards can be split
• When to shard?
– Large number of docs
– Large document sizes
– Parallelization during indexing and queries
– Data partitioning (custom hashing)
Document Routing
• Each shard covers a hash-range
• Default: Hash the document ID into a 32-bit integer and map it to a range
– leads to (roughly) balanced shards
• Custom-hashing (example in a few slides)
• Tri-level: app!user!doc
• Implicit: no hash-range set for shards
Replication
• Why replicate?
– High-availability
– Load balancing
• How does it work in SolrCloud?
– Near-real-time, not master-slave
– Leader forwards to replicas in parallel, waits for response
– Error handling during indexing is tricky
Distributed Indexing
[Diagram: a CloudSolrServer "smart client" asks ZooKeeper for the URLs of the current shard leaders, then sends documents directly to them — shard1 (range 80000000-ffffffff) leader on node 1, shard2 (range 0-7fffffff) leader on node 2; each leader persists updates to its transaction log (tlog) and forwards them to the replicas on the other node.]
1. Get cluster state from ZK
2. Route document directly to leader (hash on doc ID)
3. Persist document on durable storage (tlog)
4. Forward to healthy replicas
5. Acknowledge write success to client
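A minimal SolrJ sketch of this indexing flow; the ZooKeeper address, collection name, and field values are illustrative:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IndexToSolrCloud {
  public static void main(String[] args) throws Exception {
    // CloudSolrServer reads cluster state from ZooKeeper and routes
    // each document directly to the correct shard leader
    CloudSolrServer solr = new CloudSolrServer("localhost:2181");
    solr.setDefaultCollection("logstash4solr");
    solr.connect();

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    doc.addField("level_s", "ERROR");
    solr.add(doc);  // hashed on id, sent to the shard leader, logged to the tlog
    solr.commit();  // make the document visible to searchers
    solr.shutdown();
  }
}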
Shard Leader
• Additional responsibilities during indexing only! Not a master node
• Leader is a replica (handles queries)
• Accepts update requests for the shard
• Increments the _version_ on the new or updated doc
• Sends updates (in parallel) to all replicas
Distributed Queries
[Diagram: a CloudSolrServer client sends q=*:* to a query controller node — any node can play this role, and a plain load balancer works too. The controller gets the URLs of all live nodes from ZooKeeper, queries one replica per shard (shard1 leader on node 1, shard2 leader on node 2, replicas on the opposite nodes), then fetches the fields for the final page of results.]
1. Query client can be ZK-aware or just query through a load balancer
2. Client can send the query to any node in the cluster
3. Controller node distributes the query to a replica of each shard to identify documents matching the query
4. Controller node sorts the results from step 3 and issues a second query for all fields for a page of results
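The query side in SolrJ looks like this; a sketch under the same assumptions as the indexing example:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QuerySolrCloud {
  public static void main(String[] args) throws Exception {
    // CloudSolrServer load-balances requests across the live nodes
    // it finds in ZooKeeper; any node can act as the query controller
    CloudSolrServer solr = new CloudSolrServer("localhost:2181");
    solr.setDefaultCollection("logstash4solr");

    SolrQuery query = new SolrQuery("*:*");
    query.setRows(10);  // one page of results
    QueryResponse rsp = solr.query(query);
    System.out.println("Found " + rsp.getResults().getNumFound() + " docs");
    solr.shutdown();
  }
}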
Scalability / Stability Highlights
• All nodes in the cluster perform indexing and execute queries; no master node
• Distributed indexing: no SPoF, high throughput via direct updates to leaders, automated failover to a new leader
• Distributed queries: add replicas to scale out QPS; parallelize complex query computations; fault tolerance
• Indexing / queries continue so long as there is 1 healthy replica per shard
SolrCloud and CAP
• A distributed system should be: Consistent, Available, and Partition tolerant
– CAP says pick 2 of the 3! (slightly more nuanced than that in reality)
• SolrCloud favors consistency over write-availability (CP)
– All replicas in a shard have the same data
– Active replica sets concept (writes accepted so long as a shard has at least one active replica available)
• No tools to detect or fix consistency issues in Solr
– Reads go to one replica; no concept of quorum
– Writes must fail if consistency cannot be guaranteed (SOLR-5468)
ZooKeeper
• Is a very good thing ... clusters are a zoo!
• Centralized configuration management
• Cluster state management
• Leader election (shard leader and overseer)
• Overseer distributed work queue
• Live Nodes
– Ephemeral znodes used to signal a server is gone
• Needs 3 nodes for quorum in production
ZooKeeper: Centralized Configuration
• Store config files in ZooKeeper
• Solr nodes pull config during core initialization
• Config sets can be “shared” across collections
• Changes are uploaded to ZK, and then collections should be reloaded
ZooKeeper: State management
• Keep track of live nodes in the /live_nodes znode
– ephemeral nodes
– ZooKeeper client timeout
• Collection metadata and replica state in /clusterstate.json
– Every core has watchers for /live_nodes and /clusterstate.json
• Leader election
– ZooKeeper sequence number on ephemeral znodes
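To make this concrete, a small sketch that inspects the same znodes with the stock ZooKeeper client; the connection string is illustrative, and a real watcher would be registered instead of the nulls shown here:

import org.apache.zookeeper.ZooKeeper;

public class InspectClusterState {
  public static void main(String[] args) throws Exception {
    // 10-second session timeout, no watcher
    ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, null);

    // Collection metadata and replica state
    byte[] state = zk.getData("/clusterstate.json", false, null);
    System.out.println(new String(state, "UTF-8"));

    // Ephemeral markers for live Solr nodes
    for (String node : zk.getChildren("/live_nodes", false)) {
      System.out.println("live node: " + node);
    }
    zk.close();
  }
}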
Overseer
• What does it do?
– Persists collection state change events to ZooKeeper
– Controller for Collection API commands
– Ordered updates
– One per cluster (for all collections); elected using leader election
• How does it work?
– Asynchronous (pub/sub messaging)
– ZooKeeper as distributed queue recipe
– Automated failover to a healthy node
– Can be assigned to a dedicated node (SOLR-5476)
Collection Aliases
[Diagram: indexing clients 1..N send update requests to the logstash4solr-write collection alias, and search clients 1..N send query requests to the logstash4solr-read alias; both aliases initially point at the logstash4solr collection.]
Queries continue to execute against the logstash4solr collection while the new one is building. Use the Collections API to create a new collection named logstash4solr2 and update the logstash4solr-write alias to direct writes to the new collection.
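A hedged SolrJ sketch of that alias flip; CREATEALIAS both creates and re-points an alias, and the host and class name are illustrative:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class RepointWriteAlias {
  public static void main(String[] args) throws Exception {
    SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

    // Point the write alias at the new collection; logstash4solr-read
    // keeps serving queries from the old collection in the meantime
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "CREATEALIAS");
    params.set("name", "logstash4solr-write");
    params.set("collections", "logstash4solr2");

    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    solr.request(request);
    solr.shutdown();
  }
}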
Custom Hashing
{
  "id" : "httpd!2",
  "level_s" : "ERROR",
  "lang_s" : "en",
  ...
}
Hash: shardKey!docID
[Diagram: the shard1 leader owns range 80000000-ffffffff and the shard2 leader owns range 0-7fffffff; the shard key portion of the ID determines which range the document hashes into.]
• Route documents to specific shards based on a shard key component in the document ID
– Send all log messages from the same system to the same shard
• Direct queries to specific shards: q=...&_route_=httpd
Custom Hashing Highlights
• Co-locate documents having a common property in the same shard
– e.g. docs having IDs httpd!21 and httpd!33 will be in the same shard
• Scale up the replicas for specific shards to address high query and/or indexing volume from specific apps
• Not as much control over the distribution of keys
– httpd, mysql, and collectd all in same shard
• Can split unbalanced shards when using custom hashing
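A sketch of both halves of this technique in SolrJ; the IDs and field names follow the logstash4solr example and are otherwise illustrative:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CustomHashingExample {
  public static void main(String[] args) throws Exception {
    CloudSolrServer solr = new CloudSolrServer("localhost:2181");
    solr.setDefaultCollection("logstash4solr");

    // shardKey!docID: both documents hash on "httpd",
    // so they land in the same shard
    for (String id : new String[] { "httpd!21", "httpd!33" }) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", id);
      doc.addField("level_s", "ERROR");
      solr.add(doc);
    }
    solr.commit();

    // _route_ restricts the query to the shard(s) owning the key
    SolrQuery query = new SolrQuery("level_s:ERROR");
    query.set("_route_", "httpd");
    System.out.println(solr.query(query).getResults().getNumFound());
    solr.shutdown();
  }
}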
Shard Splitting
• Split range in half
[Diagram, before: shard1 (range 80000000-ffffffff) and shard2 (range 0-7fffffff), each with a leader on one node and a replica on the other.]
[Diagram, after: shard1 has been split into shard1_0 (range 80000000-bfffffff) and shard1_1 (range c0000000-ffffffff), each with its own leader and replica across nodes 1 and 2; shard2 is unchanged.]
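A minimal sketch of triggering the split via the Collections API's SPLITSHARD action; the host and class name are illustrative:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class SplitShard {
  public static void main(String[] args) throws Exception {
    SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

    // Splits shard1's hash range in half, producing shard1_0 and shard1_1
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "SPLITSHARD");
    params.set("collection", "logstash4solr");
    params.set("shard", "shard1");

    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    solr.request(request);
    solr.shutdown();
  }
}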
Other Features / Highlights
• Near-Real-Time Search: Documents are visible within a second or so after being indexed
• Partial Document Update: Just update the fields you need to change on existing documents
• Optimistic Locking: Ensure updates are applied to the correct version of a document
• Transaction log: Better recoverability; peer-sync between nodes after hiccups
• HTTPS
• Use HDFS for storing indexes
• Use MapReduce for building the index (SOLR-1301)
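Partial updates and optimistic locking combine naturally in SolrJ; a sketch where the _version_ value is a placeholder for one previously read from the index:

import java.util.Collections;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
  public static void main(String[] args) throws Exception {
    CloudSolrServer solr = new CloudSolrServer("localhost:2181");
    solr.setDefaultCollection("logstash4solr");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "httpd!2");
    // Partial update: "set" replaces just this field on the
    // existing document; other stored fields are preserved
    doc.addField("level_s", Collections.singletonMap("set", "WARN"));
    // Optimistic locking: Solr rejects the update (HTTP 409) if the
    // document's current version differs from the one supplied
    doc.addField("_version_", 1461738242312837120L);  // placeholder version

    solr.add(doc);
    solr.commit();
    solr.shutdown();
  }
}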
What’s Next?
• Constantly hardening existing features
– More Chaos monkey tests to cover tricky areas in the code
• Large-scale performance testing; 1000s of collections, 100s of Solr nodes, billions of documents
• Splitting collection state into separate znodes (SOLR-5473)
• Collection management UI (SOLR-4388)
• Cluster deployment / management tools
– My talk tomorrow: http://sched.co/1bsKUMn
• Ease of use!
– Please contribute to the mailing list, wiki, JIRA
Wrap-up / Questions
• LucidWorks: http://www.lucidworks.com
• Solr Scale Toolkit: https://github.com/LucidWorks/solr-scale-tk
• SiLK: http://www.lucidworks.com/lucidworks-silk/
• Solr In Action: http://www.manning.com/grainger/
• Connect: @thelabdude / tim.potter@lucidworks.com
Editor's Notes
• #3: Tag cloud showing the major concepts in SolrCloud
• #5: Optional: you don't have to use SolrCloud if you don't need it. Horizontal scaling: add more nodes. Sharding: split a large index into slices, where each slice contains a subset of the entire document set. Replication: add copies of each document in an index to support more queries per second and high-availability.
• #6: This slide just for reference
• #7: Shard leaders are elected using ZooKeeper. Leaders forward documents to replicas in real-time. ZooKeeper provides centralized configuration, leader election, and cluster state management. ZooKeeper can be clustered into multiple nodes called an “ensemble” and is highly scalable and fault tolerant.
• #9: Logstash4Solr. Come to my other talk if you want to see more SolrCloud dev-ops. debug=track, shards.info
• #10: Parallelize during indexing and query execution. Data partitioning.
• #11: http://searchhub.org/2014/01/06/10590/
• #12: Near-real-time
• #13: TODO: mention streaming. TODO: better diagram. TODO: better coverage of the tlog. Each shard covers a unique hash range. Shard leader applies document versioning to support optimistic locking and directs update requests to healthy replicas. CloudSolrServer supports high-throughput indexing by sending batches of documents in parallel directly to shard leaders. CloudSolrServer is a “smart client” in that it queries ZooKeeper for cluster state (and watches for cluster state changes). If you provide a batch of 100 documents to CloudSolrServer, it will break the batch up into sub-batches for each shard and then send the sub-batches in parallel directly to the shard leaders.
• #14: TODO: this slide is weak. Automated failover. Why do we need a leader?
• #15: Better diagram
• #16: TODO: work this slide into others
• #17: Why consistency? What does that require, i.e. what do I give up? How does this affect me in reality?
• #22: Mention other uses of collection aliases too. Shard aliases (future).
• #24: Allows you to target queries to specific shards (when that makes sense). Non-distributed queries.
• #25: Split range by _route_ (SOLR-5308, 5338). Mention over-sharding (diagram maybe). TODO: animate and show moving to another node. Show the Collections API command.