ElasticSearch for DevOps
What’s ElasticSearch?
• “flexible and powerful open source,
distributed real-time
search and analytics engine for the
cloud”
• http://www.elasticsearch.org/
What’s ElasticSearch?
• “flexible and powerful open source,
distributed real-time
search and analytics engine for the
cloud”
• JSON-oriented;
• RESTful API;
• Schema free.
MySQL              ElasticSearch
database           index
table              type
column             field
defined data type  auto-detected
What’s ElasticSearch?
• “flexible and powerful open source,
distributed real-time
search and analytics engine for the
cloud”
• Master nodes & data nodes;
• Auto-organize for replicas and shards;
• Asynchronous transport between nodes.
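Shard and replica counts are fixed per index at creation time. As a minimal sketch (5 shards and 1 replica were the defaults at the time), this is the settings body for a PUT to http://localhost:9200/twitter:

```json
{
    "settings" : {
        "number_of_shards" : 5,
        "number_of_replicas" : 1
    }
}
```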
What’s ElasticSearch?
• “flexible and powerful open source,
distributed real-time
search and analytics engine for the
cloud”
• Refreshes every 1 second by default, so new
documents become searchable in near real time.
What’s ElasticSearch?
• “flexible and powerful open source,
distributed real-time
search and analytics engine for the
cloud”
• Built on Apache Lucene.
• Also has facets, just like Solr.
What’s ElasticSearch?
• “flexible and powerful open source,
distributed real-time
search and analytics engine for the
cloud”
• Give nodes a cluster name and they auto-discover
each other via unicast/multicast ping or the EC2 API.
• No ZooKeeper needed.
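As a hedged sketch of the config behind this (setting names from the 0.90-era elasticsearch.yml; the host values are illustrative):

```yaml
# All nodes with the same cluster.name join one cluster
cluster.name: es-devops
# For environments without multicast: disable it and
# list seed hosts for unicast ping instead
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
```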
Howto Curl
• Index
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
}'
{"ok":true,"_index":"twitter","_type":"tweet","_id":"1","_version":1}
Howto Curl
• Get
$ curl -XGET 'http://localhost:9200/twitter/tweet/1'
{
"_index" : "twitter",
"_type" : "tweet",
"_id" : "1",
"_source" : {
"user" : "kimchy",
"postDate" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
}
}
Howto Curl
• Query
$ curl -XPOST 'http://localhost:9200/twitter/tweet/_search?pretty=1&size=1' -d '{
"query" : {
"term" : { "user" : "kimchy" }
},
"fields" : ["message"]
}'
Howto Curl
• Query
• Term => { exact term match (query not analyzed) }
• Match => { full-text match (query is analyzed) }
• Prefix => { match field prefix (not analyzed) }
• Range => { from, to }
• Regexp => { .* }
• Query_string => { this AND that OR thus }
• Must/must_not => {query}
• Should => [{query},{…}]
• Bool => {must, must_not, should, …}
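To illustrate how these clauses compose, here is a sketch of a bool query body (reusing the tweet fields from earlier; the values are made up) for POST .../twitter/tweet/_search:

```json
{
    "query" : {
        "bool" : {
            "must" : { "term" : { "user" : "kimchy" } },
            "must_not" : { "prefix" : { "message" : "spam" } },
            "should" : [
                { "range" : { "post_date" : { "from" : "2009-01-01", "to" : "2009-12-31" } } }
            ]
        }
    }
}
```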
Howto Curl
• Filter
$ curl -XPOST 'http://localhost:9200/twitter/tweet/_search?pretty=1&size=1' -d '{
"query" : {
"match_all" : {}
},
"filter" : {
"term" : { "user" : "kimchy" }
}
}'
Much faster, because filters are cacheable and do not
calculate _score.
Howto Curl
• Filter
• And => [{filter},{filter},…]
• Not => {filter}
• Or => [{filter},{filter},…]
• Script => {"script":"doc['field'].value > 10"}
• Others mirror the query DSL
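For example, filters compose the same way; a sketch of an and filter combining a range with a script filter (the retweets field here is hypothetical):

```json
{
    "query" : { "match_all" : {} },
    "filter" : {
        "and" : [
            { "range" : { "post_date" : { "from" : "2009-11-01", "to" : "2009-12-01" } } },
            { "script" : { "script" : "doc['retweets'].value > 10" } }
        ]
    }
}
```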
Howto Curl
• Facets
$ curl -XPOST 'http://localhost:9200/twitter/tweet/_search?pretty=1&size=0' -d '{
"query" : {
"match_all" : {}
},
"filter" : {
"prefix" : { "user" : "k" }
},
"facets" : {
"usergroup" : {
"terms" : { "field" : "user" }
}
}
}'
Howto Curl
• Facets
• terms => [{"term":"kimchy","count":20},…]
• range => [{"from":10,"to":20},…]
• histogram => {"field":"user","interval":10}
• statistical => {"field":"reqtime"}
=> {"min":…,"max":…,"avg":…,"count":…}
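As a sketch of the request side, a statistical facet on a hypothetical reqtime field (size=0 so only the facet result comes back):

```json
{
    "query" : { "match_all" : {} },
    "facets" : {
        "reqtime_stats" : {
            "statistical" : { "field" : "reqtime" }
        }
    }
}
```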
Howto Perl – ElasticSearch.pm
use ElasticSearch;
my $es = ElasticSearch->new(
servers => 'search.foo.com:9200', # default '127.0.0.1:9200'
transport => 'http' # default 'http'
| 'httplite' # 30% faster, future default
| 'httptiny' # slightly faster still
| 'curl'
| 'aehttp'
| 'aecurl'
| 'thrift', # generated code too slow
max_requests => 10_000, # default 10000
trace_calls => 'log_file',
no_refresh => 0 | 1,
);
Howto Perl – ElasticSearch.pm
use ElasticSearch;
my $es = ElasticSearch->new(
servers => 'search.foo.com:9200',
transport => 'httptiny',
max_requests => 10_000,
trace_calls => 'log_file',
no_refresh => 0 | 1,
);
• Fetches the node list from the /_cluster API via $servers;
• Randomly switches to another node after
$max_requests requests.
Howto Perl – ElasticSearch.pm
$es->index(
index => 'twitter',
type => 'tweet',
id => 1,
data => {
user => 'kimchy',
post_date => '2009-11-15T14:12:12',
message => 'trying out Elastic Search'
}
);
Howto Perl – ElasticSearch.pm
$es->search(
facets => {
wow_facet => {
query => { text => { content => 'wow' }},
facet_filter => { term => {status => 'active' }},
}
}
)
Howto Perl – ElasticSearch.pm
$es->search(
facets => {
wow_facet => {
queryb => { content => 'wow' },
facet_filterb => { status => 'active' },
}
}
)
ElasticSearch::SearchBuilder
More perlish
SQL::Abstract-like
But I don’t like ==!
Howto Perl – Elastic::Model
• Tie a Moose object to elasticsearch
package MyApp;
use Elastic::Model;
has_namespace 'myapp' => {
user => 'MyApp::User'
};
no Elastic::Model;
1;
Howto Perl – Elastic::Model
package MyApp::User;
use Elastic::Doc;
use DateTime;
has 'name' => (
is => 'rw',
isa => 'Str',
);
has 'email' => (
is => 'rw',
isa => 'Str',
);
has 'created' => (
is => 'ro',
isa => 'DateTime',
default => sub { DateTime->now }
);
no Elastic::Doc;
1;
Howto Perl – Elastic::Model
package MyApp::User;
use Moose;
use DateTime;
has 'name' => (
is => 'rw',
isa => 'Str',
);
has 'email' => (
is => 'rw',
isa => 'Str',
);
has 'created' => (
is => 'ro',
isa => 'DateTime',
default => sub { DateTime->now }
);
no Moose;
1;
Howto Perl – Elastic::Model
• Connect to db
my $es = ElasticSearch->new( servers => 'localhost:9200' );
my $model = MyApp->new( es => $es );
• Create database and table
$model->namespace('myapp')->index->create();
• CRUD
my $domain = $model->domain('myapp');
$domain->new_doc()|get();
• search
my $search = $domain->view->type('user')->query(…)->filterb(…);
$results = $search->search;
say "Total results found: ".$results->total;
while (my $doc = $results->next_doc) {
say $doc->name;
}
ES for Dev -- Github
• 20 TB of data;
• 1,300,000,000 files;
• 130,000,000,000 lines of code.
• 26 Elasticsearch storage nodes (2 TB SSD
each), managed by Puppet.
• 1 replica + 20 shards.
• https://github.com/blog/1381-a-whole-new-code-search
• https://github.com/blog/1397-recent-code-search-outages
ES for Dev – Git::Search
• Thank you, Mateu Hunter!
• https://github.com/mateu/Git-Search
cpanm --installdeps .
cp git-search.conf git-search-local.conf
edit git-search-local.conf
perl -Ilib bin/insert_docs.pl
plackup -Ilib
curl http://localhost:5000/text_you_want
ES for Perler -- Metacpan
• search.cpan.org => metacpan.org
• uses ElasticSearch as the API backend;
• uses Catalyst to build the website frontend.
• Learn API:
https://github.com/CPAN-API/cpan-api/wiki/API-docs
• Have a try:
http://explorer.metacpan.org/
ES for Perler – index-weekly
• A 55-line Perl script that indexes
devopsweekly into elasticsearch.
• https://github.com/alcy/index-weekly
• We could do the same for perlweekly, right?
ES for logging - Logstash
• “logstash is a tool for managing events
and logs. You can use it to collect logs,
parse them, and store them for later use.”
• http://logstash.net/
ES for logging - Logstash
• “logstash is a tool for managing events
and logs. You can use it to collect logs,
parse them, and store them for later use.”
• A log is a stream, not a file!
• An event is not necessarily a single line!
ES for logging - Logstash
• “logstash is a tool for managing events
and logs. You can use it to collect logs,
parse them, and store them for later use.”
• file/*mq/stdin/tcp/udp/websocket…(34
input plugins now)
ES for logging - Logstash
• “logstash is a tool for managing events
and logs. You can use it to collect logs,
parse them, and store them for later use.”
• date/geoip/grok/multiline/mutate…(29
filter plugins now)
ES for logging - Logstash
• “logstash is a tool for managing events
and logs. You can use it to collect logs,
parse them, and store them for later use.”
• transfer:stdout/*mq/tcp/udp/file/websocket…
• alert:ganglia/nagios/opentsdb/graphite/irc/xmpp
/email…
• store:elasticsearch/mongodb/riak
• (47 output plugins now)
ES for logging - Logstash
ES for logging - Logstash
input {
  redis {
    host => "127.0.0.1"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
filter {
  grok {
    type => "redis-input"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
  }
}
ES for logging - Logstash
• Grok (regexp capture):
%{IP:client:string}
%{NUMBER:bytes:int}
More predefined patterns in the source tree:
https://github.com/logstash/logstash/tree/master/patterns
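Captures can also be composed inline from the predefined patterns; a sketch (the log format here is hypothetical) of a grok filter for a "GET /path 200" style line:

```
filter {
  grok {
    # :int casts the captured field to an integer
    pattern => "%{WORD:verb} %{URIPATH:request} %{NUMBER:response:int}"
  }
}
```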
ES for logging - Logstash
For example:
10.2.21.130 - - [08/Apr/2013:11:13:40 +0800] "GET
/mediawiki/load.php HTTP/1.1" 304 -
"http://som.d.xiaonei.com/mediawiki/index.php"
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3)
AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3
Safari/536.28.10"
ES for logging - Logstash
{"@source":"file://chenryn-Lenovo/home/chenryn/test.txt",
"@tags":[],
"@fields":{
"clientip":["10.2.21.130"],
"ident":["-"],
"auth":["-"],
"timestamp":["08/Apr/2013:11:13:40 +0800"],
"verb":["GET"],
"request":["/mediawiki/load.php"],
"httpversion":["1.1"],
"response":["304"],
"referrer":["\"http://som.d.xiaonei.com/mediawiki/index.php\""],
"agent":["\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10\""]
},
"@timestamp":"2013-04-08T03:34:37.959Z",
"@source_host":"chenryn-Lenovo",
"@source_path":"/home/chenryn/test.txt",
"@message":"10.2.21.130 - - [08/Apr/2013:11:13:40 +0800] \"GET /mediawiki/load.php HTTP/1.1\" 304 - \"http://som.d.xiaonei.com/mediawiki/index.php\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10\"",
"@type":"apache"
}
ES for logging - Logstash
"properties" : {
"@fields" : {
"dynamic" : "true",
"properties" : {
"client" : {
"type" : "string",
"index" : "not_analyzed"
},
"size" : {
"type" : "long",
"index" : "not_analyzed"
},
"status" : {
"type" : "string",
"index" : "not_analyzed"
},
"upstreamtime" : {
"type" : "double"
}
}
},
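A mapping fragment like this is typically applied through an index template, so each new daily logstash index picks it up automatically; a sketch of the wrapper (template name and type are illustrative) for PUT /_template/logstash:

```json
{
    "template" : "logstash-*",
    "mappings" : {
        "apache" : {
            "properties" : {
                "@fields" : {
                    "properties" : {
                        "status" : { "type" : "string", "index" : "not_analyzed" }
                    }
                }
            }
        }
    }
}
```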
ES for logging - Kibana
ES for logging – Message::Passing
• A Logstash port to Perl 5
• 17 CPAN modules
ES for logging – Message::Passing
use Message::Passing::DSL;
run_message_server message_chain {
output elasticsearch => (
class => 'ElasticSearch',
elasticsearch_servers => ['127.0.0.1:9200'],
);
filter regexp => (
class => 'Regexp',
format => ':nginxaccesslog',
capture => [qw( ts status remotehost url oh responsetime upstreamtime bytes )],
output_to => 'elasticsearch',
);
filter tologstash => (
class => 'ToLogstash',
output_to => 'regexp',
);
input file => (
class => 'FileTail',
output_to => 'tologstash',
);
};
Message::Passing vs Logstash
100_000 lines of nginx access log:

logstash::output::elasticsearch_http (default)             4m30.013s
logstash::output::elasticsearch_http (flush_size => 1000)  3m41.657s
message::passing::filter::regexp
(v0.01: calls $self->_regex->regexp() for every line)      1m22.519s
message::passing::filter::regexp
(v0.04: caches $self->_regex->regexp() in $self->_re)      0m44.606s
D::P::Elasticsearch & D::P::Ajax
Building a website with Perl Dancer
get '/' => require_role SOM => sub {
my $indices = elsearch->cluster_state->{routing_table}->{indices};
template 'psa/map',
{
providers => [ sort keys %$default_provider ],
datasources =>
[ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ],
inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ),
inputto => strftime( "%FT%T", localtime() ),
};
};
ajax '/api/area' => sub {
my $param = from_json( request->body );
my $index = $index_prefix . $param->{'datasource'};
my $limit = $param->{'limit'} || 50;
my $from = $param->{'from'} || 'now-10d';
my $to = $param->{'to'} || 'now';
my $res = pct_terms( $index, $limit, $from, $to );
return to_json($res);
};
use Dancer ':syntax';
get '/' => require_role SOM => sub {
my $indices = elsearch->cluster_state->{routing_table}->{indices};
template 'psa/map',
{
providers => [ sort keys %$default_provider ],
datasources =>
[ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ],
inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ),
inputto => strftime( "%FT%T", localtime() ),
};
};
ajax '/api/area' => sub {
my $param = from_json( request->body );
my $index = $index_prefix . $param->{'datasource'};
my $limit = $param->{'limit'} || 50;
my $from = $param->{'from'} || 'now-10d';
my $to = $param->{'to'} || 'now';
my $res = pct_terms( $index, $limit, $from, $to );
return to_json($res);
};
use Dancer::Plugin::Auth::Extensible;
get '/' => require_role SOM => sub {
my $indices = elsearch->cluster_state->{routing_table}->{indices};
template 'psa/map',
{
providers => [ sort keys %$default_provider ],
datasources =>
[ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ],
inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ),
inputto => strftime( "%FT%T", localtime() ),
};
};
ajax '/api/area' => sub {
my $param = from_json( request->body );
my $index = $index_prefix . $param->{'datasource'};
my $limit = $param->{'limit'} || 50;
my $from = $param->{'from'} || 'now-10d';
my $to = $param->{'to'} || 'now';
my $res = pct_terms( $index, $limit, $from, $to );
return to_json($res);
};
use Dancer::Plugin::Ajax;
get '/' => require_role SOM => sub {
my $indices = elsearch->cluster_state->{routing_table}->{indices};
template 'psa/map',
{
providers => [ sort keys %$default_provider ],
datasources =>
[ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ],
inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ),
inputto => strftime( "%FT%T", localtime() ),
};
};
ajax '/api/area' => sub {
my $param = from_json( request->body );
my $index = $index_prefix . $param->{'datasource'};
my $limit = $param->{'limit'} || 50;
my $from = $param->{'from'} || 'now-10d';
my $to = $param->{'to'} || 'now';
my $res = pct_terms( $index, $limit, $from, $to );
return to_json($res);
};
use Dancer::Plugin::ElasticSearch;
get '/' => require_role SOM => sub {
my $indices = elsearch->cluster_state->{routing_table}->{indices};
template 'psa/map',
{
providers => [ sort keys %$default_provider ],
datasources =>
[ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ],
inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ),
inputto => strftime( "%FT%T", localtime() ),
};
};
ajax '/api/area' => sub {
my $param = from_json( request->body );
my $index = $index_prefix . $param->{'datasource'};
my $limit = $param->{'limit'} || 50;
my $from = $param->{'from'} || 'now-10d';
my $to = $param->{'to'} || 'now';
my $res = pct_terms( $index, $limit, $from, $to );
return to_json($res);
};
use Dancer::Plugin::ElasticSearch;
sub area_terms {
my ( $index, $level, $limit, $from, $to ) = @_;
my $data = elsearch->search(
index => $index,
type => $type,
facets => {
area => {
facet_filter => {
and => [
{ range => { date => { from => $from, to => $to } } },
{ numeric_range => { timeCost => { gte => $level } } },
],
},
terms => {
field => "fromArea",
size => $limit,
}
}
}
);
return $data->{facets}->{area}->{terms};
}
ES for monitor – oculus(Etsy Kale)
• Kale detects anomalous metrics and checks
whether any other metrics look similar.
• http://codeascraft.com/2013/06/11/introd
ucing-kale/
ES for monitor – oculus(Etsy Kale)
• Kale detects anomalous metrics and checks
whether any other metrics look similar.
• https://github.com/etsy/skyline
ES for monitor – oculus(Etsy Kale)
• Kale detects anomalous metrics and checks
whether any other metrics look similar.
• https://github.com/etsy/oculus
ES for monitor – oculus(Etsy Kale)
• Imports monitoring data from Redis/Ganglia
into Elasticsearch.
• Uses native scripts to calculate distances:
script.native:
oculus_euclidian.type:
com.etsy.oculus.tsscorers.EuclidianScriptFactory
oculus_dtw.type:
com.etsy.oculus.tsscorers.DTWScriptFactory
ES for monitor – oculus(Etsy Kale)
• https://speakerdeck.com/astanway/bring-the-noise-
continuously-deploying-under-a-hailstorm-of-metrics
VBox example
• apt-get install -y git cpanminus virtualbox
• cpanm Rex
• git clone https://github.com/chenryn/esdevops
• cd esdevops
• rex init --name esdevops
How ElasticSearch lives in my DevOps life
More Related Content

What's hot (20)

PDF
Monitoring with Graylog - a modern approach to monitoring?
inovex GmbH
 
PDF
Logstash + Elasticsearch + Kibana Presentation on Startit Tech Meetup
Startit
 
PDF
Logstash: Get to know your logs
SmartLogic
 
PDF
Journée DevOps : Des dashboards pour tous avec ElasticSearch, Logstash et Kibana
Publicis Sapient Engineering
 
PPTX
Customer Intelligence: Using the ELK Stack to Analyze ForgeRock OpenAM Audit ...
ForgeRock
 
PDF
elk_stack_alexander_szalonnas
Alexander Szalonnas
 
PDF
Advanced troubleshooting linux performance
Forthscale
 
PPTX
ELK Stack
Phuc Nguyen
 
PDF
LogStash in action
Manuj Aggarwal
 
PPTX
Logstash
Rajgourav Jain
 
PDF
From Zero to Production Hero: Log Analysis with Elasticsearch (from Velocity ...
Sematext Group, Inc.
 
PDF
Mobile Analytics mit Elasticsearch und Kibana
inovex GmbH
 
PDF
Logstash family introduction
Owen Wu
 
PDF
Logging with Elasticsearch, Logstash & Kibana
Amazee Labs
 
PDF
Logging logs with Logstash - Devops MK 10-02-2016
Steve Howe
 
PDF
Experiences in ELK with D3.js for Large Log Analysis and Visualization
Surasak Sanguanpong
 
PDF
Logs aggregation and analysis
Divante
 
ODP
Using Logstash, elasticsearch & kibana
Alejandro E Brito Monedero
 
PDF
Elk stack @inbot
Jilles van Gurp
 
Monitoring with Graylog - a modern approach to monitoring?
inovex GmbH
 
Logstash + Elasticsearch + Kibana Presentation on Startit Tech Meetup
Startit
 
Logstash: Get to know your logs
SmartLogic
 
Journée DevOps : Des dashboards pour tous avec ElasticSearch, Logstash et Kibana
Publicis Sapient Engineering
 
Customer Intelligence: Using the ELK Stack to Analyze ForgeRock OpenAM Audit ...
ForgeRock
 
elk_stack_alexander_szalonnas
Alexander Szalonnas
 
Advanced troubleshooting linux performance
Forthscale
 
ELK Stack
Phuc Nguyen
 
LogStash in action
Manuj Aggarwal
 
Logstash
Rajgourav Jain
 
From Zero to Production Hero: Log Analysis with Elasticsearch (from Velocity ...
Sematext Group, Inc.
 
Mobile Analytics mit Elasticsearch und Kibana
inovex GmbH
 
Logstash family introduction
Owen Wu
 
Logging with Elasticsearch, Logstash & Kibana
Amazee Labs
 
Logging logs with Logstash - Devops MK 10-02-2016
Steve Howe
 
Experiences in ELK with D3.js for Large Log Analysis and Visualization
Surasak Sanguanpong
 
Logs aggregation and analysis
Divante
 
Using Logstash, elasticsearch & kibana
Alejandro E Brito Monedero
 
Elk stack @inbot
Jilles van Gurp
 

Similar to How ElasticSearch lives in my DevOps life (20)

PDF
ELK stack introduction
abenyeung1
 
PDF
Null Bachaav - May 07 Attack Monitoring workshop.
Prajal Kulkarni
 
PDF
Introduction to Elasticsearch
Ruslan Zavacky
 
PPTX
Attack monitoring using ElasticSearch Logstash and Kibana
Prajal Kulkarni
 
PPT
Craig Brown speaks on ElasticSearch
imarcticblue
 
PPTX
Elastic pivorak
Pivorak MeetUp
 
PDF
Log analysis with the elk stack
Vikrant Chauhan
 
PPT
Elk presentation1#3
uzzal basak
 
PPTX
ElasticSearch AJUG 2013
Roy Russo
 
PDF
Workshop: Learning Elasticsearch
Anurag Patel
 
PDF
Elasticsearch Basics
Shifa Khan
 
PPTX
ACM BPM and elasticsearch AMIS25
Getting value from IoT, Integration and Data Analytics
 
PDF
ELK-Stack-Essential-Concepts-TheELKStack-LunchandLearn.pdf
cadejaumafiq
 
PPTX
Building a Unified Logging Layer with Fluentd, Elasticsearch and Kibana
Mushfekur Rahman
 
PPTX
Centralized log-management-with-elastic-stack
Rich Lee
 
PDF
Elasticsearch Introduction at BigData meetup
Eric Rodriguez (Hiring in Lex)
 
PPTX
The Elastic Stack as a SIEM
John Hubbard
 
PDF
ElasticSearch: Distributed Multitenant NoSQL Datastore and Search Engine
Daniel N
 
PPTX
The ELK Stack - Launch and Learn presentation
saivjadhav2003
 
PPTX
ElasticSearch - DevNexus Atlanta - 2014
Roy Russo
 
ELK stack introduction
abenyeung1
 
Null Bachaav - May 07 Attack Monitoring workshop.
Prajal Kulkarni
 
Introduction to Elasticsearch
Ruslan Zavacky
 
Attack monitoring using ElasticSearch Logstash and Kibana
Prajal Kulkarni
 
Craig Brown speaks on ElasticSearch
imarcticblue
 
Elastic pivorak
Pivorak MeetUp
 
Log analysis with the elk stack
Vikrant Chauhan
 
Elk presentation1#3
uzzal basak
 
ElasticSearch AJUG 2013
Roy Russo
 
Workshop: Learning Elasticsearch
Anurag Patel
 
Elasticsearch Basics
Shifa Khan
 
ELK-Stack-Essential-Concepts-TheELKStack-LunchandLearn.pdf
cadejaumafiq
 
Building a Unified Logging Layer with Fluentd, Elasticsearch and Kibana
Mushfekur Rahman
 
Centralized log-management-with-elastic-stack
Rich Lee
 
Elasticsearch Introduction at BigData meetup
Eric Rodriguez (Hiring in Lex)
 
The Elastic Stack as a SIEM
John Hubbard
 
ElasticSearch: Distributed Multitenant NoSQL Datastore and Search Engine
Daniel N
 
The ELK Stack - Launch and Learn presentation
saivjadhav2003
 
ElasticSearch - DevNexus Atlanta - 2014
Roy Russo
 
Ad

More from 琛琳 饶 (10)

PPT
{{more}} Kibana4
琛琳 饶
 
PPT
ELK stack at weibo.com
琛琳 饶
 
PPTX
More kibana
琛琳 饶
 
PPT
Monitor is all for ops
琛琳 饶
 
PPT
Perl调用微博API实现自动查询应答
琛琳 饶
 
PDF
Add mailinglist command to gitolite
琛琳 饶
 
PPT
Skyline 简介
琛琳 饶
 
PPT
DNS协议与应用简介
琛琳 饶
 
DOC
Mysql测试报告
琛琳 饶
 
PPT
Perl在nginx里的应用
琛琳 饶
 
{{more}} Kibana4
琛琳 饶
 
ELK stack at weibo.com
琛琳 饶
 
More kibana
琛琳 饶
 
Monitor is all for ops
琛琳 饶
 
Perl调用微博API实现自动查询应答
琛琳 饶
 
Add mailinglist command to gitolite
琛琳 饶
 
Skyline 简介
琛琳 饶
 
DNS协议与应用简介
琛琳 饶
 
Mysql测试报告
琛琳 饶
 
Perl在nginx里的应用
琛琳 饶
 
Ad

Recently uploaded (20)

PPTX
Q2 FY26 Tableau User Group Leader Quarterly Call
lward7
 
PDF
Go Concurrency Real-World Patterns, Pitfalls, and Playground Battles.pdf
Emily Achieng
 
PDF
The Rise of AI and IoT in Mobile App Tech.pdf
IMG Global Infotech
 
PPTX
"Autonomy of LLM Agents: Current State and Future Prospects", Oles` Petriv
Fwdays
 
PPTX
COMPARISON OF RASTER ANALYSIS TOOLS OF QGIS AND ARCGIS
Sharanya Sarkar
 
DOCX
Python coding for beginners !! Start now!#
Rajni Bhardwaj Grover
 
PDF
New from BookNet Canada for 2025: BNC BiblioShare - Tech Forum 2025
BookNet Canada
 
PDF
CIFDAQ Token Spotlight for 9th July 2025
CIFDAQ
 
PDF
CIFDAQ Market Insights for July 7th 2025
CIFDAQ
 
PPTX
Building Search Using OpenSearch: Limitations and Workarounds
Sease
 
PDF
Newgen Beyond Frankenstein_Build vs Buy_Digital_version.pdf
darshakparmar
 
PDF
Jak MŚP w Europie Środkowo-Wschodniej odnajdują się w świecie AI
dominikamizerska1
 
PDF
Reverse Engineering of Security Products: Developing an Advanced Microsoft De...
nwbxhhcyjv
 
PDF
[Newgen] NewgenONE Marvin Brochure 1.pdf
darshakparmar
 
PDF
Bitcoin for Millennials podcast with Bram, Power Laws of Bitcoin
Stephen Perrenod
 
PPTX
Designing Production-Ready AI Agents
Kunal Rai
 
PDF
Building Real-Time Digital Twins with IBM Maximo & ArcGIS Indoors
Safe Software
 
PDF
Exolore The Essential AI Tools in 2025.pdf
Srinivasan M
 
PDF
How Startups Are Growing Faster with App Developers in Australia.pdf
India App Developer
 
PDF
Biography of Daniel Podor.pdf
Daniel Podor
 
Q2 FY26 Tableau User Group Leader Quarterly Call
lward7
 
Go Concurrency Real-World Patterns, Pitfalls, and Playground Battles.pdf
Emily Achieng
 
The Rise of AI and IoT in Mobile App Tech.pdf
IMG Global Infotech
 
"Autonomy of LLM Agents: Current State and Future Prospects", Oles` Petriv
Fwdays
 
COMPARISON OF RASTER ANALYSIS TOOLS OF QGIS AND ARCGIS
Sharanya Sarkar
 
Python coding for beginners !! Start now!#
Rajni Bhardwaj Grover
 
New from BookNet Canada for 2025: BNC BiblioShare - Tech Forum 2025
BookNet Canada
 
CIFDAQ Token Spotlight for 9th July 2025
CIFDAQ
 
CIFDAQ Market Insights for July 7th 2025
CIFDAQ
 
Building Search Using OpenSearch: Limitations and Workarounds
Sease
 
Newgen Beyond Frankenstein_Build vs Buy_Digital_version.pdf
darshakparmar
 
Jak MŚP w Europie Środkowo-Wschodniej odnajdują się w świecie AI
dominikamizerska1
 
Reverse Engineering of Security Products: Developing an Advanced Microsoft De...
nwbxhhcyjv
 
[Newgen] NewgenONE Marvin Brochure 1.pdf
darshakparmar
 
Bitcoin for Millennials podcast with Bram, Power Laws of Bitcoin
Stephen Perrenod
 
Designing Production-Ready AI Agents
Kunal Rai
 
Building Real-Time Digital Twins with IBM Maximo & ArcGIS Indoors
Safe Software
 
Exolore The Essential AI Tools in 2025.pdf
Srinivasan M
 
How Startups Are Growing Faster with App Developers in Australia.pdf
India App Developer
 
Biography of Daniel Podor.pdf
Daniel Podor
 

How ElasticSearch lives in my DevOps life

  • 2. What’s ElasticSearch? • “flexible and powerful open source, distributed real-time search and analytics engine for the cloud” • http://www.elasticsearch.org/
  • 3. What’s ElasticSearch? • “flexible and powerful open source, distributed real-time search and analytics engine for the cloud” • JSON-oriented; • RESTful API; • Schema free. MySQL ElasticSearch database Index table Type column field Defined data type Auto detected
  • 4. What’s ElasticSearch? • “flexible and powerful open source, distributed real-time search and analytics engine for the cloud” • Master nodes & data nodes; • Auto-organize for replicas and shards; • Asynchronous transport between nodes.
  • 5. What’s ElasticSearch? • “flexible and powerful open source, distributed real-time search and analytics engine for the cloud” • Flush every 1 second.
  • 6. What’s ElasticSearch? • “flexible and powerful open source, distributed real-time search and analytics engine for the cloud” • Build on Apache lucene. • Also has facets just as solr.
  • 7. What’s ElasticSearch? • “flexible and powerful open source, distributed real-time search and analytics engine for the cloud” • Give a cluster name, auto-discovery by unicast/multicast ping or EC2 key. • No zookeeper needed.
  • 8. Howto Curl • Index $ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{ "user" : "kimchy", "post_date" : "2009-11-15T14:12:12", "message" : "trying out Elastic Search" }‘ {"ok":true,"_index":“twitter","_type":“tweet","_id":"1","_v ersion":1}
  • 9. Howto Curl • Get $ curl -XGET 'http://localhost:9200/twitter/tweet/1' { "_index" : "twitter", "_type" : "tweet", "_id" : "1", "_source" : { "user" : "kimchy", "postDate" : "2009-11-15T14:12:12", "message" : "trying out Elastic Search" } }
  • 10. Howto Curl • Query $ curl -XPOST 'http://localhost:9200/twitter/tweet/_search? pretty=1&size=1' -d '{ "query" : { "term" : { "user" : "kimchy" } "fields": ["message"] } }'
  • 11. Howto Curl • Query • Term => { match some terms (after analyzed)} • Match => { match whole field (no analyzed)} • Prefix => { match field prefix (no analyzed)} • Range => { from, to} • Regexp => { .* } • Query_string => { this AND that OR thus } • Must/must_not => {query} • Shoud => [{query},{}] • Bool => {must,must_not,should,…}
  • 12. Howto Curl • Filter $ curl -XPOST 'http://localhost:9200/twitter/tweet/_search? pretty=1&size=1' -d '{ "query" : { “match_all" : {} }, "filter" : { "term" : { “user" : “kimchy" } } }' Much faster because filter is cacheable and do not calcute _score.
  • 13. Howto Curl • Filter • And => [{filter},{filter}] (only two) • Not => {filter} • Or => [{filter},{filter}](only two) • Script => {“script”:”doc[‘field’].value > 10”} • Other like the query DSL
  • 14. Howto Curl • Facets $ curl -XPOST 'http://localhost:9200/twitter/tweet/_search?pretty=1&size=0' -d '{ "query" : { “match_all" : {} }, "filter" : { “prefix" : { “user" : “k" } }, "facets" : { “usergroup" : { "terms" : { "field" : “user" } } } }'
  • 15. Howto Curl • Facets • terms => [{“term”:”kimchy”,”count”:20},{}] • Range <= [{“from”:10,”to”:20},] • Histogram <= {“field”:”user”,”interval”:10} • Statistical <= {“field”:”reqtime”} => [{“min”:,”max”:,”avg”:,”count”:}]
  • 16. Howto Perl – ElasticSearch.pm use ElasticSearch; my $es = ElasticSearch->new( servers => 'search.foo.com:9200', # default '127.0.0.1:9200' transport => 'http' # default 'http' | 'httplite ' # 30% faster, future default | 'httptiny ' # 1% more faster | 'curl' | 'aehttp' | 'aecurl' | 'thrift', # generated code too slow max_requests => 10_000, # default 10000 trace_calls => 'log_file', no_refresh => 0 | 1, );
  • 17. Howto Perl – ElasticSearch.pm use ElasticSearch; my $es = ElasticSearch->new( servers => 'search.foo.com:9200', transport => 'httptiny ‘, max_requests => 10_000, trace_calls => 'log_file', no_refresh => 0 | 1, ); • Get nodelist by /_cluster API from the $servers; • Rand change request to other node after $max_requests.
  • 18. Howto Perl – ElasticSearch.pm $es->index( index => 'twitter', type => 'tweet', id => 1, data => { user => 'kimchy', post_date => '2009-11-15T14:12:12', message => 'trying out Elastic Search' } );
  • 19. Howto Perl – ElasticSearch.pm $es->search( facets => { wow_facet => { query => { text => { content => 'wow' }}, facet_filter => { term => {status => 'active' }}, } } )
  • 20. Howto Perl – ElasticSearch.pm $es->search( facets => { wow_facet => { queryb => { content => 'wow' }, facet_filterb => { status => 'active' }, } } ) ElasticSearch::SearchBuilder More perlish SQL::Abstract-like But I don’t like ==!
  • 21. Howto Perl – Elastic::Model • Tie a Moose object to elasticsearch package MyApp; use Elastic::Model; has_namespace 'myapp' => { user => 'MyApp::User' }; no Elastic::Model; 1;
  • 22. Howto Perl – Elastic::Model package MyApp::User; use Elastic::Doc; use DateTime; has 'name' => ( is => 'rw', isa => 'Str', ); has 'email' => ( is => 'rw', isa => 'Str', ); has 'created' => ( is => 'ro', isa => 'DateTime', default => sub { DateTime->now } ); no Elastic::Doc; 1;
  • 23. Howto Perl – Elastic::Model package MyApp::User; use Moose; use DateTime; has 'name' => ( is => 'rw', isa => 'Str', ); has 'email' => ( is => 'rw', isa => 'Str', ); has 'created' => ( is => 'ro', isa => 'DateTime', default => sub { DateTime->now } ); no Moose; 1;
  • 24. Howto Perl – Elastic::Model • Connect to db my $es = ElasticSearch->new( servers => 'localhost:9200' ); my $model = MyApp->new( es => $es ); • Create database and table $model->namespace('myapp')->index->create(); • CRUD my $domain = $model->domain('myapp'); $domain->newdoc()|get(); • search my $search = $domain->view->type(‘user’)->query(…)->filterb(…); $results = $search->search; say "Total results found: ".$results->total; while (my $doc = $results->next_doc) { say $doc->name; }
  • 25. ES for Dev -- Github • 20TB data; • 1300000000 files; • 130000000000 code lines. • Using 26 Elasticsearch storage nodes(each has 2TB SSD) managed by puppet. • 1replica + 20 shards. • https://github.com/blog/1381-a-whole-new-code-search • https://github.com/blog/1397-recent-code-search-outages
  • 26. ES for Dev – Git::Search • Thank you, Mateu Hunter! • https://github.com/mateu/Git-Search cpanm --installdeps . cp git-search.conf git-search-local.conf edit git-search-local.conf perl -Ilib bin/insert_docs.pl plackup -Ilib curl http://localhost:5000/text_you_want
  • 27. ES for Perler -- Metacpan • search.cpan.org => metacpan.org • uses ElasticSearch as the API backend; • uses Catalyst to build the website frontend. • Learn the API: https://github.com/CPAN-API/cpan-api/wiki/API-docs • Have a try: http://explorer.metacpan.org/
  • 28. ES for Perler – index-weekly • A Perl script (55 lines) to index devopsweekly into elasticsearch. • https://github.com/alcy/index-weekly • We can do the same thing for perlweekly, right?
  • 29. ES for logging - Logstash • “logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use.” • http://logstash.net/
  • 30. ES for logging - Logstash • “logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use.” • A log is a stream, not a file! • An event is not necessarily a single line!
  • 31. ES for logging - Logstash • “logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use.” • file/*mq/stdin/tcp/udp/websocket…(34 input plugins now)
  • 32. ES for logging - Logstash • “logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use.” • date/geoip/grok/multiline/mutate…(29 filter plugins now)
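As slide 30 notes, an event is not necessarily a single line; the multiline filter joins continuation lines (e.g. stack traces) into one event. A hedged sketch using the logstash 1.x filter options (the pattern shown is illustrative):

```
filter {
  multiline {
    # lines starting with whitespace belong to the previous event
    pattern => "^\s"
    what => "previous"
  }
}
```

With this in place, a Java exception and its indented backtrace lines arrive in elasticsearch as a single event rather than dozens of fragments.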
  • 33. ES for logging - Logstash • “logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use.” • transfer:stdout/*mq/tcp/udp/file/websocket… • alert:ganglia/nagios/opentsdb/graphite/irc/xmpp /email… • store:elasticsearch/mongodb/riak • (47 output plugins now)
  • 34. ES for logging - Logstash
  • 35. ES for logging - Logstash input { redis { host => "127.0.0.1" type => "redis-input" data_type => "list" key => "logstash" } } filter { grok { type => "redis-input" pattern => "%{COMBINEDAPACHELOG}" } } output { elasticsearch { host => "127.0.0.1" } }
  • 36. ES for logging - Logstash • Grok(Regexp capture): %{IP:client:string} %{NUMBER:bytes:int} More default patterns at source: https://github.com/logstash/logstash/tree/master/patterns
  • 37. ES for logging - Logstash For example: 10.2.21.130 - - [08/Apr/2013:11:13:40 +0800] "GET /mediawiki/load.php HTTP/1.1" 304 - "http://som.d.xiaonei.com/mediawiki/index.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10"
  • 38. ES for logging - Logstash {"@source":"file://chenryn-Lenovo/home/chenryn/test.txt", "@tags":[], "@fields":{ "clientip":["10.2.21.130"], "ident":["-"], "auth":["-"], "timestamp":["08/Apr/2013:11:13:40 +0800"], "verb":["GET"], "request":["/mediawiki/load.php"], "httpversion":["1.1"], "response":["304"], "referrer":["\"http://som.d.xiaonei.com/mediawiki/index.php\""], "agent":["\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10\""] }, "@timestamp":"2013-04-08T03:34:37.959Z", "@source_host":"chenryn-Lenovo", "@source_path":"/home/chenryn/test.txt", "@message":"10.2.21.130 - - [08/Apr/2013:11:13:40 +0800] \"GET /mediawiki/load.php HTTP/1.1\" 304 - \"http://som.d.xiaonei.com/mediawiki/index.php\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10\"", "@type":"apache" }
  • 39. ES for logging - Logstash "properties" : { "@fields" : { "dynamic" : "true", "properties" : { "client" : { "type" : "string", "index" : "not_analyzed" }, "size" : { "type" : "long", "index" : "not_analyzed" }, "status" : { "type" : "string", "index" : "not_analyzed" }, "upstreamtime" : { "type" : "double" } } },
  • 40. ES for logging - Kibana
  • 41. ES for logging – Message::Passing • A port of Logstash to Perl 5 • 17 CPAN modules
  • 42. ES for logging – Message::Passing use Message::Passing::DSL; run_message_server message_chain { output elasticsearch => ( class => 'ElasticSearch', elasticsearch_servers => ['127.0.0.1:9200'], ); filter regexp => ( class => 'Regexp', format => ':nginxaccesslog', capture => [qw( ts status remotehost url oh responsetime upstreamtime bytes )], output_to => 'elasticsearch', ); filter tologstash => ( class => 'ToLogstash', output_to => 'regexp', ); input file => ( class => 'FileTail', output_to => 'tologstash', ); };
  • 43. Message::Passing vs Logstash 100,000 lines of nginx access log: logstash::output::elasticsearch_http (default) 4m30.013s; logstash::output::elasticsearch_http (flush_size => 1000) 3m41.657s; message::passing::filter::regexp (v0.01: calls $self->_regex->regexp() on every line) 1m22.519s; message::passing::filter::regexp (v0.04: stores $self->_regex->regexp() in $self->_re) 0m44.606s
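The v0.01 → v0.04 speedup comes from compiling the regex once and reusing it, instead of rebuilding it for every line. The same principle, sketched in Python (the pattern is a simplified stand-in for the real access-log format, not the Message::Passing `:nginxaccesslog` pattern):

```python
import re

# Simplified access-log line, modeled on the slide-37 example
LOG_LINE = ('10.2.21.130 - - [08/Apr/2013:11:13:40 +0800] '
            '"GET /mediawiki/load.php HTTP/1.1" 304 -')

# Simplified combined-log pattern with named captures (illustrative only)
PATTERN = (r'(?P<client>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
           r'"(?P<verb>\S+) (?P<url>\S+) \S+" (?P<status>\d+) (?P<bytes>\S+)')

def parse_rebuild(line):
    # v0.01 style: compile the regex again on every line (slow)
    return re.compile(PATTERN).match(line)

COMPILED = re.compile(PATTERN)

def parse_cached(line):
    # v0.04 style: reuse one precompiled regex object (fast)
    return COMPILED.match(line)

m = parse_cached(LOG_LINE)
print(m.group('client'), m.group('status'))  # → 10.2.21.130 304
```

Both functions produce identical matches; only the per-line cost differs, which is exactly the ~2x gap between the two Message::Passing versions in the benchmark above.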
  • 45. Build Website using PerlDancer get '/' => require_role SOM => sub { my $indices = elsearch->cluster_state->{routing_table}->{indices}; template 'psa/map', { providers => [ sort keys %$default_provider ], datasources => [ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ], inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ), inputto => strftime( "%FT%T", localtime() ), }; }; ajax '/api/area' => sub { my $param = from_json( request->body ); my $index = $index_prefix . $param->{'datasource'}; my $limit = $param->{'limit'} || 50; my $from = $param->{'from'} || 'now-10d'; my $to = $param->{'to'} || 'now'; my $res = pct_terms( $index, $limit, $from, $to ); return to_json($res); };
  • 46. use Dancer ':syntax'; get '/' => require_role SOM => sub { my $indices = elsearch->cluster_state->{routing_table}->{indices}; template 'psa/map', { providers => [ sort keys %$default_provider ], datasources => [ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ], inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ), inputto => strftime( "%FT%T", localtime() ), }; }; ajax '/api/area' => sub { my $param = from_json( request->body ); my $index = $index_prefix . $param->{'datasource'}; my $limit = $param->{'limit'} || 50; my $from = $param->{'from'} || 'now-10d'; my $to = $param->{'to'} || 'now'; my $res = pct_terms( $index, $limit, $from, $to ); return to_json($res); };
  • 47. use Dancer::Plugin::Auth::Extensible; get '/' => require_role SOM => sub { my $indices = elsearch->cluster_state->{routing_table}->{indices}; template 'psa/map', { providers => [ sort keys %$default_provider ], datasources => [ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ], inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ), inputto => strftime( "%FT%T", localtime() ), }; }; ajax '/api/area' => sub { my $param = from_json( request->body ); my $index = $index_prefix . $param->{'datasource'}; my $limit = $param->{'limit'} || 50; my $from = $param->{'from'} || 'now-10d'; my $to = $param->{'to'} || 'now'; my $res = pct_terms( $index, $limit, $from, $to ); return to_json($res); };
  • 48. use Dancer::Plugin::Ajax; get '/' => require_role SOM => sub { my $indices = elsearch->cluster_state->{routing_table}->{indices}; template 'psa/map', { providers => [ sort keys %$default_provider ], datasources => [ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ], inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ), inputto => strftime( "%FT%T", localtime() ), }; }; ajax '/api/area' => sub { my $param = from_json( request->body ); my $index = $index_prefix . $param->{'datasource'}; my $limit = $param->{'limit'} || 50; my $from = $param->{'from'} || 'now-10d'; my $to = $param->{'to'} || 'now'; my $res = pct_terms( $index, $limit, $from, $to ); return to_json($res); };
  • 49. use Dancer::Plugin::ElasticSearch; get '/' => require_role SOM => sub { my $indices = elsearch->cluster_state->{routing_table}->{indices}; template 'psa/map', { providers => [ sort keys %$default_provider ], datasources => [ grep { /^$index_prefix/ && s/$index_prefix// } keys %$indices ], inputfrom => strftime( "%FT%T", localtime( time() - 864000 ) ), inputto => strftime( "%FT%T", localtime() ), }; }; ajax '/api/area' => sub { my $param = from_json( request->body ); my $index = $index_prefix . $param->{'datasource'}; my $limit = $param->{'limit'} || 50; my $from = $param->{'from'} || 'now-10d'; my $to = $param->{'to'} || 'now'; my $res = pct_terms( $index, $limit, $from, $to ); return to_json($res); };
  • 50. use Dancer::Plugin::ElasticSearch; sub area_terms { my ( $index, $level, $limit, $from, $to ) = @_; my $data = elsearch->search( index => $index, type => $type, facets => { area => { facet_filter => { and => [ { range => { date => { from => $from, to => $to } } }, { numeric_range => { timeCost => { gte => $level } } }, ], }, terms => { field => "fromArea", size => $limit, } } } ); return $data->{facets}->{area}->{terms}; }
  • 51. ES for monitor – oculus (Etsy Kale) • Kale to detect anomalous metrics and see if any other metrics look similar. • http://codeascraft.com/2013/06/11/introducing-kale/
  • 52. ES for monitor – oculus(Etsy Kale) • Kale to detect anomalous metrics and see if any other metrics look similar. • https://github.com/etsy/skyline
  • 53. ES for monitor – oculus(Etsy Kale) • Kale to detect anomalous metrics and see if any other metrics look similar. • https://github.com/etsy/oculus
  • 54. ES for monitor – oculus (Etsy Kale) • imports monitoring data from redis/ganglia into elasticsearch • uses native scripts to calculate distance: script.native: oculus_euclidian.type: com.etsy.oculus.tsscorers.EuclidianScriptFactory oculus_dtw.type: com.etsy.oculus.tsscorers.DTWScriptFactory
  • 55. ES for monitor – oculus(Etsy Kale) • https://speakerdeck.com/astanway/bring-the-noise- continuously-deploying-under-a-hailstorm-of-metrics
  • 56. VBox example • apt-get install -y git cpanminus virtualbox • cpanm Rex • git clone https://github.com/chenryn/esdevops • cd esdevops • rex init --name esdevops

Editor's Notes

  • #39: Using LogStash::Outputs::STDOUT with `debug => true`
  • #40: Schema free, but please define schema using /_mapping or template.json for performance.
  • #41: http://demo.kibana.org http://demo.logstash.net