Mesos Python framework
O. Sallou, DevExp 2016
CC-BY-SA 3.0
Interacting with Mesos, 2 choices
Python API:
- not compatible with Python 3
- Easy to implement
- Bindings over C API
HTTP API:
- HTTP calls with persistent connection and streaming
- Recent
- Language independent
Workflow
Register => Listen for offers => Accept/decline offers => Listen for task status
Messages use Protobuf [0]; the HTTP interface also supports JSON.
See the Mesos protobuf definitions [1] to read or create messages.
[0] https://developers.google.com/protocol-buffers/
[1] https://github.com/apache/mesos/blob/master/include/mesos/mesos.proto
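The registration step of this workflow can also be done through the HTTP API's JSON flavour: a SUBSCRIBE call whose response is a long-lived streamed connection. A minimal sketch with the standard library (the master address is an assumption, and error handling is skipped):

```python
import json
import urllib.request

def build_subscribe_message(user, name):
    """Build the JSON body of a v1 scheduler API SUBSCRIBE request."""
    return {
        "type": "SUBSCRIBE",
        "subscribe": {
            "framework_info": {"user": user, "name": name}
        },
    }

def subscribe(master="http://127.0.0.1:5050"):
    # Hypothetical master address; POST to /api/v1/scheduler.
    body = json.dumps(build_subscribe_message("", "Example HTTP framework"))
    req = urllib.request.Request(
        master + "/api/v1/scheduler",
        data=body.encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"})
    # The connection stays open: Mesos streams events back
    # on this response until the framework disconnects.
    return urllib.request.urlopen(req)

message = build_subscribe_message("", "Example HTTP framework")
print(message["type"])
```

The same message could be sent as Protobuf instead of JSON by switching the Content-Type and serializing a `Call` message.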
Simple example
Python API
Register
framework = mesos_pb2.FrameworkInfo()
# mesos_pb2.XXX() read/use/write protobuf Mesos objects
framework.user = "" # Have Mesos fill in the current user.
framework.name = "Example Mesos framework"
framework.failover_timeout = 3600 * 24 * 7 # 1 week
# Optionally, restart from a previous run
mesos_framework_id = mesos_pb2.FrameworkID()
mesos_framework_id.value = XYZ
framework.id.MergeFrom(mesos_framework_id)
framework.principal = "godocker-mesos-framework"
# The executor that will run our tasks
executor = mesos_pb2.ExecutorInfo()
executor.executor_id.value = "sample"
executor.name = "Example executor"
# We will create our scheduler class MesosScheduler in the next slide
mesosScheduler = MesosScheduler(1, executor)
# Let's declare a framework, with a scheduler to manage offers
driver = mesos.native.MesosSchedulerDriver(
    mesosScheduler,
    framework,
    'zk://127.0.0.1:2881')
driver.start()
When scheduler ends...
When the scheduler stops, Mesos will kill any remaining tasks once the
"failover_timeout" delay expires.
One can set FrameworkID to restart the framework and keep the same context:
Mesos will keep the tasks and send status messages to the framework.
Scheduler skeleton
class MesosScheduler(mesos.interface.Scheduler):

    def registered(self, driver, frameworkId, masterInfo):
        logging.info("Registered with framework ID %s" % frameworkId.value)
        self.frameworkId = frameworkId.value

    def resourceOffers(self, driver, offers):
        '''
        Receive offers; an offer describes a node
        with available resources (cpu, mem, etc.)
        '''
        for offer in offers:
            logging.debug('Mesos:Offer:Decline')
            driver.declineOffer(offer.id)

    def statusUpdate(self, driver, update):
        '''
        Receive status info from submitted tasks
        (switch to running, failure of node, etc.)
        '''
        logging.debug("Task %s is in state %s" %
                      (update.task_id.value,
                       mesos_pb2.TaskState.Name(update.state)))

    def frameworkMessage(self, driver, executorId, slaveId, message):
        logging.debug("Received framework message")
        # usually, nothing to do here
Messages are asynchronous
Status updates and offers are asynchronous callbacks. The scheduler runs in a
separate thread.
You're never the initiator of the requests (except registration), but you will
receive callback messages when something changes on the Mesos side (a job
switches to running, a node fails, …)
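Because callbacks fire on the driver's thread, a common pattern is to keep them short and hand the events to the main thread through a thread-safe queue. A minimal sketch (the class and event payloads are stand-ins, not the real `mesos.interface` objects):

```python
import queue
import threading

# Callbacks only enqueue events; the main thread consumes them at its own pace.
events = queue.Queue()

class QueueingScheduler:
    def statusUpdate(self, driver, update):
        # Runs on the driver thread: never block here
        events.put(("status", update))

    def resourceOffers(self, driver, offers):
        events.put(("offers", offers))

# Simulate the driver thread delivering a callback
scheduler = QueueingScheduler()
t = threading.Thread(target=scheduler.statusUpdate,
                     args=(None, "TASK_RUNNING"))
t.start()
t.join()

# Main thread picks up the event
kind, payload = events.get(timeout=1)
print(kind, payload)
```

This keeps all framework logic single-threaded and avoids locking shared state inside the callbacks.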
Submit a task
for offer in offers:
    # Get available cpu and mem for this offer
    offerCpus = 0
    offerMem = 0
    for resource in offer.resources:
        if resource.name == "cpus":
            offerCpus += resource.scalar.value
        elif resource.name == "mem":
            offerMem += resource.scalar.value
    # We could check for other resources here
    logging.debug("Mesos:Received offer %s with cpus: %s and mem: %s"
                  % (offer.id.value, offerCpus, offerMem))
    # We should check that the offer has enough resources
    sample_task = create_a_sample_task(offer)
    array_of_task = [sample_task]
    driver.launchTasks(offer.id, array_of_task)
Mesos supports any custom resource definition on
nodes (gpu, slots, disk, …), using scalar or range
values.
When a task is launched, the requested resources are
removed from the available resources of the selected
node.
Next offers won't propose those resources again
until the task is over (or killed).
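The "check that the offer has enough resources" step skipped above can be sketched with a small helper. Plain dicts stand in for the protobuf `offer.resources` messages here, and the helper names are hypothetical:

```python
def get_scalar_resources(resources):
    """Sum the cpus/mem scalar resources of an offer's resource list."""
    totals = {"cpus": 0.0, "mem": 0.0}
    for resource in resources:
        if resource["name"] in totals:
            totals[resource["name"]] += resource["scalar"]
    return totals

def offer_fits(resources, needed_cpus, needed_mem):
    """True when the offer can host a task needing the given cpus/mem."""
    totals = get_scalar_resources(resources)
    return totals["cpus"] >= needed_cpus and totals["mem"] >= needed_mem

resources = [{"name": "cpus", "scalar": 2.0},
             {"name": "mem", "scalar": 3000.0}]
print(offer_fits(resources, 1, 1000))  # True
```

In real framework code the same check would read `resource.name` and `resource.scalar.value` on the protobuf objects instead of dict keys.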
Define a task
def create_a_sample_task(offer):
    task = mesos_pb2.TaskInfo()
    # The container part (native or docker)
    container = mesos_pb2.ContainerInfo()
    container.type = 1 # mesos_pb2.ContainerInfo.Type.DOCKER
    # Let's add a volume
    volume = container.volumes.add()
    volume.container_path = "/tmp/test"
    volume.host_path = "/tmp/incontainer"
    volume.mode = 1 # mesos_pb2.Volume.Mode.RW
    # The command to execute, if not using the image entrypoint
    command = mesos_pb2.CommandInfo()
    command.value = "echo hello world"
    task.command.MergeFrom(command)
    # Unique identifier (or let Mesos assign one)
    task.task_id.value = XYZ_UNIQUE_IDENTIFIER
    # The slave where the task is executed
    task.slave_id.value = offer.slave_id.value
    task.name = "my_sample_task"
    # The resources/requirements
    # Resources have names; cpus, mem and ports are available
    # by default, one can define custom ones per slave node
    # and get them by their name here
    cpus = task.resources.add()
    cpus.name = "cpus"
    cpus.type = mesos_pb2.Value.SCALAR
    cpus.scalar.value = 2
    mem = task.resources.add()
    mem.name = "mem"
    mem.type = mesos_pb2.Value.SCALAR
    mem.scalar.value = 3000 # 3 GB (mem is expressed in MB)
Define a task (next)
    # Now the Docker part
    docker = mesos_pb2.ContainerInfo.DockerInfo()
    docker.image = "debian:latest"
    docker.network = 2 # mesos_pb2.ContainerInfo.DockerInfo.Network.BRIDGE
    docker.force_pull_image = True
    # Let's map some ports; ports are resources like cpus and mem
    # We will map container port 80 to an available host port
    # For simplicity we pick the first available port of this offer,
    # skipping controls, and suppose there is at least one port
    offer_port = None
    for resource in offer.resources:
        if resource.name == "ports":
            for mesos_range in resource.ranges.range:
                offer_port = mesos_range.begin
                break
    # We map container port 80 to offer_port on the host
    docker_port = docker.port_mappings.add()
    docker_port.host_port = offer_port
    docker_port.container_port = 80
    # Merge the Docker info once the port mappings are set
    container.docker.MergeFrom(docker)
    # We tell Mesos that we reserve this port:
    # it will be removed from next offers until task completion
    mesos_ports = task.resources.add()
    mesos_ports.name = "ports"
    mesos_ports.type = mesos_pb2.Value.RANGES
    port_range = mesos_ports.ranges.range.add()
    port_range.begin = offer_port
    port_range.end = offer_port
    task.container.MergeFrom(container)
    return task
Task status
def statusUpdate(self, driver, update):
    '''
    Receive status info from submitted tasks
    (switch to running, failure of node, etc.)
    '''
    logging.debug("Task %s is in state %s" %
                  (update.task_id.value,
                   mesos_pb2.TaskState.Name(update.state)))
    if int(update.state) == 1:
        # Switched to RUNNING
        container_info = json.loads(update.data)
    if int(update.state) in [2, 3, 4, 5, 7]:
        # Over or failure
        logging.error("Task is over or failed")
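The magic numbers above come from the `TaskState` enum in mesos.proto; spelling them out makes the terminal-state check readable. A sketch with the enum values as defined in the proto at the time of this deck (later Mesos versions added more states):

```python
# TaskState values from mesos.proto
TASK_STATES = {
    0: "TASK_STARTING",
    1: "TASK_RUNNING",
    2: "TASK_FINISHED",
    3: "TASK_FAILED",
    4: "TASK_KILLED",
    5: "TASK_LOST",
    6: "TASK_STAGING",
    7: "TASK_ERROR",
}

# States after which the task will receive no further updates
TERMINAL_STATES = {2, 3, 4, 5, 7}

def is_terminal(state):
    """True when the task is over (successfully or not)."""
    return int(state) in TERMINAL_STATES

print(TASK_STATES[1])
```

With the real bindings, `mesos_pb2.TASK_RUNNING` and friends can be used directly instead of this hand-written table.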
Want to kill a task?
def resourceOffers(self, driver, offers):
    ….
    task_id = mesos_pb2.TaskID()
    task_id.value = my_unique_task_id
    driver.killTask(task_id)
A framework
In a few lines of code:
- Quite easy to set up
- Many logs on the Mesos side for debugging
- Shares the same resources with other frameworks
- Different executors (docker, native, …)
