CHEP2012 – New York (USA), May 21-25 2012



   Design and implementation of a reliable and cost-effective
  cloud computing infrastructure: the INFN Napoli experience

Vincenzo Capone (a,b), Rosario Esposito (b), Silvio Pardi (b), Francesco Taurino (b,c), Gennaro Tortone (b)

(a) Università degli Studi di Napoli Federico II – Napoli, Italy
(b) INFN-Napoli - Campus di M.S.Angelo, Via Cinthia – 80126, Napoli, Italy
(c) CNR-SPIN - Campus di M.S.Angelo, Via Cinthia – 80126, Napoli, Italy
Email: ecapone@na.infn.it, resposit@na.infn.it, spardi@na.infn.it, taurino@na.infn.it, tortone@na.infn.it


Introduction
In this work we describe an IaaS (Infrastructure as a Service) cloud
computing system with high-availability and redundancy features,
currently in production at the INFN-Napoli and ATLAS Tier-2 data centre.
Our main goal was a simplified way to manage our computing resources
and deliver reliable user services, reusing existing hardware without
incurring heavy costs. The combined use of virtualization and clustering
technologies allowed us to consolidate our services on a small number of
physical machines, reducing electric power costs. As a result of our
efforts we developed a complete solution for data and computing centres
that can easily be replicated using commodity hardware.


A snapshot of some of the custom CLI tools.
Hardware
We started from commodity hardware we already owned, upgraded to meet the
required performance. In particular, three Dell PowerEdge 1950 rack servers
are used as VM executors, and two Dell PowerEdge 2950 as VM stores.
All servers are equipped with dual Intel Xeon E5430 CPUs, providing 8 cores
per server, and 8 GB of RAM. The upgrades consisted of 8 GB of additional RAM
and a dual-port Ethernet NIC on the hypervisors, and 6 x 1.5 TB SATA hard
disks plus a quad-port Ethernet NIC on both storage servers. The storage
server disks are configured in RAID5 (dm-raid software mode), so the total
available storage space is 7.5 TB per server. Hypervisor servers have
2 x 500 GB disks configured in RAID1. Furthermore, a dedicated 24-port
gigabit Cisco Catalyst 2960G switch was added to the hardware configuration
to provide a dedicated storage LAN.
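The usable capacities quoted above follow directly from the RAID geometry; a quick sanity check (sizes in decimal TB, as in the text):

```python
def raid5_capacity(n_disks, disk_tb):
    # RAID5 stores one disk's worth of parity, so usable space is (n-1) disks
    return (n_disks - 1) * disk_tb

def raid1_capacity(n_disks, disk_tb):
    # RAID1 mirrors all data, so usable space equals a single disk
    return disk_tb

# Storage servers: 6 x 1.5 TB SATA disks in software RAID5
print(raid5_capacity(6, 1.5))   # 7.5 TB per server, as stated above

# Hypervisors: 2 x 500 GB disks in RAID1
print(raid1_capacity(2, 0.5))   # 0.5 TB per hypervisor
```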

Network
The main requirements for the network serving our infrastructure are performance, reliability and resiliency. To achieve these
goals, we set up a double path between every hypervisor and both storage servers, involving two different switches, so that a
failure of either switch does not affect the execution of the virtual machines, whose disk images are hosted on the storage
servers. The Cisco Catalyst 6509 is the core switch of our science department network infrastructure, and every server is
connected to it via the onboard dual gigabit Ethernet ports in LACP bonding mode, providing the necessary connectivity and
bandwidth to the VMs; this link is in trunk mode, so that every VM can be attached to the desired VLAN. The second
switch (Cisco 2960G) is connected to the former via a 3 x 1 Gbit LACP bonded link. A private VLAN carries the data traffic between the
storage servers and the hypervisors: within this VLAN every storage server is connected with three gigabit links to the Cisco 2960G
and a fourth to the Cisco 6509, while every hypervisor has one link to each switch; the multiple connections of a
server to the two switches are bonded in Adaptive Load Balancing mode. In this topology the Cisco 2960G is
completely dedicated to the traffic of the storage VLAN, while the Cisco 6509 acts as the access switch towards the LAN
and as the redundant switch for the storage VLAN.
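The redundancy claim (either switch can fail without cutting a hypervisor off from storage) can be checked on a toy model of the topology above; host and switch names are illustrative, not our actual device inventory:

```python
# Each (host, switch) pair is one physical link in the storage VLAN.
links = [
    # every hypervisor has one link to each switch (ALB bonding)
    ("hv1", "c2960g"), ("hv1", "c6509"),
    ("hv2", "c2960g"), ("hv2", "c6509"),
    ("hv3", "c2960g"), ("hv3", "c6509"),
    # every storage server reaches both switches as well
    ("store1", "c2960g"), ("store1", "c6509"),
    ("store2", "c2960g"), ("store2", "c6509"),
]

def reachable(host_a, host_b, failed_switch):
    # two hosts can talk if some surviving switch connects them both
    sw_a = {s for h, s in links if h == host_a and s != failed_switch}
    sw_b = {s for h, s in links if h == host_b and s != failed_switch}
    return bool(sw_a & sw_b)

# Losing either switch never isolates a hypervisor from a storage server.
for failed in ("c2960g", "c6509"):
    for hv in ("hv1", "hv2", "hv3"):
        for store in ("store1", "store2"):
            assert reachable(hv, store, failed)
print("storage VLAN survives any single switch failure")
```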


Software
The OS used on all servers was initially Scientific Linux 5.5, with KVM as the virtualization system. We selected KVM
as the best architecture for virtualization on modern processors with fast hardware virtualization support (VT-x
and EPT on Intel, AMD-V and NPT on AMD). We later updated all servers to Scientific Linux 6.2 to use the
new KVM version and KSM (Kernel Samepage Merging), a memory deduplication feature that lets multiple
guests share identical memory pages on the host.
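KSM works by scanning guest memory and collapsing identical pages into a single shared copy-on-write page. A toy illustration of the deduplication idea (not the kernel implementation; "pages" here are just labelled contents):

```python
from collections import Counter

def ksm_savings(pages):
    # count identical page contents; every duplicate beyond the first
    # copy can be merged into one shared copy-on-write page
    counts = Counter(pages)
    return sum(c - 1 for c in counts.values())

# two guests running the same OS image share most of their pages
guest_a = ["kernel", "libc", "pagecache", "appA"]
guest_b = ["kernel", "libc", "pagecache", "appB"]
print(ksm_savings(guest_a + guest_b), "pages merged")  # 3 of 8 pages saved
```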


Storage
We chose GlusterFS as a fault-tolerant backend storage for virtual
machine images. GlusterFS is an open-source clustered filesystem
that scales the storage capacity of many servers to several
petabytes. It aggregates various storage servers, or bricks, over
InfiniBand RDMA and/or TCP/IP interconnects into one large
parallel network file system. Key features of GlusterFS:
- Modular, stackable storage OS architecture
- Data stored in native formats
- No metadata server – elastic hashing
- Automatic file replication with self-healing.
In GlusterFS a volume is a logical collection of bricks, where each brick is
an export directory on a server in the trusted storage pool. Most Gluster
management operations happen on the volume. In our local setup we used a
replicated Gluster volume created on top of the two servers to store virtual
machine disk images in qcow2 format. Each storage server exports a brick
consisting of an ext3 file system built on a Linux software RAID5 array.

IOZone benchmark measuring disk r/w performance to GlusterFS:
1) storage server local array, 2) KVM host, 3) guest VM disk image.
The picture on the right shows the disk performance under these use cases.
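GlusterFS locates files without a metadata server by hashing the file path onto a brick (the "elastic hashing" listed above). A minimal sketch of the idea, with illustrative brick names; in a replica-2 volume like ours each file additionally lands on both bricks of its replica pair:

```python
import hashlib

BRICKS = ["store1:/export/brick", "store2:/export/brick"]  # illustrative names

def brick_for(path, bricks):
    # deterministic placement: hash the path and take it modulo the brick
    # count, so any client finds a file with no metadata-server lookup
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

# the same path resolves to the same brick on every client, every time
assert brick_for("/images/vm01.qcow2", BRICKS) == brick_for("/images/vm01.qcow2", BRICKS)
print(brick_for("/images/vm01.qcow2", BRICKS))
```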


Features
We have developed a set of CLI scripts for day-by-day tasks on our private cloud to reduce administration effort: rapid
guest provisioning, listing, rapid migration, load balancing, and automatic migration and restart of VMs hosted on a failed
hypervisor.
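The automatic-restart tool boils down to: detect a dead hypervisor, then restart its guests on the least-loaded surviving host. A simplified sketch of that logic (the function, host names and data layout are illustrative, not our actual scripts):

```python
def failover(vms_by_host, alive_hosts):
    """Reassign VMs from failed hypervisors to the least-loaded live one."""
    placement = {h: list(v) for h, v in vms_by_host.items() if h in alive_hosts}
    for host, vms in vms_by_host.items():
        if host in alive_hosts:
            continue
        for vm in vms:  # restart each orphaned VM on the emptiest survivor
            target = min(placement, key=lambda h: len(placement[h]))
            placement[target].append(vm)
    return placement

# hv3 fails: its guests are redistributed across hv1 and hv2
before = {"hv1": ["web"], "hv2": ["db", "mail"], "hv3": ["dns", "ldap"]}
after = failover(before, alive_hosts={"hv1", "hv2"})
assert sorted(sum(after.values(), [])) == sorted(["web", "db", "mail", "dns", "ldap"])
print(after)
```

In production the same idea is driven by live host monitoring and VM start commands rather than an in-memory dictionary.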
With our deployment we achieved all the goals we set: ease of management, high availability and fault tolerance. The
functional integrity of the whole cloud system is preserved even after the failure of multiple components: apart from a
decline in overall performance, there is no effect from the failure of one of the two switches, one of the two storage
servers, or all but one of the KVM hypervisors, even if all of this happens at the same time.
In conclusion, after more than one year of uninterrupted uptime, our system has proved a solid and efficient solution for
deploying services that do not place a heavy load on the I/O subsystem but are a crucial element of a modern datacenter.


                                                               http://www.na.infn.it
