A Micron Reference Architecture
Micron® 9200 MAX NVMe™ SSDs + Ceph® Luminous 12.2.8 + BlueStore
Reference Architecture

Contents
Executive Summary
  Why Micron for this Solution
Ceph Distributed Architecture Overview
Reference Architecture Overview
  Software
    Ceph Luminous 12.2.8
    Red Hat Enterprise Linux 7.5
  Software by Node Type
  Hardware
    Ceph Storage Node
    Ceph Monitor Node
    Micron 9200 MAX NVMe SSDs
    Network Switches
    Mellanox ConnectX®-5 EN Dual Port NICs
Planning Considerations
  Number of Ceph Storage Nodes
  Number of Ceph Monitor Nodes
  Replication Factor
  CPU Sizing
  Ceph Configuration Tuning
  Networking
  Number of OSDs per Drive
  OS Tuning/NUMA
Measuring Performance
  4KB Random Workloads: FIO + RBD
  4MB Object Workloads: RADOS Bench
Baseline Test Methodology
  Storage Baseline Results
  Network Baseline Results
Test Results and Analysis
  4KB Random Workload Testing
    4KB Random Write Workload Analysis
    4KB Random Read Workload Analysis
    Random Read Results Summary
    4KB Random 70% Read / 30% Write Workload Analysis
    Random 70/30 R/W Results Summary
  4MB Object Workloads
Summary
Appendix A: Configuration Details
About Micron
Why Community Edition?
About Ceph Storage

Executive Summary
This document describes an example configuration of a performance-optimized Ceph® Luminous 12.2.8 storage cluster using Micron® SSDs with NVMe™, standard x86 architecture rack-mount servers and 100 GbE networking.

It details the hardware and software building blocks used to construct this reference architecture (including the Red Hat® Enterprise Linux® OS configuration, network switch configurations and Ceph tuning parameters) and shows the performance test results and measurement techniques for a scalable 4-node Ceph architecture.
This all-NVMe solution is optimized for block performance while also providing very high object performance in a compact, rack-efficient design to enable:

• Faster deployment: The configuration has been pre-validated and is thoroughly documented to enable faster deployment.
• Balanced design: The right combination of NVMe SSDs, DRAM, processors and networking ensures subsystems are balanced and performance-matched.
• Broad use: Complete tuning and performance characterization across multiple I/O profiles supports broad deployment across multiple uses.
Exceptional performance results were recorded for 4KB random block workloads and 4MB object workloads.
Tables 1a and 1b: Performance Summary

Table 1a: 4KB Random Block Performance

IO Profile     IOPS        Avg. Latency
100% Read      2,277,453   1.4 ms
70%/30% R/W    1,033,696   6.23 ms
100% Writes    479,882     6.7 ms

Table 1b: 4MB Object Performance

IO Profile     GiB/s       Avg. Latency
100% Read      47.2        53.27 ms
70%/30% R/W    —           —
100% Writes    22.9        27.31 ms

Note: The entry "—" in Tables 1a and 1b indicates that the metric is not commonly used with that performance profile; therefore, it was not measured.

Why Micron for this Solution

Storage (SSDs and DRAM) represents a large portion of the value of today's advanced server/storage solutions. Micron's storage expertise starts with memory technology research, innovation and design and extends through collaboration with customers on total data solutions. Micron develops and manufactures the storage and memory products that go into the enterprise solutions we architect.

Micron's Reference Architectures

Micron Reference Architectures are optimized, pre-engineered, enterprise-leading solution templates for platforms co-developed by Micron and industry-leading hardware and software companies. Designed and tested at Micron's Storage Solutions Center, they provide end users, system builders, independent software vendors (ISVs) and OEMs with a proven template to build next-generation solutions with reduced time investment and risk.

Ceph Distributed Architecture Overview
A Ceph storage cluster is frequently built from large numbers of Ceph nodes for scalability, fault tolerance, and performance. Each node is based on industry-standard hardware and uses intelligent Ceph daemons that communicate with each other to:

• Store, retrieve and replicate data
• Monitor and report on cluster health (a status-query sketch follows this list)
• Redistribute data dynamically (remap and backfill)
• Ensure data integrity (scrubbing)
• Detect and recover from faults and failures
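
To make the health-reporting path concrete, the short sketch below uses the python-rados bindings that ship with Ceph to send the monitors the same JSON-formatted request the "ceph health" CLI issues. It is a minimal example, assuming a default config-file path and Luminous-era JSON field names.

    import json
    import rados  # python-rados bindings packaged with Ceph

    # Connect as a client; the conffile path is an assumed default.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Ask the monitor quorum for cluster health (same request as
    # the `ceph health` CLI sends).
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'health', 'format': 'json'}), b'')
    if ret != 0:
        raise RuntimeError(errs)

    health = json.loads(outbuf)
    print(health['status'])  # e.g. 'HEALTH_OK' (Luminous field name)
    cluster.shutdown()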
To the Ceph client interface that reads and writes data, a Ceph storage cluster looks like a simple pool where data is stored. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph Object Storage Daemons (Ceph OSD daemons, or OSDs) both use the Controlled Replication Under Scalable Hashing (CRUSH) algorithm for storage and retrieval of objects.

For a Ceph client, the storage cluster is very simple. When a Ceph client reads or writes data (referred to as an I/O context), it connects to a logical storage pool in the Ceph cluster. The figure below illustrates the overall Ceph architecture, with concepts that are described in the sections that follow; a minimal client example follows the figure.
Figure 1: Ceph Architecture
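
The following minimal sketch shows that client view in code, using the python-rados bindings: open an I/O context on a pool, write an object, and read it back. The config-file path and the pool name 'datapool' are placeholder assumptions.

    import rados  # python-rados bindings packaged with Ceph

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()

    # An I/O context is the client handle onto one logical pool.
    ioctx = cluster.open_ioctx('datapool')  # placeholder pool name
    try:
        # CRUSH computes which OSDs hold this object; no central
        # broker is consulted on the data path.
        ioctx.write_full('hello-object', b'stored via librados')
        print(ioctx.read('hello-object'))  # b'stored via librados'
    finally:
        ioctx.close()
        cluster.shutdown()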

Clients write to Ceph storage pools, while the CRUSH ruleset determines how placement groups are distributed across object storage daemons (OSDs).

• Pools: A Ceph storage cluster stores data objects in logical dynamic partitions called pools. Pools can be created for specific data types, such as for block devices or object gateways, or simply to separate user groups. The Ceph pool configuration dictates the number of object replicas and the number of placement groups (PGs) in the pool. Ceph storage pools can be either replicated or erasure-coded, as appropriate for the application and cost model. Additionally, pools can "take root" at any position in the CRUSH hierarchy, allowing placement on groups of servers with differing performance characteristics so that storage can be optimized for different workloads. (A pool-creation sketch follows this list.)

• Placement groups: Ceph maps objects to placement groups (PGs). PGs are shards or fragments of a logical object pool composed of a group of Ceph OSD daemons that are in a peering relationship. Placement groups provide a means of creating replication or erasure-coding groups at a coarser granularity than per object. A larger number of placement groups (for example, 200 per OSD or more) leads to better balancing.

• CRUSH ruleset: The CRUSH algorithm provides controlled, scalable, decentralized placement of replicated or erasure-coded data within Ceph and determines how to store and retrieve data by computing data storage locations. CRUSH empowers Ceph clients to communicate with OSDs directly, rather than through a centralized server or broker. By determining storage locations algorithmically, Ceph avoids a single point of failure, a performance bottleneck, and a physical limit to scalability.

• Ceph monitors (MONs): Before Ceph clients can read or write data, they must contact a Ceph MON to obtain the current cluster map. A Ceph storage cluster can operate with a single monitor, but this introduces a single point of failure. For added reliability and fault tolerance, Ceph supports an odd number of monitors in a quorum (typically three or five for small to mid-sized clusters). Consensus among the monitor instances ensures consistent knowledge about the state of the cluster.

• Ceph OSD daemons: In a Ceph cluster, Ceph OSD daemons store data and handle data replication, recovery, backfilling, and rebalancing. They also provide some cluster state information to Ceph monitors by checking other Ceph OSD daemons with a heartbeat mechanism. A Ceph storage cluster configured to keep three replicas of every object requires a minimum of three Ceph OSD daemons, two of which need to be operational to successfully process write requests. Ceph OSD daemons roughly correspond to a file system on a physical hard disk drive.
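
As a concrete illustration of the pool, placement-group and replication concepts above, the sketch below drives the monitor command interface through python-rados to create a replicated pool, set its replica count to three, and tag it for use by RBD (a step Luminous requires). The pool name and PG count are illustrative assumptions, not values prescribed by this reference architecture; the same operations map one-to-one onto the "ceph osd pool create" and "ceph osd pool set" CLI commands.

    import json
    import rados  # python-rados bindings packaged with Ceph

    def mon_cmd(cluster, **kwargs):
        # Send one JSON-formatted command to the monitor quorum.
        ret, outbuf, errs = cluster.mon_command(json.dumps(kwargs), b'')
        if ret != 0:
            raise RuntimeError(errs)
        return outbuf

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()

    # Create a replicated pool; 'blockpool' and pg_num=1024 are assumptions.
    mon_cmd(cluster, prefix='osd pool create', pool='blockpool', pg_num=1024)

    # Keep three replicas of every object, per the bullet above.
    mon_cmd(cluster, prefix='osd pool set', pool='blockpool',
            var='size', val='3')

    # Luminous requires tagging a pool with its intended application.
    mon_cmd(cluster, prefix='osd pool application enable',
            pool='blockpool', app='rbd')

    cluster.shutdown()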