© 2012 IBM Corporation
V6.4.0 Technical Update:
SAN Volume Controller and
Storwize V7000
Bill Wiegand - ATS
Consulting I/T Specialist
Storage Virtualization
2 © Copyright IBM Corporation, 2011
Agenda
 FCoE Support
 Non-Disruptive Volume Move
 Compression Overview
 Storwize V7000 Clustered System
– Unified Update
 Miscellaneous
 Part of the T11 Technical Committee's Fibre Channel FC-BB-5 project
 Not intended to displace or replace Fibre Channel and is not iSCSI
 Designed to enable convergence between Ethernet and Fibre
networks in the data center
– Simplifies networking and reduces costs
 Technically speaking, the FC0 and FC1 layers of Fibre Channel are
replaced by a new, “Beefed-up” or “lossless” Ethernet
– Full duplex 802.3 Ethernet required
FCoE – Basics
 At a very high level – FCoE takes a normal FC frame and
packages it within an Ethernet packet
 Additional services (e.g. nameserver) are also provided by a
Fibre Channel Forwarder (FCF) allowing interoperation with
today’s Fibre Channel networks
 Requires
– Ethernet Jumbo Frames
– 10Gb/s Ethernet only
FCoE – Basics
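The jumbo-frame requirement follows from the encapsulation arithmetic: a full-size FC frame is larger than a standard 1500-byte Ethernet payload. The sketch below uses the commonly cited FC-BB-5 sizes; treat the exact header breakdown as an approximation, not the normative frame layout.

```python
# Why FCoE needs jumbo frames: a sketch of the encapsulation arithmetic.
# Header sizes are approximate, commonly cited values.

FC_HEADER = 24         # FC frame header (bytes)
FC_MAX_PAYLOAD = 2112  # maximum FC data field (bytes)
FC_CRC = 4
FCOE_HEADER = 14       # FCoE encapsulation header, incl. SOF
FCOE_TRAILER = 4       # EOF plus padding
ETH_HEADER = 14        # Ethernet MAC header (FCS not counted toward MTU payload)

def fcoe_payload_size(fc_payload: int) -> int:
    """Size of the Ethernet payload carrying one FC frame."""
    fc_frame = FC_HEADER + fc_payload + FC_CRC
    return FCOE_HEADER + fc_frame + FCOE_TRAILER

largest = fcoe_payload_size(FC_MAX_PAYLOAD)
print(largest)          # well above the standard 1500-byte MTU
print(largest <= 1500)  # False: standard Ethernet frames cannot carry it
print(largest <= 2500)  # True: a "baby jumbo" MTU of 2500 bytes suffices
```

This is why full-duplex 802.3 Ethernet with jumbo frames is listed as a hard requirement on the previous slides.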
FCoE – Topologies
 FCoE can be routed by Ethernet switches on the same
subnet supporting the protocol
 Fibre Channel Forwarders (FCFs) perform switching onto
Fibre Channel fabrics
FCoE – SVC/Storwize V7000 Support
 With V6.4 the following hardware will support FCoE:
– The SVC model 2145-CG8 nodes will support FCoE if the optional 10
Gb/s Ethernet/CNA adapter is installed
– The Storwize V7000 2076-3xx model control enclosures
 Both SVC and Storwize V7000 systems can be non-
disruptively upgraded to support FCoE
 There are two FCoE ports per node or node canister
FCoE – Interoperability
The SVC & Storwize V7000 will support attaching to
all existing Fibre Channel hosts, storage and each
other via the FCoE ports on the nodes
Additional support for native FCoE hosts and
controllers will be added over time
SVC Stretch cluster supports use of the FCoE ports
iSCSI and FCoE can be used on the same 10Gb/s
ports at the same time if required
FCoE – DCBx Configuration
 The SVC and Storwize V7000 10Gb/s Ethernet ports will
use the following classes of service:
– NIC Class will carry iSCSI traffic
– FCoE Class will carry FCoE traffic
– The iSCSI Class is not currently being used but may be used at some
point in the future
FCoE – Configuration Rules
VLAN Tagging is not supported
The FCF and the 10 Gb/s ports MUST be on the same
VLAN for it to be a supported configuration
A single FCoE port is not able to discover multiple
FCFs
–If multiple FCFs are discovered, the system will use the first one in the list
• Which may not be the one the customer wants to use
FCoE – WWPN Changes
 Each hardware platform has a range of WWPNs associated with it:
– SVC: 5005076801xxxxxx
– Storwize V7000: 5005076802xxxxxx
 When a customer accepts a new hardware configuration using the “variable hardware” technology, all WWPNs will be re-allocated
– In most cases this won’t happen, but in future configurations it will become more likely
 WWPNs are assigned in the following order from within the assigned range:
1. Fibre Channel
2. FCoE
3. SAS
4. Other internal WWPNs
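The assignment order above can be illustrated with a small allocator. This is a hypothetical sketch of the documented ordering (FC, then FCoE, then SAS, then other), not the product's actual allocation code; the function name and base value are assumptions for illustration.

```python
# Illustrative sketch: hand out consecutive WWPNs from a platform's base
# range in the documented order (FC -> FCoE -> SAS -> other internal).
# Hypothetical helper; the real allocator is internal to the product.

def assign_wwpns(base: int, counts: dict) -> dict:
    order = ["fc", "fcoe", "sas", "other"]
    result, next_wwpn = {}, base
    for kind in order:
        n = counts.get(kind, 0)
        result[kind] = [f"{next_wwpn + i:016X}" for i in range(n)]
        next_wwpn += n
    return result

# e.g. an SVC CG8 node: 4 FC ports and 2 FCoE ports
ports = assign_wwpns(0x5005076801000000, {"fc": 4, "fcoe": 2})
print(ports["fc"][0])   # lowest WWPN in the range goes to Fibre Channel
print(ports["fcoe"])    # FCoE WWPNs follow immediately after the FC block
```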
SVC shares all of the FC WWPNs between the FC and
FCoE physical ports
–1 WWPN per 10GbE port
–Maximum 6 WWPNs per node (4 FC and 2 FCoE)
–2x10Gb != 4x8Gb in bandwidth terms
• A full migration from FC to FCoE needs to take this into account
The new “lsportfc” command will provide details of the
WWPNs in the system
FCoE – Interface Changes
View: lsportfc
– Captures current port status for FCoE/FC ports on the system
– Similar to lsportip for iSCSI
IBM_2076:cluster:superuser>lsportfc
id fc_io_port_id port_id type port_speed node_id node_name WWPN nportid status
0 1 1 fc 4Gb 23 tb28-0-1 500507680110497E 02E100 active
1 2 2 fc 4Gb 23 tb28-0-1 500507680120497E 02E000 active
2 3 3 fc 4Gb 23 tb28-0-1 500507680130497E 043E00 active
3 4 4 fc 4Gb 23 tb28-0-1 500507680140497E 04BE00 active
4 5 3 ethernet 10Gb 23 tb28-0-1 500507680150497E 040C0F active
5 6 4 ethernet 10Gb 23 tb28-0-1 500507680160497E 021003 active
IBM_2076:tbcluster-28:superuser>lsportfc 4
id 4
fc_io_port_id 5
port_id 3
type ethernet
port_speed 10Gb
node_id 23
node_name tb28-0-1
WWPN 500507680150497E
nportid 040C0F
status active
switch_WWPN 100000051E07F464
fpma 0E:FC:00:04:0C:0F
vlanid 100
fcf_MAC 00:05:73:C2:CA:F0
FCoE – Interface Changes
V6.4 provides both FCoE Target and Initiator functions
The SVC/Storwize V7000 FCoE interface can be used for the
following functionality:
–FC Host access to a Volume (via either FC or FCoE ports)
–FCoE Host access to a Volume (via either FC or FCoE ports)
–SVC/Storwize V7000 access (via either FC or FCoE ports) to an external
storage system FC accessed LUN
–SVC/Storwize V7000 access (via either FC or FCoE ports) to an external
storage system FCoE accessed LUN
–SVC/Storwize V7000 to another SVC/Storwize V7000 via any combination
of FC and FCoE
• Can dedicate FCoE ports for replication or use for host/storage access to allow
dedicating two FC ports for replication or direct connection of server HBA ports
FCoE – Support
[Diagram: two Storwize V7000 systems, each connected via 10Gb/s Ethernet ports to a Converged Switch B32 (FCF); SAN24B-4 (2498-B24) switches provide the FC fabric and ISL between them]
V6.4 supports remote copy/replication via FCoE
–Requires use of FCF and a full FC ISL
• Could use FCIP and routers as we do today
–DWDM based links are supported as well
–FCoE is not iSCSI; there is currently no native IP replication capability
All current bandwidth sizing and SVC/Storwize V7000
system sizing and planning for replication applies
FCoE – Support
FCoE – Resources
 FCIA Guide:
– http://www.fibrechannel.org/documents/doc_download/1-fcia-solution-guide
 IBM Red Paper:
– http://www.redbooks.ibm.com/redpapers/pdfs/redp4493.pdf
Non-Disruptive Volume Move
Non-Disruptive Volume Move Across I/O Groups
What this is:
 Allows SVC customers to move a Volume assigned
to one I/O group over to another I/O group without
disruption to the I/O between the Volume and the
Host
 Non-disruptive movement of the Volume requires
interaction with the host and its multi-pathing
software to ensure paths are active and available
during the move
Why it matters:
 Growing virtualization environments or
performance considerations require movement of
Volumes to other I/O groups better equipped to
meet the customer’s requirements
[Diagram: 1. Host with multi-path I/O to Volumes in I/O group 0 (two SVC nodes); 2. Volume moved to I/O group 1 (two SVC nodes); 3. Active paths to relocated Volumes]
Non-Disruptive Volume Move Across I/O Groups
[Diagram: 1. Host with multi-path I/O to Volumes in I/O group 0 (two SVC nodes); 2. Volume moved to I/O group 1 (two SVC nodes); 3. Active paths to relocated Volumes]
How it works:
 The Volume belongs to a single I/O Group,
referred to as the “caching I/O group”, and
all I/O is sent to the nodes in that I/O group
 The Volume is made accessible through one
or more additional I/O groups referred to as
“access I/O groups”
– Any host I/O which is sent to the access I/O group
will be forwarded back to the caching I/O group
 The “caching I/O group” is switched to the
desired I/O group
– Using a new command called “movevdisk”
– Host I/O to the original I/O group is now forwarded
to the new I/O group
 The host multi-pathing drivers are now
reconfigured to discover the additional
paths to the Volume on the new I/O group
– Some zoning changes may also be required
 Once the multi-pathing drivers have
discovered the new paths, access to the
Volume through the original I/O group can
be unconfigured
– The multi-pathing drivers can now be reconfigured
a second time to remove the now dead paths
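The sequence above can be sketched as a small state model. The class and method names here are hypothetical; on the real system the operation is driven by CLI commands such as the new "movevdisk", not by this code.

```python
# Minimal state sketch of the NDVM sequence described above.
# Hypothetical model, not the product's implementation.

class Volume:
    def __init__(self, caching_iogrp: int):
        self.caching = caching_iogrp  # the "caching I/O group"
        self.access = set()           # "access I/O groups" that forward I/O

    def add_access(self, iogrp: int):
        self.access.add(iogrp)        # host can now reach the Volume here too

    def move(self, new_iogrp: int):
        # movevdisk-equivalent: switch the caching I/O group; the old
        # group becomes an access group that forwards host I/O
        assert new_iogrp in self.access, "grant access before moving"
        self.access.discard(new_iogrp)
        self.access.add(self.caching)
        self.caching = new_iogrp

    def remove_access(self, iogrp: int):
        self.access.discard(iogrp)    # after multipath rediscovery, drop old paths

v = Volume(caching_iogrp=0)
v.add_access(1)      # step 1: make the Volume visible through I/O group 1
v.move(1)            # step 2: I/O group 0 now forwards to I/O group 1
v.remove_access(0)   # step 3: unconfigure access once multipath has settled
print(v.caching, v.access)   # 1 set()
```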
NDVM – Using the GUI Wizard
NDVM – Using the GUI Wizard
NDVM – Using the GUI Wizard
NDVM – Using the GUI Wizard
NDVM – Using the GUI Wizard
NDVM – Details
 A Volume which is in a Metro or Global Mirror relationship cannot
currently change its “caching I/O group”
 If a Volume in a FlashCopy mapping is moved, the “bitmaps” are left in
the original I/O group
– This will cause additional inter-node messaging to allow FlashCopy to operate
 The SCSI ID of the Volume to host mapping will usually change during
this procedure and it is not currently possible to select the new SCSI ID
– If the Volume is mapped to multiple servers, then the LUN may use different SCSI
IDs for each host
 The maximum number of paths per Volume (8) has not changed
– Customers who are already using 8 paths per Volume will not be able to use NDVM
because NDVM requires adding paths
 If the caching I/O group fails for any reason, the Volume will go offline
– Even if access I/O groups are configured
 This function can be used to change the preferred node for a Volume to
a different node in the same I/O group by first moving the Volume to a
second I/O group and then moving it back again
– System allows for selection of which node you want to use as the preferred node
when moving the Volume back to the original I/O group
– For Storwize V7000 this would require a clustered system configuration
– Note: The multi-pathing driver may not detect the change without a reboot
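The path-count restriction above is simple arithmetic: during the move the Volume is exposed through both I/O groups at once, so a host already at the 8-path maximum has no headroom. The per-group path counts below are illustrative assumptions.

```python
# Sketch of the transient path-count constraint during an NDVM move.
# The numbers of paths per I/O group are assumptions for illustration.

MAX_PATHS_PER_VOLUME = 8

def can_move_nondisruptively(current_paths: int, paths_added_by_new_iogrp: int) -> bool:
    """True if the transient path count stays within the supported maximum."""
    return current_paths + paths_added_by_new_iogrp <= MAX_PATHS_PER_VOLUME

print(can_move_nondisruptively(4, 4))  # True: 4 existing + 4 new = 8
print(can_move_nondisruptively(8, 4))  # False: already at the maximum
```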
NDVM – Host Support/Restrictions
 At initial launch the following restrictions are likely to be in force
– No iSCSI Host support
– No support for Host based clustering
• MSCS, VMWare Cluster, HACMP, etc.
 There will be restrictions on what operating systems are
supported at GA
– Currently supported
• SLES 11
• RHEL 6.1 (probably 6.2 and 6.3 as well)
– Should be supported at or shortly after GA
• AIX (SDD Fix required, round robins I/Os unless rebooted)
• VMWare without VAAI Support (awaiting test)
• VMWare with VAAI (needs SVC code changes)
• W2K8 (SDD Fix required, can’t delete old paths)
 Review the support matrix at GA on June 15th for official support status
Compression
Real-time Compression – Basics
 Compression is an alternative to Thin Provisioning
– They both allow you to use less physical space on disk than is presented to
the host
 A Compressed Volume is “a kind of” Thin Provisioning
– Only uses physical storage to store compressed data
– Volume can be built from a pool using internal or external MDisks
 Compression requires the I/O group hardware to be one of the
following platforms:
– SVC Model 2145-CF8/CG8 Nodes
– Storwize V7000 Model 2076-1xx/3xx Control Enclosure
 Can use Volume mirroring to
convert to a Compressed Volume
Real-time Compression – Basics
 Maximum of 200 Compressed Volumes per I/O group will
initially be supported
 Licensing is as follows:
– For SVC it is per TB of Volume capacity as seen by a host
• E.g. fifty 100GB Compressed Volumes require a 5TB license
– For Storwize V7000 it is per enclosure
• E.g. a customer with a 4-enclosure system virtualizing an external disk
system with 2 enclosures would require a 6-enclosure license
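The two licensing models reduce to simple arithmetic; the helper names below are hypothetical and only restate the worked examples above.

```python
# Worked sketch of the two Real-time Compression licensing models.

def svc_license_tb(volume_sizes_gb):
    """SVC: licensed per TB of Volume capacity as seen by the host."""
    return sum(volume_sizes_gb) / 1000  # decimal TB, as in the example above

def v7000_license_enclosures(internal_enclosures, virtualized_enclosures):
    """Storwize V7000: licensed per enclosure, internal plus virtualized."""
    return internal_enclosures + virtualized_enclosures

print(svc_license_tb([100] * 50))      # fifty 100GB Volumes -> 5.0 TB
print(v7000_license_enclosures(4, 2))  # 4 internal + 2 virtualized -> 6
```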
 Note: Creating the first Compressed Volume in an I/O
group will instantly dedicate CPU and memory resources
from the nodes/node canisters in that I/O group to the
compression engine
– So planning/sizing should be done before implementing in a production
environment
 More detail on this and how compression works will be provided on the June 13th call tomorrow
[Diagram: SVC software stack from Clients down to Storage: Front End, Remote Copy, Cache, FlashCopy, Mirroring, Thin Provisioning, Virtualization, Back End; the Random Access Compression Engine™ (RACE) S/W component sits within the SVC S/W stack]
 All copy services will interoperate
with compressed Volumes
– All copy services will be working with
uncompressed data
• No real changes in sizing and planning
for FlashCopy or replication
– Bandwidth sizing for replication same for
compressed/non-compressed Volumes
– Compression engine resources
allocated per I/O group need to be
considered in sizing
 All Thin Provisioning properties
apply to compressed Volumes
– Virtual capacity, real capacity, used
capacity, etc.
 New property introduced
– Uncompressed capacity
• Provides an indication of how much
uncompressed data has been written to
the Volume
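The new uncompressed-capacity property lets you derive a savings figure. The ratio below is the obvious calculation and an assumption on my part, not necessarily the product's documented formula for the GUI savings number.

```python
# Sketch: relate used capacity and the new "uncompressed capacity"
# property to a savings fraction. Assumed formula, for illustration.

def compression_savings(used_gb: float, uncompressed_gb: float) -> float:
    """Fraction of space saved relative to the uncompressed data written."""
    if uncompressed_gb == 0:
        return 0.0
    return 1.0 - used_gb / uncompressed_gb

print(compression_savings(40, 100))  # 0.6, i.e. 60% savings
```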
Real-time Compression – Basics
Real-time Compression – GUI Support
 GUI Displays Compression Savings on a Volume, Pool and
System basis:
Real-time Compression – GUI Support
 GUI Performance panel shows separate CPU utilization for
Compression and System workloads
Real-time Compression – Sizing Tools
 The following tools will be available to support customers
deploying Compression
– Disk Magic
• Will ask the user to provide an “Effectiveness” value (similar to Easy Tier)
– Available later this year
– Capacity Magic
• Will ask the user to provide a compression ratio to complete the sizing
– Comprestimator
• A tool to estimate the compression ratio which is achievable for a given set
of data
• Loaded on customer’s hosts
Real-time Compression – 45 Day Trial License
 45 Day Free Trial License of Compression Function
– Included in the software, so simply activate using the GUI by setting the
license to something other than zero to avoid errors in the event log
Storwize V7000 Clustered System
Scale the Storwize V7000 Multiple Ways
 An I/O Group is a control
enclosure and its associated
SAS attached expansion
enclosures
 Clustered system can consist
of 2-4 I/O Groups
 Scale capacity/throughput 4x
– Up to 1.4PB raw capacity or 960
drives in two 42U racks
 Non-disruptive upgrades
– From smallest to largest
configurations
– Purchase hardware only when
you need it
• No extra feature to order and no
extra charge for a clustered
system
• Configure one system using
USB stick and then add second
using GUI
 Virtualize storage arrays
behind Storwize V7000 for even
greater capacity and
throughput
[Diagram: a one-I/O-Group Storwize V7000 system (control enclosure plus expansion enclosures) expands by adding expansion enclosures, or clusters with additional control enclosures to form a 2-4 I/O Group clustered system. An I/O Group is a control enclosure and its associated SAS connected expansion enclosures.]
– No interconnection of SAS chains between control enclosures; control enclosures communicate via FC and must use all 8 FC ports on the enclosures
– NOTE: No SCORE/RPQ required
Storwize V7000 Unified Scaling Unchanged
 Storwize V7000 Unified can scale
disk capacity by adding up to nine
expansion enclosures to the
standard control enclosure
 Virtualize external storage arrays behind Storwize V7000 Unified for
even greater capacity
– CIFS not supported currently with
externally virtualized storage
 CAN NOT horizontally scale out by
adding another Storwize V7000
control enclosure and associated
expansion enclosures
 Nor can an additional Unified system be added
– If a customer has a clustered Storwize V7000 system today, they will not be
able to upgrade to a Unified system when the MES is available, until we
support this in a future release
 V6.4 will not be picked up by Unified, so Unified will not currently
benefit from the new functions discussed today
[Diagram: a one-I/O-Group Storwize V7000 Unified system expands only by adding expansion enclosures to its control enclosure; a 2-4 I/O Group clustered Unified system is NOT SUPPORTED. An I/O Group is a control enclosure and its associated SAS connected expansion enclosures.]
Storwize V7000 – Pre-V6.4 behavior
[Diagram (all cabling shown is logical): two I/O Groups on the SAN, each a control enclosure (two node canisters) with three expansion enclosures. Storage Pools A and C each contain MDisks from a single I/O Group; Storage Pool B contains MDisks from both.]
– Default behavior is a storage pool per I/O Group per drive class
– Volumes are assigned to the I/O Group that owns the most MDisks in the pool
– Volumes are assigned to I/O Group 0 if the pool has an equal number of MDisks from each I/O Group
• Expansion enclosures are connected through one control
enclosure and can be part of only one I/O group
• Storage pools can contain MDisks from more than one I/O
group
• Inter-control enclosure communication happens over the SAN
• All MDisks are accessed via owning I/O group
• A Volume is serviced by only one I/O group
Storwize V7000 – V6.4 and later behavior
• Expansion enclosures are connected through one control
enclosure and can be part of only one I/O group
• Storage pools can contain MDisks from more than one I/O
group
• Inter-control enclosure communication happens over the SAN
• All MDisks are accessed via owning I/O group
• A Volume is serviced by only one I/O group
[Diagram (all cabling shown is logical): the same two-I/O-Group layout as the previous slide. Default behavior is still a storage pool per I/O Group per drive class, but Volume ownership is now balanced across node canisters in all I/O Groups when a pool contains MDisks from multiple I/O Groups.]
SVC and Storwize V7000 Interop
SVC to Storwize V7000 Remote Copy
 When V6.3 GA’d we provided the ability to replicate
between SVC and Storwize V7000 systems
 V6.3 introduced a new cluster property called “layer”
– SVC is always in “replication layer” mode
– Storwize V7000 is either in “replication layer” mode or “storage
layer” mode
• Storwize V7000 is in “storage layer” mode by default
• Switch to “replication layer” using “svctask chcluster -layer replication”
– Can only be changed via CLI
 “Replication layer” clusters can use storage layer clusters
as storage systems to virtualize
– With V6.4 you can now virtualize a Storwize V7000 with layer=storage
behind another Storwize V7000 with layer=replication
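The layer rules above reduce to two predicates: remote copy partnerships exist between replication-layer systems, and a replication-layer system can virtualize a storage-layer system as back-end storage. The helper names below are hypothetical, and this is a rule-of-thumb restatement of the slide, not a complete interoperability matrix.

```python
# Rule-of-thumb sketch of the "layer" model described above.
# Hypothetical helpers restating the slide's rules.

def can_partner(layer_a: str, layer_b: str) -> bool:
    """Remote copy partnerships exist only between replication-layer systems."""
    return layer_a == "replication" and layer_b == "replication"

def can_virtualize(front_layer: str, back_layer: str) -> bool:
    """A replication-layer system can use a storage-layer system as storage."""
    return front_layer == "replication" and back_layer == "storage"

print(can_partner("replication", "replication"))  # SVC <-> SWV7K in replication layer
print(can_partner("replication", "storage"))      # not a valid partnership
print(can_virtualize("replication", "storage"))   # SWV7K behind SWV7K (V6.4+)
```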
Remote Copy – Configuration Example
[Diagram: in the replication layer, SVC V6.4.x Cluster A holds RC_partnership_1 with SVC V6.3.x Cluster B and RC_partnership_2 with SWV7K V6.4.x Cluster C (layer=replication); in the storage layer, SWV7K V6.4.x Cluster D (layer=storage) and SWV7K V6.x Cluster E (layer=storage) are virtualized as back-end storage]
NOTE: To provision SWV7K
storage to another SWV7K with
layer=replication requires that
both SWV7Ks be running V6.4
or later software
Miscellaneous
Miscellaneous
Space Efficient Volume grain size
 Due to performance considerations and interaction with Easy Tier, the
default grain size of a Thin Provisioned Volume has been changed to
256KB rather than 32KB
– Also avoids host I/Os of up to 256K being decomposed into smaller I/Os
to the MDisks
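The benefit of the larger grain is simple arithmetic: a host I/O touches ceil(size/grain) grains, so a 256KB host write touches one 256KB grain instead of eight 32KB grains.

```python
# Arithmetic behind the grain-size change: fewer grains touched per host
# I/O means fewer decomposed I/Os to the MDisks.

import math

def grains_touched(io_kb: int, grain_kb: int) -> int:
    """Number of Thin Provisioning grains a host I/O of io_kb spans."""
    return math.ceil(io_kb / grain_kb)

print(grains_touched(256, 32))   # 8 with the old 32KB default
print(grains_touched(256, 256))  # 1 with the new 256KB default
```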
SCSI-3 Persistent Reserve
 This release extends the existing persistent reservation support to add
additional persistent reserve functions
– PR reservation type “Write Exclusive All Registrants”
– PR reservation type “Exclusive Access All Registrants”
– Report capabilities service action of the “Persistent Reserve In” command
 These additional persistent reserve functions will allow GPFS to use
persistent reserves on a Storwize V7000 or SVC system
Miscellaneous
 V6.4 will support direct attached hosts via FC with SCORE/RPQ only
– Full support to be added in later release
 Requires changes to host properties
– In the current release all direct attach host status will report as “degraded”
– When direct attach is fully supported, host status will report as “active/inactive”
– Status will be “offline” if not connected
 The status field will be “online” if the host has an active login in
each I/O group where it can see Volumes mapped to it
 Direct Attach hosts can only use FC ports that are not required for
intra-cluster connectivity or SAN use for hosts, disk or replication
– In a single control enclosure Storwize V7000 there will be 8 ports available
– A clustered Storwize V7000 will not currently support direct attach FC hosts
 The view “lsportfc” will report direct or fabric attachment of the port
 No changes to “lshbaportcandidate, mkhost or addhostport” cmds
Miscellaneous
 FlashCopy GUI panel now displays timestamp showing
when mapping was started
Miscellaneous
 Ability to create multiple Volumes more quickly
Miscellaneous
 SVC and Storwize V7000 software upgrade has a new “prepare”
phase whenever upgrading from V6.4 to a later release
– Initially this will not do anything but is part of future plans related to the cache
architecture
– We have introduced new CCU states:
• Preparing, prepared, prepare_failed
• For information only; you may see these states and for now can ignore them
 New quorum scanning design to try and recover from corrupt
quorum data caused by drive faults
– Quorum will regularly be read and validated
– Invalid quorum will ideally be moved to a new device
– If no new device available, quorum will be re-written
 Software upgrade package size increasing to about 500MB from
about 340MB
 TPC stats collection for internal MDisks will show a response time
in V6.3.0.2 and later
The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both:
IBM, IBM Logo, on demand business logo, Enterprise Storage Server, xSeries, BladeCenter, eServer, ServeRAID and
FlashCopy, System Storage, Tivoli, Easy Tier, Active Cloud Engine
The following are trademarks or registered trademarks of other companies.
Intel is a trademark of the Intel Corporation in the United States and other countries.
Java and all Java-related trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc., in the United States and other countries.
Lotus, Notes, and Domino are trademarks or registered trademarks of Lotus Development Corporation.
Linux is a registered trademark of Linus Torvalds.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation.
SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Storwize and the Storwize logo are trademarks or registered trademarks of Storwize Inc., an IBM Company.
* All other products may be trademarks or registered trademarks of their respective companies.
Notes:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual
throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the
performance ratios stated here.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results
they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information
may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.
The information on the new products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information on
the new products is for informational purposes only and may not be incorporated into any contract. The information on the new products is not a commitment, promise, or
legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for our products remains at our
sole discretion.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot
confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
This presentation and the claims outlined in it were reviewed for compliance with US law. Adaptations of these claims for use in other geographies must be reviewed by the
local country counsel for compliance with local laws.
Legal Information and Trademarks
EMC Atmos for service providers
Cisco prime network 4.1 technical overview
Designing your xen desktop 7.5 environment with training guide
Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm power ha v7 technical deep dive workshop
Power8 hardware technical deep dive workshop
Power systems virtualization with power kvm
Emc data domain technical deep dive workshop
Ibm flash system v9000 technical deep dive workshop
Emc vnx2 technical deep dive workshop
Emc isilon technical deep dive workshop
Emc ecs 2 technical deep dive workshop
Emc vplex deep dive
Cisco mds 9148 s training workshop
Cisco cloud computing deploying openstack
Se training storage grid webscale technical overview
Vmware 2015 with vsphereHigh performance application platforms

Recently uploaded (20)

PPT
Galois Field Theory of Risk: A Perspective, Protocol, and Mathematical Backgr...
PPTX
Microsoft User Copilot Training Slide Deck
PDF
NewMind AI Weekly Chronicles – August ’25 Week IV
PDF
MENA-ECEONOMIC-CONTEXT-VC MENA-ECEONOMIC
PDF
Comparative analysis of machine learning models for fake news detection in so...
PDF
AI.gov: A Trojan Horse in the Age of Artificial Intelligence
PDF
Data Virtualization in Action: Scaling APIs and Apps with FME
PDF
giants, standing on the shoulders of - by Daniel Stenberg
PDF
Planning-an-Audit-A-How-To-Guide-Checklist-WP.pdf
PDF
Transform-Your-Streaming-Platform-with-AI-Driven-Quality-Engineering.pdf
PDF
Accessing-Finance-in-Jordan-MENA 2024 2025.pdf
PDF
Co-training pseudo-labeling for text classification with support vector machi...
PDF
5-Ways-AI-is-Revolutionizing-Telecom-Quality-Engineering.pdf
PDF
SaaS reusability assessment using machine learning techniques
PDF
Transform-Quality-Engineering-with-AI-A-60-Day-Blueprint-for-Digital-Success.pdf
PDF
sbt 2.0: go big (Scala Days 2025 edition)
PDF
Enhancing plagiarism detection using data pre-processing and machine learning...
PDF
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
PPTX
future_of_ai_comprehensive_20250822032121.pptx
PDF
The-Future-of-Automotive-Quality-is-Here-AI-Driven-Engineering.pdf
Galois Field Theory of Risk: A Perspective, Protocol, and Mathematical Backgr...
Microsoft User Copilot Training Slide Deck
NewMind AI Weekly Chronicles – August ’25 Week IV
MENA-ECEONOMIC-CONTEXT-VC MENA-ECEONOMIC
Comparative analysis of machine learning models for fake news detection in so...
AI.gov: A Trojan Horse in the Age of Artificial Intelligence
Data Virtualization in Action: Scaling APIs and Apps with FME
giants, standing on the shoulders of - by Daniel Stenberg
Planning-an-Audit-A-How-To-Guide-Checklist-WP.pdf
Transform-Your-Streaming-Platform-with-AI-Driven-Quality-Engineering.pdf
Accessing-Finance-in-Jordan-MENA 2024 2025.pdf
Co-training pseudo-labeling for text classification with support vector machi...
5-Ways-AI-is-Revolutionizing-Telecom-Quality-Engineering.pdf
SaaS reusability assessment using machine learning techniques
Transform-Quality-Engineering-with-AI-A-60-Day-Blueprint-for-Digital-Success.pdf
sbt 2.0: go big (Scala Days 2025 edition)
Enhancing plagiarism detection using data pre-processing and machine learning...
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
future_of_ai_comprehensive_20250822032121.pptx
The-Future-of-Automotive-Quality-is-Here-AI-Driven-Engineering.pdf

3379930

 Additional support for native FCoE hosts and controllers will be added over time
 SVC Stretched Cluster supports use of the FCoE ports
 iSCSI and FCoE can be used on the same 10Gb/s ports at the same time if required
8 © Copyright IBM Corporation, 2011
FCoE – DCBx Configuration
 The SVC and Storwize V7000 10Gb/s Ethernet ports will use the following classes of service:
– NIC Class will carry iSCSI traffic
– FCoE Class will carry FCoE traffic
– The iSCSI Class is not currently being used but may be used at some point in the future
9 © Copyright IBM Corporation, 2011
FCoE – Configuration Rules
 VLAN tagging is not supported
 The FCF and the 10Gb/s ports MUST be on the same VLAN for it to be a supported configuration
 A single FCoE port is not able to discover multiple FCFs
– If multiple FCFs are discovered, the system will use the first one in the list
• Which may not be the one that the customer wants to use
10 © Copyright IBM Corporation, 2011
FCoE – WWPN Changes
 Each hardware platform has a range of WWPNs associated with it:
– SVC: 5005076801xxxxxx
– Storwize V7000: 5005076802xxxxxx
 When a customer accepts a new hardware configuration using the “variable hardware” technology, all WWPNs will be re-allocated
– In most cases this won’t happen, but in future configurations it will become more likely
 WWPNs are assigned in the following order from within the assigned range:
1. Fibre Channel
2. FCoE
3. SAS
4. Other internal WWPNs
11 © Copyright IBM Corporation, 2011
FCoE – Interface Changes
 SVC shares all of the FC WWPNs between the FC and FCoE physical ports
– 1 WWPN per 10GbE port
– Maximum 6 WWPNs per node (4 FC and 2 FCoE)
– 2x10Gb != 4x8Gb
• Full migration from FC to FCoE only needs to take this into account
 The new “lsportfc” command will provide details of the WWPNs in the system
12 © Copyright IBM Corporation, 2011
FCoE – Interface Changes
 View: lsportfc
– Captures current port status for FCoE/FC ports on the system
– Similar to lsportip for iSCSI

IBM_2076:cluster:superuser>lsportfc
id fc_io_port_id port_id type     port_speed node_id node_name WWPN             nportid status
0  1             1       fc       4Gb        23      tb28-0-1  500507680110497E 02E100  active
1  2             2       fc       4Gb        23      tb28-0-1  500507680120497E 02E000  active
2  3             3       fc       4Gb        23      tb28-0-1  500507680130497E 043E00  active
3  4             4       fc       4Gb        23      tb28-0-1  500507680140497E 04BE00  active
4  5             3       ethernet 10Gb       23      tb28-0-1  500507680150497E 040C0F  active
5  6             4       ethernet 10Gb       23      tb28-0-1  500507680160497E 021003  active

IBM_2076:tbcluster-28:superuser>lsportfc 4
id 4
fc_io_port_id 5
port_id 3
type ethernet
port_speed 10Gb
node_id 23
node_name tb28-0-1
WWPN 500507680150497E
nportid 040C0F
status active
switch_WWPN 100000051E07F464
fpma 0E:FC:00:04:0C:0F
vlanid 100
fcf_MAC 00:05:73:C2:CA:F0
13 © Copyright IBM Corporation, 2011
FCoE – Support
 V6.4 provides both FCoE Target and Initiator functions
 The SVC/Storwize V7000 FCoE interface can be used for the following functionality:
– FC host access to a Volume (via either FC or FCoE ports)
– FCoE host access to a Volume (via either FC or FCoE ports)
– SVC/Storwize V7000 access (via either FC or FCoE ports) to an FC-accessed LUN on an external storage system
– SVC/Storwize V7000 access (via either FC or FCoE ports) to an FCoE-accessed LUN on an external storage system
– SVC/Storwize V7000 to another SVC/Storwize V7000 via any combination of FC and FCoE
• Can dedicate FCoE ports for replication, or use them for host/storage access to allow dedicating two FC ports for replication or direct connection of server HBA ports
14 © Copyright IBM Corporation, 2011
FCoE – Support
(diagram: two Storwize V7000 systems replicating across Converged Switch B32 FCoE switches and SAN24B-4 Fibre Channel fabrics)
 V6.4 supports remote copy/replication via FCoE
– Requires use of an FCF and a full FC ISL
• Could use FCIP and routers as we do today
– DWDM-based links are supported as well
– FCoE is not iSCSI; there is currently no native IP replication capability
 All current bandwidth sizing and SVC/Storwize V7000 system sizing and planning for replication applies
15 © Copyright IBM Corporation, 2011
FCoE – Resources
 FCIA Guide:
– http://www.fibrechannel.org/documents/doc_download/1-fcia-solution-guide
 IBM Red Paper:
– http://www.redbooks.ibm.com/redpapers/pdfs/redp4493.pdf
16 © Copyright IBM Corporation, 2011
Non-Disruptive Volume Move
17 © Copyright IBM Corporation, 2011
Non-Disruptive Volume Move Across I/O Groups
What this is:
 Allows SVC customers to move a Volume assigned to one I/O group over to another I/O group without disruption to the I/O between the Volume and the host
 Non-disruptive movement of the Volume requires interaction with the host and its multi-pathing software to ensure paths are active and available during the move
Why it matters:
 Growing virtualization environments or performance considerations require movement of Volumes to other I/O groups better equipped to meet the customer’s requirements
(diagram: 1. the host multi-paths I/O to Volumes in I/O group 0; 2. the Volume moves to I/O group 1; 3. active paths follow the relocated Volume)
18 © Copyright IBM Corporation, 2011
Non-Disruptive Volume Move Across I/O Groups
How it works:
 The Volume belongs to a single I/O Group, referred to as the “caching I/O group”, and all I/O is sent to the nodes in that I/O group
 The Volume is made accessible through one or more additional I/O groups, referred to as “access I/O groups”
– Any host I/O which is sent to an access I/O group will be forwarded back to the caching I/O group
 The “caching I/O group” is switched to the desired I/O group
– Using a new command called “movevdisk”
– Host I/O to the original I/O group is now forwarded to the new I/O group
 The host multi-pathing drivers are now reconfigured to discover the additional paths to the Volume on the new I/O group
– Some zoning changes may also be required
 Once the multi-pathing drivers have discovered the new paths, access to the Volume through the original I/O group can be unconfigured
– The multi-pathing drivers can now be reconfigured a second time to remove the now-dead paths
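The sequence above maps onto the CLI roughly as follows. Only movevdisk is named on the slide; the addvdiskaccess/rmvdiskaccess commands for managing access I/O groups, and all object names, reflect the V6.4 syntax as we understand it, so verify them against the V6.4 Command-Line Interface User's Guide before use:

```shell
# Step 1: make the Volume accessible through the target I/O group as well
svctask addvdiskaccess -iogrp io_grp1 myvolume
#   (rescan host multi-pathing so the additional paths are discovered)

# Step 2: switch the caching I/O group with the new movevdisk command
svctask movevdisk -iogrp io_grp1 myvolume
#   (rescan host multi-pathing again; I/O now flows to the new I/O group)

# Step 3: unconfigure access through the original I/O group
svctask rmvdiskaccess -iogrp io_grp0 myvolume
#   (reconfigure host multi-pathing a second time to remove the dead paths)
```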
19 © Copyright IBM Corporation, 2011
NDVM – Using the GUI Wizard
(slides 19–23 are screenshots of the GUI wizard)
24 © Copyright IBM Corporation, 2011
NDVM – Details
 A Volume which is in a Metro or Global Mirror relationship cannot currently change its “caching I/O group”
 If a Volume in a FlashCopy mapping is moved, the “bitmaps” are left in the original I/O group
– This will cause additional inter-node messaging to allow FlashCopy to operate
 The SCSI ID of the Volume-to-host mapping will usually change during this procedure, and it is not currently possible to select the new SCSI ID
– If the Volume is mapped to multiple servers, the LUN may use different SCSI IDs for each host
 The maximum number of paths per Volume (8) has not changed
– Customers who are already using 8 paths per Volume will not be able to use NDVM because NDVM requires adding paths
 If the caching I/O group fails for any reason, the Volume will go offline
– Even if access I/O groups are configured
 This function can be used to change the preferred node for a Volume to a different node in the same I/O group by first moving the Volume to a second I/O group and then moving it back again
– The system allows selection of which node to use as the preferred node when moving the Volume back to the original I/O group
– For Storwize V7000 this would require a clustered system configuration
– Note: The multi-pathing driver may not detect the change without a reboot
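The preferred-node round trip described in the last bullet can be sketched with the same command; the -node flag and object names are illustrative of the V6.4 syntax and should be confirmed against the CLI guide:

```shell
# Hop to a second I/O group, then move back, choosing the new preferred node
svctask movevdisk -iogrp io_grp1 myvolume
svctask movevdisk -iogrp io_grp0 -node node2 myvolume
```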
25 © Copyright IBM Corporation, 2011
NDVM – Host Support/Restrictions
 At initial launch the following restrictions are likely to be in force:
– No iSCSI host support
– No support for host-based clustering
• MSCS, VMware Cluster, HACMP, etc.
 There will be restrictions on what operating systems are supported at GA
– Currently supported:
• SLES 11
• RHEL 6.1 (probably 6.2 and 6.3 as well)
– Should be supported at or shortly after GA:
• AIX (SDD fix required; round-robins I/Os unless rebooted)
• VMware without VAAI support (awaiting test)
• VMware with VAAI (needs SVC code changes)
• W2K8 (SDD fix required; can’t delete old paths)
 Review the support matrix at GA on June 15th for official support status
26 © Copyright IBM Corporation, 2011
Compression
27 © Copyright IBM Corporation, 2011
Real-time Compression – Basics
 Compression is an alternative to Thin Provisioning
– They both allow you to use less physical space on disk than is presented to the host
 A Compressed Volume is “a kind of” Thin Provisioned Volume
– Only uses physical storage to store compressed data
– Volume can be built from a pool using internal or external MDisks
 Compression requires the I/O group hardware be one of the following platforms:
– SVC Model 2145-CF8/CG8 nodes
– Storwize V7000 Model 2076-1xx/3xx control enclosures
 Can use Volume mirroring to convert an existing Volume to a Compressed Volume
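Both routes (creating a new Compressed Volume, or converting via Volume mirroring) look roughly like this on the CLI; flag spellings and object names are illustrative of the V6.4 syntax and should be checked against the CLI guide:

```shell
# New compressed volume: -rsize/-autoexpand make it space-efficient,
# -compressed routes writes through the compression engine
svctask mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb \
    -rsize 2% -autoexpand -compressed -name vol_comp

# Convert an existing volume: add a compressed mirror copy, let it
# synchronize, then remove the original copy (copy 0)
svctask addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -compressed vol01
svctask rmvdiskcopy -copy 0 vol01
```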
28 © Copyright IBM Corporation, 2011
Real-time Compression – Basics
 A maximum of 200 Compressed Volumes per I/O group will initially be supported
 Licensing is as follows:
– For SVC it is per TB of Volume capacity as seen by a host
• E.g. fifty 100GB Compressed Volumes need a 5TB license
– For Storwize V7000 it is per enclosure
• E.g. a customer with a 4-enclosure system who is virtualizing an external disk system with 2 enclosures would require a 6-enclosure license
 Note: Creating the first Compressed Volume in an I/O group will instantly dedicate CPU and memory resources from the nodes/node canisters in that I/O group to the compression engine
– So planning/sizing should be done before implementing in a production environment
 More detail on this and how compression works will be provided on the June 13th call tomorrow
29 © Copyright IBM Corporation, 2011
Real-time Compression – Basics
(diagram: the Random Access Compression Engine™ (RACE) software component sits in the SVC software stack between the clients/front end, Remote Copy, cache, FlashCopy, mirroring, Thin Provisioning and virtualization layers and the storage back end)
 All copy services will interoperate with compressed Volumes
– All copy services will be working with uncompressed data
• No real changes in sizing and planning for FlashCopy or replication
– Bandwidth sizing for replication is the same for compressed/non-compressed Volumes
– Compression engine resources allocated per I/O group need to be considered in sizing
 All Thin Provisioning properties apply to compressed Volumes
– Virtual capacity, real capacity, used capacity, etc.
 A new property is introduced
– Uncompressed capacity
• Provides an indication of how much uncompressed data has been written to the Volume
30 © Copyright IBM Corporation, 2011
Real-time Compression – GUI Support
 GUI displays compression savings on a Volume, Pool and System basis

31 © Copyright IBM Corporation, 2011
Real-time Compression – GUI Support
 GUI Performance panel shows separate CPU utilization for Compression and System workloads
32 © Copyright IBM Corporation, 2011
Real-time Compression – Sizing Tools
 The following tools will be available to support customers deploying Compression:
– Disk Magic
• Will ask the user to provide an “Effectiveness” value (similar to Easy Tier)
– Available later this year
– Capacity Magic
• Will ask the user to provide a compression ratio to complete the sizing
– Comprestimator
• A tool to estimate the compression ratio which is achievable for a given set of data
• Loaded on customer’s hosts
33 © Copyright IBM Corporation, 2011
Real-time Compression – 45-Day Trial License
 45-day free trial license of the compression function
– Included in the software, so simply activate it using the GUI by setting the license to something other than zero to avoid errors in the event log
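Activation can also be done from the CLI with chlicense; the -compression parameter and the values below are illustrative of the V6.4 syntax (units are enclosures on Storwize V7000 and TB on SVC), so check the CLI guide for your platform:

```shell
# Storwize V7000: compression is licensed per enclosure
svctask chlicense -compression 4

# SVC: compression is licensed per TB of Volume capacity as seen by hosts
# svctask chlicense -compression 5

# Confirm the active license settings
svcinfo lslicense
```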
34 © Copyright IBM Corporation, 2011
Storwize V7000 Clustered System
35 © Copyright IBM Corporation, 2011
Scale the Storwize V7000 Multiple Ways
 An I/O Group is a control enclosure and its associated SAS-attached expansion enclosures
 A clustered system can consist of 2-4 I/O Groups
 Scale capacity/throughput 4x
– Up to 1.4PB raw capacity or 960 drives in two 42U racks
 Non-disruptive upgrades
– From smallest to largest configurations
– Purchase hardware only when you need it
• No extra feature to order and no extra charge for a clustered system
• Configure one system using the USB stick and then add the second using the GUI
 Virtualize storage arrays behind Storwize V7000 for even greater capacity and throughput
(diagram: a one-I/O-Group system expands with expansion enclosures, then clusters to a 2-4 I/O Group system by adding control enclosures)
 There is no interconnection of SAS chains between control enclosures, as control enclosures communicate via FC and must use all 8 FC ports on the enclosures
 NOTE: No SCORE/RPQ required
36 © Copyright IBM Corporation, 2011
Storwize V7000 Unified Scaling Unchanged
 Storwize V7000 Unified can scale disk capacity by adding up to nine expansion enclosures to the standard control enclosure
 Virtualize external storage arrays behind Storwize V7000 Unified for even greater capacity
– CIFS is not currently supported with externally virtualized storage
 CANNOT horizontally scale out by adding another Storwize V7000 control enclosure and associated expansion enclosures
– Nor an additional Unified system
– If a customer has a clustered Storwize V7000 system today, they will not be able to upgrade to a Unified system when the MES is available until we support this in a future release
 V6.4 won’t be picked up by Unified, so Unified won’t currently benefit from the new functions discussed today
(diagram: a one-I/O-Group Unified system can add expansion enclosures; a 2-4 I/O Group clustered Unified system is NOT SUPPORTED)
37 © Copyright IBM Corporation, 2011
Storwize V7000 – Pre-V6.4 Behavior
• Expansion enclosures are connected through one control enclosure and can be part of only one I/O group
• Storage pools can contain MDisks from more than one I/O group
• Inter-control enclosure communication happens over the SAN
• All MDisks are accessed via the owning I/O group
• A Volume is serviced by only one I/O group
(diagram: two control enclosures and their expansion enclosures form I/O Groups 0 and 1; storage pools A, B and C draw MDisks from one or both I/O Groups; all cabling shown is logical)
 Default behavior is a storage pool per I/O Group per drive class
 Volumes are assigned to the I/O Group that owns the most MDisks in the pool
– Volumes are assigned to I/O Group 0 if the pool has an equal number of MDisks from each I/O Group
38 © Copyright IBM Corporation, 2011
Storwize V7000 – V6.4 and Later Behavior
• Expansion enclosures are connected through one control enclosure and can be part of only one I/O group
• Storage pools can contain MDisks from more than one I/O group
• Inter-control enclosure communication happens over the SAN
• All MDisks are accessed via the owning I/O group
• A Volume is serviced by only one I/O group
(diagram: same layout as the previous slide; all cabling shown is logical)
 Default behavior is a storage pool per I/O Group per drive class
 Volume ownership is balanced across node canisters in all I/O Groups when a pool contains MDisks from multiple I/O Groups
39 © Copyright IBM Corporation, 2011
SVC and Storwize V7000 Interop
40 © Copyright IBM Corporation, 2011
SVC to Storwize V7000 Remote Copy
 When V6.3 GA’d, we provided the ability to replicate between SVC and Storwize V7000 systems
 V6.3 introduced a new cluster property called “layer”
– SVC is always in “replication layer” mode
– Storwize V7000 is either in “replication layer” mode or “storage layer” mode
• Storwize V7000 is in “storage layer” mode by default
• Switch to “replication layer” using “svctask chcluster -layer replication”
– Can only be changed via the CLI
 “Replication layer” clusters can use “storage layer” clusters as storage systems to virtualize
– With V6.4 you can now virtualize a Storwize V7000 with layer=storage behind another Storwize V7000 with layer=replication
41 © Copyright IBM Corporation, 2011
Remote Copy – Configuration Example
(diagram: in the replication layer, SVC V6.4.x Cluster A, SVC V6.3.x Cluster B and Storwize V7000 V6.4.x Cluster C (layer=replication) are joined by RC_partnership_1 and RC_partnership_2; in the storage layer, Storwize V7000 V6.4.x Cluster D and Storwize V7000 V6.x Cluster E (both layer=storage) are virtualized behind them)
NOTE: To provision Storwize V7000 storage to another Storwize V7000 with layer=replication requires that both Storwize V7000s be running V6.4 or later software
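Setting up a partnership like RC_partnership_2 above would look roughly like this on the Storwize V7000 CLI; the bandwidth value and cluster name are illustrative, and the exact parameters should be checked against the V6.4 CLI guide:

```shell
# On the Storwize V7000 that will replicate: leave storage layer
svctask chcluster -layer replication

# Create the partnership with the peer cluster (run the matching
# mkpartnership on the peer as well); bandwidth is in MBps
svctask mkpartnership -bandwidth 200 ClusterA
svcinfo lspartnership
```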
42 © Copyright IBM Corporation, 2011
Miscellaneous
43 © Copyright IBM Corporation, 2011
Miscellaneous
Space-Efficient Volume grain size
 Due to performance considerations and interaction with Easy Tier, the default grain size of a Thin Provisioned Volume has been changed to 256KB rather than 32KB
– This also avoids host I/Os of up to 256K being decomposed into smaller I/Os to MDisks
SCSI-3 Persistent Reserve
 This release extends the existing persistent reservation support to add additional persistent reserve functions:
– PR reservation type “Write Exclusive All Registrants”
– PR reservation type “Exclusive Access All Registrants”
– The Report Capabilities service action of the “Persistent Reserve In” command
 These additional persistent reserve functions will allow GPFS to use persistent reserves on a Storwize V7000 or SVC system
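For reference, the new default corresponds to creating a Thin Provisioned Volume like this (pool and volume names illustrative; -grainsize may still be set explicitly, with 32, 64, 128 and 256 as the valid values to our knowledge):

```shell
# Thin-provisioned volume; grain size now defaults to 256KB,
# shown explicitly here for clarity
svctask mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb \
    -rsize 2% -autoexpand -grainsize 256 -name vol_thin
```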
44 © Copyright IBM Corporation, 2011
Miscellaneous
 V6.4 will support direct-attached hosts via FC with a SCORE/RPQ only
– Full support to be added in a later release
 Requires changes to host properties
– In the current release all direct-attach host status will report as “degraded”
– When direct attachment is fully supported, host status will report as “active/inactive”
– Status will be “offline” if not connected
 The status field will be “online” if the host has an active login in each I/O group where it can see Volumes mapped to it
 Direct-attach hosts can only use FC ports that are not required for intra-cluster connectivity or SAN use for hosts, disk or replication
– In a single-control-enclosure Storwize V7000 there will be 8 ports available
– A clustered Storwize V7000 will not currently support direct-attach FC hosts
 The view “lsportfc” will report direct or fabric attachment of the port
 No changes to the “lshbaportcandidate”, “mkhost” or “addhostport” commands
45 © Copyright IBM Corporation, 2011
Miscellaneous
 FlashCopy GUI panel now displays a timestamp showing when the mapping was started

46 © Copyright IBM Corporation, 2011
Miscellaneous
 Ability to create multiple Volumes more quickly
47 © Copyright IBM Corporation, 2011
Miscellaneous
 SVC and Storwize V7000 software upgrade has a new “prepare” phase whenever upgrading from V6.4 to a later release
– Initially this will not do anything but is part of future plans related to the cache architecture
– New CCU states have been introduced:
• preparing, prepared, prepare_failed
• These are informational only; you may see them, and for now you can ignore them
 New quorum scanning design tries to recover from corrupt quorum data caused by drive faults
– Quorum will regularly be read and validated
– Invalid quorum will ideally be moved to a new device
– If no new device is available, quorum will be re-written
 Software upgrade package size is increasing to about 500MB from about 340MB
 TPC stats collection for internal MDisks will show a response time in V6.3.0.2 and later
58 © Copyright IBM Corporation, 2011
Legal Information and Trademarks
The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both: IBM, IBM Logo, on demand business logo, Enterprise Storage Server, xSeries, BladeCenter, eServer, ServeRAID and FlashCopy, System Storage, Tivoli, Easy Tier, Active Cloud Engine

The following are trademarks or registered trademarks of other companies:
Intel is a trademark of the Intel Corporation in the United States and other countries.
Java and all Java-related trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc., in the United States and other countries.
Lotus, Notes, and Domino are trademarks or registered trademarks of Lotus Development Corporation.
Linux is a registered trademark of Linus Torvalds.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation.
SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Storwize and the Storwize logo are trademarks or registered trademarks of Storwize Inc., an IBM Company.
* All other products may be trademarks or registered trademarks of their respective companies.

Notes:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.
The information on the new products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on the new products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for our products remains at our sole discretion.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
This presentation and the claims outlined in it were reviewed for compliance with US law. Adaptations of these claims for use in other geographies must be reviewed by the local country counsel for compliance with local laws.

Editor's Notes

  • #2: {DESCRIPTION} This is a title page. The module presented in this page is called - V6.4.0 Technical Update: SAN Volume Controller and Storwize V7000. Bill Wiegand - ATS Consulting I/T Specialist, Storage Virtualization. {TRANSCRIPT} Mary: Hi, welcome everyone to today's SVC/Storwize V7000 version 6.4 technical training call. As you know, version 6.4 was announced last week, and today we have Bill Wiegand, who will take us through the new functions. We have also scheduled a deeper-dive call on compression for tomorrow, and we hope you will be able to join us for that. The calling information and calendar buttons are available on SSI; if you did not receive them in an email, or if you need the link, please send me an email or give me a ping and I will get it to you. With that, thank you for joining us, and I will turn the call over to Bill now. Bill: Thank you Mary, and good day everyone, and welcome to this update on 6.4 for SVC and Storwize V7000. If you open the presentation you all should have and put it in slide show mode — there is some animation on some of the charts — we will go ahead and get started.
  • #3: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} Take a look at the agenda. We're going to cover FCoE, which is now supported, and we will also look at non-disruptive volume move. I will give an overview of compression — although, as Mary mentioned, we're going to go much deeper on that tomorrow in a whole separate session, so today I will just give you some of the highlights. Then we will look at Storwize V7000 clustering and some changes there, a little update on Unified, and then several miscellaneous changes that are also in the 6.4 code.
  • #4: {DESCRIPTION} Part of the T11 Technical committee Fibre Channel BB-5 project. Not intended to displace or replace Fibre Channel and is not iSCSI. Designed to enable convergence between Ethernet and Fibre networks in the data center. Simplifies networking and reduces costs. Technically speaking, the FC0 and FC1 layers of Fibre Channel are replaced by a new, "beefed-up" or "lossless" Ethernet. Full duplex 802.3 Ethernet required. {TRANSCRIPT} With that, let's go ahead and get started and go on to chart 3. Let's take a look at the FCoE basics. This is kind of new to me, and FCoE is probably new to a lot of folks out there too, but it is the T11 Technical committee that is putting this project together, and obviously everybody understands the reason for it: to simplify the network and reduce costs by converging Ethernet and Fibre Channel, so that we can have a single card in a server and run drivers that support Fibre Channel, iSCSI, and Fibre Channel over Ethernet all through one Ethernet connection. So really, as you look at the Fibre Channel architecture at the bottom of this page, what we're doing is replacing the lower, physical layers of the Fibre Channel network with Ethernet. Lossless Ethernet is very important here, because Ethernet was traditionally designed so that if there was packet congestion we would just toss packets and retransmit them; that's not very good for a storage environment, and that is why Fibre Channel is so popular for storage. But now, with this lossless Ethernet functionality, we're able to converge these environments, so you're going to see more and more of this in the industry as we go along. We're introducing (inaudible) for this with the 10Gb Ethernet card that is in the Storwize V7000, and optionally in the SVC as well.
  • #5: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} So let's go to the next chart, chart 4. Basically, at a very high level, we're taking the Fibre Channel frames and packaging them within an Ethernet packet. We do need something called a Fibre Channel Forwarder (FCF) to support this environment, as part of the FCoE standard. Just because it has come up — this isn't on the charts — there were recently some questions about the top-of-rack switches that we sell today. They are 10Gb switches, and some of them have 40Gb support, but they do not currently provide FCF capability themselves. So we cannot connect the SVC or the Storwize to them, along with a host with an FCoE card, and have FCoE working, because we need some kind of Fibre Channel forwarding capability, and that is not currently in our top-of-rack switches; I believe that is coming later this year. Also, other requirements for this environment are jumbo frames, and this is only supported on 10Gb — we can't do this on any 1Gb infrastructure, so this does require the 10Gb cards inside the SVC or the Storwize.
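As a rough sanity check on the jumbo-frame requirement just mentioned, here is a sketch of the arithmetic. The byte counts are the standard FC/Ethernet sizes, but treat the exact header accounting as illustrative rather than a precise restatement of FC-BB-5:

```python
# Sketch: why FCoE cannot use standard 1500-byte Ethernet frames.
# A full-sized FC frame must ride inside one Ethernet payload, un-fragmented.
FC_HEADER = 24          # FC frame header
FC_MAX_PAYLOAD = 2112   # maximum FC data field
FC_CRC = 4
fc_frame = FC_HEADER + FC_MAX_PAYLOAD + FC_CRC          # 2140 bytes

FCOE_HEADER = 14        # FCoE encapsulation header (carries encoded SOF)
FCOE_TRAILER = 4        # FCoE trailer (carries encoded EOF)
ethernet_payload_needed = FCOE_HEADER + fc_frame + FCOE_TRAILER  # 2158 bytes

STANDARD_MTU = 1500
BABY_JUMBO_MTU = 2500   # the "mini-jumbo" size commonly configured for FCoE

assert ethernet_payload_needed > STANDARD_MTU      # doesn't fit standard Ethernet
assert ethernet_payload_needed <= BABY_JUMBO_MTU   # hence the ~2.5KB jumbo frames
```

The point being made on the chart: because FCoE never fragments an FC frame, the Ethernet side has to be configured for jumbo frames before anything works.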
  • #6: {DESCRIPTION} This slide contains a graphic that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} So if we go to the next chart: this Fibre Channel forwarding capability not only does what I mentioned earlier, but also allows us to combine Fibre Channel networks and FCoE environments together. So if we need to connect to Fibre Channel switches — or to hosts, SVCs, or Storwize systems, for that matter, that don't have FCoE ports — we can get out to those environments using the Fibre Channel forwarding function, as depicted here by the orange-red lines in this chart.
  • #7: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} So again, looking at chart 6, we do support FCoE on our CG8 nodes via an optional 10Gb card that can be installed; it is mutually exclusive with the solid state drives, so if you already have CG8s with solid state drives you're not going to be able to install these 10Gb cards. For the Storwize V7000 we have the model 3xx control enclosures, which include the 10Gb Ethernet cards as standard. You can upgrade existing Storwize node canisters, the 1xx models, to the 3xx models to pick up that 10Gb interface. We can do this non-disruptively, and this is how you can upgrade to support FCoE if that is what your customer needs. We have two FCoE ports per node or node canister — these are two-port cards installed in the nodes or node canisters.
  • #8: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} Going to the next chart, chart 7: the SVC and Storwize V7000 will support attaching to all existing Fibre Channel hosts, storage, and each other via the FCoE ports on the nodes. So FCoE hosts are supported, and Fibre Channel hosts can access the Storwize V7000 or SVC through the FCoE ports. Additional support for native FCoE hosts and controllers is going to be added over time; as other vendors provide FCoE ports on the storage controllers we virtualize, or hosts gain that capability, those will be picked up and shown as supported on our support matrix, so keep an eye on it. In an SVC Stretched Cluster environment, we will support the use of the FCoE ports as well. I am not going to get into a lot of detail on that, but if you're familiar with Stretched Clusters — the ability to have an SVC node at one site and another at the other site for high availability — and with what we presented a few months ago on the 6.3 enhancements, we can use the FCoE ports for, let's say, the private network that we need for node-to-node communication. Also, just be aware that iSCSI and FCoE are different, but they can be supported on the same 10Gb/s port, so you can do both (inaudible) today. That is part of the standard.
  • #9: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we go to chart 8: DCBX — this is Data Center Bridging. It is part of the FCoE standard, and we support all of that within our implementation as well, obviously. So if you're going to try to use iSCSI and FCoE over the same 10Gb cards in the SVC nodes or the Storwize V7000, for your information there are three traffic classes: a NIC class, an FCoE class, and an iSCSI class. Today we support iSCSI traffic over the NIC class and FCoE traffic, obviously, over the FCoE class. The iSCSI class is not currently supported — maybe some time in the future. But the bottom line is that we can run the FCoE and iSCSI protocols over the same 10Gb ports.
  • #10: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 9, a couple of things to let you know. VLAN tagging: we do not support that today on the FCoE ports; the FCF and the 10Gb ports must be on the same VLAN for it to be a supported configuration. And a single FCoE port is not able to use multiple FCFs. What that means is that we can discover multiple FCFs, but we only use the first one in the list. So if the customer has multiple forwarders and wants a specific one used, they would have to address that, because otherwise we're going to find them all and just use the very first one we can; we can't use multiple FCFs.
  • #11: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we go to chart 10, here are some things that change now that we support FCoE. Think about the four Fibre Channel ports we traditionally had on the SVC nodes and the Storwize V7000 node canisters: today, with the 10Gb Ethernet ports and Fibre Channel over Ethernet, we now have two more Fibre Channel ports available to us. If you're familiar with how the SVC and Storwize V7000 generate the WWPNs we see when doing traditional Fibre Channel zoning: the first ten digits are always the same for the SVC, and likewise for the Storwize — they are unique numbers — and the remaining digits come from the WWNN; all we do is change the sixth digit from the right to indicate which WWPN it is. Now that we have two additional ports, we have two more WWPNs available on these nodes and node canisters. We've also introduced in this code, in preparation for the future, what we call flexible or variable hardware technology, where you may be able to order your nodes and node canisters with different configurations than what we have today. Right now it's more for the future, but it is likely to come into play: as an example, if all we had was FCoE ports and we then added Fibre Channel ports, that would change the port numbering on the FCoE ports, which is obviously something we need to be cognizant of. We don't have that situation today: on your Storwize V7000 or SVC nodes we're going to discover the Fibre Channel ports as we do today, as ports 1, 2, 3 and 4, and then the FCoE ports will be discovered as ports 5 and 6.
  • #12: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} Here on chart 11: the SVC will be showing all of the WWPNs. We have one WWPN per 10Gb port, so we now have a maximum of 6 WWPNs — and be aware that two 10Gb ports don't equal four 8Gb ports. So if you were to do a full migration from Fibre Channel to FCoE technology, you would need to take into account that you would be limiting throughput, since there are only two 10Gb ports on each node or node canister. Today that's not even an option, because with SVC — and with the Storwize too, if you're doing a clustered Storwize V7000 — at least four Fibre Channel ports have to be in use. Since there are only two FCoE ports, we would have to use two more of the Fibre Channel ports anyway, so right now we wouldn't be able to migrate purely to an FCoE environment; that is something we plan to support in the future. There is a new command, lsportfc, similar to the lsportip command we have, that will provide the information on these WWPNs.
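The "two 10Gb ports don't equal four 8Gb ports" point is simple arithmetic; a quick sketch (raw line rates only — this ignores the different encoding overheads of 8Gb FC and 10Gb Ethernet):

```python
# Sketch of the bandwidth caution above: a pure FC -> FCoE migration would
# drop the per-node port bandwidth from four 8Gb/s FC ports to two 10Gb/s
# FCoE ports, limiting throughput.
fcoe_bandwidth_gbps = 2 * 10   # two 10Gb/s FCoE ports per node/canister
fc_bandwidth_gbps = 4 * 8      # four 8Gb/s FC ports per node/canister

assert fcoe_bandwidth_gbps < fc_bandwidth_gbps   # 20 Gb/s < 32 Gb/s
```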
  • #13: {DESCRIPTION} View: lsportfc. Captures current port status for FCoE/FC ports on the system. Similar to lsportip for iSCSI.
IBM_2076:cluster:superuser>lsportfc
id fc_io_port_id port_id type     port_speed node_id node_name WWPN             nportid status
0  1             1       fc       4Gb        23      tb28-0-1  500507680110497E 02E100  active
1  2             2       fc       4Gb        23      tb28-0-1  500507680120497E 02E000  active
2  3             3       fc       4Gb        23      tb28-0-1  500507680130497E 043E00  active
3  4             4       fc       4Gb        23      tb28-0-1  500507680140497E 04BE00  active
4  5             3       ethernet 10Gb       23      tb28-0-1  500507680150497E 040C0F  active
5  6             4       ethernet 10Gb       23      tb28-0-1  500507680160497E 021003  active
IBM_2076:tbcluster-28:superuser>lsportfc 4
id 4
fc_io_port_id 5
port_id 3
type ethernet
port_speed 10Gb
node_id 23
node_name tb28-0-1
WWPN 500507680150497E
nportid 040C0F
status active
switch_WWPN 100000051E07F464
fpma 0E:FC:00:04:0C:0F
vlanid 100
fcf_MAC 00:05:73:C2:CA:F0
{TRANSCRIPT} If we go to chart 12, you will see the output of that lsportfc command: we have our four Fibre Channel ports like we normally do, and we also have our 10Gb ports showing up, effectively, as Fibre Channel ports to be used for Fibre Channel traffic over the Ethernet environment. This is a little different than we've had before — obviously we didn't support FCoE previously — so I wanted to point that out.
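The WWPN numbering scheme described on chart 10 — change the sixth hex digit from the right to indicate the port — can be checked against the lsportfc listing above. A small sketch (the WWNN value here is inferred from the listing, not stated in the slides):

```python
# Sketch of the WWPN scheme: each port WWPN is the node's WWNN with the
# sixth hex digit from the right replaced by the port number.
def port_wwpn(wwnn: str, port: int) -> str:
    digits = list(wwnn)
    digits[-6] = format(port, "X")   # sixth hex digit from the right
    return "".join(digits)

wwnn = "500507680100497E"            # assumed WWNN, inferred from lsportfc
wwpns = [port_wwpn(wwnn, p) for p in range(1, 7)]   # FC ports 1-4, FCoE 5-6

assert wwpns[0] == "500507680110497E"   # FC port 1 in the listing above
assert wwpns[3] == "500507680140497E"   # FC port 4
assert wwpns[4] == "500507680150497E"   # FCoE port 5
assert wwpns[5] == "500507680160497E"   # FCoE port 6
```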
  • #14: {DESCRIPTION} V6.4 provides both FCoE Target and Initiator functions. The SVC/Storwize V7000 FCoE interface can be used for the following functionality: FC Host access to a Volume (via either FC or FCoE ports); FCoE Host access to a Volume (via either FC or FCoE ports); SVC/Storwize V7000 access (via either FC or FCoE ports) to an external storage system FC accessed LUN; SVC/Storwize V7000 access (via either FC or FCoE ports) to an external storage system FCoE accessed LUN; SVC/Storwize V7000 to another SVC/Storwize V7000 via any combination of FC and FCoE. Can dedicate FCoE ports for replication, or use them for host/storage access to allow dedicating two FC ports for replication or direct connection of server HBA ports. {TRANSCRIPT} If we go to chart 13: V6.4 provides both FCoE target and initiator functions. This gets into how you can use all of these ports and in what combinations, so let's look at a few examples. If we have Fibre Channel hosts, they can access volumes on the SVC or Storwize V7000 either through the Fibre Channel ports or through the FCoE ports. Likewise, an FCoE host can access volumes via either the Fibre Channel or the FCoE ports. For SVC/Storwize V7000 access to an external storage system that has Fibre Channel ports only, we can use either the Fibre Channel or the FCoE ports to access those LUNs — for all intents and purposes, it's all Fibre Channel. Again, if you want to access external storage that is connected via FCoE ports on the storage controller, we can do that through any of the FC or FCoE ports. And even communication between SVC/Storwize V7000 systems — whether for replication or within a clustered environment — can use any combination of the FC and FCoE ports.
As an example, if we have all four Fibre Channel ports and the two FCoE ports configured on the SVC cluster, then we can use any of those for communication between systems. But where I see this being very handy is the ability to dedicate ports — especially if you're doing replication, maybe dedicate those FCoE ports to replication, so that we can continue to use the other four Fibre Channel ports for what they're used for today: host I/O and access to the back-end storage. So this might be a way, if your customer has FCoE in their environment, to dedicate ports to replication, which I think will go a long way in making our Global Mirror a little more robust. Alternatively, we could use the FCoE ports for the node-to-node and host traffic and dedicate Fibre Channel ports instead; the point is that out of those six ports we could dedicate some to replication — something to consider. The key thing to remember is that none of this is iSCSI: this is not providing any kind of native IP replication through the Ethernet ports between systems. It is all Fibre Channel — even though you're using a 10Gb Ethernet port, this is the FC protocol.
  • #15: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we go to chart 14, here is an example of what we would need to do for replication. We have a Storwize V7000 at one site and another at the other. The 10Gb ports are plugged into a converged switch; we then need another switch that allows us to make our standard Fibre Channel connection, using the FCF as the Fibre Channel Forwarder, to forward those packets across a regular Fibre Channel network over to the other site and back into the Storwize system. So again, an FCF is required, and we do need a full Fibre Channel ISL; DWDM would be supported. And remember that FCoE is not iSCSI, so there are no native IP replication capabilities — that is something we're looking to introduce in a future release. As far as bandwidth sizing and sizing of your systems, pretty much all of the same rules apply as today: everything we've been doing, and have hopefully trained you to plan for with replication, applies whether or not you're using the FCoE ports for the replication function.
  • #16: {DESCRIPTION} FCIA Guide: http://www.fibrechannel.org/documents/doc_download/1-fcia-solution-guide IBM Red Paper: http://www.redbooks.ibm.com/redpapers/pdfs/redp4493.pdf {TRANSCRIPT} Let's go to chart 15. Here are some resources that you might find useful. I found the Red Paper in particular very helpful for background on lossless Ethernet and the FCoE standards. It's an easy read and explains things very well, so if you want more information on this, I encourage you to go to that Red Paper.
  • #17: {DESCRIPTION} This is a title slide that leads into the next section called – Non-Disruptive Volume Move. {TRANSCRIPT} That’s really kind of all I had on the FCoE, just to give you a background on what it is. We do now officially support it with the 6.4 code. Let’s take a look at another feature we’re introducing that we’ve all been asking for in the SVC and Storwize V7000, the ability to move a volume from one I/O group to another I/O group non-disruptively. So let’s see how that works.
  • #18: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 17, this is a description of what it is, and it is basically that simple: we need the ability to move volumes from one I/O group to another non-disruptively. We could do that before, but it took an outage on the server — although a brief one — and required manual interaction. Now we will be able to non-disruptively move a volume from one I/O group to another. Why that matters: as customers' environments grow and they grow their clusters, they need the ability to move volumes from I/O group to I/O group to balance workloads, isolate workloads, or whatever it may be. Now we're introducing the ability to do this non-disruptively.
  • #19: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} So let's take a look at how it works on chart 18. It is rather straightforward technology. A volume today belongs to a single I/O group, which we now refer to as the caching I/O group — when writes come in, we mirror them to the two nodes in that I/O group, and all I/Os are sent to the nodes in that I/O group. The first step is to make that volume available through one or more additional I/O groups, referred to as access I/O groups. A host could then get to the volume through the other I/O group, but that group would forward all of the I/Os back to the caching I/O group, which handles them during this stage. So at this point we can access the volume through two I/O groups. Next we issue a command called movevdisk to move the caching role to the new I/O group that you want servicing the volume. After we do that, because the host multi-path drivers are still sending I/O to the original I/O group even though we moved the caching role to the other I/O group, those I/Os are forwarded over to the new caching I/O group. We can now get to the volume through either of these I/O groups — preferably through the new one, but right now we're probably still doing all of the I/O through the original one. So now we go up to the host multi-pathing drivers and reconfigure them to direct the I/Os to the new I/O group; we can generally do that non-disruptively on most operating systems, and we will talk about that. You may need to make some zoning changes for that host to even see the other I/O group, which obviously makes sense.
Once I/O is going to the new I/O group and we've made sure everything is the way we want it, we can delete the old paths through the old I/O group so that all I/Os go through the new I/O group. There is actually a wizard to help you go through this, and we will take a look at it; those were just the general steps.
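The sequence just walked through — add access through the new I/O group, move the caching role with movevdisk, re-path the host, then drop the old access — can be modeled as a tiny state sketch. This is purely illustrative: only movevdisk is named in the talk, and the class and method names here are made up (the addvdiskaccess/rmvdiskaccess comments are my assumption about the surrounding CLI family):

```python
# Illustrative model of the non-disruptive volume move described above.
# I/O is always serviced (and write-cached) by the caching I/O group;
# access I/O groups merely forward I/O back to it.
class Volume:
    def __init__(self, caching_iogrp: int):
        self.caching_iogrp = caching_iogrp
        self.access_iogrps = {caching_iogrp}

    def add_access(self, iogrp: int) -> None:   # step 1 (cf. addvdiskaccess)
        self.access_iogrps.add(iogrp)

    def move_caching(self, iogrp: int) -> None: # step 2 (movevdisk)
        assert iogrp in self.access_iogrps
        self.caching_iogrp = iogrp

    def service_io(self, via_iogrp: int) -> int:
        assert via_iogrp in self.access_iogrps
        return self.caching_iogrp               # forwarded if via != caching

    def remove_access(self, iogrp: int) -> None:  # step 4 (cf. rmvdiskaccess)
        assert iogrp != self.caching_iogrp
        self.access_iogrps.discard(iogrp)

v = Volume(caching_iogrp=0)
v.add_access(1)                          # visible through both I/O groups
assert v.service_io(via_iogrp=1) == 0    # still forwarded to old caching grp
v.move_caching(1)                        # movevdisk: caching role moves
assert v.service_io(via_iogrp=0) == 1    # old paths now forward to new grp
v.remove_access(0)                       # host re-pathed; old paths deleted
assert v.access_iogrps == {1}
```

Note how the model also captures the availability caveat from chart 24: whichever I/O group a host uses, I/O is always serviced by the caching I/O group.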
  • #20: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} Here on chart 19 you get an idea of what you would actually do: you right-click on one of the volumes that you want moved and select Move to Another I/O Group.
  • #21: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you hit enter and go to chart 20, it shows that you selected a volume — or multiple volumes, if that is what you want to do — to move to another I/O group, and then it walks you through the process: what it's going to do and how it does it. It's very similar to the migration wizard we had for bringing existing storage under the Storwize or SVC, walking you through the process step by step.
  • #22: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we hit the enter key again, another panel comes up on chart 21 that asks which I/O group you want to move to, and whether you want to pick a preferred node or let the system assign one automatically — so you have your choice there.
  • #23: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we go to chart 22, step 3 of 4 comes along: the volume is now accessible through both I/O groups, so you need to go to the host, do the pathing work, and verify the new paths are picked up on the server. Assuming that has all occurred, you go to the next step.
  • #24: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} On the next chart, chart 23, we can actually walk through deleting the paths from the original I/O group to clean things up. The wizard directs you through these things to make sure everything is the way you want it before you get rid of the old paths, and to make sure I/O is going through the new paths. It is a multi-step process: you will have to work at the Storwize V7000/SVC level to move the volume between the I/O groups, and also at the host level to make sure pathing is active and working to the new I/O group.
  • #25: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} So if we go to chart 24, there are a few things to be aware of. Right now a volume which is in a Metro or Global Mirror relationship cannot change its caching I/O group; that is something we will have to look at in the future — right now, if you're replicating volumes, you won't be able to move them between I/O groups. If a volume is in a FlashCopy mapping, yes, we can move it; the bitmaps will be left in the original I/O group, so that work will be handled by the original owning I/O group, but that is definitely not a problem — we have had the ability to allocate the bitmap space for a FlashCopy mapping from any I/O group. Traditionally it's always the I/O group that the source volume is in, but it doesn't have to be, so volumes with FlashCopy mappings shouldn't be a problem to move between I/O groups. One thing that is a challenge, and potentially a problem: when we move volumes between I/O groups, the SCSI LUN IDs very well may change. In some cases that is not a problem, but in some clustering environments it could be, so we're going through extra testing of various environments — I will show you that on the next chart. So again, a volume may change its LUN ID when it moves from I/O group to I/O group; that's something to be aware of. Also, the maximum number of paths we support to a volume is 8, and we recommend that you have only 4. In this case, while moving between I/O groups, we're going to have four paths to the volume through each I/O group, for a total of 8 during the migration. If you already have 8 and you move to another I/O group, you would end up with more than 8, which is not supported. Will it work?
Most multi-path drivers will handle it, and this is only for a short period of time, so you may be able to get away with it — nothing is going to stop you from doing it; we're not policing the paths. But a customer should preferably be using 4 paths, so that when moving a volume from I/O group to I/O group we have 8 for a period of time until we finish the migration. Also note that if the caching I/O group fails — the one doing the reads and writes — then even though we're accessing the volume through another I/O group, that volume is going to go offline. There is no extra availability here: even though you can access the volume through any of the access I/O groups, the I/O always goes back to the caching I/O group, and if that one fails, the volume still goes offline. Another use of this function is the ability to change the preferred node within an I/O group. Today we cannot — and still cannot at 6.4 — directly change the preferred node of a volume within an I/O group, but you can move the volume from I/O group 0 to I/O group 1 and then, following this same procedure, non-disruptively move it back to I/O group 0 — and remember from that one chart, when you do the move you can pick the preferred node. So it gives you the ability to balance workloads within an individual I/O group; on a Storwize V7000, that assumes you have a clustered system, meaning at least two I/O groups.
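The 4-versus-8 path arithmetic above is worth making concrete; a minimal sketch of the rule as described (illustrative only — the real limit is enforced by host multi-path configuration, not by this check):

```python
# Sketch of the path-count rule discussed above: a maximum of 8 paths per
# volume is supported, 4 are recommended. During a non-disruptive move the
# volume is visible through two I/O groups at once, doubling the paths.
MAX_SUPPORTED_PATHS = 8
RECOMMENDED_PATHS = 4

def paths_during_move(paths_per_iogrp: int) -> int:
    return paths_per_iogrp * 2   # old + new I/O group while both are active

assert paths_during_move(RECOMMENDED_PATHS) == MAX_SUPPORTED_PATHS  # stays legal
assert paths_during_move(8) > MAX_SUPPORTED_PATHS  # 16 paths: unsupported config
```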
  • #26: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} Let's go on to chart 25. Host restrictions: at this time — and we hope to lift these in the future — there is no iSCSI host support for moving volumes between I/O groups, and there is no support for host-based clustering: Microsoft Cluster, HACMP (now PowerHA), or VMware clustering — which basically means VMware, right at this moment, is not supported. There is some concern about the SCSI IDs changing, so we're going through more testing and hoping these restrictions will be lifted; they may be lifted by GA. Unfortunately GA is Friday, and we won't know exactly what is and isn't supported until then, but this is the latest I have, so watch the support websites when they are updated with this information on Friday. Currently supported are really just SLES 11 and RHEL 6.1 — probably 6.2 and 6.3 will be as well. AIX should be supported at or shortly after GA; it actually works with SDD, but SDD doesn't pick up the preferred node, so you would be round-robining I/Os to the new I/O group — not really a problem, so we may support that until we get a fix into SDD. As for Windows 2008, we tried it in our lab and everything seemed to work fine; however, it doesn't delete the old paths to the original I/O group unless you reboot the server. The same goes for AIX — you can reboot the server and that will fix everything, but that kind of defeats the non-disruptive purpose of the function. So fixes are coming in the multi-path drivers for AIX and Windows: SDDPCM and SDDDSM. And VMware with VAAI is still in testing; some changes are going to need to be made, so unfortunately I would say it basically doesn't work for VMware today — hopefully we will address that very quickly.
So again, check the support matrix on June 15th for what is and isn't supported – maybe you will be pleasantly surprised.
  • #27: {DESCRIPTION} This is a title slide that leads into the next section called – Compression. {TRANSCRIPT} So let's go to chart 26. Now what I would like to cover is just an overview of compression. We will get into much more depth tomorrow on how it works under the covers; I am just going to hit the highlights.
  • #28: {DESCRIPTION} Compression is an alternative to Thin Provisioning They both allow you to use less physical space on disk than is presented to the host A Compressed Volume is “a kind of” Thin Provisioning Only uses physical storage to store compressed data Volume can be built from a pool using internal or external MDisks Compression requires the I/O group hardware be one of the following platforms SVC Model 2145-CF8/CG8 Nodes Storwize V7000 Model 2076-1xx/3xx Control Enclosure Can use Volume mirroring to convert to a Compressed Volume {TRANSCRIPT} So let's go to chart 27. Just to give you some background, the charts here include some pictures of what the panels look like. We now have a new preset called Compressed, and you can also use the Add Volume Copy function to convert, using volume mirroring, from a fully allocated volume to a compressed volume. So compression is an alternative to thin provisioning: they both allow you to use less physical space than is presented to the host, and a compressed volume is really a kind of thin-provisioned volume – we only allocate extents as we need physical capacity to write the compressed data onto. So you can think of it as thin provisioning where we additionally compress the data to use even less physical space; if you're familiar with how thin provisioning works today, a compressed volume is going to work very much like that. Compression does require that the I/O group hardware be one of the following: it only works on SVC model 2145-CF8 and CG8 nodes, and it will work on any of the Storwize V7000 models. So if your customer has the older 8F2/8F4 or 8A4/8G4 SVC nodes, compression will not be available to them. 
I am hoping, although I haven't had a chance to check this, that the Compressed icon for a new volume won't display if they don't have the right model – or maybe it just won't work when they click on it – but a lot of times we try to hide things that aren't supported on particular hardware. We will have to see, but you need the newer hardware to be able to use the compression function, because it does take some additional resources to run, so we don't just allow it on the really old models. It's the same with zero detect and those kinds of functions we do for thin provisioning – they're only on the newer models of the hardware.
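As a rough illustration of the Compressed preset and the mirroring conversion mentioned above, the CLI equivalents look something like this. Flag names are from memory of the 6.4 command set – the `-autodelete` option in particular should be verified on your code level – and the pool and volume names are hypothetical.

```shell
# Create a new compressed volume (a kind of thin-provisioned volume):
svctask mkvdisk -mdiskgrp pool0 -iogrp io_grp0 -size 100 -unit gb \
        -rsize 2% -autoexpand -compressed -name compvol0

# Convert an existing fully allocated volume by adding a compressed mirror copy;
# -autodelete (if available on your level) removes the original copy once in sync:
svctask addvdiskcopy -mdiskgrp pool0 -rsize 2% -autoexpand -compressed \
        -autodelete fatvol0
```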
  • #29: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} Let's go to chart 28. A couple of other things to highlight – there is currently a maximum of 200 compressed volumes per I/O group. That will probably increase in the future, but right now it's 200; we support 2,000 volumes per I/O group as you know, so just remember that compressed volumes are limited at this point in time. Licensing is as follows: for SVC it is on a per-TB basis, so if you wanted to create fifty 100 GB compressed volumes – that 100 GB being what the host sees – then 50 x 100 GB works out to 5 TB of licensing that you would need to purchase. For the Storwize V7000 it is licensed per enclosure, so if a customer had a four-enclosure V7000 and was also virtualizing a couple of external disk enclosures, they would need a total of six enclosure licenses. All of this is on the honour system; I will talk about a special offering we have available as well – a free trial – here in a minute. Do note that creating the first compressed volume in an I/O group will instantly dedicate some CPU and memory resources from the nodes or node canisters to the compression engine. So it does take some resources that will be allocated for that. If you're going to use compression, I wouldn't just create a volume and then see what happens; the idea is that as you have more volumes compressed, the goal is to do less and less I/O to the backend because of the technology you will see tomorrow. So again, you want to make sure you do some planning and sizing – you probably don't want to be enabling compression on a Storwize V7000 or SVC that is already very heavily utilized, and by that I mean 50, 60, 70% CPU utilization. 
So again, in most environments this shouldn't be a problem, but we need to plan and size just like we do for replication or any other function we have on the system; ATS will be glad to try to help you out, and we will talk about some of the tools that are available tomorrow. Again, as I mention here on the last bullet, I would encourage you to join tomorrow's call to get the details on compression.
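The licensing arithmetic above is simple enough to sanity-check in a few lines. This is a toy calculation, assuming decimal units as in the spoken example; it is not an official licensing calculator.

```python
# Toy check of the licensing examples above (decimal units assumed).
def svc_license_tb(volume_sizes_gb):
    """SVC: real-time compression is licensed per virtual TB presented to hosts."""
    return sum(volume_sizes_gb) / 1000.0

def v7000_license_enclosures(internal_enclosures, external_enclosures):
    """Storwize V7000: licensed per enclosure, internal plus virtualized external."""
    return internal_enclosures + external_enclosures

print(svc_license_tb([100] * 50))       # fifty 100 GB volumes -> 5.0 TB
print(v7000_license_enclosures(4, 2))   # four internal + two external -> 6
```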
  • #30: {DESCRIPTION} This slide contains a graphic that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we go to chart 29, here is a graphic explaining how the data flows through the I/O stack on the SVC and Storwize V7000. As you can see we have the cache up at the top of the stack, and thin provisioning and the RACE engine – the Random Access Compression Engine – are in line; compression actually reuses the thin provisioning technology. They will get into more of that tomorrow, but a couple of things I want to make sure everybody understands: all of the copy services interoperate with compressed volumes – the host has no idea the volume is compressed, it all happens under the covers – and since the copy service functions sit higher up in the I/O stack there on the left, they don't know they are dealing with compressed volumes. So all copies work with uncompressed data. I've heard some people saying that this compression will compress the data even when we replicate it or when we do FlashCopy, and that is not correct. As you can see, if we have to read the data off the storage we have to uncompress it to pass it to FlashCopy or to remote copy, and once remote copy has the two sides consistently synchronized, any changes coming in at the remote copy level get replicated over to the other side long before they ever get down the stack to the compression engine. So there is no compression being done by the Storwize V7000 or the SVC for FlashCopy or for replication, what we call remote copy. As far as sizing and planning for these types of things, it is the same as usual: since there is no compression on the wire, you still size the communication links between the two sides for replication based on the amount of data change coming into the system, just as we do today. As far as thin provisioning properties, they all still apply – I will show you a chart here in a bit, and they will talk about it more tomorrow. 
You're going to have properties for a compressed volume such as virtual capacity (what the host sees), real capacity (what is allocated physically), and then used capacity, which is how much of that real capacity is used. I will show you a chart on that, and we will get into it more tomorrow. There is also a new property being introduced called uncompressed capacity, which gives you an indication of how much compression is really helping you or not; you will see this in some of the charts, so let's go ahead…
  • #31: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} Chart 30. Here is an example of the different places where the compression savings can be discovered. On the left-hand side there we see it on a volume basis: if you take a look at copy 0, we have a total capacity of 48.7 GB, which is what the host sees, and used and real capacity of about 9.3 GB. The "before compression" figure indicates that if it wasn't compressed it would take up around 20.5 GB, so you can see the compression is saving us an additional 54% here. We have that on different panels: there is a new tab and a new column for compression savings when you look at the volumes in the GUI; there in the middle you can get a compressed view that will show you information as well; and there are multiple places you can find it on the storage pools too. When you get a chance take a look at those – again we will go into that a little bit more tomorrow. There are a lot of different ways to find out how much compression is helping you in your environment.
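Using the numbers from that panel, the savings figure works out roughly like this. This is just a sketch of the ratio; the GUI may round or truncate slightly differently than shown here.

```python
# Savings from the example panel: 48.7 GB virtual, ~9.3 GB used/real,
# ~20.5 GB "before compression" (uncompressed capacity).
def compression_savings(used_gb, before_compression_gb):
    """Fraction of the uncompressed footprint that compression saved."""
    return 1.0 - used_gb / before_compression_gb

savings = compression_savings(9.3, 20.5)
print(f"{savings:.0%}")  # roughly the 54% figure shown in the GUI
```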
  • #32: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 31: if you look at the performance panels you will also see that we now have lines indicating CPU utilization for compression and for system workloads. So you will see some differences there on the performance panels within the GUI.
  • #33: {DESCRIPTION} The following tools will be available to support customers deploying Compression Disk Magic Will ask the user to provide an “Effectiveness” value (similar to Easy Tier) Available later this year Capacity Magic Will ask the user to provide a compression ratio to complete the sizing Comprestimator A tool to estimate the compression ratio which is achievable for a given set of data Loaded on customer’s hosts {TRANSCRIPT} If you go to chart 32, these are the tools that will be available, and we will talk about them a bit more tomorrow – in particular the Comprestimator. Disk Magic will be enhanced to help do sizing and modeling; I don't believe that is available right at the moment – I believe it is coming sometime later this year, and we can let you know exactly when on the Q&A part of this call. Capacity Magic I believe has been updated to help you do some sizing for compression as well. And then the Comprestimator is a tool that you actually install on a customer's hosts to help them look at how much savings they may be able to get, to determine whether or not it makes sense to enable compression for those volumes. We will talk about that in detail tomorrow.
  • #34: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} So if we go to chart 33, I do want to point out – and you may have seen this in the announcement letter – that we are including a 45-day free trial license of the compression function. Again, everything is in the code; if you're familiar with SVC and Storwize V7000, it's all built in, there are no license keys or anything to enable things, it's all on the honour system. So we are letting you enable compression to test it out, work with it, and get familiar with it. At least on the beta system we had, I didn't even have to do what I am showing here, which is going into the real-time compression license setting and changing 0 to something else; it seemed to work even though we didn't have that enabled. Maybe that was our level of beta code – I would think the GA code will make them put something other than 0 in this field, but I haven't had a chance to verify that. So normally what they would do is come in, change the real-time compression enclosures value, put some number in there to test out the system, and then if they decide compression would be a benefit for them and they like what it offers, they can purchase the license. They have 45 days to try it and see how it works for them.
  • #35: {DESCRIPTION} This is a title slide that leads into the next section called – Storwize V7000 Clustered System. {TRANSCRIPT} So let's go on to chart 34 and take a look at the Storwize V7000 clustered system and some changes there. Some of this I want to clarify, only because we talked about things in some earlier training sessions that are not going to occur now, so I am going to cover those.
  • #36: {DESCRIPTION} This slide contains a graphic that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} So if we go to chart 35 – you will have to hit the enter key a couple of times. First off, obviously we can have a single Storwize control enclosure, with 0-24 drives in that enclosure. If you hit the enter key again, we can expand it to a total of 10 enclosures; that is what we call an I/O group in a Storwize V7000 environment – in particular, it includes the SAS-attached expansion enclosures. If you hit the enter key again, we now support up to 4 I/O groups, and you can do this without an RPQ. Before, we were limited to 2; now we can do 4 I/O groups just like SVC. That gives you the ability to scale up to 1.4 PB of raw capacity, or 960 drives, all in two 42U racks – a very compact environment. So again, it gives us the ability to grow the system quite large. If you hit the enter key again: one of the things I had said was that you weren't going to have to use all 4 Fibre Channel ports on each node canister. Unfortunately that is not the case – we still have to do that today in 6.4, at least in the interim. There are plans to reduce that to only needing 2, but right now in a clustered Storwize we do need and use all 4 Fibre Channel ports on each node canister, and they all have to be zoned together; we are using all of them for communication between the control enclosures. The same thing applies to SVC – that's what we're trying to get changed.
  • #37: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we go to chart 36, one of the things I also need to clarify is the unified system. We had talked about the ability to cluster the Storwize control enclosures on Unified as well, and that is not going to happen with the 6.4 release of the code. So if you hit the enter key here on chart 36 you will see a Storwize V7000 Unified with the file modules, a control enclosure, and up to 9 expansion enclosures. What we thought we were going to be able to do – if you hit the enter key – is add additional control enclosures to cluster and provide more physical storage. That is not occurring in 6.4, as I mentioned it was in some of the early training that was going on in the last month. We won't be able to do that today; that is planned for, hopefully, later this year. What that means – if you hit the enter key a couple of times – is that basically we're not going to pick up any of these 6.4 functions in the Unified system at this point in time; that will come later this year, and that is why we won't be able to do compression or any of that kind of thing with the file modules either. So just realize 6.4 code won't be running on the Unified box at this point in time, so you won't be able to do any of the things we've talked about; that's going to occur later this year.
  • #38: {DESCRIPTION} This slide contains a graphic that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} Let's go to the next chart, chart 37 – another change in the system, and we're going to go through this animation as well. The way it works in pre-V6.4 code, if I have a clustered system: if you hit the enter key, by default when you created a storage pool, the MDisks – all of the RAID arrays from one I/O group – were put into one storage pool, and then volumes created from storage pool A were owned by I/O group 0, which would keep any I/O within that particular control enclosure and its associated expansion enclosures. If you hit the enter key again: the RAID arrays and MDisks in I/O group 1 of the clustered system would get put into their own storage pool, and again the volumes created from storage pool C would be owned by I/O group 1 in this case. So we kept everything within that I/O group. Now, you also had the ability – hit the enter key again – to create a storage pool that has MDisks from physical drives in both I/O groups, and that is perfectly fine to do. The odd behaviour in the past was that if each I/O group contributed the same number of MDisks to the storage pool, then all volumes were owned by I/O group 0. Hit the enter key again: if one I/O group had more MDisks in the pool than the other, then that is the I/O group the volumes would be owned by. So if you hit the enter key again you will see that it really depended on which I/O group owned more MDisks in that storage pool. What could happen is you could easily end up with all of the volumes owned by one I/O group and the other I/O group not serving any volumes, unless you manually picked which I/O group you wanted the volume created on. Most people just took the default, and it would automatically behave as I just described on this chart.
  • #39: {DESCRIPTION} This slide contains a graphic that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} So what we're doing in 6.4 code, if you go to the next chart, is keeping the same default behaviour of creating storage pools with MDisks from the same I/O group. Hit the enter key again and you will see we have them on I/O groups 0 and 1. Now if you hit the enter key and you have a storage pool that has MDisks from both I/O groups – it doesn't matter how many; it could be 1 from one and 10 from the other, or any combination – we're now going to distribute the volume ownership across all of the node canisters in all of the I/O groups, so that we balance the workload across all of the resources, just like we do on the SVC. So this is an enhancement in the 6.4 code to help out in the clustered environment, giving you the ability to automatically distribute workloads when you have a clustered system and a shared storage pool.
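The new default can be sketched as a simple round-robin over the I/O groups contributing to the pool. The real placement logic in the product may be more sophisticated, so treat this purely as an illustration of the behaviour described; all names are made up.

```python
# Illustration only: distribute new volumes across the caching I/O groups that
# contribute MDisks to a shared storage pool, instead of piling them onto one.
from itertools import cycle

def assign_caching_iogrps(volumes, iogrps_in_pool):
    """Hypothetical round-robin assignment of volumes to caching I/O groups."""
    rr = cycle(sorted(iogrps_in_pool))
    return {vol: next(rr) for vol in volumes}

owners = assign_caching_iogrps(["vol0", "vol1", "vol2", "vol3"],
                               {"io_grp0", "io_grp1"})
print(owners)  # -> vol0/vol2 on io_grp0, vol1/vol3 on io_grp1
```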
  • #40: {DESCRIPTION} This is a title slide that leads into the next section called – SVC and Storwize V7000 Interop. {TRANSCRIPT} If we go to the next chart, we're going to take a look now at a couple of other things – some SVC and Storwize V7000 interoperability. So let's take a look at that.
  • #41: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 40. If you remember, in version 6.3 we introduced the ability to replicate between an SVC and a Storwize V7000; both had to be at 6.3 and we had to set the Storwize V7000 into the replication layer so they could be peers to each other and replicate between them. We also shared – this last bullet – that replication-layer clusters can use storage-layer clusters as virtualized storage systems. One thing we could not do, though, was have a Storwize V7000 in the replication layer replicating to an SVC, and also have a Storwize V7000 in its normal default storage layer presenting volumes to be virtualized by another Storwize V7000. In 6.4 we are giving you the ability to do that.
  • #42: {DESCRIPTION} This slide contains a graphic that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} So if you go to chart 41, you can see here that I can take a Storwize V7000 that is in the default storage layer, which is how it ships generally speaking, and I can replicate between the two Storwize systems down there at the bottom if I wanted to; but I can also present volumes from a Storwize V7000 at the 6.4 code level up to SVCs running 6.4 or 6.3. If I wanted to present volumes up to a Storwize V7000 set to the replication layer, both of them do have to be at the 6.4 code level to do what this picture is depicting. I'm not exactly sure there are real use cases for this, but if you need to do it you can, and it might help in some kinds of migration scenarios. Again, the Storwize V7000 is now able to do this just like the SVC was in the 6.3 code.
  • #43: {DESCRIPTION} This is a title slide that leads into the next section called – Miscellaneous. {TRANSCRIPT} If you go to the next chart we will just take a look at a few miscellaneous things that are new in the product as well. So let's take a look at them.
  • #44: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 43. First is the thin-provisioned volume grain size – we don't use the term "space efficient" anymore. If you ever remember creating a thin-provisioned volume, the default grain size was 32 KB, and if you recall, when Easy Tier was introduced we ran into a little problem with that behaviour: because Easy Tier is looking for small I/Os, and this grain size would break everything up into 32 KB I/Os, even sequential workloads with large I/Os turned into candidate data for Easy Tier to migrate, which kind of messed up Easy Tier. So there was a flash published stating that if you're going to use thin-provisioned volumes and Easy Tier, you need to use the 256 KB grain size. That is what we always recommended anyway, so we have changed the GUI – and the CLI, I believe – to default to 256 KB instead of 32 KB. I believe that is going to be ported back to 6.3 as well. So again, 256 KB is what we will default to now, and that is good. Another thing we introduced is some enhancements to SCSI-3 persistent reserve; this was specifically requested for GPFS. I don't really know the details of these three little bullets – Write Exclusive, Exclusive Access, and PERSISTENT RESERVE IN – but they are things needed for GPFS support, and we have released those in the 6.4 code. So if you have that requirement, 6.4 will provide it.
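On code levels that still default to 32 KB, the grain size can be set explicitly when creating a thin-provisioned volume. A hedged sketch – check the exact `mkvdisk` flags on your code level, and note the pool and volume names here are made up:

```shell
# Force the Easy Tier-friendly grain size on a thin-provisioned volume
# (verify flag names against the mkvdisk reference for your code level):
svctask mkvdisk -mdiskgrp pool0 -iogrp io_grp0 -size 200 -unit gb \
        -rsize 2% -autoexpand -grainsize 256 -name thinvol0
```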
  • #45: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 44: you probably heard we were talking about supporting direct-attached hosts on the Storwize V7000. That is still going to be supported through a SCORE/RPQ – you just submit that – and there are only going to be some hosts that we can support at this point in time. One of the problems we have, at least until we get it fixed, is that all of the hosts show up as degraded when you direct-connect them. So we want to make sure that customers understand, through the SCORE/RPQ process: yes, it does work, but these are some of the caveats until we get them addressed. We basically made the Fibre Channel ports on the Storwize V7000 look like little virtual switches to be able to do this direct attach, but unfortunately we have a couple of things that are still not quite ready for GA – hence the requirement for the SCORE/RPQ. When this is fully supported, the hosts will show up with active and inactive paths when they are physically connected. Direct-attached hosts can only use Fibre Channel ports that are not required for intercluster communication. If you remember, on a clustered Storwize V7000 I said we have to use all four Fibre Channel ports; so unfortunately if we're doing a clustered Storwize, I have used all four Fibre Channel ports and I have no ports left over for direct connection if you wanted to do that. The only way around that would be if we were doing FCoE: we could use two Fibre Channel ports and two FCoE ports for the clustering, and that would free up two Fibre Channel ports for direct attach if you wanted to do that. Again, the commands we talked about earlier were introduced to support this kind of thing as well. 
So just be aware that we do kind of support direct attach, but clustering is going to introduce some problems until we get to where we can support fewer ports for clustered environments – which, again, will hopefully be later this year.
  • #46: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If we look at chart 45: a couple of things I just noticed when I was going through our beta code, which are nice enhancements. If you look at the FlashCopy GUI panel, it now actually lists the flash time, which is handy to find out when a FlashCopy was actually started – at what point in time. So that is now displayed, which is a nice enhancement to the GUI.
  • #47: {DESCRIPTION} This slide contains a screen capture that is covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 46 you will also see something else that was a customer request, which is quite handy. If you remember, when we wanted to create multiple volumes we had the ability to hit the plus sign – but if you wanted to create 100 of them you would have to hit the plus sign 100 times. We've now introduced a little window where you can fill in how many volumes you want to create, so you don't have to click the plus sign so many times. That was a specific customer request – again, we're trying to enhance the GUI based on customer requests. So if you have customers who would like to see something different, we can get that information to development to introduce those things in the future. So that is a nice little enhancement.
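For what it's worth, the CLI never had the plus-sign problem: a shell loop from a management workstation with SSH access does the same job on older code levels. The cluster address, credentials, and volume parameters below are hypothetical – adjust them to your environment.

```shell
# Create 100 volumes in one go over SSH (cluster address and volume
# parameters are made up for the example):
for i in $(seq 1 100); do
    ssh superuser@cluster_ip "svctask mkvdisk -mdiskgrp pool0 -iogrp io_grp0 \
        -size 50 -unit gb -name vol$i"
done
```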
  • #48: {DESCRIPTION} This slide contains the topics that are covered by the narration written in the transcription of this slide. {TRANSCRIPT} If you go to chart 47, here are a couple of other things that are in (inaudible) to give you some ideas; you probably won't really notice these, but we are introducing something new into the code in preparation for something in the future. This is a new prepare phase when upgrading the code. So if you're upgrading from 6.4 to 6.4.0.1 or 6.4.0.2, you will see this new prepare state; it's really not doing anything right now, but it is in there for some future plans around changes we're looking at in the cache architecture that will require it. So it's the beginning of that, and we're introducing it in the 6.4 code. You will see that we've introduced some concurrent upgrade states that you might notice, particularly if you're doing a manual upgrade versus just having the system do it all automatically for you: you will see a prepared and a prepare-failed state. Again, these should not really come into play until the future, but if you see them and wonder what they are, just know they are there in preparation for some future plans. As far as quorum disks, nothing has really changed there, but we now do some scanning of the quorum disks to make sure the data we write to all three of them is intact. This doesn't have anything to do with (inaudible) configuration environment, but we do keep a lot of information on all three quorum candidates in case we ever need to do some kind of recovery if something really bad happens. So that information is out there, and obviously we don't want to discover it is corrupt, so we're doing more scanning and checking of that data while the system is operational and running, and if need be we will either rewrite that data if it is not usable or move it to another quorum disk. 
A couple of other last bullets down there: note the size of the upgrade package, when you download it from the website, is growing to about 500 MB from about 340 MB. So if you wondered why it is so much bigger – it just is; I am going to guess some of that has to do with the RACE compression code. Also, this is something I wasn't aware of: TPC stats for the internal MDisks on the Storwize V7000, or even on the SVC nodes themselves – you can now get response times for those particular MDisks and arrays with 6.3.0.2 and later. Apparently the TPC code has been updated to be able to pick those up. I haven't really had a chance to play with it, but hopefully that is good news, to be able to get latency information.
  • #49: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Bill: So with that, that is really all I have of the new things available in this product; tomorrow we're going to go into much more detail on the compression, which is probably what a lot of folks are looking for more information on, so we will be able to do that. Operator: We will take our first question from Paul Santos. Paul: On chart 27 it mentions that compression is an alternative to thin provisioning – does that mean you can't use both, you have to choose one or the other? Bill: You can use either one. You can create a storage pool, and in that storage pool (inaudible) I can create fully allocated – what we call generic – volumes, I can create a thin-provisioned volume, or I can create a compressed volume as well. I think what the slide is trying to emphasize is that a compressed volume is not purely thin provisioning – there is a difference between the two – but for all intents and purposes it is thin provisioning under the covers; we're just compressing the data to use less space. So mix them any way you want; you don't have to have a compressed-only storage pool if that isn't all you want to use. Paul: Okay, I have a second question. For replication or volume copy operations, does it uncompress the data first, or does it copy the compressed data state? Bill: I kind of talked about that on chart 29, where we looked at how I/O flows through the stack. When I am copying the data initially from the primary to the secondary volume – when I am doing Metro Mirror or Global Mirror – I have to read the data from the physical disk, and it has to come up through the stack, meaning it gets uncompressed there at the thin provisioning and random access compression engine layers. So it is uncompressed as it goes up the stack and gets sent over to the other location. Again, it's going to be uncompressed when it is sent across the wire; and when I/Os come in at the top of that stack, you'll notice the front end hits the remote copy layer before it ever gets down into the compression engine. 
So that data is replicated over to the other site in a non-compressed format, and you would need to be using compression at the communication layer like you do today. Unidentified male speaker: And you know, Bill, this approach means you can do replication to a remote system that doesn't have, or isn't capable of running, compression if you want to. And interestingly, one of the analysts who was writing about compression for us actually regarded this as an advantage – I was quite surprised, quite honestly – because they say that by us decompressing the data before we send it to the remote site, customers can take advantage of compression tools that are specifically optimized for network compression, as opposed to the compression we do inside our system, which is optimized for a storage system. So that is one perspective on our approach. Paul: For remote copy I understand that it is uncompressed, but what about local volume copies – does it do the same? I don't know if I missed that. Bill: So you are talking about volume mirroring? With volume mirroring, again, you can see that the mirroring sits above the thin provisioning layer. So yes, when I mirror a volume – and you're probably going to use mirroring to go from compressed to non-compressed or vice versa – let's assume you want to go from a fully allocated copy to a compressed copy of that data and then get rid of the fully allocated copy so you can reclaim space: it is going to read the data and then compress it when it writes to the new copy. If I was going the other way, I would have to read the data, uncompress it, and write it to the other copy. And maybe you're asking about volume mirroring between two compressed copies – yes, it has got to…
  • #50: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Paul: I guess I am talking about data that is already compressed and I just want a mirror of it. Would I have to uncompress it, or can I just copy the compressed state? Bill: It has to read it, uncompress it and then write it to the other copy, because that layer sits above the compression. Unidentified male speaker: Although this discussion has been about mirroring, somebody is sure to ask the next question, which would be about FlashCopy. The answer is the same: the data gets decompressed. Bill: That's kind of why I showed the stack of how the I/O flows through here: if I have to read it, it goes up the stack, gets uncompressed on the path to the mirroring layer, which is going to mirror it to the other copy of the data, and then if that copy is compressed it is going to go back down through the stack and get compressed again. So that is just how it works. It flows through the I/O stack, so that should give you an idea of how it works and help answer those questions. Unidentified male speaker: I think some people may be looking at this discussion saying, "wow, that doesn't seem so good, it is compressing and uncompressing data like that," but the thing people should bear in mind is that because we have this very strict layered approach to the software, when we make changes like putting in compression, it fits in with everything you've already got. So there aren't any restrictions here like "you can't use compression with this, you can't use compression with that," and if you're using FlashCopy there aren't really serious restrictions on whether the source or the target can be compressed and what have you. So it actually has a great deal of flexibility that you don't always see with other folks' systems. Paul: Also, are there any performance implications with compression? Unidentified male speaker: I think we're probably going to talk about that tomorrow. Bill: Common sense, sure. 
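The layering just described – copy services such as volume mirroring and FlashCopy sitting above the compression engine in the I/O stack – can be sketched as a minimal illustrative model. This is not IBM code; all class and function names here are hypothetical, and zlib simply stands in for the real-time compression engine:

```python
import zlib

class CompressedVolume:
    """Bottom layer: stores data compressed on 'disk' (compression engine)."""
    def __init__(self):
        self.disk = b""
    def write(self, data: bytes):
        self.disk = zlib.compress(data)      # compressed on the way down
    def read(self) -> bytes:
        return zlib.decompress(self.disk)    # always comes back uncompressed

class FullyAllocatedVolume:
    """Bottom layer: stores data as-is."""
    def __init__(self):
        self.disk = b""
    def write(self, data: bytes):
        self.disk = data
    def read(self) -> bytes:
        return self.disk

def mirror(source, target):
    """Upper layer (mirroring/FlashCopy): it only ever sees uncompressed
    data, because it calls read() above the compression layer."""
    target.write(source.read())

fully = FullyAllocatedVolume()
fully.write(b"host data " * 100)

compressed = CompressedVolume()
mirror(fully, compressed)                      # read plain, write compressed
assert compressed.read() == fully.read()       # same logical contents
assert len(compressed.disk) < len(fully.disk)  # but stored smaller
```

This is why a mirror between two compressed copies still decompresses and recompresses: the copy layer has no access to the compressed on-disk form.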
Any time that you do anything more in the environment – FlashCopy, remote copy, mirroring, whatever it is – you're doing more work. So whether that impacts the performance of a given I/O is one thing, but if your system is already busy then those are resources that are tied up. Like your computer: if you're trying to do more than one thing, you might see some kind of an impact, but we will talk about that more tomorrow. Paul: Thank you, that's all the questions I have. Operator: Our next question comes from Michael <Coshmar>. Michael: Hi Bill, nice job. My question has to do with the volume move. Are there any host changes to the application, or is that all non-disruptive? I thought there were some LUN changes, to reassign the LUN, or something had to be done in the host software to let the application take over on the new location. Educate me a little more on that. You mentioned the pathing changes – does that fix the host application? Bill: When I'm talking about the pathing – and I apologize, I see it wasn't clear enough – that was the pathing at the host level that runs SDD or whatever. The application doesn't care, because the application is just seeing, if it is Windows or something, an NTFS file system, or a database on a disk presented by the operating system. It is the operating system that has to pick up the paths, using the multi-path drivers, to say, "hey, my volume – I used to go to these WWPNs on I/O group 0, now I need to access that volume through these WWPNs on I/O group 1." So you have to go into the Windows server, or into the multi-path drivers. A lot of them pick it up automatically – Windows is an example, and in our lab we did this: when I did the move VDisk to I/O group 1, the multi-path drivers picked up those new paths automatically.
  • #51: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} …and then we just had to delete the old ones, which are the ones we have right now. The multi-path driver couldn't get rid of the old paths by itself, but it did start driving I/O to the new I/O group automatically. Now what we're saying is, before you get rid of the old paths, make sure that the new paths are active, and if there is anything you have to do on the host to do that – and when I say the host I mean the multi-pathing driver running on the host, nothing on the application – then you need to do that and verify you have the paths through the new I/O group before you get rid of the paths to the old I/O group. Michael: And the WWPNs get reassigned, is what you said? Bill: Well, they are going to be totally new WWPNs, because each node has its own unique WWPNs, as you may remember from one of the previous charts we had. So the host is going to discover what looks to it like the volume changed from one controller to another controller, and that is fine – there is the data, I can do I/Os to it, multi-pathing will pick that up automatically. What that little wizard and those little panes are trying to help you do is make sure at the host level everything looks good before you take away the original paths, whether at the host level or by unzoning it from that I/O group, whatever you may do. You just want to make sure you don't cause an outage; you want to make sure it's picking up the new paths from the host to the new I/O group. Michael: And as activity is occurring from those applications, is it going to both volumes, the old and the new, properly? Bill: Well, there is only one volume; the host sees one volume. Before, it was serviced by I/O group 0, and now it's being serviced by I/O group 1. However, you do have, for a period of time if you will, 8 paths – 4 paths to I/O group 0 and 4 paths to I/O group 1 – for that volume. As we're moving it from I/O group to I/O group, it can use any of those paths to get to it. 
If the I/O goes to the original I/O group, we will forward it over to I/O group 1 and then return the information back through I/O group 0 to go back up to that host. But what we're trying to do is get the pathing at the host level to not have to take that extra route, and switch over to talking directly to the new caching I/O group that owns the volume. There is only one volume, and you can get to it through multiple I/O groups during this process. Unidentified male speaker: Bill is making the right distinction there. With the ability we've had in the past to move volumes from one storage pool to another, there is a period in that process where there are two sets of data, but this is a different thing – this is moving the access, it's not moving the data. Michael: Yeah, I kind of understand that from a competitive point of view as I study it, and I am just trying to understand it better, how that is implemented, because that is key. And does it matter – you said a single volume – I guess consistency groups don't come into play here then? Bill: Consistency groups are for FlashCopy and replication; this is just a standard volume. Michael: Alright, very good. Thank you. Operator: Next up we will hear from Henry Ortiz. Henry: Great job Bill, thank you very much. I have a question – I think it was answered as I was reading along these slides, but when you were talking about the clustered V7000 and how you can balance across the different I/O groups, I was wondering what this does for virtualized storage underneath, I guess the VDisks – are there any plans for that in the future?
  • #52: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Bill: I am not quite sure what you mean there. Henry: I mean if you have different volumes and you're bringing up a clustered environment, can you balance the workload? Maybe I got a little confused in my questioning, because I am starting to look at this a little bit more and it looks like, as you create a storage pool from your MDisks, you cannot create a storage pool across different I/O groups underneath. Bill: If you look at chart 38, we can create a storage pool that takes MDisks built from spindles that are physically in I/O group 0 – I talked about I/O group 0 being the control enclosure and any SAS-attached expansion enclosures. So if I build a RAID array there, and of course we build an MDisk on that array, it is part of that I/O group, and by default we're going to try and put it in a storage pool there; if you just let the GUI configure everything, it is going to create a pool on each I/O group with MDisks built on spindles from that I/O group. However, if you want to build a pool that spans all of the I/O groups – and in this example we have two I/O groups; say they're fully populated, you would have 480 spindles in those two systems – we could literally build a volume that we present to a server that stripes across all 480 spindles, because that storage pool B has got all of the MDisks in it, built on all of the spindles inside both of those Storwize V7000s that are clustered together. So what we're introducing here with 6.4 is, if I did that – let's just say I didn't even have storage pools A and C, I just had storage pool B – in the past it could end up that all volumes were owned by I/O group 0 and no I/Os went through the control enclosure of I/O group 1, unless you manually chose which I/O group would own the volumes that you're creating to give to the servers. 
Now in the 6.4 code we will round-robin those volume ownerships across both I/O groups, so that we balance the workloads across all of the resources when you're creating volumes. Henry: Right, but how is that for external storage? Bill: It doesn't matter. External storage virtualized by the Storwize is just MDisks. So if you put those MDisks in a storage pool – let's just say you had a DS5000 behind this and you put all of those MDisks in a storage pool – we will do the same thing: if you created 10 volumes, we will spread those 10 volumes across all of the node canisters, all of the I/O groups, in the clustered Storwize. Unidentified male speaker: The external storage can be accessed by any of the node canisters in any of the control enclosures. Henry: Excellent, that's what I was trying to get to in a roundabout kind of way. Thank you. Bill: Anything that is under there, when we create a volume, is going to be evenly distributed across all of the node canisters and all of the control enclosures, whether it's internal or external.
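The round-robin ownership behavior Bill describes for 6.4 can be sketched with a few lines of illustrative Python. This is a hypothetical model, not product code – the function and I/O group names are made up purely to show the balancing idea:

```python
from itertools import cycle

def assign_io_groups(volume_names, io_groups):
    """Assign each newly created volume an owning (caching) I/O group,
    cycling round-robin through the clustered system's I/O groups
    instead of defaulting everything to the first one."""
    rr = cycle(io_groups)
    return {vol: next(rr) for vol in volume_names}

vols = [f"vol{i}" for i in range(10)]
owners = assign_io_groups(vols, ["io_grp0", "io_grp1"])
counts = [list(owners.values()).count(g) for g in ("io_grp0", "io_grp1")]
# 10 volumes over 2 I/O groups -> 5 owned by each
```

Pre-6.4, the equivalent behavior was that every volume defaulted to the same I/O group unless you chose one explicitly; the round-robin spreads cache and CPU load across the cluster automatically.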
  • #53: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Operator: Next we're taking a question from Joe <Trosure>. Joe: Hello, thanks Bill. Back on slide 14 you talked about replication when you're using FCoE, and FCoE is the communication protocol of choice for a pair of V7000s. Granted, it is not iSCSI, but when it comes to native IP replication, is there something specific about our implementation that would preclude having gear that routed FCoE between the sites, so you replicate across the converged switches instead of having to convert to Fibre Channel? Bill: That is one I am trying to find out about, but I don't believe so. From what I understand, FCoE is not routable, so you have to get it out to something that goes to another network. This is again my learning curve on FCoE as well, but from what I am being told, we require FCFs and a full Fibre Channel ISL between the sites, as this picture depicts, to do the replication. I don't believe you can connect a cable from that converged switch B32 on the left to another one at the other site, in any kind of (inaudible), is my understanding. Unidentified male speaker: That is my understanding too, Bill, and I think the desire for iSCSI replication is to be able to eliminate things like the FCIP hardware that you need to get. I remember thinking that this FCoE support would actually give us that IP replication, but in fact it doesn't. It doesn't give you what is required, so it really is not a solution for that customer requirement, unfortunately. That is still something on our to-do list. Joe: Thank you. Operator: Next is Nigel Bartlett. Nigel: Hi Bill, my question is also on the FCoE support on slide 14. I don't understand why you need the SAN24B-4 switches there. My understanding is that with a converged B32 switch you can have FCoE functionality in that switch, and you can have native FC ports on that switch, so that you can just connect your DWDM between the two converged B32 switches and not require the SAN24B-4 switches. 
Bill: It is very possible. If I actually go to the folks, I could blame Andrew Martin for this, but I think he was just trying to depict that we need Fibre Channel. So you're right: if there are standard Fibre Channel ports on that converged switch, and it supports FCF to forward to those ports, then I would agree with you that that is possible. I don't know all of the switches and things like that, so maybe this is a bad example. I apologize for that; I will try to learn that myself. But it sounds logical: if it has the FCF function and it has native Fibre Channel ports on there, then I don't know why that wouldn't work, and you wouldn't need what we have in this picture at the very bottom. Norman: Yeah, this is true, you would be able to connect what is in the B32 across your DWDM or whatever to another B32. Bill: So I need to take those off and move those little red lines to the next switch up. I think he was just trying to point out that you can't do FCoE across there like that. Thanks Norm. Did that help? Did that answer your question?
  • #54: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Bill: Any more questions? Operator: Yes, we will move on to Dave Macdonald. Dave: I do have several questions. One, coming back to compression: if you want to test out the performance of the compression – and as you said, the first time you create a compressed volume it allocates processor and memory – when you're done testing and you say, "OK, I want to delete that volume and go back to no compressed volumes," does that processor and memory get released back automatically? Bill: Yes, and it is on a per-I/O-group basis. So as an example, if it was SVC and you created a compressed volume in I/O group 0, yes, it would reserve those resources, and (inaudible) all the compressed volumes in that I/O group and release that back. Dave: Okay. Next question is on the volume move to another I/O group, and it kind of sounds like, from the testing you've done, pretty much SDDPCM is required as opposed to a native MPIO – is that correct? Bill: Well, today, for instance on Storwize, we require either SDDPCM on AIX or the AIX PCM, which I guess is the native multi-path driver with a PCM module kind of built in. So either one of those should work, but I don't know if the AIX PCM picks up the paths exactly the same. Those are things that will be on the support matrix, I hope – which flavors we do support and why – but I don't know why it wouldn't work with the AIX PCM as well; that's what we know as the MPIO function. Dave: Alright, I will look for that when it comes available. Next question is on clustering the V7000, and now with 6.4 the ability to spread the MDisks across I/O enclosures versus I/O groups. If you have just a single V7000 today and you add in another V7000, or up to four at this point, is there any ability to rebalance a particular storage pool across those, or is that going to take just having your own storage or additional storage and then copying? 
Bill: Well, let me clarify a couple of things. First, in the current code we have had, 6.2 and 6.3, we can do storage pools across I/O groups. So we've been able to do that; what we didn't do was evenly balance the volume creation across all of the nodes. But in your case, if you had one Storwize V7000 I/O group today and you added another one to create a clustered system, if you wanted to take your RAID arrays and your MDisks on this second I/O group and put them in the same storage pool that all of your other ones are in on your original system, you can do that, and now that pool will obviously have MDisks from both I/O groups in it. Just as today, we don't have a rebalance of the volume extents across the whole MDisk pool when you add more MDisks, but there is a Perl script out there to allow you to do that. It's one we've used for SVC for years now, out on www.ibm.com alphaWorks – if you search on SVC tools, it is a Perl script that will go out and look at all of the volumes in that pool and try to evenly redistribute all of those extents across all of the MDisks. So that is what you'd be doing here: let's say you had 10 MDisks in a pool and you added another Storwize system; you would then have another 10 MDisks, and we would re-stripe all of the volumes across all 20 MDisks in that pool. Does that make sense? Dave: It does.
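The extent redistribution that the alphaWorks Perl script performs can be sketched in a few lines of Python. This is only an illustrative model of the idea – redistributing a volume's extents evenly across an enlarged set of MDisks – not the actual script, and all names here are hypothetical:

```python
def rebalance(extent_to_mdisk, mdisks):
    """Re-stripe a volume's extents evenly across all MDisks in the pool,
    including any MDisks added after the volume was created."""
    return {ext: mdisks[i % len(mdisks)]
            for i, ext in enumerate(sorted(extent_to_mdisk))}

# A volume of 200 extents striped across the original 10 MDisks only.
layout = {e: f"mdisk{e % 10}" for e in range(200)}

# The pool grows to 20 MDisks (e.g. a second Storwize is clustered in),
# then the volume is re-striped across all of them.
new_layout = rebalance(layout, [f"mdisk{i}" for i in range(20)])
used = set(new_layout.values())
# after rebalance, every one of the 20 MDisks holds an equal share of extents
```

The real script does this with live extent migrations rather than rewriting a map, but the end state is the same: each MDisk in the pool carries roughly the same number of extents per volume.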
  • #55: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Dave: Last question, and it also involves the clustered V7000. It was said in the presentation that if you have the V7000 clustered, it is required to have all the Fibre Channel ports zoned together for communication. If I introduce replication into this – and I am kind of speaking from the SVC side, where I've seen some documentation that if you're doing remote mirroring between SVCs, you don't zone all of the ports for the storage, you just need one or two ports on the SVC engines for replication. Norman: We need to clarify that. Even on the SVC today you have to use all four Fibre Channel ports on the nodes, just like you do on the Storwize; in a clustered system you have to use all of them. They all have to be zoned together to see each other in that local system. Now, what we're saying – I think it is a flash or tip, you probably saw it – is that when you start doing replication between one SVC and another, or one Storwize V7000 and another, you can zone only two of those ports on each node canister to two ports on each node canister on the other side, so that the replication is only trying to drive I/Os over two of those 4 ports. But locally, all 4 are still being used for local stuff. We're not dedicating two of them for that – that's what we would like to get to, but unfortunately, while I thought we would be able to do that here in 6.4, that is going to be later this year. And then with the FCoE ports we could dedicate ports for Global Mirror. Bill: Yeah, that would be the best thing to do at that point with the FCoE: dedicate those ports and leave everything else on Fibre Channel for the local storage. Dave: Great, that's all I have. Thank you. Operator: Next up is Tucker Johnson. Tucker: Good presentation Bill – very exciting, customer-driven features this time. 
With regards to drivers for the dynamic movement of the volumes between the I/O groups, are we kind of moving toward an SDD architecture for all O/Ss? Could you review that a little bit, and whether there is anything going on there? Because it's the opposite of, say, XIV, where we go to the O/S-resident native driver. Bill: Well, I would say just the opposite: we're probably driving toward using the native multi-path drivers, and they worked fine for this volume move as well. There is a lot of existing testing that we need to do to make sure everything works the way we want it, and of course VMware adds some more complexities in there, but really any multi-path driver – DMP from VERITAS or any of the native ones – should also work with this; we just have to interface with those multi-path drivers. I just mentioned SDD because on AIX and Windows those are two of the main ones we support, and I believe we're actually moving away from having our own multi-path drivers, from everything I've seen and heard.
  • #56: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} I don't know if and when SDD will ever go away, but right now we recommend SDDPCM for AIX and SDDDSM for Windows; but as you see, the AIX PCM is out there natively and we support it as well, so you could use it… Johnson: Does that mean there is really no host-resident software that is required for the feature? I ask that in context because we kind of said on the slides that a fix was required. Bill: Well, that's for SDD, as most people on AIX and Windows are using our (inaudible) modules with SDDDSM and SDDPCM, but there is no requirement for those to be on the host to do that. Obviously if they're using our stuff we want it to work, so there are a couple of fixes in ours for the pathing. So you do not have to have SDD for this to work; we can use whatever is native – DM-Multipath on Linux, I mean, that is what we're using; there is no SDD on the Linux distributions we're supporting today, so that is a native multi-path driver. Johnson: Maybe I missed it, but I didn't see Solaris or HP-UX on the list – are those just futures, or…? Bill: That's a good question. I don't know either; we have to look (inaudible) I think we support (inaudible) and OpenVMS and all of those. I don't know – those will probably be later; we pick up the most notable ones first, so we will have to look at the support side, I just really don't know. Norman: Right now I think it would be a SCORE request… Bill: Yeah, if they're not on the support list, then you would just submit a SCORE and we will prioritize it that way. Johnson: Okay, thank you. Operator: Our next question today will come from Matt Brookland. Matt: Hey Bill, how're you doing? Just a question on the non-disruptive migration that kind of popped into my mind here. 
So once you do the move VDisk command, is that VDisk forever known to two I/O groups, or at some point in time is there a command to stop the presentation from the originating I/O group, so that once you've done all of the steps there is a way to stop it from being presented from the original? Bill: Well, that would be your zoning and your multi-pathing to… Matt: Assume, from the zoning perspective, that I've got an 8-node SVC cluster – and I've got a bunch of customers that do that today – so that is not feasible. Every time that I do a multipath scan on the Red Hat server, let's just say, will I see those paths again from the originating I/O group? Does that make sense? Bill: Yeah, I am not real sure myself. I haven't had a chance to play with this a whole lot, but I believe that when you do the move VDisk it's no longer presented. Well, I shouldn't say that, because we're introducing the ability that you could actually drive I/O to any I/O group and forward it over to the caching I/O group, although I don't recommend that. But I don't know if they're presented, if you would see them through all of them or not – I don't believe so, because you would have too many paths. So I will have to find out; I am not real sure, that's a good point. Matt: Okay, let me know when you find out. Thank you. Operator: And again, ladies and gentlemen, *1 for questions. Next is Christian Karp. Christian: Hi guys, quick question on chart 31. You draw two lines here for the system workload and the compression load. Does that imply that the compression stuff and the system stuff always run on different CPUs?
  • #57: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Bill: It just implies that – well, there is only one CPU, if you will, in the Storwize V7000 canisters and the SVC nodes, but there are multiple cores, so we're using different resources for each of these, and this is giving you some idea of how much of the compression resources are consumed, how busy they are, and how much the fast-path I/O – the regular I/O stuff – is using. Christian: Okay, I am a little bit confused on that screen shot here. Norman: The simple answer to the question is that they do run on different cores. Bill: So this is a gauge of how busy those cores are for compression versus the ones that are available for what you will probably hear called fast-path I/O, or what I call the regular I/O you use today. Christian: Yeah, I should have said cores. So it adds up to more than 100% of the cores. Bill: Yes. Christian: Thanks very much. Operator: Next we will hear from Alaudia Segal. Alaudia: Hi Bill, thanks for the presentation, a good one. The question I have is on upgrading to 6.4. Will the upgrade utility check if it is a Unified system, and will it block it right there? Bill: Well, you don't really upgrade… if you're already Unified, you're actually upgrading the Unified file module, which, when you upgrade it, upgrades the Storwize V7000 control enclosure. So when you upgrade Unified you're not going to get the 6.4 code. Norm, you could probably answer that – there probably is a way, but it's not supported, to just go upgrade the Storwize block part. You have to upgrade it through the Unified system, is that correct? Alaudia: There are some customers that have, as part of their procedures, to run the upgrade utility. So I guess they do it just automatically. What I am trying to find out here is whether, even though we tell them that, (inaudible) upgrade to 6.4. With their procedure, is that (inaudible) right there? Bill: That's a good question. Norm, do you have any idea? 
Norman: Yeah, so the Unified is not going to support 6.4, so if you go into an upgrade you're going to run into problems. At the very minimum you're going to run into support problems, whether things continue to function or not, and it hasn't been tested at this point to be able to say that you would be okay. Alaudia: Before we go into the upgrade, though – that upgrade utility which you run before upgrading? Norman: It should tell you that you cannot go to 6.4. Unidentified male speaker: On a Storwize V7000 Unified system it will prevent you from upgrading to 6.4. Bill: Unless you go around it directly from the V7000 and do that upgrade, and while the Unified may still function, you are definitely going to be out of support. Norman: I wouldn't even count on it functioning. Bill: Because when you're upgrading you're going to upgrade via the 1.3.2 code, which I think is what is available now or shortly, later this month I guess, and the 1.3.2 code, when it upgrades the Unified file modules to 1.3.2, will also be upgrading the Storwize control enclosure to whatever level it requires, and it's not going to upgrade to 6.4. If you're going around it, that's bad – don't do that. Alaudia: Yeah, I understand that. I am just trying to avoid some procedures on things that we have with customers. So the second part of this question – I think you answered it, but I just want to clarify. If you have a Unified system, and you bought the Unified just because you want to have the unified function and you're using mostly block, you are not going to participate in the block improvements of 6.4, right? Bill: Correct.
  • #58: {DESCRIPTION} This is a Q&A slide. {TRANSCRIPT} Bill: And that is disruptive, so you don't want to go there. Unidentified female speaker: I think they can't – they're not allowed to. Bill: Yeah, they're not allowed to. Norman: But as you say, Bill, this is expected to be a fairly short duration. Bill: Later this year, I believe, is the plan. Alaudia: Okay, thank you. Operator: Our next question comes from <Sumon> (inaudible). <Sumon>: So I have a quick question on the V7000 clustering. So we see we can cluster the V7000 and we can actually (inaudible) from the V7000. So in case one of the V7000 expansions goes off for some reason, I might use (inaudible), so how does it (inaudible), because my volume might go offline? Bill: If you lose an entire enclosure on either of the Storwize V7000 I/O groups in the clustered system, then for any volumes that have extents on spindles in that expansion enclosure, that storage pool is going to go offline. So the volumes that are striped on that storage pool will also go offline. That is the same whether it is clustered or not: if an MDisk fails in a storage pool, then the storage pool goes offline and all of the volumes that are on that pool are offline. So it's no different whether it's a single Storwize or a clustered Storwize, or SVC for that matter. So the clustering (inaudible) – unless you were going to try and do some kind of volume mirroring to protect it, there is no enclosure protection, if that is what you're really getting at. Norman: I think you hit on the right answer there, Bill. We should remember that clustered systems are intended to scale capacity and performance; they are not intended to add additional redundancy, because a Storwize V7000 system is already fully redundant anyway. Bill: That's probably why we say you can't use the Storwize V7000 today for any kind of a stretched-cluster high availability thing; we just do that with the SVC. Did that help? 
<Sumon>: Yeah, great. Thanks Bill. Bill: Any other questions? Operator: Yes, we do have a question from Ted Letosky. Ted: Really good presentation, I found it really educational. I feel like I am beating a dead horse, but I want to follow up. I am looking back at slide 38 and we're talking about the ability to do non-disruptive volume migrations. If my LUN or my VDisk lives originally in storage pool A, and I don't have a storage pool B that is spanned – so if I have a storage pool A in my first I/O group and storage pool C in my second I/O group...
  • #59: {DESCRIPTION} The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both: IBM, IBM Logo, on demand business logo, Enterprise Storage Server, xSeries, BladeCenter, eServer, ServeRAID and FlashCopy, System Storage, Tivoli, Easy Tier, Active Cloud Engine The following are trademarks or registered trademarks of other companies. Intel is a trademark of the Intel Corporation in the United States and other countries. Java and all Java-related trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc., in the United States and other countries. Lotus, Notes, and Domino are trademarks or registered trademarks of Lotus Development Corporation. Linux is a registered trademark of Linus Torvalds. Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation. SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC. UNIX is a registered trademark of The Open Group in the United States and other countries. Storwize and the Storwize logo are trademarks or registered trademarks of Storwize Inc., an IBM Company. * All other products may be trademarks or registered trademarks of their respective companies. Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. 
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. The information on the new products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on the new products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for our products remains at our sole discretion. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography. This presentation and the claims outlined in it were reviewed for compliance with US law. 
Adaptations of these claims for use in other geographies must be reviewed by the local country counsel for compliance with local laws.
{TRANSCRIPT} …I can still do a non-disruptive volume move, but I thought I heard you say that there is only one LUN; the VDisk is not a clone of itself. What it is presented by may change, but the VDisk still lives in its original storage pool. Was that correct?
Bill: Think about a volume, or VDisk, as pointers only. In your example, you have a volume that is presented to a server, and when I/Os come into that volume, its pointers indicate where the data is: on which MDisk and in which storage pool. In this case it all stays right there in I/O group 0. If you want to use non-disruptive volume move to move ownership of that volume from control enclosure one to control enclosure two, that is, to I/O group 1, you can do that; that's perfectly fine. The host will still see the same volume, and the pointers still point back to those MDisks in storage pool A. The only thing that changes is that, because the I/O is now coming to control enclosure two, which is I/O group 1, we have to forward the I/Os across the Fibre Channel to the MDisks in storage pool A to get to the data. And that's no big deal; the SVC does that today. If it doesn't have the data, it goes out to the backend controller, and in fact that is what we're doing here. So you can use non-disruptive volume move; it just determines who owns the volume and where the write caching occurs, but the data is physically on some other disk in another control enclosure, (inaudible) controller, and that's fine, we just go across the Fibre Channel and get it.
Unidentified male speaker: Perhaps, Bill, it would help to illustrate this point again: this is largely no different from today. Today, with a clustered Storwize V7000 system, you can have a volume that is owned by one I/O group while the physical storage for that volume is actually connected to a different I/O group within the clustered system. You can do that today; all that changes with non-disruptive volume move is that you now have the ability to change which I/O group owns the volume, while the physical storage stays exactly where it is. There is not much changing here; the way we've talked about it has made it sound more complicated than it already is. It is not a big change.
Bill: Yes, we could always move a volume from one I/O group to another; it just took a brief outage to do it, and where the data resided, as Chris said, is irrelevant. Now we're simply letting you make that move without taking an outage on the host, which lets you balance workloads and so on.
Ted: I guess what I am looking for, Bill, is this: while it might not be common, I can foresee a scenario in which, at a certain point, it would be desirable to move the MDisk itself as well as the pointers.
Bill: We can do that today. If you wanted to change the ownership of the volume from I/O group 0 to I/O group 1, you could do that now non-disruptively, and then say, "By the way, I want all of the data migrated to storage pool C so it stays local within that SAS chain." That's fine, you can do that. We've always been able to migrate a volume from one pool to another, so you can do all of this non-disruptively; the two operations just don't happen at the same time.
Ted: That's what I was looking to clarify: it doesn't have to happen at the same time, but I can still do it.
Bill: Sure you can.
Ted: Outstanding, that answered my questions completely.
Operator: And everyone, at this time there are no further questions.
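The two-step sequence Bill describes, first changing which I/O group owns the volume and then optionally migrating its data to another pool, maps to separate CLI operations on the SVC/Storwize V7000. The volume and pool names below are hypothetical, and the exact command syntax should be verified against the V6.4 command-line reference; this is a sketch of the sequence, not a verified script.

```shell
# Step 1: non-disruptive volume move. Changes which I/O group owns
# (and write-caches) the volume; the data stays on its current MDisks.
svctask movevdisk -iogrp io_grp1 myvolume

# Step 2 (optional, run afterwards): migrate the volume's extents to a
# pool local to the new I/O group's SAS chain, e.g. a pool named PoolC.
svctask migratevdisk -mdiskgrp PoolC -vdisk myvolume

# Monitor the extent migration; the host sees no interruption either way.
svcinfo lsmigrate
```

As noted in the discussion, the two steps cannot run as a single combined operation, but both are non-disruptive to host I/O.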
I will turn the call back over to our speakers for any additional or closing remarks.
Bill: Mary, did you want to say anything?
Mary: No thanks, Bill. That was a good presentation, and I will just remind everyone that we have the follow-up part 2 call on compression tomorrow. Thanks everyone for joining, and we hope to hear from you tomorrow.