How to Configure OpenStack Swift with Modern Tape

ORACLE WHITE PAPER   |   MAY 2015
Table of Contents

Introduction
Architecture
Software Requirements
Hardware Requirements
    Swift Proxy Server
    Swift Storage Node/QFS Client
    Oracle HSM Metadata Server
    Oracle HSM Primary Disk Cache
    Tape Library
Sizing Considerations
Oracle HSM Installation
    Prior to Installation
    Downloading the Oracle HSM Software Packages from E-Delivery and Installation
    Update Environment Variables for Commands and Man Pages
    Install the Oracle HSM Web GUI
    Configure the QFS File Systems
    Install the Oracle HSM Client on the Swift Storage Nodes
    Enabling the Network Time Protocol Daemon (Shared QFS)
Swift Installation
    Install the Software Packages on the Swift Proxy and Storage Nodes
Configure Authentication—OpenStack Keystone
    Configure MySQL
    Configure the MySQL Administrative User in Keystone
    Create the Keystone Service Database
    Configure the Keystone Configuration File
    Configure the Keystone Services
    Configure the proxy-server.conf File
Configure Authentication—TempAuth
Configure Swift
    Create the swift.conf File on the Swift Proxy Node
    Configure the Storage Nodes
    Configure the Rings on the Proxy Server
    Update Storage Node Configuration Files
    Start the Swift Services on the Storage Nodes
    Saving Swift Configuration Files
Using Swift
Configuring Tape Archiving in Oracle HSM
    Configuring the Archiver
Backup and Recovery
Performance Tuning and Monitoring
    Performance Benchmarking
    Identifying Bottlenecks
    Performance Tuning
    Oracle HSM Benchmarking and Tuning
Determining Object, Container, and Account of File in QFS

Introduction
As digital data grows exponentially, the demands on storage systems are increasing. There is a need
for storage systems that can scale, handle many concurrent users, and be accessible via a URL.
Public cloud storage systems are addressing the challenge of handling web-scale workloads.
OpenStack Swift is an open source option to create a durable and scalable object storage system as
part of a private cloud or public cloud offering.

Oracle’s StorageTek tape drives, tape libraries, and related software provide a comprehensive
portfolio. Oracle Hierarchical Storage Manager (Oracle HSM) software is a mature, scalable storage
management tool that is optimized for writing and reading tapes. Oracle HSM was formerly known as
Oracle’s StorageTek Storage Archive Manager.

This paper describes how to set up OpenStack Swift with Oracle HSM for the lowest cost cold storage.
By combining these two products, users get the best of web interfaces via OpenStack and the best in
tape archive management software via Oracle HSM.

Architecture
The OpenStack Swift architecture consists of the following four services:

» Proxy—authenticates users and routes requests to the appropriate storage nodes
» Account—tracks the containers associated with an account
» Container—tracks the objects associated with a container
» Object—stores objects in the file system
A typical installation has one or more proxy servers and multiple storage nodes that run the account, container, and
object services. The services can also be separated onto different servers to increase performance.

The storage nodes are grouped into zones. When deploying Swift with disk-only storage, you have multiple zones
and data spread across zones to improve durability. For example, zones may represent different data centers or
racks. When using Swift with Oracle HSM, there is only one zone since Swift manages one instance of an object
and Oracle HSM creates multiple copies of each object. Following is a diagram of a traditional Swift environment
with disk-only storage nodes.

Oracle HSM is a traditional hierarchical storage manager that is efficient at utilizing low-cost tape as a storage
medium. A typical Oracle HSM environment consists of a metadata server, a primary disk cache (Fibre Channel or
iSCSI), a tape library, and QFS clients. QFS is a shared file system that scales out to multiple server nodes.
Incoming data is written to the primary disk cache by the QFS clients, and the Oracle HSM metadata server
migrates data to tape based on policy. In the 6.0 release of Oracle HSM, support was added for extended attributes,
allowing QFS to serve as the underlying file system in a Swift cluster.

When running Swift with Oracle HSM, the QFS clients function as the Swift storage nodes. The proxy server
handles the incoming Swift requests and writes data to the storage nodes. The Oracle HSM server migrates data
from shared disk to tape based on policies. Here is what a typical Swift/Oracle HSM environment looks like:

The solution in this document describes how to install an environment with one Swift proxy server, three Swift
storage nodes, and one Oracle HSM metadata server. Each storage node has two QFS file systems for storing
objects as file data. Oracle HSM has the ability to separate file metadata onto a different file system from the file
data. This is done to increase performance. In addition, the Swift account and container information is stored on a
local file system on the storage nodes to separate it from the file data and improve performance.

Software Requirements
Operating systems:

» Oracle HSM metadata server—Oracle Solaris 11.1 or higher (x86 or Oracle’s SPARC)
» Swift proxy node—RedHat 6.5, Oracle Linux 6.5, or CentOS 6.5 (x86)
» Swift storage node/QFS client—RedHat 6.5, Oracle Linux 6.5, or CentOS 6.5 (x86)
The Oracle HSM metadata server requires a minimum of Oracle HSM version 6.0. The Icehouse release of
OpenStack Swift is tested with Oracle HSM 6.0.

Hardware Requirements
When selecting a hardware design for OpenStack Swift, there is flexibility in the options such as the CPU, memory,
and network cards. This section outlines a general hardware design and covers some of the considerations that go
into the hardware selection.

Swift Proxy Server
The proxy server handles all incoming Swift requests. This is typically a more powerful server so that it does not
become the bottleneck. You can add more proxy servers if required, but it is recommended to have fewer, more
powerful servers.

» Processor: dual quad-core
» Memory: 64 GB
» Network I/O: 2 x 10 Gb/sec (one for external connectivity and one for storage node connectivity)

Swift Storage Node/QFS Client
The storage node servers are responsible for writing incoming objects onto the QFS file system. Data is ingested by
the nodes and written to the disk cache. A typical architecture has multiple Swift storage nodes. The storage node
server runs the account, container, and object services.

» Processor: single quad-core.
» Memory: 16 GB.
» Network I/O: 1 Gb/sec or 10 Gb/sec for incoming requests from the Swift proxy server.
» Storage I/O: 2 x 4 Gb Fibre Channel for connectivity to the disk cache.
» Local storage: Used to store the account and container SQLite databases. Consider using flash to increase the
  objects/second performance. One million objects require 1 GB of local storage for each replica copy on the
  storage nodes running the account and container services. In this configuration, there are three replica copies
  spread across the three storage nodes. Therefore, each storage node needs 1 GB of local storage for every 1
  million objects. This is covered in more detail in the Swift installation section.

Oracle HSM Metadata Server
This server migrates data between the disk cache and tape based on policy. It can be x86 or SPARC, but requires
the Oracle Solaris operating system. It is generally not CPU intensive but should be sized to handle the desired
throughput. A second Oracle HSM metadata server can be added for availability.

» Processor: single or dual quad-core
» Memory: 64 GB
» Network I/O: 1 Gb/sec (metadata communication with QFS clients)

» Storage I/O: 4 x 8 Gb Fibre Channel (two for disk cache connectivity and two for tape connectivity)
Oracle’s recommendation is to create a separate partition for metadata as described in the Oracle HSM
configuration section as the metadata can reside on flash for improved performance. With 1 billion Swift objects, it is
recommended to have approximately 12 TB of flash for metadata.
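Scaled down, that figure works out to roughly 12 GB of metadata flash per million objects, which can be used to size smaller configurations.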

Oracle HSM Primary Disk Cache
All incoming objects/files are staged here by the Swift storage nodes/QFS clients prior to the Oracle HSM metadata
server migrating the files to tape. Fibre Channel is recommended, but iSCSI is supported. There needs to be
enough capacity and performance to ingest data and copy it to tape. The disk cache is also used to stage data when
reading. You need to configure enough capacity to hold objects for a specified retention period plus additional
overhead for the Swift file system. If you ingest data at 1 GB/sec and need to keep the data on disk for eight hours,
then you need 1 GB/sec × 3,600 sec/hour × 8 hours = 28,800 GB ≈ 28.8 TB.
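The same arithmetic can be checked quickly on the command line (the ingest rate and retention period below are the example values above; adjust them for your workload):

# echo "scale=1; 1*60*60*8/1000" | bc

28.8

The result is the required cache capacity in TB for a 1 GB/sec ingest rate held for eight hours, before adding file system overhead.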

A disk cache that is 10 percent of tape capacity is a general recommendation.

Tape Library
All objects are eventually written to tape by the Oracle HSM metadata server. There should be enough capacity to
hold the data and enough tape drives to deliver the needed throughput. A general recommendation is to have two
copies of data on tape for resiliency, but this is not a requirement if there are more copies stored elsewhere. You
may want to have additional tape drives available for reads depending on the volume of read requests as a tape
drive needs to mount a volume for data not resident in the disk cache. To maximize throughput, a general guideline
is one tape drive per file system or more tape drives if a single file system can stream more than the ingest rate of a
single tape drive (for example, 252 MB/sec on Oracle’s StorageTek T10000D tape drive). Multiple file systems
cannot write to a single tape drive simultaneously.

Sizing Considerations
The two key factors that influence the hardware design are the number of objects and the amount of throughput
required.

The number of objects determines the number of QFS file systems required. Each object is stored as a file in the
QFS file system. The rule of thumb is to store up to 25 million objects in a single file system when metadata is stored
on flash, or 5 million objects when the metadata is stored on disk. The object count per file system is limited so as not to
impact performance. A storage node can have multiple QFS file systems assuming it can drive sufficient throughput.
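For example, with metadata on flash, the six QFS file systems in this paper's configuration would support up to 6 × 25 million = 150 million objects; planning for 1 billion objects would call for roughly 40 file systems spread across additional storage nodes.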

The Oracle HSM metadata server, Swift storage nodes, and Swift proxies need to support the required performance
based on the system throughput, network cards, and storage adapters. The Oracle HSM metadata server needs to
support the ability to read the data from disk and write to tape simultaneously. Oracle HSM distributed I/O servers
may be added to increase the performance when accessing tape. Details on distributed I/O can be found in the
Oracle HSM Maintenance and Administration Guide. Throughput requirements also drive the sizing of the disk cache and tape library.
Note that the disk cache should be able to handle twice the required throughput, since data is written to the disk cache
and then read back from it when copies are written to tape.

Oracle HSM Installation
The installation of Oracle HSM is well documented in the manuals available here:
http://docs.oracle.com/cd/E51305_01/index.html

Prior to Installation
You should have a tape library and primary disk cache available and set up for use. The Oracle HSM metadata
server runs the Oracle Solaris operating system while the Swift proxy and storage nodes run the Linux operating
system.

Downloading the Oracle HSM Software Packages from E-Delivery and Installation
» Log in to edelivery.oracle.com, select the “Oracle StorageTek Products” product pack. For the platform, select
  your version of Oracle Solaris (x86 or SPARC).
» You need to install Oracle HSM version 6.0 at a minimum. This version is enhanced to support extended
  attributes so that Swift can run on the QFS file system.
» Download the zip file “Oracle Hierarchical Storage Manager,” which includes both Oracle HSM and the QFS file
  system. You do not need to download Oracle’s “StorageTek QFS Software” as it is included with the Oracle
  Hierarchical Storage Manager download.
» Unzip the zip file:
# unzip V74688-01.zip

# ls -1

./

../

COPYRIGHT.txt

README.txt

Oracle-HSM_6.0/

linux.iso

» Move to the Oracle-HSM_6.0 directory and then to the subdirectory that corresponds to your host architecture,
  either solaris_sparc/ or solaris_x64/, and list the contents:
In the example, note the change to the solaris_sparc/ subdirectory:

# cd solaris_sparc/

# ls -1

./

../

S10/

S11/

S11_ips/

fsmgr_5.4.01.zip

fsmgr_setup*

» When Oracle Solaris 11 or later is installed on the host, you can install the software using the Image Packaging
  System feature. To use the Image Packaging System, change to the subdirectory S11_ips/ and install the Oracle HSM
  software using the Image Packaging System:
# cd S11_ips/

» List the contents of the subdirectory S11_ips/:
# ls -1

./

../

repo.samqfs/
» Change to the repository subdirectory repo.samqfs/:
# cd repo.samqfs/

# ls -1

./

../

pkg5.repository

publisher/

» To install both the Oracle HSM and StorageTek QFS software packages, use the command pkg install -g . --accept
  SUNWsamfs SUNWsamqassy, where . is the current directory (the repository) and SUNWsamfs and SUNWsamqassy
  are the Oracle HSM Image Packaging System package names:
# pkg install -g . --accept SUNWsamfs SUNWsamqassy

Creating plan

...

* The licence and distribution terms for any publically available version or

* derivative of this code cannot be changed. i.e. this code cannot simply be

* copied and put under another distribution license

* [including the GNU Public License.]

*/

Packages to install: 2

Create boot environment: No

Create backup boot environment: Yes

DOWNLOAD PKGS FILES XFER (MB) SPEED

Completed 2/2 520/520 21.4/21.4 0B/s

PHASE ITEMS

Installing new actions 693/693

Updating package state database Done

Updating image state Done

Creating fast lookup database Done

» When the packages finish installing, run the post-installation script, sam-qfs-post-install. It is located in the util/
  subdirectory of the Oracle HSM installation directory (either /opt/SUNWsamfs/ or /opt/SUNWqfs/). The example
  runs /opt/SUNWsamfs/util/sam-qfs-post-install:
# /opt/SUNWsamfs/util/sam-qfs-post-install

- The administrator commands will be executable by root only (group bin).

If this is the desired value, enter "y". If you want to change

the specified value enter "c".

...

#

Update Environment Variables for Commands and Man Pages
» Add /opt/SUNWsamfs/bin and /opt/SUNWsamfs/sbin to the system PATH variable.
» Add the Oracle HSM directory /opt/SUNWsamfs/man to the system MANPATH variable.
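For example, on a Bourne-compatible shell the variables could be updated as follows (add the same lines to the root profile to make the change persistent):

# export PATH=$PATH:/opt/SUNWsamfs/bin:/opt/SUNWsamfs/sbin

# export MANPATH=$MANPATH:/opt/SUNWsamfs/man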

Install the Oracle HSM Web GUI
» Start the installation process by running the fsmgr_setup script from the directory where you downloaded the
  Oracle HSM software.
# ./fsmgr_setup

Configure the QFS File Systems
Typically, metadata information is stored on the same device as the file data. However, QFS provides the option of
storing the metadata on a separate device from the file data to improve performance. In this example, the metadata
is separated from the file data by leveraging the ma file system. Metadata is stored in the mm device, and file data is
stored in the md device. It is recommended to store the metadata (mm device) on flash for improved performance
during ingest and when backing up the metadata.

There are two file systems per client for a total of six (cache1 – cache6). There is also one tape library with four tape
drives.
» NOTE: More than one file system per LUN is not recommended for performance reasons. However, you may
  assign multiple LUNs to a single file system.
On the Oracle HSM metadata server, add the QFS client hosts to the file /etc/opt/SUNWsamfs/hosts.cache1. Create
this file for each of the six file systems (cache1 – cache6). The file will look like the following:

#
# Host file for family set 'cache1'
#
# Host                       Host IP      Server    Not   Server
# Name                       Addresses    Priority  Used  Host
# -------------------------  -----------  --------  ----  ------
swift-sam                    swift-sam    1         -     server
swift-client1.example.com    10.80.0.3    -         -
swift-client2.example.com    10.80.0.4    -         -
swift-client3.example.com    10.80.0.5    -         -

Add devices to the MCF file /etc/opt/SUNWsamfs/mcf on the Oracle HSM metadata server:

# Sample MCF File
# Disk cache configuration for Swift-Tape
#
# Equipment                                          Eq   Eq    Family   Device
# Identifier                                         Ord  Type  Set      State
# -------------------------------------------------  ---  ----  -------  ------
cache1                                               10   ma    cache1   on     shared
/dev/dsk/c3t1d0s0                                    11   mm    cache1
/dev/dsk/c0t600A0B80004795BC00000C6A540EAB6Dd0s0     12   md    cache1
cache2                                               20   ma    cache2   on     shared
/dev/dsk/c3t2d0s0                                    21   mm    cache2
/dev/dsk/c0t600A0B80004795BC00000C6D540EAC18d0s0     22   md    cache2
cache3                                               30   ma    cache3   on     shared
/dev/dsk/c3t3d0s0                                    31   mm    cache3
/dev/dsk/c0t600A0B80004795BC00000C70540EACA1d0s0     32   md    cache3
cache4                                               40   ma    cache4   on     shared
/dev/dsk/c3t4d0s0                                    41   mm    cache4
/dev/dsk/c0t600A0B800047958800000AB1540EB653d0s0     42   md    cache4
cache5                                               50   ma    cache5   on     shared
/dev/dsk/c3t5d0s0                                    51   mm    cache5
/dev/dsk/c0t600A0B800047958800000AB4540EB6D2d0s0     52   md    cache5
cache6                                               60   ma    cache6   on     shared
/dev/dsk/c3t6d0s0                                    61   mm    cache6
/dev/dsk/c0t600A0B800047958800000AB7540EB75Ed0s0     62   md    cache6
/dev/scsi/changer/c9t500104F000AFB438d0              90   s3    swift1   on
/dev/rmt/8cbn                                        91   li    swift1   on
/dev/rmt/6cbn                                        92   li    swift1   on
/dev/rmt/11cbn                                       93   li    swift1   on
/dev/rmt/0cbn                                        94   li    swift1   on
Check the mcf file for errors by running the sam-fsd command. The sam-fsd command reads the Oracle HSM
configuration files and initializes the file systems. It will stop if it encounters an error:

# sam-fsd

Tell the samd service to reread the mcf file and reconfigure itself accordingly. Correct any errors reported and repeat
as necessary:

# samd config

Create the file system using the /opt/SUNWsamfs/sbin/sammkfs -S command and the family set name of the file
system:

# sammkfs -S cache1

Building 'cache1' will destroy the contents of devices:

/dev/dsk/c3t1d0s0

/dev/dsk/c0t600A0B80004795BC00000C6A540EAB6Dd0s0

Do you wish to continue? [y/N]yes

Add the new file system to the operating system's virtual file system configuration. The /etc/vfstab file on the Oracle
HSM metadata server would have the following entries based on this example:

root@swift-sam:/opt/SUNWsamfs# cat /etc/vfstab
#device                     device     mount               FS       fsck   mount     mount
#to mount                   to fsck    point               type     pass   at boot   options
#
/devices                    -          /devices            devfs    -      no        -
/proc                       -          /proc               proc     -      no        -
ctfs                        -          /system/contract    ctfs     -      no        -
objfs                       -          /system/object      objfs    -      no        -
sharefs                     -          /etc/dfs/sharetab   sharefs  -      no        -
fd                          -          /dev/fd             fd       -      no        -
swap                        -          /tmp                tmpfs    -      yes       -
/dev/zvol/dsk/rpool/swap    -          -                   swap     -      no        -
cache1                      -          /cache1             samfs    -      no        shared
cache2                      -          /cache2             samfs    -      no        shared
cache3                      -          /cache3             samfs    -      no        shared
cache4                      -          /cache4             samfs    -      no        shared
cache5                      -          /cache5             samfs    -      no        shared
cache6                      -          /cache6             samfs    -      no        shared

» If the new QFS shared file system does not already have a mount point, create the directory for the mount point.
  For example:
# mkdir /cache1
» Give the mount point the 755 set of permissions. For example:
# chmod 755 /cache1
» Mount the file system on the metadata server before you mount the file system on any client hosts. For example,
  run the following on the metadata server:
# mount /cache1

» Verify that the file system is mounted on the metadata server:
# df -k

Install the Oracle HSM Client on the Swift Storage Nodes
The QFS file system is enhanced to support extended attributes so that it can run Swift. The Swift storage nodes
need to be installed with the QFS file system.

The first step is to download the software from e-delivery as you did for the Oracle HSM metadata server and install
the software. The Linux files are included in the same bundle that you downloaded for Oracle Solaris.

You need to perform several steps to ensure the system is ready to install.

Edit the file /etc/selinux/config to disable SELINUX:

SELINUX=disabled

Edit the file /boot/grub/grub.conf to boot standard Linux and not Oracle's Unbreakable Enterprise Kernel:

default=1

If you changed the default value, then reboot.

Run the following command to determine the Linux version:

# uname -r

Next install the Kernel development package:

# yum -y install kernel-devel

You may need to install dependencies. For example, the following had to be installed during installation:

# yum -y install ksh rpm-build

Updating to the latest version is recommended:

# yum -y update

If you update the system, confirm that grub.conf did not change.

Type the following commands as root on the Linux system:

# mount -o ro,loop -t iso9660 linux.iso /mnt

# /mnt/linux1/Install

Repeat this for each of the three storage nodes.

The Linux client software automatically generates an mcf file. If an mcf file does not exist, the Linux client will create
one when the system is booted or when samd config is run. Verify that the /etc/opt/SUNWsamfs/mcf file contains the
correct paths. Here is a sample MCF file for the QFS client:

#

# This MCF file was auto generated using /opt/SUNWsamfs/sbin/samfsconfig

#

#

# Family Set 'cache1' Created Sun Sep 14 20:54:32 2014

# Generation 0 Eq count 2 Eq meta count 1

#

# zoned-off or missing metadata device

#

cache1 10 ma cache1 - shared

nodev       11   mm cache1 -

/dev/sdj1   12   md cache1 -

#

# Family Set 'cache2' Created Sun Sep 14 20:54:40 2014

# Generation 0 Eq count 2 Eq meta count 1

#

# zoned-off or missing metadata device

#

cache2 20 ma cache2 - shared

nodev       21   mm cache2 -

/dev/sdl1   22   md cache2 -

#

# Family Set 'cache3' Created Sun Sep 14 20:54:44 2014

# Generation 0 Eq count 2 Eq meta count 1

#

# zoned-off or missing metadata device

#

cache3 30 ma cache3 - shared

nodev       31   mm cache3 -

/dev/sdn1   32   md cache3 -

#

# Family Set 'cache4' Created Sun Sep 14 20:54:47 2014

# Generation 0 Eq count 2 Eq meta count 1

#

# zoned-off or missing metadata device

#

cache4 40 ma cache4 - shared

nodev       41   mm cache4 -

/dev/sde1   42   md cache4 -

#

# Family Set 'cache5' Created Sun Sep 14 20:54:50 2014

# Generation 0 Eq count 2 Eq meta count 1

#

# zoned-off or missing metadata device

#

cache5 50 ma cache5 - shared

nodev       51   mm cache5 -

/dev/sdg1   52   md cache5 -

#

# Family Set 'cache6' Created Sun Sep 14 20:54:54 2014

# Generation 0 Eq count 2 Eq meta count 1

#

# zoned-off or missing metadata device

#

cache6 60 ma cache6 - shared

nodev       61   mm cache6 -

/dev/sdi1   62   md cache6 -

The /opt/SUNWsamfs/sbin/samfsconfig command will display file system names and device path names if you need
to modify the Linux client mcf file. Take into account the differences in logical unit numbers (LUNs) between Oracle
Solaris and Linux.
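For example, the following is one way to list the file systems and devices visible to a client, assuming the shared LUNs appear as /dev/sd* on the Linux host:

# /opt/SUNWsamfs/sbin/samfsconfig /dev/sd*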

Check the mcf file for errors by running the sam-fsd command, and correct any errors found.

The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters
an error. For example:

# /opt/SUNWsamfs/sbin/sam-fsd

Verify entries in the /etc/fstab file on the QFS clients. Here is a sample file for client1:

[root@swift-client1 examples]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Sep 12 10:44:53 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_swiftclient1-lv_root        /                 ext4    defaults                                1 1
UUID=3a48fa93-cae3-4efd-893e-d94bad59f4a2  /boot             ext4    defaults                                1 2
/dev/mapper/vg_swiftclient1-lv_home        /home             ext4    defaults                                1 2
/dev/mapper/vg_swiftclient1-lv_swap        swap              swap    defaults                                0 0
tmpfs                                      /dev/shm          tmpfs   defaults                                0 0
devpts                                     /dev/pts          devpts  gid=5,mode=620                          0 0
sysfs                                      /sys              sysfs   defaults                                0 0
proc                                       /proc             proc    defaults                                0 0
/dev/sdb1                                  /srv/node/sdb1    xfs     noatime,nodiratime,nobarrier,logbufs=8  0 0
cache1                                     /srv/node/cache1  samfs   shared                                  0 0

Create the mount point specified in the /etc/fstab file, and set the access permissions for the mount point:

# mkdir /srv/node

# mkdir /srv/node/cache1

# chmod 755 /srv/node/cache1

Mount the file systems on the storage nodes under /srv/node. For example:

# mount -t samfs -o shared cache1 /srv/node/cache1

If the file system does not mount, then you may need to run the following command:

# /opt/SUNWsamfs/sbin/samd config

Enabling the Network Time Protocol Daemon (Shared QFS)
This section describes how to enable the network time protocol daemon in a shared QFS environment.

In the /etc/inet/ntp.conf file, add the following lines:

server nettime prefer

server earth

Issue the following commands:

# sync

# reboot

You also need to point the Swift proxy and clients to the same server by editing /etc/ntp.conf on those systems.
Then restart the NTP service on each server:

# service ntpd restart

Swift Installation

Install the Software Packages on the Swift Proxy and Storage Nodes
» Operating system: RedHat 6.5, CentOS 6.5, or Oracle Linux 6.5 (x86)
» OpenStack Keystone is used for the authentication service in this example with Swift:
     » Download OpenStack packages from Fedora (Icehouse release is recommended).
     » https://repos.fedorapeople.org/repos/openstack.
     » You need to run the install as root or superuser.
     » If you are downloading via a proxy, then export the proxy. For example:
#export http_proxy=http://www-proxy.example.com:80

#export https_proxy=http://www-proxy.example.com:80

     » You can also add an entry to /etc/yum.conf. For example:
proxy=http://www-proxy.example.com:80
     » Create a temporary file /tmp/yumtmp and copy the contents of /etc/yum.conf:
# cat /etc/yum.conf > /tmp/yumtmp
     » Then add the following at the bottom of /tmp/yumtmp:
[epel-release]

name=epel-release

enabled=1

mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch

     » Install the epel-release using the temporary file:
#/usr/bin/yum install -y --nogpg -c /tmp/yumtmp epel-release

» Install the OpenStack rpm:
#/usr/bin/yum install -y --nogpg https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse.rpm

     » Install the Swift packages including Keystone for authentication services:
#/usr/bin/yum install -y ntp memcached mysql-server openstack-keystone openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object openstack-swift-proxy openstack-utils rsync

     » If yum tries to install the wrong version of the packages, you can reset the yum cache by running the following:
#/usr/bin/yum clean all

#/usr/bin/yum info kernel

Remember to repeat the install for the Swift proxy and all Swift storage nodes. You do not need to install mysql,
Keystone, or the proxy packages on the Swift storage nodes.

Configure Authentication—OpenStack Keystone
Swift supports several authentication services for identity management of users. You may choose to use Keystone,
TempAuth, or another supported authentication service. Skip to the next section if you plan to install TempAuth
instead of Keystone. This example shows the steps for configuring OpenStack Keystone on the proxy server. Users
connect to Keystone endpoints by using usernames and tenant credentials. A user is part of a tenant (group) or
multiple tenants. Keystone returns a token used to access other OpenStack endpoints, such as Swift.

When a user connects to Swift supplying the authentication token, Swift checks with Keystone to confirm the
credentials and then allows the user access to Swift.
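As a rough illustration of that flow, once a token has been obtained it is passed to the Swift proxy in the X-Auth-Token header (the token and tenant ID below are placeholders; the endpoint format matches the one configured later in this paper):

# curl -H 'X-Auth-Token: <token>' http://swift-proxy:8080/v1/AUTH_<tenant_id>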

Configure MySQL
Keystone requires MySQL, and the mysql-server package was installed above. First, start the mysqld service, and run the
chkconfig command so that it starts on system boot:

# /sbin/service mysqld start

# /sbin/chkconfig mysqld on

Delete all the anonymous users by running the following command, responding “yes” to the prompts:

# mysql_secure_installation

Configure the MySQL Administrative User in Keystone
Next you create the MySQL administrative user by running the following command and replacing mysql_admin and
mysql_admin_pw with the username and password of your choosing:

# /usr/bin/mysqladmin --user=mysql_admin --password=mysql_admin_pw

Then run the following command:

# /usr/bin/mysql --user=mysql_admin --password=mysql_admin_pw

At the mysql prompt, run the following commands:

GRANT ALL PRIVILEGES ON *.* TO 'mysql_admin'@'localhost' IDENTIFIED BY 'mysql_admin_pw' WITH GRANT OPTION;

GRANT ALL PRIVILEGES ON *.* TO 'mysql_admin'@'swift_proxy' IDENTIFIED BY 'mysql_admin_pw' WITH GRANT OPTION;

GRANT ALL PRIVILEGES ON *.* TO 'mysql_admin'@'%' IDENTIFIED BY 'mysql_admin_pw' WITH GRANT OPTION;

exit

Finally, run the following command to flush privileges:

# /usr/bin/mysqladmin --user=mysql_admin --password=mysql_admin_pw --host=localhost flush-privileges

Create the Keystone Service Database
Start the Keystone service and configure it to start at system boot:

# /sbin/service openstack-keystone start

# /sbin/chkconfig openstack-keystone on

Create the Keystone service database:

# /usr/bin/mysql --user=mysql_admin --password=mysql_admin_pw

At the mysql prompt, run the following commands:

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone_pw' WITH GRANT OPTION;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_pw' WITH GRANT OPTION;

exit

Run the following command to flush privileges:

# /usr/bin/mysqladmin --user=mysql_admin --password=mysql_admin_pw --host=localhost flush-privileges

Make sure that ownership of Keystone's files is correct since the installation is done as root.

# /bin/chown -R keystone:keystone /etc/keystone

# /bin/touch /var/log/keystone/keystone.log

# /bin/chown -R keystone:keystone /var/log/keystone

Configure the Keystone Configuration File
Generate a random token and add it to the /etc/keystone/keystone.conf file:

# openssl rand -hex 10

Input the token into the /etc/keystone/keystone.conf file under [DEFAULT] along with the following parameters:

[DEFAULT]

admin_token = random_token

public_bind_host = 0.0.0.0

admin_bind_host = 0.0.0.0

public_port = 5000

admin_port = 35357

Set the database connection to MySQL in the /etc/keystone/keystone.conf file. In this example, the Keystone
password is keystone, thus the entry is keystone:keystone.

[database]

connection = mysql://keystone:keystone@localhost/keystone

Update the token_format parameter in /etc/keystone/keystone.conf file under [signing]:

[signing]

token_format=UUID

Then remove the SQLite file if it exists in /var/lib/keystone:

# rm -rf /var/lib/keystone/keystone.sqlite

Restart the Keystone service and synchronize the database:

# /sbin/service openstack-keystone restart

# /usr/bin/keystone-manage db_sync

Configure the Keystone Services
Export the following environment variables:

# export OS_SERVICE_TOKEN=random_token

# export OS_SERVICE_ENDPOINT=http://swift-proxy:35357/v2.0

Next create a tenant, user, and the role for an administrative user called admin:

# /usr/bin/keystone tenant-create --name=admin --description='Admin Tenant' --enabled=true

# /usr/bin/keystone user-create --name=admin --pass=admin_pw --tenant=admin --email=root@localhost --enabled=true

# /usr/bin/keystone role-create --name=admin

# /usr/bin/keystone user-role-add --user=admin --tenant=admin --role=admin

Next create a tenant, user, and the role for a Swift user called swift:

# /usr/bin/keystone tenant-create --name=swift-tenant --description='Swift Tenant' --enabled=true

# /usr/bin/keystone user-create --name=swift --pass=swift_pw --tenant=swift-tenant --email=swift@localhost --enabled=true

# /usr/bin/keystone role-create --name=Member

# /usr/bin/keystone user-role-add --user=swift --tenant=swift-tenant --role=Member

# /usr/bin/keystone user-role-add --user=swift --tenant=swift-tenant --role=admin

Create the Keystone and Swift services:

# /usr/bin/keystone service-create --name=keystone --type=identity --description='Keystone Identity Service'

# /usr/bin/keystone service-create --name=swift --type=object-store --description='Swift Service'

Run the following command to list the service IDs, which are needed when creating the endpoints:

# /usr/bin/keystone service-list

Create the Keystone and Swift endpoints:

# /usr/bin/keystone endpoint-create --service-id KEYSTONE_SERVICE_ID --region RegionOne \
  --publicurl 'http://swift-proxy:5000/v2.0' --adminurl 'http://swift-proxy:35357/v2.0' \
  --internalurl 'http://swift-proxy:5000/v2.0'

# /usr/bin/keystone endpoint-create --service-id SWIFT_SERVICE_ID --region RegionOne \
  --publicurl 'http://swift-proxy:8080/v1/AUTH_%(tenant_id)s' --adminurl 'http://swift-proxy:8080/v1' \
  --internalurl 'http://swift-proxy:8080/v1/AUTH_%(tenant_id)s'

Unset the environment variables:

# unset OS_SERVICE_TOKEN

# unset OS_SERVICE_ENDPOINT

Verify the Keystone service is working by requesting an auth token:

# /usr/bin/keystone --os-username=admin --os-password=admin_pw --os-tenant-name=admin --os-auth-url=http://swift-proxy:35357/v2.0 token-get

Test the Keystone service by listing the users:

# /usr/bin/keystone --os-token=random_token --os-endpoint=http://swift-proxy:35357/v2.0 user-list

Configure the proxy-server.conf File
Ensure that you have correctly edited the /etc/swift/proxy-server.conf file so that it has the correct auth_host,
admin_token, admin_tenant_name, admin_user, and admin_password. You also need to update the node_timeout
and conn_timeout to 600 since data could be recalled from tape.

[DEFAULT]

bind_port = 8080

workers = auto

user = swift

log_level = INFO

[pipeline:main]

pipeline = healthcheck cache authtoken keystone proxy-server

[app:proxy-server]

use = egg:swift#proxy

allow_account_management = true

account_autocreate = true

node_timeout = 600

conn_timeout = 600

[filter:cache]

use = egg:swift#memcache

memcache_servers = 127.0.0.1:11211

[filter:catch_errors]

use = egg:swift#catch_errors

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:keystone]

use = egg:swift#keystoneauth

operator_roles = admin, SwiftOperator

is_admin = true

cache = swift.cache

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

#delay_auth_decision = true

admin_tenant_name = swift-tenant

admin_user = swift

admin_password = swift

auth_host = 10.80.28.02

auth_port = 35357

auth_protocol = http

signing_dir = /tmp/keystone-signing-swift

Configure Authentication—TempAuth
The following describes how to set up and configure TempAuth as the authentication service for Swift. TempAuth
should only be used in test deployments and not in production. User names and passwords are stored as clear text
in the file /etc/swift/proxy-server.conf.

Edit the file /etc/swift/proxy-server.conf to enable tempauth in the pipeline and add a tempauth section for user
information. You also need to update the node_timeout and conn_timeout to 600 since data could be recalled from
tape. A sample file might look like the following:

[DEFAULT]

bind_port = 8080

workers = auto

user = swift

[pipeline:main]

pipeline = cache tempauth proxy-server

[app:proxy-server]

use = egg:swift#proxy

account_autocreate = true

allow_account_management = true

node_timeout = 600

conn_timeout = 600

[filter:cache]

use = egg:swift#memcache

memcache_servers = 127.0.0.1:11211

[filter:tempauth]

use = egg:swift#tempauth

user_admin_admin = admin .admin .reseller_admin

user_test_tester = testing .admin

user_test2_tester2 = testing2 .admin

user_test_tester3 = testing3

To verify tempauth authentication once the Swift proxy is configured, restart the swift-proxy service:

# service openstack-swift-proxy restart

And use the following command:

# curl -v -H 'X-Auth-User: test:tester' -H 'X-Auth-Key: testing' http://localhost:8080/auth/v1.0/

Configure Swift

Create the swift.conf File on the Swift Proxy Node
The first step is to create the /etc/swift directory and assign permissions to Swift by running the following commands:

# mkdir -p /etc/swift

# chown -R swift:swift /etc/swift

The next step is to create the file /etc/swift/swift.conf and add the variable swift_hash_path_suffix. This is a random
string that can be generated by running the following command:

# openssl rand -hex 10

Use the string generated as the swift_hash_path_suffix in the /etc/swift/swift.conf file. The file should look something
like the following:

[swift-hash]

# random unique string used on all nodes and does not change – DO NOT LOSE

swift_hash_path_suffix= b9d1f4a9334b88094a05

The Swift proxy and all Swift storage nodes should have an identical copy of /etc/swift/swift.conf.
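One way to distribute the file, assuming the storage node hostnames used in this example and that the /etc/swift directory already exists on each node:

# scp /etc/swift/swift.conf swift-client1:/etc/swift/swift.conf

# scp /etc/swift/swift.conf swift-client2:/etc/swift/swift.conf

# scp /etc/swift/swift.conf swift-client3:/etc/swift/swift.conf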

Configure the Storage Nodes
Ensure that the following packages are installed: swift-account, swift-container, and swift-object.

Create the /etc/swift directory and assign permissions to swift by running the following commands:

# mkdir -p /etc/swift

# chown -R swift:swift /etc/swift

# chown -R swift:swift /var/run/swift

Copy the /etc/swift/swift.conf file that you created on the Swift proxy node to each Swift storage node.

Since account and container information is stored on local flash disk drives, the flash drives need to be formatted.
This information will be replicated across three separate nodes so it does not need to be stored with RAID. You may
need to install the XFS packages as follows:

# yum install xfsprogs xfsdump

If the device is sdb, run the following commands:

# fdisk /dev/sdb

# umount /dev/sdb1

#/sbin/mkfs.xfs -f -i size=1024 /dev/sdb1

Create the directory that will be used by Swift in /srv/node and change the permissions:

# mkdir -p /srv/node/sdb1

# chown -R swift:swift /srv/node

Add an entry to /etc/fstab:

/dev/sdb1             /srv/node/sdb1     xfs   noatime,nodiratime,nobarrier,logbufs=8 0 0

Mount the file system:

# mount /srv/node/sdb1

The next step is to configure rsync.

Install xinetd for rsync:

# yum install xinetd

Edit the file /etc/rsyncd.conf as follows:

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

#address = 10.80.28.2

[account]

max connections = 2

path = /srv/node

read only = false

lock file = /var/lock/account.lock

[container]

max connections = 4

path = /srv/node

read only = false

lock file = /var/lock/container.lock

[object]

max connections = 8

path = /srv/node

read only = false

lock file = /var/lock/object.lock

Edit the file /etc/xinetd.d/rsync so that rsync is not disabled:

# description: The rsync server used for replication

service rsync

{

    disable            = no

    flags              = IPv6

    socket_type        = stream

    wait               = no

    user             = root

    server           = /usr/bin/rsync

    server_args      = --daemon

    log_on_failure   += USERID

}

Enable and start the xinetd service, which runs rsync:

# /sbin/chkconfig xinetd on

# /sbin/service xinetd start

Create the Swift recon cache directory and set the permissions:

# mkdir -p /var/swift/recon

# chown -R swift:swift /var/swift/recon

Run the following commands to create the directories needed by Swift and assign ownership to the Swift user:

# mkdir -p /etc/swift/object-server

# mkdir -p /etc/swift/container-server

# mkdir -p /etc/swift/account-server

# mkdir -p /var/run/swift

# chown -R swift:swift /etc/swift/object-server

# chown -R swift:swift /etc/swift/container-server

# chown -R swift:swift /etc/swift/account-server

# chown -R swift:swift /var/run/swift

Then create directories in /srv/node to mount the QFS file systems. The first storage node owns cache1 and cache2
so run the following on the first storage node:

# mkdir -p /srv/node/cache1

# mkdir -p /srv/node/cache2

# chown -R swift:swift /srv/node/cache1

# chown -R swift:swift /srv/node/cache2

Confirm that /etc/fstab includes entries for cache1 and cache2 on the first storage node:

[root@swift-client1 examples]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Sep 12 10:44:53 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_swiftclient1-lv_root        /                 ext4    defaults                                1 1
UUID=3a48fa93-cae3-4efd-893e-d94bad59f4a2  /boot             ext4    defaults                                1 2
/dev/mapper/vg_swiftclient1-lv_home        /home             ext4    defaults                                1 2
/dev/mapper/vg_swiftclient1-lv_swap        swap              swap    defaults                                0 0
tmpfs                                      /dev/shm          tmpfs   defaults                                0 0
devpts                                     /dev/pts          devpts  gid=5,mode=620                          0 0
sysfs                                      /sys              sysfs   defaults                                0 0
proc                                       /proc             proc    defaults                                0 0
/dev/sdb1                                  /srv/node/sdb1    xfs     noatime,nodiratime,nobarrier,logbufs=8  0 0
cache1                                     /srv/node/cache1  samfs   shared                                  0 0
cache2                                     /srv/node/cache2  samfs   shared                                  0 0

Then mount the file systems:

# /bin/mount -t samfs -o shared cache1 /srv/node/cache1

# /bin/mount -t samfs -o shared cache2 /srv/node/cache2

Repeat this section for the other two storage nodes using the QFS file systems associated with those nodes.

Configure the Rings on the Proxy Server
Now configure the rings on the proxy server and copy the files generated to all the Swift nodes in the cluster. First
configure the object ring:

# cd /etc/swift

# swift-ring-builder object.builder create 10 1 1

The 10 parameter specifies that 2 to the power of 10 partitions be created. You determine the parameter value by
multiplying the number of file systems by 100 and then rounding up to the nearest power of two. In this case there
are three nodes with two QFS file systems each for a total of six file systems for object data. Then, multiply six by
100, which equals 600. Then, round up to the nearest power of two, or 1,024 (2^10). The next parameter specifies
the number of replica copies, which is one when Swift is used with Oracle HSM. Oracle HSM will manage additional
copies of data. The final parameter indicates not to move a partition more than once in an hour. This setting is not
used for objects with Oracle HSM as you want to avoid rebalancing Swift nodes.
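The partition power can also be sanity checked from the command line (6 is the number of object file systems in this example):

# python -c "import math; print(int(math.ceil(math.log(6*100, 2))))"

10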

Next create the account and container rings on the local file systems. Utilizing flash for the local file system can
increase performance if you anticipate lots of operations hitting the account and container databases. There is one
local file system on each storage node that holds account and container data. Therefore, the first parameter is 9
since the same formula as above is used. If you have more than one device for holding account and container
databases, then the calculation may increase for the number of partitions. There are three copies of this data:

# cd /etc/swift

# swift-ring-builder account.builder create 9 3 1

# swift-ring-builder container.builder create 9 3 1

The next step is to add each file system to the rings. In the following example, the storage node with IP address
10.80.28.8 owns the file systems cache1 and cache2. In a standard Swift implementation, there is a separate
network for replication, and the address is specified after the R. In this example, the same network is used for
replication so the IP address is the same. The final parameter (100) is the weight which indicates how much data
this file system holds relative to the other file systems. With even distribution, the weight is 100 for all file systems:

# swift-ring-builder account.builder add r1z1-10.80.28.8:6002R10.80.28.8:6002/sdb1 100

# swift-ring-builder container.builder add r1z1-10.80.28.8:6001R10.80.28.8:6001/sdb1 100

# swift-ring-builder object.builder add r1z1-10.80.28.8:6000R10.80.28.8:6000/cache1 100

# swift-ring-builder object.builder add r1z1-10.80.28.8:6000R10.80.28.8:6000/cache2 100

Then repeat these commands for the other two storage nodes and their associated file systems.

The final step is to create the ring files. Do this by running the rebalance command:

# /usr/bin/swift-ring-builder account.builder rebalance

# /usr/bin/swift-ring-builder container.builder rebalance

# /usr/bin/swift-ring-builder object.builder rebalance

# chown -R swift:swift /etc/swift/*

You can get a summary of a ring by running the swift-ring-builder command specifying a ring. For example:

# /usr/bin/swift-ring-builder object.builder

NOTE: While Swift allows storage nodes to be added dynamically, this should be avoided when using tape storage.
When Swift rebalances, objects are copied from one storage node to another. This causes objects to be read from
tape and impacts performance. Instead, configure enough Swift storage nodes/QFS clients and QFS file systems to
handle the anticipated total object count.

After running the commands, the following files are created in /etc/swift:
» account.ring.gz, container.ring.gz, object.ring.gz
Copy these files to every storage node in the Swift cluster under the /etc/swift directory, which should be owned by
the Swift user.
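For example, assuming the storage node hostnames used earlier:

# scp /etc/swift/*.ring.gz swift-client1:/etc/swift/

# scp /etc/swift/*.ring.gz swift-client2:/etc/swift/

# scp /etc/swift/*.ring.gz swift-client3:/etc/swift/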

Finally, start the proxy service with the following command on the Swift proxy:

# service openstack-swift-proxy restart

Update Storage Node Configuration Files
The account, container, and object servers on the storage node need to be updated. In this step you update the
configuration files to point to the location of the storage devices, allow the services to bind to any IP address, and
specify the correct port for communication.

Update /etc/swift/account-server.conf with the following:

[DEFAULT]

devices = /srv/node

bind_ip = 0.0.0.0

bind_port = 6002

mount_check = true

user = swift

log_facility = LOG_LOCAL2

workers = auto

[pipeline:main]

pipeline = account-server

[app:account-server]

use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]

Update /etc/swift/container-server.conf with the following:

[DEFAULT]

devices = /srv/node

bind_ip = 0.0.0.0

bind_port = 6001

mount_check = true

user = swift

log_facility = LOG_LOCAL2

workers = auto

[pipeline:main]

pipeline = container-server

[app:container-server]

use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

Update /etc/swift/object-server.conf with the following:

[DEFAULT]

devices = /srv/node

bind_ip = 0.0.0.0

bind_port = 6000

mount_check = true

user = swift

log_facility = LOG_LOCAL2

workers = auto

[pipeline:main]

pipeline = object-server

[app:object-server]

use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]

Start the Swift Services on the Storage Nodes
Start the services on the storage nodes:

# service openstack-swift-account start

# service openstack-swift-account-reaper start

# service openstack-swift-account-replicator start

# service openstack-swift-account-auditor start

# service openstack-swift-container start

# service openstack-swift-container-replicator start

# service openstack-swift-container-updater start

# service openstack-swift-container-auditor start

# service openstack-swift-object start

# service openstack-swift-object-expirer start

# service openstack-swift-object-replicator start

# service openstack-swift-object-updater start

NOTE: Do not start the auditor service on the storage nodes for object data as this causes files to be recalled from
tape and impacts performance. Instead, use the data integrity validation feature of Oracle HSM to confirm that data
is valid.

Restart memcached:

# service memcached restart

You should also run the chkconfig command for each of the services above so that they start on system boot. For example:

# /sbin/chkconfig openstack-swift-object on

When rebooting a Swift storage node, it is recommended that you stop the OpenStack services and unmount the
QFS file systems prior to reboot.
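A sketch of that sequence on the first storage node (stop the remaining replicator, updater, auditor, reaper, and expirer services the same way; file system names are the ones used in this paper):

# service openstack-swift-object stop

# service openstack-swift-container stop

# service openstack-swift-account stop

# umount /srv/node/cache1

# umount /srv/node/cache2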

Saving Swift Configuration Files
For protection, copy the contents of /etc/swift/* to each QFS file system, because the contents of
/etc/swift/swift.conf and the ring builder files must not be lost. If you are using Keystone for authentication, then also copy the contents
of /etc/keystone/* to each QFS file system.
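For example, the following copies the configuration to a backup directory on one of the QFS file systems (the directory name is illustrative); repeat for each file system:

# mkdir -p /srv/node/cache1/swift-config-backup

# cp -rp /etc/swift/* /srv/node/cache1/swift-config-backup/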

Using Swift
Swift installs with a client utility that can be used to do ad hoc testing.

If you are using Keystone for authentication, run the following command on the proxy:

/usr/bin/swift -A http://localhost:5000/v2.0 -U swift-tenant:swift -K swift_pw -V 2.0 list

If you are using TempAuth for authentication, run the following command on the proxy:

/usr/bin/swift -A http://localhost:8080/auth/v1.0/ -U test:tester -K testing stat

You may use the Swift command to create containers and add objects. For more information on the Swift command,
see the Swift Administration Guide:

http://docs.openstack.org/user-guide/content/managing-openstack-object-storage-with-swift-cli.html.
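For example, with the Keystone credentials configured earlier, a container can be created and a file uploaded as an object roughly as follows (the container and file names are illustrative):

/usr/bin/swift -A http://localhost:5000/v2.0 -U swift-tenant:swift -K swift_pw -V 2.0 post mycontainer

/usr/bin/swift -A http://localhost:5000/v2.0 -U swift-tenant:swift -K swift_pw -V 2.0 upload mycontainer /tmp/testfile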

For more information on using Swift, see the Swift API Reference Manual:

http://docs.openstack.org/api/openstack-object-storage/1.0/content/.

Configuring Tape Archiving in Oracle HSM
Below are steps to configure tape library devices using samsetup. It detects tape drives and generates the required
mcf entries.

# samsetup

 *** Main Menu ***

  Please select from one of the following options:

     1) Create and configure a new QFS file system

     2) Modify an existing QFS file system

     3) Disk archiving

     4) Tape archiving

     5) Archive administration

     h) Help with menu options

     q) Quit

  Option: 4

 *** Configure Tape Archiving ***

  This option is used to configure tape libraries.

  Direct-attached libraries are libraries attached through a SCSI or

  Fibre Channel attachment. This utility automatically detects and

  configures direct-attached libraries.

  Do you want to configure direct attached libraries (yes/no) [yes]? yes

 *** Configure Direct-Attached Tape Libraries ***

  This option is used to automatically detect and configure direct-

  attached tape libraries.

Detecting tape libraries...

The given direct-attached libraries have been detected and the mcf will be generated.

/dev/scsi/changer/c5t500104F0008E6D78d0 10 sn LIBRARY1 on

/dev/rmt/10cbn 11 li LIBRARY1 on

/dev/rmt/11cbn 12 li LIBRARY1 on

  Is it okay to continue (yes/no) [yes]? yes

If you need to do any of the following tasks, please see the Oracle HSM Configuration and Administration Guide:

» Set up SNMP alerts
» Configure system logging
» Attach to a library controlled by Oracle's StorageTek Automated Cartridge System Library Software

Configuring the Archiver
You need to configure the /etc/opt/SUNWsamfs/archiver.cmd file, which controls the archiver's behavior. Below
is a sample archiver.cmd file.

In the global directives section, the following are specified:
» Specifies /var/opt/SUNWsamfs/archiver/log for logging. The log file contains information about each file that is
  archived, rearchived, or automatically unarchived.
» Has the archiver scan the file systems every 30 seconds.
» Specifies a buffer size of 256 for the "li" media type. This is multiplied by the li_blksize value for the media type,
  which is specified in the defaults.conf file (2,048 in this case), and the resulting buffer size is used.
The example file then specifies archive sets for each QFS file system. It specifies which files should not be archived
(see notes below). It specifies that files ending in .data in the objects directory should have one copy and be
archived after 1 minute. All other files have one copy and are archived after 10 minutes. It is important to note that
membership definitions local to a file system are chosen before any global definitions.

The file then specifies the media pools for the archive sets. Finally, it sets the copy parameters for each archive set.
It states the following:

» Three tape drives can be used to archive one archive set instead of the default of one tape drive.
» Ten minutes can elapse between the first file in a scan being marked for inclusion in an archive request and the
  start of archiving.
» Once 10 GB of data is accumulated for archiving, then start archiving.
» Once 100,000 files are marked for archiving, then start archiving.
» For all other (non-.data) files, archiving starts after four hours, 1 GB of data, or 100,000 files.
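The copy-parameter and volume sections are not reproduced in the excerpt below, but a minimal sketch of what they might look like, using standard archiver.cmd directives and the archive set names from the cache1 file system in the sample, is:

params
allsets -drives 3
objects1.1 -startage 10m -startsize 10G -startcount 100000
cache1.1 -startage 4h -startsize 1G -startcount 100000
endparams

vsns
objects1.1 li .*
cache1.1 li .*
endvsns

The global directives and per-file-system archive set definitions from the sample file follow: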
#nowait

logfile = /var/opt/SUNWsamfs/archiver/log

interval = 30s

bufsize = li 256

no_archive .

#

# Notes on the following:

#

# + The "*.ts" files are not archived since they will be deleted

# eventually. If a user deletes the object, its ".data" file is

# replaced with a zero-length ".ts" ("ts" for "tombstone").

# This file is a delete marker that will be eventually reaped, but it

# exists to ensure that the delete properly propagates to all replicas

# in the cluster.

#

# + The "*.pkl" files are not archived. These files are used by the

# replicator to identify the objects that need to be replicated. While

# examining test code, if the "*.pkl" files are missing then the replicator

# has to do additional work to rebuild them. Since they can and will be

# rebuilt, there's no need to archive them.

#

# + The "*.lock" files are not archived because they are transient files

# used by the object server to maintain the integrity of the "*.pkl"

# files (I think), in the partition directories used to store the

# objects when there are multiple server workers.

#

# + The "*.db.pending" files are not archived. These files are used to

# cache updates to account and container SQLite databases. The updates

# are saved in these files until they reach a certain size. When the

# size is reached, the pending updates are performed as one transaction.

# These files are volatile; it makes no sense to archive them because

# their pending updates will never be applicable if these files are

# restored.

#

# Tests have shown that overall performance of the Swift cluster is poor

# if the accounts and containers are on Oracle HSM file systems. It is best

# when these are on native SSD devices using XFS.

#

# These files might be applicable to the 'grizzly' release and the

# following entry might have replaced it.

#

# + The "async_pending" files are not archived. ...

#

# + The "tmp" files are not archived. ...

#

fs = cache1

no_archive .                  -name async_pending

no_archive .                  -name tmp

no_archive .                  -name \.lock$

no_archive accounts           -name \.db.pending$

no_archive containers         -name \.db.pending$

no_archive objects            -name \.ts$

no_archive objects            -name hashes\.pkl$

objects1 objects              -name \.data$

     1 1m

cache1 .

     1 10m
