TIBCO ActiveSpaces 4 on Amazon EKS - TIBCO Community

Page created by Patrick Gonzales

TIBCO® ActiveSpaces 4 on Amazon EKS

This document describes how to configure and run TIBCO ActiveSpaces v4 in an Amazon
Elastic Container Service for Kubernetes (Amazon EKS) cluster.

Version 1.0   July 2019   Initial Document

TIBCO Software Inc.
Global Headquarters
3307 Hillview Avenue
Palo Alto, CA 94304
Tel: +1 650-846-1000
Toll Free: 1 800-420-8450
Fax: +1 650-846-1005
www.tibco.com
TIBCO fuels digital business by enabling better decisions and faster, smarter actions through the
TIBCO Connected Intelligence Cloud. From APIs and systems to devices and people, we
interconnect everything, capture data in real time wherever it is, and augment the intelligence of
your business through analytical insights. Thousands of customers around the globe rely on us to
build compelling experiences, energize operations, and propel innovation. Learn how TIBCO
makes digital smarter at www.tibco.com.
Copyright Notice
Copyright © 2019 TIBCO Software Inc. All rights reserved.

Trademarks
TIBCO, the TIBCO logo, TIBCO Enterprise Message Service, TIBCO FTL, TIBCO Rendezvous, and TIBCO
SmartSockets are either registered trademarks or trademarks of TIBCO Software Inc. in the United States
and/or other countries. All other product and company names and marks mentioned in this document are
the property of their respective owners and are mentioned for identification purposes only.

Content Warranty
The information in this document is subject to change without notice. THIS DOCUMENT IS PROVIDED "AS
IS" AND TIBCO MAKES NO WARRANTY, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT
NOT LIMITED TO ALL WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. TIBCO Software Inc. shall not be liable for errors contained herein or for incidental or
consequential damages in connection with the furnishing, performance or use of this material.

For more information, please contact:

TIBCO Software Inc.
3303 Hillview Avenue
Palo Alto, CA 94304
USA

©2019 TIBCO Software Inc. All Rights Reserved.
Table of Contents
1     Overview
1.1   AWS/AS4 Architecture
  1.1.1 Development Architecture
  1.1.2 Production Architecture
1.2   Supported Versions
1.3   Prerequisites
1.4   Prepare Local Environment
1.5   Prepare Preliminary AWS Account and Kubernetes Configuration
  1.5.1 General (Required)
  1.5.2 Configure the Kubernetes Dashboard (Optional)
2     AWS EKS Setup
2.1   Create a New Elastic Container Service for Kubernetes (EKS)
2.2   Configuring AWS ECR Container Registry
2.3   Tag and Push the Docker Images to ECR
3     Configuring ActiveSpaces 4 in EKS
  3.1.1 Configuring AS4 for Kubernetes
  3.1.2 Stopping or Deleting the AS4 processes
4     Accessing and Testing AS4
4.1   Accessing AS4 internally
4.2   Accessing AS4 Externally
4.3   Monitoring ActiveSpaces

Table of Figures

Figure 1 - eksctl inputs
Figure 2 - EKS cluster creation using eksctl
Figure 3 - Kubectl results
Figure 4 - Kubectl get nodes
Figure 5 - Create ECR Registries
Figure 6 - Get ECR login
Figure 7 - Tag and Push AS4 Docker images
Figure 8 - as4-lb-small.yaml example
Figure 9 - Example of AS4 storage and LB pods running in EKS
Figure 10 - Running AS4 environment
Figure 11 - To Stop and Start the AS4 Statefulsets
Figure 12 - Access tibdg example
Figure 13 - Access Operations app example
Figure 14 - External access with tibdg example
Figure 15 - Access AS4 externally from the Operations application
Figure 16 - FTL monitor-start example
Figure 17 - Importing the AS4 dashboards
Figure 18 - ActiveSpaces Grid Activity dashboard

1 Overview

This document outlines how to configure TIBCO ActiveSpaces® v4 in a Kubernetes cluster
on AWS. The Kubernetes cluster will be built using Amazon’s Elastic Container Service for
Kubernetes (EKS).
Running TIBCO AS4 on Amazon Web Services (AWS) involves:
   • Configuring the Amazon Elastic Container Service for Kubernetes (EKS) for TIBCO
      ActiveSpaces.
   • Configuring the Amazon Elastic Container Registry (ECR) for the Docker® image registry,
      and hosting the AS4 and FTL 5.4 Docker images in ECR.
   • Configuring and creating Kubernetes containers based on the Docker images for the
      individual components.

1.1      AWS/AS4 Architecture

Using this document, two different architectures can be produced: a smaller configuration suitable
for development/testing, or a larger fault-tolerant configuration suitable for production. Essentially,
they are the same, with the larger environment consisting of a fault-tolerant Realm Server and
additional copysets, nodes, and proxies.
 1.1.1    Development Architecture
The development architecture created will contain:
   • One (1) VPC
   • Three (3) AWS Availability Zones (AZ) in one AWS region.
   • Three (3) private and three (3) public subnets which are provided with internet access
      through a NAT gateway.
   • EKS cluster will be spread across the three AZs.
   • EKS cluster will consist of eight (8) nodes with a minimum of 16 GB of RAM and 4 CPUs.
   • EBS storage for all configuration and data. io1 storage for the AS4 nodes, and gp2 storage
      for all other services
   • ECR Repositories for all containers
   • ELB (Classic - Kubernetes) for external access to the Realm Services and the AS4 Proxies.
   • One (1) AS4 Copyset
   • Two (2) AS4 Nodes (1 – replica set)
   • Three (3) AS4 Statekeepers
   • Two (2) AS4 Proxies
   • One (1) FTL Realm Server
 1.1.2    Production Architecture
The production architecture created will contain:
   • One (1) VPC

•   Three (3) AWS Availability Zones (AZ) in one AWS region. Note: Additional AZs in the
          same region can be used. Verify that resources are available in the Availability zones
          selected.
      •   Three (3) private and three (3) public subnets which are provided with internet access
          through a NAT gateway. More private/public subnets can be used, and should match the
          number of availability zones used.
      •   EKS cluster will be spread across the three (or more) AZs.
      •   EKS cluster will consist of ten (10) nodes with a minimum of 16 GB of RAM and 4 CPUs.
          Larger cluster nodes can be used.
      •   EBS storage for all configuration and data. io1 storage for the AS4 nodes, and gp2 storage
          for all other services
      •   ECR Repositories for all containers
      •   ELB (Classic - Kubernetes) for external access to the Realm Services and the AS4 Proxies.
      •   Two (2) AS4 Copysets
      •   Four (4) AS4 Nodes (2 – replica sets)
      •   Three (3) AS4 Statekeepers
      •   Four (4) AS4 Proxies
      •   Two (2) FTL Realm Servers – (FTL 5.4 Primary and Secondary configuration)

1.2       Supported Versions

The steps described in this document are supported for the following versions of the products and
components involved:
      •   TIBCO ActiveSpaces 4.1
      •   TIBCO FTL 5.4.1
      •   Docker Community/Enterprise Edition should be the most recent version (at least
          18.09.2), to address recent security vulnerabilities
      •   Amazon Linux 2 for EKS
      •   Kubernetes 1.12 or newer

1.3       Prerequisites

The reader of this document must be familiar with:
      •   Docker concepts
      •   Amazon AWS console, AWS CLI, and eksctl
      •   Kubernetes installation and administration
      •   Kubernetes CLI, kubectl
      •   TIBCO AS4 installation and configuration
      •   All necessary downloads discussed in the next section
      •   The appropriate TIBCO Messaging license(s).

1.4       Prepare Local Environment
General:
The following infrastructure should already be in place:
      •   A Linux or macOS machine equipped for building Docker images
      •   The following software must already be downloaded to the Linux or macOS machine
          equipped for building Docker images.
          Note: All software must be for Linux!
      •   TIBCO ActiveSpaces v4.1 Docker images, which are part of the TIBCO AS4 installation
          package. Either the Enterprise Edition or the Community Edition can be used. Download
          the EE from edelivery.tibco.com, and the CE from
          https://www.tibco.com/products/messaging-event-processing
      •   The tibas4_eks_files.zip file, which contains all of the necessary Kubernetes build files.
          Download it from https://community.tibco.com/wiki/tibcor-messaging-article-links-quick-access
      •   Create a directory, place tibas4_eks_files.zip in the directory.
      •   Unzip tibas4_eks_files.zip.
      •   Unzip the AS installation package, and copy all of the as-*.xz and ftl-*.xz Docker images to
          the tibas4_eks_files/docker directory.
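The local staging steps above can be sketched as a short script. The directory and archive names are the ones given in the text; the unzip and copy commands are left as comments because the download locations vary.

```shell
# Stage the Kubernetes build files locally (names from the steps above).
set -e
STAGE=tibas4_eks_files
mkdir -p "$STAGE/docker"
# With the downloads in place, the remaining steps would be, for example:
#   unzip tibas4_eks_files.zip
#   cp <AS4-install-dir>/*.dockerimage.xz "$STAGE/docker/"
echo "staging directory ready: $STAGE/docker"
```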

1.5       Prepare Preliminary AWS Account and Kubernetes Configuration

Use the following to prepare the preliminary environment to install ActiveSpaces 4 on EKS.
 1.5.1    General (Required)
In general, follow the Getting Started EKS User Guide to create a new Kubernetes cluster.
      •   An AWS account is required. If necessary, create one at http://aws.amazon.com and follow
          the on-screen instructions.
      •   Use the region selector in the navigation bar to choose the AWS Region to deploy AS4 in
          EKS. The documentation will refer to us-east-1.
          Note: Currently, not all AWS regions and Availability Zones (AZ) support EKS.
          As of June 2019, the following AWS regions support EKS:

          US                                EU                         AP

          N. Virginia (us-east-1)           Ireland (eu-west-1)        Singapore (ap-southeast-1)
          Ohio (us-east-2)                  Frankfurt (eu-central-1)   Tokyo (ap-northeast-1)
          Oregon (us-west-2)                Stockholm (eu-north-1)     Sydney (ap-southeast-2)
                                            London (eu-west-2)         Seoul (ap-northeast-2)
                                            Paris (eu-west-3)          Mumbai (ap-south-1)
      •   Install and configure Amazon AWS CLI on the workstation used.

          Note: After creating the AWS account, ensure the AWS credential and config files are
          created, and contain the appropriate AWS key, secret key, profile, and region.
          When the default IAM role is not used, export the following environment variable:
          export AWS_SDK_LOAD_CONFIG=1
     •    Install Docker on the workstation to build the component images.
     •    Install the kubectl command-line tool to manage and deploy applications to Kubernetes in
          AWS from a workstation.
     •    Install the eksctl command line tool to configure AWS EKS. Note: if configuring EKS
          through the AWS console, this step is not necessary.
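The tooling requirements above can be verified with a small preflight check; the tool names and the environment variable are taken from the text.

```shell
# Report which of the required CLIs are on the PATH.
for tool in aws docker kubectl eksctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
# Set when the default IAM role is not used (see the note above).
export AWS_SDK_LOAD_CONFIG=1
echo "AWS_SDK_LOAD_CONFIG=$AWS_SDK_LOAD_CONFIG"
```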
 1.5.2    Configure the Kubernetes Dashboard (Optional)
The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows
users to manage applications running in the cluster and troubleshoot them.
Note: The following setup assumes the dashboard will be run on a local workstation.
Use the following to set up the dashboard:
   • Issue the following command to deploy the Kubernetes Dashboard:
         kubectl apply -f
         https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src
         /deploy/recommended/kubernetes-dashboard.yaml
    •    Run kubectl proxy & to run the proxy in the background, or open a second terminal
         shell and run kubectl proxy.
    •    Open a web browser and access: http://localhost:8001/api/v1/namespaces/kube-
         system/services/https:kubernetes-dashboard:/proxy/.
    •    Use the Authentication link to set up a Service Account Token to access the Dashboard.
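The dashboard steps above, collected in one place. The commands are printed rather than executed here, since they assume a working cluster context; the manifest URL is the one given in the text.

```shell
# Print the dashboard setup sequence as a dry run.
DASHBOARD_YAML=https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
cat <<EOF
kubectl apply -f $DASHBOARD_YAML
kubectl proxy &
# then browse to:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
EOF
```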

2 AWS EKS Setup

2.1       Create a New Elastic Container Service for Kubernetes (EKS)

A new Kubernetes cluster must be created in EKS. Only a few steps are necessary to create the
Kubernetes cluster in EKS. Follow the Getting Started EKS User Guide to create a new
Kubernetes cluster. The cluster can be created via the AWS console or with eksctl.
Note: Creating the EKS cluster via eksctl is highly recommended. The following will be based on
using eksctl.

Note: If the AWS account has multiple roles or there are multiple clusters, it is recommended that the
AWS CLI be used with --profile=XXX to ensure the correct permissions/roles are used.

      •   Install eksctl following the AWS documentation, if not already installed.
      •   Create the new EKS cluster. The following example shows the minimum inputs. The
          names, number of nodes, zones, EC2 instance size can all be adjusted to meet the
          requirements of the environment. It will take several minutes for the cluster to be created.
          Note: Read through the eksctl documentation. There are other parameters which can be
          used, such as using an Ubuntu kernel, rather than the default Amazon2 – Linux kernel.

                          eksctl create cluster \
                          -n as4eks \ (1)
                          -r us-east-1 \ (2)
                          --zones us-east-1a,us-east-1b,us-east-1c \ (3)
                          --nodegroup-name as4-workers \ (4)
                          -t t3.xlarge \ (5)
                          --nodes 8 \ (6)
                          --nodes-min 4 \ (7)
                          --nodes-max 10 \ (8)
                          --node-volume-size 20 \ (9)
                        --vpc-cidr x.x.x.x/x (10)
                                                 Figure 1 - eksctl inputs

                 (1): The name of the EKS Cluster
                 (2): The AWS region where the EKS cluster will be built
                 (3): The AWS Zones. This can be increased to fit requirements. Verify the AZ has
                 the resources first.
                 (4): The Node Group name for the nodes (workers)
                 (5): The EC2 instance size. A larger instance can be used to fit the requirements.
                 Note: If the size of the tibdgnode containers is increased, it is highly recommended
                 to increase the EC2 instance size.
                 (6): The number of cluster nodes. Eight (8) for development, and ten (10) for
                 production.

                 (7): The minimum number of nodes. Four (4) for development, and eight (8) for
                 production.
                 (8): The maximum number of nodes. Ten (10) for development, and twelve (12)
                 for production. Can be increased for auto scaling in the production environment.
                 (9): The volume size (GB) of each node. Not used for AS4 persisted data. Can be
                 small.
                 (10): The CIDR of the VPC to use. If not included, eksctl will choose.
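For copy-paste convenience, the Figure 1 inputs can be assembled into a single command. All values are the document's examples; the --vpc-cidr flag is omitted here so eksctl chooses one, per note (10).

```shell
# Assemble the eksctl command from the Figure 1 inputs.
CMD="eksctl create cluster -n as4eks -r us-east-1 \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --nodegroup-name as4-workers -t t3.xlarge \
  --nodes 8 --nodes-min 4 --nodes-max 10 --node-volume-size 20"
echo "$CMD"
```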

                                      Figure 2 - EKS cluster creation using eksctl

             o To check on the status of the creation of the cluster, watch the eksctl output; eksctl
               reports progress until the cluster is ready.

             o Test the configuration before continuing. Use kubectl get svc. The results should be
               similar to the following. The Cluster-IP can be different:

                                                 Figure 3 - Kubectl results

                 If a Kubernetes cluster is not shown, resolve any issues before continuing. If there
                 are issues, it is usually a Kubernetes/AWS permissions/IAM role issue.
             o To verify all of the nodes have been created, use kubectl get nodes. All ten nodes
               (production) should be Ready.

                                             Figure 4 - Kubectl get nodes
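The verification steps above (Figures 2-4) amount to three commands. They require a configured cluster, so they are listed as a dry run here; `eksctl get cluster` is an assumption for the status check, since the original screenshot is not reproduced.

```shell
# Post-creation checks mirroring Figures 2-4 (dry run).
CHECKS='eksctl get cluster --region us-east-1
kubectl get svc
kubectl get nodes'
echo "$CHECKS"
```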

2.2       Configuring AWS ECR Container Registry

The AWS Elastic Container Registry (ECR) must be configured to host the TIBCO FTL/AS4
Docker images. Use this section to create the necessary ECR registries. Seven (7) Docker images
need to be tagged and pushed to ECR. All images can be in the same registry, or separated into
individual registries. This document will create one registry for AS4, and one registry for FTL.
Load the TIBCO AS4/FTL Docker images into the local Docker registry.
      •   Change directory to the directory where the AS4/FTL Docker images are located.
      •   Load the Docker images into the local registry:
                docker load -i as-tibdg-4.1.0.dockerimage.xz
                docker load -i as-tibdgnode-4.1.0.dockerimage.xz
                docker load -i as-tibdgkeeper-4.1.0.dockerimage.xz
                docker load -i as-tibdgproxy-4.1.0.dockerimage.xz
                docker load -i as-tibdgadmind-4.1.0.dockerimage.xz
                docker load -i ftl-tibrealmserver-5.4.1.dockerimage.xz
                docker load -i as-operations-4.1.0.dockerimage.xz
New ECR registries must be created to host the AS4/FTL Docker images.
      •   Create two new ECR registries named AS41 and FTL54 in AWS. The registries can be
          created via the AWS CLI or via the console. Please note the URL of your ECR repository
          (e.g. 123456789012.dkr.ecr.us-east-1.amazonaws.com).
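A possible CLI equivalent for creating the two registries. ECR repository names must be lowercase, so as41 and ftl54 are used; the aws calls are echoed rather than executed.

```shell
# Create the two ECR repositories via the CLI (echoed as a dry run).
REGION=us-east-1
for repo in as41 ftl54; do
  echo aws ecr create-repository --repository-name "$repo" --region "$REGION"
done
```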

                                             Figure 5 - Create ECR Registries

      •   Retrieve the login command used to authenticate your Docker client with the AWS
          registry. Adjust the AWS region, and use the access keys created in the previous step.

                                                 Figure 6 - Get ECR login

2.3       Tag and Push the Docker Images to ECR

Once the Docker images are loaded, the images can be tagged and pushed to ECR.
      •   Tag and push each of the Docker images to the ECR repository. Use the following as
          an example. Replace 123456789012 with the appropriate repository account ID. Do the
          same for each of the seven (7) images.

                                     Figure 7 - Tag and Push AS4 Docker images
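A possible tag-and-push loop for the seven images. The account ID is the document's placeholder, and the repository:tag scheme here is one choice among several; the docker calls are echoed rather than executed.

```shell
# Tag and push all seven images to the two ECR repositories (dry run).
ECR=123456789012.dkr.ecr.us-east-1.amazonaws.com
for img in as-tibdg as-tibdgnode as-tibdgkeeper as-tibdgproxy as-tibdgadmind as-operations; do
  echo docker tag "$img:4.1.0" "$ECR/as41:$img-4.1.0"
  echo docker push "$ECR/as41:$img-4.1.0"
done
echo docker tag ftl-tibrealmserver:5.4.1 "$ECR/ftl54:ftl-tibrealmserver-5.4.1"
echo docker push "$ECR/ftl54:ftl-tibrealmserver-5.4.1"
```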

      •   Push the AS4/FTL Docker images to ECR. Use the following as an example. Replace
          123456789012 with the appropriate repository account ID.

3 Configuring ActiveSpaces 4 in EKS

After the FTL and AS4 Docker images are pushed to ECR, Kubernetes can be configured to run
the AS4/FTL containers.

 3.1.1    Configuring AS4 for Kubernetes
There are different templates used depending on whether the development or production
environment is being configured. In the as4_eks_files/kubernetes directory, there are five templates:
two for the development environment (small), two for the production environment (large), and a fifth
template, as4-storage.yaml, which is used by both to create the required persisted storage. The as4-
eks-small.yaml and the as4-eks-large.yaml are used to create the bulk of the environment. The as4-
lb-small.yaml and the as4-lb-large.yaml will create the Kubernetes load balancers for the
environment. The only difference between the load balancer files is the number of Realm Server and
proxy load balancers created. The main configuration file and the LB configuration file will need
minor changes. The storage configuration file requires no modifications.
 3.1.1.1 Load Balancer Configuration
Following is the as4_eks_files/kubernetes/as4-lb-small.yaml used to configure the load balancers.
There are three load balancers created for the development environment, while six are created in
the production environment. The change required is exactly the same for both. The trusted IP
range will determine what IP addresses can connect to the load balancer.
NOTE: The load balancer file must be configured and applied first. No other changes should be
made! If port changes are made, the port changes must also be made in the as4-eks yaml file.

apiVersion: v1
kind: Service
metadata:
  labels:
    name: realmserver-lb
  name: realmserver-lb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: realm
    nodePort: 30080
    port: 30080
    targetPort: 30080
    protocol: TCP
  - name: admind
    nodePort: 30081
    port: 30081
    targetPort: 30081
    protocol: TCP
  - name: ftl
    nodePort: 30083
    port: 30083
    targetPort: 30083
    protocol: TCP
  selector:
      com.tibco.datagrid.service: realmserver
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerSourceRanges:
  - <trusted IP range> (1)
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tibdg
    name: proxy-0-lb
  name: proxy-0-lb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: proxy-0
    nodePort: 30086
    port: 30086
    targetPort: 8555
    protocol: TCP
  selector:
      com.tibco.datagrid.service: tibdgproxy
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerSourceRanges:
  - <trusted IP range> (1)
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tibdg
    name: proxy-1-lb
  name: proxy-1-lb
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: proxy-1
    nodePort: 30087
    port: 30087
    targetPort: 8555
    protocol: TCP
  selector:
      com.tibco.datagrid.service: tibdgproxy
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerSourceRanges:
  - <trusted IP range> (1)
status:
  loadBalancer: {}

                                   Figure 8 – as4-lb-small.yaml Example

(1): The trusted IP address range to connect to the load balancer.
 3.1.1.2 Applying the LB and Storage configurations in Kubernetes
Once the AS4 LB yaml file has been updated, this file, along with the storage yaml file, can be
applied to EKS using kubectl.
Use kubectl apply -f as4-storage.yaml,as4-lb-small.yaml to apply both files. Use kubectl get
storageclass,svc to verify the storage classes and load balancers are available. Do not continue until
the load balancers have been assigned an External IP address similar to the following example.
These IP addresses are required for the main AS4 yaml file.
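The apply-and-verify sequence above can be sketched as follows. The jsonpath expression for reading the load balancer's external hostname is an assumption; the commands are printed as a dry run because they require a cluster context.

```shell
# Apply the storage and LB files, then poll for an external address (dry run).
APPLY='kubectl apply -f as4-storage.yaml -f as4-lb-small.yaml'
WAIT="kubectl get svc realmserver-lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'"
echo "$APPLY"
echo "repeat until non-empty: $WAIT"
```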

                            Figure 9 - Example of AS4 storage and LB pods running in EKS

 3.1.1.3 AS4 configuration
Once there are load balancer IP addresses for the proxies, the main configuration file as4-eks-
small.yaml can be configured. Once again, the changes to the as4-eks-large.yaml file are the same,
just with additional locations for modification.
The only changes required are to the proxy_client_listen_external_host for proxy-0 and proxy-1 and
the ECR image for each container.

---
# This is used to populate the grid configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tibdg-conf
  labels:
    app: tibdg
data:
  conf.tibdg: |
    grid create copyset_size=2 proxy_client_listen_port=8555
    copyset create cs-01
    node create --copyset cs-01 --dir /data/cs-01-node-0 cs-01-node-0
    node create --copyset cs-01 --dir /data/cs-01-node-1 cs-01-node-1
    keeper create --dir /data/keeper-0 keeper-0
    keeper create --dir /data/keeper-1 keeper-1
    keeper create --dir /data/keeper-2 keeper-2
    proxy create proxy_mirroring_listen_port=8556 proxy_client_listen_external_host= proxy_client_listen_external_port=30086 proxy-0 (1)
    proxy create proxy_mirroring_listen_port=8557 proxy_client_listen_external_host= proxy_client_listen_external_port=30087 proxy-1 (1)
    table create t1 key long
    column create t1 value string

...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: realmserver
  labels:
    com.tibco.datagrid.service: realmserver
    app: tibdg
spec:
  serviceName: realmserver
  replicas: 1
  selector:
    matchLabels:
      com.tibco.datagrid.service: realmserver
  template:
    metadata:
      name: realmserver
      labels:
        com.tibco.datagrid.service: realmserver
        app: tibdg
    spec:
      containers:
        - name: realmserver
          image: <ECR image URL> (2)
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 1
              memory: 4Gi
            limits:
               cpu: 2
                memory: 8Gi
          args:
             - '--ftl'
             - '*:30083'
             - '--gui'
             - '*:30085'
             - '--http'
             - '*:30080'
           volumeMounts:
             - mountPath: /data
               name: realmserver-data
         - name: tibdgadmind
           image: <ECR image URL> (2)
           imagePullPolicy: Always
           resources:
             requests:
               cpu: 1
               memory: 4Gi
             limits:
                cpu: 2
                memory: 8Gi
           args: [ '-r', 'http://realmserver:30080', '-l', ':30081' ]
   volumeClaimTemplates:
   - metadata:
       name: realmserver-data
     spec:
       accessModes:
         - ReadWriteOnce
       storageClassName: as4-gp2
       resources:
         requests:
           storage: 5Gi

(1): The proxy_client_listen_external_host. For the development (small) configuration, this is proxy-0
and proxy-1. For the large configuration, this will also include proxy-2 and proxy-3. Insert the value of the
External IP for the matching load balancer. An example would be a20984e5a96b8e881kd4k33f62833-
1476430161.us-east-1.elb.amazonaws.com.
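One way to splice a load balancer hostname into the yaml is with sed. The example below runs against a one-line sample file; ELB0 is a stand-in value, not a real ELB address, and the substitution pattern is an assumption.

```shell
# Demonstrate the host substitution on a sample line.
printf 'proxy create proxy_client_listen_external_host= proxy-0\n' > /tmp/as4-sample.txt
ELB0=my-proxy-0-lb.us-east-1.elb.amazonaws.com
sed -i.bak "s|external_host=|external_host=$ELB0|" /tmp/as4-sample.txt
cat /tmp/as4-sample.txt
# -> proxy create proxy_client_listen_external_host=my-proxy-0-lb.us-east-1.elb.amazonaws.com proxy-0
```

The same pattern applied to as4-eks-small.yaml would fill both proxy entries; sed -i.bak keeps a backup of the original file.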

(2): The name and location of the Elastic Container Registry (ECR) where the FTL/AS4 Docker
images are located. Ensure the proper permissions are set. The image tag may be something
other than latest, depending on how it was tagged in Docker.

Note: If port or other changes are made, the port changes must also be made in the as4-lb yaml
file. It is not recommended to make other changes unless you are very familiar with Kubernetes.

After making the necessary modifications, use kubectl apply -f as4-eks-small.yaml to apply the
configuration to Kubernetes.

Note: This will take a few minutes, and there will be some pod restarts. This is normal. Use
kubectl get pods to get the status of the pods. Wait until all pods are running, and the tibdgconfig
pod is completed, before continuing.

                                        Figure 10 - Running AS4 environment

 3.1.2    Stopping or Deleting the AS4 processes
To stop all of the processes without deleting them, use the kubectl scale operation to set the
number of replicas to 0.
For example:
> kubectl scale --replicas=0 statefulset proxy,keeper,cs-01-node,realmserver
To start the processes again, set the number of replicas back to their original values: node is two,
keeper is three, proxy is two, and realmserver is one. For example:
> kubectl scale --replicas=3 statefulset keeper
                                  Figure 11 - To Stop and Start the AS4 Statefulsets
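Restoring all four statefulsets can be wrapped in a small helper. This is an illustrative sketch, not part of the product; the KUBECTL override simply allows the generated commands to be previewed without touching a cluster.

```shell
#!/bin/sh
# Sketch: restore the original replica counts after a scale-to-zero stop.
# Counts per the text above: node=2, keeper=3, proxy=2, realmserver=1.
: "${KUBECTL:=kubectl}"

restore_replicas() {
  for pair in cs-01-node:2 keeper:3 proxy:2 realmserver:1; do
    $KUBECTL scale --replicas="${pair#*:}" statefulset "${pair%:*}"
  done
}

# Preview the commands without cluster access:
# KUBECTL='echo kubectl' restore_replicas
```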

To delete all of the statefulsets and services entirely, use the kubectl delete operation:
kubectl delete -f as4-eks-small.yaml,as4-lb-small.yaml,as4-storage.yaml

All of the corresponding pods, statefulsets, storage classes, and LB services will be deleted.
The PVCs and PVs will not be deleted, nor will the corresponding data. To delete the data, PVs,
and PVCs, use the following:
kubectl delete pvc,pv --all
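The two delete steps can be combined into one helper. This is a sketch assuming the yaml file names used earlier in this guide; the KUBECTL override is only for previewing the commands.

```shell
#!/bin/sh
# Sketch: full AS4 teardown. The pvc/pv line also destroys the persisted
# grid data; comment it out to keep the volumes.
: "${KUBECTL:=kubectl}"

teardown_as4() {
  $KUBECTL delete -f as4-eks-small.yaml,as4-lb-small.yaml,as4-storage.yaml
  $KUBECTL delete pvc,pv --all
}

# Preview without touching the cluster:
# KUBECTL='echo kubectl' teardown_as4
```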

4 Accessing and Testing AS4

This section will outline testing ActiveSpaces 4 running in EKS. Access can be tested internally
and externally.

4.1     Accessing AS4 internally

Testing access internally can be done by running tibdg and the operations sample
application.
The External IP address of the realmserver-lb is all that is needed. Use:
docker run --rm -it as-tibdg:4.1.0 -r http://<realmserver-lb-external-ip>:30080 status

Substitute your Docker registry location for as-tibdg:4.1.0, and the Realm Server LB
external address for <realmserver-lb-external-ip>.

                                          Figure 12 - Access Tibdg example
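The internal check can be parameterized so the registry and LB address are set in one place. This is a sketch; REGISTRY and REALM_HOST defaults are placeholders to be replaced with your ECR registry and the realmserver-lb External IP, and the DOCKER override only lets the command be previewed.

```shell
#!/bin/sh
# Sketch: internal grid status check with the two site-specific values
# factored out. Both defaults below are placeholders.
: "${DOCKER:=docker}"
REGISTRY="${REGISTRY:-<your-ecr-registry>}"
REALM_HOST="${REALM_HOST:-<realmserver-lb-external-ip>}"

check_grid_status() {
  $DOCKER run --rm -it "$REGISTRY/as-tibdg:4.1.0" \
    -r "http://$REALM_HOST:30080" status
}
```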

To test with the operations app, use the following:
docker run --rm -it as-operations:4.1.0 -r http://<realmserver-lb-external-ip>:30080

Figure 13 - Access Operations app example

4.2     Accessing AS4 Externally

In most instances, ActiveSpaces will be accessed from an external site. For this test,
ActiveSpaces 4 must be installed on a workstation. You will still need the External IP address for
the Realm Server LB.

To test with tibdg:
   • Change directory to /opt/tibco/as/4.1/bin
   • Run ./tibdg -r http://<realmserver-lb-external-ip>:30080 status. You should see
        something similar to the following example.

Figure 14 - External Access with Tibdg example

Testing with the operations application is similar:
   • Change directory to /opt/tibco/as/4.1/samples/bin
   • Run ./operations -r http://<realmserver-lb-external-ip>:30080. You should get the
       operations menu. Feel free to add/get/delete data. This will verify that AS4 is active and
       available from an external source.

                          Figure 15 - Access AS4 externally from the Operations application

4.3       Monitoring ActiveSpaces

AS4 can be monitored from an external location using the FTL 5.4/AS 4.1 monitoring utilities.
You will need the External IP address for the Realm Server LB.
Note: The Enterprise edition of FTL 5.4 and AS 4.1 is required for the monitoring components.
      •   On the workstation that was used to build the EKS environment, install both FTL 5.4 and
          ActiveSpaces 4.1, following their respective installation guides.
      •   Once both products are installed, change directory to /opt/tibco/ftl/5.4/monitoring
      •   Run ./monitor-start.py -r http://<realmserver-lb-external-ip>:30080. You should get
          output similar to the following example:

Figure 16 - FTL monitor-start example

    •   Change directory to /opt/tibco/as/4.1/monitor/scripts, and run ./import-activespaces-
        dashboards. This imports the AS4 dashboards into the FTL monitor and only has to be
        done once. The output should be similar to the following example:

                                      Figure 17 - Importing the AS4 dashboards

    •   Using a web browser, go to http://<monitor-host>:3000
    •   Log in using admin/admin. The admin password can be changed, and the monitor can be
        secured; see the FTL 5.4 and ActiveSpaces 4.1 documentation for details.
    •   Click Home to get the Dashboards, and then click the ActiveSpaces Grid Activity
        dashboard. You should get something similar to the following example:

Figure 18 - ActiveSpaces Grid Activity Dashboard

    •   Feel free to use the operations sample application to enter data into the grid, and see the
        rates change.
    •   Use monitor-stop.py to stop the monitoring software.
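The monitoring steps above can be collected into one sketch. Paths assume the default /opt/tibco install locations; the monitor_url helper and the REALM_HOST/MONITOR_HOST variables are illustrative, not part of the product.

```shell
#!/bin/sh
# Sketch of the monitoring workflow. REALM_HOST is a placeholder for the
# Realm Server LB External IP; MONITOR_HOST is wherever monitor-start.py runs.
REALM_HOST="${REALM_HOST:-<realmserver-lb-external-ip>}"
MONITOR_HOST="${MONITOR_HOST:-localhost}"

# Build the browser URL for the monitor UI (port 3000, per the text above).
monitor_url() { echo "http://$1:3000"; }

# Uncomment to run against a live environment:
# (cd /opt/tibco/ftl/5.4/monitoring && ./monitor-start.py -r "http://$REALM_HOST:30080")
# (cd /opt/tibco/as/4.1/monitor/scripts && ./import-activespaces-dashboards)  # once only
# echo "Browse to $(monitor_url "$MONITOR_HOST") and log in as admin/admin"
```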
