Table of Contents

Generated: 04/21/2025
Order Number: XXXXX
Version: Stable 2025.04
Release: 20250421.1745214394396

This page contains all the README files for the SAS Viya platform. When you purchase the SAS Viya platform, you receive a subset of README files specific to your order and cadence.

SAS strongly recommends that you refer to that subset when deploying the SAS Viya platform. For more information, see SAS Viya Platform Operations Guide.

All of the information in the READMEs and in the SAS Viya Platform Operations Guide assumes that you have met all System Requirements for the SAS Viya platform.

If you have any feedback on the contents of these README files, please email the SAS Documentation Feedback team.

Kubernetes Tools

SAS Viya Platform Deployment Operator

Mirror Registry

sitedefault.yaml File

CAS

SAS Programming Environment

SAS Infrastructure Data Server

Messaging

Redis

Open-Source Configuration

High Availability and Scaling

Security

Auditing

Migration

Backup and Restore

Update Checker

Product-Specific Instructions

Ingress Configuration

Inventory Collector

Model Publish service

OpenSearch

Process Orchestration

Risk Reporting Framework Core Service

SAS Allowance for Credit Loss

SAS Asset and Liability Management

SAS Business Orchestration Services

SAS Clinical Acceleration Repository

SAS Cloud Data Exchange

SAS Common Planning Service

SAS Compute Server

SAS Compute Service

SAS Configurator for Open Source

SAS Data Catalog

SAS Data Quality

SAS Data Quality for Payment Integrity Health Care

SAS Detection Architecture

SAS Dynamic Actuarial Modeling

SAS Event Stream Processing

SAS Expected Credit Loss

SAS Image Staging

SAS Insurance Capital Management

SAS Insurance Contract Valuation Foundation

SAS Integrated Regulatory Reporting

SAS Launcher Service

SAS Law Enforcement Intelligence

SAS Micro Analytic Service

SAS Model Repository Service

SAS Model Risk Management

SAS RFC Solution Configuration

SAS Real-Time Watchlist Screening

SAS Regulatory Capital Management

SAS Risk Cirrus Builder Microservice

SAS Risk Cirrus Core

SAS Risk Cirrus KRM Service

SAS Risk Cirrus Objects Microservice

SAS Risk Engine on Viya

SAS Risk Factor Manager

SAS Risk Modeling

SAS Startup Sequencer

SAS Stress Testing

SAS Viya File Service

SAS Workload Orchestrator Service

SAS with SingleStore

SAS/ACCESS

SAS/CONNECT Spawner

Uncategorized

Using Kubernetes Tools from the sas-orchestration Image

Overview

The sas-orchestration image includes several tools that help deploy and manage the software. It includes a lifecycle command that can run various lifecycle operations as well as the recommended versions of both kustomize and kubectl. These latter tools may be used with the --entrypoint option that is available on both Docker and Podman container runtime CLIs.

Note: The examples use Docker, but the Podman container engine can also be used.

Note: All examples below are auto-generated based on your order.

Prerequisites

To run the sas-orchestration image, Docker must be installed. Pull the sas-orchestration image:

docker pull cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.141.0-20250403.1743683531608

Replace ‘cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.141.0-20250403.1743683531608’ with a local tag for ease of use in the examples that will follow:

docker tag cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.141.0-20250403.1743683531608 sas-orchestration

Examples

The examples that follow assume:

lifecycle

The lifecycle command executes deployment-wide operations over the assets deployed from an order. See the README file at $deploy/sas-bases/examples/kubernetes-tools/README.md (for Markdown) or $deploy/sas-bases/docs/using_kubernetes_tools_from_the_sas-orchestration_image.htm (for HTML) for lifecycle operation documentation.

Docker uses the following options:

Additional Lifecycle command documentation

lifecycle list

The list sub-command displays the available operations of a deployment

lifecycle list example
cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -v /home/user/kubernetes:/kubernetes \
  -e "KUBECONFIG=/kubernetes/config" \
  -w /cwd \
  sas-orchestration \
  lifecycle list --namespace {{ NAME-OF-NAMESPACE }}

lifecycle run

The run sub-command runs a given operation. Arguments before -- indicate the operation to run and how lifecycle should locate the operation’s definition. Arguments after -- apply to the operation itself, and may vary between operations.

lifecycle run example
cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -v /home/user/kubernetes:/kubernetes \
  -e "KUBECONFIG=/kubernetes/config" \
  sas-orchestration \
  lifecycle run \
    --operation assess \
    --deployment-dir /cwd \
    -- \
    --manifest /cwd/site.yaml \
    --namespace {{ NAME-OF-NAMESPACE }}

As indicated in the example, the run sub-command needs an operation (--operation) and the location of your assets (--deployment-dir). The assess lifecycle operation needs a manifest (--manifest) and the Kubernetes namespace to assess (--namespace). To connect to and assess the Kubernetes cluster, the KUBECONFIG environment variable is set on the container (-e).

To see all possible assess operation arguments, run assess with the --help flag:

docker run --rm \
      -v "$(pwd):/cwd" \
      sas-orchestration \
      lifecycle run \
          --operation assess \
          --deployment-dir /cwd/sas-bases \
          -- \
          --help

kustomize

The example assumes that the $deploy directory contains the kustomization.yaml and supporting files. Note that the kustomize call here is a simple example. Refer to the deployment documentation for full usage details.

cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -w /cwd \
  --entrypoint kustomize \
  sas-orchestration \
  build . > site.yaml

kubectl

This example assumes a site.yaml manifest file exists in $deploy. See the SAS Viya Platform Deployment Guide for instructions on how to create the site.yaml manifest.

Note: The kubectl call here is a simple example. Refer to the deployment documentation for full usage details.

cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -v /home/user/kubernetes:/kubernetes \
  -w /cwd \
  --entrypoint kubectl \
  sas-orchestration \
  --kubeconfig=/kubernetes/kubeconfig apply -f site.yaml

Additional Resources

Lifecycle Operation: Assess

Overview

The assess lifecycle operation assesses an undeployed manifest file for its eventual use in a cluster.

For general lifecycle operation execution details, please see the README file at $deploy/sas-bases/examples/kubernetes-tools/README.md (for Markdown) or $deploy/sas-bases/docs/using_kubernetes_tools_from_the_sas-orchestration_image.htm (for HTML).

Note: $deploy refers to the directory containing the deployment assets.

The following example assumes:

Example

cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -v /home/user/kubernetes:/kubernetes \
  -e "KUBECONFIG=/kubernetes/config" \
  sas-orchestration \
  lifecycle run \
    --operation assess \
    --deployment-dir /cwd \
    -- \
    --manifest /cwd/site.yaml \
    --namespace {{ NAME-OF-NAMESPACE }}

Note: To see the commands that would be executed from the operation without making any changes to the cluster, add -e "DISABLE_APPLY=true" to the container.

Lifecycle Operation: Start-all

You can stop and start your SAS Viya platform deployment by using CronJobs or by applying transformers. For details about both methods, see Starting and Stopping a SAS Viya Platform Deployment.

To schedule the start-all CronJob, use the start-stop example in $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/schedule-start-stop.

Lifecycle Operation: Stop-all

You can stop and start your SAS Viya platform deployment by using CronJobs or by applying transformers. For details about both methods, see Starting and Stopping a SAS Viya Platform Deployment.

To schedule the stop-all CronJob, use the start-stop example in $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/schedule-start-stop.

Lifecycle Operation: schedule-start-stop

The start-all and stop-all CronJobs can be run on a schedule using the example file in $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/schedule-start-stop. Copy the schedule-start-stop.yaml file into the site-config directory and revise it to insert a schedule for start-all and another schedule for stop-all. Add a reference to the file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/kubernetes-tools/lifecycle-operations/schedule-start-stop/schedule-start-stop.yaml

Note: This file should be included after the line

- sas-bases/overlays/required/transformers.yaml
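
For example, after the steps above the relevant portion of the base kustomization.yaml file might look like this (the site-config path assumes you copied schedule-start-stop.yaml as described above):

transformers:
...
- sas-bases/overlays/required/transformers.yaml
- site-config/kubernetes-tools/lifecycle-operations/schedule-start-stop/schedule-start-stop.yaml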

SAS Viya Platform Deployment Operator

The $deploy/sas-bases/examples/deployment-operator directory contains files for deploying and running the SAS Viya Platform Deployment Operator. For information about what the operator is and how to deploy and run it, see the SAS Viya Platform Deployment Guide.

Disassociate a SAS Viya Platform Deployment from the SAS Viya Platform Deployment Operator

To remove SAS Viya Platform Deployment Operator management of updates to a SAS Viya platform deployment, you must disassociate the deployment from the SASDeployment custom resource and then delete the SASDeployment custom resource. The $deploy/sas-bases/examples/deployment-operator/disassociate/disassociate-deployment-operator.sh script performs these actions.

Running the script requires bash, kubectl, and jq. SAS recommends that you save the current SASDeployment custom resource before executing the script because the script deletes it.
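
For example, one way to save the current custom resource before running the script, assuming the SASDeployment custom resource type is addressable as sasdeployment in your cluster (the output file name is a placeholder):

kubectl -n <name-of-namespace> get sasdeployment -o yaml > sasdeployment-backup.yaml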

First, make the script executable with the following command.

chmod 755 ./disassociate-deployment-operator.sh

Then execute the script, specifying the namespace which contains the SASDeployment custom resource.

./disassociate-deployment-operator.sh <name-of-namespace>

The script removes the SASDeployment ownerReference from the .metadata.ownerReferences field and the kubectl.kubernetes.io/last-applied-configuration annotation in all resources in the namespace. It then removes the SASDeployment custom resource. The SAS Viya platform deployment is otherwise unchanged.

Note: Running the disassociate script might cause the following message to be displayed. This message can be safely ignored.

Warning: path <API-path-for-URLs> cannot be used with pathType Prefix

If you want to use the SAS Viya Platform Deployment Operator for this SAS Viya platform deployment in the future, a SASDeployment custom resource can be reintroduced into the namespace. See the SAS Viya Platform: Deployment Guide for details.

Using a Mirror Registry

A mirror registry is a local registry of the software necessary to create your deployment. For the SAS Viya platform, a mirror registry is created with SAS Mirror Manager.

For more information about mirror repositories and SAS Mirror Manager, see Using a Mirror Registry.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Deploying with an Additional ImagePullSecret

Overview

This overlay is used to apply an additional imagePullSecret. This overlay is required for SAS Viya platform deployments on Red Hat OpenShift version 4.16 and later that use the OpenShift Container Registry as a mirror for their deployment assets.

Installation

Use these steps to apply the desired property to your SAS Viya platform deployment.

  1. Create the $deploy/site-config/add-imagepullsecret directory and copy $deploy/sas-bases/examples/add-imagepullsecret/configuration.env into it.

  2. Define the property in the configuration.env file. To define the property, update its token value as described in the comments in the file.

  3. Add the following path to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml):

    ...
    resources:
    ...
    - sas-bases/overlays/add-imagepullsecret/resources
    ...
  4. Add the following entry to the configMapGenerator block of the base kustomization.yaml file:

    ...
    configMapGenerator:
    ...
    - behavior: merge
      name: add-imagepullsecret-configuration
      envs:
        - site-config/add-imagepullsecret/configuration.env
    ...
  5. Add the following entry to the transformers block of the base kustomization.yaml file:

    ...
    transformers:
    ...
    - sas-bases/overlays/add-imagepullsecret/transformers.yaml
    ...
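
Taken together, the additions from steps 3 through 5 leave the base kustomization.yaml file with entries like the following (only the lines relevant to this overlay are shown):

resources:
...
- sas-bases/overlays/add-imagepullsecret/resources

configMapGenerator:
...
- behavior: merge
  name: add-imagepullsecret-configuration
  envs:
    - site-config/add-imagepullsecret/configuration.env

transformers:
...
- sas-bases/overlays/add-imagepullsecret/transformers.yaml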

Modify the sitedefault.yaml File

Overview

The sitedefault.yaml file specifies configuration properties that will be written to the Consul key value store when the sas-consul-server is started.

Each property in the sitedefault.yaml file will be written to the Consul key value store if it does not already exist.

Example:

The following properties specify the configuration for the LDAP provider and base points from which to search for groups and users.

- sas.identities.providers.ldap:
    - connection:
      - host: ldap.example.com
      - password:
      - port: 3269
      - url: ldaps://${sas.identities.providers.ldap.connection.host}:${sas.identities.providers.ldap.connection.port}
      - userDN: cn=AdminUser,dc=example,dc=com
    - group:
      - baseDN: ou=groups,dc=example,dc=com
    - user:
      - baseDN: DC=example,DC=com

Instructions

Caution: The example requires a value for a password. Because of the security concerns of providing a value for the required password field, an alternative method is described below. Using the sitedefault file to set LDAP properties is not required because an administrator can set the LDAP connection using SAS Environment Manager.

  1. Copy the sitedefault.yaml file from $deploy/sas-bases/examples/configuration to the site-config directory.

  2. In the file you just copied, provide the values you want to use for your deployment as described in the “Properties” section below.

  3. After you have entered the values for your deployment, revise the base kustomization.yaml file as described in “Add a sitedefault File to Your Deployment”.

Note: There will be an LDAP AuthenticationException in the log for the identities service. It can be safely ignored if you follow the remaining steps.

  4. Log in to SAS Environment Manager as sasboot.

  5. Using SAS Environment Manager, replace the temporary values you used for the ldap.connection password and userDN with real values.

  6. When the changes are picked up by SAS Environment Manager, select the SAS Administrators group under Custom Groups to see the LDAP users. You can add any LDAP user that is listed as an administrator.

  7. Log out of SAS Environment Manager and log back in as a user that was added in step 6. Use that user to get administrator privileges.

Properties

This section describes the properties associated with Lightweight Directory Access Protocol (LDAP) that can be specified in the sitedefault.yaml file. Any property that is marked as required must have a value specified.

For information about all the properties that can be configured in the sitedefault.yaml file, see “Configuration Properties: Reference (Services)”.

Lightweight Directory Access Protocol (LDAP)

The set of properties that are used to configure the LDAP provider.

sas.identities.providers.ldap.connection

The set of properties that are used to configure the connection to the LDAP provider.

host

The LDAP server’s host name.

Example: ldap.example.com

password

The password for logging on to the LDAP server.

Example: tempPassword

Caution: SAS recommends setting the password to a temporary string, such as tempPassword. See the Instructions for post-deployment steps to insert a real password.

port

The LDAP server’s port.

Example: 3269

url

The URL for connecting to the LDAP server.

Example: ldaps://${sas.identities.providers.ldap.connection.host}:${sas.identities.providers.ldap.connection.port}

userDN

The distinguished name (DN) of the user account for logging on to the LDAP server.

Example: tempUser

Caution: SAS recommends setting the userDN to a temporary string, such as tempUser. See the Instructions for post-deployment steps to insert a real userDN.

sas.identities.providers.ldap.group

The set of properties that are used to configure information for retrieving group information from the LDAP provider.

baseDN

The point from which the LDAP server searches for groups.

Example: ou=groups,dc=example,dc=com

sas.identities.providers.ldap.user

The set of properties that are used to configure additional information for retrieving user information from the LDAP provider.

baseDN

The point from which the LDAP server searches for users.

Example: DC=example,DC=com

Additional Resources

For more information about the sitedefault.yaml file, see “Add a sitedefault File to Your Deployment”.
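
As a sketch only: the file is typically referenced through a configMapGenerator merge in the base kustomization.yaml file. The generator name and key shown here are assumptions, so confirm them against "Add a sitedefault File to Your Deployment" before using them:

configMapGenerator:
...
- name: sas-consul-config
  behavior: merge
  files:
    - SITEDEFAULT_CONF=site-config/sitedefault.yaml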

CAS Server for the SAS Viya Platform

Overview

This directory contains files to Kustomize your SAS Viya platform deployment to use a multi-node SAS Cloud Analytic Services (CAS) server, referred to as MPP.

Instructions

Edit the kustomization.yaml File

In order to add this CAS server to your deployment, add a reference to the cas-server overlay to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml).

resources:
- sas-bases/overlays/cas-server

Modifying the Number of CAS Workers (MPP Only)

On an MPP CAS server, the number of workers helps determine the processing power of your cluster. The server is SMP by default, which means there are no workers. The default number of workers in the cas-server overlay (0) can be modified by using the cas-manage-workers.yaml example located in the CAS examples directory at /$deploy/sas-bases/examples/cas/configure. The number of workers cannot exceed the number of nodes in your Kubernetes cluster, so ensure that you have enough resources to accommodate the value you choose.
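
As an illustration only, the kind of change such a transformer applies is a patch to the worker count of the CASDeployment custom resource. The patch path below (/spec/workers) is an assumption, so follow the comments in the cas-manage-workers.yaml example file itself:

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: cas-manage-workers
patch: |-
  - op: replace
    path: /spec/workers
    value: {{ NUMBER-OF-WORKERS }}
target:
  group: viya.sas.com
  kind: CASDeployment
  name: .*
  version: v1alpha1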

Additional Modifications

You can make modifications to the overlay through the use of patch transformers. Examples are located in /$deploy/sas-bases/examples/cas/configure, including examples for adding volume mounts and data connectors, modifying CAS server resource allocation, and changing the default PVC access modes.

To be included in the manifest, any yaml files containing Patch Transformers must also be added to the transformers block of the base kustomization.yaml file:

transformers:
- {{ PATCH-FILE-1 }}
- {{ PATCH-FILE-2 }}

Optional CAS Server Placement Configuration

If you have an environment where there are untainted nodes, the Kubernetes scheduler may consider them candidates for the CAS Server. You can use an additional overlay to restrict the scheduling of the CAS server to nodes that have the dedicated label.

The dedicated label is workload.sas.com/class=cas

The label can be applied to a node with this command:

kubectl label nodes node1 workload.sas.com/class=cas --overwrite
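
To confirm which nodes carry the label, you can list them with a standard selector:

kubectl get nodes -l workload.sas.com/class=cas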

To add the label to the CAS Server, add sas-bases/overlays/cas-server/require-cas-label.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

Here is an example:

...
transformers:
...
- sas-bases/overlays/cas-server/require-cas-label.yaml
...

Alternatively, you can use the sas-bases/overlays/cas-server/require-cas-label-pools.yaml transformer if your deployment meets all of the following conditions:

If your deployment meets these conditions, add sas-bases/overlays/cas-server/require-cas-label-pools.yaml to the transformers block of the base kustomization.yaml file. Here is an example:

...
transformers:
...
- sas-bases/overlays/cas-server/require-cas-label-pools.yaml
...

CAS Configuration on an OpenShift Cluster

The /$deploy/sas-bases/examples/cas/configure directory contains a file to grant Security Context Constraints for fsgroup 1001 on an OpenShift cluster. A Kubernetes cluster administrator should add these Security Context Constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:

Step 1:

kubectl apply -f cas-server-scc.yaml

or

oc create -f cas-server-scc.yaml
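
If you run the command from outside the directory that contains the file, include the path to the example file described above. For instance:

kubectl apply -f $deploy/sas-bases/examples/cas/configure/cas-server-scc.yaml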

Step 2:

After the SCC has been applied, you must link the SCC to the appropriate ServiceAccount that will use it. Run the command that corresponds to the appropriate host launch type:

No host launch: oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-cas-server -z sas-cas-server

Host launch enabled: oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-cas-server-host -z sas-cas-server

Note: If you are enabling host launch, use the SecurityContextConstraints file cas-server-scc-host-launch.yaml instead of cas-server-scc.yaml. This file sets the correct capabilities and privilege escalation.

CAS Auto-Restart During Version Updates

By default, CAS does not automatically restart during version updates performed by the SAS Viya Platform Deployment Operator. The default prevents the disruption of active CAS sessions so that tables do not need to be reloaded. This default behavior can be changed by applying the cas-auto-restart.yaml example file located at /$deploy/sas-bases/examples/cas/configure. The example applies the autoRestart option to the pod spec. The deployment operator checks for this option on all existing CAS servers during software updates, and it automatically restarts servers that are tagged in this way.

  1. Copy the /$deploy/sas-bases/examples/cas/configure/cas-auto-restart.yaml to the site-config directory.

  2. By default, the target for this patch applies to all CAS servers:

    target:
      group: viya.sas.com
      kind: CASDeployment
      name: .*
      version: v1alpha1

    To target specific CAS servers, list the CAS servers to which the change should be applied in the name field.

    target:
      group: viya.sas.com
      kind: CASDeployment
      name: {{ NAME-OF-SERVER }}
      version: v1alpha1
  3. Add the cas-auto-restart.yaml file to the transformers section of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example that assumes the file was copied to the $deploy/site-config/cas/configure directory:

    transformers:
    ...
    - site-config/cas/configure/cas-auto-restart.yaml
    ...
  4. To validate that the auto-restart option has been enabled for a CAS server, run this command:

    kubectl -n <name-of-namespace> get pods <cas-server-pod-name> --show-labels

    If the label sas.com/cas-auto-restart=true is visible, then the auto-restart option has been applied successfully.

  5. If you subsequently want to disable auto-restart, remove cas-auto-restart.yaml from your transformers list to disable auto-restart for any future CAS servers. If you want to disable auto-restart on a CAS server that is already running, run the following command to disable it for that active server:

    kubectl -n <name-of-namespace> label pods --selector=app.kubernetes.io/instance=<cas-deployment-name> sas.com/cas-auto-restart=false

Note: You cannot enable both CAS auto-restart and state transfer in the same SAS Viya platform deployment.

Note: Ideally, this option should be set as a pre-deployment task. It can be applied to an already running CAS server, but that server must be manually restarted for the auto-restart option to take effect.

Build

After you configure Kustomize, continue your SAS Viya platform deployment as documented.
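
For example, using the sas-orchestration image as shown earlier, the manifest can be regenerated so that it includes the CAS server overlay:

cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -w /cwd \
  --entrypoint kustomize \
  sas-orchestration \
  build . > site.yaml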

Additional Resources

For more information about the difference between SMP and MPP CAS, see What is the CAS Server, SMP, and MPP?.

Create an Additional CAS Server

Overview

This README describes how to create additional CAS server definitions with the create-cas-server.sh script. The script creates a Custom Resource (CR) that can be added to your manifest and deployed to the Kubernetes cluster.

Running this script creates all of the artifacts that are necessary for deploying a new CAS server in the Kubernetes cluster in one directory. The directory can be referenced in the base kustomization.yaml.

Note: The script does not modify your Kubernetes cluster. It creates the manifests that you can apply to your Kubernetes cluster to add a CAS server.

Create a CAS Server

  1. Run the create-cas-server.sh script and specify, at a minimum, the instance name. The instance name is used to label the server and differentiate it from the default instance that is provided automatically. The default tenant name is “shared” and is provided automatically when multi-tenancy is not enabled in your deployment.

    ./create-cas-server.sh -i {{ INSTANCE }}

    The sample command creates a top-level directory cas-{{ TENANT }}-{{ INSTANCE }} that contains everything that is required for a new CAS server instance. For example, the directory contains the CR, PVC definitions for the permstore and data PVs, and so on.

    Optional arguments:

    • -o, --output: Output location. This argument is used to specify the parent directory for the output. For example, you can specify -o $deploy/site-config. If you do not create the output in that directory, you should move the new directory to $deploy/site-config.
    • -w, --workers: Specify the number of CAS worker nodes. Default is 0 (SMP).
    • -b, --backup: Set this to include a CAS backup controller. Disabled by default.
    • -t, --tenant: Set the tenant name to be used for this deployment. Default is ‘shared’.
    • -r, --transfer: Set this to enable support for state transfer between restarts. Disabled by default.
    • -a, --affinity: Specify the workload.sas.com/class node affinity and toleration to use for this deployment. Default is ‘cas’.
    • -q, --required-affinity: Set this to have the node affinity be a required node affinity. Default is preferred node affinity.
    • -v, --version: Provides the version of this CAS server creation utility tool.
    • -h, --help: Display help for all the available options.
  2. In the base kustomization.yaml file, add the new directory to the resources section so that the CAS server is included when the manifest is rebuilt. This server is fully customizable with the use of patch transformers.

    resources:
      - site-config/cas-{{ TENANT }}-{{ INSTANCE }}
  3. Deploy your software using the steps in Deploy the Software according to the method you are using. After the deployment completes, you can verify that the new CAS server and its persistent volume claims exist:

    kubectl get pods -l casoperator.sas.com/server={{ TENANT }}-{{ INSTANCE }}
    cas-{{ TENANT }}-{{ INSTANCE }}-controller     3/3     Running     0          1m
    
    kubectl get pvc -l sas.com/cas-instance={{ TENANT }}-{{ INSTANCE }}
    NAME                                                  STATUS  ...
    cas-{{ TENANT }}-{{ INSTANCE }}-data                   Bound  ...
    cas-{{ TENANT }}-{{ INSTANCE }}-permstore              Bound  ...

Example

Run the script with more options:

./create-cas-server.sh --instance sample --output . --workers 2 --backup 1

This sample command creates a new directory named cas-shared-sample in the current location and creates a new CAS distributed server (MPP) CR with 2 worker nodes and a backup controller.

Configuration Settings for CAS

Overview

This document describes the customizations that can be made by the Kubernetes administrator for deploying CAS in both symmetric multiprocessing (SMP) and massively parallel processing (MPP) configurations.

An SMP server requires one Kubernetes node. An MPP server requires one Kubernetes node for the server controller and two or more nodes for server workers. The SAS Viya Platform: Deployment Guide provides information to help you decide which configuration to use. A link to the deployment guide is provided in the Additional Resources section.

Installation

SAS provides example files for many common customizations. Read the descriptions for the example files in the following list. If you want to use an example file to simplify customizing your deployment, copy the file to your $deploy/site-config directory.

Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ NUMBER-OF-WORKERS }}. Replace the entire variable string, including the braces, with the value you want to use.

After you edit a file, add a reference to it in the transformer block of the base kustomization.yaml file.

Examples

The example files are located at $deploy/sas-bases/examples/cas/configure. The following is a list of each example file for CAS settings and the file name.

Manage the Number of Workers

Note: If you are using an SMP configuration, skip this section.

By default, MPP CAS has two workers. To modify the number of workers, you must modify the cas-manage-workers.yaml transformer file. The file can be modified before or after the initial deployment of your SAS Viya platform. Adding or removing workers does not require a restart, but existing CAS tables will not be load balanced to use the new workers by default. New tables should take advantage of the new workers.

To enable load balancing when changing the number of workers, you should enable CAS Node Scaling, which requires a modification to the cas-add-environment-variables.yaml transformer file. If automatic balancing of tables is desired when adding workers to a running server, the environment variables should be set at the time of the initial deployment, regardless of whether you are changing the number of workers at that time. Setting the variables allows you to use CAS Node Scaling after the software has been deployed without having to change any of the transformers or the base kustomization.yaml file. For details about CAS Node Scaling, see CAS Node Scaling.

Use the cas-manage-workers.yaml Transformer

To use the cas-manage-workers.yaml transformer, copy the file to the $deploy/site-config subdirectory. Then modify the file as described in the comments of the file itself before adding the file to the transformers block of the base kustomization.yaml file.

Set the Environment Variables for CAS Node Scaling

To set the environment variables for CAS Node Scaling, copy the cas-add-environment-variables.yaml file to the $deploy/site-config subdirectory. Modify the file to add the following environment variables:

...
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CAS_GLOBAL_TABLE_AUTO_BALANCE
      value: "true"
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CAS_SESSION_TABLE_AUTO_BALANCE
      value: "true"

Ensure that you accurately designate which CAS servers are receiving the new environment variables in the target block of the file. Then add the file to the transformer block of the base kustomization.yaml file.

Disable Cloud Native Mode

Perform these steps if cloud native mode should be disabled in your environment.

  1. Add the following code to the configMapGenerator block of the base kustomization.yaml file:

    ```yaml
    ...
    configMapGenerator:
    ...
    - name: sas-cas-config
      behavior: merge
      literals:
        - CASCLOUDNATIVE=0
    ...
    ```
    
  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Enable System Security Services Daemon (SSSD) Container

Note: If you are enabling SSSD on an OpenShift cluster, use the SecurityContextConstraint patch cas-server-scc-sssd.yaml instead of cas-server-scc.yaml. This will set the correct capabilities and privilege escalation.

If SSSD is required in your environment, add sas-bases/overlays/cas-server/cas-sssd-sidecar.yaml as the first entry to the transformers list of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

  ```yaml
  ...
  transformers:
  ...
  - sas-bases/overlays/cas-server/cas-sssd-sidecar.yaml
  ...
  ```

Note: In the transformers list, the cas-sssd-sidecar.yaml file must precede the entry sas-bases/overlays/required/transformers.yaml and any TLS transformers.

Add a Custom Configuration for System Security Services Daemon (SSSD)

Use these steps to provide a custom SSSD configuration to handle user authorization in your environment.

  1. Copy the $deploy/sas-bases/examples/cas/configure/cas-sssd-example.yaml file to the location of your CAS server overlay. Example: site-config/cas-server/cas-sssd-example.yaml

  2. Add the relative path of cas-sssd-example.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ```yaml
    ...
    transformers:
    ...
    - site-config/cas-server/cas-sssd-example.yaml
    ...
    ```
    
  3. Copy your custom SSSD configuration file to sssd.conf.

  4. Add the following code to the secretGenerator block of the base kustomization.yaml file with a relative path to sssd.conf:

    ```yaml
    ...
    secretGenerator:
    ...
    - name: sas-sssd-config
      files:
        - SSSD_CONF=site-config/cas-server/sssd.conf
      type: Opaque
    ...
    ```
    

Enable Host Launch in the CAS Server

Note: If you use Kerberos in your deployment, or enable SSSD and disable CASCLOUDNATIVE, you must enable host launch.

By default, CAS cannot launch sessions under a user’s host identity. All sessions run under the cas service account instead. CAS can be configured to allow for host identity launches by including a patch transformer in the kustomization.yaml file. The /$deploy/sas-bases/examples/cas/configure directory contains a cas-enable-host.yaml file, which can be used for this purpose.

Note: If you are enabling host launch on an OpenShift cluster, specify one of the following files to create the SecurityContextConstraint instead of cas-server-scc.yaml:

This will set the correct capabilities and privilege escalation.

To enable this feature:

  1. Copy the $deploy/sas-bases/examples/cas/configure/cas-enable-host.yaml file to the location of your CAS server overlay. For example, site-config/cas-server/cas-enable-host.yaml.

  2. The example file defaults to targeting all CAS servers by specifying a name component of .*. To target specific CAS servers, comment out the name: .* line and choose which CAS servers you want to target. Either uncomment the name: and replace NAME-OF-SERVER with one particular CAS server or uncomment the labelSelector line to target only the default deployment.

  3. Add the relative path of the cas-enable-host.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml) before the reference to the sas-bases/overlays/required/transformers.yaml file and any SSSD transformers. Here is an example:

    ```yaml
    transformers:
    ...
    - site-config/cas-server/cas-enable-host.yaml
    ...
    - sas-bases/overlays/required/transformers.yaml
    ...
    ```
    

Enable CAS Internode Encryption

CAS supports encrypting connections between the worker nodes. When internode encryption is configured, any data sent between worker nodes is sent over a TLS connection.

By default, CAS internode communication is not encrypted in any of the SAS Viya platform encryption modes. If required, CAS internode encryption should only be enabled in the “Full-stack TLS” encryption mode.

Before deciding to enable CAS internode encryption, you should be familiar with the content in SAS Viya Platform Encryption: Data in Motion.

Note: Encryption has performance costs. Enabling CAS internode encryption will degrade your performance and increase the amount of CPU time that is required to complete any action. Actions that move large amounts of data are penalized the most. Session start-up time is also impacted negatively. Testing indicates that scenarios that move large blocks of data between nodes can increase elapsed action times by a factor of ten.

Perform these steps to enable CAS internode encryption.

  1. Copy the $deploy/sas-bases/examples/cas/configure/cas-enable-internode-tls.yaml file into your site-config directory. For example: site-config/cas-server/cas-enable-internode-tls.yaml

  2. The cas-enable-internode-tls.yaml transformer file defaults to targeting all CAS servers by specifying a name component of .*. Edit the transformer to indicate the CAS servers you want to target for CAS internode encryption. For more information about selecting specific CAS servers, see Targeting CAS Servers.

  3. Add the relative path of the cas-enable-internode-tls.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml) before the reference to the sas-bases/overlays/required/transformers.yaml. Here is an example:

    ```yaml
    transformers:
    ...
    - site-config/cas-server/cas-enable-internode-tls.yaml
    ...
    - sas-bases/overlays/required/transformers.yaml
    ...
    ```
    

Configure Behavior of CAS State Transfer

For the instructions to set up a CAS State transfer, including configuration steps, see the README file located at $deploy/sas-bases/overlays/cas-server/state-transfer/README.md (for Markdown format) or at $deploy/sas-bases/docs/state_transfer_for_cas_server_for_the_sas_viya_platform.htm (for HTML format).

Enable a Backing Store for CAS

Generally, when CAS allocates memory, it uses memory allocated from the threaded kernel. However, such memory is susceptible to the Linux Out of Memory (OOM) killer, potentially causing the entire deployment of CAS to restart and interrupting the functionality of CAS. To avoid some of the risk, you can enable a backing store for the memory allocation.

One of the following patch transformers can be used to enable the use of a backing store for CAS memory allocation.

Follow the instructions in the comments of the patch transformers to replace variables with the appropriate values.

Note: For information about CAS resource management policies, see CAS Resource Management Policies.

Targeting CAS Servers

Each example patch has a target section which tells it what resource(s) it should apply to. There are several parameters including object name, kind, version, and labelSelector. By default, the examples in this directory use name: .* which applies to all CAS server definitions. If there are multiple CAS servers and you want to target a specific instance, you can set the “name” option to the name of that CASDeployment. If you want to target the default “cas-server” overlay you can use a labelSelector:

Example:

target:
  name: cas-example
  labelSelector: "sas.com/cas-server-default"
  kind: CASDeployment

Note: When explicitly targeting the default CAS server that is provided, the path option must be used because the name is a config map token that cannot be targeted.

Additional Resources

For more information about CAS configuration and using example files, see the SAS Viya Platform: Deployment Guide.

Auto Resources for CAS Server for the SAS Viya Platform

Overview

This directory contains files to Kustomize your SAS Viya platform deployment to enable automatic resource limit allocation.

Instructions

Edit the kustomization.yaml File

In order to add this CAS server to your deployment, perform both of the following steps.

First, add a reference to the auto-resources overlay to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). This enables the ClusterRole and ClusterRoleBinding for the sas-cas-operator Service Account.

resources:
...
- sas-bases/overlays/cas-server/auto-resources

Next, add the transformer to remove any hardcoded resource requests for CPU and memory from your CAS deployment. This allows the resources to be auto-calculated.

transformers:
...
- sas-bases/overlays/cas-server/auto-resources/remove-resources.yaml

Build

After you configure Kustomize, continue your SAS Viya platform deployment as documented.

State Transfer for CAS Server for the SAS Viya Platform

Overview

This directory contains files to Kustomize your SAS Viya platform deployment to enable state transfers. Enabling state transfers allows the sessions, tables, and state of a running CAS server to be preserved between a running CAS server and a new CAS server instance which will be started as part of the CAS server upgrade.

Note: You cannot enable both CAS auto-restart and state transfer in the same SAS Viya platform deployment. If you have already enabled auto-restart, disable it before continuing.

Instructions

Edit the kustomization.yaml File

To add the new CAS server to your deployment:

  1. Add a reference to the state-transfer overlay to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). This overlay adds a PVC to the deployment to store the temporary state data during a state transfer. This PVC is mounted to both the source and target system and must be large enough to hold all session and global tables that are loaded at transfer time. If you need to increase the size of the transfer PVC, consider using the cas-modify-pvc-storage.yaml example file.

    resources:
    ...
    - sas-bases/overlays/cas-server/state-transfer
  2. Add the state-transfer transformer to the deployment to enable the state transfer feature:

    transformers:
    ...
    - sas-bases/overlays/cas-server/state-transfer/support-state-transfer.yaml
  3. Determine the method to transfer the state. The model ‘readonly’ has a shorter window where the server is unresponsive. However, during the transfer, attempts to alter or create global tables will fail. The model ‘suspend’ has a longer window where the server is unresponsive, and attempts to alter or create global tables will wait until the transfer is complete.

    The default state transfer model is ‘suspend’. If you want to specify a model at deployment time, copy the $deploy/sas-bases/examples/cas/configure/cas-add-environment-variables.yaml file to $deploy/site-config/cas/configure/cas-add-environment-variables.yaml, if you have not already done so. In the copied file, change the value of CASCFG_STATETRANSFERMODEL to the model you want to use. The model can also be changed by altering the CAS server option stateTransferModel.

    Here is an example of the code used to set the state transfer model to ‘readonly’.

    ...
    patch: |-
      - op: add
        path: /spec/controllerTemplate/spec/containers/0/env/-
        value:
          name: CASCFG_STATETRANSFERMODEL
          value: "readonly"
  4. Decide if you want to limit the amount of data in individual sessions to be transferred. The server will be unresponsive while session tables are transferred between the original server and the new server. The length of this period of unresponsiveness can be managed by setting the MAXSESSIONTRANSFERSIZE server option. Any session that has more data loaded than the value of this option will not be transferred to the new session. The default behavior is to impose no limit. Smaller values of this option can reduce the amount of time that the server is unresponsive during a state transfer.

    If you want to specify a limit at deployment time, copy the $deploy/sas-bases/examples/cas/configure/cas-add-environment-variables.yaml file to $deploy/site-config/cas/configure/cas-add-environment-variables.yaml, if you have not already done so. In the copied file, set the environment variable CASCFG_MAXSESSIONTRANSFERSIZE.

    Here is an example of the code used to set the session transfer size limit to 10 million bytes.

    ...
    patch: |-
      - op: add
        path: /spec/controllerTemplate/spec/containers/0/env/-
        value:
          name: CASCFG_MAXSESSIONTRANSFERSIZE
          value: "10000000"
  5. If you have changed the values of CASCFG_STATETRANSFERMODEL or CASCFG_MAXSESSIONTRANSFERSIZE, add a reference to the cas-add-environment-variables.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/cas/configure/cas-add-environment-variables.yaml

    If you have already made some configuration changes for CAS, this entry may already exist in the transformers block.

Build

After you configure Kustomize, continue your SAS Viya platform deployment as documented.

SAS GPU Reservation Service

Overview

The SAS GPU Reservation Service aids SAS processes in resource sharing and utilization of the graphics processing units (GPUs) that are available in a Kubernetes Pod. It is available by default in every SAS Cloud Analytic Services (CAS) Pod, but it must be enabled in order to take advantage of the GPUs in your cluster.

In MPP CAS server configurations, GPU resources are only used on the CAS worker nodes. Additional CAS server placement configuration can be used to configure distinct node pools for CAS controller pods and CAS worker pods. This allows CAS controller pods to be scheduled on more economical nodes while CAS worker pods are scheduled on nodes that provide GPU resources.

For instructions to set up distinct node pools for CAS Controllers and CAS Workers, including configuration steps, see the README file located at $deploy/sas-bases/overlays/cas-server/auto-resources/README.md (for Markdown format) or at $deploy/sas-bases/docs/auto_resources_for_cas_server_for_sas_viya.htm (for HTML format).

The SAS GPU Reservation Service is available on all supported cloud platforms. If you are deploying on Microsoft Azure, refer to Azure Configuration, Using Azure CLI or Azure Portal, Using SAS Viya Infrastructure as Code for Microsoft Azure, and Using the NVIDIA Device Plug-In. If you are deploying on a provider other than Microsoft Azure, refer to Installing the NVIDIA GPU Operator.

Note: If you are using Kubernetes 1.20 and later and you choose to use Docker as your container runtime, the NVIDIA GPU Operator is not needed.

Azure Configuration

If you are deploying the SAS Viya platform on Microsoft Azure, before you enable CAS to use GPUs, the Azure Kubernetes Service (AKS) cluster must be properly configured.

The cas node pool must be configured with a properly sized N-Series Virtual Machine (VM). The N-Series VMs in Azure have GPU capabilities.

Using Azure CLI or Azure Portal

If the cas node pool already exists, the VM node size cannot be changed. The cas node pool must first be deleted and then re-created to the proper VM size and node count.

WARNING: Deleting a node pool on an actively running SAS Viya platform deployment will cause any CAS sessions to be prematurely terminated. These steps should only be performed on an idle deployment. The node pool can be deleted and re-created using the Azure portal or the Azure CLI.

az aks nodepool delete --cluster-name <replace-with-aks-cluster-name> --name cas --resource-group <replace-with-resource-group>

az aks nodepool add --cluster-name <replace-with-aks-cluster-name> --name cas --resource-group <replace-with-resource-group> --node-count <replace with node count> --node-vm-size "<replace with N-Series VM>" [--zones <replace-with-availability-zone-number>]
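
The following is an illustration only; the cluster name, resource group, node count, and VM size are placeholders that you must choose for your environment (Standard_NC6s_v3 is one example of an N-Series size):

az aks nodepool add --cluster-name my-aks-cluster --name cas --resource-group my-resource-group --node-count 4 --node-vm-size "Standard_NC6s_v3"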

Using SAS Viya Infrastructure as Code for Microsoft Azure

SAS Viya 4 Infrastructure as Code (IaC) for Microsoft Azure (viya4-iac-azure) contains Terraform scripts to provision Microsoft Azure Cloud infrastructure resources required to deploy SAS Viya platform products. Edit the terraform.tfvars file and change the machine_type for the cas node pool to an N-Series VM.

node_pools = {
  cas = {
    "machine_type" = "<Change to N-Series VM>"
    ...
  },
  ...
}

Verify the cas node pool was created and properly sized.

az aks nodepool list -g <resource-group> --cluster-name <cluster-name> --query '[].{Name:name, vmSize:vmSize}'

Using the NVIDIA Device Plug-In

An additional requirement in a Microsoft Azure environment is that the NVIDIA device plug-in must be installed and configured. The example nvidia-device-plugin-ds.yaml manifest requires the following addition to the tolerations block so that the plug-in will be scheduled on to the CAS node pool.

tolerations:
...
- key: workload.sas.com/class
  operator: Equal
  value: "cas"
  effect: NoSchedule
...

Create the gpu-resources namespace and apply the updated manifest to create the NVIDIA device plug-in DaemonSet.

kubectl create namespace gpu-resources
kubectl apply -f nvidia-device-plugin-ds.yaml

Installing the NVIDIA GPU Operator

Beginning with Kubernetes version 1.20, Docker was deprecated as the default container runtime in favor of the OCI-compliant containerd. In order to leverage GPUs using this new container runtime, install the NVIDIA GPU Operator into the same cluster as the SAS Viya platform. After the NVIDIA GPU Operator is deployed into your cluster, proceed with enabling the SAS GPU Reservation Service.
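
One common way to install the operator is with Helm. The commands below are an assumption based on NVIDIA's published Helm chart and release naming, so confirm them against the NVIDIA GPU Operator documentation before use:

helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install --wait gpu-operator nvidia/gpu-operator --namespace gpu-operator --create-namespace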

Enable the SAS GPU Reservation Service

CAS GPU patch files are located at $deploy/sas-bases/examples/gpu.

  1. Copy the appropriate patch files for your CAS Server configuration:

    • For SMP CAS servers and MPP CAS servers without distinct cascontroller and casworker node pool configurations, copy $deploy/sas-bases/examples/gpu/cas-gpu-patch.yaml and $deploy/sas-bases/examples/gpu/kustomizeconfig.yaml to $deploy/site-config/gpu/.

    • For MPP CAS servers with distinct cascontroller and casworker node pool configurations, copy $deploy/sas-bases/examples/gpu/cas-gpu-patch-worker-only.yaml and $deploy/sas-bases/examples/gpu/kustomizeconfig.yaml to $deploy/site-config/gpu/.

  2. In the copied cas-gpu-patch.yaml or cas-gpu-patch-worker-only.yaml file, make the following changes:

    • Revise the values for the resource requests and resource limits so that they are the same and do not exceed the maximum number of GPU devices on a single node.

    • In the cas-vars section, consider whether you require a different level of information from the GPU process. The value for SASGPUD_LOG_TYPE can be info, json, debug, or trace.

      After you have made your changes, save and close the revised file.

  3. After you edit the file, add the following references to the base kustomization.yaml file ($deploy/kustomization.yaml):

    • Add the path to the selected cas-gpu-patch file as the first entry in the transformers block.

    • Add the path to the kustomizeconfig.yaml file to the configurations block. If the configurations block does not exist yet, create it.

    Here are examples of these changes:

    ...
    transformers:
    - site-config/gpu/cas-gpu-patch.yaml
    
    ...
    configurations:
    - site-config/gpu/kustomizeconfig.yaml

Additional Resources

Configure SAS Compute Server to Use SAS Refresh Token Sidecar

Overview

The SAS Compute server provides the ability to execute SAS Refresh Token, a sidecar that works as a silent partner to the main container, refreshing the client token as needed. The sidecar is valuable for long-running tasks that exceed the default lifetime of the client token, which would otherwise prevent those tasks from completing successfully. The sidecar seamlessly refreshes the token so that these tasks can continue running unimpeded.

The SAS Refresh Token facility is disabled by default. This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server to run the SAS Refresh Token sidecar.

Installation

Enable the pod where the SAS Compute server is running to run SAS Refresh Token. SAS Refresh Token starts when the SAS Compute server is started and exists for the life of the SAS Compute server.

Enable SAS Refresh Token in the SAS Compute Server

SAS has provided an overlay to enable SAS Refresh Token in your environment.

To use the overlay:

  1. Add a reference to the sas-programming-environment/refreshtoken overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ```yaml
    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/refreshtoken
    - sas-bases/overlays/required/transformers.yaml
    ...
    ```
    

    NOTE: The reference to the sas-programming-environment/refreshtoken overlay MUST come before the required transformers.yaml, as shown in the example above.

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Disable SAS Refresh Token in the SAS Compute Server

To disable SAS Refresh Token:

  1. Remove sas-bases/overlays/sas-programming-environment/refreshtoken from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

LOCKDOWN Settings for the SAS Programming Environment

Overview

This document describes the customizations that can be made by the Kubernetes administrator for managing the settings for the LOCKDOWN feature in the SAS Programming Environment.

For more information about LOCKDOWN, see LOCKDOWN System Option.

Installation

Read the descriptions for the example files in the following list. If you want to use an example file to simplify customizing your deployment, copy the file to your $deploy/site-config directory.

Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ AMOUNT-OF-STORAGE }}. Replace the entire variable string, including the braces, with the value you want to use.

After you edit a file, add a reference to it in the transformers block of the base kustomization.yaml file.

Here is an example using the enable LOCKDOWN access methods transformer, saved to $deploy/site-config/sas-programming-environment/lockdown:

  transformers:
  ...
  - /site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
  ...

Examples

The default behavior allows the following access methods to be enabled via LOCKDOWN:

- HTTP
- EMAIL
- FTP
- HADOOP
- JAVA

These settings can be toggled using the transformers in the example files. The example files are located at $deploy/sas-bases/examples/sas-programming-environment/lockdown.

To enable access methods not included in the list above, such as PYTHON or PYTHON_EMBED, replace {{ ACCESS-METHOD-LIST }} in enable-lockdown-access-methods.yaml. For example:

...
patch : |-
  - op: add
    path: /data/VIYA_LOCKDOWN_USER_METHODS
    value: "python python_embed"
...

NOTE: The names of the access methods are case-insensitive.

To disable access methods from the default list above, such as JAVA, replace {{ ACCESS-METHOD-LIST }} in the corresponding disable transformer example file. For example:

...
patch : |-
  - op: add
    path: /data/VIYA_LOCKDOWN_USER_DISABLED_METHODS
    value: "java"
...

NOTE: The names of the access methods are case-insensitive.

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

Disable Generation of Java Security Policy File for SAS Programming Environment

Overview

This document describes the customizations that can be made by the Kubernetes administrator for managing the Java security policy file that is generated for the SAS Programming Environment.

By default, the deployment of the SAS Programming Environment generates a Java security policy file to prevent SAS programmers from executing, directly from SAS code, Java code that the administrator would deem unsafe. This README file describes the customizations that the Kubernetes administrator can make to manage the generated Java security policy file.

Installation

The generated Java security policy controls permissions for Java access inside of the SAS Programming Environment. In cases where the application of the policy file is deemed restrictive by the administrator, the generation of the policy file can be disabled.

Disable the Generation of the Java Security Policy File

SAS has provided an overlay to disable the generation of the Java security policy.

To use the overlay:

  1. Add a reference to the sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml
    - sas-bases/overlays/required/transformers.yaml
    ...

    NOTE: The reference to the sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml overlay MUST come before the required transformers.yaml, as seen in the example above.

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Enable the Generation of the Java Security Policy File

To enable the generation of the Java security policy file:

  1. Remove sas-bases/overlays/sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Adding Classes to Java Security Policy File Used by SAS Programming Environment

Overview

This document describes the customizations that can be made by the Kubernetes administrator for managing the Java security policy file that is generated for the SAS Programming Environment.

By default the SAS Programming Environment generates a Java security policy file to prevent SAS programmers from executing Java code directly from SAS code that would be deemed unsafe by the administrator. This README describes the customizations that can be made by the Kubernetes administrator for managing the Java security policy file that is generated for the SAS Programming Environment.

If a class is determined acceptable by the Kubernetes administrator, the following steps allow that class to be added.

Installation

The default behavior generates a Java security policy file similar to

grant {
permission java.lang.RuntimePermission "*";
permission java.io.FilePermission "<<ALL FILES>>", "read, write, delete";
permission java.util.PropertyPermission "*", "read, write";
permission java.net.SocketPermission "*", "connect,accept,listen";
permission java.io.FilePermission "com.sas.analytics.datamining.servertier.SASRScriptExec", "exec";
permission java.io.FilePermission "com.sas.analytics.datamining.servertier.SASPythonExec", "exec";
};

The Java security policy file can be modified by using the add-allowed-java-class.yaml file.

  1. Copy the $deploy/sas-bases/examples/sas-programming-environment/java-security-policy/add-allowed-java-class.yaml file to the site-config directory.

  2. To add classes with an exec permission to this generated policy file, replace the following in the copied file.

    • Replace {{ NAME }} with a unique name for the class. This is for internal identification.
    • Replace {{ CLASS-NAME }} with the Java class name that is to be allowed.

    For example,

    ...
    patch: |-
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_TESTCLASS
        value: "my.org.test.testclass"
    ...
  3. After you edit the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example assuming the file has been saved to $deploy/site-config/sas-programming-environment/java-security-policy:

    transformers:
    ...
    - /site-config/sas-programming-environment/java-security-policy/add-allowed-java-class.yaml
    ...

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

Configuring SAS Compute Server to Use SAS Watchdog

Overview

The SAS Compute server provides the ability to execute SAS Watchdog, which monitors spawned processes to ensure that they comply with the terms of the LOCKDOWN system option.

The LOCKDOWN system option employs an allow list in the SAS Compute server. Only files that reside in paths or folders that are included in the allow list can be accessed by the SAS Compute server. The limitation on the LOCKDOWN system option is that it can only block access to files and folders directly accessed by SAS Compute server processing. The SAS Watchdog facility extends this checking to files and folders that are used by languages that are invoked by the SAS Compute server. Therefore, code written in Python, R, or Java that is executed directly in the SAS Compute server process is checked against the allow list. The configuration of the SAS Watchdog facility replicates the allow list that is configured by the LOCKDOWN system option by default.

Note: For more information about the LOCKDOWN system option, see LOCKDOWN System Option.

The SAS Watchdog facility is disabled by default. This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server to run SAS Watchdog.

Installation

Enable the ability for the pod where the SAS Compute Server is running to run SAS Watchdog. SAS Watchdog starts when the SAS Compute server is started, and exists for the life of the SAS Compute server.

Enable SAS Watchdog in the SAS Compute Server

SAS has provided an overlay to enable SAS Watchdog in your environment.

To use the overlay:

  1. Add a reference to the sas-programming-environment/watchdog overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/watchdog
    - sas-bases/overlays/required/transformers.yaml
    ...

    NOTE: The reference to the sas-programming-environment/watchdog overlay MUST come before the required transformers.yaml, as seen in the example above.

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Disabling SAS Watchdog in the SAS Compute Server

To disable SAS Watchdog:

  1. Remove sas-bases/overlays/sas-programming-environment/watchdog from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Additional Instructions for an OpenShift Environment

Apply Security Context Constraint (SCC)

As a Kubernetes cluster administrator of the OpenShift cluster, use one of the following commands to apply the Security Context Constraint. An example of the yaml may be found in sas-bases/examples/sas-programming-environment/watchdog/sas-watchdog-scc.yaml.

kubectl apply -f sas-watchdog-scc.yaml
oc apply -f sas-watchdog-scc.yaml

Grant the Service Account Use of the SCC

oc -n <namespace> adm policy add-scc-to-user sas-watchdog -z sas-programming-environment

Remove the Service Account from the SCC

Run the following command to remove the service account from the SCC:

oc -n <namespace> adm policy remove-scc-from-user sas-watchdog -z sas-programming-environment

Delete the SCC

Run one of the following commands to delete the SCC after it has been removed:

kubectl delete scc sas-watchdog
oc delete scc sas-watchdog

NOTE: Do not delete the SCC if there are other SAS Viya platform deployments in the cluster. Only delete the SCC after all namespaces running SAS Viya platform in the cluster have been removed.

Configuring SAS Compute Server to Use a Personal CAS Server

Overview

The SAS Compute server provides the ability to execute SAS code that can drive requests into the shared CAS server in the cluster. For development purposes in applications such as SAS Studio, you might need to give data scientists the ability to work with a CAS server that is local to their SAS Compute session.

This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server users access to a personal CAS server. This personal CAS server uses symmetric multiprocessing (SMP) architecture.

Note: The README for Personal CAS Server with GPU is located at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server-with-gpu/README.md (for Markdown format) or $deploy/sas-bases/doc/configuring_sas_compute_server_to_use_a_personal_cas_server-with-gpu.htm (for HTML format).

Installation

Enable the ability for the pod where the SAS Compute server is running to contain a personal CAS server instance. This CAS server starts when the SAS Compute server is started, and exists for the life of the SAS Compute server. Code executing in the SAS Compute session can then be directed to this personal CAS server.

Enable the Personal CAS Server in the SAS Compute Server

SAS has provided an overlay to enable the personal CAS server in your environment.

To use the overlay:

  1. Add a reference to the sas-programming-environment/personal-cas-server overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/personal-cas-server
    - sas-bases/overlays/required/transformers.yaml
    ...

    NOTE: The reference to the sas-programming-environment/personal-cas-server overlay MUST come before the required transformers.yaml, as seen in the example above.

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Disabling the Personal CAS Server in the SAS Compute Server

To disable the personal CAS Server:

  1. Remove sas-bases/overlays/sas-programming-environment/personal-cas-server from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Configuring SAS Compute Server to Use a Personal CAS Server with GPU

Overview

The SAS Compute server provides the ability to execute SAS code that can drive requests into the shared CAS server in the cluster. For development purposes in applications such as SAS Studio, you might need to give data scientists the ability to work with a CAS server that is local to their SAS Compute session.

This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server users access to a personal CAS server with GPU. This personal CAS server uses symmetric multiprocessing (SMP) architecture.

Installation

Enable the ability for the pod where the SAS Compute server is running to contain a personal CAS server (with GPU) instance. This CAS server starts when the SAS Compute server is started, and exists for the life of the SAS Compute server. Code executing in the SAS Compute session can then be directed to this personal CAS server (with GPU).

Installing this overlay is the same as installing the overlay that adds the Personal CAS Server without GPU; only the overlay name differs.

Add GPU to an Existing Personal CAS Server

If you want to add GPU to an existing Personal CAS Server, perform these steps:

  1. Follow the instructions in the README to remove the Personal CAS Server. The README for Personal CAS Server (without GPU) is located at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server/README.md (for Markdown format) or $deploy/sas-bases/doc/configuring_sas_compute_server_to_use_a_personal_cas_server.htm (for HTML format).

  2. Use this overlay (and these instructions) to add Personal CAS Server with GPU.

Note: Only one personal CAS server may be present in the SAS Compute server.

Enable the Personal CAS Server in the SAS Compute Server

SAS has provided an overlay to enable the personal CAS server in your environment.

To use the overlay:

  1. Add a reference to the sas-programming-environment/personal-cas-server-with-gpu overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/personal-cas-server-with-gpu
    - sas-bases/overlays/required/transformers.yaml
    ...

    Note: The reference to the sas-programming-environment/personal-cas-server-with-gpu overlay must come before the required transformers.yaml, as seen in the example above.

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Disabling the Personal CAS Server in the SAS Compute Server

To disable the personal CAS Server:

  1. Remove sas-bases/overlays/sas-programming-environment/personal-cas-server-with-gpu from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Configuration Settings for the Personal CAS Server

Overview

This document describes the customizations that can be made by the Kubernetes administrator for deploying the Personal CAS Server.

Installation

The SAS Viya Platform provides example files for many common customizations. Read the descriptions for the example files in the following list. If you want to use an example file to simplify customizing your deployment, copy the file to your $deploy/site-config directory.

Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ AMOUNT-OF-STORAGE }}. Replace the entire variable string, including the braces, with the value you want to use.
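
For instance, if the copied example file contained a storage-size variable, the edit might look like the following sketch. The field name and value here are purely illustrative; use the actual contents and comments of your copied file:

```yaml
# Illustrative only: a line as shipped in an example file, such as
#   storage: {{ AMOUNT-OF-STORAGE }}
# becomes, after replacing the entire variable string (braces included):
storage: 20Gi
```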

After you edit a file, add a reference to it in the transformers block of the base kustomization.yaml file.

Here is an example using the host path transformer, saved to $deploy/site-config/sas-programming-environment/personal-cas-server:

```yaml
transformers:
...
- /site-config/sas-programming-environment/personal-cas-server/personal-cas-modify-host-cache.yaml
...

```

Examples

The example files are located at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server. The following is a list of each example file.

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

SAS Programming Environment Storage Tasks

Overview

The SAS Viya platform requires the ability to have write access to certain locations in the environment. An example of this is the SASWORK location, where data used at runtime may be created or modified. The SAS Programming Environment container image is set up by default to use an emptyDir volume for this purpose. Depending on workload, you may need to configure different storage classes for these volumes.

A storage class in Kubernetes is defined by a StorageClass resource. Examples of StorageClasses can be found at Storage Classes.

This README describes how to use example files to configure the required storage classes.

Installation

The following processes assign their runtime storage locations using the process described above.

The default behavior assigns an emptyDir volume for use for runtime storage by these server applications.

This processing takes place at the initialization of the server application; therefore these changes take effect upon the next launch of a pod for the server application.

The volume storage class for these applications can be modified by using the transformers in the example file located at $deploy/sas-bases/examples/sas-programming-environment/storage.

  1. Copy the $deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml file to the site-config directory.

  2. To change the StorageClass replace the {{ VOLUME-STORAGE-CLASS }} variable in the copied file with a different volume storage class. The example file provided looks like the following:

    - op: add
      path: /template/spec/volumes/-
      value:
        name: viya
        {{ VOLUME-STORAGE-CLASS }}

    For example, assume that the storage location you want to use is an NFS volume. That volume may be described in the following way:

    nfs:
      server: myserver.mycompany.com
      path: /path/to/my/location

    To use this in the transformer, substitute in the volume definition in the {{ VOLUME-STORAGE-CLASS }} location. The result would look like this:

    - op: add
      path: /template/spec/volumes/-
      value:
        name: viya
        nfs:
          server: myserver.mycompany.com
          path: /path/to/my/location

    Note: The transformers defined here delete the previously defined viya volume specification in the associated podTemplates. Any content that may exist in the current viya volume is not affected by these transformers.

  3. After you edit the change-viya-volume-storage-class.yaml file, add it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Note: The reference to the site-config/change-viya-volume-storage-class.yaml overlay must come before the required transformers.yaml.

    Here is an example assuming the file has been saved to $deploy/site-config:

    transformers:
    ...
      - site-config/change-viya-volume-storage-class.yaml
      - sas-bases/overlays/required/transformers.yaml
    ...

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

SAS Batch Server Storage Task for Checkpoint/Restart

Overview

A SAS Batch Server has the ability to restart a SAS job using either SAS’s data step checkpoint/restart capability or SAS’s label checkpoint/restart capability. For the checkpoint/restart capability to work properly, the checkpoint information must be stored on storage that persists across all compute nodes in the deployment. When the Batch Server job is restarted, it will have access to the checkpoint information no matter what compute node it is started on.

The checkpoint information is stored in SASWORK, which is allocated in the volume named viya. Since a Batch Server is a SAS Viya platform server that uses the SAS Programming Run-Time Environment, it is possible that the viya volume may be set to ephemeral storage by the $deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml transformers. If that is the case, the Batch Server’s viya volume would need to be changed to persistent storage without changing any other server’s storage.

Note: For more information about changing the storage for SAS Viya platform servers that use the SAS Programming Run-Time Environment, see the README file at $deploy/sas-bases/examples/sas-programming-environment/storage/README.md (for Markdown format) or at $deploy/sas-bases/docs/sas_programming_environment_storage_tasks.htm (for HTML format).

The transformers described in this README set the storage class for the SAS Batch Server’s viya volume defined in the SAS Batch Server pod templates without changing the storage of the other SAS Viya platform servers that use the SAS Programming Run-Time Environment.

Installation

The changes described by this README take place at the initialization of the server application; therefore the changes take effect at the next launch of a pod for the server application.

The volume storage class for these applications can be modified by using the example file located at $deploy/sas-bases/examples/sas-batch-server/storage.

  1. Copy the $deploy/sas-bases/examples/sas-batch-server/storage/change-batch-server-viya-volume-storage-class.yaml file to the site-config directory.

  2. To change the storage class, replace the {{ VOLUME-STORAGE-CLASS }} variable in the copied file with a different volume storage class. The unedited example file contains a transformer that looks like this:

     ---
     apiVersion: builtin
     kind: PatchTransformer
     metadata:
       name: add-batch-viya-volume
     patch: |-
       - op: add
         path: /template/spec/volumes/-
         value:
           name: viya
           {{ VOLUME-STORAGE-CLASS }}
     target:
       kind: PodTemplate
       labelSelector: "launcher.sas.com/job-type=sas-batch-job"

    Assume that the storage location you want to use is an NFS volume. That volume may be described in the following way:

     nfs:
       server: myserver.mycompany.com
       path: /path/to/my/location

    To use this storage location in the transformer, substitute in the volume definition in the {{ VOLUME-STORAGE-CLASS }} location. The result would look like this:

     ---
     apiVersion: builtin
     kind: PatchTransformer
     metadata:
       name: add-batch-viya-volume
     patch: |-
       - op: add
         path: /template/spec/volumes/-
         value:
           name: viya
           nfs:
             server: myserver.mycompany.com
             path: /path/to/my/location
     target:
       kind: PodTemplate
       labelSelector: launcher.sas.com/job-type=sas-batch-job

    Note: The first transformer defined in the example file deletes the previously defined viya volume specification in the associated podTemplates and the second transformer in the example file adds the viya volume you defined. Any content that may exist in the current viya volume is not affected by these transformers.

  3. After you edit the change-batch-server-viya-volume-storage-class.yaml file, add it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml) before the required transformers.yaml.

    Note: If the $deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml transformers file is also being used in the base kustomization.yaml file, ensure the Batch Server transformers file is located after the entry for the change-viya-volume-storage-class.yaml patch. Otherwise the Batch Server patch will have no effect.

    Here is an example assuming the file has been saved to $deploy/site-config:

    transformers:
    ...
    <...other transformers...>
    < site-config/change-viya-volume-storage-class.yaml if used>
    - site-config/change-batch-server-viya-volume-storage-class.yaml
    - sas-bases/overlays/required/transformers.yaml
    ...

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

Controlling User Access to the SET= System Option

Overview

This document describes the customizations that can be made by the Kubernetes administrator for controlling the access a user has to change environment variables by way of the SET= System Option.

The SAS language includes the SET= System Option, which allows the user to define or change the value of an environment variable in the session that the user is working in. However, an administrator might want to limit the ability of the user to change certain environment variables. The steps described in this README provide the administrator with the ability to block specific variables from being set by the user.

Installation

The list of environment variables that should be blocked for users to change can be modified by using the transformer in the example file located at $deploy/sas-bases/examples/sas-programming-environment/options-set.

  1. Copy the $deploy/sas-bases/examples/sas-programming-environment/options-set/deny-options-set-variables.yaml file to the site-config directory.

  2. To add variables that users should be prevented from changing, replace the {{ OPTIONS-SET-DENY-LIST }} variable in the copied file with the list of environment variables to be protected. Here is an example:

    NOTE: The environment variables _JAVA_OPTIONS, JAVA_TOOL_OPTIONS, JDK_JAVA_OPTIONS, ODBCINST, ODBCINI are blocked out of the box as they pose potential security risks if left unblocked. The transformer in the example overwrites this list, so you must include these environment variables along with additional environment variables that you wish to block in the list.

    ...
    patch : |-
      - op: add
        path: /data/SAS_OPTIONS_SET_DENY_LIST
        value: "_JAVA_OPTIONS JAVA_TOOL_OPTIONS JDK_JAVA_OPTIONS ODBCINST ODBCINI VAR1 VAR2 VAR3"
    ...
  3. After you edit the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example assuming the file has been saved to $deploy/site-config/sas-programming-environment/options-set:

    transformers:
    ...
    - site-config/sas-programming-environment/options-set/deny-options-set-variables.yaml
    ...

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

SAS GPU Reservation Service for SAS Programming Environment

Overview

The SAS GPU Reservation Service aids SAS processes in resource sharing and utilization of the Graphics Processing Units (GPUs) that are available in a Kubernetes pod. The SAS Programming Environment container image makes this service available, but it must be enabled in order to take advantage of the GPUs in your cluster.

Note: The following servers create Kubernetes pods using the SAS Programming Environment container image:

The SAS GPU Reservation Service is available on all supported cloud platforms. In a Microsoft Azure Kubernetes deployment, additional configuration steps are required.

Azure Configuration

If you are deploying the SAS Viya platform on Microsoft Azure, before you enable the SAS Programming Environment to use GPUs, you must configure the Azure Kubernetes Service (AKS) cluster. The compute node pool must be configured with a properly sized N-Series Virtual Machine (VM). The N-Series VMs in Azure have GPU capabilities.

Using Azure CLI or Azure Portal

If the compute node pool already exists, the VM node size cannot be changed. The compute node pool must be deleted and then recreated with the proper VM size and node count by using the following commands.

WARNING: Deleting a node pool on an actively running SAS Viya platform deployment will cause any active sessions to be prematurely terminated. These steps should only be performed on an idle deployment. The node pool can be deleted and recreated using the Azure portal or the Azure CLI.

az aks nodepool delete --cluster-name <replace-with-aks-cluster-name> --name compute --resource-group <replace-with-resource-group>

az aks nodepool add --cluster-name <replace-with-aks-cluster-name> --name compute --resource-group <replace-with-resource-group> --node-count <replace with node count> --node-vm-size "<replace with N-Series VM>" [--zones <replace-with-availability-zone-number>]

Using SAS Viya Infrastructure as Code for Microsoft Azure

SAS Viya 4 Infrastructure as Code (IaC) for Microsoft Azure (viya4-iac-azure) contains Terraform scripts to provision Microsoft Azure Cloud infrastructure resources required to deploy SAS Viya platform products. Edit the terraform.tfvars file and change the machine_type for the compute node pool to an N-Series VM.

node_pools = {
  compute = {
    "machine_type" = "<Change to N-Series VM>"
  ...
  }
},
...

Then verify the compute node pool was created and properly sized.

az aks nodepool list -g <resource-group> --cluster-name <cluster-name> --query '[].{Name:name, vmSize:vmSize}'

Using the NVIDIA Device Plug-In

An additional requirement in a Microsoft Azure environment is that the NVIDIA device plug-in must be installed and configured. Download the example nvidia-device-plugin-ds.yaml manifest from the Microsoft Azure documentation. Then add the following to the tolerations block of the manifest so that the plug-in is scheduled onto the compute node pool.

tolerations:
...
- key: workload.sas.com/class
  operator: Equal
  value: "compute"
  effect: NoSchedule
...

Create the gpu-resources namespace and apply the updated manifest to create the NVIDIA device plug-in DaemonSet.

kubectl create namespace gpu-resources
kubectl apply -f nvidia-device-plugin-ds.yaml

Enable the SAS GPU Reservation Service for SAS Programming Environment

SAS has provided an overlay to enable the SAS GPU Reservation Service for SAS Programming Environment in your environment.

To use the overlay:

  1. Add a reference to the sas-programming-environment/gpu overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/gpu
    - sas-bases/overlays/required/transformers.yaml
    ...

    NOTE: The reference to the sas-programming-environment/gpu overlay MUST come before the required transformers.yaml, as seen in the example above.

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Disabling the SAS GPU Reservation Service for SAS Programming Environment

To disable the SAS GPU Reservation Service:

  1. Remove sas-bases/overlays/sas-programming-environment/gpu from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Configure PostgreSQL

Overview

The default PostgreSQL server (used by most micro-services) in the SAS Viya platform is called “Platform PostgreSQL”. The SAS Viya platform can handle multiple PostgreSQL servers at once, but only specific micro-services use servers besides the default. Consult the documentation for your order to see if you have products that require their own PostgreSQL in addition to the default.

The SAS Viya platform provides two options for your PostgreSQL servers: internal instances provided by SAS or external PostgreSQL that you would like the SAS Viya platform to utilize. Before deploying, you must select which of these options you want to use for your SAS Viya platform deployment. If you follow the instructions in the SAS Viya Platform Deployment Guide, the deployment includes an internal instance of PostgreSQL.

Note: PostgreSQL servers must be all internally managed or all externally managed. SAS does not support mixing internal and external PostgreSQL servers in the same deployment. For information about moving from an internal PostgreSQL server to an external one, see the PostgreSQL Data Transfer Guide.

Installation

Platform PostgreSQL

Platform PostgreSQL is required in the SAS Viya platform.

Go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the resources block of that file, add the following content, including adding the block if it doesn’t already exist:

resources:
- sas-bases/overlays/postgres/platform-postgres

Then, follow the appropriate subsection to continue installing or configuring Platform PostgreSQL as either internally or externally managed.

Internally Managed

Follow the steps in the “Configure Crunchy Data PostgreSQL” README located at $deploy/sas-bases/examples/crunchydata/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_crunchy_data_postgresql.htm (for HTML format).

Externally Managed

Follow the steps in the section “External PostgreSQL Configuration”.

Common Data Store (CDS) PostgreSQL

CDS PostgreSQL is an additional PostgreSQL server that some services in your SAS Viya platform deployment may want to utilize, providing a second database that can be configured separately from the default PostgreSQL server.

Go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the resources block of that file, add the following content, including adding the block if it doesn’t already exist:

resources:
- sas-bases/overlays/postgres/cds-postgres

Then, follow the appropriate subsection to continue installing or configuring CDS PostgreSQL as either internally or externally managed.

Internally Managed

Follow the steps in the “Configure Crunchy Data PostgreSQL” README located at $deploy/sas-bases/examples/crunchydata/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_crunchy_data_postgresql.htm (for HTML format).

Externally Managed

Follow the steps in the section “External PostgreSQL Configuration”.

External PostgreSQL Configuration

External PostgreSQL is configured by modifying the DataServer CustomResource to describe your PostgreSQL server. Follow the steps below separately for each external PostgreSQL server in your SAS Viya platform deployment.

  1. Copy the file $deploy/sas-bases/examples/postgres/postgres-user.env into your $deploy/site-config/postgres/ directory and make it writable:

    chmod +w $deploy/site-config/postgres/postgres-user.env
  2. Rename the copied file to something unique. SAS recommends following the naming convention: {{ POSTGRES-SERVER-NAME }}-user.env. For example, a copy of the file for Platform PostgreSQL might be called platform-postgres-user.env.

    Note: Take note of the name and path of your copied file. This information will be used in a later step.

  3. Adjust the values in your copied file following the in-line comments.

  4. Go to the base kustomization file ($deploy/kustomization.yaml). In the secretGenerator block of that file, add the following content, including adding the block if it doesn’t already exist:

    secretGenerator:
    - name: {{ POSTGRES-USER-SECRET-NAME }}
      envs:
      - {{ POSTGRES-USER-FILE }}
  5. In the added secretGenerator, fill out the user-defined values as follows:

    1. Replace {{ POSTGRES-USER-SECRET-NAME }} with a unique name for the secret. For example, you might use platform-postgres-user if specifying the user for Platform PostgreSQL.

    2. Replace {{ POSTGRES-USER-FILE }} with the path of the file you copied in Step 2. For example, this may be something like site-config/postgres/platform-postgres-user.env.

    Note: Take note of the name you give this secretGenerator. This information will be used in a later step.

  6. Copy the file $deploy/sas-bases/examples/postgres/dataserver-transformer.yaml into your $deploy/site-config/postgres directory and make it writable:

    chmod +w $deploy/site-config/postgres/dataserver-transformer.yaml
  7. Rename the copied file to something unique. SAS recommends following the naming convention: {{ POSTGRES-SERVER-NAME }}-dataserver-transformer.yaml. For example, a copy of the transformer targeting Platform PostgreSQL might be called platform-postgres-dataserver-transformer.yaml, and if you have CDS PostgreSQL, then a copy of the transformer targeting CDS PostgreSQL might be called cds-postgres-dataserver-transformer.yaml.

    Note: Take note of the name and path of your copied file. This information will be used in step 9.

  8. Adjust the values in your copied file following the guidelines in the comments.

  9. In the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml), add references to the files you renamed in step 7; a consolidated sketch combining these entries with the secretGenerator from step 4 appears after these steps. The following example is based on the deployment using a file named platform-postgres-dataserver-transformer.yaml for the Platform PostgreSQL instance:

    transformers:
    - site-config/postgres/platform-postgres-dataserver-transformer.yaml
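
To illustrate how the pieces fit together, here is a consolidated sketch of the relevant base kustomization.yaml entries, using the example names from this README; substitute the names and paths that you chose in steps 2, 5, and 7:

```yaml
# Sketch only: excerpt of $deploy/kustomization.yaml after completing the steps above
secretGenerator:
- name: platform-postgres-user                        # secret name from step 5
  envs:
  - site-config/postgres/platform-postgres-user.env   # file copied and renamed in steps 1 and 2
transformers:
- site-config/postgres/platform-postgres-dataserver-transformer.yaml   # file referenced in step 9
```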

Setting a Custom Database Name

By default, the SAS Viya platform uses a database named “SharedServices” in each PostgreSQL server.

To set a custom database name, uncomment the surrounding block and replace the {{ DB-NAME }} variable in your copied dataserver-transformer.yaml file(s) with the custom database name.

Note: Do not use "postgres" as the name of your custom database. "postgres" is the default system database for the PostgreSQL server. The Viya Restore utility does not work with "postgres".
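
As an illustration only, after you uncomment the block and replace {{ DB-NAME }}, the relevant part of your copied dataserver-transformer.yaml might resemble the following. The patch path, field names, and database name below are hypothetical; the comments in the example file define the actual structure:

```yaml
# Hypothetical sketch; the real patch structure comes from your copied dataserver-transformer.yaml
patch: |-
  - op: replace
    path: /spec/databases/0/name       # illustrative path, not confirmed by this README
    value: "AnalyticsSharedServices"   # example custom database name (do not use "postgres")
```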

Security Considerations

SAS strongly recommends the use of SSL/TLS to secure data in transit. You should follow the documented best practices provided by your cloud platform provider for securing access to your database using SSL/TLS. Securing your database server with SSL/TLS entails the use of certificates. Upon securing your database server, your cloud platform provider may provide you with a server CA certificate. In order for the SAS Viya platform to connect directly to a secure database server, you must provide the server CA certificate to the SAS Viya platform prior to deployment. Failing to configure the SAS Viya platform to trust the database server CA certificate results in “Connection refused” errors or in communications falling back to insecure modes. For instructions on how to provide CA certificates to the SAS Viya platform, see the section labeled “Incorporating Additional CA Certificates into the SAS Viya Platform Deployment” in the README file at $deploy/sas-bases/examples/security/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_network_security_and_encryption_using_sas_security_certificate_framework.htm (for HTML format).

When using an SQL proxy for database communication, it might be possible to secure database communication in accordance with the cloud platform vendor’s best practices without the need to import your database server CA certificate. Some cloud platforms, such as the Google Cloud Platform, allow the use of a proxy server to connect to the database server indirectly in a manner similar to a VPN tunnel. These platform-provided SQL proxy servers obtain certificates directly from the cloud platform. In this case, a database server CA certificate is obtained automatically by the proxy and you do not need to provide it during deployment. To find out more about SQL proxy connections to the database server, consult your cloud provider’s documentation.

Google Cloud Platform Cloud SQL for PostgreSQL Prerequisites

If you are using Google Cloud SQL for PostgreSQL, the following steps are required for each PostgreSQL server. For example, if you have both a Platform PostgreSQL server and a CDS PostgreSQL server, then you need a separate sql-proxy for each server.

  1. Copy the file $deploy/sas-bases/examples/postgres/cloud-sql-proxy.yaml to your $deploy/site-config/postgres/ directory and make it writable:

    chmod +w $deploy/site-config/postgres/cloud-sql-proxy.yaml
  2. Rename the copied file to something unique. SAS recommends following the naming convention: {{ POSTGRES-SERVER-NAME }}-cloud-sql-proxy.yaml. For example, a copy of the transformer targeting Platform PostgreSQL might be called platform-postgres-cloud-sql-proxy.yaml, and if you have CDS PostgreSQL, then a copy of the transformer targeting CDS PostgreSQL might be called cds-postgres-cloud-sql-proxy.yaml.

    Note: Take note of the name and path of your copied file. This information will be used in step 4.

  3. Adjust the values in your copied file following the guidelines in the file’s comments.

  4. In the resources block of the base kustomization.yaml ($deploy/kustomization.yaml), add references to the files you renamed in step 2. The following example is based on the deployment using a file named platform-postgres-cloud-sql-proxy.yaml:

    resources:
    - site-config/postgres/platform-postgres-cloud-sql-proxy.yaml
  5. The Google Cloud SQL Auth Proxy requires a Google Service Account Key. It retrieves this key from a Kubernetes Secret. To create this secret you must place the Service Account Key required by the Google sql-proxy in the file $deploy/site-config/postgres/ServiceAccountKey.json (in JSON format).

  6. Go to the base kustomization file ($deploy/kustomization.yaml). In the secretGenerator block of that file, add the following content, including adding the block if it doesn’t already exist:

    secretGenerator:
    - name: sql-proxy-serviceaccountkey
      files:
      - credentials.json=site-config/postgres/ServiceAccountKey.json
  7. The file $deploy/sas-bases/overlays/postgres/external-postgres/gcp-tls-transformer.yaml allows database clients and the sql-proxy pod to communicate in clear text. This transformer must be added after all other security transformers.

    transformers:
    ...
    - sas-bases/overlays/postgres/external-postgres/gcp-tls-transformer.yaml

DataServer CustomResource

You can add PostgreSQL servers to the SAS Viya platform via the DataServer.webinfdsvr.sas.com CustomResource. This CustomResource is used to inform the SAS Viya platform of the location and credentials for PostgreSQL servers. DataServers can be configured to reference either internally managed Crunchy Data PostgreSQL clusters or externally managed PostgreSQL servers.

Note: DataServer CustomResources will not provision PostgreSQL servers on your behalf.

To view the DataServer CustomResources in your SAS Viya platform deployment, run the following command.

kubectl get dataservers.webinfdsvr.sas.com -n {{ NAME-OF-NAMESPACE }}

Configure Crunchy Data PostgreSQL

Overview

Internally managed instances of PostgreSQL use the PostgreSQL Operator and Containers provided by Crunchy Data behind the scenes to create the PostgreSQL servers.

Prerequisites

Before installing any Crunchy Data components, you should know which PostgreSQL servers are required by your SAS Viya platform order.

Additionally, you should have followed the steps to configure PostgreSQL in the SAS Viya platform described in the “Configure PostgreSQL” README located at $deploy/sas-bases/examples/postgres/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_postgresql.htm (for HTML format).

Installation

You must install the Crunchy Data PostgreSQL Operator in conjunction with specific PostgreSQL servers.

To install the PostgreSQL Operator, go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the resources block of that file, add the following content, including adding the block if it doesn’t already exist:

resources:
- sas-bases/overlays/crunchydata/postgres-operator

Additionally, you must add content to the components block based on whether you are deploying Platform PostgreSQL or CDS PostgreSQL.

Internal Platform PostgreSQL

Go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the components block of that file, add the following content, including adding the block if it doesn’t already exist:

components:
- sas-bases/components/crunchydata/internal-platform-postgres

Note: The internal-platform-postgres entry should be listed before any entries that do not relate to Crunchy Data.

Internal Common Data Store (CDS) PostgreSQL

Go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the components block of that file, add the following content, including adding the block if it doesn’t already exist:

components:
- sas-bases/components/crunchydata/internal-cds-postgres

Note: The internal-cds-postgres entry should be listed before any entries that do not relate to Crunchy Data.

Examples

Crunchy Data supports many PostgreSQL features and configurations. Here are the supported options:

Configuration Settings for PostgreSQL Database Tuning

Overview

PostgreSQL is highly configurable, allowing you to tune the server(s) to meet expected workloads. This README describes how to tune and adjust the configuration for your PostgreSQL clusters. The transformers in $deploy/sas-bases/examples/crunchydata/tuning/ are listed here with a description of the purpose of each:

- crunchy-tuning-connection-params-transformer.yaml: Change PostgreSQL connection parameters
- crunchy-tuning-log-params-transformer.yaml: Change PostgreSQL log parameters
- crunchy-tuning-patroni-params-transformer.yaml: Change Patroni parameters
- crunchy-tuning-pg-hba-no-tls-transformer.yaml: Set the entry for the pg_hba.conf file to disable TLS

Installation

  1. Copy the transformer file (for example, $deploy/sas-bases/examples/crunchydata/tuning/crunchy-tuning-connection-params-transformer.yaml) into your $deploy/site-config/crunchydata/.

  2. Rename the copied file to something unique. For example, the above transformer targeting Platform PostgreSQL could be named as platform-postgres-crunchy-tuning-connection-params-transformer.yaml.

  3. Adjust the values in your copied file using the in-line comments of the file and the directions in “Customize the Configuration Settings” below.

  4. Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml). The following example uses an example transformer file named platform-postgres-crunchy-tuning-connection-params-transformer.yaml:

    transformers:
    - site-config/crunchydata/platform-postgres-crunchy-tuning-connection-params-transformer.yaml

Customize the Configuration Settings

Change PostgreSQL Configuration Parameters

To change the PostgreSQL parameters, such as a log filename with a timestamp instead of the name of the week, use the crunchy-tuning-log-params-transformer.yaml file as a sample transformer. You can add, remove, or update log parameters and their values following the pattern shown in the sample file. For the complete list of available PostgreSQL configuration parameters, see PostgreSQL Server Configuration.
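
As a point of reference, a log-parameter patch of this kind typically follows the PatchTransformer pattern used throughout these READMEs. The sketch below is hypothetical (the target, paths, and names in your copied crunchy-tuning-log-params-transformer.yaml may differ) and simply illustrates setting a timestamped log filename:

```yaml
# Hypothetical sketch; follow the pattern shown in your copied sample file
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: platform-postgres-crunchy-tuning-log-params   # illustrative name
patch: |-
  - op: add
    path: /spec/patroni/dynamicConfiguration/postgresql/parameters/log_filename   # illustrative path
    value: "postgresql-%Y-%m-%d_%H%M%S.log"
target:
  kind: PostgresCluster   # the Crunchy Data PostgreSQL cluster resource
```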

PostgreSQL HBA Setting to Disable TLS

Deployments that use non-TLS or Front-Door TLS can use the crunchy-tuning-pg-hba-no-tls-transformer.yaml file to make the incoming client connections go through without TLS.

Additional Resources

SAS Viya Platform Deployment Guide

PostgreSQL Client Host-Based Authentication

Configuration Settings for PostgreSQL Replicas Count

Overview

PostgreSQL High Availability (HA) cluster deployments have one primary database node and one or more standby database nodes. Data is replicated from the primary node to the standby node(s). In Kubernetes, a standby node is referred to as a replica. This README describes how to configure the number of replicas in a PostgreSQL HA cluster.

Installation

  1. Copy the file $deploy/sas-bases/examples/crunchydata/replicas/crunchy-replicas-transformer.yaml into your $deploy/site-config/crunchydata/ directory.

  2. Adjust the values in your copied file following the in-line comments (an illustrative sketch of this kind of patch appears after these steps).

  3. Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml), including adding the block if it doesn’t already exist:

    transformers:
    - site-config/crunchydata/crunchy-replicas-transformer.yaml
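
The following is a hypothetical sketch of the kind of replica-count patch that this transformer applies; the actual contents and variable names are defined by the example file and its in-line comments:

```yaml
# Hypothetical sketch; defer to the in-line comments in your copied file
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: crunchy-replicas               # illustrative name
patch: |-
  - op: replace
    path: /spec/instances/0/replicas   # illustrative path into the PostgresCluster spec
    value: 3                           # total instances: one primary plus two replicas, for example
target:
  kind: PostgresCluster
```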

Additional Resources

For more information, see SAS Viya Platform Deployment Guide.

Configuration Settings for Crunchy Data pgBackRest Utility

Overview

PostgreSQL backups play a vital role in disaster recovery. Automatically scheduled backups and backup retention policies prevent unnecessary storage accumulation and further support disaster recovery. SAS installs Crunchy Data PostgreSQL servers with automatically scheduled backups and a retention policy. This README describes how to change the configuration settings of these backups.

Note: The backup settings here are for the internal Crunchy Data pgBackRest utility, not for SAS Viya backup and restore utility.

Installation

  1. Copy the file $deploy/sas-bases/examples/crunchydata/backups/crunchy-pgbackrest-backup-config-transformer.yaml into your $deploy/site-config/crunchydata/ directory.

  2. Adjust the values in your copied file following the in-line comments.

  3. Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml), including adding the block if it doesn’t already exist:

    transformers:
    - site-config/crunchydata/crunchy-pgbackrest-backup-config-transformer.yaml

Note: Avoid scheduling backups during times when the environment might be shut down, such as Saturday or Sunday if you regularly scale down your Kubernetes cluster on weekends.

Additional Resources

For more information about deployment, see SAS Viya Platform Deployment Guide.

For more information about pgBackRest, see pgBackRest User Guide and pgBackRest Command Reference.

Configuration Settings for PostgreSQL Storage

Overview

PostgreSQL data is stored inside Kubernetes PersistentVolumeClaims (PVCs). This README describes how to adjust PostgreSQL PVC settings such as size and storage classes.

Important: Changing the storage class for PostgreSQL PVCs after the initial SAS Viya platform deployment must follow the process described in Change the Storage Class of the Data Pod. Changing the access mode is not allowed after the initial SAS Viya platform deployment; the only supported access mode is ReadWriteOnce (RWO), and the access mode setting is a placeholder for future use.

Installation

  1. Copy the file $deploy/sas-bases/examples/crunchydata/storage/crunchy-storage-transformer.yaml into your $deploy/site-config/crunchydata/ directory.

  2. Rename the copied file to something unique. SAS recommends following the naming convention {{ CLUSTER-NAME }}-crunchy-storage-transformer.yaml. For example, a copy of the transformer targeting Platform PostgreSQL could be named platform-postgres-crunchy-storage-transformer.yaml.

  3. Adjust the values in your copied file following the in-line comments (an illustrative sketch of this kind of patch appears after these steps).

  4. Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml), including adding the block if it doesn’t already exist. The following example shows the content based on a file named platform-postgres-crunchy-storage-transformer.yaml:

    transformers:
    - site-config/crunchydata/platform-postgres-crunchy-storage-transformer.yaml
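
For orientation, a storage patch of this kind might resemble the hypothetical sketch below; the true structure, variable names, and supported values come from the example file and its in-line comments:

```yaml
# Hypothetical sketch; defer to the in-line comments in your copied file
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: platform-postgres-crunchy-storage   # illustrative name
patch: |-
  - op: replace
    path: /spec/instances/0/dataVolumeClaimSpec/storageClassName   # illustrative path
    value: managed-premium                                         # example storage class
  - op: replace
    path: /spec/instances/0/dataVolumeClaimSpec/resources/requests/storage   # illustrative path
    value: 128Gi
target:
  kind: PostgresCluster
```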

For reference, SAS uses the following default values:

Additional Resources

For more information, see SAS Viya Platform Deployment Guide.

Configuration Settings for PostgreSQL Pod Resources

Overview

This README describes how to adjust the CPU and memory usage of the PostgreSQL-related pods. The minimum for each of these values is set by its request, and the maximum is set by its limit.

Installation

  1. Copy the file $deploy/sas-bases/examples/crunchydata/pod-resources/crunchy-pod-resources-transformer.yaml into your $deploy/site-config/crunchydata/ directory.

  2. Adjust the values in your copied file following the in-line comments. As a point of reference, the SAS defaults are as follows:

    # PostgreSQL values
    requests:
      cpu: 150m
      memory: 2Gi
    limits:
      cpu: 8000m
      memory: 8Gi
    
    # pgBackrest values
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 500Mi
  3. Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml), including adding the block if it doesn’t already exist:

    transformers:
    - site-config/crunchydata/crunchy-pod-resources-transformer.yaml

Additional Resources

For more information, see SAS Viya Platform Deployment Guide.

For more information about Pod CPU resource configuration, see the Kubernetes documentation.

For more information about Pod memory resource configuration, see the Kubernetes documentation.

Configuration Settings for Arke

Overview

Arke is a message broker proxy that sits between all services and RabbitMQ. This README file describes the settings available for deploying Arke.

Installation

Based on the following description of the available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.

Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ MEMORY-LIMIT }}. Replace the entire variable string, including the braces, with the value you want to use.

After you have edited the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example using the Arke transformers:

transformers:
...
- site-config/arke/arke-modify-cpu.yaml
- site-config/arke/arke-modify-memory.yaml
- site-config/arke/arke-modify-hpa-replicas.yaml

Examples

The example files are located at $deploy/sas-bases/examples/arke. The following list contains a description of each example file for Arke settings and the file names.

Configuration Settings for RabbitMQ

Overview

This README file describes the settings available for deploying RabbitMQ.

Installation

Based on the following description of the available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.

Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ NUMBER-OF-NODES }}. Replace the entire variable string, including the braces, with the value you want to use.

After you have edited the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example using the RabbitMQ nodes transformer:

transformers:
...
- site-config/rabbitmq/configuration/rabbitmq-node-count.yaml

Examples

The example files are located at $deploy/sas-bases/examples/rabbitmq/configuration. The following list contains a description of each example file for RabbitMQ settings and the file names.

Note: The default number of nodes is 3. SAS recommends a node count that is odd such as 1, 3, or 5.

Note: The default memory limit is 8Gi which may not be sufficient under some workloads. If the RabbitMQ pods are restarting on their own or if you notice memory usage above 4Gi, then you should increase the memory limit. RabbitMQ requires the additional 4Gi for garbage collection.

Note: You must delete the RabbitMQ statefulset and PVCs before applying the PVC size change. Use the following procedure:

  1. Delete the RabbitMQ statefulset.

    kubectl -n <name-of-namespace> delete statefulset sas-rabbitmq-server
  2. Wait for all of the pods to terminate before deleting the PVCs. You can check the status of the RabbitMQ pods with the following command:

    kubectl -n <name-of-namespace> get pods -l app.kubernetes.io/name=sas-rabbitmq-server
  3. When no pods are listed as output for the command in step 2, delete the PVCs:

    kubectl -n <name-of-namespace> delete pvc -l app.kubernetes.io/name=sas-rabbitmq-server

  4. (Optional) Enable access to the RabbitMQ Management UI (rabbitmq-enable-management-ui.yaml).

Note: SAS does not recommend leaving the RabbitMQ Management UI enabled. However, the rabbitmq-enable-management-ui.yaml file can be used for that purpose. SAS does not recommend adding it to the base kustomization.yaml file.

Note: Consider the following when you are reducing resources allocated for RabbitMQ:

IMPORTANT: Starving RabbitMQ of CPU, memory, or disk space can cause RabbitMQ to become unstable, affecting the operation of SAS Viya platform.

Configuration Settings for Redis

Overview

Redis is used as a distributed cache for SAS Viya platform services. This README file describes the settings available for deploying Redis.

Installation

The redis-modify-memory.yaml transformer file allows you to change the memory resources for Redis nodes. The default memory request is 90Mi, and the default memory limit is 500Mi. The Redis ‘maxmemory’ setting is set to 90% of the container memory limit. To change those values:

  1. Copy the $deploy/sas-bases/examples/redis/server/redis-modify-memory.yaml file to site-config/redis/server/redis-modify-memory.yaml.

  2. The variables in the copied file are set off by curly braces and spaces, such as {{ MEMORY-LIMIT }}. Replace each variable string, including the braces, with the values you want to use. If you want to use the default for a variable, make no changes to that variable.

  3. After you have edited the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/redis/server/redis-modify-memory.yaml

Configure Python and R Integration with the SAS Viya Platform

Overview

The SAS Viya platform can allow two-way communication between SAS (CAS and Compute engines) and open source environments (Python and R). This README describes the various post-installation steps required to install, configure, and deploy Python and R to enable integration in the SAS Viya platform.

Prerequisites

The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python. Before you use those files, you must perform the following tasks:

  1. Configure persistent storage for your SAS Viya platform deployment such as an NFS server that can be mounted to a persistent volume. For more information, see Storage Requirements.
  2. Configure the ASTORES persistent volume claim for the SAS Micro Analytic Service. For details, see the README file at $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_sas_micro_analytic_service_to_support_analytic_stores.htm (for HTML format).
  3. Provision and deploy the SAS Viya platform (2021.2.2 or later release).
  4. To download and install Python and R, you must have access to the public internet. If your SAS Viya platform environment does not have public internet access, you must first download, install, and configure Python and R onto a separate internet-connected Linux environment. Then you should package up the directories (such as in a tarball) and copy them to the persistent storage available to your SAS Viya platform environment.

Installing and Configuring Python and R

Each of the following numbered sections provides details about installation and configuration steps required to enable various open source integration points.

1. Installation of Python

SAS provides the SAS Configurator for Open Source utility, which automates the download and installation of Python from source by creating and executing the sas-pyconfig job. For details, including the steps to configure one or more Python environments using the SAS Configurator for Open Source, see the README at $deploy/sas-bases/examples/sas-pyconfig/README.md (for Markdown format) or $deploy/sas-bases/doc/sas_configurator_for_open_source_options.htm (for HTML format). The example file $deploy/sas-bases/examples/sas-pyconfig/change-configuration.yaml contains default options that can be run as is or tailored to your environment, including which Python version to install, which collection of Python libraries to install, and whether to install multiple Python environments with different configurations (such as Python libraries or Python versions). Python is installed into a persistent volume that is mounted into the SAS Viya platform pods later (see Step 3: Configure Python and R to Be Visible in the SAS Viya Platform).

SAS recommends that you increase CPU and memory beyond the default values when using the SAS Configurator for Open Source to avoid out-of-memory errors during the installation of Python. See the Resource Management section of the README. Also, per SAS Documentation: Required Updates by Component, #3, you must delete the sas-pyconfig job after successful completion of the Python installation and before deploying a manual update. Otherwise, you will see an error similar to the following:

Job.batch "sas-pyconfig" is invalid: spec.template: Invalid value: ...: field is immutable
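
For example, after the Python installation completes successfully, the job can be removed with a standard kubectl command (the namespace placeholder follows the convention used elsewhere in this guide):

kubectl -n <name-of-namespace> delete job sas-pyconfig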

You might also want to turn off the sas-pyconfig job by setting the global.enabled value to false in the $deploy/site-config/sas-pyconfig/change-configuration.yaml file prior to executing future manual deployments, to prevent a restart of the sas-pyconfig job.

Note that the SAS Configurator for Open Source requires an internet connection. If your SAS Viya platform environment does not have access to the public internet, you will need to download, install, and configure Python on an internet-accessible device and transfer those files to your deployment environment.

2. Installation of R from Source

Install R from source in a persistent volume that will be mounted to the SAS Viya platform pods during Step 3: Configure Python and R to Be Visible in the SAS Viya Platform. After installing R, you should also download and install all desired R packages (for example, by starting an R session and executing the install.packages("my-desired-package") command). Two notes of caution:

  1. Any shared library dependencies should be copied into the R directory that will be mounted to the SAS Viya platform pods. For example, required shared libraries can be copied from /lib/[your-linux-distribution] into /your-R-parent-directory/lib/R/lib within the PVC directory where you install R (/your-R-parent-directory).
  2. During installation of R, some hardcoded paths are pre-configured in the R and Rscript files to point to the installation directory. These hardcoded paths should match the mountPath that you plan to make available in the SAS Viya platform pods. If you configure a mountPath that differs from the installation location, there are at least two approaches available to you:
    • Modify these values in the R and Rscript files.
    • First install R into a temporary directory that matches the mountPath directory (such as /r-mount). You can specify the directory during the configuration of your R installation by setting the --prefix=/{{ your-mountPath }} option (where you replace {{ your-mountPath }} with the desired mountPath in your pods) when running ./configure. Install all R packages within that /r-mount directory, and copy all shared libraries into the subdirectory /r-mount/lib/R/lib. Finally, copy or move the entire contents of {{ your-mountPath }} into your PVC directory of choice.
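
      As an illustrative sketch of this second approach (versions, package names, and library paths are examples only, not requirements):

      # Build R from source so that its hardcoded paths match the planned mountPath (/r-mount)
      ./configure --prefix=/r-mount --enable-R-shlib
      make && make install

      # Install the desired R packages and copy shared library dependencies into place
      /r-mount/bin/Rscript -e 'install.packages("data.table", repos="https://cloud.r-project.org")'
      cp /lib/[your-linux-distribution]/libreadline.so.* /r-mount/lib/R/lib/

      # Finally, copy or move the entire contents of /r-mount into your PVC directory of choice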

If your SAS Viya platform environment does not have access to the public internet, you will need to download, install, and configure R on an internet-accessible device and transfer those files to your deployment environment.

3. Configure Python and R to Be Visible in the SAS Viya Platform

Add NFS mounts for Python and R directories. Now that Python and R are installed on your persistent storage, you need to mount those directories so that they are available to the SAS Viya platform pods. Do this by copying transformers for Python and R from the $deploy/sas-bases/examples/sas-open-source-config/python and $deploy/sas-bases/examples/sas-open-source-config/r directories into your $deploy/site-config/sas-open-source-config Python and R directories. For details, refer to the following two READMEs:

  • Python: $deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_python_for_sas_viya_using_a_kubernetes_persistent_volume.htm (for HTML format)
  • R: $deploy/sas-bases/examples/sas-open-source-config/r/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_r_for_sas_viya.htm (for HTML format)

This step makes the installed software visible to the SAS Viya platform pods. You must enable lockdown access methods (for Python) and configure the SAS Viya platform to connect to your open-source packages (both Python and R) to enable users to connect to R or Python from within a SAS Viya platform GUI.

4. Enable Lockdown Access Methods

This step opens up communication between Python or R, and the SAS Viya platform. You will need to enable python and python_embed methods for most, if not all, Python integration points; the socket method is also required to enable PROC Python and the Python Code Editor. For details, see $deploy/sas-bases/examples/sas-programming-environment/lockdown/README.md.

5. Configure the SAS Viya Platform to Connect to Python and R

These steps tell the SAS Viya platform how to connect to the Python and R binaries that you installed in the mounted directories. For details, see:

  • $deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_python_for_sas_viya_using_a_kubernetes_persistent_volume.htm (for HTML format)
  • $deploy/sas-bases/examples/sas-open-source-config/r/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_r_for_sas_viya.htm (for HTML format)

Following the steps in these two READMEs, you will update the Python- and R-specific kustomization.yaml files (in their respective folders within $deploy/site-config/sas-open-source-config) to replace the {{ }} placeholders with your installation’s details (for example, RHOME path pointing to the parent directory where R is mounted). These kustomization files create environment variables that are made available in the SAS Viya platform pods. These new environment variables tell the SAS Viya platform where to look for the Python and R executables and associated libraries.

If you have licensed SAS/IML, you also need to create two new environment variables to enable R to be called by PROC IML in a SAS Program (for details, see SAS Documentation on the RLANG system option):

  1. R_HOME must point to {{ r-parent-directory }}/lib/R within your mounted R directory (for example, /r-mount/lib/R if R is mounted to /r-mount).
  2. The SASV9_OPTIONS environment variable must be set to =-RLANG

You can automate the creation of these two environment variables by adding them to $deploy/site-config/sas-open-source-config/r/kustomization.yaml, or after deploying your updates by adding them within the SAS Environment Manager GUI.

For both Python and R, you also need to create a single new XML file with the “External languages settings”. This is required for FCMP and PROC TSMODEL’s EXTLANG package.

6. Configure External Access to CAS

By default, CAS resources can be accessed by Python and R from within the cluster, but not external to the cluster. To access CAS resources outside the cluster (such as from an existing JupyterHub deployment elsewhere or from a desktop installation of R-Studio), additional configuration steps are required to enable binary (recommended) access. For details, see the README at $deploy/sas-bases/examples/cas/configure/README.md (for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm (for HTML format). See also SAS Viya Platform Operations: Configure External Access to CAS.

External connections to the SAS Viya platform, including CAS, can be made using resources that SAS provides for developers, open-source programmers, and system administrators who want to leverage or manage the computational capabilities of the SAS Viya platform but from open-source coding interfaces. See the SAS Developer Home page for up-to-date information about the different collections of resources, such as code libraries and APIs for building apps with SAS, SAS Viya Platform and CAS REST APIs, and end-to-end example API use cases.

7. Configure SAS Model Repository Service for Python and R Models

The SAS Viya platform must be configured to enable users to register and publish open-source models in the SAS Viya platform. For details and configuration options, see the "Configure rpy2 for SAS Model Repository Service" README at $deploy/sas-bases/examples/sas-model-repository/r/README.md, which is also included later in this document.

8. Configure Git Integration in SAS Studio

The SAS Viya platform allows direct integration with Git within the SAS Studio interface. Follow the steps outlined in the following resources:

The configuration properties can be edited within the SAS Environment Manager console, or by using the SAS Viya Platform Command Line Interface tool’s Configuration plug-in.

Additional Resources

The following links were referenced in this README or provide further useful information:

Table of Capabilities and READMEs

The following table maps each specific open-source integration point to the relevant resource(s) containing details about configuring that specific integration point.

README PROC Python PROC FCMP (Python) PROC IML (R) Open Source Code Node (Python) Open Source Code Node (R) EXTLANG Package (Python) EXTLANG Package (R) SWAT (Python & R)
Python configuration x x x x
R configuration x x x
Lockdown methods x x x x
External access to CAS x

Python configuration: see the README at $deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_sas_viya_using_a_kubernetes_persistent_volume.htm (for HTML format).

R configuration: see the README at $deploy/sas-bases/examples/sas-open-source-config/r/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_r_for_sas_viya.htm (for HTML format).

Lockdown methods: See the README at $deploy/sas-bases/examples/sas-programming-environment/lockdown/README.md (for Markdown format) or at $deploy/sas-bases/docs/lockdown_settings_for_the_sas_programming_environment.htm (for HTML format).

External access to CAS: See the README at $deploy/sas-bases/examples/cas/configure/README.md (for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm (for HTML format).

Configure Python for the SAS Viya Platform Using a Kubernetes Persistent Volume

Overview

The SAS Viya platform can use a customer-prepared environment consisting of a Python installation and any required packages stored on a Kubernetes PersistentVolume. This README describes how to make that volume available to your deployment.

SAS provides a utility, SAS Configurator for Open Source, that facilitates the download and management of Python from source and partially automates the steps to integrate Python with the SAS Viya platform. SAS recommends that you use this utility.

For comprehensive documentation related to the configuration of open-source language integration, including the use of SAS Configurator for Open Source, see SAS Viya Platform: Integration with External Languages.

Note: The examples provided in this README are appropriate for a manual deployment of Python integration. For a deployment that uses SAS Configurator for Open Source, consult SAS Viya Platform: Integration with External Languages.

Prerequisites

The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python. Before you use those files, you must perform the following tasks:

  1. Make note of the attributes for the volume where Python and the associated packages are to be deployed. For example, note the server and directory for NFS. For more information about various types of PersistentVolumes in Kubernetes, see Additional Resources. If you are deploying on a Red Hat OpenShift cluster, you may need to define permissions to the service account for the volume that you mount for Python. For more information about installing the service account overlay, refer to the README file at $deploy/sas-bases/overlays/sas-microanalytic-score/service-account/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_sas_micro_analytic_service_to_add_service_account.htm (for HTML format).

  2. Install Python and any necessary packages on the volume.

  3. In addition to the volume attributes, you must have the following information:

    • {{ PYTHON-EXECUTABLE }} - the name of the Python executable file (for example, python or python3.8)
    • {{ PYTHON-EXE-DIR }} - the directory or partial path (relative to the mount) containing the executable (for example, /bin or /virt_environs/envron_dm1/bin). Note that the mount point for your Python deployment should be its top-level directory.
    • {{ SAS-EXTLANG-SETTINGS-XML-FILE }} - configuration file for enabling Python and R integration in CAS. This is only required if you are using Python with CMP or the EXTLANG package.
    • {{ SAS-EXT-LLP-PYTHON-PATH }} - list of directories to look for when searching for run-time shared libraries (similar to LD_LIBRARY_PATH)
  4. The Python overlay for sas-microanalytic-score uses a Persistent Volume named astores-volume, which is defined in the astores overlay. The Python and astore overlays are usually installed together. If you choose to install the python overlay only, you still need to install the astores overlay as well. For more information on installing the astores overlay, refer to the “Configure SAS Micro Analytic Service to Support Analytic Stores” README file at $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_sas_micro_analytic_service_to_support_analytic_stores.htm (for HTML format).

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-open-source-config/python directory to the $deploy/site-config/sas-open-source-config/python directory. Create the destination directory, if it does not already exist.

    Note: If the destination directory already exists, verify whether the overlay has been applied by running the command described in Verify Overlay for Python Volume. If the output contains the /python mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters to use a different Python environment.

  2. The kustomization.yaml file defines all the necessary environment variables. Replace all tags, such as {{ PYTHON-EXE-DIR }}, with the values that you gathered in the Prerequisites step. Then, set the following parameters, according to the SAS products you will be using:

    • MAS_PYPATH and MAS_M2PATH are used by SAS Micro Analytic Service.
    • PROC_PYPATH and PROC_M2PATH are used by PROC PYTHON in the Compute Server. PROC_M2PATH defaults to the correct location in the install, so it is not required to be provided in the kustomization.yaml. However, the example file shows the correct path as the value.
    • DM_PYPATH is used by the Open Source Code node in SAS Visual Data Mining and Machine Learning. You can add DM_PYPATH2, DM_PYPATH3, DM_PYPATH4 and DM_PYPATH5 if you need to specify multiple Python environments. The Open Source Code node allows you to choose which of these five environment variables to use during execution.
    • SAS_EXTLANG_SETTINGS is used by applications that run Python and R code on CAS. This includes PROC FCMP and the Time Series External Languages (EXTLANG) package. SAS_EXTLANG_SETTINGS should only be set in one example file; for example, if you set it in the Python example, you should not set it in the R example. SAS_EXTLANG_SETTINGS should point to an XML file that is readable by all users. The path can be in the same volume that contains the R environment or in any other volume that is accessible to CAS. Refer to the documentation for the Time Series External Languages (EXTLANG) package for details on the expected XML schema.
    • SAS_EXT_LLP_PYTHON is used when the base distribution or packages for open-source software require additional run-time libraries that are not part of the shipped container image.
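
    For example, after replacing the tags, the relevant portion of the kustomization.yaml file might look like the following sketch. It assumes Python is mounted at /python with its executable at bin/python3.8, and that the file generates the sas-open-source-config-python configMap referenced later in this README; all values are illustrative:

    configMapGenerator:
    - name: sas-open-source-config-python
      literals:
      - MAS_PYPATH=/python/bin/python3.8
      - PROC_PYPATH=/python/bin/python3.8
      - DM_PYPATH=/python/bin/python3.8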

    Note: Any environment variables that you define in this example will be set on all pods, although they might not have an effect. For example, setting MAS_PYPATH will not affect the Python executable used by the EXTLANG package. That executable is set in the SAS_EXTLANG_SETTINGS file. However, if you define $MAS_PYPATH you can then use it in the SAS_EXTLANG_SETTINGS file. For example,

    <LANGUAGE name="PYTHON3" interpreter="$MAS_PYPATH"></LANGUAGE>

  3. Attach storage to your SAS Viya platform deployment. The python-transformer.yaml file uses PatchTransformers in Kustomize to attach the volume containing your Python installation to the SAS Viya platform. Replace {{ VOLUME-ATTRIBUTES }} with the appropriate volume specification.

    For example, when using an NFS mount, the {{ VOLUME-ATTRIBUTES }} tag should be replaced with nfs: {path: /vol/python, server: myserver.sas.com} where myserver.sas.com is the NFS server and /vol/python is the NFS path you recorded in the Prerequisites step.

    The relevant code excerpt from python-transformer.yaml file before the change:

    patch: |-
    # Add Python Volume
      - op: add
        path: /spec/template/spec/volumes/-
        value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }

    The relevant code excerpt from python-transformer.yaml file after the change:

    patch: |-
    # Add Python Volume
      - op: add
        path: /spec/template/spec/volumes/-
        value: { name: python-volume, nfs: {path: /vol/python, server: myserver.sas.com} }
  4. Also in the python-transformer.yaml file, there is a PatchTransformer called sas-python-sas-java-policy-allow-list. This PatchTransformer sets paths to the Python executable so that the SAS runtime allows execution of the Python code. Replace the {{ PYTHON-EXE-DIR }} and {{ PYTHON-EXECUTABLE }} tags with the appropriate values. If you are specifying multiple Python environments, set each of them here. Here is an example:

    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: add-python-sas-java-policy-allow-list
    patch: |-
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH
        value: /python/python3/bin/python3.8
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH2
        value: /python/python2/bin/python2.7
    target:
      kind: ConfigMap
      name: sas-programming-environment-java-policy-config
  5. Python runs in a separate container in the sas-microanalytic-score pod. Default resource limits are defined for the Python container in the python-transformer.yaml file. Depending upon your application requirements, the CPU and memory values can be modified in the resources section of that file.

    ```yaml
     command: ["$(MAS_PYPATH)", "$(MAS_M2PATH)"]
     envFrom:
     - configMapRef:
         name: sas-open-source-config-python
     - configMapRef:
         name: sas-open-source-config-python-mas
     resources:
       requests:
         memory: 50Mi
         cpu: 50m
       limits:
         memory: 500Mi
         cpu: 500m
    ```
    
  6. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-open-source-config/python to the resources block.
    • Add site-config/sas-open-source-config/python/python-transformer.yaml to the transformers block before the sas-bases/overlays/required/transformers.yaml.

    Here is an example:

      resources:
      - site-config/sas-open-source-config/python
    
      transformers:
      ...
      - site-config/sas-open-source-config/python/python-transformer.yaml
      - sas-bases/overlays/required/transformers.yaml
  7. The Process Orchestration feature requires additional tasks to configure Python. If your deployment includes the Process Orchestration feature, then perform the steps in the README located at $deploy/sas-bases/examples/sas-airflow/python/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_process_orchestration.htm (for HTML format).

    Note: If you are not certain if your deployment includes Process Orchestration, look at the directory path for the README described above. If the README is present, then Process Orchestration is included in your deployment. If the README is not present, Process Orchestration is not included in the deployment, and you should go to the next step.

  8. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

    All affected pods, except the CAS Server pod, are automatically restarted when the overlay is applied. If the overlay is applied after the initial deployment, the CAS Server might need an explicit restart. For information, see Restart CAS Server.

Verify Overlay for Python Volume

  1. Run the following command to verify whether the overlay has been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts:
      /python (r)

Additional Resources

Configure Python for the SAS Viya Platform Using a Docker Image

Overview

The SAS Viya platform can use a customer-prepared environment consisting of a Python installation (and any required packages) that are stored on a Kubernetes PersistentVolume or a Docker image. This README describes how to make a Docker image that contains a Python installation available to your deployment.

Note: Python can be used by the Micro Analytic Score service, Cloud Analytic Services (CAS) and the Compute service. However, accessing Python via a Docker image is currently available as an option only for the Micro Analytic Score service. Therefore, if you use this method and you require Python for CAS or the Compute Server, a Python distribution must also be available via a Kubernetes persistent volume.

Prerequisites

Because Python from a Docker image can currently be used only by the Micro Analytic Score service, make sure that the Python environment in the Docker image is also available on a mounted volume for any other pods that require it. The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python. Before you use those files, you must perform the following tasks:

  1. Prepare the Python Docker image with all the necessary Python packages that you will be using. Make note of the Python image URL in the Docker registry (the {{ PYTHON-DOCKER-IMAGE-URL }} parameter in python-image-transformer.yaml) and the configuration settings for accessing the registry with the Python image (the {{ DOCKER-REGISTRY-CONFIG }} parameter in kustomization.yaml).

    Here is a sample Docker registry configuration setting:

    {"auths": {"registry.company.com": {"username": "myusername","password": "mypassword","email":"[email protected]","auth":"< mysername:mypassword in base64 encoded form>"}}}

    For more information about Python image preparation and registry configuration settings, see Additional Resources.

  2. Make note of the attributes for the volume where Python and the associated packages are to be deployed. For example, note the server and directory for NFS. For more information about various types of PersistentVolumes in Kubernetes, see Additional Resources.

  3. Install Python and any necessary packages on the volume.

  4. In addition to the volume attributes, you must have the following information:

    • {{ PYTHON-IMAGE-EXECUTABLE }} - the name of the Python executable file (for example, python or python3.8) in the Python image.
    • {{ PYTHON-IMAGE-EXE-DIR }} - the directory (relative to the root) that contains the executable (for example, /bin).
    • {{ PYTHON-EXECUTABLE }} - the name of the Python executable file (for example, python or python3.8) in the Python mount.
    • {{ PYTHON-EXE-DIR }} - the directory or partial path (relative to the mount) that contains the executable (for example, /bin or /virt_environs/envron_dm1/bin). Note that the mount point for your Python deployment should be its top-level directory.
    • {{ SAS-EXTLANG-SETTINGS-XML-FILE }} - the configuration file used to enable Python and R integration in CAS. This is required only if you are using Python with CMP or the EXTLANG package.
    • {{ SAS-EXT-LLP-PYTHON-PATH }} - the list of directories to look for when searching for run-time shared libraries (similar to LD_LIBRARY_PATH).

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-open-source-config/python directory to the $deploy/site-config/sas-open-source-config/python directory. Create the destination directory, if it does not already exist.

    Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /mas2py mount directory path, you do not need to take any further action unless you want to change the overlay parameters to use a different Python environment.

  2. Use the kustomization.yaml file to define the necessary environment variables. Replace all tags, such as {{ PYTHON-EXE-DIR }}, with the values that you gathered in the Prerequisites step. Then set the following parameters according to the SAS products that you will be using:

    • MAS_PYPATH and MAS_M2PATH are used by SAS Micro Analytic Service.
    • PROC_PYPATH and PROC_M2PATH are used by PROC PYTHON in the Compute Server. PROC_M2PATH defaults to the correct location in the install, so it is not required to be provided in the kustomization.yaml. However, the example file shows the correct path as the value.
    • DM_PYPATH is used by the Open Source Code node in SAS Visual Data Mining and Machine Learning. You can add DM_PYPATH2, DM_PYPATH3, DM_PYPATH4 and DM_PYPATH5 if you need to specify multiple Python environments. The Open Source Code node allows you to choose which of these five environment variables to use during execution.
    • SAS_EXTLANG_SETTINGS is used by applications that run Python and R code on CAS. This includes PROC FCMP and the Time Series External Languages (EXTLANG) package. SAS_EXTLANG_SETTINGS should be set in only one example file; for example, if you set it in the Python example, you should not set it in the R example. SAS_EXTLANG_SETTINGS should point to an XML file that is readable by all users. The path can be in the same volume that contains the R environment or in any other volume that is accessible to CAS. Refer to the documentation for the Time Series External Languages (EXTLANG) package for details about the expected XML schema.
    • SAS_EXT_LLP_PYTHON is used when the base distribution or packages for open-source software require additional run-time libraries that are not part of the shipped container image.

    Note: Any environment variables that you define in this example will be set on all pods, although they might not have an effect. For example, setting MAS_PYPATH will not affect the Python executable used by the EXTLANG package. That executable is set in the SAS_EXTLANG_SETTINGS file. However, if you define $MAS_PYPATH you can then use it in the SAS_EXTLANG_SETTINGS file. Here is an example:

    <LANGUAGE name="PYTHON3" interpreter="$MAS_PYPATH"></LANGUAGE>

  3. Attach storage to your SAS Viya platform deployment. The python-image-transformer.yaml file uses PatchTransformers in Kustomize to attach the Python installation volume to the SAS Viya platform. Replace {{ VOLUME-ATTRIBUTES }} with the appropriate volume specification.

    For example, when using an NFS mount, the {{ VOLUME-ATTRIBUTES }} tag should be replaced with nfs: {path: /vol/python, server: myserver.sas.com} where myserver.sas.com is the NFS server and /vol/python is the NFS path that you recorded in the Prerequisites step.

    Here is the relevant code excerpt from the python-image-transformer.yaml file before the change:

    patch: |-
    # Add side car Container
      - op: add
        path: /spec/template/spec/containers/-
        value:
          name: viya4-mas-python-runner
          image: {{ PYTHON-DOCKER-IMAGE-URL }}
    patch: |-
    # Add Python Volume
      - op: add
        path: /spec/template/spec/volumes/-
        value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }

    Here is the relevant code excerpt from the python-image-transformer.yaml file after the change:

    patch: |-
    # Add side car Container
      - op: add
        path: /spec/template/spec/containers/-
        value:
          name: viya4-mas-python-runner
          image: registry.company.com/python-env:latest
    patch: |-
    # Add Python Volume
      - op: add
        path: /spec/template/spec/volumes/-
        value: { name: python-volume, nfs: {path: /vol/python, server: myserver.sas.com} }

    Here is the relevant code excerpt from the kustomization.yaml file before the change:

    secretGenerator:
    - name: python-regcred
      type: kubernetes.io/dockerconfigjson
      literals:
      - '.dockerconfigjson={{ DOCKER-REGISTRY-CONFIG }}'

    The relevant code excerpt from the kustomization.yaml file after the change:

    secretGenerator:
    - name: python-regcred
      type: kubernetes.io/dockerconfigjson
      literals:
      - '.dockerconfigjson={"auths": {"registry.company.com": {"username": "myusername","password": "mypassword","email":"[email protected]","auth":"<myusername:mypassword in base64 encoded form>"}}}'

  4. The python-image-transformer.yaml file contains a PatchTransformer called sas-python-sas-java-policy-allow-list. This PatchTransformer sets paths to the Python executable so that the SAS runtime allows execution of the Python code. Replace the {{ PYTHON-EXE-DIR }} and {{ PYTHON-EXECUTABLE }} tags with the appropriate values. If you are specifying multiple Python environments, set each of them here. Here is an example:

    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: add-python-sas-java-policy-allow-list
    patch: |-
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH
        value: /python/python3/bin/python3.8
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH2
        value: /python/python2/bin/python2.7
    target:
      kind: ConfigMap
      name: sas-programming-environment-java-policy-config
  5. Python runs in a separate container in the sas-microanalytic-score pod. Default resource limits are defined for the Python container in the python-image-transformer.yaml file. Depending on your application requirements, the CPU and memory values can be modified in the resources section of that file. Here is an example:

     command: ["$(MAS_PYPATH)", "$(MAS_M2PATH)"]
     envFrom:
     - configMapRef:
         name: sas-open-source-config-python-image-mas
     resources:
       requests:
         memory: 50Mi
         cpu: 50m
       limits:
         memory: 500Mi
         cpu: 500m
  6. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-open-source-config/python-image to the resources block.
    • Add site-config/sas-open-source-config/python-image/python-image-transformer.yaml to the transformers block before the sas-bases/overlays/required/transformers.yaml.

    Here is an example:

     resources:
     - site-config/sas-open-source-config/python-image
    
     transformers:
     ...
     - site-config/sas-open-source-config/python-image/python-image-transformer.yaml
     - sas-bases/overlays/required/transformers.yaml
  7. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

    All affected pods, except the CAS Server pod, are automatically restarted when the overlay is applied. If the overlay is applied after the initial deployment, the CAS Server might need an explicit restart. For information, see Restart CAS Server.

Verify the Overlay for the Python Docker Image

  1. Run the following command to verify whether the overlay has been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts:
      /mas2py

Additional Resources

Configure R for the SAS Viya Platform

Overview

The SAS Viya platform can use a customer-prepared environment consisting of an R installation and any required packages stored on a Kubernetes Persistent Volume. This README describes how to make that volume available to your deployment.

Prerequisites

The SAS Viya platform provides YAML files that the Kustomize tool uses to configure R. Before you use those files, you must perform the following tasks:

  1. Make note of the attributes of the volume where R and the associated packages are to be deployed. For example, note the server and directory for NFS. For more information about various types of persistent volumes in Kubernetes, see Additional Resources.

  2. Install R and any necessary packages on the volume.

  3. In addition to the volume attributes, you must have the following information:

    • {{ R-MOUNTPATH }} - the install path used when R is built excluding top-level directory (for example, /nfs/r-mount).
    • {{ R-HOMEDIR }} - the top-level directory of the R installation on that volume (for example, R-3.6.2).
    • {{ SAS-EXTLANG-SETTINGS-XML-FILE }} - configuration file for enabling Python and R integration in CAS. This is only needed if using R with either CMP or the EXTLANG package.
    • {{ SAS-EXT-LLP-R-PATH }} - list of directories to look for when searching for run-time shared libraries (similar to LD_LIBRARY_PATH).

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-open-source-config/r directory to the $deploy/site-config/sas-open-source-config/r directory. Create the target directory, if it does not already exist.

    Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /nfs/r-mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters to use a different R environment.

  2. The kustomization.yaml file defines all the necessary environment variables. Replace all tags, such as {{ R-HOMEDIR }}, with the values that you gathered in the Prerequisites step. Then, set the following parameters, according to the SAS products that you will be using:

    • DM_RHOME is used by the Open Source Code node in SAS Visual Data Mining and Machine Learning.
    • SAS_EXTLANG_SETTINGS is used by applications that run Python and R code on Cloud Analytic Services (CAS). This includes PROC FCMP and the Time Series External Languages (EXTLANG) package. SAS_EXTLANG_SETTINGS should only be set in one example file; for example, if you set it in the Python example, you should not set it in the R example. SAS_EXTLANG_SETTINGS should point to an XML file that is readable by all users. The path can be in the same volume that contains the R environment or in any other volume that is accessible to CAS. Refer to the documentation for the Time Series External Languages (EXTLANG) package for details about the expected XML schema.
    • SAS_EXT_LLP_R is used when the base distribution or packages for open source software require additional run-time libraries that are not part of the shipped container image.
  3. Attach storage to your SAS Viya platform deployment. The r-transformer.yaml file uses PatchTransformers in kustomize to attach the volume containing your R installation to the SAS Viya platform.

    • Replace {{ VOLUME-ATTRIBUTES }} with the appropriate volume specification. For example, when using an NFS mount, the {{ VOLUME-ATTRIBUTES }} tag should be replaced with nfs: {path: /vol/r-mount, server: myserver.sas.com} where myserver.sas.com is the NFS server and /vol/r-mount is the NFS path that you recorded in the Prerequisites.
    • Replace {{ R-MOUNTPATH }} with the install path used when R is built, excluding top-level directory.

    The relevant code excerpt from r-transformer.yaml file before the change:

    patch: |-
    # Add R Volume
      - op: add
        path: /spec/template/spec/volumes/-
        value: { name: r-volume, {{ VOLUME-ATTRIBUTES }} }
    # Add mount path for R
      - op: add
        path: /template/spec/containers/0/volumeMounts/-
        value:
          name: r-volume
          mountPath: {{ R-MOUNTPATH }}
          readOnly: true

    The relevant code excerpt from r-transformer.yaml file after the change:

    patch: |-
    # Add R Volume
      - op: add
        path: /spec/template/spec/volumes/-
        value: { name: r-volume, nfs: {path: /vol/r-mount, server: myserver.sas.com} }
    # Add mount path for R
      - op: add
        path: /template/spec/containers/0/volumeMounts/-
        value:
          name: r-volume
          mountPath: /nfs/r-mount
          readOnly: true
  4. Also in the r-transformer.yaml file, there is a PatchTransformer called sas-r-sas-java-policy-allow-list. This PatchTransformer sets paths to the R interpreter so that the SAS runtime allows execution of the R script. Replace the {{ R-MOUNTPATH }} and {{ R-HOMEDIR }} tags with the appropriate values. Here is an example:

    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: add-r-sas-java-policy-allow-list
    patch: |-
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_DM_RHOME
        value: /nfs/r-mount/R-3.6.2/bin/Rscript
    target:
      kind: ConfigMap
      name: sas-programming-environment-java-policy-config
  5. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-open-source-config/r to the resources block.
    • Add site-config/sas-open-source-config/r/r-transformer.yaml to the transformers block.

    Here is an example:

    resources:
    - site-config/sas-open-source-config/r
    
    transformers:
    - site-config/sas-open-source-config/r/r-transformer.yaml
  6. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
    

Verify Overlay for R Volume

  1. Run the following command to verify whether the overlay has been applied:

    kubectl describe pod sas-cas-server-default-controller -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts:
      /nfs/r-mount (r)

Additional Resources

Configure rpy2 for SAS Model Repository Service

Overview

The SAS Model Repository service provides support for registering, organizing, and managing models within a common model repository. This service is used by SAS Event Stream Processing, SAS Intelligent Decisioning, SAS Model Manager, Model Studio, SAS Studio, and SAS Visual Analytics.

The Model Repository service also includes support for testing and deploying R models. SAS environments such as CAS and SAS Micro Analytic Service do not support direct execution of R code. Therefore, R models in a SAS environment are executed using Python with the rpy2 package. The rpy2 package enables Python to directly access the R libraries and execute R code.

This README describes how to configure your Python and R environments to use the rpy2 package for executing models.

Prerequisites

The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python and R. Before you use those files, you must perform the following tasks:

Note: For rpy2 to work properly, Python and R must be installed on the same system. They do not have to be mounted in the same volume. However, in order to use the R libraries, Python must have access to the directory that was set for the R_HOME environment variable.

  1. Make note of the attributes for the volumes where Python and R, as well as their associated packages, are to be deployed. For example, for NFS, note the NFS server and directory. For more information about the various types of persistent volumes in Kubernetes, see Additional Resources.

  2. Verify that R 3.4+ is installed on the R volume.

  3. Verify that Python 3.5+ and the requests package are installed on the Python volume.

  4. Verify that the R_HOME environment variable is set.

  5. Verify that rpy2 2.9+ is installed as a Python package.

    Note: For information about the rpy2 package and version compatibilities, see the rpy2 documentation.

  6. Verify that both the Python and R open-source configurations have been completed. For more information, see the README files in $deploy/sas-bases/examples/sas-open-source-config/.
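
For example, you might verify these prerequisites on the system that hosts the volumes with commands like the following (the interpreter path is illustrative, and the R_HOME value matches the example path used later in this README):

export R_HOME=/share/nfsviyar/lib64/R
/python/bin/pip install 'rpy2>=2.9'
/python/bin/python -c "import rpy2; print(rpy2.__version__)"
/python/bin/python -c "import requests; print(requests.__version__)"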

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-model-repository/r directory to the $deploy/site-config/sas-model-repository/r directory. Create the target directory, if it does not already exist.

  2. In rpy2-transformer.yaml replace the {{ R-HOME }} value with the R_HOME directory path. The value for the R_HOME path is the same as the DM_RHOME value in the kustomization.yaml file, which was specified as part of the R open-source configuration. That file is located in $deploy/site-config/sas-open-source-config/r/.

    There are three sections in the rpy2-transformer.yaml file that you must update.

    Here is a sample of one of the sections before the change:

    patch: |-
    # Add R_HOME Path
      - op: add
        path: /template/spec/containers/0/env/-
        value:
          name: R_HOME
          value:  {{ R-HOME }}
    target:
      kind: PodTemplate
      name: sas-launcher-job-config

    Here is a sample of the same section after the change:

    patch: |-
    - op: add
      path: /template/spec/containers/0/env/-
      value:
        name: R_HOME
        value:  /share/nfsviyar/lib64/R
    target:
      kind: PodTemplate
      name: sas-launcher-job-config
  3. In the cas-rpy2-transformer section of the rpy2-transformer.yaml file, update the CASLLP_99_EDMR value, as shown in this example.

    Here is the relevant code excerpt before the change:

    - op: add
      path: /spec/controllerTemplate/spec/containers/0/env/-
      value:
        name: CASLLP_99_EDMR
        value: {{ R-HOME }}/lib

    Here is the relevant code excerpt after the change:

    - op: add
      path: /spec/controllerTemplate/spec/containers/0/env/-
      value:
        name: CASLLP_99_EDMR
        value: /share/nfsviyar/lib64/R/lib
  4. Add site-config/sas-model-repository/r/rpy2-transformer.yaml to the transformers block of the base kustomization.yaml file in the $deploy directory.

    transformers:
      - site-config/sas-model-repository/r/rpy2-transformer.yaml
  5. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Additional Resources

High Availability (HA) in the SAS Viya Platform

Overview

The SAS Viya platform can be deployed as a High Availability (HA) system. In this mode, the SAS Viya platform has redundant stateless and stateful services to handle service outages, such as an errant Kubernetes node.

Enable High Availability

A kustomize transformer enables High Availability (HA) among the stateless microservices in the SAS Viya platform. Stateful services, with the exception of SMP CAS, are enabled for HA at initial deployment.

Add the sas-bases/overlays/scaling/ha/enable-ha-transformer.yaml to the transformers block in your base kustomization.yaml file.

...
transformers:
...
- sas-bases/overlays/scaling/ha/enable-ha-transformer.yaml

After the base kustomization.yaml file is modified, deploy the software using the commands that are described in Deploy the Software.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Single Replica Scaling the SAS Viya Platform

Important: The transformer described in this README can be used to deploy the SAS Viya platform in a mode that is not high availability (HA). A non-HA deployment might be suitable for test environments. However, non-HA deployments are not recommended for production environments.

Overview

The SAS Viya platform deploys stateful components in a High Availability configuration by default. Do not perform these steps on an environment that has already been configured.

This feature triggers outages during updates as the single replica components update.

Installation

A series of kustomize transformers modifies the appropriate SAS Viya platform deployment components to a single replica mode.

Add sas-bases/overlays/scaling/single-replica/transformer.yaml to the transformers block in your base kustomization.yaml file. Here is an example:

...
transformers:
...
- sas-bases/overlays/scaling/single-replica/transformer.yaml

To apply the change, run kustomize build -o site.yaml
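
As a minimal sketch, the build-and-apply sequence might look like the following; see Deploy the Software in SAS Viya Platform: Deployment Guide for the complete procedure and any additional options that apply to your deployment:

kustomize build -o site.yaml
kubectl apply -n <name-of-namespace> -f site.yaml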

Configure Network Security and Encryption Using SAS Security Certificate Framework

Prerequisites

Before reading this document, you should be familiar with the content in SAS® Viya® Platform Encryption: Data in Motion. In addition, you should have made the following decisions:

Configuring the Certificate Generator

Using the openssl Certificate Generator

Because the openssl certificate generator is the default, the absence of references to a certificate generator in your site-config directory will result in openssl being used. No additional steps are required.

Using the cert-manager Certificate Generator

For information about supported versions of cert-manager, see Kubernetes Cluster Requirements.

In order to use the cert-manager certificate generator, it must be correctly configured prior to deploying the SAS Viya platform.

Configure the SAS Viya Platform to Use cert-manager

  1. Create a configMap generator to customize the sas-certframe settings. The steps to create these customizations are located in Configuring Certificate Attributes.

  2. Set the SAS_CERTIFICATE_GENERATOR environment variable to cert-manager in the file you created in step 1. Here is an example:

    ---
    apiVersion: builtin
    kind: ConfigMapGenerator
    metadata:
      name: sas-certframe-user-config
    behavior: merge
    literals:
    - SAS_CERTIFICATE_GENERATOR=cert-manager

Configure cert-manager Issuers to Support the SAS Viya Platform

Cert-manager uses a CA Issuer to create the server identity certificates used by the SAS Viya platform. The cert-manager CA issuer requires an issuing CA certificate. The issuing CA for the issuer is stored in a secret named sas-viya-ca-certificate-secret. Add a reference to the cert-manager issuer to the resources block of the base kustomization.yaml file. Here is an example:

resources:
...
- sas-bases/overlays/cert-manager-issuer

Configuring TLS for the Ingress Controller

Using an IT-Provided Ingress Certificate on Your Ingress Controller

Copy this example file to your /site-config directory, and modify it as described in the comments:

cd $deploy
cp sas-bases/examples/security/customer-provided-ingress-certificate.yaml site-config/security
vi site-config/security/customer-provided-ingress-certificate.yaml

When you have completed your modifications, add the path to this file to the generators block of your $deploy/kustomization.yaml file (see the examples below to add a generators: block if one does not already exist).

generators:
- site-config/security/customer-provided-ingress-certificate.yaml # configures the ingress to use a secret that contains customer-provided certificate and key

Using a Provisional Ingress Controller Certificate

Using the openssl Certificate Generator to Generate the Ingress Controller Certificate

An example of the code that creates an ingress controller certificate and stores it in a secret is provided in the following file:

sas-bases/examples/security/openssl-generated-ingress-certificate.yaml

Copy the example to your /site-config directory and modify it as described in the comments that are included in the code.

cd $deploy
cp sas-bases/examples/security/openssl-generated-ingress-certificate.yaml site-config/security
vi site-config/security/openssl-generated-ingress-certificate.yaml

When you have completed your modifications, add the path to this file to the resources block of your base kustomization.yaml file:

resources:
- site-config/security/openssl-generated-ingress-certificate.yaml # causes openssl to generate an ingress certificate and key and store them in a secret

Using the cert-manager Certificate Generator to Generate the Ingress Controller Certificate

Using the cert-manager Certificate Generator to Generate the ingress-nginx Certificate

To use cert-manager to generate the ingress certificate, add the following path to the transformers block of your base kustomization.yaml file:

transformers:
- sas-bases/overlays/cert-manager-provided-ingress-certificate/ingress-annotation-transformer.yaml # causes cert-manager to generate an ingress certificate and key and store them in a secret

Using the cert-manager Certificate Generator to Generate the OpenShift Ingress Operator Certificates

An example of the code that configures cert-manager to generate the certificate and secret is provided in the following file:

sas-bases/examples/security/cert-manager-pre-created-ingress-certificate.yaml

Copy the example to your /site-config directory and modify it as described in the comments that are included in the code. Note that you will need to know the network DNS alias of your Kubernetes ingress controller.

cd $deploy
cp sas-bases/examples/security/cert-manager-pre-created-ingress-certificate.yaml site-config/security
vi site-config/security/cert-manager-pre-created-ingress-certificate.yaml

When you have completed your modifications, add the path to this file to the resources block of your base kustomization.yaml file:

resources:
- site-config/security/cert-manager-pre-created-ingress-certificate.yaml # causes cert-manager to generate an ingress certificate and key and store them in a secret

Selecting kustomize Components to Enable TLS Modes

Ensure that any of the following TLS components that are added to the components block of the base kustomization.yaml file come after any other SAS-provided components, but before any user-provided components. This ensures that TLS customizations are applied to the fully-formed manifests of individual SAS offerings without conflicting with any customizations applied by the user.
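
For example, an ordered components block might look like the following sketch, where the first entry is one of the SAS-provided TLS components listed below and the second is a hypothetical user-provided component:

components:
- sas-bases/components/security/core/base/full-stack-tls
- site-config/my-custom-component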

Components to Enable Full-Stack TLS Mode

In Full-stack TLS mode, the ingress controller must be configured to decrypt incoming network traffic and re-encrypt traffic before forwarding it to the back-end SAS servers. Network traffic between SAS servers is encrypted in this mode. To enable Full-Stack TLS, include the customization that corresponds to your ingress controller in the components block of the base kustomization.yaml file:

Components to Enable Full-Stack TLS Mode with ingress-nginx

components:
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls

Components to Enable Full-Stack TLS Mode with OpenShift Ingress Operator

components:
- sas-bases/components/security/network/route.openshift.io/route/full-stack-tls

Components to Enable Front-Door TLS Mode

Components to Enable Front-Door TLS Mode with ingress-nginx

components:
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls

Components to Enable Front-Door TLS Mode with OpenShift Ingress Operator

components:
- sas-bases/components/security/network/route.openshift.io/route/front-door-tls

Components to Enable Front-Door TLS Mode for CAS and SAS/CONNECT Spawner

Add this component to your kustomization.yaml to configure the SAS Viya platform for Front-door TLS mode and configure CAS and SAS/CONNECT to encrypt network traffic:

IMPORTANT: Do not add more than one component for SAS servers TLS. The component for each TLS mode must be used by itself.

components:
- sas-bases/components/security/core/base/front-door-tls  # component to build trust stores for all services and enable back-end TLS for CAS and SAS/CONNECT

Components to Enable Full-Stack TLS Mode for All Servers

Note: TLS for the ingress controller is required if you are using Full-stack TLS.

IMPORTANT: Do not add more than one TLS component. The component for each TLS mode must be used by itself. Include this customization in the components block of the base kustomization.yaml file:

components:
- sas-bases/components/security/core/base/full-stack-tls # component to support TLS for back-end servers

Configuring Certificate Attributes

An example configMap is provided to help you customize configuration settings. To create this configMap with non-default settings, see the comments in the provided example file, $deploy/sas-bases/examples/security/customer-provided-merge-sas-certframe-configmap.yaml:

cd $deploy
cp sas-bases/examples/security/customer-provided-merge-sas-certframe-configmap.yaml site-config/security/
vi site-config/security/customer-provided-merge-sas-certframe-configmap.yaml

When you have completed your updates, add the path to the file to the generators block of your $deploy/kustomization.yaml file. Here is an example:

generators:
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # merges customer-provided configuration settings into the sas-certframe-user-config configmap

Incorporating Additional CA Certificates

Incorporating Additional CA Certificates into the SAS Viya Platform Deployment in Full-stack or Front-door TLS Mode

Follow these steps to add your proprietary CA certificates to the SAS Viya platform deployment. The certificate files must be in PEM format, and the path to the files must be relative to the directory that contains the kustomization.yaml file. You might have to maintain several files containing CA certificates and update them over time. SAS recommends creating a separate directory for these files.

  1. Place your CA certificate files in the site-config/security/cacerts directory. Ensure that the user ID that runs the kustomize command has Read access to the files.

  2. Copy the file $deploy/sas-bases/examples/security/customer-provided-ca-certificates.yaml into your $deploy/site-config/security directory.

  3. Edit the site-config/security/customer-provided-ca-certificates.yaml file and specify the required information.

    Instructions for editing this file are provided as comments in the file.

Here is an example:

export deploy=~/deploy
cd $deploy
mkdir -p site-config/security/cacerts
#
# the following line assumes that your CA Certificates are in a file named /tmp/my_ca_certificates.pem
#
cp /tmp/my_ca_certificates.pem site-config/security/cacerts
cp sas-bases/examples/security/customer-provided-ca-certificates.yaml site-config/security
vi site-config/security/customer-provided-ca-certificates.yaml

When you have completed your modifications, add the path to this file to the generators block of your $deploy/kustomization.yaml file. Here is an example:

generators:
- site-config/security/customer-provided-ca-certificates.yaml # generates a configmap that contains CA Certificates

Incorporating Additional CA Certificates into the SAS Viya Platform Deployment in “No TLS” Mode

In order to add CA certificates to pod trust bundles, add the following component to the components block of your base kustomization.yaml file:

IMPORTANT: Do not add this component if you have configured Front-door TLS or Full-stack TLS mode.

components:
- sas-bases/components/security/core/base/truststores-only # component to build trust stores when no TLS is desired

Example kustomization.yaml Files for ingress-nginx with the cert-manager Certificate Generator

Full-stack TLS with cert-manager Certificate Generator and cert-manager-Generated Ingress Certificates

# Full-stack TLS with cert-manager certificate generator and cert-Manager generated ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io

components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls

transformers:
- sas-bases/overlays/required/transformers.yaml
- sas-bases/overlays/cert-manager-provided-ingress-certificate/ingress-annotation-transformer.yaml # causes cert-manager to generate the ingress certificate and key and store it in a secret

generators:
- site-config/security/customer-provided-ca-certificates.yaml # This generator is optional. Include it only if you need to add additional CA Certificates
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place

Full-stack TLS with cert-manager Certificate Generator and Customer-Provided Ingress Certificates

# Full-stack TLS with cert-manager certificate generator and customer-provided ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io

components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls

transformers:
- sas-bases/overlays/required/transformers.yaml

generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place

Front-door TLS with cert-manager Certificate Generator and cert-manager-Generated Ingress Certificates

# Front-door TLS with cert-manager certificate generator and cert-manager-generated ingress certificates
namespace: frontdoortls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io

components:
- sas-bases/components/security/core/base/front-door-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls

transformers:
- sas-bases/overlays/required/transformers.yaml
- sas-bases/overlays/cert-manager-provided-ingress-certificate/ingress-annotation-transformer.yaml # causes cert-manager to generate the ingress certificate and key and store it in a secret

generators:
- site-config/security/customer-provided-ca-certificates.yaml # This generator is optional. Include it only if you need to add additional CA Certificates
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place

Front-door TLS with cert-manager Certificate Generator and Customer-Provided Ingress Certificates

# Front-door TLS with cert-manager certificate generator and customer-provided ingress certificates
namespace: frontdoortls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io

components:
- sas-bases/components/security/core/base/front-door-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls

transformers:
- sas-bases/overlays/required/transformers.yaml

generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place

Example kustomization.yaml Files for ingress-nginx with the openssl Certificate Generator

Full-stack TLS with openssl Certificate Generator and openssl-generated Ingress Certificates

# Full-stack TLS with openssl certificate generator and openssl generated ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/network/networking.k8s.io
- site-config/security/openssl-generated-ingress-certificate.yaml

components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls

transformers:
- sas-bases/overlays/required/transformers.yaml

generators:
- site-config/security/customer-provided-ca-certificates.yaml

Full-stack TLS with openssl Certificate Generator and Customer-Provided Ingress Certificates

# Full-stack TLS with openssl certificate generator and customer-provided ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/network/networking.k8s.io

components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls

transformers:
- sas-bases/overlays/required/transformers.yaml

generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml

Front-door TLS with openssl Certificate Generator and Customer-Provided Ingress Certificates

# Front-door TLS with openssl certificate generator and customer-provided ingress certificates
namespace: frontdoortls
resources:
- sas-bases/base
- sas-bases/overlays/network/networking.k8s.io

components:
- sas-bases/components/security/core/base/front-door-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls

transformers:
- sas-bases/overlays/required/transformers.yaml

generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml

Example kustomization.yaml Files for the OpenShift Ingress Controller with the cert-manager Certificate Generator

Full-stack TLS with cert-manager Certificate Generator and Customer-Provided OpenShift Ingress Certificates

# Full-stack TLS with cert-manager certificate generator and customer-provided ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/route.openshift.io

components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/route.openshift.io/route/full-stack-tls

transformers:
- sas-bases/overlays/required/transformers.yaml

generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place

Configuring Kerberos Single Sign-On for the SAS Viya Platform

This README describes the steps necessary to configure the SAS Viya platform for single sign-on using Kerberos.

Prerequisites

Before you start the deployment, obtain the Kerberos configuration file and keytab for the HTTP service account. Make sure you have tested the keytab before proceeding with the installation.

Installation

  1. Copy the files in the $deploy/sas-bases/examples/kerberos/http directory to the $deploy/site-config/kerberos/http directory. Create the target directory, if it does not already exist.

  2. Copy your Kerberos keytab and configuration files into the $deploy/site-config/kerberos/http directory, naming them keytab and krb5.conf respectively.

  3. Modify the parameters in $deploy/site-config/kerberos/http/configmaps.yaml.

    • Replace {{ PRINCIPAL-NAME-IN-KEYTAB }} with the name of the principal as it appears in the keytab.
    • Replace {{ SPN }} with the name of the SPN. This should have a format of HTTP/<hostname> and may be the same as the principal name in the keytab.
  4. Make the following changes to the base kustomization.yaml file in the $deploy directory. (A sketch of the resulting entries follows this list.)

    • Add site-config/kerberos/http to the resources block.
    • Add sas-bases/overlays/kerberos/http/transformers.yaml to the transformers block.
  5. Use the deployment commands described in SAS Viya Platform Deployment Guide to apply the new settings.
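
For reference, the additions described in step 4 might appear in the base kustomization.yaml file as shown in the following minimal sketch, which lists only the new entries:

resources:
...
- site-config/kerberos/http

transformers:
...
- sas-bases/overlays/kerberos/http/transformers.yaml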

Configuring SAS Servers for Kerberos in SAS Viya Platform

Overview

This README describes the steps necessary to configure your SAS Viya platform SAS Servers to use Kerberos.

Prerequisites

Kerberos Configuration File

Before you start the deployment, obtain the Kerberos configuration file (krb5.conf) and keytab file for the HTTP service account.

Edit the krb5.conf file and add renewable = true under the [libdefaults] section. This allows renewable Kerberos credentials to be used in SAS Viya platform. SAS servers will renew Kerberos credentials prior to expiration up to the renewable lifetime. Here is an example:

[libdefaults]
  ...
  renewable = true

Keytab File

Obtain a keytab file for the HTTP service account.

If you are using SAS/CONNECT from external clients, such as SAS 9.X, obtain a keytab for the SAS service account. The HTTP service account and SAS service account can be placed in the same keytab file for convenience. If you are using a single keytab file, the SAS service account should be placed before the HTTP service account in the keytab file.

Make sure you have tested the keytab files before proceeding with the installation.

Kerberos Connections

If you want to connect to the CAS Server from external clients through the binary or REST ports, you must also configure the CAS Server to accept direct Kerberos connections.

If SAS/ACCESS Interface to Hadoop will be used with a Hadoop deployment that is Kerberos-protected, either nss_wrapper or System Security Services Daemon (SSSD) must be configured. Unlike SSSD, nss_wrapper does not require running in a privilege-elevated container. If you are using OpenShift Container Platform 4.2 or later, neither nss_wrapper nor SSSD is required. If SAS/CONNECT is configured to spawn the SAS/CONNECT Server in the SAS/CONNECT Spawner pod, SSSD must be configured regardless of the container orchestration platform being used.

nss_wrapper

To configure nss_wrapper, make the following changes to the base kustomization.yaml file in the $deploy directory. Add the following to the transformers block. These additions must come before sas-bases/overlays/required/transformers.yaml.

transformers:
...
- sas-bases/overlays/kerberos/nss_wrapper/add-nss-wrapper-transformer.yaml

System Security Services Daemon (SSSD)

To configure SSSD for SAS Compute Server and SAS Batch Server, follow the instructions in $deploy/sas-bases/examples/kerberos/sssd/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_system_security_services_daemon.htm (for HTML format). For CAS, follow the instructions in $deploy/sas-bases/examples/cas/configure/README.md (for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_cas.htm (for HTML format). For SAS/CONNECT, follow the instructions in $deploy/sas-bases/examples/sas-connect-spawner/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_sasconnect_spawner_in_sas_viya.htm (for HTML format).

Delegation

The aim of configuring for Kerberos is to allow Kerberos authentication to flow into, between, and out from the SAS Viya platform environment. Allowing SAS servers to connect to other SAS Viya platform processes and third-party data sources on behalf of the user is referred to as delegation. SAS supports Kerberos Unconstrained Delegation, Kerberos Constrained Delegation, and Kerberos Resource-based Constrained Delegation. Delegation should be configured prior to completing the installation steps below.

The HTTP service account must be trusted for delegation. If you are using SAS/CONNECT, the SAS service account must also be trusted for delegation.

Stored Credentials

As an alternative method to Delegation, external credentials can be stored in an Authentication Domain. SAS uses the stored credentials to generate Kerberos credentials on the user’s behalf. The default Authentication Domain is KerberosAuth. The Authentication Domain, whether default or custom, will need to be created in SAS Environment Manager. SAS recommends creating a Custom Group with shared external credentials and assigning the custom group to the created Authentication Domain.

For more information about creating Authentication Domains, see External Credentials: Concepts.

Note: Stored user credentials take precedence over stored group credentials in the same Authentication Domain. For more information, see How to configure Kerberos stored credentials.

Installation

Configure SAS Servers for Kerberos in SAS Viya Platform

  1. Copy the files in the $deploy/sas-bases/examples/kerberos/sas-servers directory to the $deploy/site-config/kerberos/sas-servers directory. Create the target directory, if it does not already exist.

  2. Copy your Kerberos keytab file and configuration files into the $deploy/site-config/kerberos/sas-servers directory, naming them keytab and krb5.conf respectively.

    Note: A Kubernetes secret is generated during deployment using the content of the keytab binary file. However, the SAS Viya Platform Deployment Operator and the viya4-deployment project do not support creating secrets from binary files. For these types of deployments, the Kerberos keytab content must be loaded from an existing Kubernetes secret. If you are using either of these deployment types, see Manually Configure a Kubernetes Secret for the Kerberos Keytab for the steps.

  3. Replace {{ SPN }} in $deploy/site-config/kerberos/sas-servers/configmaps.yaml under the sas-servers-kerberos-sidecar-config stanza with the name of the principal as it appears in the keytab file.

  4. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/kerberos/sas-servers to the resources block.

      resources:
      ...
      - site-config/kerberos/sas-servers
    • Add the following to the transformers block. These additions must come before sas-bases/overlays/required/transformers.yaml.

      • If TLS is enabled:

        transformers:
        ...
        - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-job-tls.yaml
        - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-deployment-tls.yaml
        - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-tls-transformer.yaml
        If you are deploying the SAS Viya platform with TLS on Red Hat OpenShift
        and using SAS/CONNECT, replace `sas-kerberos-deployment-tls.yaml` with
        `sas-kerberos-deployment-tls-openshift.yaml`.
        
      • If TLS is not enabled:

        transformers:
        ...
        - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-job-no-tls.yaml
        - sas-bases/overlays/kerberos/sas-servers/sas-kerberos-deployment-no-tls.yaml
        - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-no-tls-transformer.yaml

        If you are deploying the SAS Viya platform without TLS on Red Hat OpenShift and using SAS/CONNECT, replace sas-kerberos-deployment-no-tls.yaml with sas-kerberos-deployment-no-tls-openshift.yaml.

  5. Follow the instructions in $deploy/sas-bases/examples/kerberos/http/README.md (for Markdown format) or $deploy/sas-bases/docs/configuring_kerberos_single_sign-on_for_sas_viya.htm (for HTML format) to configure Kerberos single sign-on. Specifically, in $deploy/site-config/kerberos/http/configmaps.yaml change SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT to true.

  6. When all the SAS Servers are configured in the base kustomization.yaml file, use the deployment commands described in SAS Viya Platform Deployment Guide to apply the new settings.

  7. After the deployment is started, enable Kerberos in SAS Environment Manager.

    1. Sign in to SAS Environment Manager as sasboot or as an Administrator, and go to the Configuration page.

    2. On the Configuration page, select Definitions from the list. Then select sas.compute.

    3. Click the pencil (Edit) icon.

    4. Change kerberos.enabled to on.

    5. Click Save.

Configure the CAS Server for Direct Kerberos Connections in SAS Viya Platform

If you want to connect to the CAS Server from external clients through the binary port, perform the following steps in addition to the section above.

  1. Copy the files in the $deploy/sas-bases/examples/kerberos/cas-server directory to the $deploy/site-config/kerberos/cas-server directory. Create the target directory, if it does not already exist.

  2. Copy your Kerberos keytab and configuration files into the $deploy/site-config/kerberos/cas-server directory, naming them keytab and krb5.conf respectively.

  3. Replace {{ SPN }} in $deploy/site-config/kerberos/cas-server/configmaps.yaml under the cas-server-kerberos-config stanza with the name of the service principal as it appears in the keytab file without the @DOMAIN.COM.

  4. Replace {{ HTTP_SPN }} with the HTTP SPN used for the krb5 proxy sidecar container without the @DOMAIN.COM. SAS recommends that you use the same keytab file and SPN for both the CAS Server and the krb5 proxy sidecar for consistency and to allow REST port direct Kerberos connections.

  5. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/kerberos/cas-server to the resources block.

      resources:
      ...
      - site-config/kerberos/cas-server
    • Add the following to the transformers block. These additions must come before sas-bases/overlays/required/transformers.yaml.

      transformers:
      ...
      - sas-bases/overlays/kerberos/sas-servers/cas-kerberos-direct.yaml
  6. Edit your $deploy/site-config/kerberos/cas-server/krb5.conf file. Add the following to the [libdefaults] section:

    [libdefaults]
    ...
    dns_canonicalize_hostname=false

Configure SAS/CONNECT for Direct Kerberos Connections in SAS Viya Platform

If you are using SAS/CONNECT from external clients, such as SAS 9.4, perform the following steps in addition to the section above.

  1. Add a reference to sas-bases/overlays/kerberos/sas-servers/sas-connect-spawner-kerberos-transformer.yaml in the transformers block of the kustomization.yaml file in the $deploy directory. The reference must come before sas-bases/overlays/required/transformers.yaml. Here is an example:

    transformers:
    ...
    - sas-bases/overlays/kerberos/sas-servers/sas-connect-spawner-kerberos-transformer.yaml
    - sas-bases/overlays/required/transformers.yaml
  2. Uncomment the sas-connect-spawner-kerberos-secrets stanza in $deploy/site-config/kerberos/sas-servers/secrets.yaml. If you are using separate keytab files for the HTTP service account and SAS service account, change the keytab name to the actual keytab file name in each stanza. The SAS SPN is required to authenticate the user with SAS/CONNECT from external clients. The HTTP SPN is required to authenticate the user with SAS Login Manager.

  3. Uncomment the sas-connect-spawner-kerberos-config stanza in $deploy/site-config/kerberos/sas-servers/configmaps.yaml.

    • Replace {{ SPN }} with the HTTP SPN from the keytab file without the @DOMAIN.COM.

    • If you are using separate keytab files for the HTTP service account and SAS service account, change the keytab name to the actual keytab file name in each stanza. The keytab file name must match the name used in secrets.yaml for step 2.

  4. Edit your $deploy/site-config/kerberos/sas-servers/krb5.conf file. Add the following to the [libdefaults] section:

    [libdefaults]
    ...
    dns_canonicalize_hostname=false

Configure Kerberos Unconstrained Delegation

If you are using MIT Kerberos as your KDC, then enabling delegation involves setting the flag ok_as_delegate on the principal. For example, the following command adds this flag to the existing HTTP principal:

kadmin -q "modprinc +ok_as_delegate HTTP/mywebserver.company.com"

If you are using Microsoft Active Directory for your KDC, you must set the delegation option after registering the SPN. The Active Directory Users and Computers GUI tool does not expose the delegation options until at least one SPN is registered against the service account. The HTTP service account must be able to delegate to any applicable data sources. The service account must have Read all user information permissions to the appropriate Domain or Organizational Units in Active Directory.

  1. For the HTTP service account, as a Windows domain administrator, right-click the name and select Properties.

  2. In the Properties dialog, select the Delegation tab.

  3. On the Delegation tab, you must select Trust this user for delegation to any services (Kerberos only).

  4. In the Properties dialog, select OK.

If you are using SAS/CONNECT, repeat the steps in this section for the SAS service account.

Configure Kerberos Constrained Delegation

  1. In $deploy/site-config/kerberos/http/configmaps.yaml, set SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT to false.

  2. In the sas-servers-kerberos-sidecar-config stanza of $deploy/site-config/kerberos/sas-servers/configmaps.yaml, add the following under literals:

    - SAS_CONSTRAINED_DELEG_ENABLED="true"
  3. If you are using SAS/CONNECT, in the sas-connect-spawner-kerberos-config stanza, add the following under literals:

    - SAS_SERVICE_PRINCIPAL={{ SAS service account SPN }}
    - SAS_CONSTRAINED_DELEG_ENABLED="true"

If you are using MIT Kerberos as your KDC, then enabling delegation involves setting the flag ok_to_auth_as_delegate on the principal. For example, the following command adds the flag to the existing HTTP principal:

kadmin -q "modprinc +ok_to_auth_as_delegate HTTP/mywebserver.company.com"

If you are using Microsoft Active Directory for your KDC, you must set the delegation option after registering the SPN. The Active Directory Users and Computers GUI tool does not expose the delegation options until at least one SPN is registered against the service account. The HTTP service account must be able to delegate to any applicable data sources. The service account must have Read all user information permissions to the appropriate Domain or Organizational Units in Active Directory.

  1. For the HTTP service account, as a Windows domain administrator, right-click the account name and select Properties.

  2. In the Properties dialog, select the Delegation tab.

  3. On the Delegation tab, select Trust this user for delegation to the specified services only and Use any authentication protocol.

  4. Select Add...

  5. In the Add Services panel, select Users and Computers...

    1. In the Select Users or Computers dialog box, complete the following for the Kerberos-protected services that the SAS Servers access:

      1. In the Enter the object names to select text box, enter the account for the Kerberos-protected services that the SAS Servers access, such as Microsoft SQL Server. Then select Check Names.

      2. If the name is found, select OK.

      3. Repeat the previous two steps to select additional SPNs for the SAS Servers to access.

      4. When you are done, select OK.
      
    2. In the Add Services dialog box, select OK.

  6. In the Properties dialog, select OK.

If you are using SAS/CONNECT, repeat the steps in this section for the SAS service account.

Configure Kerberos Resource-Based Constrained Delegation

  1. In $deploy/site-config/kerberos/http/configmaps.yaml, set SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT to false.

  2. In the sas-servers-kerberos-sidecar-config stanza of $deploy/site-config/kerberos/sas-servers/configmaps.yaml, add the following under literals:

    - SAS_CONSTRAINED_DELEG_ENABLED="true"
  3. If you are using SAS/CONNECT, in the sas-connect-spawner-kerberos-config stanza, add the following under literals:

    - SAS_SERVICE_PRINCIPAL={{ SAS service account SPN }}
    - SAS_CONSTRAINED_DELEG_ENABLED="true"

Kerberos Resource-based Constrained Delegation can only be configured using Microsoft PowerShell. Resource-based constrained delegation gives control of delegation to the administrator of the back-end service; therefore, the delegation permissions are applied on the back-end service being accessed.

Note: The examples below demonstrate adding a single identity that is trusted for delegation. To add multiple identities, use the format: ($identity1),($identity2).

If the back-end service being accessed is running on Windows under the Local System account, then the front-end service principal is applied to the back-end service Computer Object.

$sashttpidentity = Get-ADUser -Identity <HTTP service account>
Set-ADComputer <back-end service hostname> -PrincipalsAllowedToDelegateToAccount $sashttpidentity

If the back-end service being accessed is running on UNIX/Linux or on Windows under a Domain Account, then the front-end service principal is applied to the Domain Account of the back-end service where the service principal is registered.

$sashttpidentity = Get-ADUser -Identity <HTTP service account>
Set-ADUser <back-end service Domain Account> -PrincipalsAllowedToDelegateToAccount $sashttpidentity

If you are using SAS/CONNECT, the HTTP service account must trust the SAS service account.

$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADUser <HTTP service account> -PrincipalsAllowedToDelegateToAccount $sasidentity

If you are using SAS/CONNECT and the back-end service is running on Windows under the Local System account, then the SAS service principal is applied to the back-end service Computer Object.

$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADComputer <back-end service hostname> -PrincipalsAllowedToDelegateToAccount $sasidentity

If you are using SAS/CONNECT and the back-end service is running on UNIX/Linux or on Windows under a Domain Account, then the SAS service principal is applied to the Domain Account of the back-end service where the principal is registered.

$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADUser <back-end service Domain Account> -PrincipalsAllowedToDelegateToAccount $sasidentity

Configure Kerberos Stored Credentials

Configure the usage of stored credentials:

  1. In the sas-servers-kerberos-sidecar-config block of $deploy/site-config/kerberos/sas-servers/configmaps.yaml, set the desired Authentication Domain to query for stored credentials.

    literals:
    ...
    - SAS_KRB5_PROXY_CREDAUTHDOMAIN=KerberosAuth # Name of authentication domain to query for stored credentials
  2. Uncomment these lines in the sas-servers-kerberos-container-config block of $deploy/site-config/kerberos/sas-servers/configmaps.yaml:

    literals:
    ...
    - SAS_KRB5_PROXY_CHECKCREDSERVICE="true" # Set to true if SAS should prefer stored credentials over Constrained Delegation
    - SAS_KRB5_PROXY_LOOKUPINGROUP="true"    # Set to true if SAS should look for a group credential if no user credential is stored

Configuring Ingress for Cross-Site Cookies

Overview

When you configure the SAS Viya platform to enable cross-site cookies via the sas.commons.web.security.cookies.sameSite configuration property, you must also update the ingress configuration so that cookies managed by the ingress controller have the same settings. Ingress or Route annotations for same-site cookie settings are applied by adding the appropriate transformer component to your kustomization.yaml.

Installation

Add the samesite-none transformer component to the components block of the base kustomization.yaml in the $deploy directory.

Example:

components:
...
- sas-bases/components/security/web/samesite-none

Configure System Security Services Daemon

Overview

System Security Services Daemon (SSSD) provides access to remote identity providers, such as LDAP and Microsoft Active Directory. SSSD can be used when using SAS/ACCESS Interface to Hadoop with a Kerberos-protected Hadoop deployment where identity lookup is required.

Note: Alternatively, nss_wrapper can be used with SAS/ACCESS Interface to Hadoop. To implement nss_wrapper, follow the instructions in the “nss_wrapper” section of the README file located at $deploy/sas-bases/examples/kerberos/sas-servers/README.md (for Markdown format) or at $deploy/sas-bases/docs/configuring_sas_servers_for_kerberos_in_sas_viya_platform.htm (for HTML format).

Enable the SSSD Container

  1. Add sas-bases/overlays/kerberos/sssd/add-sssd-container-transformer.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    **Important:** This line must come before any network transformers (transformers that start with “- sas-bases/overlays/network/”) and the required transformer (“- sas-bases/overlays/required/transformers.yaml”). Note that your configuration may not have network transformers if security is not configured. This line must also be placed after any Kerberos transformers (transformers starting with “- sas-bases/overlays/kerberos/sas-servers”).

    ```yaml
        transformers:
        ...
        # Place after any sas-bases/overlays/kerberos lines
        - sas-bases/overlays/kerberos/sssd/add-sssd-container-transformer.yaml
        # Place before any sas-bases/overlays/network lines and before
        # sas-bases/overlays/required/transformers.yaml
    ```
    
  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Add a Custom Configuration for SSSD

Use these steps to provide a custom SSSD configuration to handle user authorization in your environment.

  1. Copy the files in the $deploy/sas-bases/examples/kerberos/sssd directory to the $deploy/site-config/kerberos/sssd directory. Create the target directory, if it does not already exist.

  2. Copy your custom sssd.conf configuration file to $deploy/site-config/kerberos/sssd/sssd.conf.

  3. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    - Add the following to the generators block.
    
    ```yaml
    generators:
    ...
    - site-config/kerberos/sssd/secrets.yaml
    ```
    - Add a reference to `sas-bases/overlays/kerberos/sssd/add-sssd-configmap-transformer.yaml` to the transformers block. The new line must come after sas-bases/overlays/kerberos/sssd/add-sssd-container-transformer.yaml.

    ```yaml
    transformers:
    ...
    - sas-bases/overlays/kerberos/sssd/add-sssd-configmap-transformer.yaml
    ```
    
  4. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Configuring Ingress for Rate Limiting

Overview

You can use the examples found within $deploy/sas-bases/examples/security/web/rate-limiting to enforce rate-limiting at the ingress-nginx controller for SAS Viya platform endpoints. The properties are applied to all Ingress resources deployed with the SAS Viya platform. If you are using any external load balancers or API gateways, enforcing rate-limiting with ingress-nginx is not optimal. Instead, enforce rate limiting through the external technology. To read more about the available options in ingress-nginx, see https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#rate-limiting.

If you are deploying on Red Hat OpenShift, you must enforce rate-limiting at the OpenShift router instead. The properties are applied to all Route resources deployed with the SAS Viya platform. To read more about the available options in OpenShift, see https://docs.openshift.com/container-platform/4.15/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration.

Installation

ingress-nginx

Use these steps to apply the desired properties to your SAS Viya platform deployment.

  1. Copy the $deploy/sas-bases/examples/security/web/rate-limiting/ingress-nginx-configmap-inputs.yaml file to the location of your working container security overlay, such as site-config/security/web/.

  2. Define the properties in the ingress-nginx-configmap-inputs.yaml file which match the desired configuration. To define a property, uncomment it and update its token value as described in the example file.

  3. Add the relative path of ingress-nginx-configmap-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ...
    resources:
    ...
    - site-config/security/web/rate-limiting/ingress-nginx-configmap-inputs.yaml
    ...
  4. Add the relative path(s) of the corresponding transformer file(s) to the transformers block of the base kustomization.yaml file. There should be one transformer file added per property defined within the ConfigMap. Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/security/web/rate-limiting/update-ingress-nginx-limit-rps.yaml
    - sas-bases/overlays/security/web/rate-limiting/update-ingress-nginx-limit-burst-multiplier.yaml
    ...

OpenShift

When deploying to Red Hat OpenShift, use these steps to apply the desired properties to your SAS Viya platform deployment. Do not use the steps for the ingress-nginx controller.

  1. Copy the $deploy/sas-bases/examples/security/web/rate-limiting/route-configmap-inputs.yaml file to the location of your working container security overlay, such as site-config/security/web/rate-limiting.

  2. Define the properties in the route-configmap-inputs.yaml file which match the desired configuration. To define a property, uncomment it and update its token value as described in the example file.

  3. Add the relative path of route-configmap-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ...
    resources:
    ...
    - site-config/security/web/rate-limiting/route-configmap-inputs.yaml
    ...
  4. Add the relative path(s) of the corresponding transformer file(s) to the transformers block of the base kustomization.yaml file. There should be one transformer file added per property defined within the ConfigMap. Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/security/web/rate-limiting/update-route-rate-limit-connections.yaml
    - sas-bases/overlays/security/web/rate-limiting/update-route-rate-limit-connections-rate-http.yaml
    - sas-bases/overlays/security/web/rate-limiting/update-route-rate-limit-connections-rate-tcp.yaml
    ...

SAS Programming Environment Configuration Tasks

Overview

This README describes how to customize your SAS Viya platform deployment for tasks related to the SAS Programming Environment.

Installation

SAS provides the ability to modify the scripts that are used to launch server processes. These modifications are set in SAS Environment Manager.

Each server type has multiple configuration instances for modification of configuration files, autoexec code, and startup scripts that are used to launch the servers. Modifications to the startup script configurations for each server are disabled by default.

The system administrator can give the SAS Administrator the ability to have updates made to these configuration scripts processed by the server applications.

Since this processing takes place at the initialization of the server application, changes to these configMaps take effect upon the next launch of the pod.

Enabling Processing of SAS Administrator Script Modifications

Included in this folder is an overlay called enable-admin-script-access.yaml. This overlay provides a patchTransformer that gives the SAS Administrator the ability to have script modifications made in SAS Environment Manager processed by the server applications.

To enable this access:

  1. Add sas-bases/overlays/sas-programming-environment/enable-admin-script-access.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ```
    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/enable-admin-script-access.yaml
    ...
    ```
    
  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Disabling Processing of SAS Administrator Script Modifications

Included in this folder is an overlay called disable-admin-script-access.yaml. This overlay provides a patchTransformer that denies the SAS Administrator the ability to have script modifications made in SAS Environment Manager processed by the server applications.

To disable this access:

  1. Add sas-bases/overlays/sas-programming-environment/disable-admin-script-access.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ```
    ...
    transformers:
    ...
    - sas-bases/overlays/sas-programming-environment/disable-admin-script-access.yaml
    ...
    ```
    
  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Modify Container Security Settings

Overview

This README describes customizations that can be made by the Kubernetes administrator to modify container security configurations while deploying the SAS Viya platform. An administrator might want or need to change the default container security settings in a SAS Viya platform deployment, such as removing, adding, or updating settings in the podSpecs. There are many reasons why an administrator might want to modify these settings.

The steps in this README for fsGroup and seccomp can be used for any platform. However, if you are deploying on Red Hat OpenShift, these settings must be modified in order to take advantage of OpenShift’s built-in security context constraints (SCCs). The title of each section indicates whether it is required for OpenShift.

What is an SCC?

SCCs are the framework, provided by OpenShift, that controls what privileges can be requested by pods in the cluster. OpenShift provides users with several built-in SCCs. Admins can attach pods to any of these SCCs or they can create dedicated SCCs. Dedicated SCCs are created specifically to address the specs and capabilities required by a certain pod/product. For more information on OpenShift SCCs, see Managing SCCs in OpenShift.

Purpose of the Customizations

You can use the customizations in this README to accomplish required or optional tasks, such as updating or removing the fsGroup field and the seccomp profile. These tasks are described in the Instructions section below.

Note: Pods that run with dedicated SCCs for Crunchy Data (the internal PostgreSQL server) or the CAS server do not need the customizations referenced in this README. They have dedicated SCCs that will contain all conditions for the pods without altering the podSpec. You can use some of these customizations for OpenSearch. For more information, see Security Requirements.

Instructions

fsGroup

The fsGroup field defines a special supplemental group that assigns a GID for all containers in the pod. Volumes that support ownership management are modified to be owned and writable by the GID specified in fsGroup. For more information about using fsGroup, see Configure a Security Context for a Pod or Container.
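
For illustration, here is a minimal sketch of how fsGroup appears in a generic pod specification. This is standard Kubernetes syntax rather than a SAS-provided file; the names, image, and GID are placeholders, and in a SAS Viya platform deployment the transformers described below set this field for you:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                          # placeholder name
spec:
  securityContext:
    fsGroup: 2000                            # supplemental GID assigned to all containers in the pod
  containers:
  - name: app
    image: registry.example.com/app:1.0.0    # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data                       # volumes that support ownership management become group-owned by the fsGroup GID
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim               # placeholder claim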

Update the fsGroup Field (Mandatory for OpenShift; Optional for Other Environments)

Notes: Crunchy Data currently does not support updating this value. Do not attempt to change this setting for an internal PostgreSQL server. Instead, custom SCCs grant the Crunchy Data pods the ability to run with their specific group ID (GID).

Updating this value for CAS is optional because CAS default settings work in all environments. If you want to update values for CAS, you must uncomment the corresponding PatchTransformer in the update-fsgroup.yaml file. If you are deploying on OpenShift, the corresponding SCC also must be updated to specify the new fsGroup values or be set to “RunAsAny”.

Use these steps to update the fsGroup field for pods in your SAS Viya platform deployment.

  1. Copy the $deploy/sas-bases/examples/security/container-security/configmap-inputs.yaml file to the location of your working container security overlay, such as site-config/security/container-security/.

  2. Update the {{ FSGROUP_VALUE }} token in the configmap-inputs.yaml file to match the desired numerical group value.

    Note: For OpenShift, you can get the allocated GID and value with the kubectl describe namespace <name-of-namespace> command. The value to use is the minimum value of the openshift.io/sa.scc.supplemental-groups annotation. For example, if the output is the following, you should use 1000700000.

    Name:         sas-1
    Labels:       <none>
    Annotations:  ...
                  openshift.io/sa.scc.supplemental-groups: 1000700000/10000
                  ...
  3. Add the relative path of configmap-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ...
    resources:
    ...
    - site-config/security/container-security/configmap-inputs.yaml
    ...
  4. Add the relative path of the update-fsgroup.yaml file to the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/security/container-security/update-fsgroup.yaml
    ...
  5. (Optional) For CAS, add the relative path of the update-cas-fsgroup.yaml file to the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/security/container-security/update-fsgroup.yaml
    - sas-bases/overlays/security/container-security/update-cas-fsgroup.yaml
    ...
  6. (For OpenShift) If you performed the optional configuration for CAS from Step 5, update the dedicated SCC for CAS to allow the desired fsGroup value. This value should match the value from Step 2 above, or it should be set to RunAsAny.

Remove the fsGroup Field

Note: Crunchy Data currently does not support removing this value. Pods for an internal PostgreSQL server will remain unaffected.

To remove the fsGroup field from your deployment specification, add the relative path of the remove-fsgroup-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

...
transformers:
...
- sas-bases/overlays/security/container-security/remove-fsgroup-transformer.yaml
...

Secure Computing Mode (seccomp)

Secure computing mode (seccomp) is a security facility that restricts the actions that are available within a container. You can use this feature to restrict your application’s access. For more information about seccomp, see Seccomp security profiles for Docker.
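
For context, the following sketch shows the standard Kubernetes syntax for selecting a seccomp profile in a pod securityContext. It is a generic illustration with placeholder names, not the SAS transformer itself; the update-seccomp.yaml transformer described below applies the equivalent change in your deployment:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                          # placeholder name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault                   # use the container runtime's default seccomp profile;
                                             # Localhost with a localhostProfile path is another common option
  containers:
  - name: app
    image: registry.example.com/app:1.0.0    # placeholder image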

Update the seccomp Profile

Considerations:

Use these steps to update the seccomp profile enabled for pods in your deployment specification.

  1. Copy the $deploy/sas-bases/examples/security/container-security/update-seccomp.yaml file to the location of your working container security overlay.

    Here is an example: site-config/security/container-security/update-seccomp.yaml

  2. Update the “{{ SECCOMP_PROFILE }}” tokens in the update-seccomp.yaml file to match the desired seccomp profile value.

  3. Add the relative path of update-seccomp.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ...
    transformers:
    ...
    - site-config/security/container-security/update-seccomp.yaml
    ...

Remove the seccomp Profile (Mandatory for OpenShift)

To remove the seccomp profile settings from your deployment specification, add the relative path of the remove-seccomp-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

IMPORTANT: You must make this modification in an OpenShift environment.

Here is an example:

...
transformers:
...
- sas-bases/overlays/security/container-security/remove-seccomp-transformer.yaml
...

SAS Audit Archive Configuration

Overview

The SAS Audit service can be configured to periodically archive audit records to file. If this feature is enabled, then a PersistentVolumeClaim must be created as the output location for these archive files.

Note: Because this task requires the SAS Environment Manager, it can only be performed after a successful deployment.

Prerequisites

Archiving is disabled by default, so you must enable the feature to use it. As an administrator, open the Audit service configuration in SAS Environment Manager and change the following settings to the specified values.

| Setting Name                           | Value |
|----------------------------------------|-------|
| sas.audit.archive.process.storageType  | local |

Installation

Copy the Example Files

Copy all of the files in $deploy/sas-bases/examples/sas-audit/archive to $deploy/site-config/sas-audit, where $deploy is the directory containing your SAS Viya platform installation files. Create the target directory, if it does not already exist.

Update the resources.yaml File

Edit the resources.yaml file to replace the following parameters with the appropriate values.

| Parameter Name   | Description                                                                                     | Example Value |
|------------------|-------------------------------------------------------------------------------------------------|---------------|
| STORAGE-CLASS    | The storage class of the PersistentVolumeClaim. The storage class must support ReadWriteMany.    | nfs-client    |
| STORAGE-CAPACITY | The size of the PersistentVolumeClaim.                                                            | 1Gi           |
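
For reference, after the two tokens are replaced, the PersistentVolumeClaim defined in resources.yaml might look similar to the following sketch. The claim name and the overall structure shown here are illustrative assumptions; keep the structure that the copied example file provides and replace only the tokens:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sas-audit-archive          # illustrative name; use the name given in the example file
spec:
  accessModes:
  - ReadWriteMany                  # the storage class must support ReadWriteMany
  storageClassName: nfs-client     # STORAGE-CLASS
  resources:
    requests:
      storage: 1Gi                 # STORAGE-CAPACITY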

Update the Base kustomization.yaml File

After updating the example files, add references to them to the base kustomization.yaml file ($deploy/kustomization.yaml).

* Add a reference to the resources.yaml file to the resources block.
* Add a reference to the archive-transformer.yaml file to the transformers block.

For example, if you made the changes described above, then the base kustomization.yaml file should have entries similar to the following:

resources:
- site-config/sas-audit/resources.yaml
transformers:
- site-config/sas-audit/archive-transformer.yaml

Build and Apply the Manifest

As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Note: Audit service PersistentVolumeClaim data does not participate in the SAS Viya platform backup and restore procedure, so the archived data it contains is never restored to the SAS Viya platform system. As a result, when audit archiving is performed, SAS recommends that the cluster administrator take a backup of the audit archive data and keep that data in a secure location. Steps for backup can be found in $deploy/sas-bases/examples/sas-audit/backup/README.md.

Migrate SAS Audit Archived Data from SAS Viya 4 To SAS Viya 4

Overview

Archived data from the audit process is stored in a persistent volume (PV). Audit and activity data are stored separately.

Audit service PVC data does not participate in the SAS Viya platform backup and restore procedure, so the archived data it contains is never restored to the SAS Viya platform system. As a result, when audit archiving is performed, SAS recommends that the cluster administrator take a backup of the audit archive data and keep that data in a secure location.

Prerequisites

  1. The audit service should be running on both source and target environments and should have the PV attached.

  2. To perform some elements of this task, you must have elevated Kubernetes permissions.

  3. You should follow the steps described on the Hardware and Resource Requirements page, especially the section Persistent Storage Volumes, PersistentVolumeClaims, and Storage Classes.

Best Practices for Performing Manual Backup

  1. Take frequent backups of audit archived data during off-peak hours. The frequency of backups should be determined by the frequency of archiving defined by your organization.

  2. The PV that contains archived data is part of the same cluster as the environment. Therefore, SAS recommends that you routinely copy archived data to storage outside of the cluster, such as NFS, in case the PV or the entire cluster fails.

  3. The time required to copy the archived audit data contents varies based on the size of the data, the disk I/O rate of the system, and the type of file system that you are using.

Migrate Archived Audit Data from Source SAS Viya 4 Environment to Target SAS Viya 4 Environment

The following steps use a generic method (tar and kubectl exec) and an Audit service pod to copy data between environments. The steps in this generic method are not specific to any one cloud provider. You might follow a slightly different set of steps depending on what type of storage you are using for your data.

  1. Log in to the cluster where you want to keep a backup of the archived data temporarily. You must have root level permissions to copy the archived data.

  2. Determine the temporary location of the data that you wish to copy to and from.

  3. Export or set the source machine kubeconfig file and then get source audit pod name:

    export KUBECONFIG=<source-machine-kubeconfig>
    kubectl get pods -n <name-of-namespace> | grep sas-audit
  4. Copy audit archived data from source machine to temporary location:

    kubectl -n <name-of-namespace> exec <source-audit-pod-name> -- tar cpf - -C /archive . | tar xf - -C <temp-folder-path>

    Here is an example:

    kubectl -n sourceTenant exec sas-audit-58cccfb4f7-pd870 -- tar cpf - -C /archive . | tar xf - -C /opt/tmpDir

    Note: temp-folder-path is the location that is being used to keep the data temporarily.

  5. Export or set the target machine kubeconfig file on the same system, and then get the target audit pod name:

    export KUBECONFIG=<target-machine-kubeconfig>
    kubectl get pods -n <name-of-namespace> | grep sas-audit
  6. Migrate the audit archived data from the temporary location to the target machine PV:

    tar cpf - -C <temp-folder-path> * | kubectl -n <name-of-namespace> exec -i <target-audit-pod-name> -- tar xf - -C /archive

    Here is an example:

    tar cpf - -C /opt/tmpDir * | kubectl -n targetTenant exec -i sas-audit-555c58c44f-ssjx7 -- tar xf - -C /archive

    Note: The temp-folder-path is the location where archived data is kept temporarily.

Migrate to SAS Viya 4

The files in this directory are used to customize your SAS Viya 4 deployment to run migration. For information about migration and using these files, see SAS Viya Platform Administration: Migration.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Migrate to SAS Viya 4

This directory contains overlays to customize your SAS Viya 4 deployment to run migration. For information about migration and using these files, see SAS Viya Platform Administration: Migration.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Configuration Settings for SAS Viya Platform Migration

Overview

This README describes how to revise and apply the settings for configuring migration jobs.

Change Migration Job Timeout

  1. To change the migration job timeout value, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format, where {{ TIMEOUT-IN-MINUTES }} is an integer.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - JOB_TIME_OUT={{ TIMEOUT-IN-MINUTES }}

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Filter Configuration Definition Properties

  1. To skip the migration of the configuration definition properties, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_DEFINITION_FILTER={{ RESTORE-DEFINITION-FILTER-CSV }}

    The {{ RESTORE-DEFINITION-FILTER-CSV }} is a JSON string containing the comma-separated list of ‘key:value’ pairs where key is in the form ‘serviceName.definitionName.version’ and value itself can be a comma-separated list of properties to be filtered. If the entire definition is to be excluded, then set the value to ‘*’. If the service name is not present in the definition, then only provide ‘definitionName’. Each key and value must be enclosed in double quotes ("). Here is an example:

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_DEFINITION_FILTER='{"sas.dataserver.common.1":"*","deploymentBackup.sas.deploymentbackup.1":"*","deploymentBackup.sas.deploymentbackup.2":"*","deploymentBackup.sas.deploymentbackup.3":"*","sas.security.1":"*","vault.sas.vault.1":"*","vault.sas.vault.2":"*","SASDataExplorer.sas.dataexplorer.1":"*","SASLogon.sas.logon.sas9.1":"*","sas.cache.1":"*","sas.cache.2":"*","sas.cache.3":"*","sas.cache.4":"*","identities-SASLogon.sas.identities.providers.ldap.user.1":"accountId,address.country","SASLogon.sas.logon.saml.providers.external_saml.1":"assertionConsumerIndex,idpMetadata"}'

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Filter Configuration Properties

  1. To skip the migration of the configuration properties, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_CONFIGURATION_FILTER={{ RESTORE-CONFIGURATION-FILTER-CSV }}

    The {{ RESTORE-CONFIGURATION-FILTER-CSV }} is a JSON string containing the comma-separated list of ‘key:value’ pairs where key is in the form ‘serviceName.configurationMediaType’ and value itself can be a comma-separated list of properties to be filtered. If the entire configuration is to be excluded, then set the value to ‘*’. If the service name is not present in the configuration, then use the media type. Each key and value must be enclosed in double quotes ("). Here is an example:

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_CONFIGURATION_FILTER='{"postgres.application/vnd.sas.configuration.config.sas.dataserver.conf+json;version=1":"*","maps-reportPackages-webDataAccess.application/vnd.sas.configuration.config.sas.maps+json;version=2":"useArcGISOnlineMaps,localEsriServicesUrl"}'

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Modify the Resources of the Migration Job

If the default resources are not sufficient for the completion or successful execution of the migration job, modify the resources to the values you desire.

  1. Copy the file $deploy/sas-bases/examples/migration/configure/sas-migration-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/migration.

  2. In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.

  3. In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.

  4. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/migration, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/migration/sas-migration-job-modify-resources-transformer.yaml
    ...
  5. Build the manifest.

    kustomize build -o site.yaml
  6. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Uncommon Migration Customizations

Overview

This README contains information for customizations potentially required for migrating to SAS Viya 4. These customizations are not used often.

Configure New PostgreSQL Name

  1. If you change the name of the PostgreSQL service during migration, you must map the new name to the old name. Edit $deploy/kustomization.yaml and add an entry to the restore_job_parameters configMap in configMapGenerator section. The entry uses the following format:

    data-service-{{ NEW-SERVICE-NAME }}={{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }}

    To get the value for {{ NEW-SERVICE-NAME }}:

    kubectl -n <name-of-namespace> get dataserver -o=custom-columns=SERVICE_NAME:.spec.registrations[].serviceName --no-headers

    The command lists all the PostgreSQL clusters in your deployment. Choose the appropriate one from the list.

    {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is the name of the directory in the backup where the PostgreSQL backup is stored (for example, 2022-03-02T09_04_11_611_0700/acme/**postgres**).

    In the following example, {{ NEW-SERVICE-NAME }} is sas-cdspostgres, and {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is cpspostgres:

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        ...
        - data-service-sas-cdspostgres=cpspostgres
  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Exclude Schemas During Migration

  1. If you need to exclude some of the schemas during migration, edit $deploy/kustomization.yaml and add an entry to the restore_job_parameters configMap in configMapGenerator section. The entry uses the following format:

    EXCLUDE_SCHEMAS={schema1, schema2,...}

    In the following example, “dataprofiles” and “naturallanguageunderstanding” are schemas that will not be migrated.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        ...
        - EXCLUDE_SCHEMAS=dataprofiles,naturallanguageunderstanding
  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Custom Database Name

  1. If the database name on the system you want to restore (the target system) does not match the database name on the system from where a backup has been taken (the source system), then you must provide the appropriate database name as part of the restore operation.

    The database name is provided by using an environment variable, RESTORE_DATABASE_MAPPING, which should be specified in the restore job ConfigMap, sas-restore-job-parameters. Use the following format:

    RESTORE_DATABASE_MAPPING=<source instance name>.<source database name>=<target instance name>.<target database name>

    For example, if the source system has the database name “SharedServices” and the target system database is named “TestDatabase”, then the environment variable would look like this:

    RESTORE_DATABASE_MAPPING=postgres.SharedServices=postgres.TestDatabase
  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Exclude PostgreSQL Instance During Migration

  1. If you need to exclude some of the PostgreSQL instances during migration, edit $deploy/kustomization.yaml and add an entry to the restore_job_parameters configMap in the configMapGenerator section. The entry uses the following format:

    EXCLUDE_SOURCES={instance1, instance2,...}

    In the following example, “sas-cdspostgres” is a PostgreSQL instance that will not be migrated.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        ...
        - EXCLUDE_SOURCES=sas-cdspostgres
  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts

Enable Parallel Execution for the Restore Operation

  1. You can set a jobs option that reduces the amount of time required to restore the SAS Infrastructure Data Server. The time required to restore the database from backup is reduced by restoring the database objects over multiple parallel jobs. The optimal value for this option depends on the underlying hardware of the server, the client, and the network (for example, the number of CPU cores). Refer to the --jobs parameter for more information about the parallel jobs.

    You can specify the number of parallel jobs using the following environment variable, which should be specified in the sas-restore-job-parameters config map.

    SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>

    The following section, if not present, can be added to the kustomization.yaml file in your $deploy directory. If it is present, append the property shown in this example to the literals section. A filled-in example follows this list.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>
  2. Build the manifest.

    kustomize build -o site.yaml
  3. Apply the manifest.

     kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
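
As a point of reference, here is a hypothetical filled-in version of the configMap entry from step 1, assuming the restore should use four parallel jobs (the value is illustrative only; choose a count that suits your server, client, and network capacity):

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=4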

Additional Resources

For more information about migration, see SAS Viya Platform Administration: Migration.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Activate sas-migration-manager

Overview

The SAS Migration Management service interacts with SAS 9 Content Assessment to migrate applicable content from a SAS 9 system to SAS Viya 4.

  The SAS Migration Management service accesses and maintains information about SAS 9 objects and their statuses in the migration process.

  The SAS Migration Management service provides the following functions:

  1. Upload content from the SAS 9 system captured by SAS Content Assessment.
  2. Upload profiling information for an object.
  3. Upload code check information for an object.
  4. Update or append objects to the content.
  5. List content based on a filter.
  6. Create migration batches to subset content to be assessed by SAS Content Assessment.
  7. Maintain migration batches, including adding and deleting content based on a filter.
  8. Download a migration batch as a CSV file.
  9. Log migration batch events.

The sas-migration-manager microservice is deployed in an idle state (scale=0) by default to save resources in SAS Viya. To use the migration manager service, you must activate it in your deployment. To activate sas-migration-manager, follow the installation steps in this document.

Installation

To activate sas-migration-manager in your deployment, copy the $deploy/sas-bases/examples/sas-migration-manager/scale-migration-on.yaml file to your $deploy/site-config/sas-migration-manager directory.

After you copy the file, add a reference to it in the transformers block of the base kustomization.yaml file.

transformers:
  - site-config/sas-migration-manager/scale-migration-on.yaml

Additional Resources

For more information about configuration and using example files, see the SAS Viya Platform: Deployment Guide.

Convert CAS Server Definitions for Migration

Overview

This README describes how to convert SAS Viya 3.x CAS server definitions into SAS Viya 4 custom resources (CRs) using the sas-migration-cas-converter.sh script.

Prerequisites

To convert SAS Viya 3.x CAS servers into compatible SAS Viya 4 CRs, you must first run the inventory playbook to create a migration package. The package will contain a YAML file with the name of each of your CAS servers, such as cas-shared-default.yaml. Instructions to create a migration package using this playbook are given in the SAS Viya Platform Administration Guide.

You perform the conversion by passing the name of the YAML file to the sas-migration-cas-converter.sh script with the -f or --file argument. You can use the -o or --output option to specify the location of the output file for the converted custom resource. By default, if no output option is specified, the YAML file is created in the current directory.

When you run the conversion script, a file with the custom resource is created in the format of {{ CAS-SERVER-NAME }}-migration-cr.yaml.

Restore from a Backup Location

If you have data and permstore content to restore, use the cas-migration.yaml patch in $deploy/sas-bases/examples/migration/cas/cas-components to specify the backup location to restore from. This patch is already included in the kustomization.yaml file in the cas-components directory. To configure this patch:

  1. Open cas-migration.yaml to modify its contents.

  2. Set up the NFS mount by replacing the NFS-MOUNT-PATH and NFS-SERVER tokens with the mounted path to your backup location and the NFS server where it lives (a filled-in example follows this list):

    nfs:
      path: {{NFS-MOUNT-PATH}}
      server: {{NFS-SERVER}}
  3. To include the newly created CAS custom resource in the manifest, add a reference to it in the resources block of the base kustomization.yaml file in the migration example (there is a commented-out example). After you run kustomize build and apply the manifest, your server is created. If you included the cas-migration.yaml patch with a valid backup location, your backup content is restored.
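
Here is a hypothetical filled-in version of the nfs block from step 2. The path and server values are placeholders for illustration only; replace them with the mounted path of your backup location and your NFS server:

    nfs:
      path: /export/viya-backups
      server: nfs.example.com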

Enable State Transfer

Enabling state transfer preserves the sessions, tables, and state of a running CAS server when a new CAS server instance is started as part of a CAS server upgrade.

To enable state transfer, uncomment the relevant entries in the base kustomization.yaml file in the migration example (there are examples commented out).

Example

Run the script:

./sas-migration-cas-converter.sh -f cas-shared-default.yaml -o .

The output from this command is a file named cas-shared-default-migration-cr.yaml.

Additional Resources

For more information about CAS migration, see SAS Viya Platform Administration: Promotion and Migration.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Granting Security Context Constraints for Migration on an OpenShift Cluster

Overview

The $deploy/sas-bases/overlays/migration/openshift directory contains a file to grant security context constraints (SCCs) for the sas-migration-job pod on an OpenShift cluster.

Note: The security context constraint needs to be applied only if the backup is present on an NFS path.

Installation

  1. Use one of the following commands to apply the SCCs.

    # using kubectl
    kubectl apply -f migration-job-scc.yaml

    # using the OpenShift CLI
    oc create -f migration-job-scc.yaml

  2. Use the following command to link the SCCs to the appropriate Kubernetes service account. Replace the entire variable {{ NAME-OF-NAMESPACE }}, including the braces, with the Kubernetes namespace used for SAS Viya.

    oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-migration-job -z sas-viya-backuprunner

SAS Viya Backup and Restore Utility

The files in this directory are used to create a backup of the SAS Viya platform. You can perform a one-time backup or you can schedule a regular backup of your deployment. For information about performing backups and using these files, see SAS Viya Platform Administration: Backup and Restore.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Configuration Settings for Backup Using the SAS Viya Backup and Restore Utility

Overview

This README describes how to revise and apply the settings for configuring backup jobs.

Change the StorageClass for PersistentVolumeClaims Used for Storing Backups

If you want to retain the PersistentVolumeClaim (PVC) used by the backup utility when the namespace is deleted, use a StorageClass with a ReclaimPolicy of ‘Retain’ for the backup PVC.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-common-backup-data-storage-class-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. Follow the instructions in the copied sas-common-backup-data-storage-class-transformer.yaml file to change the values in that file as necessary.

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-common-backup-data-storage-class-transformer.yaml
    ...
  4. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Change the Storage Size for the sas-common-backup-data PersistentVolumeClaim

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-common-backup-data-storage-size-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. Follow the instructions in the copied sas-common-backup-data-storage-size-transformer.yaml file to change the values in that file as necessary.

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-common-backup-data-storage-size-transformer.yaml
    ...
  4. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Change the Default Backup Schedule to a Custom Schedule

By default, the backup utility is run once per week on Sundays at 1:00 a.m. Use the following instructions to schedule a backup more suited to your resources.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-scheduled-backup-job-change-default-backup-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. In the copied sas-scheduled-backup-job-change-default-backup-transformer.yaml file, replace {{ SCHEDULE-BACKUP-CRON-EXPRESSION }} with the cron expression for the desired schedule (a sample cron expression follows this list).

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-scheduled-backup-job-change-default-backup-transformer.yaml
    ...
  4. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
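
As a point of reference for step 2, a cron expression consists of five space-separated fields: minute, hour, day of month, month, and day of week. For example, the following illustrative value schedules the backup for 2:00 a.m. every Saturday; the surrounding file structure is whatever the copied transformer already contains:

    0 2 * * 6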

Customize the Default Incremental Backup Schedule

By default, the incremental backup is run daily at 6:00 a.m. Use the following instructions to change the schedule of this additional job to a time more suited to your resources.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-scheduled-backup-incr-job-change-default-schedule.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. In the copied file, replace {{ SCHEDULE-BACKUP-CRON-EXPRESSION }} with the cron expression for the desired schedule.

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-scheduled-backup-incr-job-change-default-schedule.yaml
    ...
  4. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Change the Default Schedule to Back Up All Sources to a Custom Schedule

By default, the additional job to back up all the data sources (including PostgreSQL) is suspended. When enabled, the job is scheduled to run once per week on Saturdays at 1:00 a.m. by default. Use the following instructions to change the schedule of this additional job to a time more suited to your resources. This job should not be scheduled at the same time as the sas-scheduled-backup-job or the sas-scheduled-backup-incr-job.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-scheduled-backup-all-sources-change-default-schedule.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. In the copied sas-scheduled-backup-all-sources-change-default-schedule.yaml file, replace {{ SCHEDULE-BACKUP-CRON-EXPRESSION }} with the cron expression for the desired schedule.

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-scheduled-backup-all-sources-change-default-schedule.yaml
    ...
  4. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Modify the Resources for the Backup Job

If the default resources are not sufficient for the completion or successful execution of the backup job, modify the resources to the values you desire.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-backup-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.

  3. In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.

  4. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-backup-job-modify-resources-transformer.yaml
    ...
  5. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Modify the Resources of the Backup Copy and Cleanup Job

If the default resources are not sufficient for the completion or successful execution of the backup copy and cleanup job, modify the resources to the values you desire.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-backup-pv-copy-cleanup-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.

  3. In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.

  4. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-backup-pv-copy-cleanup-job-modify-resources-transformer.yaml
    ...
  5. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Modify the Resources of the Backup Agent Container in the CAS Controller Pod

If the default resources are not sufficient for the completion or successful execution of the CAS controller pod, modify the resources of the backup agent container in the CAS controller pod to the values you desire.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-cas-server-backup-agent-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.

  3. In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.

  4. By default, the patch is applied to all of the CAS servers. If the patch transformer is being applied to a single CAS server, replace {{ NAME-OF-CAS-SERVER }} with the name of that CAS server in the same file, and comment out the lines ‘name: .*’ and ‘labelSelector: “sas.com/cas-server-default”’ with a hashtag (#).

  5. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-cas-server-backup-agent-modify-resources-transformer.yaml
    ...
  6. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Change Backup Job Timeout

  1. If you need to change the backup job timeout value, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). The entry uses the following format, where {{ TIMEOUT-IN-MINUTES }} is an integer. A filled-in example follows this list.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - JOB_TIME_OUT={{ TIMEOUT-IN-MINUTES }}

    If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
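
Here is a hypothetical filled-in version of the entry from step 1 that sets the backup job timeout to 2880 minutes (48 hours); the value is illustrative only:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - JOB_TIME_OUT=2880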

Change Backup Retention Period

  1. If you need to change the backup retention period, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). The entry uses the following format, where {{ RETENTION-PERIOD-IN-DAYS }} is an integer. A filled-in example follows this list.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - RETENTION_PERIOD={{ RETENTION-PERIOD-IN-DAYS }}

    If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
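
Here is a hypothetical filled-in version of the entry from step 1 that retains backups for 60 days; the value is illustrative only:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - RETENTION_PERIOD=60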

Back Up Additional Consul Properties

  1. If you want to back up additional Consul properties, add keys to the sas-backup-agent-parameters configMap in the base kustomization.yaml file ($deploy/kustomization.yaml) by adding entries to its literals block. If the sas-backup-agent-parameters configMap is already included in your base kustomization.yaml file, you should add the last line only. If the configMap isn’t included, add the entire block.

    configMapGenerator:
    - name: sas-backup-agent-parameters
      behavior: merge
      literals:
      - BACKUP_ADDITIONAL_GENERIC_PROPERTIES="{{ CONSUL-KEY-LIST }}"

    The {{ CONSUL-KEY-LIST }} should be a comma-separated list of properties to be backed up. Here is an example:

    configMapGenerator:
    - name: sas-backup-agent-parameters
      behavior: merge
      literals:
      - BACKUP_ADDITIONAL_GENERIC_PROPERTIES="config/files/sas.files/maxFileSize,config/files/sas.files/blockedTypes"
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Exclude Specific Folders and Files During File System Backup

  1. To exclude specific folders and files during file system backup, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). If the sas-backup-job-parameters configMap is already included in your base kustomization.yaml file, you should add the last line only. If the configMap isn’t included, add the entire block.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - FILESYSTEM_BACKUP_EXCLUDELIST="{{ EXCLUDE_PATTERN }}"

    The {{ EXCLUDE_PATTERN }} should be a comma-separated list of patterns for files or folders to be excluded from the backup. Here is an example that excludes all the files with extensions “.tmp” or “.log”:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - FILESYSTEM_BACKUP_EXCLUDELIST="*.tmp,*.log"
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Change the Default Filter to Exclude Specific Folders and Files During File System Backup

  1. By default, the filter list is set to exclude the “.lck”, “.”, and “lost+found” file and folder patterns from the file system backup. To change the default filter list for excluding files and folders during file system backup, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). If the sas-backup-job-parameters configMap is already included in your base kustomization.yaml file, you should add the last line only. If the configMap isn’t included, add the entire block.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - FILESYSTEM_BACKUP_OVERRIDE_EXCLUDELIST="{{ EXCLUDE_PATTERN }}"

    The {{ EXCLUDE_PATTERN }} should be a comma-separated list of patterns for files or folders to be excluded from the backup. Here is an example that excludes all the files with extensions “.tmp” or “.log”:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - FILESYSTEM_BACKUP_OVERRIDE_EXCLUDELIST="*.tmp,*.log"
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Disable Backup Job Failure Notification

  1. By default, you are notified if the backup job fails. To disable backup job failure notification, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). Replace {{ ENABLE-NOTIFICATIONS }} with the string “false”.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - ENABLE_NOTIFICATIONS={{ ENABLE-NOTIFICATIONS }}

    If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.

    To restore the default, change the value of {{ ENABLE-NOTIFICATIONS }} from “false” to “true”.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Include or Exclude All Registered PostgreSQL Servers from Backup

  1. To include or exclude all PostgreSQL servers registered with SAS Viya in the default backup, add the INCLUDE_POSTGRES variable to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - INCLUDE_POSTGRES="{{ INCLUDE-POSTGRES }}"
  2. To include all the registered PostgreSQL servers, replace {{ INCLUDE-POSTGRES }} in the code with the value ‘true’. To exclude all the registered PostgreSQL servers, replace {{ INCLUDE-POSTGRES }} in the code with the value ‘false’. A filled-in example follows this list.

  3. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
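
Here is a filled-in version of the entry from step 1 that includes all registered PostgreSQL servers in the default backup:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - INCLUDE_POSTGRES="true"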

Modify the fsGroup Resources of the Backup and Restore Jobs

If using the default fsGroup settings does not result in the completion or successful execution of the backup job, modify the fsGroup resources to the values you desire.

  1. Copy the file $deploy/sas-bases/examples/backup/configure/sas-backup-job-modify-fsgroup-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.

  2. Follow the instructions in the copied sas-backup-job-modify-fsgroup-transformer.yaml file to change the values in that file as necessary.

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/backup/sas-backup-job-modify-fsgroup-transformer.yaml
    ...
  4. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Disable Resource Validation

By default, resources such as the space available in a PVC are pre-validated against the PVC capacity to store data for a backup job. You can disable the resource validations for the backup job if necessary.

Disable the Resource Validation Temporarily

Add an entry to the sas-backup-job-parameters configMap with the following command.

kubectl patch cm sas-backup-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/DISABLE_VALIDATION", "value":"true" }]'

Disable the Resource Validations Permanently

  1. Add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - DISABLE_VALIDATION="true"

    If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.

  2. Build and apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Disable Proactive Notification

By default, resources such as the space available in a PVC are pre-validated against the PVC capacity to store data for a backup job, and a proactive notification is sent. You can disable the proactive notification for resource validations for the backup job if necessary.

Disable the Proactive Notification for Resource Validation Temporarily

Add an entry to the sas-backup-job-parameters configMap with the following command.

kubectl patch cm sas-backup-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/DISABLE_PROACTIVE_NOTIFICATION", "value":"true" }]'

Disable the Proactive Notification for Resource Validations Permanently

  1. Add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - DISABLE_PROACTIVE_NOTIFICATION="true"

    If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.

  2. Build and apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Backup Progress

The backup progress feature provides real-time updates on the total estimated time for backup completion. This feature is enabled by default but can be disabled if users do not require progress tracking.

Disable Backup Progress Temporarily

Add an entry to the sas-backup-job-parameters configMap with the following command.

kubectl patch cm sas-backup-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/BACKUP_PROGRESS", "value":"false" }]'

Disable Backup Progress Feature Permanently

  1. Add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file. Here is an example:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - BACKUP_PROGRESS="false"

    If the sas-backup-job-parameters configMap already exists in the base kustomization.yaml file, add only the last line. If the configMap is not present, include the entire example.

  2. Build and apply the manifest.

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Change Backup Progress Update Frequency

  1. To change the frequency of the updates on backup progress, add an entry to the sas-backup-job-parameters configMap within the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). The entry uses the following format, where {{ PROGRESS-POLL-TIME-IN-MINUTES }} is an integer. The default and minimum value for the backup progress poll time is 2 minutes; the maximum allowed value is 60 minutes. A filled-in example follows this list.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - PROGRESS_POLL_TIME={{ PROGRESS-POLL-TIME-IN-MINUTES }}

    If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

    Note: High-frequency progress updates increase network usage and should be used cautiously for backups with very long durations.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
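
Here is a hypothetical filled-in version of the entry from step 1 that requests a progress update every 10 minutes; the value is illustrative only and must stay between 2 and 60 minutes:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - PROGRESS_POLL_TIME=10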

Configuration Settings for PostgreSQL Backup Using the SAS Viya Backup and Restore Utility

Overview

This README describes how to revise and apply the settings for backing up PostgreSQL using the SAS Viya Backup and Restore Utility.

Add Additional Options for PostgreSQL Backup Command

  1. If you need to add or change any option for the PostgreSQL backup command (pg_dump), add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). A filled-in example follows this list.

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - SAS_DATA_SERVER_BACKUP_ADDITIONAL_OPTIONS={{ OPTION-1-NAME OPTION-1-VALUE }},{{ FLAG-1 }},{{ OPTION-2-NAME OPTION-2-VALUE }}

    The {{ OPTION-NAME OPTION-VALUE }} and {{ FLAG }} variables should be a comma-separated list of options to be added, such as -Z 0,--version.

    If the sas-backup-job-parameters configMap is already present in the ($deploy/kustomization.yaml) file, you should add the last line only. If the configMap is not present, add the entire example.

    Note: Do not use --format or -F in SAS_DATA_SERVER_BACKUP_ADDITIONAL_OPTIONS; the backup process defaults to the directory format, ensuring compatibility during restoration.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
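
Here is a hypothetical filled-in version of the entry from step 1 that uses the -Z 0,--version options mentioned above; the option list is illustrative only:

    configMapGenerator:
    - name: sas-backup-job-parameters
      behavior: merge
      literals:
      - SAS_DATA_SERVER_BACKUP_ADDITIONAL_OPTIONS=-Z 0,--version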

Optional Configurations for Backup Jobs

Enable a Suspended Incremental Backup Job

To enable a suspended incremental backup job, edit the base kustomization file ($deploy/kustomization.yaml).

  1. In the transformers block, add /sas-bases/overlays/backup/sas-scheduled-backup-incr-job-enable.yaml. Here is an example:

    ...
    transformers:
    - sas-bases/overlays/backup/sas-scheduled-backup-incr-job-enable.yaml
    ...

    The above transformer also sets INCLUDE_POSTGRES=False in the sas-backup-job-parameters configMap.

  2. Build and Apply the Manifest As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Enable a Suspended Job to Back Up All the Sources

To enable a suspended job to back up all sources (including PostgreSQL), edit the base kustomization file ($deploy/kustomization.yaml).

  1. In the transformers block, add /sas-bases/overlays/backup/sas-scheduled-backup-all-sources-enable.yaml. Here is an example:

    ...
    transformers:
    - sas-bases/overlays/backup/sas-scheduled-backup-all-sources-enable.yaml
    ...
  2. Build and Apply the Manifest As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Restore a SAS Viya Platform Deployment

Overview

The files in this directory are used to customize your SAS Viya platform deployment to perform a restore. For information about the restore function and using these files, see SAS Viya Platform Administration: Backup and Restore.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Restore a SAS Viya Platform Deployment

This directory contains overlays to customize your SAS Viya platform deployment to run restore. For information about the restore function and using these files, see SAS Viya Platform Administration: Backup and Restore.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Configuration Settings for Restore Using the SAS Viya Backup and Restore Utility

Overview

This README describes how to revise and apply the settings for configuring restore jobs.

Change Restore Job Timeout

To change the restore job timeout value temporarily, edit the sas-restore-job-parameters configMap using the following command, where {{ TIMEOUT-IN-MINUTES }} is an integer.

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[ {"op": "replace", "path": "/data/JOB_TIME_OUT", "value":"{{ TIMEOUT-IN-MINUTES }}" }]'
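
For example, the following command (with a hypothetical 90-minute timeout; the namespace placeholder is unchanged) sets the value temporarily:

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[ {"op": "replace", "path": "/data/JOB_TIME_OUT", "value":"90" }]'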

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. To change the restore job timeout value, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format, where {{ TIMEOUT-IN-MINUTES }} is an integer.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - JOB_TIME_OUT={{ TIMEOUT-IN-MINUTES }}

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Filter Configuration Definition Properties

To skip the restore of the configuration definition properties once, edit the sas-restore-job-parameters configMap using the following command.

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DEFINITION_FILTER", "value":"{{ RESTORE-DEFINITION-FILTER-CSV }}" }]'

The {{ RESTORE-DEFINITION-FILTER-CSV }} is a json string containing the comma-separated list of ‘key:value’ pairs where the key is in the form ‘serviceName.definitionName.version’ and the value can be a comma-separated list of properties to be filtered. If the entire definition is to be excluded, then set the value to ‘*’. If the service name is not present in the definition, then only provide ‘definitionName’. Each key and value must be enclosed in double quotes (“). Here is an example:

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DEFINITION_FILTER", "value":"{\"sas.dataserver.common.1\":\"*\",\"deploymentBackup.sas.deploymentbackup.1\":\"*\",\"deploymentBackup.sas.deploymentbackup.2\":\"*\",\"deploymentBackup.sas.deploymentbackup.3\":\"*\",\"sas.security.1\":\"*\",\"vault.sas.vault.1\":\"*\",\"vault.sas.vault.2\":\"*\",\"SASDataExplorer.sas.dataexplorer.1\":\"*\",\"SASLogon.sas.logon.sas9.1\":\"*\",\"sas.cache.1\":\"*\",\"sas.cache.2\":\"*\",\"sas.cache.3\":\"*\",\"sas.cache.4\":\"*\",\"identities-SASLogon.sas.identities.providers.ldap.user.1\":\"accountId,address.country\",\"SASLogon.sas.logon.saml.providers.external_saml.1\":\"assertionConsumerIndex,idpMetadata\"}" }]'

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_DEFINITION_FILTER={{ RESTORE-DEFINITION-FILTER-CSV }}

    The {{ RESTORE-DEFINITION-FILTER-CSV }} is a JSON string containing a comma-separated list of ‘key:value’ pairs, where the key is in the form ‘serviceName.definitionName.version’ and the value can itself be a comma-separated list of properties to be filtered. If the entire definition is to be excluded, set the value to ‘*’. If the service name is not present in the definition, provide only ‘definitionName’. Each key and value must be enclosed in double quotes (“). Here is an example:

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_DEFINITION_FILTER='{"sas.dataserver.common.1":"*","deploymentBackup.sas.deploymentbackup.1":"*","deploymentBackup.sas.deploymentbackup.2":"*","deploymentBackup.sas.deploymentbackup.3":"*","sas.security.1":"*","vault.sas.vault.1":"*","vault.sas.vault.2":"*","SASDataExplorer.sas.dataexplorer.1":"*","SASLogon.sas.logon.sas9.1":"*","sas.cache.1":"*","sas.cache.2":"*","sas.cache.3":"*","sas.cache.4":"*","identities-SASLogon.sas.identities.providers.ldap.user.1":"accountId,address.country","SASLogon.sas.logon.saml.providers.external_saml.1":"assertionConsumerIndex,idpMetadata"}'

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Filter Configuration Properties

To skip the restore of the configuration properties once, edit the sas-restore-job-parameters configMap using the following command.

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_CONFIGURATION_FILTER", "value":"{{ RESTORE-CONFIGURATION-FILTER-CSV }}" }]'

The {{ RESTORE-CONFIGURATION-FILTER-CSV }} is a json string containing the comma-separated list of ‘key:value’ pairs where the key is in the form ‘serviceName.configurationMediaType’ and the value can be a comma-separated list of properties to be filtered. If the entire configuration is to be excluded, then set the value to ‘*’. If the service name is not present in the configuration, then use the media type. Each key and value must be enclosed in double quotes (“). Here is an example:

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_CONFIGURATION_FILTER", "value":"{\"postgres.application/vnd.sas.configuration.config.sas.dataserver.conf+json;version=1\":\"*\",\"maps-reportPackages-webDataAccess.application/vnd.sas.configuration.config.sas.maps+json;version=2\":\"useArcGISOnlineMaps,localEsriServicesUrl\"}" }]'

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_CONFIGURATION_FILTER={{ RESTORE-CONFIGURATION-FILTER-CSV }}

    The {{ RESTORE-CONFIGURATION-FILTER-CSV }} is a JSON string containing a comma-separated list of ‘key:value’ pairs, where the key is in the form ‘serviceName.configurationMediaType’ and the value can itself be a comma-separated list of properties to be filtered. If the entire configuration is to be excluded, set the value to ‘*’. If the service name is not present in the configuration, use the media type. Each key and value must be enclosed in double quotes (“). Here is an example:

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - RESTORE_CONFIGURATION_FILTER='{"postgres.application/vnd.sas.configuration.config.sas.dataserver.conf+json;version=1":"*","maps-reportPackages-webDataAccess.application/vnd.sas.configuration.config.sas.maps+json;version=2":"useArcGISOnlineMaps,localEsriServicesUrl"}'

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Disable Restore Job Failure Notification

By default, you are notified if the restore job fails. To disable the restore job failure notification once, add an entry to the sas-restore-job-parameters configMap with the following command. Replace {{ ENABLE-NOTIFICATIONS }} with the string “false”.

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/ENABLE_NOTIFICATIONS", "value":"{{ ENABLE-NOTIFICATIONS }}" }]'

To restore the default, change the value of {{ ENABLE-NOTIFICATIONS }} from “false” to “true”.

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Add an entry to the sas-restore-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file. Replace {{ ENABLE-NOTIFICATIONS }} with the string “false”.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - ENABLE_NOTIFICATIONS={{ ENABLE-NOTIFICATIONS }}

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.

    To restore the default, change the value of {{ ENABLE-NOTIFICATIONS }} from “false” to “true”.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Modify the Resources of the Restore Job

In some cases, the default resources may not be sufficient for completion or successful execution of the restore job, resulting in the pod status being marked as OOMKilled. In this case, modify the resources to the values you desire.

Replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.

   kubectl patch cronjob sas-restore-job -n name-of-namespace --type json -p '[{"op": "replace", "path": "/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu", "value":"{{ CPU-LIMIT }}" }]'

Replace {{ MEMORY-LIMIT }} with the desired value for memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.

   kubectl patch cronjob sas-restore-job -n name-of-namespace --type json -p '[{"op": "replace", "path": "/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory", "value":"{{ MEMORY-LIMIT }}" }]'

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Copy the file $deploy/sas-bases/examples/restore/configure/sas-restore-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/restore.

  2. In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.

  3. In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.

  4. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/restore, you would modify the base kustomization.yaml file like this:

    ...
    transformers:
    ...
    - site-config/restore/sas-restore-job-modify-resources-transformer.yaml
    ...
  5. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Switch PostgreSQL Server Hosts After Restore Without SQL Proxy

External PostgreSQL servers can be backed up and restored externally. Point in time recovery performed in such cases creates a new PostgreSQL server with a new host name. To automatically update the host names of the PostgreSQL server after the restore is completed using the SAS Viya Backup and Restore Utility, update the sas-restore-job-parameters config map with the following parameters before performing the restore.

Here is an example command that adds the AUTO_SWITCH_POSTGRES_HOST and DATASERVER_HOST_MAP parameters to the config map:

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/AUTO_SWITCH_POSTGRES_HOST", "value":"TRUE" }, {"op": "replace", "path": "/data/DATASERVER_HOST_MAP","value":"sas-platform-postgres:restored-postgres.postgres.azure.com,sas-cds-postgres:restored-cds-postgres.postgres.azure.com" }]'

Switch PostgreSQL Server Hosts After Restore With SQL Proxy

This section is used when SQL proxy is used to interface the external PostgreSQL server. External PostgreSQL servers can be backed up and restored externally. Point in time recovery performed in such cases creates a new PostgreSQL server with a new host name. To automatically update the host names of the PostgreSQL server after the restore is completed using the SAS Viya Backup and Restore Utility, update the sas-restore-job-parameters config map with the following parameters before performing the restore.

Here is an example command that adds the AUTO_SWITCH_POSTGRES_HOST and SQL_PROXY_POSTGRES_CONNECTION_MAP parameters to the config map:

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/AUTO_SWITCH_POSTGRES_HOST", "value":"TRUE" }, {"op": "replace", "path": "/data/SQL_PROXY_POSTGRES_CONNECTION_MAP","value":"platform-postgres-sql-proxy:sub7:us-east1:restored-postgres-default-pgsql-clone,cds-postgres-sql-proxy:restored-cds-postgres-default-pgsql-clone" }]'

Disable Resource Validations

By default, resources like CPU and memory are pre-validated in order for the restore job to be completed successfully. You can disable the resource validation if necessary.

Disable Resource Validations Temporarily

Add an entry to the sas-restore-job-parameters configMap with the following command.

kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/DISABLE_VALIDATION", "value":"true" }]'

Disable Resource Validation Permanently

  1. Add an entry to the sas-restore-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
      - DISABLE_VALIDATION="true"

    If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.

  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Uncommon Restore Customizations

Overview

This README file contains information about customizations that are potentially required for restoring SAS Viya Platform from a backup. These customizations are not used often.

Custom Database Name

If the database name on the system you want to restore (the target system) does not match the database name on the system from where a backup has been taken (the source system), then you must provide the appropriate database name as part of the restore operation.

 The database name is provided by using an environment variable, RESTORE_DATABASE_MAPPING, which should be specified in the restore job ConfigMap, sas-restore-job-parameters. Use the following command:

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DATABASE_MAPPING", "value":"<source instance name>.<source database name>=<target instance name>.<target database name>" }]'
 ```

For example, if the source system has the database name “SharedServices” and the target system database is named “TestDatabase”, then the environment variable would look like this:

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DATABASE_MAPPING", "value":"postgres.SharedServices=postgres.TestDatabase" }]'
 ```

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. The database name is provided by using an environment variable, RESTORE_DATABASE_MAPPING, which should be specified in the restore job ConfigMap, sas-restore-job-parameters. Use the following format:

    RESTORE_DATABASE_MAPPING=<source instance name>.<source database name>=<target instance name>.<target database name>

    For example, if the source system has the database name “SharedServices” and the target system database is named “TestDatabase”, then the environment variable would look like this:

    RESTORE_DATABASE_MAPPING=postgres.SharedServices=postgres.TestDatabase
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Configure New PostgreSQL Name

If you change the name of the PostgreSQL service during migration, you must map the new name to the old name. Edit the sas-restore-job-parameters configMap using the following command:

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/data-service-{{ NEW-SERVICE-NAME }}", "value":"{{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }}" }]'
 ```

To get the value for {{ NEW-SERVICE-NAME }}:

 ```bash
 kubectl -n <name-of-namespace> get dataserver -o=custom-columns=SERVICE_NAME:.spec.registrations[].serviceName --no-headers
 ```

The command lists all the PostgreSQL clusters in your deployment. Choose the appropriate one from the list. {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is the name of the directory in backup where the PostgreSQL backup is stored (for example, 2022-03-02T09_04_11_611_0700/acme/**postgres**).

In the following example, {{ NEW-SERVICE-NAME }} is sas-cdspostgres, and {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is cpspostgres:

 ```bash
    kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/data-service-sas-cdspostgres", "value":"cpspostgres" }]'
 ```

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Edit $deploy/kustomization.yaml and add an entry to the restore_job_parameters configMap in the configMapGenerator section. The entry uses the following format:

    data-service-{{ NEW-SERVICE-NAME }}={{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }}

    To get the value for {{ NEW-SERVICE-NAME }}:

    kubectl -n <name-of-namespace> get dataserver -o=custom-columns=SERVICE_NAME:.spec.registrations[].serviceName --no-headers

    The command lists all the PostgreSQL clusters in your deployment. Choose the appropriate one from the list.

    {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is the name of the directory in backup where the PostgreSQL backup is stored (for example, 2022-03-02T09_04_11_611_0700/acme/**postgres**).

    In the following example, {{ NEW-SERVICE-NAME }} is sas-cdspostgres, and {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is cpspostgres:

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        ...
        - data-service-sas-cdspostgres=cpspostgres
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Exclude Schemas During Restore

If you need to exclude some of the schemas during migration once, edit the sas-restore-job-parameters configMap using the following command:

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SCHEMAS", "value":"{{ schema1, schema2,... }}" }]'
 ```

In the following example, “dataprofiles” and “naturallanguageunderstanding” are schemas that will not be restored.

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SCHEMAS", "value":"dataprofiles,naturallanguageunderstanding" }]'
 ```

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Edit $deploy/kustomization.yaml by adding an entry to the restore_job_parameters configMap in the configMapGenerator section. The entry uses the following format:

    EXCLUDE_SCHEMAS={schema1, schema2,...}

    In the following example, “dataprofiles” and “naturallanguageunderstanding” are schemas that will not be restored.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        ...
        - EXCLUDE_SCHEMAS=dataprofiles,naturallanguageunderstanding
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Exclude PostgreSQL Instance During Restore

If you need to exclude some of the PostgreSQL instances during restore once, edit the sas-restore-job-parameters configMap using the following command:

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SOURCES", "value":"{{ instance1, instance2,... }}" }]'
 ```

In the following example, “sas-cdspostgres” is a PostgreSQL instance that will not be restored.

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SOURCES", "value":"sas-cdspostgres" }]'
 ```

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Edit $deploy/kustomization.yaml by adding an entry to the restore_job_parameters configMap in configMapGenerator section. The entry uses the following format:

    EXCLUDE_SOURCES={instance1, instance2,...}

    In the following example, “sas-cdspostgres” is a PostgreSQL instance that will not be restored.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        ...
        - EXCLUDE_SOURCES=sas-cdspostgres
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Enable Parallel Execution for the Restore Operation

You can set a jobs option that reduces the amount of time required to restore the SAS Infrastructure Data Server. The time required to restore the database from a backup is reduced by restoring the database objects over multiple parallel jobs. The optimal value for this option depends on the underlying hardware of the server, the client, and the network (for example, the number of CPU cores). Refer to the --jobs parameter for more information about parallel jobs.

You can specify the number of parallel jobs once using the following environment variable, which should be specified in the sas-restore-job-parameters configMap.

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT", "value":"{{ number-of-jobs }}" }]'
 ```
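
For example, to use four parallel jobs (the value 4 is illustrative; choose a value that suits your hardware):

 ```bash
 kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT", "value":"4" }]'
 ```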

If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.

  1. Specify the number of parallel jobs using the following environment variable, which should be specified in the sas-restore-job-parameters config map.

    SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>

    The following section, if not present, can be added to the kustomization.yaml file in your $deploy directory. If it is present, append the properties shown in this example in the literals section.

    configMapGenerator:
    - name: sas-restore-job-parameters
      behavior: merge
      literals:
        - SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>
  2. Build and Apply the Manifest

    As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.

Restore Scripts

Overview

This README file contains information about the execution of scripts that are potentially required for restoring the SAS Viya Platform from a backup.

Append the Execute Permissions to Scripts

To execute the scripts described in this README, append the execute permission by running the following command.

chmod +x ./sas-backup-pv-copy-cleanup.sh ./scale-up-cas.sh ./sas-backup-pv-copy-cleanup-using-pvcs.sh ./sas-backup-pv-cleanup.sh

Clean Up CAS Persistent Volume Claims

Persistent volume claims (PVCs) are used by the CAS server to restore CAS data. To clean up the CAS PVCs after the restore job has completed, execute the sas-backup-pv-copy-cleanup.sh or the sas-backup-pv-copy-cleanup-using-pvcs.sh bash script. Both scripts take three arguments: the namespace, the operation to perform, and a comma-separated list of CAS instances or persistent volume claims. If you are attempting a restore after a successful SAS Viya 3.x to SAS Viya 4 migration, Method 2 is recommended.

Method 1 - Use a List of CAS instances

./sas-backup-pv-copy-cleanup.sh [namespace] [operation] "[CAS instances list]"

Here is an example:

./sas-backup-pv-copy-cleanup.sh viya04 remove "default"

Note: The default CAS instance name is “default” if the user has not changed it.

Use the following command to determine the name of the CAS instances.

kubectl -n name-of-namespace get casdeployment -L 'casoperator.sas.com/instance'

Verify that the output for the command contains the name of the CAS instances. Here is an example of the output:

test.host.com> kubectl -n viya04 get casdeployment -L 'casoperator.sas.com/instance'
NAME      AGE   INSTANCE
default   14h   default

In this example, the CAS instance is named “default”. If the instance value in the output is empty, use “default” as the instance value.

Method 2 - Use a List of Persistent Volume Claims

To get the list of persistent volume claims for CAS instances, execute the following command.

kubectl -n name-of-namespace get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'

Verify that the output contains the persistent volume claims.

test.host.com> kubectl -n viya04 get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cas-acme-default-data             Bound    pvc-6c4b3b65-cc11-4757-ac00-059d8e19f307   8Gi        RWX            nfs-client     20h
cas-acme-default-permstore        Bound    pvc-1a7cc621-5770-4e5d-b829-46eaad433460   100Mi      RWX            nfs-client     20h
cas-cyberdyne-default-data        Bound    pvc-cd5c173a-9bcf-4649-bea3-ea463930c9b4   8Gi        RWX            nfs-client     20h
cas-cyberdyne-default-permstore   Bound    pvc-253ff153-f309-4700-bef1-e041f63a7810   100Mi      RWX            nfs-client     20h
cas-default-data                  Bound    pvc-52d98061-d296-40f0-92e9-eaa34ca856c5   8Gi        RWX            nfs-client     21h
cas-default-permstore             Bound    pvc-cd8c3e86-a848-4029-9456-5841c85b15fd   100Mi      RWX            nfs-client     21h

Select the data and permstore persistent volume claims for the CAS instance.

./sas-backup-pv-copy-cleanup-using-pvcs.sh [namespace] [operation] "[PVCs]"

Here is an example:

./sas-backup-pv-copy-cleanup-using-pvcs.sh viya04 remove "cas-default-data,cas-default-permstore"

Method 3 - Use the sas-backup-pv-cleanup.sh Script

To remove data from the CAS PVCs after the restore job is completed, execute the sas-backup-pv-cleanup.sh script.

To retrieve the list of persistent volume claims (PVCs) for the source data, run the following command:

kubectl -n name-of-namespace get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'

Verify that the output contains the persistent volume claims.

test.host.com> kubectl -n viya04 get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    VOLUMEATTRIBUTESCLASS   AGE
cas-default-data        Bound    pvc-5feb5df5-daf9-4100-b998-64d48e221861   8Gi        RWX            nfs-client                      <unset>                 2d1h
cas-default-permstore   Bound    pvc-29d9ba36-7da5-4870-b7ec-719811f41caa   100Mi      RWX            nfs-client                      <unset>                 2d1h

In the command below, replace “[PVCs]” with the PVC names from the NAME column in the list above.

./sas-backup-pv-cleanup.sh [namespace] "[PVCs]"

Here is an example:

./sas-backup-pv-cleanup.sh viya04 "cas-default-data,cas-default-permstore"

Copy Backup Data to and from Backup Persistent Volume Claims

You can also use a Kubernetes job (sas-backup-pv-copy-cleanup-job) to copy backup data to and from the backup persistent volume claims like sas-common-backup-data and sas-cas-backup-data.

Method 1 - Use a List of CAS instances

  1. To create a copy job from the cronjob sas-backup-pv-copy-cleanup-job, execute the sas-backup-pv-copy-cleanup.sh script with three arguments: namespace, operation to perform, and a comma-separated list of CAS instances.

    ./sas-backup-pv-copy-cleanup.sh [namespace] [operation] "[CAS instances list]"

    Here is an example:

    ./sas-backup-pv-copy-cleanup.sh viya04 copy "default"

    Note: The default CAS instance name is “default” if the user has not changed it.

  2. The script creates a copy job for each CAS instance that is included in the comma-separated list of CAS instances. Check for the sas-backup-pv-copy-job pod that is created for each individual CAS instance.

    kubectl -n name-of-namespace get pod | grep -i sas-backup-pv-copy

    If you do not see the results you expect, see the console output of the sas-backup-pv-copy-cleanup.sh script.

Method 2 - Use a List of Persistent Volume Claims

  1. To create a copy job from the cronjob sas-backup-pv-copy-cleanup-job, execute the sas-backup-pv-copy-cleanup-using-pvcs.sh script with three arguments: namespace, operation to perform, and the backup persistent volume claim particular to the CAS instance.

    To get the list of backup persistent volume claims for CAS instances, execute the following command.

    kubectl -n name-of-namespace get pvc -l 'sas.com/backup-role=storage,app.kubernetes.io/part-of=cas'

    Verify that the output contains the names of the backup persistent volume claims for the CAS instances.

    test.host.com> kubectl -n viya04 get pvc -l 'sas.com/backup-role=storage,app.kubernetes.io/part-of=cas'
    NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    sas-cas-backup-data                     Bound    pvc-3b16a5c0-b4af-43a1-95f7-53aa30103a59   8Gi        RWX            nfs-client     21h
    sas-cas-backup-data-acme-default        Bound    pvc-ceb3f86d-c0da-419b-bc06-825a6cddb5d9   4Gi        RWX            nfs-client     21h
    sas-cas-backup-data-cyberdyne-default   Bound    pvc-306f6b28-7d5a-4769-885c-b21d3b734207   4Gi        RWX            nfs-client     21h

    Select the backup persistent volume claim for the CAS instance.

    ./sas-backup-pv-copy-cleanup-using-pvcs.sh [namespace] [operation] "[PVC]"

    Here is an example:

    ./sas-backup-pv-copy-cleanup-using-pvcs.sh viya04 copy "sas-cas-backup-data"
  2. The script creates a copy job that mounts the CAS-specific backup persistent volume claim and the sas-common-backup-data persistent volume claim. Check for the sas-backup-pv-copy-job pod that is created.

    kubectl -n name-of-namespace get pod | grep -i sas-backup-pv-copy

If you do not see the results you expect, see the console output of the sas-backup-pv-copy-cleanup.sh script.

The copy job pod mounts two persistent volume claims per CAS instance. The ‘sas-common-backup-data’ PVC is mounted at ‘/sasviyabackup’ and the ‘sas-cas-backup-data’ PVC is mounted at ‘/cas’.

Scaling CAS Deployments

To scale up the CAS deployments that are used to restore CAS data for each CAS instance, execute the scale-up-cas.sh bash script with two arguments: namespace and a comma-separated list of CAS instances.

./scale-up-cas.sh [namespace] "[CAS instances list]"

Here is an example:

./scale-up-cas.sh viya04 "default"

Note: The default CAS instance name is “default” if the user has not changed it.

Ensure that all the required sas-cas-controller pods are scaled up, especially if you have multiple CAS controllers.

Granting Security Context Constraints for Copy and Cleanup Job on an OpenShift Cluster

The $deploy/sas-bases/examples/restore/scripts/openshift directory contains a file to grant security context constraints (SCCs) for the sas-backup-pv-copy-cleanup-job pod on an OpenShift cluster. If you enable host launch on an OpenShift cluster, use the sas-backup-pv-copy-cleanup-job-scc.yaml SCC. If you did not enable host launch on an OpenShift cluster and are facing issues related to file deletion, use the sas-backup-pv-copy-cleanup-job-scc-fsgroup.yaml SCC.

Note: The security context constraint needs to be applied only if CAS is configured to allow for host identity.

  1. Use one of the following commands to apply the SCCs.

    Using kubectl

    kubectl apply -f sas-backup-pv-copy-cleanup-job-scc.yaml

    or

    kubectl apply -f sas-backup-pv-copy-cleanup-job-scc-fsgroup.yaml

    Using the OpenShift CLI

    oc create -f sas-backup-pv-copy-cleanup-job-scc.yaml

    or

    oc create -f sas-backup-pv-copy-cleanup-job-scc-fsgroup.yaml
  2. Use the following command to link the SCCs to the appropriate Kubernetes service account. Replace the entire variable {{ NAME-OF-NAMESPACE }}, including the braces, with the Kubernetes namespace used for the SAS Viya platform.

    oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-backup-pv-copy-cleanup-job -z sas-viya-backuprunner
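
For example, if the SAS Viya platform namespace is viya04 (the namespace name here is illustrative):

oc -n viya04 adm policy add-scc-to-user sas-backup-pv-copy-cleanup-job -z sas-viya-backuprunner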

Configure Restore Job Parameters for SAS Model Repository Service

Overview

The SAS Model Repository service provides support for registering, organizing, and managing models within a common model repository. This service is used by SAS Event Stream Processing, SAS Intelligent Decisioning, SAS Model Manager, Model Studio, SAS Studio, and SAS Visual Analytics.

Analytic store (ASTORE) files are extracted from the analytic store’s CAS table in the ModelStore caslib and written to the ASTORES persistent volume, when the following actions are performed:

When Python models (or decisions that use Python models) are published to the SAS Micro Analytic Service or CAS, the Python score resources are copied to the ASTORES persistent volume. Score resources for project champion models that are used by SAS Event Stream Processing are also copied to the persistent volume.

During the migration process, the analytic stores models and Python models are restored in the common model repository, along with their associated resources and analytic store files in the ASTORES persistent volume.

Note: The Python score resources from a SAS Viya 3.5 to SAS Viya 4 environment are not migrated with the SAS Model Repository service. For more information, see Promoting and Migrating Content in SAS Model Manager: Administrator’s Guide.

This README describes how to make the restore job parameters available to the sas-model-repository container within your deployment, as part of the backup and restore process. The restore process is performed during start-up of the sas-model-repository container, if the SAS_DEPLOYMENT_START_MODE parameter is set to RESTORE or MIGRATION.
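
If you want to confirm which start mode is in effect in a running deployment, you can check the environment of the sas-model-repository container (standard kubectl commands; the pod name is whatever the first command reports):

kubectl get pods -n <namespace> | grep model-repository
kubectl exec -n <namespace> <podname> -c sas-model-repository -- env | grep SAS_DEPLOYMENT_START_MODE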

Prerequisites

No prerequisite steps are required.

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-model-repository/restore directory to the $deploy/site-config/sas-model-repository/restore directory. Create the target directory, if it does not already exist.

  2. Make a copy of the kustomization.yaml file to recover after temporary changes are made: cp kustomization.yaml kustomization.yaml.save

  3. Add site-config/sas-model-repository/restore/restore-transformer.yaml to the transformers block of the base kustomization.yaml file in the $deploy directory.

    transformers:
      - site-config/sas-model-repository/restore/restore-transformer.yaml

    Excerpt from the restore-transformer.yaml file:

    patch: |-
      # Add restore job parameters
      - op: add
        path: /spec/template/spec/containers/0/envFrom/-
        value:
          configMapRef:
            name: sas-restore-job-parameters
  4. Add the sas-restore-job-parameters code below to the configMapGenerator section of kustomization.yaml, and remove the configMapGenerator line, if it is already present in the default kustomization.yaml:

    configMapGenerator:
      - name: sas-restore-job-parameters
        behavior: merge
        literals:
          - SAS_BACKUP_ID={{ SAS-BACKUP-ID-VALUE }}
          - SAS_DEPLOYMENT_START_MODE=RESTORE

    Here are more details about the previous code.

    • Replace the value for {{SAS-BACKUP-ID-VALUE}} with the ID of the backup that is selected for restore.
    • To increase the logging levels, add the following line to the literals section:
      • SAS_LOG_LEVEL=DEBUG

    For more information, see Backup and Restore: Perform a Restore in SAS Viya Platform Operations.

  5. If you need to rerun a migration, you must remove the RestoreBreadcrumb.txt file from the /models/resources/viya directory.

    Here is example code for removing the file:

    kubectl get pods -n <namespace> | grep model-repository
    kubectl exec -it -n <namespace> <podname> -c sas-model-repository -- bash
    rm /models/resources/viya/RestoreBreadcrumb.txt
  6. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Additional Resources

Update Checker Cron Job

The Update Checker cron job builds a report comparing the currently deployed release with available releases in the upstream repository. The report is written to the stdout of the launched job pod and indicates when new content related to the deployment is available.
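
After a report job has run, you can read the report from the job pod's log. The grep filter and pod name below are illustrative; the exact pod name depends on how the job was launched in your deployment:

kubectl -n name-of-namespace get pods | grep update-checker
kubectl -n name-of-namespace logs <update-checker-pod-name>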

This example includes the following kustomize transform that defines proxy environment variables for the report when it is running behind a proxy server:

$deploy/sas-bases/examples/update-checker/proxy-transformer.yaml

For information about using the Update Checker, see View the Update Checker Report.

Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.

Configuring General Ingress Options

Overview

You can use the examples found within $deploy/sas-bases/examples/ingress-configuration/ to set general configuration values for Ingress resources.

The INGRESS_CLASS_NAME specifies the name of the IngressClass which SAS Viya Platform Ingress resources should use for this deployment. By default, SAS Viya Platform Ingress resources will use the nginx IngressClass. For more information about IngressClass resources, see Ingress class and Using IngressClasses.

The corresponding transformer file to override the ingressClassName field in Ingress resources is found at sas-bases/overlays/ingress-configuration/update-ingress-classname.yaml.
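
Before overriding the default, you can check which IngressClass resources exist in your cluster with a standard kubectl command; the value you set for INGRESS_CLASS_NAME should match one of the names in this list:

kubectl get ingressclasses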

Installation

Use these steps to apply the desired properties to your SAS Viya platform deployment.

  1. Copy the $deploy/sas-bases/examples/ingress-configuration/ingress-configuration-inputs.yaml file to the location of your ingress configuration overlays, such as site-config/ingress-configuration/.

  2. Define the properties in the ingress-configuration-inputs.yaml file which match the desired configuration. To define a property, uncomment it and update its token value as described in the comments in the file.

  3. Add the relative path of ingress-configuration-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ...
    resources:
    ...
    - site-config/ingress-configuration/ingress-configuration-inputs.yaml
    ...
  4. Add the relative path(s) of the corresponding transformer file(s) to the transformers block of the base kustomization.yaml file. There should be one transformer file added per option defined within the ConfigMap. Here is an example:

    ...
    transformers:
    ...
    - sas-bases/overlays/ingress-configuration/update-ingress-classname.yaml
    ...

Using the Inventory Collector

Overview

The Inventory Collector is a CronJob that contains two Jobs. They are available to run after the deployment is fully up and running. The first Job creates inventory tables, and the second Job creates an inventory comparison table. The tables are created in the protected SystemData caslib and are used by the SAS Inventory Reports located in the Content/Products/SAS Environment Manager/Dashboard Items folder. Access to the tables and reports is restricted to users who are members of the SAS Administrators group.

For more information, see SAS Help Center Documentation.

Usage

Inventory Collector Job

The Inventory Collector Job must be run before the Inventory Comparison Job. It collects an inventory of artifacts created by various SAS Viya platform services. It also creates the SASINVENTORY4 and SASVIYAINVENTORY4_CASSVRDETAILS CAS tables in the SystemData caslib that are referenced by the SAS Viya 4 Inventory Report.

Run Inventory Collection on All Tenants

kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-job

Run Inventory Collection on a Single Tenant

Set the TENANT environment variable to the name of the tenant, then create and run the Job. In the following example, the tenant name is “acme”:

kubectl set env cronjob/sas-inventory-collector TENANT=acme
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-job

Run Inventory Collection on the Provider Tenant

Set the TENANT environment variable to “provider”, then create and run the Job. Here is an example:

kubectl set env cronjob/sas-inventory-collector TENANT=provider
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-job

Remove the TENANT Environment Variable

kubectl set env cronjob/sas-inventory-collector TENANT-

Schedule an Inventory

The sas-inventory-collector CronJob is disabled by default. To enable it, run this command:

kubectl patch cronjob sas-inventory-collector -p '{"spec":{"suspend": false}}'
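
To confirm the change, you can inspect the cron job with a standard kubectl command; the SUSPEND column of the output should show False:

kubectl get cronjob sas-inventory-collector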

Schedule an Inventory in Single-Tenant Environments

A schedule can be set in the CronJob Kubernetes resource by using the kubectl patch command. For example, to run once a day at midnight, run this command:

kubectl patch cronjob sas-inventory-collector -p '{"spec":{"schedule": "0 0 * * *"}}'

Scheduling the CronJob in the cluster is permitted for single-tenant environments.

Schedule an Inventory in Multi-Tenancy Environments

Multi-tenant environments should run CronJobs outside the cluster on a machine where the admin can run kubectl commands. This approach allows multi-tenant Jobs to run independently and simultaneously. Here is an example that runs the provider tenant at midnight and the acme tenant five minutes later:

Add a crontab to a server with access to kubectl and the cluster namespace

$ crontab -e
Crontab entries
0 0 * * * /PATH_TO/inventory-collector.sh provider
5 0 * * * /PATH_TO/inventory-collector.sh acme

Sample inventory-collector.sh

This sample script can be called by a crontab entry in a server running outside the cluster.

#!/bin/bash
TENANT=$1
export KUBECONFIG=/PATH_TO/kubeconfig
# unset the COMPARISON environment variable if set
/PATH_TO/kubectl set env cronjob/sas-inventory-collector COMPARISON-
# set the TENANT environment variable
/PATH_TO/kubectl set env cronjob/sas-inventory-collector TENANT=$TENANT
# delete any previously run job
/PATH_TO/kubectl delete job sas-inventory-collector-$TENANT
# run the job
/PATH_TO/kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-$TENANT


Inventory Comparison Job

The inventory comparison job compares two inventory tables. The resulting table is used by the SAS Viya Inventory Comparison report.

Run Inventory Comparison in a Non-MT Environment

kubectl set env cronjob/sas-inventory-collector COMPARISON=true
kubectl delete job sas-inventory-comparison-job
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-

Run Inventory Comparison on the Provider Tenant in an MT Environment

Here is an example:

kubectl set env cronjob/sas-inventory-collector TENANT=provider
kubectl set env cronjob/sas-inventory-collector COMPARISON=true
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-

Run Inventory Comparison for a Single Tenant in an MT Environment

kubectl set env cronjob/sas-inventory-collector TENANT=<tenant-name>
kubectl set env cronjob/sas-inventory-collector COMPARISON=true
kubectl delete job sas-inventory-comparison-job
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-

Comparing SAS Viya 3 to SAS Viya 4 After a Migration

Inventory collection, or scanning as it is referred to in SAS Viya 3, is typically run before a migration. The first time that you run a collection and then a comparison after a migration, pre-migration artifacts are compared to post-migration artifacts. Subsequent collection and comparison runs compare post-migration artifacts to post-migration artifacts. To rerun a pre-migration to post-migration comparison, set the COMPARISON=migration environment variable.

kubectl set env cronjob/sas-inventory-collector TENANT=<tenant-name>
kubectl set env cronjob/sas-inventory-collector COMPARISON=migration
kubectl delete job sas-inventory-comparison-job
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-

Configure Git for SAS Model Publish Service

Overview

The Model Publish service uses the sas-model-publish-git dedicated PersistentVolume Claim (PVC) as a workspace. When a user publishes a model to a Git destination, sas-model-publish creates a local repository under /models/git/publish/, which is then mounted from the sas-model-publish-git PVC in the start-up process.

Files

In order for the Model Publish service to successfully publish a model to a Git destination, the user must prepare and adjust the following file, which is located in the $deploy/sas-bases/examples/sas-model-publish/git directory:

storage.yaml - defines a PVC for the Git local repository.

The following file is located in the $deploy/sas-bases/overlays/sas-model-publish/git directory and does not need to be modified:

git-transformer.yaml - adds the sas-model-publish-git PVC to the sas-model-publish deployment object.

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-model-publish/git directory to the $deploy/site-config/sas-model-publish/git directory. Create the target directory, if it does not already exist.

    Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /models/git/ mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directory.

  2. Modify the parameters in storage.yaml. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.

    • Replace {{ STORAGE-CAPACITY }} with the amount of storage required.
    • Replace {{ STORAGE-CLASS-NAME }} with the appropriate storage class from the cloud provider that supports ReadWriteMany access mode.
  3. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-model-publish/git to the resources block.
    • Add sas-bases/overlays/sas-model-publish/git/git-transformer.yaml to the transformers block.

    Here is an example:

    resources:
      - site-config/sas-model-publish/git
    
    transformers:
      - sas-bases/overlays/sas-model-publish/git/git-transformer.yaml
  4. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Verify Overlay for the Persistent Volume

  1. Run the following command to verify whether the overlays have been applied:

    kubectl describe pod  <sas-model-publish-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts: /models/git/publish
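
To narrow the describe output to the mount list, you can pipe it through grep (the -A value is illustrative):

kubectl describe pod <sas-model-publish-pod-name> -n <name-of-namespace> | grep -A 10 'Mounts:'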

Additional Resources

Configure Kaniko for SAS Model Publish Service

Overview

Kaniko is a tool to build container images from a Dockerfile without depending on a Docker daemon. The Kaniko container can load the build context from cloud storage or a local directory, and then push the built image to the container registry for a specific destination.

The Model Publish service uses the sas-model-publish-kaniko dedicated PersistentVolume Claim (PVC) as a workspace, which is shared with the Kaniko container. When a user publishes a model to a container destination, sas-model-publish creates a temporary folder (publish-xxxxxxxx) on the volume (/models/kaniko/), which is then mounted from the sas-model-publish-kaniko PVC in the start-up process.

The publishing process generates the following content:

Note: The “xxxxxxxx” part of the folder names is a system-generated alphanumeric string and is 8 characters in length.

The Model Publish service then loads a pod template from the sas-model-publish-kaniko-job-config (as defined in podtemplate.yaml) and dynamically constructs a job specification. The job specification helps mount the directories in the Kaniko container. The default pod template uses the official Kaniko image URL gcr.io/kaniko-project/executor:latest. Users can replace this image URL in the pod template if they want to host the Kaniko image in a different container registry or use a Kaniko debug image.

The Kaniko container is started after a batch job is executed. The Model Publish service checks the job status every 30 seconds. The job times out after 30 minutes, if it has not completed.

The Model Publish service deletes the job and the temporary directories after the job has completed successfully, completed with errors, or has timed out.
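
While a publish is in progress, you can watch the job and its pod from the command line; the grep filter is illustrative, since the job name is generated for each publish:

kubectl -n <name-of-namespace> get jobs,pods | grep -i publish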

Prerequisites

If you are deploying in a Red Hat OpenShift cluster, use this command to link the service account to run as root user.

oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user anyuid -z sas-model-publish-kaniko

Files

In order for the Model Publish service to successfully publish a model to a container destination, the user must prepare and adjust the following files that are located in the $deploy/sas-bases/examples/sas-model-publish/kaniko directory:

storage.yaml - defines a PVC for the Kaniko workspace.

podtemplate.yaml - defines a pod template for the batch job that launches the Kaniko container.

sa.yaml - defines the service account for running the Kaniko job.

The following file is located in the $deploy/sas-bases/overlays/sas-model-publish/kaniko directory and does not need to be modified:

kaniko-transformer.yaml - adds the sas-model-publish-kaniko PVC to the sas-model-publish deployment object.

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-model-publish/kaniko directory to the $deploy/site-config/sas-model-publish/kaniko directory. Create the destination directory, if it does not already exist.

    Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /models/kaniko/ mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directory.

  2. Modify the parameters in the podtemplate.yaml file, if you need to implement customized requirements, such as the location of Kaniko image.

  3. Modify the parameters in storage.yaml. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.

    • Replace {{ STORAGE-CAPACITY }} with the amount of storage required.
    • Replace {{ STORAGE-CLASS-NAME }} with the appropriate storage class from the cloud provider that supports ReadWriteMany access mode.
  4. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-model-publish/kaniko to the resources block.
    • Add sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml to the transformers block.

    Here is an example:

    resources:
      - site-config/sas-model-publish/kaniko
    
    transformers:
      - sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml
  5. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Verify Overlay for the Persistent Volume

  1. Run the following command to verify whether the overlays have been applied:

    kubectl describe pod  <sas-model-publish-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts: /models/kaniko

Additional Resources

Configure Buildkit for SAS Decisions Runtime Builder Service

Overview

BuildKit is a tool that is used to build container images from a Dockerfile without depending on a Docker daemon. BuildKit can build a container image in Kubernetes, and then push the built image to the container registry for a specific destination.

The Decisions Runtime Builder service uses the sas-decisions-runtime-builder-buildkit dedicated PersistentVolume Claim (PVC) as a cache. It caches builder images and layers beyond the life cycle of a single job execution.

An Update request to the Decisions Runtime Builder service starts a Kubernetes job that builds a new image. The service checks the job status every 30 seconds. If a job is not complete after 30 minutes, it times out.

The Decisions Runtime Builder service deletes the job and the temporary directories after the job has completed successfully, completed with errors, or has timed out.

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-decisions-runtime-builder/buildkit directory to the $deploy/site-config/sas-decisions-runtime-builder/buildkit directory. Create the destination directory, if it does not already exist.

    Note: Verify that the overlay has been applied. If the Buildkit daemon deployment already exists, you do not need to take any further action, unless you want to change the overlay parameters for the mounted directory.

  2. Modify the parameters in the files storage.yaml and publish-storage.yaml in the directory $deploy/site-config/sas-decisions-runtime-builder/buildkit. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.

    • Replace {{ STORAGE-CAPACITY }} with the amount of storage required.
    • Replace {{ STORAGE-CLASS-NAME }} with the appropriate storage class from the cloud provider that supports ReadWriteMany access mode.
  3. (OpenShift deployments only) Uncomment and update the {{ FSGROUP_VALUE }} token in the $deploy/site-config/sas-decisions-runtime-builder/buildkit/publish-job-template.yaml and $deploy/site-config/sas-decisions-runtime-builder/buildkit/update-job-template.yaml files to match the desired numerical group value.

    Note: For OpenShift, you can obtain the allocated GID and value by using this command:

    kubectl describe namespace <name-of-namespace>

    Use the minimum value of the openshift.io/sa.scc.supplemental-groups annotation. For example, if the output is as follows, you would use 1000700000.

    Name:         sas-1
    Labels:       <none>
    Annotations:  ...
                  openshift.io/sa.scc.supplemental-groups: 1000700000/10000
                  ...
  4. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-decisions-runtime-builder/buildkit to the resources block.
    • Add sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml to the transformers block.

    Here is an example:

    resources:
      - site-config/sas-decisions-runtime-builder/buildkit
    
    transformers:
      - sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml
  5. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
  6. (OpenShift deployments only) Apply a security context constraint (SCC):

    kubectl apply -f $deploy/sas-bases/overlays/sas-decisions-runtime-builder/buildkit/service-account/buildkit-scc.yaml

    Bind the SCC to the service account with the command that includes the name of the SCC that you applied:

    oc -n name-of-namespace adm policy add-scc-to-user sas-buildkit -z sas-buildkit

Using Buildkit on Clusters with an Incorrect User Namespace Configuration

The sas-buildkitd deployment typically starts without any issues. However, for some cluster deployments, you might receive the following error:

/proc/sys/user/max_user_namespaces needs to be set to nonzero

If this occurs, use the buildkit-userns-transformer to configure user namespace support. This is done with an init container that is running in privileged mode during start-up.
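
Before applying the transformer, you can verify the current setting on an affected node (standard Linux commands, run on the node itself rather than in a pod):

sysctl user.max_user_namespaces
cat /proc/sys/user/max_user_namespaces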

  1. Add ‘sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-userns-transformer.yaml’ to the transformers block after the ‘buildkit-transformer.yaml’ entry. Here is an example:

    transformers:
      - sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml
      - sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-userns-transformer.yaml
  2. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

Using Buildkit with Registries That Use Self-Signed Certificates

If the registry contains SAS Viya platform deployment images or the destination registry is using self-signed certificates, those certificates should be added to the buildkit deployment. If they are not, the image build generates a ‘certificate signed by unknown authority’ error.

If you receive that error, complete the following steps to add self-signed certificates to the Buildkit deployment.

  1. Copy the files in the $deploy/sas-bases/examples/sas-decisions-runtime-builder/buildkit/cert directory to the $deploy/site-config/sas-decisions-runtime-builder/buildkit/certs directory. Create the destination directory, if it does not already exist.

  2. Add the self-signed certificates that you want to be trusted to the $deploy/site-config/sas-decisions-runtime-builder/buildkit/certs directory.

    In that directory, edit the kustomization.yaml file to add the certificate files to the files field in the secretGenerator section.

    resources: []
    secretGenerator:
      - name: sas-buildkit-registry-secrets
        files:
          - registry1.pem
          - registry2.pem
  3. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-decisions-runtime-builder/buildkit/certs to the resources block.
    • Add sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-certificates-transformer.yaml to the transformers block after buildkit-transformer.

    Here is an example:

    resources:
      - site-config/sas-decisions-runtime-builder/buildkit
      - site-config/sas-decisions-runtime-builder/buildkit/certs
    
    transformers:
      - sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml
      - sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-certificates-transformer.yaml
  4. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

Verify Overlay for the Buildkit

Run the following command to verify whether the Buildkit overlay has been applied. It should show at least one pod starting with the prefix ‘buildkitd’.

kubectl -n <name-of-namespace> get pods  |  grep buildkitd

Note: SAS plans to discontinue the use of Kaniko in the future.

Additional Resources

Configure Kaniko for SAS Decisions Runtime Builder Service

Overview

Kaniko is a tool that is used to build container images from a Dockerfile without depending on a Docker daemon. Kaniko can build a container image in Kubernetes and then push the built image to the container registry for a specific destination.

The Decisions Runtime Builder service then loads a pod template from the sas-decisions-runtime-builder-kaniko-job-config (as defined in updateJobtemplate.yaml) and dynamically constructs a job specification. The job specification helps mount the directories in the Kaniko container.

The Kaniko container is started after a batch job is executed. The Decisions Runtime Builder service checks the job status every 30 seconds. The job times out after 30 minutes, if it has not completed.

Prerequisites

If you are deploying in a Red Hat OpenShift cluster, use the following command to link the service account to run as the root user.

oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user anyuid -z sas-decisions-runtime-builder-kaniko

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-decisions-runtime-builder/kaniko directory to the $deploy/site-config/sas-decisions-runtime-builder/kaniko directory. Create the destination directory, if it does not already exist.

  2. Modify the parameters in the $deploy/site-config/sas-decisions-runtime-builder/kaniko/storage.yaml file. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.

    • Replace {{ STORAGE-CAPACITY }} with the amount of storage required.
    • Replace {{ STORAGE-CLASS-NAME }} with the appropriate storage class from the cloud provider that supports ReadWriteMany access mode.
  3. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-decisions-runtime-builder/kaniko to the resources block.
    • Add sas-bases/overlays/sas-decisions-runtime-builder/kaniko/kaniko-transformer.yaml to the transformers block. Here is an example:
    resources:
      - site-config/sas-decisions-runtime-builder/kaniko
    
    transformers:
      - sas-bases/overlays/sas-decisions-runtime-builder/kaniko/kaniko-transformer.yaml
  4. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Verify the Overlay for Kaniko

Run the following command to verify whether the overlay has been applied. If the overlay is applied, the command shows a podTemplate named ‘sas-decisions-runtime-builder-kaniko-job-config’.

kubectl get podtemplates | grep sas-decisions-runtime-builder-kaniko-job-config

Additional Resources

Configure SAS Model Publish Service to Add Service Account

Note: This guide applies only to SAS Viya platform deployments in a Red Hat OpenShift environment.

In OpenShift, a security context constraint (SCC) is required for publishing objects (models or decisions), as well as for updating and validating published objects. These actions create jobs within the cluster that must run as user 1001 (sas), must have permission to mount volumes containing container registry credentials, and must have access to existing image pull secrets. This README explains how to apply the sas-model-publish SCC to the appropriate service accounts.

Prerequisites

Granting SCC on an OpenShift Cluster

The $deploy/sas-bases/overlays/sas-model-publish/service-account directory contains a file to grant the SCC to the sas-model-publish and sas-decisions-runtime-builder jobs.

A Kubernetes cluster administrator should add this SCC to their OpenShift cluster prior to deploying the SAS Viya platform. Use the following command:

kubectl apply -f sas-model-publish-scc.yaml

Bind the SCC to a Service Account

After the SCC has been applied, you must link it to the appropriate service accounts that will use it. Use the following commands:

oc -n {{ NAME-OF-VALIDATION-NAMESPACE }} adm policy add-scc-to-user sas-model-publish -z default

oc -n {{ NAME-OF-VIYA-NAMESPACE }} adm policy add-scc-to-user sas-model-publish -z sas-model-publish-buildkit

oc -n {{ NAME-OF-VIYA-NAMESPACE }} adm policy add-scc-to-user sas-model-publish -z sas-decisions-runtime-builder-buildkit

Installation

Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Post-Installation Tasks

  1. Run the following command to verify whether the overlay has been applied:

    kubectl -n <name-of-namespace> get rolebindings -o wide | grep sas-model-publish
  2. Verify that the sas-model-publish SCC is bound to sas-model-publish-buildkit and sas-decisions-runtime-builder-buildkit service accounts.

OpenSearch for SAS Viya Platform

Overview

OpenSearch is an Apache 2.0-licensed search and analytics suite based on Elasticsearch 7.10.2. The SAS Viya platform provides two options for your search cluster: an internal instance provided by SAS or an external instance that you would like the SAS Viya platform to use. Before deploying, you must select which of these options you want to use for your SAS Viya platform deployment.

Note: The search cluster must be either internally managed or externally managed. SAS does not support mixing internal and external search clusters in the same deployment. Once deployed, you cannot switch between an internal and external search cluster.

Internally Managed

SAS Viya platform support for an internally managed search cluster is provided by a proprietary sas-opendistro Kubernetes operator.

If you want to use an internal instance of OpenSearch, refer to the README file located at $deploy/sas-bases/overlays/internal-elasticsearch/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_an_internal_opensearch_instance_for_sas_viya.htm (for HTML format).

Externally Managed

If you want to use an external instance of OpenSearch, you should refer to the README file located at $deploy/sas-bases/examples/configure-elasticsearch/external/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_an_external_opensearch_instance.htm (for HTML format).

Externally managed cloud subscriptions to Elasticsearch and Open Distro for Elasticsearch are not supported.

Security Considerations

SAS strongly recommends the use of SSL/TLS to secure data in transit. You should follow the documented best practices provided by OpenSearch and your cloud platform provider for securing access to your external OpenSearch instance using SSL/TLS. Securing your OpenSearch cluster with SSL/TLS entails the use of certificates. In order for the SAS Viya platform to connect directly to a secure OpenSearch cluster, you must provide the OpenSearch cluster’s CA certificate to the SAS Viya platform prior to deployment. Failing to configure the SAS Viya platform to trust the OpenSearch cluster’s CA certificate results in “Connection refused” errors. For instructions on how to provide CA certificates to the SAS Viya platform, see the section labeled “Incorporating Additional CA Certificates into the SAS Viya Platform Deployment” in the README file at $deploy/sas-bases/examples/security/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_network_security_and_encryption_using_sas_security_certificate_framework.htm (for HTML format).

Configure an Internal OpenSearch Instance for the SAS Viya Platform

Note: SAS terminology standards prohibit the use of the term “master.” However, this document refers to the term “master node” to maintain alignment with OpenSearch documentation.

Note: In previous releases, the SAS Viya platform included OpenDistro for Elasticsearch. Many Kubernetes resources keep the name OpenDistro for backward compatibility.

This README file describes the files used to customize an internally managed instance of OpenSearch using the sas-opendistro operator provided by SAS.

Instructions

In order to use the internal search cluster instance, you must customize your deployment to point to the required overlay and transformers.

  1. Go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the resources block of that file, add the following content, including adding the block if it does not already exist.

    resources:
    ...
    - sas-bases/overlays/internal-elasticsearch
    ...
  2. Go to the base kustomization.yaml file ($deploy/kustomization.yaml). In the transformers block of that file, add the following content, including adding the block if it does not already exist.

    transformers:
    ...
    - sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
    ...
  3. Deploying OpenSearch requires additional configuration because OpenSearch must be able to create many memory-mapped areas, which fails if vm.max_map_count is set too low.

    Several methods for configuring the sysctl option vm.max_map_count are documented below. Choose a method that is supported for your platform.

    Method | Platforms | Requirements
    ------ | --------- | ------------
    Use sas-opendistro-sysctl init container (recommended) | Microsoft Azure Kubernetes Service (AKS) without Microsoft Defender; Amazon Elastic Kubernetes Service (EKS); Google Kubernetes Engine (GKE); Red Hat OpenShift | Privileged Containers; Allow Privilege Escalation
    Use sas-opendistro-sysctl DaemonSet | Microsoft Azure Kubernetes Service (AKS) with Microsoft Defender | Privileged Containers; Allow Privilege Escalation; Kubernetes nodes for stateful workloads labeled with workload.sas.com/class as stateful
    Apply sysctl configuration manually | All platforms | Ability to configure sysctl on stateful Kubernetes nodes
    Disable mmap support | All platforms | Unable to apply sysctl configuration manually or use privileged containers
  4. Use sas-opendistro-sysctl init container: If your deployment allows privileged containers, add a reference to sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml to the transformers block of the base kustomization.yaml. The sysctl-transformer.yaml transformer must be included before the sas-bases/overlays/required/transformers.yaml transformer. Here is an example:

    transformers:
    - sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
    - sas-bases/overlays/required/transformers.yaml
  5. Use sas-opendistro-sysctl DaemonSet (Microsoft Azure Kubernetes Service with Microsoft Defender only): If your deployment allows privileged containers and you are deploying to an environment secured by Microsoft Defender, add a reference to sas-bases/overlays/internal-elasticsearch/sysctl-daemonset.yaml to the resources block of the base kustomization file. Here is an example:

    resources:
    - sas-bases/overlays/internal-elasticsearch/sysctl-daemonset.yaml
  6. Apply sysctl configuration manually: If your deployment does not allow privileged containers, the Kubernetes administrator should set the vm.max_map_count property to be at least 262144 for stateful workload nodes.
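
    For example, on each stateful workload node an administrator could set the value at runtime and persist it across reboots. This is a sketch only: the file name 99-opensearch.conf is illustrative, and the exact procedure depends on how your nodes are provisioned and managed.

    # Apply the setting immediately
    sysctl -w vm.max_map_count=262144
    # Persist the setting across reboots
    echo "vm.max_map_count=262144" > /etc/sysctl.d/99-opensearch.conf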

  7. Disable mmap support: If your deployment does not allow privileged containers and you are in an environment where you cannot control the memory map settings, add a reference to sas-bases/overlays/internal-elasticsearch/disable-mmap-transformer.yaml to the transformers block of the base kustomization.yaml to disable memory mapping instead. The disable-mmap-transformer.yaml transformer must be included before the sas-bases/overlays/required/transformers.yaml. Here is an example:

    transformers:
    - sas-bases/overlays/internal-elasticsearch/disable-mmap-transformer.yaml
    - sas-bases/overlays/required/transformers.yaml

    Disabling memory mapping is discouraged since doing so will negatively impact performance and may result in out of memory exceptions.

  8. For additional customization options, refer to the following README files:

  9. Update the storage class used by OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/storage/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_default_storageclass_for_opensearch.htm (for HTML format).

  10. Configure a custom topology for OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/topology/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_default_topology_for_opensearch.htm (for HTML format).
  11. Configure a custom run user for OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/run-user/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_run_user_for_opensearch.htm (for HTML format).
  12. Additional configuration steps for Red Hat OpenShift: $deploy/sas-bases/examples/configure-elasticsearch/internal/openshift/README.md (for Markdown format) or $deploy/sas-bases/docs/opensearch_on_red_hat_openshift.htm (for HTML format).
  13. Additional configuration for OpenSearch Security Audit Logs: $deploy/sas-bases/examples/configure-elasticsearch/internal/security-audit-logs/README.md (for Markdown format) or $deploy/sas-bases/docs/opensearch_security_audit_logs.htm (for HTML format).
  14. Configure a temporary directory for JNA in OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/jna/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_temporary_directory_for_jna_in_opensearch.htm (for HTML format).

  15. After you revise the base kustomization.yaml file, continue your SAS Viya platform deployment as documented in SAS Viya Platform: Deployment Guide.

Supported Topologies

A single cluster is supported with the following topologies:

Operator Constraints

The operator does not support the following actions:

Configure a Default StorageClass for OpenSearch

OpenSearch requires a StorageClass to be configured in the Kubernetes cluster that provides block storage (e.g. virtual disks) or a local file system mount to store the search indices. Remote file systems, such as NFS, should not be used to store the search indices.

By default, the OpenSearch deployment uses the default StorageClass defined in the Kubernetes cluster. If a different StorageClass is required to meet the requirements, this README file describes how to specify a new StorageClass and configure it to be used by OpenSearch.

Note: The default StorageClass should be set according to the target environment and usage requirements. The transformer can reference an existing or custom StorageClass.

In order to specify a default StorageClass to be used by OpenSearch, you must customize your deployment to include a transformer.

Configure Storage Class

If a new StorageClass must be defined in the target cluster to meet the requirements for OpenSearch, consult the documentation for the target Kubernetes platform for details on available storage options and how to configure a new StorageClass.

Configure Default Storage Class

  1. Copy the StorageClass transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/storage/storage-class-transformer.yaml into the $deploy/site-config directory.

  2. Open the storage-class-transformer.yaml file for editing and replace {{ STORAGE-CLASS }} with the name of the StorageClass to be used by OpenSearch.

  3. Add the storage-class-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/storage-class-transformer.yaml

StorageClass Limitations

Additional Resources

For more information, see SAS Viya Platform: Deployment Guide.

Configure a Default Topology for OpenSearch

Overview

This README file describes the files used to specify and modify the topology to be used by the sas-opendistro operator.

Note: The default topology should be set according to the target environment and usage requirements. The transformer can reference an existing or custom topology.

Note: SAS terminology standards prohibit the use of the term “master.” However, this document refers to the term “master node” to maintain alignment with OpenSearch documentation.

Modifying Topologies

The default installation topology consists of one OpenSearch node configured as both a master and a data node. Although this topology is acceptable for initial small scale data imports, configuration, and testing, SAS does not recommend that it be used in a production environment.

The recommended production topology should consist of no less than three master nodes and no less than three data storage nodes. This topology provides the following benefits:

Migrating to Production Setup

If you wish to migrate your initial data from the initial setup to the production setup, you must modify the cluster topology in such a manner that no data or configuration is lost.

One way of doing this is to transition your topology through an intermediate state into your final production state. Here is an example:

Initial State | Intermediate State | Final State
------------- | ------------------ | -----------
[Master/Data Node] | [Master/Data Node] |
 | [Master Node 1] | [Master Node 1]
 | [Master Node 2] | [Master Node 2]
 | [Master Node 3] | [Master Node 3]
 | [Data Node 1] | [Data Node 1]
 | [Data Node 2] | [Data Node 2]
 | [Data Node 3] | [Data Node 3]

This example allows the cluster to copy the data stored on the Master/Data Node across to the data nodes. The migration will have to pause in the intermediate state for a period while the data is spread across the cluster. Depending on the volume of data, this should be completed within a few tens of minutes.

Migration Process

  1. Copy the migrate-topology-step1.yaml file into your site-config directory.

  2. Edit the example topology to reflect your desired topology:

    • set the appropriate number of master nodes and data nodes
    • set the heap size for each of the nodes - data nodes will need more heap space
    • set the amount of disk space allowed to store the indexes
  3. Remove the following line from the transformers block of the base kustomization file ($deploy/kustomization.yaml) if it is present.

    transformers:
    ...
    - sas-bases/overlays/internal-elasticsearch/ha-transformer.yaml
    ...
  4. Add the topology reference to the transformers block of the base kustomization.yaml file. Here is an example of a modified base kustomization.yaml file with a reference to the custom topology example:

    transformers:
    ...
    - site-config/configure-elasticsearch/internal/topology/migrate-topology-step1.yaml
  5. Perform the commands to update the software. These are the same as the commands to originally deploy the software as outlined in SAS Viya Platform: Deployment Guide: Deployment: Installation: Deploy the Software. The important difference to note is that as you have now modified the $deploy/kustomization.yaml file to include your topology changes, the deployment process will not perform a complete rebuild but will instead adapt the existing system to your new configuration.

  6. Once the new configuration has deployed, wait for the new servers to share out all the data.

  7. Repeat steps 1 through 5 using the migrate-topology-step2.yaml file. Ensure that you make the same modifications to the step2 file as you made in the step1 file.

Topology Examples

Custom Topology Example

The custom topology example should be used to define and customize highly available production OpenSearch deployments. See the example file located at sas-bases/examples/configure-elasticsearch/internal/topology/custom-topology.yaml.

Single Node Topology Example

The single node topology example should not be used in production. The single node topology is intended to minimize resources in development, demonstration, class, and test deployments. See the example file located at sas-bases/examples/configure-elasticsearch/internal/topology/single-node-topology.yaml.

Additional Configuration

In addition to the general cluster topology, properties such as the heap size and disk size of each individual node set can be adjusted depending on the use case for the OpenSearch cluster, expected index sizes, shard numbers, and/or hardware constraints.

Configuring the Volume Claim

When the volume claim’s storage capacity is not specified in the node spec, the operator creates a PersistentVolumeClaim with a capacity of 128Gi for each node in the OpenSearch cluster by default.

Similarly, when the volume claim’s storage class is not specified in the node spec, the operator creates a PersistentVolumeClaim using either the default StorageClass for that OpenSearch cluster (if specified) or the default storage class for the Kubernetes cluster (see sas-bases/examples/configure-elasticsearch/internal/storage/README.md for instructions for configuring a default storage class for the OpenSearch cluster).

To define your own volume claim template with your desired storage capacity and the Kubernetes storage class that is associated with the persistent volume, see the example file located at sas-bases/examples/configure-elasticsearch/internal/topology/custom-topology-with-custom-volume-claim.yaml. Replace {{ STORAGE-CLASS }} with the name of the StorageClass and {{ STORAGE-CAPACITY }} with the desired storage capacity for this volume claim.
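
For orientation, the volume claim template uses the standard Kubernetes volume claim fields. The following is only a sketch of those standard fields with illustrative replacement values; the exact structure expected by the sas-opendistro operator is shown in the referenced example file.

# Sketch of the standard Kubernetes volume claim fields (illustrative values only)
spec:
  storageClassName: {{ STORAGE-CLASS }}   # for example, managed-premium
  resources:
    requests:
      storage: {{ STORAGE-CAPACITY }}     # for example, 256Gi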

Configuring the Heap Size

The amount of heap size dedicated to each node directly impacts the performance of OpenSearch. If the heap is too small, the garbage collection will cause frequent pauses, resulting in reduced throughput and regular small latency spikes. If the heap is too large, on the other hand, full-heap garbage collection may cause infrequent but long latency spikes.

Generally, the heap size value should be up to half of the available physical RAM with a maximum of 32GB.

The maximum heap size also affects the maximum number of shards that can be safely stored on the node without suffering from oversharding and circuit breaker events. As a rule of thumb you should aim for 25 shards or fewer per GB of heap memory with each shard not exceeding 50 GB.
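
For example, under these guidelines a data node with 64 GB of physical RAM would be given a heap of roughly 31 GB (half of RAM, staying under the 32 GB maximum), which supports on the order of 775 shards (25 shards per GB of heap), with no single shard exceeding 50 GB.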

See sas-bases/examples/configure-elasticsearch/internal/topology/custom-topology-with-custom-heap-size.yaml for an example of how to configure the amount of heap memory dedicated to OpenSearch nodes. Replace {{ HEAP-SIZE }} with the appropriate heap size for your needs.

Installing a Custom Topology

  1. Copy the example topology file into your site-config directory.

  2. Edit the example topology as directed by comments in the file.

  3. Remove the following line from the transformers block of the base kustomization file ($deploy/kustomization.yaml) if it is present.

    transformers:
    ...
    - sas-bases/overlays/internal-elasticsearch/ha-transformer.yaml
    ...
  4. Add the topology reference to the transformers block of the base kustomization.yaml file. Here is an example of a modified base kustomization.yaml file with a reference to the custom topology example:

    transformers:
    ...
    - site-config/configure-elasticsearch/internal/topology/custom-topology.yaml

Additional Resources

For more information, see SAS Viya Platform: Deployment Guide.

Configure a Run User for OpenSearch

In a default deployment of the SAS Viya platform, the OpenSearch JVM process runs under the fixed user ID (UID) of 1000. A fixed UID is required so that files that are written to storage for the search indices can be successfully read after subsequent restarts.

If you do not want OpenSearch to run with UID 1000, you can specify a different UID for the process. You can take the following steps to apply a transformer that changes the UID of the OpenSearch processes to another value.

Note: The decision to change the UID of the OpenSearch processes must be made at the time of the initial deployment. The UID cannot be changed after the SAS Viya platform has been deployed.

Configure Run User

To configure OpenSearch to run as a different UID:

  1. Copy the Run User transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/run-user/run-user-transformer.yaml into the $deploy/site-config directory.

  2. Open the run-user-transformer.yaml file for editing. Replace {{ USER-ID }} with the UID under which the OpenSearch processes should run.

  3. Add the run-user-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/run-user-transformer.yaml

Limitations

Additional Resources

For more information, see SAS Viya Platform: Deployment Guide.

OpenSearch on Red Hat OpenShift

Before deploying your SAS Viya platform software, perform the following steps in order to run OpenSearch on OpenShift in that deployment.

Configure Security Context Constraints for OpenSearch

An example Security Context Constraints (SCC) resource is available at $deploy/sas-bases/examples/configure-elasticsearch/internal/openshift/sas-opendistro-scc.yaml. A Kubernetes cluster administrator must add these Security Context Constraints to their OpenShift cluster before deploying the SAS Viya platform.

Consult Common Customizations for information about the additional transformers, which might require changes to the Security Context Constraints.

If modifications are required, place a copy of the sas-opendistro-scc.yaml file in the site-config directory and apply the changes to the copy.

Modify sas-opendistro-scc.yaml for run-user-transformer.yaml

If you are planning to use run-user-transformer.yaml to specify a custom UID for the OpenSearch processes, update the uid property of the runAsUser option to match the custom UID. For example, if UID 2000 will be configured in the run-user-transformer.yaml, update the file sas-opendistro-scc.yaml as follows.

runAsUser:
   type: MustRunAs
   uid: 2000

Modify sas-opendistro-scc.yaml for sysctl-transformer.yaml

If your deployment will use sysctl-transformer.yaml to apply the necessary sysctl parameters, the sas-opendistro-scc.yaml file must be modified. Otherwise, you should skip these steps.

  1. Set the allowPrivilegeEscalation and allowPrivilegedContainer options to true. This allows a privileged init container to execute and apply the necessary sysctl parameters.

    allowPrivilegeEscalation: true
    allowPrivilegedContainer: true
  2. Update the runAsUser option to RunAsAny, using the following example as your guide. This allows the privileged init container to run as a different user to apply the necessary sysctl parameters.

    runAsUser:
       type: RunAsAny

Apply Security Context Constraints

As a Kubernetes cluster administrator of the OpenShift cluster, use one of the following commands to apply the Security Context Constraints.

kubectl apply -f sas-opendistro-scc.yaml
oc apply -f sas-opendistro-scc.yaml
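
To confirm that the Security Context Constraints resource exists after it is applied, you can list it by name. This is a verification suggestion only; the output format varies by OpenShift version.

oc get scc sas-opendistro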

Add Security Context Constraints to sas-opendistro Service Account

The sas-opendistro SecurityContextConstraints must be added to the sas-opendistro ServiceAccount within each target deployment namespace to grant the necessary privileges.

Use the following command to configure the ServiceAccount. Replace the entire variable {{ NAME-OF-NAMESPACE }}, including the braces, with the Kubernetes namespace used for the SAS Viya platform.

oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-opendistro -z sas-opendistro

Remove Seccomp Profile Property and Annotation on OpenSearch Pods

An example transformer that removes the seccomp property and annotation from the OpenSearch pods through the OpenDistroCluster resource is available at $deploy/sas-bases/overlays/internal-elasticsearch/remove-seccomp-transformer.yaml.

To include this transformer, add the following to the base kustomization.yaml file ($deploy/kustomization.yaml).

transformers:
...
- sas-bases/overlays/internal-elasticsearch/remove-seccomp-transformer.yaml

OpenSearch Security Audit Logs

Overview

Security audit logs track a range of OpenSearch cluster events. The OpenSearch audit logs can provide beneficial information for compliance purposes or assist in the aftermath of a security breach.

The audit logs are written to audit indices in the OpenSearch cluster. Audit indices can build up over time and use valuable resources. By default, the operator applies an Index State Management (ISM) policy named ‘viya_delete_old_security_audit_logs’, which deletes security audit log indices after seven days and has an ISM priority of 50. OpenSearch enables ISM history logs, which are also stored in new indices. By default, ISM history retention is seven days.

The ISM policy can be disabled or configured to retain OpenSearch audit log indices for a specified length of time.

If you have already manually created an ISM policy for OpenSearch audit logs, the policy with the higher priority value will take precedence.

Configure the viya_delete_old_security_audit_logs ISM policy

Configurable Parameters

Configurable Parameter | Description | Default
---------------------- | ----------- | -------
enableIndexCleanup | Apply the ISM policy to remove OpenSearch security audit log indices after the length of time specified in indexRetentionPeriod. If you want to retain the indices indefinitely, set to “false”. Note: To prevent performance issues, SAS recommends that you increase indexRetentionPeriod rather than disable index cleanup. | true
indexRetentionPeriod | Period of time an OpenSearch audit log is retained if the ISM policy is applied. Supported units are d (days), h (hours), m (minutes), s (seconds), ms (milliseconds), and micros (microseconds). | 7d
ismPriority | A priority to disambiguate when multiple policies match an index name. OpenSearch takes the settings from the template with the highest priority and applies them to the index. | 50
enableISMPolicyHistory | Specifies whether ISM audit history is enabled. Additional indices are created to log ISM history data. | true
ismLogRetentionPeriod | Period of time ISM history indices are kept if they are enabled. Supported units are d (days), h (hours), m (minutes), s (seconds), ms (milliseconds), and micros (microseconds). | 7d

Configuration Instructions

  1. Copy the audit log retention transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/security-audit-logs/audit-log-retention-transformer.yaml into the $deploy/site-config directory. Adjust the value for each parameter listed above that you would like to change.

  2. Add the audit-log-retention-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/audit-log-retention-transformer.yaml

Note: The ISM policy values can be adjusted and reconfigured after the initial deployment.

Disable Security Audit Logs

OpenSearch security audit logging can be disabled completely.

  1. Copy the disable security audit transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/security-audit-logs/disable-security-audit-transformer.yaml into the $deploy/site-config directory.

  2. Add the disable-security-audit-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/disable-security-audit-transformer.yaml

Additional Resources

For more information on OpenSearch audit logs or Index State Management (ISM) policies, see the OpenSearch Documentation.

Configure an External OpenSearch Instance

This README file describes the files used to configure the SAS Viya platform deployment to use an externally managed instance of OpenSearch.

Prerequisites

Before deploying the SAS Viya platform, make sure you have the following prerequisites:

If you are deploying SAS Visual Investigator, the external instance of OpenSearch requires a specific configuration of OpenSearch and its security plugin. For more information, see the README file at $deploy/sas-bases/examples/configure-elasticsearch/external/config/README.md (for Markdown format) or at $deploy/sas-bases/docs/external_opensearch_configuration_requirements_for_sas_visual_investigator.htm (for HTML format).

Instructions

In order to use an external OpenSearch instance, you must customize your deployment to point to the required resources and transformers.

  1. If you are deploying in Front-door or Full-stack TLS modes, copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/client-config-tls.yaml into your $deploy/site-config/external-opensearch/ directory. Create the $deploy/site-config/external-opensearch/ directory if it does not already exist.

    If you are deploying in No TLS mode, copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/client-config-no-tls.yaml into your $deploy/site-config/external-opensearch/ directory. Create the $deploy/site-config/external-opensearch/ directory if it does not already exist.

    Adjust the values in your copied file following the in-line comments.

  2. Copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/secret.yaml into your $deploy/site-config/external-opensearch/ directory. Adjust the values in your copied file following the in-line comments.

  3. Copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/external-opensearch-transformer.yaml into your $deploy/site-config/external-opensearch/ directory.

  4. Go to the base kustomization file ($deploy/kustomization.yaml). In the transformers block of that file, add the following content, including adding the block if it doesn’t already exist:

    transformers:
    - site-config/external-opensearch/external-opensearch-transformer.yaml
  5. If you are deploying in Full-stack TLS or Front-door TLS mode, add the following content in the resources block of the base kustomization file. Add the resources block if it does not already exist.

    resources:
    ...
    - site-config/external-opensearch/client-config-tls.yaml
    - site-config/external-opensearch/secret.yaml
    ...

    If you are deploying in Front-door TLS mode and the external instance of OpenSearch is not in the same cluster, add the following content in the resources block of the base kustomization file. Add the resources block if it does not already exist.

    resources:
    ...
    - site-config/external-opensearch/client-config-tls.yaml
    - site-config/external-opensearch/secret.yaml
    ...

    If you are deploying in Front-door TLS mode and the external instance of OpenSearch is in the same cluster, add the following content in the resources block of the base kustomization file. Add the resources block if it does not already exist.

    resources:
    ...
    - site-config/external-opensearch/client-config-no-tls.yaml
    - site-config/external-opensearch/secret.yaml
    ...

    If you are not using TLS, add the following content in the resources block of the base kustomization file, including adding the block if it doesn’t already exist.

    resources:
    ...
    - site-config/external-opensearch/client-config-no-tls.yaml
    - site-config/external-opensearch/secret.yaml
    ...

Recommendations

To ensure the optimal functionality of index creation within the SAS Viya platform, ensure that the action section inside the config/opensearch.yml file has the auto_create_index set to -sand__*,-viya_catalog__*,-cirrus__*,-viya_cirrus__*,+*.
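
For reference, the corresponding entry in config/opensearch.yml might look like the following sketch. Adjust it to fit how the rest of your opensearch.yml is organized; only the setting name and value come from the recommendation above.

action:
  auto_create_index: "-sand__*,-viya_catalog__*,-cirrus__*,-viya_cirrus__*,+*"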

External OpenSearch Configuration Requirements for SAS Visual Investigator

This README file describes OpenSearch’s configuration requirements for SAS Visual Investigator.

Note: If your deployment does not include SAS Visual Investigator, this README contains no information that pertains to you.

OpenSearch Configuration Requirements

In the action section inside the config/opensearch.yml file, the destructive_requires_name setting should be set to false.

Security Plugin Configuration Requirements

In the config.dynamic section inside the config/opensearch-security/config.yml file, the do_not_fail_on_forbidden setting should be set to true.
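
A minimal sketch of the two settings described above follows. The surrounding structure of both files contains additional entries that are not shown here.

# config/opensearch.yml
action:
  destructive_requires_name: false

# config/opensearch-security/config.yml
config:
  dynamic:
    do_not_fail_on_forbidden: true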

In the config.dynamic.authc section inside the config/opensearch-security/config.yml file, the following four authentication domains must be defined in this exact order:

  1. Basic authentication with challenge set to false.

  2. OpenID authentication using user_name as the subject key.

    • Configure the openid_connect_url to point to SAS Logon’s OpenID endpoint.
    • Configure the openid_connect_idp.pemtrustedcas_filepath to point to the certificates needed to connect to SAS Logon.

  3. OpenID authentication using client_id as the subject key.

    • Configure the openid_connect_url to point to SAS Logon’s OpenID endpoint.
    • Configure the openid_connect_idp.pemtrustedcas_filepath to point to the certificates needed to connect to SAS Logon.

  4. Basic authentication with challenge set to true.

Security Plugin Config Example

For a security config example, see $deploy/sas-bases/examples/configure-elasticsearch/external/config/config.yaml.

Configure a Temporary Directory for JNA in OpenSearch

By default, OpenSearch creates its temporary directory within /tmp using an emptyDir volume mount. However, some hardened installations mount /tmp on emptyDir volumes with the noexec option, preventing JNA and libffi from functioning correctly. This can cause startup failures with exceptions like java.lang.UnsatisfiedLinkError or messages indicating issues with mapping segments or allocating closures.

In order to allow JNA loading without relaxing filesystem restrictions, OpenSearch can be configured to use a memory-backed temporary directory.

Configure Temporary Directory for JNA

To configure OpenSearch to use a memory-backed temporary directory:

  1. Copy the JNA Temporary Directory transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/jna/jna-tmp-dir-transformer.yaml into the $deploy/site-config directory.

  2. Add the jna-tmp-dir-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    transformers:
    ...
    - site-config/jna-tmp-dir-transformer.yaml

Additional Resources

For more information, see SAS Viya Platform: Deployment Guide.

Configure Apache Airflow for Process Orchestration

Overview

Process Orchestration is enabled in some of the Risk solutions. As part of this enablement, you can view, monitor, and manage job flow executions.

Process Orchestration uses Apache Airflow.

Prerequisites

Apache Airflow requires a dedicated PostgreSQL database. Here are the potential locations for the Airflow database:

Note: SAS recommends that the Airflow database be hosted on the PostgreSQL server that hosts the SAS Infrastructure Data Server.

If you choose to host the Apache Airflow database on an external instance of PostgreSQL, when you create the Apache Airflow database, you must also create a special user (such as airflow_user). This is done for security reasons so that the user has access to the Apache Airflow database only.

If you choose to host the Apache Airflow database on the internal instance of PostgreSQL that also hosts the SAS Infrastructure Data Server, then the Apache Airflow database can be automatically created on that instance, along with a secure PostgreSQL user.

For details about the SAS Infrastructure Data Server or SAS Common Data Store, see PostgreSQL Server Requirements in System Requirements for the SAS Viya Platform.

Installation

The Apache Airflow database can be hosted on either an external instance of PostgreSQL or an internal instance. Use the section below that corresponds to the type of instance of PostgreSQL that you use.

Install and Configure Apache Airflow on an External Instance of PostgreSQL

Create the PostgreSQL Database

  1. If your external instance of PostgreSQL already exists, skip to step 2. Otherwise, use the documentation for your PostgreSQL provider to create an external instance of PostgreSQL. This instance must meet the SAS Viya platform system requirements. See PostgreSQL Server Requirements in System Requirements for the SAS Viya Platform for these requirements.

  2. When the external PostgreSQL instance exists, create the Airflow user name and database. See the Apache Airflow documentation.

    For reference, here are the necessary commands:

    CREATE DATABASE airflow_db;
    CREATE USER airflow_user WITH PASSWORD 'airflow_password';
    GRANT ALL PRIVILEGES ON DATABASE airflow_db TO airflow_user;
    -- PostgreSQL 15 requires additional privileges; connect to the
    -- airflow_db database first (for example, with the psql \c meta-command):
    \c airflow_db
    GRANT ALL ON SCHEMA public TO airflow_user;

    Where:

    • airflow_db is the name of the database to use with Airflow.
    • airflow_user is the Airflow user name that can access the database.
    • airflow_password is the password specified to access the database.

    TIP: Be sure to enclose the password in single quotation marks.
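
    As an optional sanity check before continuing, you can confirm that the new user can connect to the database. This is illustrative only; replace airflow_db_host with the host name of your PostgreSQL instance:

    psql "host=airflow_db_host port=5432 dbname=airflow_db user=airflow_user password=airflow_password" -c "SELECT 1;"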

Configure the Database for Use by Apache Airflow

Configure the PostgreSQL database for use by Apache Airflow. To do so, create the sas-airflow-metadata Secret to specify the location of the database:

  1. Copy the file $deploy/sas-bases/examples/sas-airflow/metadata/metadata.env into the $deploy/site-config/sas-airflow/metadata directory.

  2. Issue the following command to make the file writable:

    chmod +w $deploy/site-config/sas-airflow/metadata/metadata.env
  3. Edit the file $deploy/site-config/sas-airflow/metadata/metadata.env.

  4. Replace {{ METADATA-URL }} with the full PostgreSQL connection URI of the database to be used by Apache Airflow. Follow the example given in the comments of the metadata.env file, being sure to replace the airflow_user, airflow_password, airflow_db_host, airflow_db, and sslmode with the appropriate values.
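
    For illustration only, a completed value typically takes the form of a PostgreSQL connection URI such as the one below; the authoritative format is the example in the metadata.env comments:

    postgresql://airflow_user:airflow_password@airflow_db_host:5432/airflow_db?sslmode=require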

  5. Edit the base kustomization file ($deploy/kustomization.yaml).

  6. Locate the components block in the file. If the block does not exist, add it. Then, add the following line:

    components:
    - sas-bases/components/sas-airflow/external-airflow
  7. Locate the secretGenerator block in the file. If the block does not exist, add it. Then, add the following content:

    secretGenerator:
    - name: sas-airflow-metadata
      envs:
      - site-config/sas-airflow/metadata/metadata.env

Configure the Internal Database for Use by Apache Airflow

Configure the internal PostgreSQL database for use by Apache Airflow.

  1. Edit the base kustomization file ($deploy/kustomization.yaml).

  2. Locate the following line in the components block in the file.

    components:
    - sas-bases/components/crunchydata/internal-platform-postgres
  3. Add two lines, using the example that follows. The two new lines must immediately follow the - sas-bases/components/crunchydata/internal-platform-postgres line.

    components:
    - sas-bases/components/crunchydata/internal-platform-postgres
    - sas-bases/components/crunchydata/internal-platform-airflow
    - sas-bases/components/sas-airflow/internal-airflow

Additional Resources

Configuration Settings for Airflow Redis

Overview

The Process Orchestration feature of the SAS Viya platform uses Apache Airflow, which uses an instance of Redis. This README file describes how to modify the persistent storage allocation and storage class used by Airflow Redis.

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-airflow/sas-airflow-redis directory to the $deploy/site-config/sas-airflow/sas-airflow-redis directory. Create the destination directory if it does not already exist.

  2. Edit the sas-airflow-redis-modify-storage.yaml file to replace the variables with actual values. Do not use quotes in the replacement.

    Replace {{ STORAGE-SIZE }} with the desired size. The default is 1Gi. Also replace {{ STORAGE-CLASS }} with the desired storage class. The default is the default storage class in Kubernetes. Replace the entire variable string, including the braces, with the value you want to use.
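
    For example, in a cluster whose storage class is named managed-premium (an illustrative name), you might replace {{ STORAGE-SIZE }} with 2Gi and {{ STORAGE-CLASS }} with managed-premium.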

  3. After you have edited the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml):

    transformers:
    ...
    - site-config/sas-airflow/sas-airflow-redis/sas-airflow-redis-modify-storage.yaml

Configure Python for Process Orchestration

Overview

Users of SAS Risk products can make use of additional features when an administrator enables Python integration with the SAS Viya platform. The SAS Process Orchestration framework provides a set of features for SAS Risk solutions. Some of these features use PROC PYTHON in SAS code that runs in Process Orchestration flows.

SAS Process Orchestration can use a customer-prepared environment consisting of a Python installation and any required packages. Some configuration is required.

Prerequisites

The requirements to install and configure Python for the SAS Viya platform are described in the official documentation for open-source language integration, SAS Viya Platform Operations: Integration with External Languages.

Configure Python

SAS recommends that you use the SAS Configurator for Open Source tool to configure the integration with Python. SAS Configurator for Open Source partially automates the download, installation, and ongoing management of Python from source.

SAS has provided the YAML files in the $deploy/sas-bases/examples/sas-airflow/python directory to assist you in setting up the Python integration for SAS Process Orchestration. For a full set of instructions for using these files to configure the integration, see Enabling Python Integration with SAS Process Orchestration.

Configuration Settings for Risk Reporting Framework Core Service

Overview

The Risk Reporting Framework Core Service supports the SAS Integrated Regulatory Reporting and SAS Insurance Capital Management solutions with XBRL generation, validation execution, and filing instance template UI services. This README file describes the settings available for deploying the Risk Reporting Framework Core Service. The example files described in this README file are located at ‘$deploy/sas-bases/examples/sas-risk-rrf-core/configure’.

Installation

Based on the following descriptions of available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.

Requests and Limits for CPU

The default values and maximum values for CPU requests and CPU limits can be specified in an RRF pod template. The risk-rrf-core-cpu-requests-limits.yaml file allows you to change these default and maximum values for the CPU resource. To update the defaults, replace the {{ DEFAULT-CPU-REQUEST }}, {{ MAX-CPU-REQUEST }}, {{ DEFAULT-CPU-LIMIT }}, and {{ MAX-CPU-LIMIT }} variables with the values you want to use. Here is an example:

patch: |-
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-cpu-request
    value: 50m
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-cpu-request
    value: 100m
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-cpu-limit
    value: "2"
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-cpu-limit
    value: "2"

Note: For details on the value syntax used above, see the “Manage Requests and Limits for CPU and Memory” section, located at https://documentation.sas.com/?cdcId=itopscdc&cdcVersion=default&docsetId=itopssrv&docsetTarget=p0wvl5nf1lvyzfn16pqdgf9tybuo.htm.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-risk-rrf-core/configure/risk-rrf-core-cpu-requests-limits.yaml

Note: The current example PatchTransformer targets only the RRF PodTemplate used by the Risk Reporting Framework Core Service.

Requests and Limits for Memory

The default values and maximum values for memory requests and memory limits can be specified in an RRF pod template. The risk-rrf-core-memory-requests-limits.yaml file allows you to change these default and maximum values for the memory resource. To update the defaults, replace the {{ DEFAULT-MEMORY-REQUEST }}, {{ MAX-MEMORY-REQUEST }}, {{ DEFAULT-MEMORY-LIMIT }}, and {{ MAX-MEMORY-LIMIT }} variables with the values you want to use. Here is an example:

patch: |-
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-memory-request
    value: 300M
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-memory-request
    value: 2Gi
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-memory-limit
    value: 500M
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-memory-limit
    value: 2Gi

Note: For details on the value syntax used above, see the “Manage Requests and Limits for CPU and Memory” section, located at https://documentation.sas.com/?cdcId=itopscdc&cdcVersion=default&docsetId=itopssrv&docsetTarget=p0wvl5nf1lvyzfn16pqdgf9tybuo.htm.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-risk-rrf-core/configure/risk-rrf-core-memory-requests-limits.yaml

Note: The current example PatchTransformer targets only the RRF PodTemplate used by the Risk Reporting Framework Core Service.

Preparing and Configuring SAS Allowance for Credit Loss for Deployment

Prerequisites

When SAS Allowance for Credit Loss is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Allowance for Credit Loss solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Allowance for Credit Loss: Administrator’s Guide.

Installation

  1. Complete steps 1-4 described in the Risk Cirrus Core README.

  2. Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env configuration file. Because SAS Allowance for Credit Loss uses workflow service tasks, a default service account must be configured for the Risk Cirrus Objects workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable to “Y” and assign the user ID to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.

  3. If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.

    If you have a $deploy/site-config/sas-risk-cirrus-acl/resources directory, take note of the values in your acl_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section: - site-config/sas-risk-cirrus-acl/resources/acl_transform.yaml.

  4. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-acl to the $deploy/site-config/sas-risk-cirrus-acl directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env and sas-risk-cirrus-acl-secret.env files, not the old acl_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env and sas-risk-cirrus-acl-secret.env files, verify that the overlay settings have been applied successfully to the configmap and to the secret. No further actions are required unless you want to change the connection settings to different overrides.

  5. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-acl directory). Lines with a # at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the # at the beginning of the line and replace the placeholder with the desired value as explained in the following section. Specify, if needed, your settings as follows:

    Parameter Name Description
    SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO)
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES Replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:

    - The create_cas_lib step creates the default ACLReporting CAS library that is used for reporting in SAS Allowance for Credit Loss.
    - The create_db_auth_domain step creates an ACLDBAuth domain for the riskcirrusacl schema and assigns default permissions.
    - The create_db_auth_domain_user step creates an ACLUserDBAuth domain for the riskcirrusacl schema and assigns default group permissions.
    - The import_main_dataloader_files step uploads the Cirrus_ACL_main_loader.xlsx file into the file service under the Products/SAS Allowance for Credit Loss directory.
    - The import_sample_data_loader_files step uploads the Cirrus_ACL_sample_data_loader.zip file into the file service under the Products/SAS Allowance for Credit Loss directory.
    - The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.
    - The install_riskengine_curves_project step loads the sample ACL Curves project into SAS Risk Engine.
    - The install_sampledata step loads sample load data into the riskcirrusacl database schema library.
    - The install_scenarios_sampledata step loads the sample scenarios into SAS Risk Factor Manager.
    - The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, Roles, RolePermissions, and Positions. It also loads sample object instances, like Attribution Templates, Configuration Sets, Configuration Tables, Cycles, Data Definitions, Models, Rule Sets and Scripts, as well as the Link Instances, Object Classifications, and Workflows associated with those objects. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
    - The load_workflows step loads and activates the ACL workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
    - The localize_va_reports step imports localized SAS-provided reports created in SAS Visual Analytics.
    - The manage_cas_lib_acl step sets up permissions for the default ACLReporting CAS library. Users in the ACLUsers, ACLAdministrators and SASAdministrators groups have full access to the tables.
    - The transfer_sampledata_files step stores a copy of all sampledata files loaded into the environment into the file service under the Products/SAS Allowance for Credit Loss directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
    - The update_db_sampledata_scripts_pg step stores a copy of the install_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the sample data.

    WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:

    - The check_services step checks if the ACL dependent services are up and running.
    - The check_solution_existence step checks to see if the ACL solution is already running.
    - The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
    - The create_solution_repo step creates the ACL repository.
    - The check_solution_running step checks to ensure that the ACL solution is running.
    - The import_solution step imports the solution in the ACL repository.
    - The load_app_registry step loads the ACL solution into the SAS application registry.
    - The load_auth_rules step assigns authorization rules for the ACL solution.
    - The load_group_memberships step assigns members to various ACL groups.
    - The load_identities step loads the ACL identities.
    - The load_main_dataloader_objects step loads the Cirrus_ACL_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
    - The setup_code_lib_repo step creates the ACL code library directory.
    - The share_ia_script_with_solution step shares the Risk Cirrus Core individual assessment script with the ACL solution.
    - The share_objects_with_solution step shares the Risk Cirrus Core code library with the ACL solution.
    - The upload_notifications step loads workflow notifications into SAS Workflow Manager.
    SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment.

    For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the “transfer_sampledata” and the “load_sample_data” steps will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided sample data to use. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “transfer_sampledata,load_sample_data” and then delete the sas-risk-cirrus-acl pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.

    WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.
    SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data. (Default is <Empty list>.)
    Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in this variable will be executed.
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME.

    The following is an example of a configuration.env that you could use for SAS Allowance for Credit Loss. This example uses the default values provided by SAS except for the solution input data database user name variable. The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME should be replaced with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME=acluser
  6. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-acl/configuration.env to the configMapGenerator block. Here is an example:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-acl-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-acl/configuration.env
    ...

    Save the kustomization.yaml file.

  7. Modify the sas-risk-cirrus-acl-secret.env file (in the $deploy/site-config/sas-risk-cirrus-acl directory) and specify your settings as follows:

    For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret. If the directory already exists and already has the expected .env file, verify that the overlay settings have been applied successfully to the secret. No further actions are required unless you want to change the secret.

    The following is an example of a secret.env file that you could use for SAS Allowance for Credit Loss.

    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=aclsecret

    Save the sas-risk-cirrus-acl-secret.env file.

  8. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-acl/sas-risk-cirrus-acl-secret.env to the secretGenerator block. Here is an example:

    secretGenerator:
    ...
    - name: sas-risk-cirrus-acl-secret
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-acl/sas-risk-cirrus-acl-secret.env
    ...

    Save the kustomization.yaml file.

  9. When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

    Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Verify That Overlay Settings Have Been Applied Successfully to the ConfigMap

Before verifying the settings for the SAS Allowance for Credit Loss solution, complete step 9 in the Risk Cirrus Core README to verify the Risk Cirrus Core settings.

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-acl-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired configurations that you configured.

Verify That Overlay Settings Have Been Applied Successfully to the Secret

To verify that your overrides were applied successfully to the secret, run the following commands:

  1. Find the name of the secret on the namespace.

    kubectl describe secret sas-risk-cirrus-acl-secret -n <name-of-namespace>
  2. Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.

  3. Verify that the output contains the desired database schema user secret that you configured.

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
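
    The secret values in the output are base64-encoded. If you want to confirm the decoded value, a command similar to the following can be used; the key name shown is an assumption based on the secret.env entry above, so use the key that appears in your output:

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d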

Additional Resources

Preparing and Configuring SAS Asset and Liability Management for Deployment

Prerequisites

When SAS Asset and Liability Management is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Asset and Liability Management solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Asset and Liability Management: Administrator’s Guide.

Installation

  1. Complete steps 1-4 described in the Risk Cirrus Core README.

  2. Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env configuration file. Because SAS Asset and Liability Management uses workflow service tasks, a user account must be configured for a workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable to “Y” and assign the user account to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.
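
    For example, if you decide to configure the workflow client during installation, the relevant lines in your Risk Cirrus Core .env configuration file might look like the following; the account name shown is only an illustration, so replace it with a real user account in your environment:

    SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
    SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT=alm_workflow_user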

  3. If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.

    If you have a $deploy/site-config/sas-risk-cirrus-alm/resources directory, take note of the values in your alm_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section: - site-config/sas-risk-cirrus-alm/resources/alm_transform.yaml.

  4. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-alm/ to the $deploy/site-config/sas-risk-cirrus-alm directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env file, not the old alm_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env file, verify that overlay settings have been applied successfully to the configmap. No further actions are required unless you want to change the connection settings to different overrides.

  5. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-alm directory). Lines that begin with # are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the # at the beginning of the line and replace the placeholder with the desired value. Specify your settings as follows (an example configuration.env file is shown after these settings):

    a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO)

    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-OR-N }} with Y or N to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts in your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts in your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:

    • The transfer_sampledata step stores a copy of all sample data files in the file service under the Products/SAS Asset and Liability Management directory. This directory will include DDLs, sample data and scripts.
    • The install_sample_data step loads the sample portfolio data.
    • The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, and Positions. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
    • The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.

      WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.

    c. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the upload_notifications step will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided notifications to use in your custom workflow definitions. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “upload_notifications” and then delete the sas-risk-cirrus-alm pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.

    WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.

    d. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data.
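
    The following is an example of a configuration.env file that you could use for SAS Asset and Liability Management. The values shown are illustrative; adjust them for your environment and leave any line commented out if you want to keep its default:

    SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER=INFO
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}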

  6. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-alm/configuration.env to the configMapGenerator block. Here is an example:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-alm-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-alm/configuration.env
    ...

    Save the kustomization.yaml file.

  7. When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

    Note: The configuration.env overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Verify That Overlay Settings Have Been Applied Successfully

Before verifying the settings for SAS Asset and Liability Management solution, complete step 9 specified in the Risk Cirrus Core README to verify for Risk Cirrus Core.

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-alm-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired connection settings that you configured.

Additional Resources

Deploying SAS Business Orchestration Services

Overview

To deploy SAS Business Orchestration Services, you must create an init container image that includes all the configuration files for SAS Business Orchestration Services. A reference to the init container must be added to the base kustomization.yaml file.

Additionally, you can run SAS Business Orchestration Services in legacy mode.

Instructions

Configure the Init Container

You must create a SAS Business Orchestration Services init container image that contains configuration files. To create this init container image, follow the instructions at Configuring an Init Container.

To add a SAS Business Orchestration Services init container to a SAS Business Orchestration Services deployment, complete these steps:

  1. Copy the files in the $deploy/sas-bases/examples/sas-boss/init-container directory to the $deploy/site-config/sas-boss/init-container directory. Create the destination directory if it does not exist.

  2. Edit the file add-init-container.yaml in the $deploy/site-config/sas-boss/init-container directory. Replace the image name sas-boss-hydrator with the full image name of your SAS Business Orchestration Services init container.

  3. Add site-config/sas-boss/init-container/add-init-container.yaml to the patches block of the base kustomization.yaml file. Create this block if it does not exist. Here is an example:

    patches:
    - target:
        group: apps
        version: v1
        kind: Deployment
        name: sas-boss
      path: site-config/sas-boss/init-container/add-init-container.yaml
    ...
  4. Deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Configure for Legacy Mode (Optional)

You can choose to run SAS Business Orchestration Services in Legacy Mode. For more information about Legacy Mode, see Native Mode and Legacy Mode. If you need to run SAS Business Orchestration Services in legacy mode, you must follow the steps below:

  1. Copy the file $deploy/sas-bases/examples/sas-boss/legacy-mode/enable-legacy-mode.yaml to the $deploy/site-config/sas-boss/legacy-mode directory. Create the destination directory if it does not exist.

  2. Edit the copied $deploy/site-config/sas-boss/legacy-mode/enable-legacy-mode.yaml file by replacing the file name boss-context.xml with the relative path of your SAS Business Orchestration Services context file in your SAS Business Orchestration Services init container.

  3. Add site-config/sas-boss/legacy-mode/enable-legacy-mode.yaml to the transformers block of the base kustomization.yaml file. Here is an example:

    transformers:
    ...
    - site-config/sas-boss/legacy-mode/enable-legacy-mode.yaml
    ...
  4. Deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Configure for Netty Services (Optional)

You can choose to run SAS Business Orchestration Services so that Netty endpoint ports are exposed with a Kubernetes type of LoadBalancer. Follow the steps below:

  1. Copy the file $deploy/sas-bases/examples/sas-boss/netty-service/netty-service-transformer.yaml to the $deploy/site-config/sas-boss/netty-service directory. Create the destination directory if it does not exist.

  2. Follow the comments in the copied netty-service-transformer.yaml file to edit the port numbers as needed.

  3. Add sas-bases/overlays/sas-boss/netty-service/netty-service.yaml to the resources block and site-config/sas-boss/netty-service/netty-service-transformer.yaml to the transformers block of the base kustomization.yaml file. Here is an example:

    resources:
    ...
    - sas-bases/overlays/sas-boss/netty-service/netty-service.yaml
    ...
    
    transformers:
    ...
    - site-config/sas-boss/netty-service/netty-service-transformer.yaml
    ...
  4. Deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Configure for Minimal Setup (Optional)

If SAS Business Orchestration Services is not delivered with other SAS solutions, a patch transformer is provided to scale down unused pods. Follow the steps below:

  1. Copy the file $deploy/sas-bases/examples/sas-boss/minimal/scale-others-to-zero.yaml to the $deploy/site-config/sas-boss/minimal directory. Create the destination directory if it does not exist.

  2. In the copied scale-others-to-zero.yaml file, edit the boss-patch and readiness-patch transformer blocks, if needed, as directed by the comments in the file.

  3. Add $deploy/site-config/sas-boss/minimal/scale-others-to-zero.yaml and $deploy/sas-bases/overlays/startup/disable-startup-transformer.yaml to the transformers block of the base kustomization.yaml file. Here is an example:

    transformers:
    ...
    - site-config/sas-boss/minimal/scale-others-to-zero.yaml
    - sas-bases/overlays/startup/disable-startup-transformer.yaml
    ...
  4. In the base kustomization.yaml file, comment out or remove any lines with “postgres” in them.

  5. Deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Additional Resources

For more information about SAS Business Orchestration Services, see SAS Business Orchestration Services: User’s Guide

SAS Business Orchestration Worker Configuration

Overview

This README file describes the configuration settings for a cloud-native engine that enables users to declare their orchestrations through a set of workloads and flows in YAML format. This version of the product is also referred to as SAS Business Orchestration Worker.

SAS Business Orchestration Services has two versions. The first is the one that has been shipping for some time and uses an engine that is based on Apache Camel. The README for deploying and configuring that version of SAS Business Orchestration Services is located at $deploy/sas-bases/examples/sas-boss/README.md (for Markdown format) or at $deploy/sas-bases/docs/deploying_sas_business_orchestration_services.htm (for HTML format).

Installation

Configure with Initial SAS Viya Platform Deployment

Create a copy of the example template in $deploy/sas-bases/examples/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml. Save this copy in $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml.

Placeholders are indicated by curly brackets, such as {{ NAMESPACE }}. Find and replace the placeholders with the values that you want for your deployment. After all placeholders have been filled in, apply your deployment YAML either through the SAS Viya platform Kustomize process or directly with kubectl apply commands.
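
If you prefer to replace the placeholders from the command line, a command similar to the following can be used; the namespace value "viya4" is only an example:

sed -i 's/{{ NAMESPACE }}/viya4/g' $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml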

If you are using the SAS Viya platform Kustomize process, add the resource $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml to the resources block of the base kustomization.yaml file. The use case here is to deploy a SAS Business Orchestration Worker project with SAS Viya platform. Here is an example:

resources:
...
- site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
...

Data in Motion - TLS

The Deployment Resource sections below describe several TLS configurations for sas-business-orchestration-worker deployments. These configurations must align with SAS Viya security requirements, as specified in the Security Requirements section of the SAS Viya Platform Operations Guide. The specific TLS deployment requirements are described in the sections that follow.

Deployment Resource

The business-orchestration-worker-deployment.yaml resource has customizable sections.

Section - config map

This section provides a ConfigMap example that mounts the project.yaml into pods. The project.yaml describes the orchestration.
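
Here is a minimal sketch of what such a ConfigMap could look like; the ConfigMap name and the project.yaml contents are assumptions, and the actual example in the deployment resource should be used as the reference:

apiVersion: v1
kind: ConfigMap
metadata:
  name: business-orchestration-worker-project-{{ SUFFIX }}
data:
  project.yaml: |
    # workloads and flows for the orchestration are declared here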

Section - image pull secrets

This section provides an image pull secret example that grants access to the container registry images.

The image pull secret can be grepped from the SAS Viya platform Kustomize build command output:

kustomize build . > site.yaml
grep '.dockerconfigjson:' site.yaml
    .dockerconfigjson: <SECRET>

Alternatively, if the SAS Viya platform has already been deployed, the image pull secret can be queried:

kubectl -n {{ NAMESPACE }} get secret --field-selector=type=kubernetes.io/dockerconfigjson -o yaml | grep '.dockerconfigjson:'
    .dockerconfigjson: <SECRET>

Replace the namespace and image pull secret values in the example.
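
For reference, here is a minimal sketch of the image pull secret that the deployment could reference; the secret name is an assumption, and <SECRET> is the value retrieved above:

apiVersion: v1
kind: Secret
metadata:
  name: business-orchestration-worker-image-pull-secret-{{ SUFFIX }}
  namespace: {{ NAMESPACE }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <SECRET>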

Section - service

This section provides an example that configures high availability routing for sas-business-orchestration-worker pods.

Section - deployment

This section provides an example that shows pod configuration and behaviors.

Configure the sas-business-orchestration-worker init Container

When using the ODE processor, you must create a sas-business-orchestration-worker init container that fetches the required SFM JAR files by pulling a Docker image.

  1. Create a docker image that contains the required SFM jar files. Here is a sample Dockerfile.

    FROM ubuntu
    
    # Package updates and install dependencies
    RUN apt-get update -y && apt-get upgrade -y && apt-get install -y \
        curl \
        apt-transport-https \
        ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    
    # Install kubectl
    RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    RUN chmod +x ./kubectl
    RUN mv ./kubectl /usr/local/bin
    
    # Grab SAS Fraud Management JARs (boss-worker-sb ode plugin dependency, used in performance/component/processor-ode.yaml)
    RUN mkdir /sfmlibs
    RUN mkdir /sfmlibs/44
    RUN cd /sfmlibs/44 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt44/DEVD/ivy-repo/SAS_content/sas.finance.fraud.transaction/404001.0.0.20161020100850_f0rapt44/sas.finance.fraud.transaction.jar
    RUN cd /sfmlibs/44 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt44/DEVD/ivy-repo/SAS_content/sas.finance.fraud.engine/404001.0.0.20161020100936_f0rapt44/sas.finance.fraud.engine.jar
    RUN mkdir /sfmlibs/61
    RUN cd /sfmlibs/61 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt61/DEVD/ivy-repo/SAS_content/sas.finance.fraud.transaction/601002.0.0.20220622174613_f0rapt61/sas.finance.fraud.transaction.jar
    RUN cd /sfmlibs/61 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt61/DEVD/ivy-repo/SAS_content/sas.finance.fraud.engine/601002.0.0.20220622174651_f0rapt61/sas.finance.fraud.engine.jar
    RUN mkdir /sfmlibs/62
    RUN cd /sfmlibs/62 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/d4rapt62/DEVD/ivy-repo/SAS_content/sas.finance.fraud.transaction/602000.0.0.20231003221024_d4rapt62/sas.finance.fraud.transaction.jar
    RUN cd /sfmlibs/62 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/d4rapt62/DEVD/ivy-repo/SAS_content/sas.finance.fraud.engine/602000.0.0.20231003221203_d4rapt62/sas.finance.fraud.engine.jar
  2. Run the following docker command to create a docker image that is used in the init container.

    docker build -t <image_name>:<tag> <path_to_Dockerfile_directory>
  3. Tag the image and push it to a Docker registry.

    docker tag <image_name>:<tag> <repository_url>/<image_name>:<tag>

    Replace <image_name>:<tag> with the name and tag of your Docker image, and <repository_url> with the URL of your Docker repository. For example:

    docker tag myimage:latest myrepository/myimage:latest

    Log in to the Docker registry and push the Docker image to the repository.

    docker login <registry_url>
    
    docker push <repository_url>/<image_name>:<tag>

    For example:

    docker push myrepository/myimage:latest
  4. Edit the $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml file. In the Deployment section, uncomment the init container for “fetch-ode-jars”, and replace {{ SFM_JAR_IMAGE }} with the URL to the Docker image generated in Step 2. Here is an example:

    initContainers:
    - name: fetch-ode-jars
      image: myrepository/myimage:latest
      command: ["sh", "-c"]
      args: ["cp -R /sfmlibs/* /tmp/data"]
      imagePullPolicy: Always
      volumeMounts:
      - name: sfmlibs
        mountPath: "/tmp/data"

sas-business-orchestration-worker Container

The sas-business-orchestration-worker container includes categories of environmental properties. The properties include properties for logging, external services (such as Apache Kafka, Redis and RabbitMQ), processing options, and probe options. Optional security-related properties are covered in the Security section.

Images

Update the two image values that are contained in the $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml file. Revise the value “sas-business-orchestration-worker” to include the registry server, relative path, name, and tag. The registry server and relative path are the same as for other SAS Viya platform deployment images.

The name of the container is ‘sas-business-orchestration-worker’. The registry relative path, name, and tag values are found in the sas-components-* configmap in the Viya deployment.

Perform the following commands to determine the appropriate information. When you have the information, add it to the appropriate places in the file listed above.

$ # generate the site.yaml file
$ kustomize build -o site.yaml

$ # get the sas-business-orchestration-worker registry information
$ cat site.yaml | grep 'sas-business-orchestration-worker:' | grep -v -e "VERSION" -e 'image'

$ # manually update the sas-business-orchestration-worker image values using the information gathered above: <container registry>/<container relative path>/sas-business-orchestration-worker:<container tag>

$ # apply the site.yaml file
$ kubectl apply -f site.yaml

Perform the following commands to get the required information from a running SAS Viya platform deployment.


# get the registry server, kubectl needs to point to the SAS Viya platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
$ kubectl -n {{ NAMESPACE }} get deployment sas-readiness -o yaml | grep -e "image:.*sas-readiness" | sed -e 's/image: //g' -e 's/\/.*//g'  -e 's/^[ \t]*//'
    <container registry>

# get registry relative path and tag, kubectl needs to point to the SAS Viya platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
$ CONFIGMAP="$(kubectl -n {{ NAMESPACE }} get cm | grep sas-components | tr -s ' ' | cut -d ' ' -f1)"
$ kubectl -n {{ NAMESPACE }} get cm "$CONFIGMAP" -o yaml | grep 'sas-business-orchestration-worker:' | grep -v "VERSION"
    SAS_COMPONENT_RELPATH_sas-business-orchestration-worker: <container relative path>/sas-business-orchestration-worker
    SAS_COMPONENT_TAG_sas-business-orchestration-worker: <container tag>
Logging Properties

The SAS_LOG_LEVEL environment variable specifies the minimum severity level for emitting logs. To control the verbosity of the log output, the level can be set to TRACE, DEBUG, INFO, WARN, or ERROR.

The SAS_LOG_FORMAT environment variable specifies the format of the emitted logs. The format can be set to json or plain.

The SAS_LOG_LOCALE environment variable determines which locale messages should be included in the output. The default value is “en”.
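
For example, the logging-related environment variables in the sas-business-orchestration-worker container could be set as follows; the values shown are illustrative:

- name: SAS_LOG_LEVEL
  value: "INFO"
- name: SAS_LOG_FORMAT
  value: "json"
- name: SAS_LOG_LOCALE
  value: "en"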

External Services Properties

External services that are used by a workload require defined properties that are specific to the technology in use. See the comments in the $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml resource file for specific examples.

Processing Options

Project YAML files can include multiple workloads that scale independently. This means that a pod runs only one workload. Use the WORKLOAD_ENABLED_BY_INDEX environment variable to specify which workload to execute. If the property is not set, the workload at index 0 (the first workload) is executed.
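
For example, to run the second workload defined in the project.yaml file (index 1), the environment variable could be set as follows:

- name: WORKLOAD_ENABLED_BY_INDEX
  value: "1"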

Readiness Probe

The sas-business-orchestration-worker container uses a readiness probe, which allows Kubernetes to determine when a pod is ready to receive data. The initialDelaySeconds field specifies how many seconds Kubernetes should wait before performing the initial probe. The periodSeconds field specifies how many seconds Kubernetes should wait between probes.
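
Here is an illustrative readiness probe configuration; the probe handler and port shown are assumptions, and the template in the deployment resource should be used as the reference:

readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30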

For more information about readiness probes, see https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/.

sas-business-orchestration-worker-sb Container

The sas-business-orchestration-worker-sb container is a Spring Boot application sidecar. Some orchestration components need to leverage Java libraries to connect to other Java services, such as SAS Fraud Management engines, or to parse certain SWIFT and ISO standardized message formats. See the comments in the resource file for specifics. This sidecar can be removed or commented out if those Java-specific features are not needed by the project orchestration workload being executed.

Section - horizontal pod autoscaler

This section provides an example of a Horizontal Pod Autoscaler.

Section - OpenShift route

This section provides an example of an ingress in an OpenShift environment. If you use this ingress, comment out other ingresses in the file.

Section - ingress

This section provides an example of a standard Kubernetes ingress. If you use this ingress, comment out other ingresses in the file.

Section - ingress tls

This section provides an example of NGINX for HTTP traffic using TLS. If you use this ingress, comment out other ingresses in the file.

Section - ingress tls secret for tls certs and keys

This section provides an example of a secret that holds TLS certificates and keys.

Section - secret for a ca cert and key to make a request to sign a client cert for two-way tls

This section provides an example of a secret that holds the certificate authority certificate and key that are used for two-way TLS (mTLS).

Section - create separate cert and key for external service such as Redis, Apache Kafka, Rabbitmq, etc.

This section provides an example of a secret that holds the certificate authority certificate and key that are used for two-way TLS (mTLS) with external services.

Duplicate this section as needed if multiple external services are used by the orchestration project.

Services and Ingresses

These resources do not require much customization. They require the SUFFIX to be filled in, and the NAMESPACE to be specified, as indicated in the template. The ingresses additionally require the host property be specified.

The services are ClusterIP services, accessed externally via the ingress resources. The ports are already filled in and line up with the prefilled ingress ports.

The ingresses include the host, and rules for directing requests. For the sas-business-orchestration-worker ingress, anything sent with /sas-business-orchestration-worker as the path prefix will use this ingress. The service referenced above uses the ingress in most cases. You might not need ingress if all traffic is within the Kubernetes cluster or if the containers are hosted by another cloud technology.
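
As an illustration, an ingress rule for the sas-business-orchestration-worker service could look like the following; the service name and port number are assumptions, and the host and path values should be taken from the template:

rules:
- host: {{ PREFIX }}.{{ INGRESS-TYPE }}.{{ HOST }}
  http:
    paths:
    - path: /sas-business-orchestration-worker
      pathType: Prefix
      backend:
        service:
          name: sas-business-orchestration-worker-{{ SUFFIX }}
          port:
            number: 8080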

OpenShift

If you are deploying your SAS Business Orchestration Worker on OpenShift, you will not be able to use the Ingress resource. In this case, replace your ingress resource with an OpenShift Route.

Security

TLS Secrets

There are optional, commented out sections that may be used to create the secrets containing TLS certificates and keys. The data must be base64 encoded and included in these definitions. These secrets could optionally be created manually via kubectl or kustomize secrets generators. If the secrets are created via some other method, the secret names must still match those referenced in the volumes and ingress definitions.

Secure Ingress Definition

To add TLS to your ingress, some annotations and spec fields must be added. These will require certificates either included in this template, or created and supplied previously. The template includes a TLS ingress that is commented out, but the below examples break down what is different in this ingress.

To secure your ingress, the following annotations can be used to add one-way TLS, two-way TLS (mTLS), or both.

annotations:
    # Used to enable TLS
    nginx.ingress.kubernetes.io/auth-tls-secret: {{ NAMESPACE }}/business-orchestration-worker-ingress-tls-ca-config-{{ SUFFIX }}
    # used to enable mTLS
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"    

For one-way TLS, fill in the tls field under the spec field. This also includes a secretName, which includes your TLS certificate.

tls:
    - hosts:
        - {{ PREFIX }}.{{ INGRESS-TYPE }}.{{ HOST }}
    secretName: business-orchestration-worker-ingress-tls-config-{{ SUFFIX }}

See the resource comments for more specific details.

Volumes and Volume Mounts

Depending on the security configuration, mounting additional trusted certificates in your containers may be necessary. The areas to add these are tagged with SECURITY, and can be uncommented as necessary. The secret names must match whatever secrets have been configured for these certificates.

The template includes volume examples created from secrets containing TLS certificates: one volume example is for sas-business-orchestration-worker certificates, and one volume is for an external service certificate. These volumes are defined for each container in the Deployment spec.

After being created, these volumes may be mounted in the sas-business-orchestration-worker container. As defined in the template, the business-orchestration-worker certificates are mounted in /var/run/security, and the external service certificates are mounted in /var/run/security/, which can be duplicated if multiple external services are used by the workload.
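
Here is a minimal sketch of a volume created from a TLS secret and its corresponding mount; the volume name is an assumption, and the secret name must match the secret defined in your template:

volumes:
- name: worker-tls-certs
  secret:
    secretName: business-orchestration-worker-ingress-tls-config-{{ SUFFIX }}
...
containers:
- name: sas-business-orchestration-worker
  volumeMounts:
  - name: worker-tls-certs
    mountPath: /var/run/security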

Inline Comments of the Deployment Resource

Read through all the inline comments of the deployment resource. There is considerable overlap with the instructions here; however, the actual deployment resource template provides more specifics and a higher degree of detail.

Additional Resources

Deploy the software.

Configure after the Initial Deployment

Alternatively, SAS Business Orchestration Worker can be installed separately from the SAS Viya platform. Complete the steps above, except for “Deploy the software” in the “Additional Resources” section. The use case here is to deploy a SAS Business Orchestration Worker project in a Kubernetes namespace that is not a SAS Viya platform deployment. Instead, run the following command:

kubectl apply -f "$deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml"

Clinical Trial Foundation for SAS Viya

Overview

This directory contains an example transformer that illustrates how to change the StorageClass and size of the PVC used to store the Clinical Trial Foundation (CTF) content in SAS Viya.

Installation

  1. Copy the file sas-bases/examples/sas-clinical-repository/storageclass/sas-clinical-storage-class-transformer.yaml and place it in your site-config directory.

  2. Replace the {{ CTF-STORAGE-CLASS }} value with your desired StorageClass. Note that the CTF requires that your storage class support the RWX accessMode.

  3. Also replace the {{ CTF-STORAGE-SIZE }} value with the size you wish to allocate to the CTF volume. The recommended size is 8Gi. Note that using a lower value may restrict your ability to add new CTFs to SAS Viya; 1Gi is the absolute minimum required.

  4. After you edit the file, add a reference to it in the transformer block of the base kustomization.yaml file.

Additional Resources

For more information about using example files, see the SAS Viya Deployment Guide.

For more information about Kubernetes StorageClasses, please see the Kubernetes Storage Class Documentation.

Configure a Co-located SAS Data Agent

Overview

The directory $deploy/sas-bases/examples/sas-data-agent-server-colocated contains files to customize your SAS Viya platform deployment for a co-located SAS Data Agent. This README describes the steps necessary to make these files available to your SAS Viya platform deployment. It also describes how to set required environment variables to point to these files.

Note: If you make changes to these files after the initial deployment, you must restart the co-located SAS Data Agent.

Prerequisites

Before you start the deployment you should determine the OAUTH secret that will be used by co-located SAS Data Agent and any remote SAS Data Agents.

You should also create a subdirectory within $deploy/site-config to store your co-located SAS Data Agent configurations. This README uses a user-created subdirectory called $deploy/site-config/sas-data-agent-server-colocated. For more information, refer to the “Directory Structure” section of “Pre-installation Tasks” in the SAS Viya Platform: Deployment Guide.

Installation

The base kustomization.yaml file ($deploy/kustomization.yaml) provides configuration properties for the customization process. The co-located SAS Data Agent requires specific customizations in order to communicate with remote SAS Data Agents and configure server options. Copy the example sas-data-agent-server-colocated-config.properties and sas-data-agent-server-colocated-secret.properties files from $deploy/sas-bases/examples/sas-data-agent-server-colocated to $deploy/site-config/sas-data-agent-server-colocated.

Configuration

Note: The default values listed in the descriptions that follow should be suitable for most users.

Configure the OAuth Secret

SAS_DA_OAUTH_SECRET

The sas-data-agent-server-colocated-secret.properties file contains configuration properties for the OAUTH secret. The OAUTH secret value is required and must be specified in order to communicate with a remote SAS Data Agent. There is no default value for the OAUTH secret.

Note: The following example is for illustration only and should not be used.

Enter a string value for the OAUTH secret that will be shared with the remote SAS Data Agent. Here is an example:

SAS_DA_OAUTH_SECRET=MyS3cr3t

Configure Logging

The sas-data-agent-server-colocated-config.properties file contains configuration properties for logging.

SAS_DA_DEBUG_LOGTYPE

Enter a string value to set the level of additional logging.

 * `SAS_DA_DEBUG_LOGTYPE=TRACEALL` enables trace level for all log items.
 * `SAS_DA_DEBUG_LOGTYPE=TRACEAPI` enables trace level for API calls.
 * `SAS_DA_DEBUG_LOGTYPE=TRACE` enables trace level for most log items.
 * `SAS_DA_DEBUG_LOGTYPE=PERFORMANCE` enables trace/debug level items for performance debugging.
 * `SAS_DA_DEBUG_LOGTYPE=PREFETCH` enables trace/debug level items for prefetch debugging.
 * `SAS_DA_DEBUG_LOGTYPE=None` disables additional tracing.

If no value is specified, the default of None is used.

Here is an example:

SAS_DA_DEBUG_LOGTYPE=None

Configure Filesystem Access

The sas-data-agent-server-colocated-config.properties file contains configuration properties that restrict drivers from accessing the container filesystem. By default, drivers can only access the directory tree /data which must be mounted on the co-located SAS Data Agent container.

SAS_DA_RESTRICT_CONTENT_ROOT

When set to TRUE, the file access drivers can only access the directory structure specified by SAS_DA_CONTENT_ROOT.

When set to FALSE, the file access drivers can access any directories accessible from within the co-located SAS Data Agent container.

If no value is specified, the default of TRUE is used.

Here is an example:

SAS_DA_RESTRICT_CONTENT_ROOT=TRUE

SAS_DA_CONTENT_ROOT

Enter a string value to specify the directory tree that file access drivers are allowed to access. This value is ignored if SAS_DA_RESTRICT_CONTENT_ROOT=FALSE. If no value is specified, the default of /data is used.

Here is an example:

SAS_DA_CONTENT_ROOT=/accounting/data

Configure Server Timeout Options

The sas-data-agent-server-colocated-config.properties file contains configuration properties that control how the server treats client sessions that are unused for long periods of time. By default the server will try to gracefully shut down sessions that have not been used for one hour.

SAS_DA_SESSION_CLEANUP

Use this variable to specify how often the server will check for idle connections. This variable has a default of 60 seconds (1 minute).

Here is an example of how to check for idle client sessions every 5 minutes:

SAS_DA_SESSION_CLEANUP=300

SAS_DA_DEFAULT_SESSION_TIMEOUT

Use this variable to specify how long to wait before an unused client session is considered idle, and thus eligible to be killed. This value is only used when the client does not specify a value for SESSION_TIMEOUT when connecting. This variable has a default of 3600 seconds (1 hour).

Here is an example of how to default to a 20 minute wait before an unused client session is considered idle:

SAS_DA_DEFAULT_SESSION_TIMEOUT=1200

SAS_DA_MAX_SESSION_TIMEOUT

Use this variable to specify the maximum time before an unused client session is considered idle, and thus eligible to be killed. This value applies even when SESSION_TIMEOUT or SAS_DA_DEFAULT_SESSION_TIMEOUT are set to longer times. This variable has a default of 0 seconds (meaning no maximum wait time).

Here is an example of how to set the maximum wait time to 18000 seconds (5 hours) before an unused client session is considered idle:

SAS_DA_MAX_SESSION_TIMEOUT=18000

SAS_DA_MAX_OBJECT_TIMEOUT

Use this variable to specify the maximum time the server will wait for a database operation to complete when killing idle client sessions. This variable has a default of 0 seconds (meaning no maximum wait time).

Here is an example of how to set the maximum object timeout to 300 seconds (5 minutes) when killing idle client sessions:

SAS_DA_MAX_OBJECT_TIMEOUT=300

SAS_DA_WORKER_TIMEOUT

Use this variable to specify the maximum time a worker pod will remain when there are no active client sessions. This variable has a default of 0 seconds (meaning the worker pod will remain active and available to service future requests). If a worker pod exits a new client request will automatically start another worker pod to service it, but this might result in a slight initialization delay.

Here is an example of how to set the worker pod timeout to 3600 seconds (1 hour):

SAS_DA_WORKER_TIMEOUT=3600

SAS_DA_PRELAUNCH_WORKERS

Use this variable to specify whether a worker pod should be launched before the first client request is received. This variable has a default of TRUE if SAS_DA_OAUTH_SECRET has been specified, otherwise the default is FALSE. If a client request is received a worker pod will be automatically started if it is not already running, but this might result in a slight initialization delay.

Here is an example of how to disable worker pod prelaunch:

SAS_DA_PRELAUNCH_WORKERS=FALSE

Configure Access to Java, Hadoop, and Spark

The sas-data-agent-server-colocated-config.properties file contains configuration properties for Java, SAS/ACCESS Interface to Spark and SAS/ACCESS to Hadoop.

Configure SAS_DA_HADOOP_JAR_PATH and SAS_DA_HADOOP_CONFIG_PATH

If your deployment includes SAS/ACCESS Interface to Spark, you must make your Hadoop JARs and configuration file available on a PersistentVolume or mounted storage. Set the options SAS_DA_HADOOP_JAR_PATH and SAS_DA_HADOOP_CONFIG_PATH to point to this location. See the SAS/ACCESS Interface to Spark documentation at $deploy/sas-bases/examples/data-access/README.md (for Markdown format) or $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm (for HTML format) for more details. These variables have no default values.

Here are some examples:

SAS_DA_HADOOP_CONFIG_PATH=/clients/hadoopconfig/prod
SAS_DA_HADOOP_JAR_PATH=/clients/jdbc/spark/2.6.22

SAS_DA_JAVA_HOME

Use this variable to specify an alternate JAVA_HOME for use by the co-located SAS Data Agent. This variable has no default value.

Here is an example:

SAS_DA_JAVA_HOME=/java/lib/jvm/jre

Revise the Base kustomization.yaml File

Add these entries to the base kustomization.yaml file ($deploy/kustomization.yaml) in order to include the modified sas-data-agent-server-colocated-config.properties and sas-data-agent-server-colocated-secret.properties files.

configMapGenerator:
...
- name: sas-data-agent-server-colocated-config
  behavior: merge
  envs:
  - site-config/sas-data-agent-server-colocated/sas-data-agent-server-colocated-config.properties
...
secretGenerator:
...
- name: sas-data-agent-server-colocated-secrets
  behavior: merge
  envs:
  - site-config/sas-data-agent-server-colocated/sas-data-agent-server-colocated-secret.properties

Using SAS/ACCESS with a Co-located SAS Data Agent

For more information about configuring SAS/ACCESS, see the README file located at $deploy/sas-bases/examples/data-access/README.md (for Markdown format) or $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm (for HTML format).

Configure Kubernetes PersistentVolumeClaim for SAS Common Planning Service

Overview

SAS Common Planning Service, used by SAS Assortment Planning, SAS Demand Planning, and SAS Financial Planning, requires dedicated PersistentVolumeClaims (PVCs) for storing data. During setup the sas-planning-retail PVCs are defined and then mounted in the startup process. This directory contains an example transformer that illustrates how to change the StorageClass and size of the PVCs.

Installation

  1. Copy the $deploy/sas-bases/examples/sas-planning/storage.yaml file to the $deploy/site-config directory.
  2. Revise the copied file according to the comments in the file, replacing the variables with the appropriate values.
  3. Add a reference to the base kustomization.yaml file ($deploy/kustomization.yaml) for the revised file. Here is an example that assumes you put the copied file in $deploy/site-config/sas-planning/storage.yaml:

    transformers:
    ...
    - site-config/sas-planning/storage.yaml

  4. Continue your SAS Viya platform deployment as documented in SAS Viya Platform Deployment Guide.

Configure Kubernetes ingress-nginx time out for SAS Common Planning Service

Overview

To avoid issues related to client timeouts, configure SAS Common Planning Service ingress-nginx timeout.

Installation

  1. Copy the $deploy/sas-bases/examples/sas-planning/sas-planning-ingress-patch.yaml file to the $deploy/site-config directory.
  2. Revise the copied file according to the comments in the file, replacing the variables with the appropriate values.
  3. Add a reference to the base kustomization.yaml file ($deploy/kustomization.yaml) for the revised file. Here is an example that assumes you put the copied file in $deploy/site-config/sas-planning/sas-planning-ingress-patch.yaml:

    transformers:
    ...
    - site-config/sas-planning/sas-planning-ingress-patch.yaml

  4. Continue your SAS Viya platform deployment as documented in SAS Viya Platform Deployment Guide.

Configuring Customizations for sas-planning

Overview

The sas-planning service uses the Common Data Store (CDS) PostgreSQL instance as well as the PostgreSQL instance provided by the platform.

This README describes the customizations needed for a PersistentVolumeClaim (PVC). It also contains the steps required to configure an ingress-nginx timeout.

Configure an Internal PostgreSQL Instance for sas-planning

Instructions

Pre-upgrade steps

If updating from any release prior to 2023.09, please refer to this documentation for additional steps to follow.

Install CDS PostgreSQL

For more information on using an internal instance of PostgreSQL, you should refer to the README file located at $deploy/sas-bases/examples/postgres/README.md.

Customize Planning Overlays

Add the following overlay to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml):

resources:
...
- sas-bases/overlays/sas-planning
...

Add the following overlays to the transformers block of the base kustomization.yaml file:

transformers:
...
- sas-bases/overlays/sas-planning/sas-planning-transformer.yaml
...

Configure a PersistentVolume

A PersistentVolumeClaim (PVC) states the storage requirements for storage from cloud providers. The storage that the cloud provides is mapped to predefined paths across the services that collaborate to handle files.

In the base kustomization.yaml file, immediately after the transformers block, add a patches block with the following content.

...
patches:
- path: site-config/storageclass.yaml
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-planning)
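
The site-config/storageclass.yaml file referenced by the patch is a small PVC patch. Here is a minimal sketch, assuming a storage class named "sas"; the metadata name and storage class value are assumptions, so replace them with values appropriate for your environment:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wildcard
spec:
  storageClassName: sas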

Build

After you revise the base kustomization.yaml file, continue your SAS Viya platform deployment as documented in SAS Viya Platform Deployment Guide.

Configuration Settings for Compute Server

Overview

This README describes the settings available for deploying Compute Server.

Installation

Based on the following description of different example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.

Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ NUMBER-OF-WORKERS }}. Replace the entire variable string, including the braces, with the value you want to use.

After you have edited the file, add a reference to it in the transformer block of the base kustomization.yaml file.

Examples

The example files are located at /$deploy/sas-bases/examples/compute-server/configure.

Additional Resources

For information about PersistentVolumes, see Persistent Volumes.

Update Compute Service Internal HTTP Request Timeout

Overview

The SAS Compute service makes calls to Compute server processes running in the cluster using HTTP calls. The Compute service uses a default request timeout of 600 seconds. This README describes the customizations that can be made for updating this timeout to control how long the Compute service requests to the servers wait for a response.

Installation

The SAS Compute service internal HTTP request timeout can be modified by using the change-sas-compute-http-request-timeout.yaml file.

  1. Copy the $deploy/sas-bases/examples/compute/client-request-timeout/change-sas-compute-http-request-timeout.yaml file to the site-config directory.

  2. In the copied file, replace {{ TIMEOUT }} with the number of seconds to use for the timeout. Note that the trailing “s” after {{ TIMEOUT }} should be kept.

Here is an example:

 ```yaml
 ...
 patch: |-
   - op: replace
     path: /spec/template/spec/containers/0/env/-
     value:
       name: SAS_HTTP_CLIENT_TIMEOUT_REQUEST
       value: 1200s
 ...
 ```
  3. After you edit the file, add a reference to it in the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example assuming the file has been saved to $deploy/site-config/compute/client-request-timeout:

    transformers:
    ...
    - site-config/compute/client-request-timeout/change-sas-compute-http-request-timeout.yaml
    ...

Additional Resources

For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.

SAS Configurator for Open Source Options

Overview

With open-source language integration, SAS Viya platform users can decide which language they want to use for a given task. They can use either the SAS programming language or an open-source programming language, such as Python, R, Lua, or Java, to develop programs for the SAS Viya platform. This integration requires some additional configuration.

SAS Configurator for Open Source is a utility that simplifies the download, configuration, building, and installation of Python and R from source. The result is a Python or R build that is located in a persistent volume (PV) and referenced by a Persistent Volume Claim (PVC). The PVC and the builds that it contains are then available for pods that require Python and R for their operations.

SAS Configurator for Open Source can build and install multiple Python and R builds or versions in the same PV. It can use profiles to handle multiple builds. Various pods can then reference different versions or builds of Python and R located in the PV.

SAS Configurator for Open Source also includes functionality to reduce downtime associated with updates. A given build is located in the PV and referenced by a pod using a symlink. In an update scenario, the symlink is changed to point to the latest build for that profile.

For system requirements and a full set of steps to use SAS Configurator for Open Source, see SAS Viya Platform: Integration with External Languages.

Summary of Steps

Building Python or R requires a number of steps. This section describes the steps performed by SAS Configurator for Open Source in its operations to manage Python and R.

SAS Configurator for Open Source only processes configuration changes after the initial execution of the job. For example, packages are reprocessed only if a change occurs in the package list and the respective versions of R or Python remain unchanged. If the version of Python or R changes, then all steps are performed from the download of the source to the updating of symlinks.

Download

For Python, downloads the source, signature file, and signer’s key from the configured location. For R, downloads only the source.

Verify

Verifies the authenticity of the Python source using the signer’s key and signature file. The R source cannot be verified at the time of this writing because signer keys are not generated for R source.

Extract

Extracts the Python and R sources into a temporary directory for building.

Build

Configures and performs a make of the Python and R sources.

Install

Installs the Python and R builds within the PV and updates supporting components, such as pip, if applicable.

Builds and installs configured packages for Python and R.

Note: Python and R packages that require additional dependencies to be installed within any combination of the SAS Configurator for Open Source container, the SAS Programming Environment container, and the CAS Server container are not supported with the SAS Configurator for Open Source.

SAS Configurator for Open Source Updates

If everything has completed successfully, creates the symbolic links, or changes the symbolic links’ targets to point to the latest builds for both Python and R.
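
For example, after a successful build, the symbolic link for a profile might conceptually look like the following, where the link target is the newest build for that profile; the paths shown are illustrative assumptions only:

/opt/sas/viya/home/sas-pyconfig/default_py -> /opt/sas/viya/home/sas-pyconfig/default_py-<build-timestamp>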

Running SAS Configurator for Open Source with Custom Options at Deployment

The SAS Configurator for Open Source utility runs a job named sas-pyconfig. When you enable the utility, the job runs automatically once, during the initial SAS Viya platform deployment, and runs again with subsequent SAS Viya updates.

The official documentation for SAS Configurator for Open Source, SAS Viya Platform: Integration with External Languages, provides instructions for configuring and enabling the utility.

Resource Management

SAS Configurator for Open Source requires more CPU and memory than most components. This requirement is largely due to Python and R building-related operations, such as those performed by configure and make. Because SAS Configurator for Open Source is disabled by default, pod resources are minimized so that they are not misallocated during scheduling. The default resource values are as follows:

limits:
  cpu: 250m
  memory: 250Mi
requests:
  cpu: 25m
  memory: 25Mi

Important: If the default values are used, pod execution will result in an OOMKilled (Out of Memory Killed) status in the pod list and the job does not complete. You must increase the requests and limits in order for the pod to complete successfully. The official SAS Configurator for Open Source documentation provides instructions.

The values of requests and limits can be adjusted to meet specific needs of an environment. For example, reduce values to allow scheduling within smaller environments, or increase values to reduce the time required to build multiple versions of Python and R.

A YAML file is provided in your deployment assets to help you increase CPU and memory requests. By default, the recommended CPU and memory requests are specified in the file (change-limits.yaml), and no limits are specified. Below are some examples of updates to this file.

Changing Resource Limits: Example 1

In this example, SAS Configurator for Open Source is configured with a CPU request value of 4000m and a memory request value of 3000Mi. No limit on CPU and memory usage is specified. This configuration should not be used in environments where resource quotas are in use.

---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-pyconfig-limits
patch: |-
  - op: replace
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
    value:
      4000m
  - op: replace
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
    value:
      3000Mi
  - op: remove
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
  - op: remove
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
target:
  group: batch
  kind: CronJob
  name: sas-pyconfig
  version: v1
#---
#apiVersion: builtin
#kind: PatchTransformer
#metadata:
#  name: sas-pyconfig-limits
#patch: |-
#  - op: replace
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
#    value:
#      4000m
#  - op: replace
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
#    value:
#      3000Mi
#  - op: replace
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
#    value:
#      4000m
#  - op: replace
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
#    value:
#      3000Mi
#target:
#  group: batch
#  kind: CronJob
#  name: sas-pyconfig

Changing Resource Limits: Example 2

In this example, both the requests and limits values for CPU and memory have been set to 4000m and 3000Mi, respectively. This configuration can be used in an environment where resource quotas are enabled.

#---
#apiVersion: builtin
#kind: PatchTransformer
#metadata:
#  name: sas-pyconfig-limits
#patch: |-
#  - op: replace
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
#    value:
#      4000m
#  - op: replace
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
#    value:
#      3000Mi
#  - op: remove
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
#  - op: remove
#    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
#target:
#  group: batch
#  kind: CronJob
#  name: sas-pyconfig
#  version: v1
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-pyconfig-limits
patch: |-
  - op: replace
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
    value:
      4000m
  - op: replace
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
    value:
      3000Mi
  - op: replace
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
    value:
      4000m
  - op: replace
    path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
    value:
      3000Mi
target:
  group: batch
  kind: CronJob
  name: sas-pyconfig

Change the Configuration and Rerun the Job

You can change the configuration and run the sas-pyconfig job again without redeploying the SAS Viya platform. The official SAS Configurator for Open Source documentation describes the steps to run the job manually and install and configure Python or R from source.
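
One common way to trigger the job manually is to create a Kubernetes Job from the sas-pyconfig CronJob with kubectl. The following command is a sketch only (the job name sas-pyconfig-manual-run is an arbitrary example); consult the official SAS Configurator for Open Source documentation for the supported procedure.

  kubectl create job sas-pyconfig-manual-run --from=cronjob/sas-pyconfig -n <name-of-namespace>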

Disable SAS Configurator for Open Source

By default, SAS Configurator for Open Source is disabled. If it has been enabled and you want to disable it again, use the following steps.

  1. Determine the exact name of the sas-pyconfig-parameters ConfigMap:

    kubectl get configmaps -n <name-of-namespace> | grep sas-pyconfig

    The name will be something like sas-pyconfig-parameters-abcd1234.

  2. Edit the ConfigMap using the following command:

    kubectl edit configmap <sas-pyconfig-parameters-configmap-name> -n <name-of-namespace>

    In this command, <sas-pyconfig-parameters-configmap-name> is the name of the ConfigMap from step 1. Change the value of global.enabled to false, as shown in the sketch below.
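
    For illustration only, after the edit the data section of the ConfigMap might contain an entry similar to the following. This is a sketch, not the complete ConfigMap; other keys remain unchanged.

    data:
      global.enabled: "false"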

After this change, SAS Configurator for Open Source does not run during a deployment or update of the SAS Viya platform.

Default Configuration and Options

The configuration options used by SAS Configurator for Open Source are referenced from the sas-pyconfig-parameters ConfigMap (provided for you in the change-configuration.yaml file). The official SAS Configurator for Open Source documentation describes the options available in the ConfigMap, their purpose, and their default values.

Configuration options fall into two main categories: global options and profiles.

For a description of each global option, including the option to specify an HTTP or HTTPS web proxy server, see the official SAS Configurator for Open Source documentation.

Profiles are references to different versions or builds of Python and R in the PV, enabling SAS Configurator for Open Source to manage multiple builds of Python or R.

The predefined Python profile is named “default_py”, and the predefined R profile is named “default_r”. Profiles are described in detail in the official SAS Configurator for Open Source documentation.

Example Patch File 1

The following example change-configuration.yaml file contains the predefined profiles only:

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-pyconfig-custom-parameters
patch: |-
  - op: replace
    path: /data/global.enabled
    value: "false"
  - op: replace
    path: /data/global.python_enabled
    value: "false"
  - op: replace
    path: /data/global.r_enabled
    value: "false"
  - op: replace
    path: /data/global.pvc
    value: "/opt/sas/viya/home/sas-pyconfig"
  - op: replace
    path: /data/global.python_profiles
    value: "default_py"
  - op: replace
    path: /data/global.r_profiles
    value: "default_r"
  - op: replace
    path: /data/global.dry_run
    value: "false"
  - op: replace
    path: /data/global.http_proxy
    value: "none"
  - op: replace
    path: /data/global.https_proxy
    value: "none"
  - op: replace
    path: /data/default_py.pip_local_packages
    value: "false"
  - op: replace
    path: /data/default_py.pip_index_url
    value: "none"
  - op: replace
    path: /data/default_py.pip_extra_url
    value: "none"
  - op: replace
    path: /data/default_py.configure_opts
    value: "--enable-optimizations"
  - op: replace
    path: /data/default_r.configure_opts
    value: "--enable-memory-profiling --enable-R-shlib --with-blas --with-lapack --with-readline=no --with-x=no"
  - op: replace
    path: /data/default_py.cflags
    value: "-fPIC"
  - op: replace
    path: /data/default_r.cflags
    value: "-fPIC"
  - op: replace
    path: /data/default_py.pip_install_packages
    value: "Prophet sas_kernel matplotlib sasoptpy sas-esppy NeuralProphet scipy==1.10 Flask XGBoost TensorFlow pybase64 scikit-learn statsmodels sympy mlxtend Skl2onnx nbeats-pytorch ESRNN onnxruntime opencv-python zipfile38 json2 pyenchant nltk spacy gensim pyarrow hnswlib==0.7.0 sas-ipc-queue great-expectations==0.16.8"
  - op: replace
    path: /data/default_py.pip_r_packages
    value: "rpy2"
  - op: replace
    path: /data/default_py.pip_r_profile
    value: "default_r"
  - op: replace
    path: /data/default_py.python_signer
    value: https://keybase.io/pablogsal/pgp_keys.asc
  - op: replace
    path: /data/default_py.python_signature
    value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz.asc
  - op: replace
    path: /data/default_py.python_tarball
    value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz
  - op: replace
    path: /data/default_r.r_tarball
    value: https://cloud.r-project.org/src/base/R-4/R-4.3.3.tar.gz
  - op: replace
    path: /data/default_r.packages
    value: "dplyr jsonlite httr tidyverse randomForest xgboost forecast arrow logger"
  - op: replace
    path: /data/default_r.pkg_repos
    value: "https://cran.rstudio.com/ http://cran.rstudio.com/ https://cloud.r-project.org/ http://cloud.r-project.org/"

target:
  version: v1
  kind: ConfigMap
  name: sas-pyconfig-parameters

Example Patch File 2

The following example change-configuration.yaml file adds a Python profile called “myprofile” to the global.profiles list and adds profile options for “myprofile”. Note that the default Python profile is still listed and will also be built.

apiVersion: builtin
kind: PatchTransformer
metadata:
  name: sas-pyconfig-custom-parameters
patch: |-
  - op: replace
    path: /data/global.enabled
    value: "true"
  - op: replace
    path: /data/global.python_profiles
    value: "default_py myprofile"
  - op: add
    path: /data/myprofile.configure_opts
    value: "--enable-optimizations"
  - op: add
    path: /data/myprofile.cflags
    value: "-fPIC"
  - op: add
    path: /data/myprofile.pip_install_packages
    value: "Prophet sas_kernel matplotlib sasoptpy sas-esppy NeuralProphet scipy==1.10 Flask XGBoost TensorFlow pybase64 scikit-learn statsmodels sympy mlxtend Skl2onnx nbeats-pytorch ESRNN onnxruntime opencv-python zipfile38 json2 pyenchant nltk spacy gensim pyarrow hnswlib==0.7.0 sas-ipc-queue great-expectations==0.16.8"
  - op: replace
    path: /data/myprofile.pip_local_packages
    value: "false"
  - op: replace
    path: /data/myprofile.pip_r_packages
    value: "rpy2"
  - op: replace
    path: /data/myprofile.pip_r_profile
    value: "default_r"
  - op: add
    path: /data/myprofile.python_signer
    value: https://keybase.io/pablogsal/pgp_keys.asc
  - op: add
    path: /data/myprofile.python_signature
    value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz.asc
  - op: add
    path: /data/myprofile.python_tarball
    value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz
target:
  version: v1
  kind: ConfigMap
  name: sas-pyconfig-parameters

Configure SAS Data Catalog to Use JanusGraph

Overview

JanusGraph is no longer supported for SAS Data Catalog. Therefore, the contents of this README and the overlay it refers to have been removed.

Quality Knowledge Base for the SAS Viya platform

Overview

This directory contains an example transformer that illustrates how to change the StorageClass and size of the PVC used to store the Quality Knowledge Base (QKB) in the SAS Viya platform.

Installation

  1. Copy the file sas-bases/examples/data-quality/storageclass/storage-class-transformer.yaml and place it in your site-config directory.

  2. Replace the {{ QKB-STORAGE-CLASS }} value with your desired StorageClass. Note that the QKB requires that your storage class support the RWX accessMode.

  3. Also replace the {{ QKB-STORAGE-SIZE }} value with the size you wish to allocate to the QKB volume. The recommended size is 8Gi. Note that using a lower value may restrict your ability to add new QKBs to the SAS Viya platform; 1Gi is the absolute minimum required.

  4. After you edit the file, add a reference to it in the transformers block of the base kustomization.yaml file, as shown in the example below.
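
    For example, assuming that you copied the transformer to site-config/data-quality (the destination path within site-config is your choice), the reference might look like this:

    transformers:
    - site-config/data-quality/storage-class-transformer.yaml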

Additional Resources

For more information about using example files, see the SAS Viya Platform Deployment Guide.

For more information about Kubernetes StorageClasses, please see the Kubernetes Storage Class Documentation.

SAS Quality Knowledge Base Maintenance Scripts

Overview

This readme describes the scripts available for maintaining Quality Knowledge Base (QKB) content in the SAS Viya platform. QKBs support the SAS Data Quality product.

These scripts are intended for ad hoc use after deployment. They generate YAML that is suitable for consumption by kubectl. The YAML creates Kubernetes Job objects to perform the specific task designated by the script name. After these jobs have finished running, some jobs will be deleted automatically and the rest can be manually deleted.

Script Details

containerize-qkb.sh

Usage

  containerize-qkb.sh "NAME" PATH REPO[:TAG]

Description

This script runs Docker to create a specially formatted container that allows the QKB to be imported into the SAS Viya platform running in Kubernetes.

For the NAME argument, provide the name by which the QKB will be surfaced in the SAS Viya platform. It may include spaces, but must be enclosed with quotation marks.

The PATH argument should be the location on disk where the QKB QARC file is located.

The REPO argument specifies the repository to assign to the Docker container that will be created. TAG may be specified after a colon in standard Docker notation.

After the script runs, a new Docker container with the specified tag is created in the local Docker registry.

Example

  $ bash containerize-qkb.sh "My Own QKB" /tmp/myqkb.qarc registry.mycompany.com/myownqkb:v1
  Setting up staging area...
  Generating Dockerfile...
  Running docker...
  Docker container generated successfully.

  REPOSITORY                      TAG IMAGE ID     CREATED      SIZE
  registry.mycompany.com/myownqkb v1  8dfb63e527c8 1 second ago 945.3MB

After the script completes, information about the new container is output, as shown above. If the local docker registry is not accessible to your Kubernetes cluster, you should then push the container to one that is.

  $ docker push registry.mycompany.com/myownqkb:v1
  The push refers to repository [registry.mycompany.com/myownqkb]
  f2409fb2f83e: Pushed
  076d9dcc6e6a: Mounted from myqkb-image1
  ce30860818b8: Mounted from myqkb-image1
  dfadf160ceab: Mounted from myqkb-image1
  v2: digest: sha256:b9802cff2f81dba87e7bb92355f2eb0fd14f91353574233c4d8f662a0b424961 size: 1360

deploy-qkb.sh

Usage

  deploy-qkb.sh REPO[:TAG]

Description

This script deploys a containerized QKB into the SAS Viya platform. The REPO argument specifies a Docker repo (and, optionally, tag) from which to pull the container. Note that this script does not make any changes to your Kubernetes configuration directly; instead it generates a Kubernetes Job that can then be piped to the kubectl command.

While the SAS Viya platform persists all deployed QKBs in the sas-quality-knowledge-base PVC, we recommend following the GitOps pattern of storing the generated YAML file in version control, under your $deploy/site-config directory. Doing so allows you to easily re-deploy the same QKB again later, should the PVC be deleted.

Examples

Generate a Kubernetes Job to deploy a QKB, and run it immediately:

  bash deploy-qkb.sh registry.mycompany.com/myownqkb:v1 | kubectl apply -n name-of-namespace -f -

Generate a Kubernetes Job to deploy a QKB, and write it into your site’s overlays directory:

  bash deploy-qkb.sh registry.mycompany.com/myownqkb:v1 >> $deploy/site-config/data-quality/custom-qkbs.yaml

This command appends the job configuration for the new QKB to the file called “custom-qkbs.yaml”. This is a convenient place to store all custom QKB jobs, and is suitable for inclusion into your SAS Viya platform’s base kustomization.yaml file as a resource overlay.

NOTE: The Kubernetes job will be deleted immediately upon successful completion.

If you do not yet have a $deploy/site-config/data-quality directory, you can create and initialize it as follows:

  mkdir -p $deploy/site-config/data-quality
  cp $deploy/sas-bases/overlays/data-quality/* $deploy/site-config/data-quality

To attach custom-qkbs.yaml to your SAS Viya platform’s configuration, edit your base kustomization.yaml file, and find or create the “resources:” section. Under that section, add the following line:

  - site-config/data-quality

You can re-apply these kustomizations to bring the new QKB into your SAS Viya platform.
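
For example, a minimal sketch of rebuilding and re-applying the manifest (your site's deployment method may differ; see the SAS Viya Platform: Deployment Guide for the supported procedure):

  kustomize build -o site.yaml
  kubectl apply -n name-of-namespace -f site.yaml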


list-qkbs.sh

Usage

  list-qkbs.sh

Description

A parameter-less script that generates Kubernetes Job YAML to list the names of all QKBs available on the sas-quality-knowledge-bases volume. Output is sent to the log for the pod created by the job.

Examples

  $ bash list-qkbs.sh | kubectl apply -n name-of-namespace -f -
  job.batch/sas-quality-knowledge-base-list-job-ifvw01lr created

  $ kubectl -n name-of-namespace logs job.batch/sas-quality-knowledge-base-list-job-ifvw01lr
  QKB CI 31
  My Own QKB

  $ kubectl -n name-of-namespace delete job.batch/sas-quality-knowledge-base-list-job-ifvw01lr
  job.batch "sas-quality-knowledge-base-list-job-ifvw01lr" deleted

If a QKB is in the process of being deployed, or was aborted for some reason, you may see the string “(incomplete)” after that QKB’s name:

  $ kubectl -n name-of-namespace logs job.batch/sas-quality-knowledge-base-list-job-ifvw01lr
  QKB CI 31
  My Own QKB  (incomplete)

remove-qkb.sh

Usage

  remove-qkb.sh NAME

Description

Generates Kubernetes Job YAML that removes a QKB from the sas-quality-knowledge-bases volume. The QKB to remove is specified by NAME, which is returned by list-qkbs.sh. Any errors or other output are written to the associated pod’s log and can be viewed using the kubectl logs command.

NOTE: The Kubernetes job will be deleted immediately upon successful completion. The kubectl logs and delete commands below can be used to check logs in case of failures in the job.

Examples

  $ bash remove-qkb.sh "My Own QKB" | kubectl apply -n name-of-namespace -f -
  job.batch/sas-quality-knowledge-base-remove-job-zbl4sxmq created

  $ kubectl logs -n name-of-namespace job.batch/sas-quality-knowledge-base-remove-job-zbl4sxmq
  Reference data content "My Own QKB" was removed.

  $ kubectl delete -n name-of-namespace job.batch/sas-quality-knowledge-base-remove-job-zbl4sxmq
  job.batch "sas-quality-knowledge-base-remove-job-zbl4sxmq" deleted

Additional Resources

For more information about the QKB, see the SAS Data Quality documentation.

Installation of SAS Data Quality for Payment Integrity Health Care

Overview

SAS Data Quality for Payment Integrity Health Care (DQHFWA) provides a tool for forensic accountants and data analysts to discover wasteful and fraudulent activity in the submittal and payment of medical claims.

Pre-Installation Steps

PostgreSQL Database Considerations

An external PostgreSQL database is required. Although SAS Data Quality for Payment Integrity Health Care does not require the PostgreSQL Common Data Store (CDS) database, SAS recommends that the external CDS PostgreSQL database be configured along with the external Platform PostgreSQL database because of the expected data volumes. The data volume includes the temporary (work) tables used for merges and joins of transient table data that the application might choose to use, the Stage tables where the Data Quality algorithm processes the data, and the Warehouse tables where the pristine datasets reside.

Using only the SAS Viya platform PostgreSQL database to contain the customer data can negatively affect the performance of your SAS Viya platform. For more information, see SAS Common Data Store Requirements.

Platform PostgreSQL

Platform PostgreSQL is required in the SAS Viya platform. Refer to the instructions in the README file located at $deploy/sas-bases/examples/postgres/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_postgresql.htm (for HTML format) for information about configuring an external instance of PostgreSQL.

CDS PostgreSQL

Use of CDS PostgreSQL is optional but recommended for SAS Data Quality for Payment Integrity Health Care.
Refer to the README file located at $deploy/sas-bases/examples/postgres/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_postgresql.htm (for HTML format) for information about configuring an external instance of PostgreSQL for CDS.

SAS Data Quality for Payment Integrity Health Care Configuration

  1. At the top of the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml), add the following entry to allow the Compute server to refresh authorization tokens.

    Note: This entry must be placed above the - sas-bases/overlays/required/transformers.yaml line.

    transformers:
    - sas-bases/overlays/sas-programming-environment/refreshtoken
  2. In the transformers block of the base kustomization.yaml, add the following entry to allow the Compute server startup script to run.

    transformers:
    - sas-bases/overlays/sas-programming-environment/enable-admin-script-access.yaml
  3. In the transformers block of the base kustomization.yaml file, add the following entry to add the required overlays for the sas-data-quality-hfwa application.

    Note: This entry must be placed above the - sas-bases/overlays/required/transformers.yaml line.

    transformers:
    - sas-bases/overlays/sas-data-quality-hfwa/hfwa-required-transfomers.yaml
  4. The access token validity time needs to be increased for the SAS Compute Server and SAS Studio to handle long-running jobs. This also exposes the file paths to generated application code in SAS Studio.

    a. Copy the file $deploy/sas-bases/examples/configuration/sitedefault.yaml to the $deploy/site-config directory if it does not already exist.

    b. Add the following content to the $deploy/site-config/sitedefault.yaml file.

        sas.studio:
            showServerFiles: true
            fileNavigationRoot: "CUSTOM"
            fileNavigationCustomRootPath: "/dqhfwa"
        oauth2.client:
            Services: "cas-shared-default, Compute Service, Credentials service, Job Execution service, Launcher service"
            accessTokenValidity: 216000
            refreshTokenValidity: 216000
        sas.logon.jwt:
            policy.accessTokenValiditySeconds: 216000
            policy.global.accessTokenValiditySeconds: 216000
            policy.global.refreshTokenValiditySeconds: 216000
            policy.refreshTokenValiditySeconds: 216000

If you are using the recommended CDS PostgreSQL instance, also perform the following two steps.

  1. In the transformers block of the base kustomization.yaml file, add a reference to the file sas-bases/overlays/sas-data-quality-hfwa/hfwa-server-use-cds-postgres-config-map.yaml.

    Note: This entry must be placed above the - sas-bases/overlays/required/transformers.yaml line.

    transformers:
    - sas-bases/overlays/sas-data-quality-hfwa/hfwa-server-use-cds-postgres-config-map.yaml
  2. In the generators block of the base kustomization.yaml file, add a reference to the cds-config-map file sas-bases/overlays/sas-data-quality-hfwa/hfwa-add-cds-config-map.yaml.

    generators:
    - sas-bases/overlays/sas-data-quality-hfwa/hfwa-add-cds-config-map.yaml

Set Up NFS File Shares for SAS Data Quality for Payment Integrity Health Care

Before deploying SAS Data Quality for Payment Integrity Health Care, you need to create the necessary directories required by the application on the NFS server, and then assign those directories to the volumes and volume mounts defined in the application, SAS compute server, and the SAS CAS server.

Create the file shares on the NFS server for use by the application.

Note: You will need the SSH private key created for access to the jumpserver and the user ID and public IP address of the jumpserver. Replace the indicated values enclosed with {{ }} in the export statements in the script with your specific values.

  1. Copy the file $deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh to the $deploy/site-config/sas-data-quality-hfwa directory and make it writable. If the $deploy/site-config/sas-data-quality-hfwa directory does not exist, create it.

    chmod +wx $deploy/site-config/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh
  2. Replace the variables in the script $deploy/site-config/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh with values specific to your environment (see the illustrative example after this list).

    • Replace {{ NAMESPACE }} with the Kubernetes namespace for your SAS Viya platform installation.
    • Replace {{ SSH_PRIVATE_KEY }} with the path to the ssh private key file for access to the jumpserver.
    • Replace {{ JUMP_SERVER }} with the IP address of the jump server.
    • Replace {{ JUMP_SERVER_JUMP_USER }} with the username of the user with access to the jump server.
  3. Execute the modified script from a Linux terminal on your deployment server.

    $deploy/site-config/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh
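
For illustration only, the filled-in export statements in the script might look similar to the following. The variable names are assumptions based on the placeholders listed in step 2, and the values are examples; confirm both against the actual script.

  export NAMESPACE=viya4
  export SSH_PRIVATE_KEY=~/.ssh/jumpserver_id_rsa
  export JUMP_SERVER=203.0.113.10
  export JUMP_SERVER_JUMP_USER=jumpadmin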

Copy and Modify Files to Point to Application File Shares

  1. Copy the file $deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa-nfs-config-map.yaml into your $deploy/site-config/sas-data-quality-hfwa directory and make it writable:

    chmod +w $deploy/site-config/sas-data-quality-hfwa/hfwa-nfs-config-map.yaml
  2. Replace the value of {{ V4_CFG_RWX_FILESTORE_ENDPOINT }} with the IP address of your cluster’s NFS server. Replace the value of {{ V4_CFG_RWX_FILESTORE_DATA_PATH }} with the path to your NFS server viya share (for example, /export/mynamespace).

  3. In the transformers block of the base kustomization.yaml, add a reference to the file you just copied.

    transformers:
    - site-config/sas-data-quality-hfwa/hfwa-nfs-config-map.yaml

Set Up Secrets

Before deploying SAS Data Quality for Payment Integrity Health Care, the database encryption secret and, if you are using an SFTP server, the SFTP secrets need to be defined.

Database Encryption Secret

  1. Copy the file $deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa-security-add-secret-database-key.yaml into your $deploy/site-config/sas-data-quality-hfwa directory and make it writable:

    chmod +w $deploy/site-config/sas-data-quality-hfwa/hfwa-security-add-secret-database-key.yaml
  2. Edit the file and change the value of {{ DATABASE_ENCRYPTION_KEY }} in the literals section to a phrase with exactly 32 characters (no spaces) of your choice. Here is an example:

    ## This SecretGenerator creates a Secret containing an AES key used by
    ## sas-data-quality-hfwa to securely store data
    ---
    apiVersion: builtin
    kind: SecretGenerator
    metadata:
      name: sas-data-quality-hfwa-db-key
    literals:
      - key=thisisanexample32byteaeskey12345  # Change me
    type: Opaque

SFTP Server Secret Key

  1. Copy the file $deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa-security-add-secret-sftp-keys.yaml into your $deploy/site-config/sas-data-quality-hfwa directory and make it writable:

    chmod +w $deploy/site-config/sas-data-quality-hfwa/hfwa-security-add-secret-sftp-keys.yaml
  2. Copy the SFTP server private RSA key file to the $deploy/site-config/security directory. If the security directory does not exist, create it in the $deploy/site-config directory. Replace the {{ CONNECTION_NAME }} value with the name of the connection that you will use for a SFTP connection. Replace the {{ RELATIVE_PATH_TO_KEY_FILE }} with the relative path to the file you just copied (such as site-config/security/sftpkey). If you have multiple SFTP servers, you can add additional entries under the files section.
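
    For illustration only, the files section of the edited secret generator might look similar to the following. The connection name (my-sftp-connection) and key path are placeholder assumptions, and the actual structure of the file may differ; follow the comments in the copied file.

    files:
      - my-sftp-connection=site-config/security/sftpkey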

Add References to the Base kustomization.yaml File

Add references to these files under the generators: section of the kustomization.yaml file:

generators:
  - site-config/sas-data-quality-hfwa/hfwa-security-add-secret-database-key.yaml
  - site-config/sas-data-quality-hfwa/hfwa-security-add-secret-sftp-keys.yaml

Compute Server Considerations

Hardware Considerations

SAS code programs execute in the Compute Server service. For performant execution, the Compute Server relies on fast data storage for the transient intermediate data files that it generates within SASWORK. Therefore, depending on the anticipated data volume, it is strongly recommended that the Compute Servers be configured with access to fast local storage. On Azure, Ls-Series v3 server instances are a good fit. The local NVMe storage available on these instances can be configured with RAID and striping to provide both the disk capacity and the performance needed for SASWORK.

Note: Adding NVMe storage and changing the SASWORK location is optional. It is only required if the data volume being processed exceeds the capacity of the default location for SASWORK.

(Optional) Add NVMe Storage to the SAS Compute Server

If you decide to use a server that has fast local storage for the compute server nodes, in the resources block of the base kustomization.yaml, add a reference to the file sas-bases/overlays/sas-data-quality-hfwa/compute-server/compute-nvme-ssd.yaml.

resources:
- sas-bases/overlays/sas-data-quality-hfwa/compute-server/compute-nvme-ssd.yaml

(Optional) Configure the Custom SASWORK Location

In the transformers block of the base kustomization.yaml, add a reference to the file sas-bases/overlays/sas-data-quality-hfwa/compute-server/custom-saswork-location.yaml.

transformers:
- sas-bases/overlays/sas-data-quality-hfwa/compute-server/custom-saswork-location.yaml

Change the SAS Compute Server HTTP Timeout Setting

Processing large data volume requires increasing the default SAS Compute server HTTP timeout setting. To adjust the setting, refer to the README file located at $deploy/sas-bases/examples/compute/client-request-timeout/README.md (for Markdown format) or at $deploy/sas-bases/docs/update_compute_service_internal_http_request_timeout.htm (for HTML format).

CAS Server Considerations

Hardware Considerations

Some of the SAS code programs execute in the SAS CAS Server in a distributed manner across all CAS instances (depending on SMP versus MPP deployment). Like the Compute Server, CAS instances rely on fast data storage for the transient intermediate data files that they memory-map and generate within CASCACHE. Therefore, depending on the volume of data held in memory or spilled to disk, it is strongly recommended that the CAS Servers be configured with access to fast local storage. On Azure, Ls-Series v3 server instances are a good fit. The local NVMe storage available on these instances can be configured with RAID and striping to provide both the disk capacity and the performance needed for CASCACHE.

Note: Adding NVMe storage and changing the CASCACHE location is optional. It is required only if the data volume being processed exceeds the capacity of the default location for CASCACHE.

(Optional) Add NVMe Storage to the CAS Server

If you have decided to use a server that has fast local storage for the CAS server nodes, in the resources block of the base kustomization.yaml file, add a reference to the file sas-bases/overlays/sas-data-quality-hfwa/cas-server/cas-nvme-ssd.yaml.

resources:
- sas-bases/overlays/sas-data-quality-hfwa/cas-server/cas-nvme-ssd.yaml

(Optional) Configure the CASCACHE Location

In the transformers block of the base kustomization.yaml, add a reference to the file sas-bases/overlays/sas-data-quality-hfwa/cas-server/custom-caswork-location.yaml.

transformers:
- sas-bases/overlays/sas-data-quality-hfwa/cas-server/custom-caswork-location.yaml

Increase the Number of CAS Workers

Due to large volumes of data being processed, SAS recommends that the number of CAS workers be increased to at least three to improve performance. To increase the number of CAS workers, see the “Manage the Number of Workers” section of the README file located at $deploy/sas-bases/examples/cas/configure/README.md (for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm (for HTML format).
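
The following is only an illustrative sketch of the kind of PatchTransformer that the CAS configuration README describes for managing the number of workers; refer to the actual example file in sas-bases for the correct file name and target selector.

  apiVersion: builtin
  kind: PatchTransformer
  metadata:
    name: cas-manage-workers
  patch: |-
    - op: replace
      path: /spec/workers
      value: 3
  target:
    group: viya.sas.com
    kind: CASDeployment
    name: .*
    version: v1alpha1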

Set Up the CAS Allowlist Paths

Before deploying SAS Data Quality for Payment Integrity Health Care, the file shares used by the application need to be allowed proper access in CAS.

  1. Copy the file $deploy/sas-bases/examples/cas/configure/cas-add-allowlist-paths.yaml into your $deploy/site-config/sas-data-quality-hfwa directory and make it writable:

    chmod +w $deploy/site-config/sas-data-quality-hfwa/cas-add-allowlist-paths.yaml
  2. Replace the patch: |- section of the yaml file with the following code.

    Note: If you already have this file in your deployment for other applications, add the code starting at the line following the patch: |- line to your existing file in the patch block.

    patch: |-
      - op: add
        path: /spec/appendCASAllowlistPaths/-
        value:
          /dqhfwa/data/incoming
      - op: add
        path: /spec/appendCASAllowlistPaths/-
        value:
          /dqhfwa/sascode/data/module_specific
      - op: add
        path: /spec/appendCASAllowlistPaths/-
        value:
          /dqhfwa/job_code/data/module_specific/entity_resolution
  3. In the transformers block of the base kustomization.yaml, add a reference to the file you just copied, or skip this step if the reference already exists.

    transformers:
    - site-config/sas-data-quality-hfwa/cas-add-allowlist-paths.yaml

Additional Resources

For more information about configuration and using example files, see the SAS Viya Platform: Deployment Guide.

SAS Detection Engine Configuration

Overview

This README file describes the configuration settings available for deploying and running SAS Detection Engine. The sections of this README correspond to sections of the full example template, detection-engine-deployment.yaml. In addition to the full template, examples of how to complete each section are also available in /$deploy/sas-bases/examples/sas-detection/.

Installation

Create a copy of the example template in /$deploy/sas-bases/examples/sas-detection/detection-engine-deployment.yaml. Save this copy in /$deploy/site-config/sas-detection/detection-engine-deployment.yaml.

Placeholders are indicated by curly brackets, such as {{ DECISION }}. Find and replace the placeholders with the values you want for your deployment. After all placeholders have been filled in, apply your deployment YAML directly with kubectl apply, specifying the file that you just edited:

kubectl apply -f detection-engine-deployment.yaml

Examples

The example files are located at /$deploy/sas-bases/examples/sas-detection/. Each item in the list includes a description of the example and the example file name.

Deployment Resource Section

This is the most customizable section of the template. Each container has various environment options that can be set.

sas-sda-scr container

The SAS Container Runtime (SCR) container requires an image to be specified. This image is available in your configured Docker registry and contains the output of your design-time work in the SAS Viya platform.

containers:
- name: sas-sda-scr
  # Image from your docker registry
  image: {{ DECISION }}

Other than the image, the only required properties for the sas-sda-scr container are SAS_REDIS_HOST and SAS_REDIS_PORT. The other properties are optional security properties covered in detail in the security section. See the container-configuration.yaml file for the minimal required configuration.
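
For illustration, here is a minimal sketch of the required environment variables for this container. The host and port values are placeholders; see the container-configuration.yaml example for the actual minimal configuration.

containers:
- name: sas-sda-scr
  image: {{ DECISION }}
  env:
  - name: SAS_REDIS_HOST
    value: my-redis-host
  - name: SAS_REDIS_PORT
    value: "6379"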

sas-detection container

The sas-detection container includes a few categories of environment properties: logging properties, Kafka properties, Redis properties, and processing options. Optional security-related properties are covered further in the security section. See the container-configuration.yaml file for the minimal required configuration.

Logging Properties

SAS_LOG_LEVEL can be DEBUG, INFO, WARN, or ERROR. The value determines the verbosity of the log output. WARN or ERROR should be used where performance is important.

SAS_LOG_LOCALE determines which locale messages should be included in the output, where the default is “en”.

Kafka Properties

There is a property for the bootstrap server to connect, a few properties to indicate the topics sas-detection will use to read/write, and a boolean determining whether reading from Kafka is enabled.

SAS_DETECTION_KAFKA_SERVER is the Kafka bootstrap server.

SAS_DETECTION_KAFKA_TDR_TOPIC is the transaction detection repository (output) topic.

SAS_DETECTION_KAFKA_REJECTTOPIC is the reject topic for when errors occur.

SAS_DETECTION_KAFKA_TOPIC is the input message topic.

SAS_DETECTION_KAFKA_CONSUMER_ENABLED determines whether sas-detection will consume messages from the SAS_DETECTION_KAFKA_TOPIC.

Redis Properties

SAS_DETECTION_REDIS_HOST is the Redis host and SAS_DETECTION_REDIS_PORT is the port used to connect to Redis.

SAS_DETECTION_REDIS_POOL_SIZE is the size of the connection pool for the go-redis client. If not specified, this defaults to 10.

Processing Options

For metrics gathering and reporting to work correctly, SAS_DETECTION_DEPLOYMENT_NAME must match your deployment name and SAS_DETECTION_PROCESSING_DISABLEMETRICS must be set to “false”.

SAS_DETECTION_PROCESSING_SLA determines the threshold in milliseconds after which a transaction should fail with an SLA error.

SAS_DETECTION_PROCESSING_SETVERBOSE is an integer between 1 and 13, inclusive, which determines the logging level within the sas-sda-scr container.

SAS_DETECTION_PROCESSING_OUTPUT_FILTER allows the output REST response to be filtered. It is a comma-separated list of variable sets or variables in your message, for example, message.sas.system,message.request,message.sas.decision.

SAS_DETECTION_KAFKA_BYPASS disables Kafka reads and writes if set to “true”.

SAS_DETECTION_RULE_METRICS_BYPASS disables rule metrics reads and writes to Redis if set to “true”.

SAS_DETECTION_WATCHER_INTERVAL_SEC is the interval in seconds at which the watcher will check your docker registry for an update to the image in your sas-sda-scr container.

Services and Ingresses

These resources don’t need much customization. They require the SUFFIX to be filled in, and the NAMESPACE to be specified, as indicated in the template. The ingresses additionally require the host property be specified. There is a service and ingress for each of the containers defined in the deployment.

The services are ClusterIP services, accessed externally via the ingress resources. The ports are already filled in and line up with the prefilled ingress ports.

The ingresses include the host, and rules for directing requests. For the sas-detection ingress, anything sent with /detection as the path prefix will use this ingress. The services above are referenced in these ingresses.

See the ingress-setup-insecure.yaml file for an example.

OpenShift

If you are deploying your SAS Detection Engine on OpenShift, you will not be able to use the Ingress resource. In this case, replace your ingress resource with an OpenShift Route.

See the openshift-route.yaml file for an example.

Roles and RoleBindings

These only require that the NAMESPACE be specified.

The reader role allows the pods in the specified namespace to retrieve information on deployments and pods, which is used to report metrics for all replicas. The SAS Container Runtime (SCR) container also uses this role to read service and endpoint information. The scaler role allows the pods to scale themselves up or down, which is necessary for them to restart themselves upon seeing an update to a decision image. The secretReader role allows the pods to access Kubernetes secrets in order to get the authorization information required to interact with the tag registry.

The RoleBinding resources add these roles to the service account in your NAMESPACE, in order to attach and enable these Roles.

See the roles-and-rolebinding.yaml file for an example.

Readiness Probe

The sas-detection container uses a readiness probe, which allows Kubernetes to determine when that pod is ready to receive transactions. The initialDelaySeconds field specifies how many seconds Kubernetes should wait before performing the initial probe. The periodSeconds field specifies how many seconds Kubernetes should wait between probes.
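
For illustration, here is a sketch of a readiness probe definition showing these two fields. The probe path and port are placeholder assumptions, not the values used by the sas-detection container.

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5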

More information on readiness probes is available here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Security

TLS Secrets

There are optional, commented out sections that may be used to create the secrets containing TLS certificates and keys. The data must be base64 encoded and included in these definitions. These secrets could optionally be created manually via kubectl, or managed via cert-manager. If the secrets are created via some other method, the secret names must still match those referenced in the volumes and ingress definitions.

An alternative is using the selfsigned-certificates.yaml example file. Placeholders in this file are indicated by curly brackets, such as {{ DNS_NAME }}. Find and replace the placeholders with the values you want for your certificates. This file is optional and may be edited as needed to fit your purposes. As with the detection-engine deployment file, you create these resources directly using kubectl apply. This file must be applied once, and it will generate secrets containing your certificates and keys.

Mutual TLS

In addition to one-way TLS, the Detection Engine allows the optional configuration of mutual TLS (mTLS) connections to itself, as well as outgoing mutual TLS connections to Redis and Kafka. Mutual TLS allows the server to authenticate the client using a client certificate and client key that the client sends to the server. This certificate and key pair needs to be signed by a CA the server is configured to trust, and then supplied by the client to connect to the server. Examples of client certificates can be found in the /$deploy/sas-bases/examples/sas-detection/selfsigned-certificates example file, where the usage field includes “client auth” as a value.

Secure Ingress Definition

To add TLS to your ingress, some annotations and spec fields must be added. These will require certificates either included in this template, or created and supplied previously. The template includes a TLS ingress that is commented out, but the below examples break down what is different in this ingress.

To secure your ingress, the following annotations can be used to add one-way TLS, mutual TLS, or both.

annotations:
    # Used to enable TLS
    nginx.ingress.kubernetes.io/auth-tls-secret: {{ NAMESPACE }}/detection-ingress-tls-ca-config-{{ ORGANIZATION }}
    # used to enable mTLS
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"    

For one-way TLS, fill in the tls field under the spec field. This also includes a secretName, which includes your TLS certificate.

tls:
    - hosts:
        - {{ ORGANIZATION }}.{{ INGRESS-TYPE }}.{{ HOST }}
    secretName: detection-ingress-tls-config-{{ ORGANIZATION }}

See the ingress-setup-secure.yaml file for an example of where to add these fields to your deployment yaml.

Volumes and Volume Mounts

Depending on the security configuration, mounting additional trusted certificates in your containers may be necessary. The areas to add these are tagged with SECURITY, and can be uncommented as necessary. The secret names must match whatever secrets have been configured for these certificates.

There are three volumes created from secrets containing TLS certificates. One volume is for sas-detection certificates, one volume is for Redis certificates, and one volume is for Kafka certificates. These are defined for each container in the Deployment spec.

After being created, these volumes may be mounted in the sas-sda-scr and sas-detection containers. As defined in the template, the detection certificates are mounted in /security, the redis certificates are mounted in /security/redis, and the kafka certificates are mounted in /security/kafka. The sas-sda-scr container does not access Kafka, so it does not require the Kafka mount.

See the container-configuration-secure.yaml file for an example. Note that the volumes are created once outside the container definitions, and then used to create volumeMounts within each container.

sas-sda-scr

The security properties for this container deal with Redis TLS. Not all are required. They cover authentication, one-way TLS, and mutual TLS.

SAS_REDIS_AUTH_USER and SAS_REDIS_AUTH_PASSWORD are required when the Redis service is configured with user password. They can be entered directly, or referenced from a Kubernetes secret.

SAS_REDIS_CA_CERT is the path to the certificate in the container for one-way TLS.

SAS_REDIS_TRUST_CERT_PATH is optional and may be used to add additional trusted certificates.

SAS_REDIS_CLIENT_CERT_FILE and SAS_REDIS_CLIENT_PRIV_KEY_FILE are required only to configure mutual TLS. They contain the client certificate and key used for client verification by the server.

SAS_REDIS_TLS is used with a TLS-enabled Redis. A value of “1”, “Y”, or “T” will allow TLS. A value of “0”, “N”, or “F” will prohibit TLS. If a value is not entered, the default behavior is to prohibit TLS.

sas-detection

SAS detection includes properties to enable TLS and mutual TLS for Redis and Kafka.

For Redis:

SAS_DETECTION_REDIS_AUTH_USER allows a user to be entered for Redis. Not required, defaults to “default” user.

SAS_DETECTION_REDIS_AUTH_PASS allows a password to be entered for Redis.

SAS_DETECTION_REDIS_TLS_ENABLED should be set to true if the Redis server has TLS enabled.

SAS_DETECTION_REDIS_TLS_CACERT is optional and may be used to add a trusted CA.

SAS_DETECTION_REDIS_CLIENT_CERT_FILE and SAS_DETECTION_REDIS_CLIENT_PRIV_KEY_FILE are optional and may be used to supply a client certificate and client key if connecting to Redis with mutual TLS enabled.

SAS_DETECTION_REDIS_SERVER_DOMAIN can be used to supply the correct hostname for hostname verification of the certificate.

For Kafka:

SAS_DETECTION_KAFKA_SECURITY_PROTOCOL can be PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL to indicate which combination of TLS and authentication Kafka is using. This defaults to PLAINTEXT. SAS_DETECTION_KAFKA_TRUSTSTORE can be used to add trusted certificates.

SAS_DETECTION_KAFKA_ENABLE_HOSTNAME_VERIFICATION enables DNS verification for TLS, defaulting to true.

SAS_DETECTION_KAFKA_CERTIFICATE_LOCATION is the location of the client certificate used to enable mTLS.

SAS_DETECTION_KAFKA_KEY_LOCATION is the location of the client key used to enable mTLS.

SAS_DETECTION_KAFKA_KEY_PASSWORD is the password for the supplied key, if a password is used.

SAS_DETECTION_KAFKA_SASL_USERNAME and SAS_DETECTION_KAFKA_SASL_PASSWORD define the username and password if authentication is enabled for the Kafka cluster.
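
For illustration, here is a hedged sketch of how a few of these Kafka security properties might appear in the sas-detection container’s env list. The values shown are placeholders only.

env:
- name: SAS_DETECTION_KAFKA_SECURITY_PROTOCOL
  value: SASL_SSL
- name: SAS_DETECTION_KAFKA_SASL_USERNAME
  value: my-kafka-user
- name: SAS_DETECTION_KAFKA_SASL_PASSWORD
  value: my-kafka-password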

Configure SAS Detection Definition Service to Add Service Account

Overview

This README describes how a service account with defined privileges can be added to the sas-detection-definition pod. A service account is required in an OpenShift cluster if the pod needs to mount NFS. Models are mounted in the detection-definition container using an NFS mount. To enable the use of models, the service account requires the NFS volume mounting privilege.

Prerequisites

Grant Security Context Constraints on an OpenShift Cluster

The /$deploy/sas-bases/overlays/sas-detection-definition/service-account directory contains a file to grant security context constraints for using NFS on an OpenShift cluster.

A Kubernetes cluster administrator should add the security context constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:

kubectl apply -f sas-detection-definition-scc.yaml

or

oc create -f sas-detection-definition-scc.yaml

Bind the Security Context Constraints to a Service Account

After the security context constraints have been applied, you must link the security context constraints to the appropriate service account that will use it. Use the following command:

oc -n <name-of-namespace> adm policy add-scc-to-user sas-detection-definition -z sas-detection-definition

Installation

  1. Make the following changes to the kustomization.yaml file in the $deploy directory:

    • Add sas-bases/overlays/sas-detection-definition/service-account/sa.yaml to the resources block.
    • Add sas-bases/overlays/sas-detection-definition/service-account/sa-transformer.yaml to the transformers block.

    Here is an example:

    resources:
    - sas-bases/overlays/sas-detection-definition/service-account/sa.yaml
    
    transformers:
    - sas-bases/overlays/sas-detection-definition/service-account/sa-transformer.yaml
  2. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Post-Installation Tasks

Verify the Service Account Configuration

  1. Run the following command to verify whether the overlay has been applied:

    kubectl -n <name-of-namespace> get pod <sas-detection-definition-pod-name> -o yaml | grep serviceAccount
  2. Verify that the output contains the service-account sas-detection-definition.

    serviceAccount: sas-detection-definition
    serviceAccountName: sas-detection-definition

Preparing and Configuring SAS Dynamic Actuarial Modeling for Deployment

Prerequisites

When SAS Dynamic Actuarial Modeling is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer (Cirrus Core) that is used by multiple solutions. Therefore, in order to fully deploy SAS Dynamic Actuarial Modeling, you must deploy, at minimum, the Cirrus Core content in addition to SAS Dynamic Actuarial Modeling. Preparing and configuring Cirrus Core for deployment is described in the Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-core/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

The Risk Cirrus Core README also contains information about storage options, such as external databases, for your solution. You must complete steps 1-4 described in the Risk Cirrus Core README before deploying SAS Dynamic Actuarial Modeling. Please read that document for important information about the pre-deployment tasks that should be completed prior to deploying SAS Dynamic Actuarial Modeling.

Installation

  1. Complete steps 1-4 described in the Cirrus Core README.

  2. Complete step 4 described in the Cirrus Core README to modify your Cirrus Core configuration file. Because SAS Dynamic Actuarial Modeling uses workflow service tasks, a user account must be configured for a workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable to “Y” and assign the user account to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable.

  3. If you have a $deploy/site-config/sas-risk-cirrus-pcpricing/resources directory, delete it and its contents. Remove the reference to this directory from the transformers section of your base kustomization.yaml file ($deploy/kustomization.yaml). This step should only be necessary if you are upgrading from a cadence prior to 2025.02.

  4. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-pcpricing to the $deploy/site-config/sas-risk-cirrus-pcpricing directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env file, not the old pcpricing_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected .env file, verify that the overrides have been correctly applied. No further actions are required unless you want to change the connection settings to different overrides.

  5. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-pcpricing directory). Lines with a # at the beginning are commented out; their values will not be applied during deployment. To override a default provided by SAS for a given variable, uncomment the line by removing the # at the beginning of the line and modify as explained in the following section. Specify, if needed, your settings as follows:


    a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO).


    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample_step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. For SAS Dynamic Actuarial Modeling, the following are interrelated sample installation steps:

    • The transfer_files_sampledata step loads SAS sample data to the file service.
    • The transfer_files_csv_sampledata step loads csv sample data to the file service.
    • The install_sample_data step creates the pcprfm CAS library and loads tables into it.
    • The manage_cas_lib_acl step sets up permissions for the pcprfm CAS library.
    • The install_discovery_agent step creates an agent for data analysis in SAS Information Catalog.

    To perform the sample installation steps, set this variable to Y. To skip them, set this variable to N. (Default is Y)


    c. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, you should leave this variable blank; the only use case for it is skipping the load of sample data. To skip the load of sample data, set this variable to “transfer_files_sampledata”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip sample data and any other steps that are marked as samples. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable to the IDs of any steps you would like to skip, including those flagged as sample data. (Default is <Empty list>.)
    Note: If this variable is empty, all steps will be executed unless the solution has already been deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in it will be executed.


    d. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the “transfer_files_sampledata” and the “install_sample_data” steps will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided sample data. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “transfer_files_sampledata,install_sample_data” and then delete the sas-risk-cirrus-pcpricing pod to force a redeployment. Doing so will run only the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS. IMPORTANT: In your initial deployment this variable should be an empty string, or you risk an incomplete or failed deployment. If you specify a list of comma-separated steps to run, only those steps are performed. If the environment variable is not set, every step is run except for sample steps if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N. (Default is <Empty list>.)

    The following is an example of a configuration.env that you could use for SAS Dynamic Actuarial Modeling. The uncommented parameters will be added to the solution configuration map.

    SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER=INFO
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=Y
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
  6. In the base kustomization.yaml file in the $deploy directory, add site-config/sas-risk-cirrus-pcpricing/configuration.env to the configMapGenerator block. Here is an example:

     configMapGenerator:
       ...
       - name: sas-risk-cirrus-pcpricing-parameters
         behavior: merge
         env:
           - site-config/sas-risk-cirrus-pcpricing/configuration.env
       ...

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

Verify That Configuration Overrides Have Been Applied Successfully

Before verifying the settings for the SAS Dynamic Actuarial Modeling solution, complete step 6 specified in the Cirrus Core README to verify the settings for Cirrus Core.

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-pcpricing-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired connection settings that you configured.

Adding Global Configuration Settings for SAS Event Stream Processing Projects

Overview

Use the $deploy/sas-bases/examples/sas-esp-operator/espconfig/espconfig-properties.yaml and $deploy/sas-bases/examples/sas-esp-operator/espconfig/espconfig-env-variables.yaml files to set default settings and environment variables for the SAS Event Stream Processing Kubernetes Operator and all SAS Event Stream Processing servers that start within a Kubernetes environment.

Each setting and environment variable described in these example files is optional and enables you to override the corresponding default. If no configuration changes are required, do not add these example files to your kustomization.yaml file.

Prerequisites

By default, each setting or environment variable in the example files is commented out. Start by determining which of the commented settings or environment variables you intend to set. The comments in the example files describe the settings and environment variables that can be added and provide information about how to set them.

Installation

  1. Copy the example files from the $deploy/sas-bases/examples/sas-esp-operator/espconfig directory to the $deploy/site-config/sas-esp-operator/espconfig directory. Create the destination directory if it does not exist.

  2. Use the $deploy/site-config/sas-esp-operator/espconfig/espconfig-properties.yaml file to specify custom SAS Event Stream Processing default settings.

    For each SAS Event Stream Processing default setting that you intend to use, uncomment the op, path, and value lines that are associated with the setting. Then replace the {{ VARIABLE-NAME }} variable with the desired value.

    Here are some examples:

    ...
      - op: add
        path: /spec/espProperties/server.disableTrace
        value: "true"
    ...
      - op: add
        path: /spec/espProperties/server.loglevel
        value: esp=trace
    ...
      - op: replace
        path: /spec/limits/maxReplicas
        value: "2"
    ...
  3. Use the $deploy/site-config/sas-esp-operator/espconfig/espconfig-env-variables.yaml file to specify custom SAS Event Stream Processing default environment variables.

    For each SAS Event Stream Processing default environment variable that you intend to use, uncomment the op, path, value, name, and value lines that are associated with the environment variable. Then replace the {{ VARIABLE-NAME }} variable with the desired value.

    If you would like to include additional environment variables that are not in the example file, add new sections for them after the provided examples.

    Here are some examples:

    ...
      - op: add
        path: /spec/projectTemplate/deployment/spec/template/spec/containers/0/env/-
        value:
          name: DFESP_QKB_LIC
          value: /mnt/data/sas/data/quality/license
    ...
      - op: add
        path: /spec/projectTemplate/deployment/spec/template/spec/containers/0/env/-
        value:
          name: CUSTOM_ENV_VAR_NUMBER
          value: "1234"
      - op: add
        path: /spec/projectTemplate/deployment/spec/template/spec/containers/0/env/-
        value:
          name: CUSTOM_ENV_VAR_FLAG
          value: "true"
    ...
  4. Add site-config/sas-esp-operator/espconfig/espconfig-properties.yaml and/or site-config/sas-esp-operator/espconfig/espconfig-env-variables.yaml to the transformers block of the base kustomization.yaml file.

    Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-esp-operator/espconfig/espconfig-properties.yaml
    - site-config/sas-esp-operator/espconfig/espconfig-env-variables.yaml
    ...

After the base kustomization.yaml file is modified, deploy the software using the commands that are described in SAS Viya Platform: Deployment Guide.
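
Before running the full deployment, you can optionally confirm that the kustomization still builds cleanly with the new transformer entries. This is a sanity check only, not part of the documented procedure.

# Confirm that the base kustomization.yaml (including the new espconfig transformers) builds without errors.
cd $deploy
kustomize build . > /dev/null && echo "kustomization builds cleanly"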

Change the Storage Size for the SAS Event Stream Processing PersistentVolumeClaim

Overview

SAS Event Stream Processing creates a PersistentVolumeClaim (PVC) with a default storage capacity of 5 GB. Follow these instructions to change that value.

Instructions

  1. Copy the file $deploy/sas-bases/examples/sas-event-stream-processing-studio-app/storage/esp-storage-size-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/sas-event-stream-processing-studio-app/storage.

  2. Follow the instructions in the copied esp-storage-size-transformer.yaml file to change the values in that file as necessary.

  3. Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you copied the file to $deploy/site-config/sas-event-stream-processing-studio-app/storage, you would modify the base kustomization.yaml file like this:

...
transformers:
...
- site-config/sas-event-stream-processing-studio-app/storage/esp-storage-size-transformer.yaml
...

After the base kustomization.yaml file is modified, deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Configure the SAS Event Stream Processing Operator for ESP Server Connectors

Overview

The $deploy/sas-bases/examples/sas-esp-operator/esp-server-connectors-config directory contains files to configure the SAS Event Stream Processing Kubernetes Operator to include SAS Event Stream Processing connectors configuration. For information, see Overview to Connectors.

Examples

The example files provided assume the following:

Installation

  1. Create the $deploy/site-config/esp-server-connectors-config directory. Copy the content from the $deploy/sas-bases/examples/sas-esp-operator/esp-server-connectors-config directory to the $deploy/site-config/esp-server-connectors-config directory.

  2. The $deploy/site-config/esp-server-connectors-config/secret.yaml file contains a Kubernetes secret resource. The secret contains a value for the ESP Server connectors.config file content. The connectors.config value should be updated with SAS Event Stream Processing Server connector configuration parameters. For information, see Setting Configuration Parameters in a Kubernetes Environment.

  3. Make the following changes to the base kustomization.yaml file ($deploy/kustomization.yaml).

    • Add $deploy/site-config/esp-server-connectors-config/secret.yaml to the resources block.
    • Add $deploy/site-config/esp-server-connectors-config/patchtransformer.yaml to the transformers block.

    The references should look like this:

    ...
    resources:
    ...
    - site-config/esp-server-connectors-config/secret.yaml
    ...
    transformers:
    ...
    - site-config/esp-server-connectors-config/patchtransformer.yaml
    ...
  4. After you modify the $deploy/kustomization.yaml file, deploy the software using the commands described in Deploy the Software.

Additional Resources

Configuring an Analytic Store for SAS Event Stream Processing Studio

Overview

To configure SAS Event Stream Processing Studio to use analytic store (ASTORE) files inside the application’s container, a volume mount with a PersistentVolumeClaim (PVC) of sas-microanalytic-score-astores is required in the deployment.

Prerequisites

Before proceeding, ensure that a PVC is defined by the SAS Micro Analytic Service Analytic Store Configuration for the sas-microanalytic-score service.

Consult the $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md file.
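
Before adding the transformer, you can optionally confirm that the required PVC exists and is bound in the target namespace. This is only a quick sketch; the namespace is a placeholder.

# Confirm that the analytic store PVC created for sas-microanalytic-score is present.
kubectl get pvc sas-microanalytic-score-astores -n name-of-namespace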

Installation

In the base kustomization.yaml file in the $deploy directory, add sas-bases/overlays/sas-event-stream-processing-studio-app/astores/astores-transformer.yaml to the transformers block. The reference should look like this:

...
transformers:
...
- sas-bases/overlays/sas-event-stream-processing-studio-app/astores/astores-transformer.yaml
...

After the base kustomization.yaml file is modified, deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Configuring an Analytic Store for SAS Event Stream Manager

Overview

To configure SAS Event Stream Manager to use analytic store (ASTORE) files inside the application’s container, a volume mount with a PersistentVolumeClaim (PVC) of sas-microanalytic-score-astores is required in the deployment.

Prerequisites

Before proceeding, ensure that a PVC is defined by the SAS Micro Analytic Service Analytic Store Configuration for the sas-microanalytic-score service.

Consult the $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md file.

Installation

In the base kustomization.yaml file in the $deploy directory, add sas-bases/overlays/sas-event-stream-manager-app/astores/astores-transformer.yaml to the transformers block. The reference should look like this:

...
transformers:
...
- sas-bases/overlays/sas-event-stream-manager-app/astores/astores-transformer.yaml
...

After the base kustomization.yaml file is modified, deploy the software using the commands described in SAS Viya Platform Deployment Guide.

Preparing and Configuring SAS Expected Credit Loss for Deployment

Prerequisites

When SAS Expected Credit Loss is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Expected Credit Loss solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Expected Credit Loss: Administrator’s Guide.

Installation

  1. Complete steps 1-4 described in the Risk Cirrus Core README.

  2. Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env configuration file. Because SAS Expected Credit Loss uses workflow service tasks, a default service account must be configured for the Risk Cirrus Objects workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable to “Y” and assign the user ID to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.

  3. If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.

    If you have a $deploy/site-config/sas-risk-cirrus-ecl/resources directory, take note of the values in your ecl_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section: - site-config/sas-risk-cirrus-ecl/resources/ecl_transform.yaml.

  4. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-ecl to the $deploy/site-config/sas-risk-cirrus-ecl directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env and sas-risk-cirrus-ecl-secret.env files, not the old ecl_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env and sas-risk-cirrus-ecl-secret.env files, verify that the overlay settings have been applied successfully to the configmap and to the secret. No further actions are required unless you want to change the connection settings to different overrides.

  5. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-ecl directory). Lines with a # at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the # at the beginning of the line and replace the placeholder with the desired value as explained in the following table. If needed, specify your settings as follows:

    Parameter Name Description
    SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO)
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES Replace {{ Y-OR-N }} with Y or N to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:

    - The create_cas_lib step creates the default ECLReporting CAS library that is used for reporting in SAS Expected Credit Loss.
    - The create_db_auth_domain step creates an ECLDBAuth domain for the riskcirrusecl schema and assigns default permissions.
    - The create_db_auth_domain_user step creates an ECLUserDBAuth domain for the riskcirrusecl schema and assigns default group permissions.
    - The import_main_dataloader_files step uploads the Cirrus_ECL_main_loader.xlsx file into the file service under the Products/SAS Expected Credit Loss directory.
    - The import_sample_data_loader_files step uploads the Cirrus_ECL_sample_data_loader.zip file into the file service under the Products/SAS Expected Credit Loss directory.
    - The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.
    - The install_riskengine_curves_project step loads the sample ECL Curves project into SAS Risk Engine.
    - The install_sampledata step loads sample load data into the riskcirrusecl database schema library.
    - The install_scenarios_sampledata step loads the sample scenarios into SAS Risk Factor Manager.
    - The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, Roles, RolePermissions, and Positions. It also loads sample object instances, like Attribution Templates, Configuration Sets, Configuration Tables, Cycles, Data Definitions, Models, Rule Sets and Scripts, as well as the Link Instances, Object Classifications, and Workflows associated with those objects. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
    - The load_workflows step loads and activates the ECL workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
    - The localize_va_reports step imports localized SAS-provided reports created in SAS Visual Analytics.
    - The manage_cas_lib_acl step sets up permissions for the default ECLReporting CAS library. Users in the ECLUsers, ECLAdministrators and SASAdministrators groups have full access to the tables.
    - The transfer_sampledata_files step stores a copy of all sampledata files loaded into the environment into the file service under the Products/SAS Expected Credit Loss directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
    - The update_db_sampledata_scripts_pg step stores a copy of the install_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the sample data.

    WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:

    - The check_services step checks if the ECL dependent services are up and running.
    - The check_solution_existence step checks whether the ECL solution already exists.
    - The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
    - The create_solution_repo step creates the ECL repository.
    - The check_solution_running step checks to ensure that the ECL solution is running.
    - The import_solution step imports the solution in the ECL repository.
    - The load_app_registry step loads the ECL solution into the SAS application registry.
    - The load_auth_rules step assigns authorization rules for the ECL solution.
    - The load_group_memberships step assigns members to various ECL groups.
    - The load_identities step loads the ECL identities.
    - The load_main_dataloader_objects step loads the Cirrus_ECL_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
    - The setup_code_lib_repo step creates the ECL code library directory.
    - The share_ia_script_with_solution step shares the Risk Cirrus Core individual assessment script with the ECL solution.
    - The share_objects_with_solution step shares the Risk Cirrus Core code library with the ECL solution.
    - The upload_notifications step loads workflow notifications into SAS Workflow Manager.
    SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment.

    For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the “transfer_sampledata” and the “load_sample_data” steps will be skipped during deployment. After the deployment finishes, you decide that you want to include the SAS-provided sample data. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “transfer_sampledata,load_sample_data” and then delete the sas-risk-cirrus-ecl pod to force a redeployment (see the sketch after the installation steps). Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.

    WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.
    SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data. (Default is <Empty list>).
    Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in this variable will be executed.
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME.

    The following is an example of a configuration.env file that you could use for SAS Expected Credit Loss. This example uses the default values provided by SAS except for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME, which should be set to the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME=ecluser
  6. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-ecl/configuration.env to the configMapGenerator block. Here is an example:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-ecl-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-ecl/configuration.env
    ...

    Save the kustomization.yaml file.

  7. Modify the sas-risk-cirrus-ecl-secret.env file (in the $deploy/site-config/sas-risk-cirrus-ecl directory) and specify your settings as follows:

    For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret. If the directory already exists and already has the expected .env file, verify that the overlay settings have been applied successfully to the secret. No further actions are required unless you want to change the secret.

    The following is an example of a secret.env file that you could use for SAS Expected Credit Loss.

    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=eclsecret

    Save the sas-risk-cirrus-ecl-secret.env file.

  8. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-ecl/sas-risk-cirrus-ecl-secret.env to the secretGenerator block. Here is an example:

    secretGenerator:
    ...
    - name: sas-risk-cirrus-ecl-secret
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-ecl/sas-risk-cirrus-ecl-secret.env
    ...

    Save the kustomization.yaml file.

  9. When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

    Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
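
As a hedged illustration of the re-run scenario described for the SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS parameter, the following sketch deletes the sas-risk-cirrus-ecl pod so that the deployer re-runs only the listed steps. It assumes that the updated configuration.env has already been rebuilt and applied; the namespace is a placeholder, and the pod name is looked up rather than hard-coded.

# Delete the sas-risk-cirrus-ecl pod to trigger the deployer to run the steps listed in
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.
kubectl get pods -n name-of-namespace -o name \
  | grep sas-risk-cirrus-ecl \
  | xargs kubectl delete -n name-of-namespace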

Verify That Overlay Settings Have Been Applied Successfully to the ConfigMap

Before verifying the settings for SAS Expected Credit Loss solution, complete step 9 specified in the Risk Cirrus Core README to verify for Risk Cirrus Core.

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-ecl-parameters -n <name-of-namespace>
  2. Verify that the output contains the configuration settings that you specified.

Verify That Overlay Settings Have Been Applied Successfully to the Secret

To verify that your overrides were applied successfully to the secret, run the following commands:

  1. Find the name of the secret on the namespace.

    kubectl describe secret sas-risk-cirrus-ecl-secret -n <name-of-namespace>
  2. Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.

  3. Verify that the output contains the desired database schema user secret that you configured.

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
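
    Because secret values are Base64-encoded, you can also decode the configured value directly. This is an optional sketch; the namespace and secret name are placeholders from the previous steps, and it assumes that the data key matches the variable name from the .env file.

    # Decode the stored database schema user secret so that it can be compared with the value you configured.
    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 --decode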

Additional Resources

SAS Image Staging Configuration Options

Overview

SAS Image Staging ensures images are pulled to and staged properly on respective nodes in an effort to decrease start-up times of various SAS Viya platform components. This README describes how to customize your SAS Viya platform deployment for tasks related to SAS Image Staging.

SAS provides the ability to modify the behavior of the SAS Image Staging application to fit the needs of specific environments.

This README describes two areas that can be configured, the mode of operation and the check interval.

SAS Image Staging Requirements

SAS Image Staging requires that Workload Node Placement (WNP) be used. Specifically, at least one node in the Kubernetes cluster must be labeled “workload.sas.com/class=compute” in order for SAS Image Staging to function properly.

If WNP is not used, the SAS Image Staging application will not pre-stage images. Timeouts can occur when images are pulled into the cluster for the first time or when an image has been removed from the image cache and must be pulled again.

For more information about WNP, see Plan the Workload Placement.
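
For reference, the compute label can be applied to a node with a single command. This is a hedged sketch; the node name is a placeholder, and your cluster provisioning tooling may already apply this label as part of workload node placement.

# Label a node for the compute workload class so that SAS Image Staging can pre-stage images on it.
kubectl label node name-of-node workload.sas.com/class=compute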

Modes of Operation

The default behavior of SAS Image Staging is to start pods on nodes via a daemonset at an interval to ensure that relevant images have been pulled to hosts. While this default behavior accomplishes the goal of pulling images to nodes and decreasing start-up times, some users may want more intelligent and specific control with less churn in Kubernetes.

In order for the non-default option described in this README to function, the SAS Image Staging application must have the ability to list nodes. The nodes resource is cluster-scoped and resides outside of the SAS Viya platform namespace. Requirements may not allow for this sort of access, and default namespace-scoped resources do not provide the view needed for this option to work.

The SAS Image Staging application uses the list of nodes to determine which images are currently pulled to the node and their respective version. If an image is missing or a different version exists on the node, the SAS Image Staging application will target that node for a pull of the image instead of starting daemonsets to pull images.

Regardless of the mode of operation, it is normal to see a number of pods that contain the word “prepull” in their name. The names of these pods and the frequency with which they appear depend on the mode of operation that is used. These pods are transient and are used to pull images to their respective nodes.
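
To see these transient pods in your own deployment, you can filter the pod list by name. The namespace is a placeholder.

# List the transient pre-pull pods; their names contain the word "prepull".
kubectl get pods -n name-of-namespace | grep prepull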

Advantages and Disadvantages of the Two Options

Daemonset (Default Behavior)

Advantages:

Disadvantages:

Node List (Optional Behavior)

Advantages:

Disadvantages:

Installation

Enable the Node List Option

$deploy/sas-bases/examples/sas-prepull contains an example file named add-prepull-cr-crb.yaml. This example provides the ClusterRole and ClusterRoleBinding resources that grant the namespaced sas-prepull service account list access to the nodes resource.

To enable the Node List Option:

  1. Copy $deploy/sas-bases/examples/sas-prepull/add-prepull-cr-crb.yaml to $deploy/site-config/sas-prepull/add-prepull-cr-crb.yaml.

  2. Modify add-prepull-cr-crb.yaml by replacing all instances of ‘{{ NAMESPACE }}’ with the namespace of the SAS Viya platform deployment in which you want node list access granted to the sas-prepull service account (see the sketch after these steps).

  3. Add site-config/sas-prepull/add-prepull-cr-crb.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    resources:
    ...
    - site-config/sas-prepull/add-prepull-cr-crb.yaml
    ...
  4. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
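
As referenced in step 2, the following is a minimal sketch of replacing the namespace placeholder and, after deployment, confirming that the access was granted. It assumes GNU sed; the namespace is a placeholder, and the sas-prepull service account name is taken from this README.

# Replace the namespace placeholder in the copied file (step 2).
sed -i 's/{{ NAMESPACE }}/name-of-namespace/g' $deploy/site-config/sas-prepull/add-prepull-cr-crb.yaml

# After deploying, confirm that the sas-prepull service account can list nodes.
kubectl auth can-i list nodes --as=system:serviceaccount:name-of-namespace:sas-prepull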

Modify the Resource Limits

You should increase the resource limit of the SAS Image Staging deployment if the node list option is used and you plan to use autoscaling in your cluster. The default values for CPU and Memory limits are 1 and 1Gi respectively.

The $deploy/sas-bases/examples/sas-prepull directory contains an example file named change-resource-limits.yaml.

This example provides a patch that will change the values for resources limits in the SAS Image Staging application pod.

Steps to modify:

  1. Copy $deploy/sas-bases/examples/sas-prepull/change-resource-limits.yaml to $deploy/site-config/sas-prepull/change-resource-limits.yaml.

  2. Modify change-resource-limits.yaml by replacing the resource limit values to match your needs.

  3. Add site-config/sas-prepull/change-resource-limits.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-prepull/change-resource-limits.yaml
    ...

Disable the Node List Option

  1. Remove site-config/sas-prepull/add-prepull-cr-crb.yaml from the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). This is to ensure the option does not get applied in future Kustomize builds.

  2. If there are no other SAS Viya platform deployments in other namespaces in the cluster, execute kubectl delete -f $deploy/site-config/sas-prepull/add-prepull-cr-crb.yaml to remove the ClusterRole and ClusterRoleBinding from the cluster. If there are other SAS Viya platform deployments in other namespaces in the cluster, execute kubectl delete clusterrolebinding sas-prepull-v2-{{ NAMESPACE }} -n {{ NAMESPACE }}, where {{ NAMESPACE }} is the namespace of the deployment in which you want the ClusterRoleBinding removed.

Modify the Check Interval

The check interval is the time the SAS Image Staging application pauses between checks for newer versions of images. By default, the check interval in Daemonset mode is 1 hour, and the check interval for Node List mode is 30 seconds. These defaults are reasonable given their operation and impact on an environment. However, you may want to adjust the intervals to further reduce churn in the environment. This section of the README describes how to make those interval adjustments.

The interval is configured via two options located in the sas-prepull-parameters configmap. Those options are called SAS_PREPULL_DAEMON_INT and SAS_PREPULL_CRCRB_INT and control the intervals of Daemon Mode and Node List Mode respectively.

The $deploy/sas-bases/examples/sas-prepull directory contains an example file named change-check-interval.yaml. This example provides a patch that will change the values for the intervals in the configmap referenced by the SAS Image Staging application.

Steps to modify:

  1. Copy $deploy/sas-bases/examples/sas-prepull/change-check-interval.yaml to $deploy/site-config/sas-prepull/change-check-interval.yaml.

  2. Modify change-check-interval.yaml by replacing all instances of ‘{{ DOUBLE-QUOTED-VALUE-IN-SECONDS }}’ with the value in seconds for each respective mode. Note that the value must be wrapped in double quotes in order for Kustomize to appropriately reference the value.

  3. Add site-config/sas-prepull/change-check-interval.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

    Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-prepull/change-check-interval.yaml
    ...
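
After the transformer has been applied, you can read the interval values back from the ConfigMap to confirm them. This is a hedged sketch; the namespace is a placeholder, and if the ConfigMap name carries a generated suffix in your deployment, adjust the name accordingly.

# Read back the configured check intervals (values are in seconds).
kubectl get configmap sas-prepull-parameters -n name-of-namespace \
  -o jsonpath='{.data.SAS_PREPULL_DAEMON_INT}{"\n"}{.data.SAS_PREPULL_CRCRB_INT}{"\n"}'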

Using Flattened Image Paths with Red Hat OpenShift

If you are deploying on Red Hat OpenShift and are using a mirror registry, SAS Image Staging requires a modification to work properly. The change-relpath.yaml file in the $deploy/sas-bases/overlays/sas-prepull directory contains a patch for the relative path of images that are pre-staged by SAS Image Staging.

To use the patch, add sas-bases/overlays/sas-prepull/change-relpath.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Make sure the addition is above the line sas-bases/overlays/required/transformers.yaml.

Here is an example:

...
transformers:
...
- sas-bases/overlays/sas-prepull/change-relpath.yaml
- sas-bases/overlays/required/transformers.yaml
...

Preparing and Configuring SAS Insurance Capital Management for Deployment

Modify the Configuration for SAS Insurance Capital Management

Overview of the Configuration for SAS Insurance Capital Management

SAS Insurance Capital Management provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and configuring your kustomization.yaml file to apply these overrides.

For a list of variables that can be overridden and their default values, see SAS Insurance Capital Management Configuration Parameters.

For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters.

SAS Insurance Capital Management Configuration Parameters and Secrets

The following table contains a list of parameters that can be specified in the SAS Insurance Capital Management .env configuration file. These parameters can all be found in the template configuration file (configuration.env) but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.

Parameter Name Description
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG".
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:
- The update_db_sampledata_scripts_pg_ics step prepares the ICS sample data scripts into a temporary folder.
- The create_db_auth_domain_ics step creates an authentication domain to allow the deployer script to add the ICS sample data to the library.
- The create_db_auth_domain_user_ics step adds the install user to the authentication domain.
- The update_db_sampledata_scripts_pg_s2 step prepares the Solvency II (SII) sample data scripts into a temporary folder.
- The create_db_auth_domain_s2 step creates an authentication domain to allow the deployer script to add the SII sample data to the library.
- The load_sample_objects_ics step uploads the sample data resources for ICS to the Cirrus web interface.
- The import_sample_dataloader_files_ics step imports the uploaded sample data resources for ICS into Cirrus.
- The load_sample_objects_s2 step uploads the sample data resources for SII to the Cirrus web interface.
- The import_sample_dataloader_files_s2 step imports the uploaded sample data resources for SII into Cirrus.
- The install_sampledata_ics step adds the sample data for ICS to the database.
- The install_sampledata_s2 step adds the sample data for SII to the database.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database.
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Specifies whether you want to skip specific steps during the deployment of SAS Insurance Capital Management.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped.
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Specifies whether you want to run specific steps during the deployment of SAS Insurance Capital Management.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed.

The following table contains a parameter that can be specified in the SAS Insurance Capital Management .env secret file.

Parameter Name Description
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET Specifies the secret to be used for the user specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME above.

Apply Overrides to the Configuration Parameters

If you want to override any of the SAS Insurance Capital Management configuration parameters rather than using the default values, complete these steps:

  1. If you have a $deploy/site-config/sas-risk-cirrus-icm directory, take note of the values in your icm_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents.
    Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section:

    - site-config/sas-risk-cirrus-icm/resources/icm_transform.yaml

    This step should only be necessary if you are upgrading from a cadence prior to 2025.02.

  2. Copy the configuration.env from $deploy/sas-bases/examples/sas-risk-cirrus-icm to the $deploy/site-config/sas-risk-cirrus-icm directory. Create the destination directory if one does not exist. If the directory already exists and already has the expected .env file, verify that the overrides have been correctly applied. No further actions are required, unless you want to apply different overrides.

  3. In the base kustomization.yaml file, add the sas-risk-cirrus-icm-parameters ConfigMap to the configMapGenerator block. If that block does not exist, create it. Here is an example of what the inserted code block should look like in the kustomization.yaml file:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-icm-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-icm/configuration.env
    ...
  4. Save the kustomization.yaml file.

  5. Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-icm directory) and specify your settings as follows:

    a. For the parameter SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-or-DEBUG }} with the logging level desired.

    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-or-N }} with "Y" or "N". This value determines if the deployment steps that deploy sample artifacts will be executed. If the value is "N", the deployment process does not execute the install steps that deploy sample artifacts.

    c. For the parameter SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip.

    d. For the parameter SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, you should leave this variable blank.
    Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully in which case no steps will be executed. If this step is non-empty, only the steps listed in this variable will be executed.

  6. Save the configuration.env file.

    The following is an example of a .env file that you could use for SAS Insurance Capital Management. This example will use all of the default values provided by SAS except for the sample artifacts deployment.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-or-DEBUG }}
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
  7. Modify the sas-risk-cirrus-icm-secret.env file (in the $deploy/site-config/sas-risk-cirrus-icm directory) and specify your settings as follows:

    a. For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret.

  8. Save the sas-risk-cirrus-icm-secret.env file.

    The following is an example of a .env file that you could use for SAS Insurance Capital Management.

    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=EXAMPLESECRET
  9. In the base kustomization.yaml file, in the $deploy directory, add sas-risk-cirrus-icm-secret.env to the secretGenerator block. If that block does not exist, create it. Here is an example of what the inserted code block should look like in the kustomization.yaml file:

    secretGenerator:
    ...
    - name: sas-risk-cirrus-icm-secret
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-icm/sas-risk-cirrus-icm-secret.env
    ...
  10. Save the kustomization.yaml file.

  11. When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings.

Verify That Configuration Overrides Have Been Applied Successfully to the ConfigMap

Note: If you configured overrides during a past deployment, your overrides should be available in the SAS Insurance Capital Management ConfigMap. To verify that your overrides were applied successfully to the ConfigMap, run the following command:

kubectl describe configmap sas-risk-cirrus-icm-parameters -n <name-of-namespace>

Verify that the output contains your configured overrides.

Verify That Overrides Have Been Applied Successfully to the Secret

To verify that your overrides were applied successfully to the secret, run the following commands:

  1. Find the name of the secret on the namespace.

    kubectl describe secret sas-risk-cirrus-icm-secret -n <name-of-namespace>
  2. Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.

  3. Verify the database schema user secret.

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'

Verify that the output contains your configured override. Note that this value will be BASE64 encoded.

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software.

Review the Administrator Guide for suggested post-deployment steps

Once the deployment has been completed, SAS recommends reviewing the Administrator Guide for necessary post-deployment instructions (for example, installing Python packages for report generation), suggested site-specific considerations, and performance tuning.

Preparing and Configuring SAS Insurance Contract Valuation Foundation for Deployment

Modify the Configuration Files for SAS Insurance Contract Valuation Foundation

Overview of Configuration for SAS Insurance Contract Valuation Foundation

SAS Insurance Contract Valuation Foundation provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and configuring your kustomization.yaml file to apply these overrides.

For a list of variables that can be overridden and their default values, see SAS Insurance Contract Valuation Foundation Configuration Parameters.

For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters.

SAS Insurance Contract Valuation Foundation File Configuration Parameters and Secrets

The following table contains a list of parameters that can be specified in the SAS Insurance Contract Valuation Foundation .env configuration file. These parameters can all be found in the template configuration file (configuration.env) but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.

Parameter Name Description
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG".
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:
- The update_db_sampledata_scripts_pg_ifrs17 step prepares the IFRS17 sample data scripts into a temporary folder.
- The create_db_auth_domain_ifrs17 step creates an authentication domain to allow the deployer script to add the IFRS17 sample data to the library.
- The create_db_auth_domain_user_ifrs17 step adds the install user to the authentication domain.
- The load_sample_objects_common step uploads the sample data resources for IFRS17 to the Cirrus web interface.
- The import_sample_dataloader_files_common step imports the uploaded sample data resources for IFRS17 into Cirrus.
- The install_sampledata_ifrs17 step adds the sample data for IFRS17 to the database.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database.
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Specifies whether you want to skip specific steps during the deployment of SAS Insurance Contract Valuation Foundation.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped.
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Specifies whether you want to run specific steps during the deployment of SAS Insurance Contract Valuation Foundation.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed.

The following table contains a parameter that can be specified in the SAS Insurance Contract Valuation Foundation .env secret file.

Parameter Name Description
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET Specifies the secret to be used for the user specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME above.

Apply Overrides to the Configuration Parameters

If you want to override any of the SAS Insurance Contract Valuation Foundation configuration parameters rather than using the default values, complete these steps:

  1. If you have a $deploy/site-config/sas-risk-cirrus-icv/resources directory, delete it and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section:

    - site-config/sas-risk-cirrus-icv/resources/icv_transform.yaml

    This step should only be necessary if you are upgrading from a cadence prior to 2025.02.

  2. Copy the configuration.env from $deploy/sas-bases/examples/sas-risk-cirrus-icv to the $deploy/site-config/sas-risk-cirrus-icv directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env and sas-risk-cirrus-icv-secret.env files, not the old icv_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env and sas-risk-cirrus-icv-secret.env files, verify that overlay settings have been applied successfully to the configmap and verify that overlay settings have been applied successfully to the secret. No further actions are required unless you want to change the connection settings to different overrides.

  3. Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-icv directory) and specify your settings as follows:

    a. For the parameter SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-or-DEBUG }} with the logging level desired.

    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-or-N }} with "Y" or "N". This value determines if the deployment steps that deploy sample artifacts will be executed. If the value is "N", the deployment process does not execute the install steps that deploy sample artifacts.

    c. For the parameter SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip.

    d. For the parameter SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, you should leave this variable blank.
    Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully in which case no steps will be executed. If this step is non-empty, only the steps listed in this variable will be executed.

  4. Save the configuration.env file.

    The following is an example of a .env file that you could use for SAS Insurance Contract Valuation Foundation. This example will use all of the default values provided by SAS except for the sample artifacts deployment.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-or-DEBUG }}
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
  5. In the base kustomization.yaml file, add the sas-risk-cirrus-icv-parameters ConfigMap to the configMapGenerator block. If that block does not exist, create it. Here is an example of what the inserted code block should look like in the kustomization.yaml file:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-icv-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-icv/configuration.env
    ...

    Save the kustomization.yaml file.

  6. Modify the sas-risk-cirrus-icv-secret.env file (in the $deploy/site-config/sas-risk-cirrus-icv directory) and specify your settings as follows:

    a. For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret.

  7. Save the sas-risk-cirrus-icv-secret.env file.

    The following is an example of a .env file that you could use for SAS Insurance Contract Valuation Foundation.

    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=EXAMPLESECRET
  8. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-icv/sas-risk-cirrus-icv-secret.env to the secretGenerator block. Here is an example:

    secretGenerator:
    ...
    - name: sas-risk-cirrus-icv-secret
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-icv/sas-risk-cirrus-icv-secret.env
    ...

    Save the kustomization.yaml file.

Verify That Overlay Settings Have Been Applied Successfully to the ConfigMap

Note: If you configured overrides during a past deployment, your overrides should be available in the SAS Insurance Contract Valuation Foundation ConfigMap. To verify that your overrides were applied successfully to the ConfigMap, run the following command:

kubectl describe configmap sas-risk-cirrus-icv-parameters -n <name-of-namespace>

Verify that the output contains your configured overrides.

Verify That Overlay Settings Have Been Applied Successfully to the Secret

To verify that your overrides were applied successfully to the secret, run the following commands:

  1. Find the name of the secret on the namespace.

    kubectl describe secret sas-risk-cirrus-icv-secret -n <name-of-namespace>
  2. Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.

  3. Verify the database schema user secret.

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'

Verify that the output contains your configured override. Note that this value will be BASE64 encoded.

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

  • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then run kustomize build to create and apply the manifests.
  • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Review the Administrator Guide for Suggested Post-Deployment Steps

When the deployment has been completed, SAS recommends that you review the Administrator Guide for suggested site-specific considerations, configurations, and performance tuning.

Preparing and Configuring SAS Integrated Regulatory Reporting for Deployment

Prerequisites

When SAS Integrated Regulatory Reporting is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Integrated Regulatory Reporting solution successfully, you must deploy the Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-core/resources/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Integrated Regulatory Reporting: Administrator’s Guide.

Modify the Configuration Files for SAS Integrated Regulatory Reporting

Overview of Configuration for SAS Integrated Regulatory Reporting

SAS Integrated Regulatory Reporting provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and configuring your kustomization.yaml file to apply these overrides.

For a list of variables that can be overridden and their default values, see SAS Integrated Regulatory Reporting Configuration Parameters and Secrets.

For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters and Secrets.

SAS Integrated Regulatory Reporting Configuration Parameters and Secrets

The following table contains a list of parameters that can be specified in the SAS Integrated Regulatory Reporting .env configuration file. These parameters can all be found in the template configuration file (configuration.env) but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.

Parameter Name Description
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG".
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:

- The create_sampledata_folders step creates all sample data folders in the file service under the Products/SAS Integrated Regulatory Reporting directory.
- The transfer_sampledata_files step stores a copy of all sample data files in the file service under the Products/SAS Integrated Regulatory Reporting directory. This directory will include DDLs, reports, sample data, and scripts used to load the sample data.
- The import_sample_dataloader_files step stores a copy of the Cirrus_EBA_sample_data_loader.xlsx file in the file service under the Products/SAS Integrated Regulatory Reporting directory. Administrators can then download the file from the Data Load page in SAS Integrated Regulatory Reporting and use it as a template to load and unload data.
- The install_sampledata step loads the sample data into an EBA library.
- The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, and Object Classifications.
- The update_db_sampledata_scripts_pg step prepares the EBA sample data scripts into a temporary folder.
- The create_db_auth_domain_user_tax_eba step adds the install user to the authentication domain.
- The create_db_auth_domain_stg step creates an authentication domain to allow the deployer script to add the sample data of staging tables to the library.

WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Specifies whether you want to skip specific steps during the deployment of SAS Integrated Regulatory Reporting.
Note: Typically, you should leave this value blank (""). The value "" is used if the variable is not overridden by your .env file, which means that no deployment steps are explicitly skipped.
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Specifies whether you want to run specific steps during the deployment of SAS Integrated Regulatory Reporting.
Note: Typically, you should leave this value blank (""). The value "" is used if the variable is not overridden by your .env file, which means that all deployment steps are executed.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database.

The following table contains a parameter that can be specified in the SAS Integrated Regulatory Reporting .env secret file.

Parameter Name Description
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET Specifies the secret to be used for the user specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME above. The SAS Integrated Regulatory Reporting .env secret file (sas-risk-cirrus-eba-secret.env) contains this parameter, but it is commented out, so its value is not applied during deployment. To override the SAS-provided default, remove the ‘#’ at the beginning of the line to uncomment it.

Apply Overrides to the Configuration Parameters and Secrets

If you want to override any of the SAS Integrated Regulatory Reporting configuration parameters rather than using the default values, complete these steps:

  1. If you have a $deploy/site-config/sas-risk-cirrus-eba directory, delete it and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section:

    - site-config/sas-risk-cirrus-eba/resources/eba_transform.yaml

    This step should only be necessary if you are upgrading from a cadence prior to 2025.02.

  2. Copy the configuration.env from $deploy/sas-bases/examples/sas-risk-cirrus-eba to the $deploy/site-config/sas-risk-cirrus-eba directory. Create the destination directory if one does not exist. If the directory already exists and already has the expected .env file, verify that the overrides have been correctly applied. No further actions are required, unless you want to apply different overrides.

  3. In the base kustomization.yaml file, add the sas-risk-cirrus-eba-parameters ConfigMap to the configMapGenerator block. If that block does not exist, create it. Here is an example of what the inserted code block should look like in the kustomization.yaml file:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-eba-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-eba/configuration.env
    ...
  4. Save the kustomization.yaml file.

  5. Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-eba directory) and specify your settings as follows:

    a. For the parameter SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-or-DEBUG }} with the logging level desired.

    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-or-N }} with "Y" or "N". This value determines if the deployment steps that deploy sample artifacts will be executed. If the value is "N", the deployment process does not execute the install steps that deploy sample artifacts.

    c. For the parameter SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip.

    d. For the parameter SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, you should leave this variable blank.
    Note: If this variable is empty, all steps will be executed unless the solution has already been deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in it will be executed.

    e. For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. The owner of the SharedServices database is used by default if no value is specified.

  6. Save the configuration.env file.

    The following is an example of a configuration.env file that you could use for SAS Integrated Regulatory Reporting. This example will use all of the default values provided by SAS.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME={{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }}
  7. Modify the sas-risk-cirrus-eba-secret.env file (in the $deploy/site-config/sas-risk-cirrus-eba directory). For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the input data schema secret for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME.

  8. Save the sas-risk-cirrus-eba-secret.env file.

    The following is an example of a secret.env file that you could use for SAS Integrated Regulatory Reporting. This example will use the default value provided by SAS.

    # SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET={{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}

Verify That Configuration Overrides Have Been Applied Successfully

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-eba-parameters -n <name-of-namespace>
  2. Run the following command to verify whether the overlay has been applied to the secret:

    kubectl get secret sas-risk-cirrus-eba-secret -n <name-of-namespace>

    Verify that the output contains your configured overrides.
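
    Because kubectl get secret returns base64-encoded values, you can decode a specific key to confirm your override. The following is a sketch that assumes the secret key name matches the parameter name in your .env file:

    kubectl get secret sas-risk-cirrus-eba-secret -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d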

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on which deployment method is being used. For more information, see Deploy the Software.

Configuration Settings for SAS Launcher Service

Overview

This README file describes the settings available for deploying SAS Launcher Service. The example files described in this README file are located at ‘/$deploy/sas-bases/examples/sas-launcher/configure’.

Installation

Based on the following descriptions of available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.

Process Limits

Example files are provided that contain suggested process limits based on your deployment size. There is a file provided for each of the two types of users, regular users and super users.

Regular users (non-super users) have the following suggested defaults according to your deployment size:

- 10 (small)
- 25 (medium)
- 50 (large)

Super users have the following suggested defaults according to your deployment size:

- 15 (small)
- 35 (medium)
- 65 (large)

In the example files, uncomment the value you wish to keep, and comment out the rest. After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

Here is an example using the transformer for regular users:

transformers:
...
- site-config/sas-launcher/configure/launcher-user-process-limit.yaml

Configure Home Directories

Use NFS Server To Mount Home Directory

The launcher-nfs-mount.yaml file allows you to change the location of the NFS server hosting the user’s home directories. The path is determined by the Identities service.

  1. Create the location site-config/sas-launcher/configure/.

  2. Copy the sas-bases/examples/sas-launcher/configure/launcher-nfs-mount.yaml file to the site-config/sas-launcher/configure/ location.

  3. In the file, replace {{ NFS-SERVER-LOCATION }} with the location of the NFS server. Here is an example:

    patch: |-
      - op: add
        path: /template/metadata/annotations/launcher.sas.com~1nfs-server
        value: myserver.nfs.com
  4. After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:

    transformers:
    ...
    - site-config/sas-launcher/configure/launcher-nfs-mount.yaml

    Note: If you are performing the tasks in this README before the initial deployment of your SAS Viya software, you should perform the next step after the deployment is completed. If you are updating an existing deployment, you should perform the next step now.

  5. In SAS Environment Manager, set the Identities identifier.homeDirectoryPrefix to the parent path to the home directory location on the NFS server.

Use Kubernetes Volumes for User Home Directories

The launcher-user-homedirectory-volume.yaml allows you to specify the runtime storage location of the user’s home directory. The path is determined by the identities service and is mounted using the specified {{ VOLUME-STORAGE-CLASS }}.

Note: Using this feature overrides changes made for the Use NFS Server To Mount Home Directory feature.

  1. Create the location site-config/sas-launcher/configure/.

  2. Copy the sas-bases/examples/sas-launcher/configure/launcher-user-homedirectory-volume.yaml file to the site-config/sas-launcher/configure/ location.

  3. In the file, replace {{ VOLUME-STORAGE-CLASS }} with the volume storage class of your choice. Here is an example:

    patch: |-
      - op: add
        path: /template/spec/volumes/-
        value:
          name: sas-launcher-userhome
          persistentVolumeClaim:
            claimName: home-rwx-claim
  4. After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:

    transformers:
    ...
    - site-config/sas-launcher/configure/launcher-user-homedirectory-volume.yaml

    Note: If you are performing the tasks in this README before the initial deployment of your SAS Viya software, you should perform the next step after the deployment is completed. If you are updating an existing deployment, you should perform the next step now.

  5. In SAS Environment Manager, set the Identities identifier.homeDirectoryPrefix to the parent path to mount the home directory location in the pod.

Locale and Encoding Defaults

The launcher-locale-encoding-defaults.yaml file allows you to modify the SAS LOCALE and SAS ENCODING defaults. The defaults are stored in a Kubernetes ConfigMap called sas-launcher-init-nls-config, which the Launcher service uses to determine which default values need to be set. The LOCALE and ENCODING defaults specified here will affect all consumers of SAS Launcher (SAS Compute Server, SAS/CONNECT, and SAS Batch Server) unless overridden (see below). To update the defaults, replace {{ LOCALE-DEFAULT }} and {{ ENCODING-DEFAULT }}. Here is an example:

patch: |-
  - op: replace
    path: /data/SAS_LAUNCHER_INIT_LOCALE_DEFAULT
    value: en_US
  - op: replace
    path: /data/SAS_LAUNCHER_INIT_ENCODING_DEFAULT
    value: utf8

Note: For a list of the supported values for LOCALE and ENCODING, see LOCALE, ENCODING, and LANG Value Mapping Table.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:

transformers:
...
- site-config/sas-launcher/configure/launcher-locale-encoding-defaults.yaml

The defaults from this ConfigMap can be overridden on individual launcher contexts. For more information on overriding specific launcher contexts, see Change Default SAS Locale and SAS Encoding.

The defaults from this ConfigMap are also overridden by effective LOCALE and ENCODING values derived from an export LANG=langValue statement that is present in a startup_commands configuration instance of sas.compute.server, sas.connect.server, or sas.batch.server. For more information on setting or removing these statements, see Edit Server Configuration Instances.
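
For example, a startup_commands configuration instance that contains a line such as the following takes precedence over the ConfigMap defaults (the locale shown is illustrative only):

export LANG=fr_FR.UTF-8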

Note: When following links to SAS documentation, use the version number selector towards the left side of the header to select your currently deployed release version.

Requests and Limits for CPU

The default values and maximum values for CPU requests and CPU limits can be specified in a Launcher service pod template. The launcher-cpu-requests-limits.yaml allows you to change these default and maximum values for the CPU resource. To update the defaults, replace the {{ DEFAULT-CPU-REQUEST }}, {{ MAX-CPU-REQUEST }}, {{ DEFAULT-CPU-LIMIT }}, and {{ MAX-CPU-LIMIT }} variables with the value you want to use. Here is an example:

patch: |-
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-cpu-request
    value: 50m
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-cpu-request
    value: 100m
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-cpu-limit
    value: "2"
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-cpu-limit
    value: "2"

Note: For details on the value syntax used above, see Resource units in Kubernetes

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:

transformers:
...
- site-config/sas-launcher/configure/launcher-cpu-requests-limits.yaml

Note: The current example PatchTransformer targets all PodTemplates used by sas-launcher. If you wish to target only one PodTemplate, update the PatchTransformer to target a specific PodTemplate name.
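
If you do narrow the scope, a kustomize PatchTransformer can name a single PodTemplate in its target block. The following is a sketch only; the metadata name and the PodTemplate name sas-launcher-job-config are placeholders, not names taken from your deployment:

```yaml
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: launcher-cpu-requests-limits-single   # placeholder name
patch: |-
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-cpu-request
    value: 50m
target:
  kind: PodTemplate
  name: sas-launcher-job-config               # hypothetical PodTemplate name
```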

Requests and Limits for Memory

The default values and maximum values for memory requests and memory limits can be specified in a Launcher service pod template. The launcher-memory-requests-limits.yaml allows you to change these default and maximum values for the memory resource. To update the defaults, replace the {{ DEFAULT-MEMORY-REQUEST }}, {{ MAX-MEMORY-REQUEST }}, {{ DEFAULT-MEMORY-LIMIT }}, and {{ MAX-MEMORY-LIMIT }} variables with the value you want to use. Here is an example:

patch: |-
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-memory-request
    value: 300M
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-memory-request
    value: 2Gi
  - op: add
    path: /metadata/annotations/launcher.sas.com~1default-memory-limit
    value: 500M
  - op: add
    path: /metadata/annotations/launcher.sas.com~1max-memory-limit
    value: 2Gi

Note: For details on the value syntax used above, see Resource units in Kubernetes

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:

transformers:
...
- site-config/sas-launcher/configure/launcher-memory-requests-limits.yaml

Note: The current example PatchTransformer targets all PodTemplates used by sas-launcher. If you wish to target only one PodTemplate, update the PatchTransformer to target a specific PodTemplate name.

Configuring SAS Launcher Service to Disable the Resource Exhaustion Protection

This README describes the steps necessary to disable SAS Launcher Resource Exhaustion protection in your SAS Viya platform deployment. Disabling this feature removes the limit on the number of processes that users can launch through the SAS Launcher API.

Installation

  1. To disable SAS Launcher Resource Exhaustion protection, add sas-bases/overlays/sas-launcher/launcher-disable-user-process-limits.yaml to the transformers block of the base kustomization.yaml file in the $deploy directory. Here is an example:

    ```yaml
    transformers:
    ...
    - sas-bases/overlays/sas-launcher/launcher-disable-user-process-limits.yaml
    ```
    
  2. When the reference is added to the base kustomization.yaml, use the deployment commands described in SAS Viya Platform: Deployment Guide to apply the new settings.

SAS Law Enforcement Intelligence - Tripwires ESP

Overview

Tripwires ESP provides real-time notifications for the Investigation Content Pack Tripwires functionality.

The example files provided require SAS Event Stream Processing to be licensed in addition to SAS Law Enforcement Intelligence.

Summary of Deployment

Tripwires ESP comprises an ESP project XML model and an ESP server instance.

Instructions

Copy the Examples

The directory $deploy/sas-bases/examples/sas-tripwires-esp contains the example project and server definition.

  1. Copy $deploy/sas-bases/examples/sas-tripwires-esp to $deploy/site-config/sas-tripwires-esp.

  2. Add site-config/sas-tripwires-esp to the resources block of the base kustomization.yaml ($deploy/kustomization.yaml) file.

    Here is an example:

    resources:
      - site-config/sas-tripwires-esp

Configuration

  1. The $deploy/site-config/sas-tripwires-esp/tripwires.env file is used to configure the ESP server instance. The variables in the file should be updated to reflect the requirements of your deployment.

    Here is an example:

    # The IP or hostname of the smtp server used to send notifications
    SMTPHOST=mailhost
    # The tripwire entity configured in SAS Visual Investigator
    ENTITY=tripwire
    # The interval at which to refresh information from PostgreSQL
    PGINTERVAL=60
    # The duration to throttle multiple events into a single notification
    THROTTLE=10
  2. If the deployment does not use internal TLS, edit $deploy/site-config/sas-tripwires-esp/tripwires.env to disable TLS for RabbitMQ and PostgreSQL.

    Here is an example:

    RMQSSL=false
    PGENCRYPTION=0
  3. If the deployment uses external PostgreSQL:

    1. Edit $deploy/site-config/sas-tripwires-esp/kustomization.yaml to comment the sas-tripwires-internal-postgres-config.yaml transformer and uncomment the sas-tripwires-external-postgres-config.yaml transformer.

      Here is an example:
      
      ```
      transformers:
        - transformers/sas-tripwires-esp-labels.yaml
        - transformers/sas-tripwires-tls-config.yaml
      # - transformers/sas-tripwires-internal-postgres-config.yaml
        - transformers/sas-tripwires-external-postgres-config.yaml
      ```
      
    2. Edit $deploy/site-config/sas-tripwires-esp/transformers/sas-tripwires-external-postgres-config.yaml to update the two name properties under secretKeyRef to match the name of the secret used for configuring the Platform PostgreSQL instance.

      Here is an example:
      
      ```
      valueFrom:
        secretKeyRef:
          name: platform-postgres-user
      ```
      
    3. Edit $deploy/site-config/sas-tripwires-esp/tripwires.env to supply the hostname, port and database of the Platform PostgreSQL server.

      Here is an example:
      
      ```
      PGHOST=viya-postgres.example.com
      PGPORT=5432
      PGDATABASE=viya
      ```
      

Configuring the SAS NIBRS Data Loader CronJob

Overview

The SAS NIBRS Data Loader CronJob runs on a configurable schedule to ingest NIBRS-compliant data files into a PostgreSQL database for consumption by other SAS Viya platform applications.

This README describes the steps necessary to configure the SAS NIBRS Data Loader CronJob.

Pre-Requisites

Provision a Kubernetes Persistent Volume and Persistent Volume Claim

The SAS NIBRS Data Loader CronJob reads the NIBRS files from a Persistent Volume (PV). Create a PV that supports ReadWriteMany (RWX) access. See your infrastructure documentation for instructions on how to create a PV. Then create a PersistentVolumeClaim (PVC) in the SAS Viya platform namespace that is associated with the PV so that the PV can be mounted into the CronJob.

Review the Kubernetes documentation for Persistent Volumes and PersistentVolumeClaims on Kubernetes for more information.
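
The following is a minimal sketch of a PVC that meets these requirements. The PVC name, storage class, and requested size are assumptions; replace them with values appropriate for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nibrs-data              # placeholder name; use it later when configuring the volume mount
spec:
  accessModes:
    - ReadWriteMany             # RWX access is required
  storageClassName: nfs-client  # assumption: any storage class that supports RWX
  resources:
    requests:
      storage: 10Gi             # size depends on the volume of NIBRS files
```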

Installation

  1. The directory $deploy/sas-bases/examples/sas-nibrs-data-loader contains the necessary configuration files. Copy $deploy/sas-bases/examples/sas-nibrs-data-loader to $deploy/site-config/sas-nibrs-data-loader. Create the destination directory, if it does not already exist.

  2. In the base kustomization.yaml file ($deploy/kustomization.yaml), add a reference to the copied sas-nibrs-data-loader directory in the resources block. Here is an example:

    resources:
    ...
    - site-config/sas-nibrs-data-loader

Configure the Volume Mount (required)

Edit the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/volume-transformer.yaml file, replacing {{ PVC-NAME }} with the name of the PVC configured as part of the pre-requisites.

Set the CronJob Schedule (required)

Review the Kubernetes documentation for CronJob schedule syntax.

Update the CronJob to the required schedule by editing the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/schedule-transformer.yaml file, replacing {{ CRON-SCHEDULE }} with the required schedule.

Note that this transformer will also set the suspend value of the CronJob to false, enabling the CronJob to run on the specified schedule. If you want the CronJob to remain suspended, set this value to true.
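
For example, replacing {{ CRON-SCHEDULE }} with the following value runs the data loader once per day at 02:00 (illustrative only; choose a schedule that matches how often new NIBRS files arrive):

0 2 * * *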

Handling States with Non-Standard NIBRS Specifications (Optional)

In some cases, certain US states do not strictly adhere to the FBI’s NIBRS specification. Additional configuration is required for these exceptional cases.

  1. Edit the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/us-state-transformer.yaml file. Replace {{ NIBRS_STATE }} with a two-character code representing the required state as specified in ISO 3166-2:US.

  2. Open the CronJob kustomization file located at $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/kustomization.yaml. In the transformers section, include the following content:

    transformers:
    ...
    - us-state-transformer.yaml

This ensures that the data loader correctly handles the unique data format of the specified state.

Tenant-Specific Configuration (optional)

The following steps create a configuration for a tenant named “tenant1”. Repeat these steps for each tenant that requires a SAS NIBRS Data Loader CronJob using the actual tenant name in place of “tenant1”.

  1. Copy the example tenant configuration directory from $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant-example to $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant1

  2. Replace the {{ TENANT-NAME }} placeholders in $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant1/kustomization.yaml with the tenant name.

  3. Replace the {{ TENANT-NAME }} placeholder in $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant1/nibrs-tenant.env with the tenant name.

  4. Add the tenant-specific resource to the $deploy/site-config/sas-nibrs-data-loader/kustomization.yaml file.

    Here is an example:

    resources:
    #  - sas-nibrs-data-loader-cronjob
       - sas-nibrs-data-loader-tenant1

Custom Schema Name (optional)

The NIBRS Data Loader will, by default, create a database schema named nibrsdataloader on the first run. It is possible to override this behaviour and specify a custom schema name.

  1. Edit the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/custom-schema-transformer.yaml file. Replace {{ NIBRS_SCHEMA }} with the new schema name. Note that this name must comply with PostgreSQL schema naming rules.

  2. Open the CronJob kustomization file located at $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/kustomization.yaml. In the transformers section, include the following content:

    transformers:
    ...
    - custom-schema-transformer.yaml

Configure SAS Micro Analytic Service to Support Analytic Stores

Overview

Configuring analytic store (ASTORE) directories is required in order to publish analytic store models from SAS Intelligent Decisioning, SAS Model Manager, and Model Studio to a SAS Micro Analytic Service publishing destination.

Configuring SAS Micro Analytic Service to use ASTORE files inside the container requires persistent storage from the cloud provider. A PersistentVolumeClaim (PVC) is defined to state the storage requirements. The storage provided by the cloud provider is mapped to predefined paths across the services that collaborate to handle ASTORE files.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Prerequisites

Storage for the ASTORE files must support ReadWriteMany access permissions.

Note: The STORAGE-CLASS-NAME from the provider is used to determine the STORAGE-CAPACITY that is required for your ASTORE files. The required storage capacity depends on the size and number of ASTORE files.

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/astores to the $deploy/site-config/sas-microanalytic-score/astores directory. Create the destination directory, if it does not already exist.

    Note: If the destination directory already exists, verify that the overlays have been applied. If the output contains the /models/astores/viya and /models/resources/viya mount directory paths, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directories.

  2. The resources.yaml file in $deploy/site-config/sas-microanalytic-score/astores contains the parameters of the storage that is required in the PersistentVolumeClaim. For more information about PersistentVolumeClaims, see Additional Resources.

    • Replace {{ STORAGE-CAPACITY }} with the amount of storage required.
    • Replace {{ STORAGE-CLASS-NAME }} with the appropriate storage class from the cloud provider that supports ReadWriteMany access mode.
  3. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    • Add site-config/sas-microanalytic-score/astores/resources.yaml to the resources block.
    • Add sas-bases/overlays/sas-microanalytic-score/astores/astores-transformer.yaml to the transformers block.

    Here is an example:

    resources:
    - site-config/sas-microanalytic-score/astores/resources.yaml
    
    transformers:
    - sas-bases/overlays/sas-microanalytic-score/astores/astores-transformer.yaml
  4. Complete one of the following deployment steps to apply the new settings.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then see Deploy the Software in SAS Viya Platform: Deployment Guide for more information.
    • If you are applying the overlay after the initial deployment of the SAS Viya platform, see Modify Existing Customizations in a Deployment in SAS Viya Platform: Deployment Guide for information about how to redeploy the software.

Verify Overlays for the Persistent Volumes

  1. Run the following command to verify whether the overlays have been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts:
      /models/astores/viya from astores-volume (rw,path="models")
      /models/resources/viya from astores-volume (rw,path="resources")

Additional Resources

Configure CPU and Memory Resources for SAS Micro Analytic Service

Overview

By default, SAS Micro Analytic Service is deployed with 750 MB of memory and 250m CPU.

If your SAS Micro Analytic Service deployment requires different resources, you can use the resources-transformer.yaml file in the $deploy/sas-bases/examples/sas-microanalytic-score/resources directory to configure different values.

Prerequisites

Determine the minimum and maximum value of memory and CPU required for your deployment. The values depend on available resources in the cluster and your desired throughput.

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/resources to the $deploy/site-config/sas-microanalytic-score/resources directory. Create the destination directory if it does not exist.

    Note: If the destination directory already exists, verify that the overlay has been applied. You do not need to take any further actions, unless you want to change the CPU and memory parameters to different values.

  2. Modify the resources-transformer.yaml in $deploy/site-config/sas-microanalytic-score/resources to specify your resource settings. For more information about Kubernetes resources, see Additional Resources.

    • Replace {{ MEMORY-REQUIRED }} with the minimum amount of memory required for SAS Micro Analytic Service.
    • Replace {{ MEMORY-LIMIT }} with the maximum amount of memory that can be claimed for SAS Micro Analytic Service.
    • Replace {{ CPU-REQUIRED }} with the minimum number of cores required for SAS Micro Analytic Service.
    • Replace {{ CPU-LIMIT }} with the maximum number of cores that can be claimed for SAS Micro Analytic Service.

    Note: Kubernetes uses units of measurement that are different from the standard. For memory, use Gi for gigabytes and Ti for terabytes. For cores, Kubernetes uses millicores as its standard, and there are 1000 millicores to a core. Therefore, if you want to use 4 cores, use 4000m as your value. 500m is equivalent to half a core.

  3. In the base kustomization.yaml file in the $deploy directory, add site-config/sas-microanalytic-score/resources/resources-transformer.yaml to the transformers block.

    Here is an example:

    transformers:
    - site-config/sas-microanalytic-score/resources/resources-transformer.yaml
  4. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Verify Overlay for the Resources

  1. Run the following command to verify whether the overlay has been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the desired CPU and memory values that you configured:

    Limits:
      cpu:     4
      memory:  2Gi
    Requests:
      cpu:      250m
      memory:   750M

Additional Resources

Configure SAS Micro Analytic Service to Support Archive for Log Step Execution

Overview

If enabled, the SAS Micro Analytic Service archive feature records the inputs and outputs of step execution to a set of rolling log files. To use the archive feature, SAS Micro Analytic Service must be configured with a persistent volume to use as a location in which to store the log files. This README describes how to configure SAS Micro Analytic Service to use a PersistentVolumeClaim to define storage for the archive logs.

By default, the archive feature is not enabled. This README also provides a link to where you can find more information about how to enable the archive feature in SAS Micro Analytic Service.

Prerequisites

The archive feature requires storage with ReadWriteMany access mode for storing transaction logs. A PersistentVolumeClaim is defined to specify the storage required.

Note: The STORAGE-CLASS-NAME from the cloud provider is used to determine the STORAGE-CAPACITY that is required for your archives. The required storage capacity depends on the expected transaction volume, the size of your payloads, and your backup strategy.

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/archive to the $deploy/site-config/sas-microanalytic-score/archive directory. Create the destination directory if it does not exist.

    Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /opt/sas/viya/config/var/log/microanalyticservice/default/archive mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directory.

  2. The resources.yaml file in $deploy/site-config/sas-microanalytic-score/archive contains the parameters of the storage that is required in the PersistentVolumeClaim. For more information about PersistentVolumeClaims, see Additional Resources.

    • Replace {{ STORAGE-CAPACITY }} with the amount of storage required.
    • Replace {{ STORAGE-CLASS-NAME }} with the appropriate storage class from the cloud provider that supports ReadWriteMany access mode.
  3. Make the following changes to the kustomization.yaml file in the $deploy directory:

    • Add site-config/sas-microanalytic-score/archive/resources.yaml to the resources block.
    • Add sas-bases/overlays/sas-microanalytic-score/archive/archive-transformer.yaml to the transformers block.

    Here is an example:

    resources:
    - site-config/sas-microanalytic-score/archive/resources.yaml
    
    transformers:
    - sas-bases/overlays/sas-microanalytic-score/archive/archive-transformer.yaml
  4. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Post-Installation Tasks

Verify Overlay for the Persistent Volume

  1. Run the following command to verify whether the overlay has been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory path:

    Mounts:
      /opt/sas/viya/config/var/log/microanalyticservice/default/archive from archives-volume (rw)

Enable the Archive Feature in SAS Environment Manager

After the deployment is complete, the SAS Micro Analytic Service archive feature must be enabled in SAS Environment Manager. For more information, see Archive Feature Configuration in SAS Micro Analytic Service: Programming and Administration Guide.

Additional Resources

Configuration Settings for SAS Micro Analytic Service

Overview

This document describes the customizations that can be made by the Kubernetes administrator for deploying, tuning, and troubleshooting SAS Micro Analytic Service.

Installation

SAS provides example files for many common customizations. Read the descriptions for the example files in the examples section. Follow these steps to use transformers from examples to customize your deployment.

  1. Copy the example transformer file in $deploy/sas-bases/examples/sas-microanalytic-score/config to the $deploy/site-config/sas-microanalytic-score/config directory. Create the destination directory if it does not exist.

  2. Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ VARIABLE-NAME }}. Replace the entire variable string, including the braces, with the value you want to use.

  3. In the base kustomization.yaml in the $deploy directory, add site-config/sas-microanalytic-score/config/ to the transformers block. A sketch of a filled-in transformer file appears after these steps.

    transformers:
    - site-config/sas-microanalytic-score/config/mas-add-environment-variables.yaml   
  4. Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

    Note: These transformers can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

    • If you are applying the transformer during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
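
For orientation, a filled-in environment-variable transformer of this kind typically looks something like the following. This is only a sketch under assumptions: the patch path, the target, and the variable name and value are illustrative, and the structure of the SAS-provided mas-add-environment-variables.yaml file may differ.

```yaml
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: mas-add-environment-variables
patch: |-
  - op: add
    # assumed path; appends an environment variable to the first container
    path: /spec/template/spec/containers/0/env/-
    value:
      name: my-variable
      value: my-value
target:
  kind: Deployment
  name: sas-microanalytic-score
```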

Examples

The example files are located at $deploy/sas-bases/examples/sas-microanalytic-score/config. The following is a list of each example file for SAS Micro Analytic Service settings and the file name.

Verify Transformer for the New Configuration

  1. Run the following command to verify whether the transformer has been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the values that you configured:

    Environment:
      my-variable: my-value

Additional Resources

Configure SAS Micro Analytic Service to Grant Security Context Constraints to Its Service Account

Overview

This README describes how privileges can be added to the sas-microanalytic-score pod service account. Security context constraints are required in an OpenShift cluster if the sas-microanalytic-score pod needs to mount an NFS volume. If the Python environment is made available through an NFS mount, the service account requires NFS volume mounting privileges.

Note: For information about using NFS to make Python available, see the README file at /$deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or /$deploy/sas-bases/docs/configure_python_for_sas_viya.htm (for HTML format).

Prerequisites

Granting Security Context Constraints on an OpenShift Cluster

The /$deploy/sas-bases/overlays/sas-microanalytic-score/service-account directory contains a file to grant security context constraints for using NFS on an OpenShift cluster.

A Kubernetes cluster administrator should add these security context constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:

kubectl apply -f sas-microanalytic-score-scc.yaml

or

oc create -f sas-microanalytic-score-scc.yaml

Bind the Security Context Constraints to a Service Account

After the security context constraints have been applied, you must link the security context constraints to the appropriate service account that will use it. Use the following command:

oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-microanalytic-score -z sas-microanalytic-score

Post-Installation Tasks

Restart sas-microanalytic-score Service Pod

  1. Run this command to restart the pod with the new privileges added to the service account:

    kubectl rollout restart deployment sas-microanalytic-score -n <name-of-namespace>

Configure SAS Micro Analytic Service to Enable Access to the IBM DB2 Client

Overview

This document describes customizations that must be performed by the Kubernetes administrator for deploying SAS Micro Analytic Service to enable access to a DB2 database.

SAS Micro Analytic Service uses the installed DB2 client environment. This environment must be accessible from a PersistentVolume.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Prerequisites

The DB2 Client must be installed. After the initial DB2 Client setup, two directories (for example, /db2client and /db2) must be created and accessible to SAS Micro Analytic Service. Ensure that the two directories contain the installed client files (for example, /db2client) and the configured server definition files (/db2).

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/db2-config to the $deploy/site-config/sas-microanalytic-score/db2-config directory. Create the destination directory, if it does not already exist.

  2. Modify the three files under the site-config/sas-microanalytic-score/db2-config folder to point to your settings.

    • Modify the $deploy/site-config/sas-microanalytic-score/db2-config/data-mount-mas.yaml file:

      • Replace each instance of {{ DB2_CLIENT_DIR_NAME }} with a desired name (for example, db2client)
      • Replace {{ DB2_CLIENT_DIR_MOUNT_PATH }} with an appropriate path for the installed DB2 client files (for example, “/db2client”)
      • Replace {{ DB2_CLIENT_DIR_PATH }} with the location of the db2client folder (for example, /shared/gelcontent/access-clients/db2client)
      • Replace {{ DB2_CLIENT_DIR_SERVER_NAME }} with the name of the server where DB2 Client is installed (for example, cloud.example.com)
      • Replace each instance of {{ DB2_CONFIGURED_DIR_NAME }} with a desired name (for example, db2)
      • Replace {{ DB2_CONFIGURED_DIR_MOUNT_PATH }} with an appropriate path for the DB2 configured server definition files (for example, “/db2”)
      • Replace {{ DB2_CONFIGURED_DIR_PATH }} with the location where the DB2 configured server definition files exist (for example, /shared/gelcontent/access-clients/db2)
      • Replace {{ DB2_CONFIGURED_DIR_SERVER_NAME }} with the name of the server where the DB2 configured server definition files exist (for example, cloud.example.com)
    • Modify the $deploy/site-config/sas-microanalytic-score/db2-config/etc-hosts-mas.yaml file:

      • Replace {{ DB2_DATABASE_IP }} with the IP address of the DB2 database server (for example, “192.0.2.0”)
      • Replace {{ DB2_DATABASE_HOSTNAME }} with the DB2 database host name (for example, “MyDBHost”)
    • Modify the $deploy/site-config/sas-microanalytic-score/db2-config/db2-environment-variables-mas.yaml file:

      • Replace {{ VALUE_1 }} with the appropriate value of DB2DIR (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_2 }} with the appropriate value of DB2INSTANCE (for example, “sas”)
      • Replace {{ VALUE_3 }} with the appropriate value of DB2LIB (for example, “/db2client/sqllib/lib”)
      • Replace {{ VALUE_4 }} with the appropriate value of DB2_HOME (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_5 }} with the appropriate value of DB2_NET_CLIENT_PATH (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_6 }} with the appropriate value of IBM_DB_DIR (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_7 }} with the appropriate value of IBM_DB_HOME (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_8 }} with the appropriate value of IBM_DB_INCLUDE (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_9 }} with the appropriate value of IBM_DB_LIB (for example, “/db2client/sqllib/lib”)
      • Replace {{ VALUE_10 }} with the appropriate value of INSTHOME (for example, “/db2”)
      • Replace {{ VALUE_11 }} with the appropriate value of INST_DIR (for example, “/db2client/sqllib”)
      • Replace {{ VALUE_12 }} with the appropriate value of DB2 (for example, “/db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32”)
      • Replace {{ VALUE_13 }} with the appropriate value of DB2_BIN (for example, “/db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc”)
      • Replace {{ VALUE_14 }} with the appropriate value of SAS_EXT_LLP_ACCESS (for example, “/db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32”)
      • Replace {{ VALUE_15 }} with the appropriate value of SAS_EXT_PATH_ACCESS (for example, “/db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc”)
  3. Make the following changes to the transformers block of the base kustomization.yaml file (‘$deploy/kustomization.yaml’):

    • Add site-config/sas-microanalytic-score/db2-config/data-mount-mas.yaml
    • Add site-config/sas-microanalytic-score/db2-config/etc-hosts-mas.yaml
    • Add site-config/sas-microanalytic-score/db2-config/db2-environment-variables-mas.yaml

    Here is an example:

    transformers:
    - site-config/sas-microanalytic-score/db2-config/data-mount-mas.yaml # patch to setup mount for mas
    - site-config/sas-microanalytic-score/db2-config/etc-hosts-mas.yaml # Host aliases
    - site-config/sas-microanalytic-score/db2-config/db2-environment-variables-mas.yaml  # patch to inject environment variables for DB2
  4. Complete one of the following deployment steps to apply the new settings.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then see Deploy the Software in SAS Viya Platform: Deployment Guide for more information.
    • If you are applying the overlay after the initial deployment of the SAS Viya platform, see Modify Existing Customizations in a Deployment in SAS Viya Platform: Deployment Guide for information about how to redeploy the software.

Verify Overlays for the Persistent Volumes

  1. Run the following command to verify whether the overlays have been applied:

    kubectl describe pod  <sas-microanalyticscore-pod-name> -n <name-of-namespace>
  2. Verify that the output contains the following mount directory paths:

    Mounts:
      /db2 from db2 (rw)
      /db2client from db2client (rw)
  3. Verify that the output shows that each environment variable is assigned the appropriate value. Here is an example:

    Environment:
       SAS_K8S_DEPLOYMENT_NAME:               sas-microanalytic-score
       DB2DIR:                                /db2client/sqllib
       DB2INSTANCE:                           sas
       DB2LIB:                                /db2client/sqllib/lib
       DB2_HOME:                              /db2client/sqllib
       DB2_NET_CLIENT_PATH:                   /db2client/sqllib
       IBM_DB_DIR:                            /db2client/sqllib
       IBM_DB_HOME:                           /db2client/sqllib
       IBM_DB_INCLUDE:                        /db2client/sqllib/
       IBM_DB_LIB:                            /db2client/sqllib/lib
       INSTHOME:                              /db2
       INST_DIR:                              /db2client/sqllib
       DB2:                                   /db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32
       DB2_BIN:                               /db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc
       SAS_EXT_LLP_ACCESS:                    /db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32
       SAS_EXT_PATH_ACCESS:                   /db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc

Additional Resources

Configure SAS Model Repository Service to Add Service Account

Overview

This README describes how a service account with defined privileges can be added to the sas-model-repository pod. A service account is required in an OpenShift cluster if the pod needs to mount an NFS volume. If the Python environment is made available through an NFS mount, the service account requires NFS volume mounting privilege.

Note: For information about using NFS to make Python available, see the README file at /$deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or /$deploy/sas-bases/docs/configure_python_for_sas_viya.htm (for HTML format).

Prerequisites

Granting Security Context Constraints on an OpenShift Cluster

The /$deploy/sas-bases/overlays/sas-model-repository/service-account directory contains a file to grant security context constraints for using NFS on an OpenShift cluster.

A Kubernetes cluster administrator should add these security context constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:

kubectl apply -f sas-model-repository-scc.yaml

or

oc create -f sas-model-repository-scc.yaml

Bind the Security Context Constraints to a Service Account

After the security context constraints have been applied, you must link the security context constraints to the appropriate service account that will use it. Use the following command:

oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-model-repository -z sas-model-repository

Installation

Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Post-Installation Tasks

Verify the Service Account Configuration

  1. Run the following command to verify whether the overlay has been applied:

    kubectl -n <name-of-namespace> get pod <sas-model-repository-pod-name> -oyaml | grep serviceAccount
  2. Verify that the output contains the service-account sas-model-repository.

    serviceAccount: sas-model-repository
    serviceAccountName: sas-model-repository

Preparing and Configuring SAS Model Risk Management for Deployment

Prerequisites

When SAS Model Risk Management is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by all SAS Risk Cirrus solutions. Therefore, in order to deploy the SAS Model Risk Management solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/docs/preparing_and_configuring_risk_cirrus_core_for_deployment.htm (HTML format) or at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md (Markdown format).

The Risk Cirrus Core README also contains information about storage options, such as external databases, for your solution. You must complete the pre-deployment tasks described in the Risk Cirrus Core README before deploying SAS Model Risk Management; read that document for important information about the pre-installation tasks that must be completed first.

IMPORTANT: You must complete the step described in the Cirrus Core README to modify your Cirrus Core configuration file. SAS Model Risk Management uses workflow service tasks, so a user account must be configured for a workflow client. If you know before your deployment which user account you will use and you want to have it configured during installation, then you should set the {{SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG}} variable to Y and assign the user account to the {{SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT}} variable. The Cirrus Core README contains more information about these two environment variables.

For more information about deploying Risk Cirrus Core, you can also read Deployment Tasks in the SAS Risk Cirrus: Administrator’s Guide.

For more information about the tasks that should be completed prior to deploying SAS Model Risk Management, see Deployment Tasks in the SAS Model Risk Management: Administrator’s Guide.

Overview of Configuration for SAS Model Risk Management

SAS Model Risk Management provides a ConfigMap whose values control various aspects of its deployment process. It includes variables such as the logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and then configuring your kustomization.yaml file to apply those overrides.

For a list of variables that can be overridden and their default values, see SAS Model Risk Management Configuration Parameters.

For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters.

SAS Model Risk Management Configuration Parameters

The following list describes the parameters that can be specified in the SAS Model Risk Management .env configuration file. These parameters can be found in the template configuration file (configuration.env), but they are commented out in that file. Lines that begin with # will not be applied during deployment. If you want to use one of those skipped variables, remove the # at the beginning of the line.
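
For example, to deploy the sample artifacts described below, you would uncomment the corresponding line and set its value, leaving all other lines commented out (illustrative only):

    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=Y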

  1. The SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER parameter specifies a logging level for the deployment. The logging level INFO is used if the variable is not overridden by your configuration.env file. For a more verbose level of logging, specify the value DEBUG.

  2. The SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES parameter specifies whether you want to include steps flagged as sample artifacts. The value N is used if the variable is not overridden by your configuration.env file. That means steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:

    • The load_workflows step loads and activates the SAS-provided workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
    • The upload_notifications step loads notification templates that are used with the SAS-provided workflow definitions. If you are not using SAS-provided workflow definitions, then you do not need these templates.
    • The load_sample_data step loads sample Class Members, Class Member Translations, NamedTreePaths, Roles, RolePermissions, Positions, ReportFacts, ReportObjectRegistrations, and ReportExtractConfigurations. It also loads sample object instances, like models and findings, as well as the LinkInstances, ObjectClassifications, and Workflows associated with those objects. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
    • The import_main_data_loader_files step imports the Cirrus_MRM_loader.zip file into the file service. Administrators can then download the file from the Data Load page in SAS Model Risk Management and use it as a template to load and unload data.
    • The import_sample_data_loader_files step imports the Cirrus_MRM_sample_data_loader.xlsx and Cirrus_MRM_sample_data_workflow_change_state_loader.xlsx files into the files service. Administrators can then download the files from the Data Load page in SAS Model Risk Management and use them as a template to load and unload data.
    • The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.
    • The localize_va_reports step imports localized labels for SAS-provided reports created in SAS Visual Analytics.

    WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.

  3. The SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS parameter specifies whether you want to skip specific steps during the deployment of SAS Model Risk Management. The value "" is used if the variable is not overridden by your configuration.env file. This means none of the deployment steps will be skipped explicitly. Typically, the only use case for overriding this value would be to load some sample artifacts, like workflows, but skip the loading of sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable to the IDs of any steps you would like to skip, including those flagged as sample data. If you want to skip the loading of sample data, for example, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, leave this variable as an empty string; load_sample_data and any other steps that are marked as sample data are already skipped in that case.

  4. The SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS parameter specifies an explicit list of steps you want to run during a deployment. The value "" is used if the variable is not overridden by your configuration.env file. This means all of the deployment steps will be run except steps flagged as sample artifacts (if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is N) or steps skipped in SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the upload_notifications step will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided notifications to use in your custom workflow definitions. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “upload_notifications” and then trigger the sas-risk-cirrus-mrm CronJob to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.

    WARNING: This list is absolute; the deployment will only run the steps included in this list. This variable should be an empty string if you are deploying this environment for the first time, or if you are upgrading from a previous version. Otherwise you risk a failed or incomplete deployment.
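
    For illustration, here is a minimal sketch of the scenario described above, assuming samples were excluded during the initial deployment; the kubectl job name is an arbitrary placeholder:

    # configuration.env: re-run only the notifications step on the next deployer run
    SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS=upload_notifications

    # manually trigger the sas-risk-cirrus-mrm CronJob to force a redeployment
    kubectl create job mrm-rerun-notifications --from=cronjob/sas-risk-cirrus-mrm -n <name-of-namespace>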

Apply Overrides to the Configuration Parameters

Note: If you configured overrides during a previous deployment, those overrides should already be available in the SAS Model Risk Management ConfigMap. You can verify them as described in Verify Overlay Connection Settings Applied Successfully.

If you want to override any of the SAS Model Risk Management configuration properties rather than using the default values, complete these steps:

  1. If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.

    If you have a $deploy/site-config/sas-risk-cirrus-mrm directory, take note of the values in your mrm_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section: - site-config/sas-risk-cirrus-mrm/resources/mrm_transform.yaml.

  2. Create a $deploy/site-config/sas-risk-cirrus-mrm directory if one does not exist. Then copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-mrm to that directory.

    IMPORTANT: If the destination directory already exists, confirm that it contains the configuration.env file, not the mrm_transform.yaml file that was used for cadences prior to 2025.02. If the directory already contains the configuration.env file, verify that the overlay connection settings have been applied correctly. No further actions are required unless you want to change the connection settings to different values.

  3. In the base kustomization.yaml file, add the sas-risk-cirrus-mrm-parameters ConfigMap to the configMapGenerator block. If that block does not exist, create it. Here is an example:

    configMapGenerator:
      - name: sas-risk-cirrus-mrm-parameters
        behavior: merge
        envs:
          - site-config/sas-risk-cirrus-mrm/configuration.env
  4. Save any changes you made to the kustomization.yaml file.

  5. If you want to change the default settings provided by SAS or update overridden values from previous cadences, modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-mrm directory). If there are any parameters for which you want to override the default value, remove the # at the beginning of that variable’s line in your configuration.env file and replace the placeholder with the desired value. You can read more about each step in SAS Model Risk Management Configuration Parameters.

    The following is an example of a configuration.env file you could use for SAS Model Risk Management. This example uses the default values provided by SAS except for the SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES and SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS variables. In this case, it will run the sample steps described in SAS Model Risk Management Configuration Parameters except the step that loads sample data (load_sample_data). That means your deployment will contain workflows, notifications, localized reports, and links to data loaders; but it will not contain roles, positions, object instances, or other sample data.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=Y
    SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS=load_sample_data
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
  6. Save any changes you made to the configuration.env file.

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

Verify Overlay Connection Settings Applied Successfully

Before verifying the settings for the SAS Model Risk Management solution, you should first verify Risk Cirrus Core’s settings. Those instructions can be found in the Risk Cirrus Core README. To verify the settings for SAS Model Risk Management, do the following:

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-mrm-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired connection settings that you configured.

Additional Resources

For administration information related to the SAS Model Risk Management solution, see SAS Model Risk Management: Administrator’s Guide.

For more generalized deployment information, see SAS Viya Platform: Deployment Guide.

SAS RFC Solution Configuration Patch to Communicate with Apache Kafka

Overview

The SAS RFC Solution Configuration Service installs three Kubernetes resources that define how Fraud solutions communicate with Apache Kafka.

This README file describes how to replace the placeholders in the files with values and secret data for a specific Apache Kafka cluster.

Prerequisites

Installation

  1. Copy all of the files in $deploy/sas-bases/examples/sas-rfc-solution-config/configure-kafka to $deploy/site-config/sas-rfc-solution-config, where $deploy is the directory containing your SAS Viya platform installation files. Create the target directory, if it does not already exist.

  2. Edit the $deploy/site-config/sas-rfc-solution-config/kafka-configuration-patch.yaml file. Update properties, especially the server, protocol and topics. Add any properties as recommended by product documentation or customer support. Here is an example:

    - op: replace
      path: /data
      value:
       SAS_KAFKA_SERVER: "fsi-kafka-kafka-bootstrap.kafka.svc.cluster.local:9093"
       SAS_KAFKA_CONSUMER_DEBUG: ""
       SAS_KAFKA_PRODUCER_DEBUG: ""
       SAS_KAFKA_OFFSET: earliest
       SAS_KAFKA_ACKS: "2"
       SAS_KAFKA_BATCH: ""
       SAS_KAFKA_LINGER: ""
       SAS_KAFKA_AUTO_CREATE_TOPICS: "true"
       SAS_KAFKA_SECURITY_PROTOCOL: "sasl_ssl"
       SAS_KAFKA_HOSTNAME_VERIFICATION: "false"
       SAS_DETECTION_KAFKA_TOPIC: "input-transactions"
       SAS_DETECTION_KAFKA_TDR_TOPIC: "tdr-topic"
       SAS_DETECTION_KAFKA_REJECTTOPIC: "transaction-reject"
       SAS_TRIAGE_KAFKA_TDR_TOPICS: "tdr-topic"
       SAS_TRIAGE_KAFKA_OUTBOUND_TOPIC: "sas-triage-topic-outbound"
       SAS_TRIAGE_KAFKA_QUEUE_CHANGED_TOPIC: "sas-triage-notification-queue-changed"
       SAS_TRANSACTION_MARK_TOPIC: "transaction-topic-outbound"
       SAS_RWS_KAFKA_BROKERS: "fsi-kafka-kafka-bootstrap.kafka.svc.cluster.local:9093"
       SAS_RWS_KAFKA_INPUT_TOPIC: "rws-input-transactions"
       SAS_RWS_KAFKA_OUTPUT_TOPIC: "rws-output-transactions"
       SAS_RWS_KAFKA_ERROR_TOPIC: "rws-error-transactions"
       SAS_RWS_KAFKA_REJECT_TOPIC: "rws-reject-transactions"
  3. Edit the $deploy/site-config/sas-rfc-solution-config/kafka-cred-patch.yaml file. If the security protocol for Apache Kafka includes SASL, then modify the patch to include a base64 representation of the user ID and password. Here is an example:

    - op: replace
      path: /data
      value:
        username: ...
        password: ...
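
    For example, the base64 values can be generated with the base64 command; the user ID and password shown here are placeholders:

    echo -n 'kafkauser' | base64
    echo -n 'kafkapassword' | base64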
  4. Edit the $deploy/site-config/sas-rfc-solution-config/kafka-truststore-patch.yaml file. If the security protocol for Apache Kafka includes SSL, then update the patch to use the correct certificate. Here is an example:

    - op: replace
      path: /data
      value:
        ca.crt: LS0tLS1CRU...
        ca.p12: MIIGogIBAz...
        ca.password: ...
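
    For example, the base64 content can be generated from existing certificate and keystore files; the file names shown here are placeholders:

    base64 -w 0 ca.crt
    base64 -w 0 ca.p12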
  5. After updating the example files, add references to them in the base kustomization.yaml file ($deploy/kustomization.yaml) as patches.

    For example, if you made the changes described above, then the base kustomization.yaml file should have entries similar to the following:

    
    patches:
    - target:
        version: v1
        kind: ConfigMap
        name: sas-rfc-solution-config-kafka-config
      path: site-config/sas-rfc-solution-config/kafka-configuration-patch.yaml
    - target:
        version: v1
        kind: Secret
        name: sas-rfc-solution-config-kafka-creds
      path: site-config/sas-rfc-solution-config/kafka-cred-patch.yaml
    - target:
        version: v1
        kind: Secret
        name: sas-rfc-solution-config-kafka-ca-cert
      path: site-config/sas-rfc-solution-config/kafka-truststore-patch.yaml
  6. As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Deploy the Software.
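
    After the changes have been applied, you can confirm that the patched values are present. For example, the following command uses the ConfigMap name from the patch target above and one of the keys from the configuration patch; if your deployment adds a name suffix to the ConfigMap, adjust the name accordingly:

    kubectl -n <name-of-namespace> get configmap sas-rfc-solution-config-kafka-config -o yaml | grep SAS_KAFKA_SERVER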

Configure SAS Real-Time Watchlist Screening

Overview

The configuration information in this README applies to both SAS Real-Time Watchlist Screening for Entities and SAS Real-Time Watchlist Screening for Payments.

SAS Real-Time Watchlist Screening requires running an Apache Kafka message broker and a PostgreSQL database. The instructions in this README describe how to configure the product.

Instructions

Data in Motion - TLS

The Configure with Initial SAS Viya Platform Deployment section below describes several TLS configurations. These configurations must align with SAS Viya security requirements, as specified in SAS Viya Platform Operations guide Security Requirements. Here are the specific TLS deployment requirements:

Configure with Initial SAS Viya Platform Deployment

To configure SAS Real-Time Watchlist Screening:

  1. Copy the files in the $deploy/sas-bases/examples/sas-watch-config/sample directory to the $deploy/site-config/sas-watch-config/install directory. Create the destination directory if it does not exist.

  2. If you are installing SAS Real-Time Watchlist Screening with SAS Viya platform, add $deploy/site-config/sas-watch-config/install to the resources block of the base kustomization.yaml file. Here is an example:

    resources:
    ...
    - site-config/sas-watch-config/install
    ...
  3. Update the $deploy/site-config/sas-watch-config/install/list/watchlist.xml file by replacing the variables with the appropriate values for configuring the watchlist.

  4. Update the $deploy/site-config/sas-watch-config/install/kustomization.yaml file by replacing the variables with the appropriate values for secrets. The secrets generator can be removed if the equivalent secrets are created prior to installing SAS Real-Time Watchlist Screening.

  5. Update the $deploy/site-config/sas-watch-config/install/namespace.yaml file by replacing the variable with the appropriate value for the targeted namespace.

  6. Update the $deploy/site-config/sas-watch-config/install/settings.properties file by replacing the variables with the appropriate values for the properties.

  7. Update the $deploy/site-config/sas-watch-config/install/base/rws-settings.yaml file by replacing the variables with the appropriate values for the ConfigMap.

  8. Update the $deploy/site-config/sas-watch-config/install/base/rws-image-pull-secrets.yaml file by replacing the variables with the appropriate values for the image pull secrets.

    The image pull secret can be found using the SAS Viya platform Kustomize build command:

    kustomize build . > site.yaml
    grep '.dockerconfigjson:' site.yaml

    Alternatively, if SAS Viya platform has already been deployed, the image pull secret can be found with the kubectl command:

    kubectl -n {{ NAMESPACE }} get secret --field-selector=type=kubernetes.io/dockerconfigjson -o yaml | grep '.dockerconfigjson:'

    The output for either command is .dockerconfigjson: <SECRET>. Replace the {{ IMAGE_PULL_SECRET }} variables with the value returned by the command you used.

    Replace the {{ NAMESPACE }} value.

  9. If you are deploying to Red Hat OpenShift, update configurations by following the instructions in the comments of each of the following files:

    • $deploy/site-config/sas-watch-config/install/base/kustomization.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-admin-route.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-rt-route.yaml
  10. If you are not deploying to Red Hat OpenShift, update the $deploy/site-config/sas-watch-config/install/base/rws-ingress.yaml file by replacing the variables with the appropriate values for the ingress host and namespace.

  11. Update the five image values that are contained in these three files:

    • $deploy/site-config/sas-watch-config/install/base/rws-admin-deployment.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-async-deployment.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-rt-deployment.yaml

    In those files, revise the value “sas-business-orchestration-worker” to include the registry server, relative path, name, and tag. The registry server and relative path are the same as for other SAS Viya platform delivered images.

    The name of the container is ‘sas-business-orchestration-worker’. The registry relative path, name, and tag values are found in the sas-components-* ConfigMap in the SAS Viya platform deployment.

    Perform the following commands to determine the appropriate information. When you have the information, add it to the appropriate places in the three files listed above.

    # generate the site.yaml file
    kustomize build -o site.yaml

    # get the sas-business-orchestration-worker registry information
    cat site.yaml | grep 'sas-business-orchestration-worker:' | grep -v -e "VERSION" -e 'image'

    # manually update the sas-business-orchestration-worker images using the information gathered above: <container registry>/<container relative path>/sas-business-orchestration-worker:<container tag>

    # apply the site.yaml file
    kubectl apply -f site.yaml

    Perform the following commands to get the required information from a running SAS Viya platform deployment.

    # get the registry server, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
    kubectl -n {{ NAMESPACE }} get deployment sas-readiness -o yaml | grep -e "image:.*sas-readiness" | sed -e 's/image: //g' -e 's/\/.*//g'  -e 's/^[ \t]*//'
      <container registry>
    
    # get registry relative path and tag, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
    CONFIGMAP="$(kubectl -n {{ NAMESPACE }} get cm | grep sas-components | tr -s ' ' | cut -d ' ' -f1)"
    kubectl -n {{ NAMESPACE }} get cm "$CONFIGMAP" -o yaml | grep 'sas-business-orchestration-worker:' | grep -v "VERSION"
       SAS_COMPONENT_RELPATH_sas-business-orchestration-worker: <container relative path>/sas-business-orchestration-worker
       SAS_COMPONENT_TAG_sas-business-orchestration-worker: <container tag>
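
    After you substitute the gathered values, the image value in each of the three deployment files should follow the placeholder pattern shown above:

    image: <container registry>/<container relative path>/sas-business-orchestration-worker:<container tag>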
  12. If you are enabling TLS, follow the instructions in the appropriate comments section in each of the following files, based on the TLS mode you are deploying with:

    • $deploy/site-config/sas-watch-config/install/base/rws-admin-deployment.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-async-deployment.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-rt-deployment.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-ingress.yaml
    • $deploy/site-config/sas-watch-config/install/base/rws-tls.yaml
    • $deploy/site-config/sas-watch-config/install/base/bdsl/bdsl.yaml
  13. If you are integrating with SAS Visual Investigator, perform the following steps:

    • Populate the fields in the $deploy/site-config/sas-watch-config/install/datastore/sas-watchlist-datastore-connection.json file using the same values that exist in the sas-watchlist-db-credentials secret referenced in $deploy/site-config/sas-watch-config/install/kustomization.yaml. Do not alter the “name” field within sas-watchlist-datastore-connection.json.

    • Uncomment the following entries in the $deploy/site-config/sas-watch-config/install/kustomization.yaml file:

      secretGenerator:
      ...
      # - name: sas-watchlist-datastore-connection
      #   files:
      #     - datastore/sas-watchlist-datastore-connection.json
      
      # patches:
      #   - path: datastore/rfc-solution-config-datastore-patch.yaml
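
      After you remove the comment characters, those entries should look like this:

      secretGenerator:
      ...
      - name: sas-watchlist-datastore-connection
        files:
          - datastore/sas-watchlist-datastore-connection.json

      patches:
        - path: datastore/rfc-solution-config-datastore-patch.yaml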
    • If you are deploying SAS Real-Time Watchlist Screening separately from the SAS Viya platform, make sure to supply these entries to the base kustomization.yaml file ($deploy/kustomization.yaml) used to deploy the SAS Viya platform.

  14. The SAS license must be applied to the deployment artifacts in order to successfully screen requests. The way that you reference the license secret depends on how SAS Real-Time Watchlist Screening is being deployed.

    • If you are deploying in the SAS Viya platform namespace and SAS Viya platform is already deployed, update the secret volume mount with the secretName of the existing sas-license- secret. The secretName can be determined with the following command:

      kubectl get secret -n <namespace> | grep "sas-license"

      The secretName must be updated in each of the following files:

      • $deploy/site-config/sas-watch-config/install/base/rws-async-deployment.yaml
      • $deploy/site-config/sas-watch-config/install/base/rws-rt-deployment.yaml
      • $deploy/site-config/sas-watch-config/install/base/rws-admin-deployment.yaml
    • If you are not deploying in the SAS Viya platform namespace or you are deploying in the SAS Viya platform namespace but SAS Viya platform has not been deployed yet, you must create a license secret. Provide your license JSON web token as input to a Kubernetes secret and replace {{ NAMESPACE }} with the namespace value:

      kubectl create secret generic sas-license --from-file=SAS_LICENSE={{ your license jwt file }} -n {{ NAMESPACE }}
  15. Deploy the software.

Configure after the Initial Deployment

Alternatively, SAS Real-Time Watchlist Screening can be installed separately from the SAS Viya platform. Complete steps 1-13 in Configure with Initial SAS Viya Platform Deployment. Instead of step 14, perform the following commands:

kustomize build $deploy/site-config/sas-watch-config/install > sas-watch.yaml
kubectl apply -f sas-watch.yaml

Preparing and Configuring SAS Regulatory Capital Management for Deployment

Prerequisites

When SAS Regulatory Capital Management is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Regulatory Capital Management solution successfully, you must deploy the Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-core/resources/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Regulatory Capital Management: Administrator’s Guide.

Overview of Configuration for SAS Regulatory Capital Management

SAS Regulatory Capital Management provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and configuring your kustomization.yaml file to apply these overrides.

For a list of variables that can be overridden and their default values, see SAS Regulatory Capital Management Configuration Parameters and Secrets.

For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters and Secrets.

SAS Regulatory Capital Management Configuration Parameters and Secrets

The following table contains a list of parameters that can be specified in the SAS Regulatory Capital Management .env configuration file. These parameters can all be found in the template configuration file (configuration.env) but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.

Parameter Name Description
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG".
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:

- The create_sampledata_folders step creates all sample data folders in the file service under the Products/SAS Regulatory Capital Management directory.
- The transfer_sampledata_files step stores a copy of all sample data files in the file service under the Products/SAS Regulatory Capital Management directory. This directory will include DDLs, reports, sample data, and scripts used to load the sample data.
- The import_sample_dataloader_files step stores a copy of the Cirrus_RCM_sample_data_loader.xlsx file in the file service under the Products/SAS Regulatory Capital Management directory. Administrators can then download the file from the Data Load page in SAS Regulatory Capital Management and use it as a template to load and unload data.
- The install_sampledata step loads the sample data into an RCM library.
- The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, and Object Classifications.
- The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.

WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Specifies whether you want to skip specific steps during the deployment of SAS Regulatory Capital Management.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped.
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Specifies whether you want to run specific steps during the deployment of SAS Regulatory Capital Management.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database.

The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET parameter specifies the secret for the database user who is intended to own the solution database schema. It is specified in the SAS Regulatory Capital Management .env secret file (sas-risk-cirrus-rcm-secret.env). It is commented out in the file with a ‘#’ at the beginning, so its value will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.

Apply Overrides to the Configuration Parameters and Secrets

If you want to override any of the SAS Regulatory Capital Management configuration parameters rather than using the default values, complete these steps:

  1. If you have a $deploy/site-config/sas-risk-cirrus-rcm directory, delete it and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section:

    - site-config/sas-risk-cirrus-rcm/resources/rcm_transform.yaml

    This step should only be necessary if you are upgrading from a cadence prior to 2025.02.

  2. Copy the configuration.env and sas-risk-cirrus-rcm-secret.env from $deploy/sas-bases/examples/sas-risk-cirrus-rcm to the $deploy/site-config/sas-risk-cirrus-rcm directory. Create the destination directory if one does not exist. If the directory already exists and already has the expected .env files, verify that the overrides have been correctly applied. No further actions are required, unless you want to apply different overrides.

  3. In the base kustomization.yaml file, add the sas-risk-cirrus-rcm-parameters ConfigMap to the configMapGenerator block and sas-risk-cirrus-rcm-secret.env to the secretGenerator block. If those blocks do not exist, create them. Here is an example of what the inserted code block should look like in the kustomization.yaml file:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-rcm-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-rcm/configuration.env
    ...
    secretGenerator:
    ...
    - name: sas-risk-cirrus-rcm-secret
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-rcm/sas-risk-cirrus-rcm-secret.env
    ...
  4. Save the kustomization.yaml file.

  5. Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-rcm directory). If there are any parameters for which you want to override the default value, uncomment that variable’s line in your configuration.env file and replace the placeholder with the desired value.

    The following is an example of a configuration.env file that you could use for SAS Regulatory Capital Management. This example will use all of the default values provided by SAS.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME={{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }}

    For a list of variables that can be overridden and their default values, see SAS Regulatory Capital Management Configuration Parameters and Secrets.

  6. Save the configuration.env file.

  7. Modify the sas-risk-cirrus-rcm-secret.env file (in the $deploy/site-config/sas-risk-cirrus-rcm directory). If you want to override the default value of the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, uncomment the variable’s line in your sas-risk-cirrus-rcm-secret.env file and replace the placeholder with the desired value.

    The following is an example of a secret.env file that you could use for SAS Regulatory Capital Management. This example will use the default value provided by SAS.

    # SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET={{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}

    For the variable SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET and its default value, see SAS Regulatory Capital Management Configuration Parameters and Secrets.

  8. Save the sas-risk-cirrus-rcm-secret.env file.

Verify That Configuration Overrides Have Been Applied Successfully

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-rcm-parameters -n <name-of-namespace>

    Verify that the output contains your configured overrides.

  2. Find the name of the secret on the namespace.

    kubectl describe secret sas-risk-cirrus-rcm-secret -n <name-of-namespace>

    Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.

  3. Get the database schema user secret.

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'

    Verify that the output contains your configured overrides.
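
    To inspect a single value, you can decode it with base64. This is a minimal sketch; the key name assumes that the secret was generated from the SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET entry in sas-risk-cirrus-rcm-secret.env:

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d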

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software.

Configure Default Settings for SAS Risk Cirrus Builder Microservice

Overview

SAS Risk Cirrus Builder Microservice is the Go service that backs the Solution Builder. The service manages the lifecycle of solutions and their customizations.

By default, SAS Risk Cirrus Builder Microservice is deployed with some default settings. These settings can be overridden via the sas_risk_cirrus_builder_transform.yaml file. There is a template (in $deploy/sas-bases/examples/sas-risk-cirrus-builder/resources) that should be used as a starting point.

There is no requirement to configure this transform. Currently, all fields in the transform are optional (the default values documented here are used if no value is supplied).

Note: For more information about the SAS Risk Cirrus Builder Microservice, see Introduction to SAS Risk Cirrus in the SAS Risk Cirrus: Administrator’s Guide.

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-builder/resources to the $deploy/site-config/sas-risk-cirrus-builder/resources directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, verify that the overlay default settings have been correctly applied (see Verify Overlay Default Settings). No further actions are required, unless you want to change the default settings to different values.

  2. Modify the sas_risk_cirrus_builder_transform.yaml file (located in the $deploy/site-config/sas-risk-cirrus-builder/resources directory) to specify your settings as follows:

  3. For RISK_CIRRUS_UI_SAVE_ENABLED, replace {{ ENABLE-ARTIFACTS-SAVE }} with the desired value. Use ‘true’ to enable saving the UI artifacts in the solution builder UI. Use ‘false’ to disable saving the UI artifacts. Note: In ‘production’ or ‘test’ systems, this should be set to ‘false’ so that the UI artifacts cannot be accidentally updated in the configured GIT repository. If not configured, the default is ‘true’.

  4. For DEFAULT_EMAIL_ADDRESS, replace {{ EMAIL-ADDRESS }} with the email address to use for connecting to git if the logged in user does not have an email address defined. If not configured, the system will default to ‘{logged in userid}@email.address.com’.
  5. For SAS_LOG_LEVEL_RISKCIRRUSBUILDER, replace {{ INFO-OR-DEBUG }} with the logging level desired. If not configured, the default is ‘INFO’.
  6. For SAS_LOG_LEVEL_RISKCIRRUSCOMMONS, replace {{ INFO-OR-DEBUG }} with the logging level desired. If not configured, the default is ‘INFO’.
  7. For SAS_LOG_LEVEL, replace {{ INFO-OR-DEBUG }} with the logging level desired. If not configured, the default is ‘INFO’. Note: Setting this to DEBUG will increase logging for all the other SAS microservices that SAS Risk Cirrus Builder communicates with, thereby increasing the size of the log.

  8. In the base kustomization.yaml file in the $deploy directory, add site-config/sas-risk-cirrus-builder/resources/sas_risk_cirrus_builder_transform.yaml to the transformers block. Here is an example:

    transformers:
      - site-config/sas-risk-cirrus-builder/resources/sas_risk_cirrus_builder_transform.yaml
  9. Complete the deployment steps to apply the new settings. See Deploy the Software in the SAS Viya Platform: Deployment Guide.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Verify Overlay Default Settings

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl -n <name-of-namespace> get configmap | grep sas-risk-cirrus-builder

    The above will return the ConfigMap defined for sas-risk-cirrus-builder. Here is an example:

    sas-risk-cirrus-builder-parameters-<id>                      9      6d19h
  2. Execute the following:

    kubectl describe configmap sas-risk-cirrus-builder-parameters-<id> -n <name-of-namespace>
  3. Verify that the output contains the settings that you configured.

    Name:         sas-risk-cirrus-builder-parameters-<id>
    Namespace:    <name-of-namespace>
    Labels:       sas.com/admin=cluster-local
                   sas.com/deployment=sas-viya
    Annotations:  <none>
    
    Data
    ====
    SAS_LOG_LEVEL_RISKCIRRUSBUILDER:
    ----
    INFO
    SAS_LOG_LEVEL_RISKCIRRUSCOMMONS:
    ----
    INFO
    RISK_CIRRUS_UI_SAVE_ENABLED:
    ----
    true
    DEFAULT_EMAIL_ADDRESS:
    ----
    <email-address>

Preparing and Configuring Risk Cirrus Core for Deployment

Overview of the Pre-Deployment Process

Before you can deploy a SAS Risk Cirrus solution, it is important to understand that your solution content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer (Risk Cirrus Core) that is used by all SAS Risk Cirrus solutions. Therefore, in order to fully deploy your solution, you must deploy, at minimum, the Risk Cirrus Core content in addition to your solution.

In order to deploy Risk Cirrus Core, you must first complete the following pre-deployment tasks:

  1. Review the Risk Cirrus Objects README file.

  2. (For deployments that use external PostgreSQL databases) Deploy and stage an external PostgreSQL database.

  3. Deploy an additional PostgreSQL cluster for the SAS Common Data Store.

  4. Specify a Persistent Volume Claim for Risk Cirrus Core by updating the SAS Viya platform customization file (kustomization.yaml).

  5. Modify the Configuration for Risk Cirrus Core.

  6. Review any solution README files for additional deployment-related tasks.

  7. Complete the deployment process.

  8. Verify your access control settings.

  9. Verify that the configuration overrides have been applied successfully.

Review the Risk Cirrus Objects README File

Before you deploy Risk Cirrus Core, ensure that you review the Risk Cirrus Objects README file. This file contains important pre-deployment instructions that you must follow to make changes to the sas_risk_cirrus_objects_transform.yaml file, as part of the overall SAS Viya platform deployment. See the Risk Cirrus Objects README file located at $deploy/sas-bases/examples/sas-risk-cirrus-objects/resources/README.md (for Markdown-formatted instructions) or $deploy/sas-bases/docs/configure_environment_id_settings_for_sas_risk_cirrus_builder_microservice.htm (for HTML-formatted instructions).

Deploy and Stage an External Database

IMPORTANT: This task is required only if you are deploying an external PostgreSQL database instance for a solution that supports its use.

If your solution supports the use of an external PostgreSQL database instance, ensure that you have completed the following pre-deployment tasks:

The process for configuring the LTREE extension and setting the database locale varies depending on the cloud provider and operating system.

For specific instructions on performing these tasks, consult your cloud provider documentation.

Deploy an Additional PostgreSQL Cluster for the SAS Common Data Store

The Risk Data Service requires the deployment of an additional PostgreSQL cluster called SAS Common Data Store (also called CDS PostgreSQL). This cluster is configured separately from the required platform PostgreSQL cluster that supports the SAS Infrastructure Data Server.

Note: Your SAS Common Data Store must match the state (external or internal) of the SAS Infrastructure Data Server. So if the SAS Infrastructure Data Server is on an external PostgreSQL instance, an external PostgreSQL instance must also be used for the SAS Common Data Store cluster (and vice versa).

For more information about configuring the SAS Common Data Store cluster, see the README file located at $deploy/sas-bases/examples/postgres/README.md (for Markdown-formatted instructions) or $deploy/sas-bases/docs/configure_postgresql.htm (for HTML-formatted instructions).

Specify a Persistent Volume Claim for Risk Cirrus Core

The best option for storing any code that is needed for SAS programming run-time environment sessions is a Network File Sharing (NFS) server that all programming run-time Kubernetes pods can access. In order for SAS Risk Cirrus solutions to operate properly, you must specify a Persistent Volume Claim (PVC) for Risk Cirrus Core in the SAS Viya platform. This is done by adding sas-risk-cirrus-core to the comma-separated set of PVCs in the annotationSelector section of configuration code in your top-level kustomization.yaml file.

The following is a sample excerpt from that file with sas-risk-cirrus-core added to the comma-separated list of PVCs.

patches:
- path: site-config/storageclass.yaml
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,
    sas-commonfiles,sas-cas-operator,sas-pyconfig,sas-risk-cirrus-core)
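
After the deployment is applied, a quick check that the Risk Cirrus Core claim picked up the expected storage class might look like the following; it assumes that the PVC name contains sas-risk-cirrus-core:

kubectl -n <name-of-namespace> get pvc | grep sas-risk-cirrus-core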

For additional information about this process, see Specify PersistentVolumeClaims to Use ReadWriteMany StorageClass.

Modify the Configuration for Risk Cirrus Core

Overview of Configuration Parameters for Risk Cirrus Core

Risk Cirrus Core provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and configuring your kustomization.yaml file to apply these overrides.

For a list of variables that can be overridden and their default values, see Risk Cirrus Core Configuration Parameters.

For the steps needed to override the default values with your own values, see Apply your own overrides to the configuration parameters.

Risk Cirrus Core Configuration Parameters

The following table contains a list of parameters that can be specified in the Risk Cirrus Core .env configuration file. These parameters can all be found in the template configuration file (configuration.env) but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.

Parameter Name Description
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG".
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS Specifies whether you want to skip specific steps during the deployment of SAS Risk Cirrus Core.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped.
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS Specifies whether you want to run specific steps during the deployment of SAS Risk Cirrus Core.
Note: Typically, you should set this value blank: "". The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed.
SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG Specifies whether the value of the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable is used to set SAS Workflow Manager default service account. If the value is "N", the deployment process does not set the workflow default service account. The value: "N" is used if the variable is not overridden by your .env file. This means the deployment will not set a default service account for SAS Workflow Manager. You can still set a default service account after deployment via SAS Environment Manager.
SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT The user account to be configured in the SAS Workflow Manager in order to use workflow service tasks (if SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG is set to "Y"). Using the SAS administrator user account for this purpose is not advised because it might allow file access rights that are not secure enough for the workflow client account.
IMPORTANT: Make sure to review the information about configuring the workflow client default service account in the section “Configuring the Workflow Client” in the SAS Workflow Manager: Administrator’s Guide. It contains important information to secure a successful deployment. The value: "" is used if the variable is not overridden by your .env file.

Apply Overrides to the Configuration Parameters

If you want to override any of the Risk Cirrus Core configuration parameters rather than using the default values, complete these steps:

  1. If you have a $deploy/site-config/sas-risk-cirrus-core directory, delete it and its contents.
    Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section:

    - site-config/sas-risk-cirrus-core/resources/core_transform.yaml

    This step should only be necessary if you are upgrading from a cadence prior to 2025.02.

  2. Copy the configuration.env from $deploy/sas-bases/examples/sas-risk-cirrus-rcc to the $deploy/site-config/sas-risk-cirrus-rcc directory. Create the destination directory if one does not exist. If the directory already exists and already has the expected .env file, verify that the overrides have been correctly applied. No further actions are required, unless you want to apply different overrides.

  3. In the base kustomization.yaml file, add the sas-risk-cirrus-core-parameters ConfigMap to the configMapGenerator block. If that block does not exist, create it. Here is an example of what the inserted code block should look like in the kustomization.yaml file:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-core-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-rcc/configuration.env
    ...
  4. Save the kustomization.yaml file.

  5. Modify the configuration.env file in the $deploy/site-config/sas-risk-cirrus-rcc directory. If there are any parameters for which you want to override the default value, uncomment that variable’s line in your configuration.env file and replace the placeholder with the desired value.

    The following is an example of a configuration.env file that you could use for Risk Cirrus Core. This example will use all of the default values provided by SAS except for the two workflow-related variables. In this case, it will set a default service account in SAS Workflow to the user workflowacct during deployment.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-or-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
    SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
    SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT=workflowacct

    For a list of variables that can be overridden and their default values, see Risk Cirrus Core Configuration Parameters.

  6. Save the configuration.env file.

Review Solution README Files for Additional Tasks

After you have completed your pre-deployment configurations for Risk Cirrus Core, ensure that you review the solution README files for any Cirrus applications that you are deploying. These files contain additional pre-deployment instructions that you must follow to make changes to the kustomization.yaml file as well as to solution-specific configuration files, as part of the overall SAS Viya platform deployment. You can also refer to the solution-specific administrative documentation for further details as needed.

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software.

Verify Your Access Control Settings

When deploying Risk Cirrus Core, you can determine whether to enable Linux Access Control Lists (ACL) to set permissions on Analysis Run directories. By default, when Risk Cirrus Core is deployed, the ‘requireACL’ flag in SAS Environment Manager is set to OFF. If you are upgrading from an existing deployment and had previously set ‘requireACL=ON’, that setting will remain. When ‘requireACL=ON’, users might encounter issues when executing an analysis run, depending upon the setup of their analysis run folders and security permissions. If you do not require ACL security, turn it off to avoid these issues.

To turn ACL security off, perform the following steps:

  1. Log into SAS Environment Manager.

  2. Click on the Configuration menu item.

  3. In the search bar, enter “risk cirrus”.

  4. Select the Risk Cirrus Core service.

  5. In the Configuration pane on the right, update the requireACL field to OFF.

  6. Save your changes.

Using ‘requireACL=ON’ enables restricted sharing mode. This mode guarantees that only the user/owner (including group) running the analysis run has write permissions to the analysis run directory in the PVC. Using ‘requireACL=OFF’ enables unrestricted sharing mode. This mode allows any user/owner (including group and others) running the analysis run to have write permissions to the analysis run directory in the PVC. For more information about configuration settings in SAS Environment Manager, see Configuration Page

Verify That the Configuration Overrides Have Been Applied Successfully

Note: If you configured overrides during a past deployment, your overrides should be available in the SAS Risk Cirrus Core ConfigMap. To verify that your overrides were applied successfully to the ConfigMap, run the following command:

kubectl describe configmap sas-risk-cirrus-core-parameters -n <name-of-namespace>

Verify that the output contains your configured overrides.

Configure the SAS Risk Cirrus KRM Service

Overview

The SAS Risk Cirrus KRM service provides a REST API for starting and managing KRM runs associated with Risk Cirrus analysis runs and comes with default settings that may be changed. The template in $deploy/sas-bases/examples/sas-risk-cirrus-krm/resources should be used as a starting point.

Some portions of the SAS Risk Cirrus KRM service use cluster-level privileges for reading Node and Pod information to do their work. Those privileges are provided by adding an overlay to the service.

IMPORTANT: It is strongly recommended that SAS Risk Cirrus KRM be deployed with the cluster-level read privileges. For details, see the README located at $deploy/sas-bases/overlays/sas-risk-cirrus-krm/cluster-role-binding/README.md.

Configure Connection Settings for SAS Risk Cirrus KRM Service

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-krm/resources to the $deploy/site-config/sas-risk-cirrus-krm/resources directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, verify that the overlay connection settings have been correctly applied. No further actions are required, unless you want to change the connection settings to different values.

  2. Modify the krm_transform.yaml file (located in the $deploy/site-config/sas-risk-cirrus-krm/resources directory) to specify your settings as follows:

    a. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-OR-N }} with Y or N to specify whether you want to include sample artifacts. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, sample artifacts will not be included.

    WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.

    b. For KRM_RUN_PROGRESS_TTL, replace {{ RUN-PROGRESS-REPORTED-FOR-RUNS-LESS-THAN-THIS-OLD-SECONDS }} with the number of seconds for which you want run progress to be reported (for example, in ALM’s Calculation Monitor page). By default, this value is set to about a week; runs older than that are not reported.

    c. For MIN_KRM_POD_COUNT, replace {{ MINIMUM-KRMD-POD-COUNT }} with the minimum number of KRMD pods you wish to keep alive. Keeping pods alive provides quicker runs because they do not incur pod-startup time. By default, this value is set to 1.

    d. For MAX_KRM_POD_COUNT, replace {{ MAXIMUM-KRMD-POD-COUNT }} with the maximum number of KRMD pods you wish to have alive at once. Having more pods alive at once consumes resources that could otherwise be used by other pods, including other KRMD pods running large runs. By default, this value is set to 3.

    e. For IDLE_KRM_POD_TTL, replace {{ SHUTS-DOWN-KRMD-PODS-IDLE-FOR-THESE-SECONDS }} with the number of seconds that a KRMD pod can remain idle before it is considered for shutdown. Shutting down idle pods saves CPU usage; keeping idle pods alive allows faster execution because a new pod does not have to be created. By default, this value is set to 120 seconds.

    f. For NULLSNOTDISTINCT, replace “NULLS NOT DISTINCT” with “” when the CDS PostgreSQL server version is less than 15. You can use the following SQL statement to check the version of your PostgreSQL server: select version();
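
    For example, a minimal version check using psql; the host, port, user, and database values are placeholders for your CDS PostgreSQL connection details:

    psql -h <cds-postgres-host> -p <port> -U <user> -d <database> -c "select version();"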

  3. In the base kustomization.yaml file ($deploy/kustomization.yaml), add site-config/sas-risk-cirrus-krm/resources/krm_transform.yaml to the transformers block. Here is an example:

    transformers:
    - site-config/sas-risk-cirrus-krm/resources/krm_transform.yaml
  4. When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.

Verify Overlay Connection Settings Applied Successfully

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-krm-config -n <name-of-namespace>
  2. Verify that the output contains the desired connection settings that you configured.

Cluster Privileges for SAS Risk Cirrus KRM

Overview

Some portions of the SAS Risk Cirrus KRM service use cluster-level privileges for reading Node and Pod information to do their work. If these privileges are not provided by adding the overlay described below, which adds a ClusterRoleBinding and ClusterRole object to the deployment, some features of the service will not be enabled. Not deploying the overlay can affect the features and functionality of downstream products that require the use of this service, such as SAS Asset and Liability Management.

Instructions

Enable SAS Risk Cirrus KRM’s cluster-level privileges for the namespace

The cluster-level privileges are enabled by adding the cluster-role-binding directory to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

resources:
...
- sas-bases/overlays/sas-risk-cirrus-krm/cluster-role-binding
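
After the manifest is built and applied, you can confirm that the binding exists; the name format matches the one used in the removal instructions below:

kubectl get clusterrolebinding sas-risk-cirrus-krm-<your namespace>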

Disable SAS Risk Cirrus KRM’s cluster-level privileges for the namespace

To disable cluster-level privileges:

  1. Remove sas-bases/overlays/sas-risk-cirrus-krm/cluster-role-binding from the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). This also ensures that this overlay will not be applied in future Kustomize builds.

  2. Perform the following command to remove the ClusterRoleBinding from the namespace:

    kubectl delete clusterrolebinding sas-risk-cirrus-krm-<your namespace>

Build

After you configure Kustomize, continue your SAS Viya platform deployment as documented.

Configure Environment ID Settings for SAS Risk Cirrus Objects Microservice

Overview

The SAS Risk Cirrus Objects Microservice stores information related to business object definitions and business object instances such as analysis data, analysis runs, models, or model reviews. Cirrus Objects also stores and retrieves items related to business objects, such as attachments. These business objects are associated with the Risk Cirrus Platform that underlies most Risk offerings.

SAS Risk Cirrus Objects is deployed with some default settings. These settings can be overridden by using the $deploy/sas-bases/examples/sas-risk-cirrus-objects/resources/sas_risk_cirrus_objects_transform.yaml file as a starting point.

There is no requirement to configure this transform. Currently, all fields in the transform are optional (with the default value documented here used if no value is supplied).

Note: For more information about the SAS Risk Cirrus Objects Microservice, see Administrator’s Guide: Cirrus Objects.

Installation

  1. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-objects/resources to the $deploy/site-config/sas-risk-cirrus-objects/resources directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, verify that the overlay default settings have been correctly applied. No further actions are required, unless you want to change the default settings to different values.

  2. Modify the new copy of sas_risk_cirrus_objects_transform.yaml to specify your settings as follows:

    • For JAVA_OPTION_ENVIRONMENT_ID, replace {{ MY-ENVIRONMENT-ID }} with the identifier you have chosen for this particular environment. If not configured, the system will default to no environment identifier.

  3. In the base kustomization.yaml file in the $deploy directory, add site-config/sas-risk-cirrus-objects/resources/sas_risk_cirrus_objects_transform.yaml to the transformers block. Here is an example:

    transformers:
    - site-config/sas-risk-cirrus-objects/resources/sas_risk_cirrus_objects_transform.yaml
  4. Complete the deployment steps to apply the new settings. See Deployment Tasks: Deploy SAS Risk Cirrus.

Note: This overlay can be applied during or after the initial deployment of the SAS Viya platform.

Verify Overlay Default Settings

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl -n <name-of-namespace> get configmap | grep sas-risk-cirrus-objects

    The command returns the ConfigMaps defined for sas-risk-cirrus-objects. Here is an example:

    sas-risk-cirrus-objects-parameters-<id>                  9      6d19h
    sas-risk-cirrus-objects-config-<id>                      9      6d19h
  2. Execute the following:

    kubectl describe configmap sas-risk-cirrus-objects-config-<id> -n <name-of-namespace>
  3. Verify that the output contains the settings that you configured.

    Name:         sas-risk-cirrus-objects-config-g5dg72m87g
    Namespace:    d89282
    Labels:       sas.com/admin=cluster-local
                  sas.com/deployment=sas-viya
    Annotations:  <none>

    Data
    ====
    JAVA_OPTION_CIRRUS_ENVIRONMENT_ID:
    ----
    -Dcirrus.environment.id=MY_DEV_123
    JAVA_OPTION_XMX:
    ----
    -Xmx512m
    JAVA_OPTION_XPREFETCH:
    ----
    -Dsas.event.consumer.prefetchCount=8
    SEARCH_ENABLED:
    ----
    true
    JAVA_OPTION_JAVA_LOCALE_USEOLDISOCODES:
    ----
    -Djava.locale.useOldISOCodes=true
    JAVA_OPTION_XSS:
    ----
    -Xss1048576

    BinaryData
    ====

    Events:  <none>

    Tip: Use filtering to focus on a specific setting:

    kubectl describe configmap sas-risk-cirrus-objects-config-<id> -n <name-of-namespace> | grep environment

    Result:

    JAVA_OPTION_CIRRUS_ENVIRONMENT_ID:
      -Dcirrus.environment.id=MY_DEV_123

Integrate SAS Risk Engine with Python

Overview

SAS Risk Engine integrates with Python running in a CAS session. This Python interface enables you to write your evaluation methods in Python instead of using the SAS Function Compiler.

The Python interface has additional benefits:

This README describes the deployment updates that are required for you to enable the Python interface.

Prerequisites

Review the following documents before proceeding to the Steps To Configure section:

Steps To Configure

The SAS Viya platform provides the configuration YAML files (ending with a .yaml extension) that the Kustomize tool uses to configure the various software components. Before you modify any of these configuration files, you must perform the following tasks to collect information:

The following sections focus on a specific configurable software component. Each section discusses specific steps to create or modify the configuration files.

Configure the SAS Configurator for Open Source

To configure the sas-pyconfig component, complete the following instructions to copy and modify the change-configuration.yaml and change-limits.yaml files.

  1. If the $deploy/site-config/sas-pyconfig/ directory does not already exist, create it. If the $deploy/site-config/sas-pyconfig/change-configuration.yaml file does not already exist, create it by copying the file from the $deploy/sas-bases/examples/sas-pyconfig/ directory.

  2. In the copied change-configuration.yaml file, update the /data/global.enabled and /data/global.python_enabled entries to enable the Python interpreter by replacing “false” with “true”:

    ...
    - op: replace
      path: /data/global.enabled
      value: "true"
    - op: replace
      path: /data/global.python_enabled
      value: "true"
    ...
  3. The set of packages for the Python interpreter is already initialized. If additional packages are needed, add them by package name to the /data/default_py.pip_install_packages entry. For example, to add the “tf-quant-finance”, “quantlib”, and “numba” packages:

    ...
    - op: replace
      path: /data/default_py.pip_install_packages
      value: "Prophet sas_kernel matplotlib sasoptpy sas-esppy NeuralProphet scipy Flask XGBoost TensorFlow pybase64 scikit-learn statsmodels sympy mlxtend
    Skl2onnx nbeats-pytorch ESRNN onnxruntime opencv-python zipfile38 json2 pyenchant nltk spacy gensim pyarrow hnswlib sas-ipc-queue great-expectations==0.16.8
    tf-quant-finance quantlib numba"
    ...
  4. Ensure that the site-config/sas-pyconfig/change-configuration.yaml entry is in the transformers block of the base $deploy/kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-pyconfig/change-configuration.yaml
  5. If the $deploy/site-config/sas-pyconfig/change-limits.yaml file does not already exist, create it by copying the file from the $deploy/sas-bases/examples/sas-pyconfig/ directory.

  6. SAS Risk Engine does not require any modifications to the change-limits.yaml file. Before making any changes to the limit adjustments for CPU and memory, refer to the Resource Management section in the README at $deploy/sas-bases/examples/sas-pyconfig/README.md (for Markdown format) or at $deploy/sas-bases/docs/sas_configurator_for_open_source_options.htm (for HTML format).

  7. Regardless of whether any changes were made for step 6, ensure that the site-config/sas-pyconfig/change-limits.yaml entry is included in the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-pyconfig/change-limits.yaml

Configure SAS Open Source Configuration for Python

To configure the sas-open-source-config/python component, complete the following instructions to copy and modify the kustomization.yaml and python-transformer.yaml files.

  1. If the $deploy/site-config/sas-open-source-config/python/ directory does not already exist, create it. If the $deploy/site-config/sas-open-source-config/python/kustomization.yaml file does not already exist, create it by copying the file from $deploy/sas-bases/examples/sas-open-source-config/python/ directory.

  2. Add the following entry in the $deploy/site-config/sas-open-source-config/python/kustomization.yaml file.

    - RISK_PYUSERPATH=/repyeval/usercode/
    
  3. Replace the following placeholders with the appropriate values: {{ PYTHON-EXE-DIR }}, {{ PYTHON-EXECUTABLE }}, and {{ SAS-EXTLANG-SETTINGS-XML-FILE }}. Here is an example:

    - PROC_PYPATH=/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3
    - PROC_M2PATH=/opt/sas/viya/home/SASFoundation/misc/tk
    - SAS_EXTLANG_SETTINGS=/repyeval/extlang.xml
    - RISK_PYUSERPATH=/repyeval/usercode/
  4. If the SAS Micro Analytic Service is not required for your environment, comment out the MAS_PYPATH and MAS_M2PATH entries.

  5. If the Open Source Code node in SAS Visual Data Mining and Machine Learning is not required for your environment, comment out the DM_PYPATH entry.

  6. If the SAS Micro Analytic Service is not required for your environment, comment out the following entry.

    - name: sas-open-source-config-python-mas
      literals:
      - MAS_PYPORT= 31100
  7. If the site-config/sas-open-source-config/python entry is not already in the resources block of the base kustomization.yaml file, add it. Here is an example:

    resources:
    ...
    - site-config/sas-open-source-config/python
    ...
  8. If the $deploy/site-config/sas-open-source-config/python/python-transformer.yaml file does not already exist, create it by copying the file from the $deploy/sas-bases/examples/sas-open-source-config/python/ directory.

  9. Edit the following sections in the copied $deploy/site-config/sas-open-source-config/python/python-transformer.yaml file. There are three sections to be edited.

    ...
    ---
    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: cas-python-transformer
    patch: |-
      # Add python volume
      - op: add
        path: /spec/controllerTemplate/spec/volumes/-
        value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
    
      # Add mount path for python
      - op: add
        path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
        value:
          name: python-volume
          mountPath: /python
          readOnly: true
    
      # Add python-config configMap
      - op: add
        path: /spec/controllerTemplate/spec/containers/0/envFrom/-
        value:
          configMapRef:
            name: sas-open-source-config-python
    
    target:
      group: viya.sas.com
      kind: CASDeployment
      name: .*
      version: v1alpha1
    ---
    ...
    ...
    ---
    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: launcher-job-python-transformer
    patch: |-
      # Add python volume
      - op: add
        path: /template/spec/volumes/-
        value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
    
      # Add mount path for python
      - op: add
        path: /template/spec/containers/0/volumeMounts/-
        value:
          name: python-volume
          mountPath: /python
          readOnly: true
    
      # Add python-config configMap
      - op: add
        path: /template/spec/containers/0/envFrom/-
        value:
          configMapRef:
            name: sas-open-source-config-python
    
    target:
      kind: PodTemplate
      name: sas-launcher-job-config
      version: v1
    ---
    ...
    ...
    ---
    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: compute-job-python-transformer
    patch: |-
      # Add python volume
      - op: add
        path: /template/spec/volumes/-
        value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
    
      # Add mount path for python
      - op: add
        path: /template/spec/containers/0/volumeMounts/-
        value:
          name: python-volume
          mountPath: /python
          readOnly: true
    
      # Add python-config configMap
      - op: add
        path: /template/spec/containers/0/envFrom/-
        value:
          configMapRef:
            name: sas-open-source-config-python
    
    target:
      kind: PodTemplate
      name: sas-compute-job-config
      version: v1
    ---
    ...
  10. In each section that you edited, replace the lines for the python volume and mount path with the specific attributes and values. For example, replace

    # Add python volume
    - op: add
      path: /template/spec/volumes/-
      value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
    
    # Add mount path for python
    - op: add
      path: /template/spec/containers/0/volumeMounts/-
      value:
        name: python-volume
        mountPath: /python
        readOnly: true
    with:
    
    # Add python volume
    - op: add
      path: /spec/controllerTemplate/spec/volumes/-
      value:
        name: sas-pyconfig
        persistentVolumeClaim:
          claimName: sas-pyconfig
    
    # Add mount path for python
    - op: add
      path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
      value:
        name: sas-pyconfig
        mountPath: /opt/sas/viya/home/sas-pyconfig
        readOnly: true
  11. If the SAS Micro Analytic Service is not required for your environment, comment out the following entry.

    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: mas-python-transformer
    patch: |-
      # Add side car Container
    ...
    target:
      group: apps
      kind: Deployment
      name: sas-microanalytic-score
      version: v1
    ---
  12. If the Open Source Code node in SAS Visual Data Mining and Machine Learning is not required for your environment, comment out the following entry.

    ...
    apiVersion: builtin
    kind: PatchTransformer
    metadata:
      name: add-python-sas-java-policy-allow-list
    patch: |-
      - op: add
        path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH
        value: /python/{{ PYTHON-EXE-DIR }}/{{ PYTHON-EXECUTABLE }}
    target:
      kind: ConfigMap
      name: sas-programming-environment-java-policy-config
  13. Ensure that the site-config/sas-open-source-config/python/python-transformer.yaml entry is in the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-open-source-config/python/python-transformer.yaml
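
    After you deploy, one quick check (a sketch; adjust the namespace for your site) is to confirm that the generated Python configuration ConfigMap exists, and then describe it to verify values you set, such as PROC_PYPATH and SAS_EXTLANG_SETTINGS:

    kubectl -n <name-of-namespace> get configmap | grep sas-open-source-config-python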

Compute Server Configuration

To configure the sas-compute-server/configure component, complete the following instructions to copy and modify the compute-server-add-nfs-mount.yaml file.

Configure the NFS Mount for User Python Code

  1. If the $deploy/site-config/sas-compute-server/configure folder does not already exist, create it. If the $deploy/site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml file does not exist, create it by copying it from the $deploy/sas-bases/examples/sas-compute-server/configure/ directory.

  2. Edit the following entry in the $deploy/site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml file.

    ...
    - op: add
      path: /template/spec/volumes/-
      value:
        name: {{ MOUNT-NAME }}
        nfs:
          path: {{ PATH-TO-BE-MOUNTED }}
          server: {{ HOST }}
    - op: add
      path: /template/spec/containers/0/volumeMounts/-
      value:
        name: {{ MOUNT-NAME }}
        mountPath: {{ PATH-TO-BE-MOUNTED }}
    ...
  3. Replace the following placeholders with the appropriate values: {{ MOUNT-NAME }}, {{ HOST }}, and {{ PATH-TO-BE-MOUNTED }}. Here is an example:

    ...
    - op: add
      path: /template/spec/volumes/-
      value:
        name: repyeval-volume
        nfs:
          path: /export/repyeval
          server: 192.168.2.4
    - op: add
      path: /template/spec/containers/0/volumeMounts/-
      value:
        name: repyeval-volume
        mountPath: /repyeval
    ...
  4. Ensure that the site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml entry is in the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml
    ...

Enable Lockdown Access Methods

To configure the sas-programming-environment/lockdown component, complete the following steps to copy and modify the enable-lockdown-access-methods.yaml file.

  1. If the $deploy/site-config/sas-programming-environment/lockdown/ directory does not already exist, create it. If the $deploy/site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml file does not already exist, create it by copying it from $deploy/sas-bases/examples/sas-programming-environment/lockdown/ directory.

  2. Replace the placeholder {{ ACCESS-METHOD-LIST }} with the appropriate values. Here is an example:

    ...
    - op: add
      path: /data/VIYA_LOCKDOWN_USER_METHODS
     value: "PYTHON PYTHON_EMBED SOCKET"
    ...
  3. Alternatively, in the case where the $deploy/site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml file pre-exists, modify the VIYA_LOCKDOWN_USER_METHODS entry to include “PYTHON PYTHON_EMBED SOCKET”.

  4. Ensure that the site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml entry is in the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
    ...

External Languages Access Control Configuration

  1. If the extlang.xml configuration file that is specified in the SAS_EXTLANG_SETTINGS entry in the $deploy/site-config/sas-open-source-config/python/kustomization.yaml file already exists, add the RISK_PYUSERPATH environment variable setting to the PYTHON3 language block in the file. If the extlang.xml file does not have a ‘PYTHON3’ language block, add it to the existing extlang.xml file. Here is an example:

    ...
          <LANGUAGE name="PYTHON3" interpreter="/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3">
             <ENVIRONMENT name="RISK_PYUSERPATH" value="/repyeval/usercode/" />
          </LANGUAGE>
    ...
  2. If the extlang.xml file does not exist, create it in an editor session. Save it to the NFS share location. Here is an example:

    <EXTLANG version="1.0" mode="ALLOW" allowAllUsers="ALLOW">
       <DEFAULT scratchDisk="/tmp" diskAllowlist="/tmp:/repyeval/usercode/">
          <LANGUAGE name="PYTHON3"
                    interpreter="/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3">
             <ENVIRONMENT name="RISK_PYUSERPATH" value="/repyeval/usercode/"/>
          </LANGUAGE>
       </DEFAULT>
    </EXTLANG>
  3. Ensure that your users have only Read permission for the extlang.xml configuration file.
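
    For example, on the NFS host, using the illustrative /export/repyeval export path from the earlier examples, the following keeps the file writable by its owner but read-only for everyone else:

    chmod 644 /export/repyeval/extlang.xml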

CAS Configuration

If the $deploy/site-config/cas/configure directory does not exist, create it.

Configure CAS Enable Host

By default, CAS cannot launch sessions under a user’s host identity. All sessions run under the cas service account instead. CAS can be configured to allow for host identity launches by including a patch transformer in the kustomization.yaml file. The $deploy/sas-bases/examples/cas/configure directory contains a cas-enable-host.yaml file, which can be used for this purpose.

  1. If the $deploy/site-config/cas/configure/cas-enable-host.yaml does not exist, create it by copying it from $deploy/sas-bases/examples/cas/configure/ directory.

  2. SAS Risk Engine does not require any modifications to the cas-enable-host.yaml file. If you have modified it or intend to modify it for other reasons, those changes will not affect SAS Risk Engine.

  3. The example file defaults to targeting all CAS servers by specifying a name component of .*. To target specific CAS servers, comment out the name: .* line and choose which CAS servers you want to target. Either uncomment the name: and replace NAME-OF-SERVER with one particular CAS server or uncomment the labelSelector line to target only the default deployment.

  4. Ensure that the site-config/cas/configure/cas-enable-host.yaml entry is in the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    - site-config/cas/configure/cas-enable-host.yaml
    ...

Configure the NFS Mount for User Python Code for CAS

  1. If the $deploy/site-config/cas/configure/cas-add-nfs-mount.yaml file does not exist, create it by copying it from $deploy/sas-bases/examples/cas/configure/ directory.

  2. Edit the following entry in the existing $deploy/site-config/cas/configure/cas-add-nfs-mount.yaml file:

    ...
    - op: add
      path: /spec/controllerTemplate/spec/volumes/-
      value:
        name: {{ MOUNT-NAME }}
        nfs:
          path: {{ NFS-PATH-TO-BE-MOUNTED }}
          server: {{ HOST }}
    - op: add
      path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
      value:
        name: {{ MOUNT-NAME }}
        mountPath: {{ CONTAINER-MOUNT-PATH }}
    ...
  3. Replace the following placeholders with the appropriate values: {{ MOUNT-NAME }}, {{ HOST }}, and {{ CONTAINER-MOUNT-PATH }}. Here is an example:

    ...
    - op: add
      path: /spec/controllerTemplate/spec/volumes/-
      value:
        name: repyeval-volume
        nfs:
          path: /export/repyeval
          server: 192.168.2.4
    - op: add
      path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
      value:
        name: repyeval-volume
        mountPath: /repyeval
    ...
  4. Ensure that the site-config/cas/configure/cas-add-nfs-mount.yaml entry is in the transformers block of the base kustomization.yaml file. Here is an example:

    ...
    transformers:
    ...
    - site-config/cas/configure/cas-add-nfs-mount.yaml
    ...

Preparing and Configuring SAS Risk Factor Manager for Deployment

Prerequisites

When SAS Risk Factor Manager is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Risk Factor Manager solution successfully, you must deploy the Cirrus Core content in addition to the solution content. Preparing and configuring Cirrus Core for deployment is described in the Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/resources/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For storage options for your solution, such as internal databases, refer to the Cirrus Core README.

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Risk Cirrus: Administrator’s Guide.

Modify the Configuration Files for SAS Risk Factor Manager

Overview of Configuration for SAS Risk Factor Manager

SAS Risk Factor Manager provides a ConfigMap whose values control various aspects of its deployment process. It includes variables such as the logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env file with your override values and then configuring your kustomization.yaml file to apply those overrides.

For a list of variables that can be overridden and their default values, see SAS Risk Factor Manager Configuration Parameters.

For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters.

SAS Risk Factor Manager Configuration Parameters

The following list describes the parameters that can be specified in the SAS Risk Factor Manager .env configuration file. These parameters can be found in the template configuration file (configuration.env), but they are commented out in that file. Lines that begin with # will not be applied during deployment. If you want to use one of those skipped variables, remove the # at the beginning of the line.

  1. The SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER parameter specifies a logging level for the deployment. The logging level INFO is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify the value DEBUG.

  2. The SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS parameter specifies whether you want to skip specific steps during the deployment of SAS Risk Factor Manager. The value "" is used if the variable is not overridden by your .env file. This means none of the deployment steps will be skipped explicitly.

  3. The SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS parameter specifies an explicit list of steps you want to run during a deployment. The value "" is used if the variable is not overridden by your .env file. This means all of the deployment steps will be run except steps skipped in SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS.

Apply Overrides to the Configuration Parameters

Note: If you configured overrides during a previous deployment, those overrides should already be available in the SAS Risk Factor Manager ConfigMap. You can verify this by following the steps in Verify Overlay Connection Settings Applied Successfully below.

If you want to override any of the SAS Risk Factor Manager configuration properties rather than using the default values, complete these steps:

  1. If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.

    If you have a $deploy/site-config/sas-risk-cirrus-rfm directory, take note of the values in your rfm_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section: - site-config/sas-risk-cirrus-rfm/resources/rfm_transform.yaml.

  2. Create a $deploy/site-config/sas-risk-cirrus-rfm directory if one does not exist. Then copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-rfm to that directory.

    IMPORTANT: If the destination directory already exists, confirm it contains the configuration.env file, not the rfm_transform.yaml file that was used for cadences prior to 2025.02. If the directory already exists, and it has the configuration.env file, then verify that the overlay connection settings have been applied correctly. No further actions are required unless you want to change the connection settings to different values.

  3. In the base kustomization.yaml file, add the sas-risk-cirrus-rfm-parameters ConfigMap to the configMapGenerator block. If that block does not exist, create it. Here is an example:

    configMapGenerator:
      - name: sas-risk-cirrus-rfm-parameters
        behavior: merge
        envs:
          - site-config/sas-risk-cirrus-rfm/configuration.env
  4. Save the kustomization.yaml file.

  5. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-rfm directory). Lines that begin with # will not be applied during deployment. If you want to use one of those skipped variables, remove the # at the beginning of the line. You can read more about each step in SAS Risk Factor Manager Configuration Parameters. If you want to override the default settings provided by SAS, specify your settings as follows:


    a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-OR-DEBUG }} with the logging level desired. The default value is INFO.


    b. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. The default value is an empty string "".


    c. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. The default value is an empty string "".

    WARNING: This list is absolute; the deployment will only run the steps included in this list. This variable should be an empty string if you are deploying this environment for the first time, or if you are upgrading from a previous version. Otherwise you risk a failed or incomplete deployment.

  6. Save your changes to the configuration.env file.

    The following is an example of a configuration.env file you could use for SAS Risk Factor Manager. This example uses the default values provided by SAS:

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
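
    By contrast, the following hypothetical configuration.env overrides only the logging level and leaves the other parameters at their defaults (the value shown is illustrative):

    SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER=DEBUG
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}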

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

Verify Overlay Connection Settings Applied Successfully

Before verifying the settings for the SAS Risk Factor Manager solution, complete step 7 specified in the Cirrus Core README to verify the settings for Cirrus Core.

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-rfm-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired connection settings that you configured.
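
    Tip: Use filtering to focus on a specific setting. For example, to check the logging level override described earlier:

    kubectl describe configmap sas-risk-cirrus-rfm-parameters -n <name-of-namespace> | grep SAS_LOG_LEVEL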

Preparing and Configuring SAS Risk Modeling for Deployment

Prerequisites

When SAS Risk Modeling is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Risk Modeling solution successfully, you must deploy the Cirrus Core content in addition to the solution content. Preparing and configuring Cirrus Core for deployment is described in the Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

The Risk Cirrus Core README also contains information about storage options, such as external databases, for your solution. You must complete the pre-deployment tasks described in the Risk Cirrus Core README before deploying SAS Risk Modeling. Please read that document for important information about the deployment tasks that should be completed prior to deploying SAS Risk Modeling.

IMPORTANT: You must complete the step Modify the Configuration for Risk Cirrus Core. Because SAS Risk Modeling uses workflow service tasks, a user account must be configured for a workflow client. If you know which user account you want to use and want to configure it during installation, set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable to “Y” and specify the user account in the value of the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable.

For more information about deploying Risk Cirrus Core, you can also read Deployment Tasks in the SAS Risk Cirrus: Administrator’s Guide.

Installation

  1. If you have a $deploy/site-config/sas-risk-cirrus-rm/resources directory, delete it and its contents. Remove the reference to this directory from the transformers section of your base kustomization.yaml file ($deploy/kustomization.yaml). This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
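
    For reference, the transformers entry being removed would look similar to the following (the path shown reflects the layout used in cadences prior to 2025.02; confirm the exact entry in your own kustomization.yaml):

    transformers:
    ...
    - site-config/sas-risk-cirrus-rm/resources/rm_transform.yaml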

  2. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-rm to the $deploy/site-config/sas-risk-cirrus-rm directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, confirm it contains the .env file, not the rm_transform.yaml file that was used for cadences prior to 2025.02. If the directory already exists, and it has the .env file, then verify that the overlay connection settings have been applied correctly. No further actions are required unless you want to change the connection settings to different values.

  3. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-rm directory). Lines with a # at the beginning are commented out; their values will not be applied during deployment. To override a default provided by SAS for a given variable, uncomment the line by removing the # at the beginning of the line and modify as explained in the following section. Specify, if needed, your settings as follows:


    a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO).


    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, the default value is N. Currently, SAS Risk Modeling does not include sample artifacts, so do not modify this parameter. In the future, any items marked as sample artifacts will be listed here.


    c. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Currently, SAS Risk Modeling requires all deployment steps to be completed, so leave this variable set to an empty string.


    d. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if you have deleted the prepackaged monitoring plans or KPIs from your environment, then you can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “load_objects” and then delete the sas-risk-cirrus-rm pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS. WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.

    The following is an example of a configuration.env that you could use for SAS Risk Modeling. The uncommented parameters will be added to the solution configuration map.

    SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER=INFO
    SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
  4. In the base kustomization.yaml file in the $deploy directory, add site-config/sas-risk-cirrus-rm/configuration.env to the configMapGenerator block. Here is an example:

     configMapGenerator:
       ...
       - name: sas-risk-cirrus-rm-parameters
         behavior: merge
         envs:
           - site-config/sas-risk-cirrus-rm/configuration.env
       ...

Complete the Deployment Process

When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

Verify Overlay Connection Settings Applied Successfully

Before verifying the settings for the SAS Risk Modeling solution, you should first verify Risk Cirrus Core’s settings. Those instructions can be found in the Risk Cirrus Core README. To verify the settings for SAS Risk Modeling, do the following:

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-rm-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired connection settings that you configured.

Additional Resources

Disabling the SAS Start-Up Sequencer

Overview

The SAS Start-Up Sequencer is configured to start pods in a predetermined, ordered sequence so that pods start efficiently and overall startup time improves. This design ensures that certain components start before others and allows Kubernetes to pull container images in a priority-based sequence. It also provides a degree of resource optimization, in that resources are spent more efficiently during SAS Viya platform start-up, with priority given to starting essential components first.

However, there may be cases where an administrator does not want this optimization. For these cases, we provide the ability to disable this feature by applying a transformer that updates the relevant components in your cluster so that the start-up sequencing does not execute.

Installation

Add sas-bases/overlays/startup/disable-startup-transformer.yaml to the transformers block in your base kustomization.yaml ($deploy/kustomization.yaml) file. Ensure that disable-startup-transformer.yaml is listed after sas-bases/overlays/required/transformers.yaml.

Here is an example:

...
transformers:
...
- sas-bases/overlays/required/transformers.yaml
- sas-bases/overlays/startup/disable-startup-transformer.yaml

To apply the change, perform the appropriate steps at Deploy the Software.

Preparing and Configuring SAS Stress Testing for Deployment

Overview

The SAS Stress Testing solution contains three different offerings: SAS Stress Testing, SAS Climate Stress Testing, and SAS Credit Stress Testing. The SAS Stress Testing offering is the enterprise offering; it includes the Climate Stress Testing and Credit Stress Testing offerings in addition to other features such as financial statement projection. The Climate Stress Testing offering is tailored toward evaluating the impact of climate risk on your business. The Credit Stress Testing offering is tailored toward evaluating the impact of credit risk on your business.

Prerequisites

When SAS Stress Testing is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Stress Testing solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md (Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm (HTML format).

For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.

For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Stress Testing: Administrator’s Guide.

Installation

  1. Complete steps 1-4 described in the Risk Cirrus Core README.

  2. Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env configuration file. Because SAS Stress Testing uses workflow service tasks, a default service account must be configured for the Risk Cirrus Objects workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable to “Y” and assign the user ID to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.
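
    Here is a minimal illustration of the corresponding Risk Cirrus Core configuration.env entries, assuming a hypothetical service account ID of wfadmin (the variable names are those described above; the value is illustrative only):

    SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
    SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT=wfadmin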

  3. If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.

    If you have a $deploy/site-config/sas-risk-cirrus-st/resources directory, take note of the values in your st_transform.yaml file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml file ($deploy/kustomization.yaml) to remove the following line from the transformers section: - site-config/sas-risk-cirrus-st/resources/st_transform.yaml.

  4. Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-st to the $deploy/site-config/sas-risk-cirrus-st directory. Create a destination directory if one does not exist.

    IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env and sas-risk-cirrus-st-secret.env files, not the old st_transform.yaml file from cadences prior to 2025.02. If the directory already exists and already has the expected configuration.env and sas-risk-cirrus-st-secret.env files, verify that the overlay settings have been applied correctly to the ConfigMap and to the Secret. No further actions are required unless you want to change the settings to different values.

  5. Modify the configuration.env file (located in the $deploy/site-config/sas-risk-cirrus-st directory). Lines with a # at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the # at the beginning of the line and replace the placeholder with the desired value as explained in the following section. Specify, if needed, your settings as follows:

    a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO)

    b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES, replace {{ Y-OR-N }} with Y or N to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample steps will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts, listed by product:

    • SAS Stress Testing

      • The create_cas_lib step creates the default STReporting CAS library that is used for reporting in SAS Stress Testing.
      • The create_db_auth_domain step creates an STDBAuth domain for the riskcirrusst schema and assigns default permissions.
      • The create_db_auth_domain_user step creates an STUserDBAuth domain for the riskcirrusst schema and assigns default group permissions.
      • The import_dataloader_files_climate step uploads the Cirrus_Climate_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
      • The import_dataloader_files_credit step uploads the Cirrus_Credit_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
      • The import_dataloader_files_ewst step uploads the Cirrus_EWST_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
      • The import_sample_dataloader_files_common step uploads the Cirrus_ST_sample_data_loader.zip file into the file service under the Products/SAS Stress Testing directory.
      • The import_templates_common step uploads the Business Evolution Template used for import/export of BEP growth projections to the file service under the Products/SAS Stress Testing directory.
      • The import_va_reports_climate step imports SAS-provided Climate reports created in SAS Visual Analytics.
      • The install_riskengine_project_climate step loads the sample Climate project into SAS Risk Engine.
      • The install_riskengine_project_credit step loads the sample Credit project into SAS Risk Engine.
      • The load_ado_linked_objects_ewst step loads the Link Instances between the Business Evolution Plans (BEP) and Analysis Data Objects as well as linking the BEP to the Risk Scenarios in SAS Risk Factor Manager.
      • The load_objects_climate step loads the Cirrus_ST_Climate_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_objects_credit step loads the Cirrus_ST_Credit_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_objects_ewst step loads the Cirrus_ST_EWST_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_sample_objects_common step loads the Cirrus_ST_Sample_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_workflows_climate step loads and activates the ST Climate workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
      • The load_workflows_credit step loads and activates the ST Credit workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
      • The load_workflows_ewst step loads and activates the ST workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
      • The localize_va_reports_climate step imports localized SAS-provided Climate reports created in SAS Visual Analytics.
      • The localize_va_reports_credit step imports localized SAS-provided Credit reports created in SAS Visual Analytics.
      • The manage_cas_lib_acl step sets up permissions for the default STReporting CAS library. Users in the STUsers, STAdministrators and SASAdministrators groups have full access to the tables.
      • The transfer_sampledata_files_climate step stores a copy of all Climate sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
      • The transfer_sampledata_files_common step stores a copy of all common sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, reports and a BEP template.
      • The transfer_sampledata_files_credit step stores a copy of all Credit sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
      • The transfer_sampledata_files_ewst step stores a copy of all sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
      • The update_db_sampledata_scripts_pg_climate step stores a copy of the install_climate_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Climate sample data.
      • The update_db_sampledata_scripts_credit step stores a copy of the install_credit_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Credit sample data.
      • The update_db_sampledata_scripts_pg_ewst step stores a copy of the install_ewst_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the sample data.

      WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:

      • The check_services step checks if the ST dependent services are up and running.
      • The check_solution_existence step checks to see if the ST solution is already running.
      • The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
      • The create_solution_repo step creates the ST repository.
      • The check_solution_running step checks to ensure that the ST solution is running.
      • The import_solution step imports the solution into the ST repository.
      • The load_app_registry step loads the ST solution into the SAS application registry.
      • The load_auth_rules_common step assigns authorization rules for the ST solution.
      • The load_group_memberships_common step assigns members to various ST groups.
      • The load_identities_common step loads the ST identities.
      • The load_main_objects_common step loads the Cirrus_ST_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The setup_code_lib_repo step creates the ST code library directory.
      • The share_objects_with_solution step shares the Risk Cirrus Core code library with the ST solution.
    • SAS Climate Stress Testing

      • The create_cas_lib step creates the default STReporting CAS library that is used for reporting in SAS Stress Testing.
      • The create_db_auth_domain step creates an STDBAuth domain for the riskcirrusst schema and assigns default permissions.
      • The create_db_auth_domain_user step creates an STUserDBAuth domain for the riskcirrusst schema and assigns default group permissions.
      • The import_dataloader_files_climate step uploads the Cirrus_Climate_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
      • The import_sample_dataloader_files_common step uploads the Cirrus_ST_sample_data_loader.zip file into the file service under the Products/SAS Stress Testing directory.
      • The import_templates_common step uploads the Business Evolution Template used for import/export of BEP growth projections to the file service under the Products/SAS Stress Testing directory.
      • The import_va_reports_climate step imports SAS-provided Climate reports created in SAS Visual Analytics.
      • The install_riskengine_project_climate step loads the sample Climate project into SAS Risk Engine.
      • The load_objects_climate step loads the Cirrus_ST_Climate_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_sample_objects_common step loads the Cirrus_ST_Sample_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_workflows_climate step loads and activates the ST Climate workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
      • The localize_va_reports_climate step imports localized SAS-provided Climate reports created in SAS Visual Analytics.
      • The manage_cas_lib_acl step sets up permissions for the default STReporting CAS library. Users in the STUsers, STAdministrators and SASAdministrators groups have full access to the tables.
      • The transfer_sampledata_files_climate step stores a copy of all Climate sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
      • The transfer_sampledata_files_common step stores a copy of all common sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, reports and a BEP template.
      • The update_db_sampledata_scripts_pg_climate step stores a copy of the install_climate_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Climate sample data.

      WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:

      • The check_services step checks if the ST dependent services are up and running.
      • The check_solution_existence step checks to see if the ST solution is already running.
      • The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
      • The create_solution_repo step creates the ST repository.
      • The check_solution_running step checks to ensure that the ST solution is running.
      • The import_solution step imports the solution into the ST repository.
      • The load_app_registry step loads the ST solution into the SAS application registry.
      • The load_auth_rules_common step assigns authorization rules for the ST solution.
      • The load_group_memberships_common step assigns members to various ST groups.
      • The load_identities_common step loads the ST identities.
      • The load_main_objects_common step loads the Cirrus_ST_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The setup_code_lib_repo step creates the ST code library directory.
      • The share_objects_with_solution step shares the Risk Cirrus Core code library with the ST solution.
    • SAS Credit Stress Testing

      • The create_cas_lib step creates the default STReporting CAS library that is used for reporting in SAS Stress Testing.
      • The create_db_auth_domain step creates an STDBAuth domain for the riskcirrusst schema and assigns default permissions.
      • The create_db_auth_domain_user step creates an STUserDBAuth domain for the riskcirrusst schema and assigns default group permissions.
      • The import_dataloader_files_credit step uploads the Cirrus_Credit_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
      • The import_sample_dataloader_files_common step uploads the Cirrus_ST_sample_data_loader.zip file into the file service under the Products/SAS Stress Testing directory.
      • The import_templates_common step uploads the Business Evolution Template used for import/export of BEP growth projections to the file service under the Products/SAS Stress Testing directory.
      • The import_va_reports_credit step imports SAS-provided Credit reports created in SAS Visual Analytics.
      • The install_riskengine_project_credit step loads the sample Credit project into SAS Risk Engine.
      • The load_objects_credit step loads the Cirrus_ST_Credit_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_sample_objects_common step loads the Cirrus_ST_Sample_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The load_workflows_credit step loads and activates the ST Credit workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
      • The localize_va_reports_credit step imports localized SAS-provided Credit reports created in SAS Visual Analytics.
      • The manage_cas_lib_acl step sets up permissions for the default STReporting CAS library. Users in the STUsers, STAdministrators and SASAdministrators groups have full access to the tables.
      • The transfer_sampledata_files_credit step stores a copy of all Credit sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
      • The transfer_sampledata_files_common step stores a copy of all common sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, reports and a BEP template.
      • The update_db_sampledata_scripts_pg_credit step stores a copy of the install_credit_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Credit sample data.

      WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:

      • The check_services step checks if the ST dependent services are up and running.
      • The check_solution_existence step checks whether the ST solution already exists.
      • The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
      • The create_solution_repo step creates the ST repository.
      • The check_solution_running step checks to ensure that the ST solution is running.
      • The import_solution step imports the solution in the ST repository.
      • The load_app_registry step loads the ST solution into the SAS application registry.
      • The load_auth_rules_common step assigns authorization rules for the ST solution.
      • The load_group_memberships_common step assigns members to various ST groups.
      • The load_identities_common step loads the ST identities.
      • The load_main_objects_common step loads the Cirrus_ST_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
      • The setup_code_lib_repo step creates the ST code library directory.
      • The share_objects_with_solution step shares the Risk Cirrus Core code library with the ST solution.

    c. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this variable is used after a deployment has completed successfully and you need to re-run a specific step without redeploying the entire environment.

    For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the “transfer_sampledata_files_common” and “load_sample_objects_common” steps are skipped during deployment. After the deployment finishes, you decide that you want to include the SAS-provided sample data. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “transfer_sampledata,load_sample_data” and then delete the sas-risk-cirrus-st pod to force a redeployment. Doing so runs only the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.

    WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.

    d. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this is skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, set this variable to an empty string; load_sample_data and any other steps that are marked as sample data are already skipped. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, set this variable to the IDs of any steps you would like to skip, including those flagged as sample data.

    e. For SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.

    f. For SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ BASE64-ENCODED-SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the base64 encoded database user secret for the user name that was specified for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME.
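    For example, on a Linux machine you could generate the base64 encoded value with a command similar to the following. The password “stsecret” is only an illustration; use the secret for the user that you specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME:

    echo -n 'stsecret' | base64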

    The following is an example of a configuration.env file that you could use for SAS Stress Testing. This example uses the default values provided by SAS except for the solution input data database user name variable. The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME value should be replaced with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.

    # SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
    # SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
    # SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
    # SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME=stuser
  6. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-st/configuration.env to the configMapGenerator block. Here is an example:

    configMapGenerator:
    ...
    - name: sas-risk-cirrus-st-parameters
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-st/configuration.env
    ...

    Save the kustomization.yaml file.

  7. Modify the sas-risk-cirrus-st-secret.env file (in the $deploy/site-config/sas-risk-cirrus-st directory) and specify your settings as follows:

    For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret. If the directory already exists and already contains the expected .env file, verify that the overlay settings have been applied to the secret correctly. No further action is required unless you want to change the secret.

    The following is an example of a secret.env file that you could use for SAS Stress Testing.

    SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=stsecret

    Save the sas-risk-cirrus-st-secret.env file.

  8. In the base kustomization.yaml file, add site-config/sas-risk-cirrus-st/sas-risk-cirrus-st-secret.env to the secretGenerator block. Here is an example:

    secretGenerator:
    ...
    - name: sas-risk-cirrus-st-secret
      behavior: merge
      envs:
        - site-config/sas-risk-cirrus-st/sas-risk-cirrus-st-secret.env
    ...

    Save the kustomization.yaml file.

  9. When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on the deployment method being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.

    Note: The .env overlay can be applied during or after the initial deployment of the SAS Viya platform.

    • If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then run kustomize build to create and apply the manifests.
    • If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests. A minimal command sketch follows this list.
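    The exact commands depend on your deployment method; refer to the deployment guide for the method you are using. As a minimal sketch of a manual kustomize workflow (assuming you write the manifest to a file named site.yaml and apply it with kubectl), the sequence might look like this:

    cd $deploy
    kustomize build . -o site.yaml
    kubectl apply -n <name-of-namespace> -f site.yaml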

Verify That Overlay Settings Have Been Applied Successfully to the ConfigMap

Before verifying the settings for the SAS Stress Testing solution, complete step 9 in the Risk Cirrus Core README to verify the settings for Risk Cirrus Core.

  1. Run the following command to verify whether the overlay has been applied to the configuration map:

    kubectl describe configmap sas-risk-cirrus-st-parameters -n <name-of-namespace>
  2. Verify that the output contains the desired configurations that you configured.

Verify That Overlay Settings Have Been Applied Successfully to the Secret

To verify that your overrides were applied successfully to the secret, run the following commands:

  1. Find the name of the secret on the namespace.

    kubectl describe secret sas-risk-cirrus-st-secret -n <name-of-namespace>
  2. Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.

  3. Verify that the output contains the desired database schema user secret that you configured.

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
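    To decode a specific key from that output (for example, SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET, assuming the key name matches the entry in your sas-risk-cirrus-st-secret.env file), pipe the jsonpath result through base64:

    kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d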

Additional Resources

Change Alternate Data Storage for SAS Viya Platform Files Service

Overview

The SAS Viya platform files service uses PostgreSQL to store file metadata and content. However, in PostgreSQL, upload times are slower for large objects. To overcome this limitation, you can choose to store the file content in other data storage, such as Azure Blob Storage. If you choose Azure Blob Storage as the storage database, then the file content is stored in Azure Blob Storage and the file metadata remains in PostgreSQL.

Configure SAS Viya File Service for Azure Blob Storage

The steps necessary to configure the SAS Viya platform files service to use Azure Blob Storage as the back end for file content are listed below.

Prerequisites

Before you start, create or obtain a storage account and record the name of the storage account and its access key.
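If you use the Azure CLI, you can list the access keys for an existing storage account with a command similar to the following. The storage account name (“mystorageaccount”) and resource group (“myresourcegroup”) are placeholders for your own values:

az storage account keys list --account-name mystorageaccount --resource-group myresourcegroup --query "[0].value" -o tsv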

Installation

  1. Copy the files in the $deploy/sas-bases/examples/sas-files/azure/blob directory to the $deploy/site-config/sas-files/azure/blob directory. Create the target directory if it does not already exist.

  2. Create a file named account_key in the $deploy/site-config/sas-files/azure/blob directory, and paste the storage account key into the file. The file should only contain the storage account key.

  3. In the $deploy/site-config/sas-files/azure/blob/configmaps.yaml file, replace {{ STORAGE-ACCOUNT-NAME }} with the name of the storage account to be used by the files service.

  4. Make the following changes to the base kustomization.yaml file in the $deploy directory.

    4.1. Add sas-bases/overlays/sas-files and site-config/azure/blob to the resources block. Here is an example:

    resources:
    ...
    - sas-bases/overlays/sas-files
    - site-config/sas-files/azure/blob
    ...

    4.2. Add site-config/azure/blob/transformers.yaml and sas-bases/overlays/sas-files/file-custom-db-transformer.yaml to the transformers block. Here is an example:

    transformers:
    ...
    - sas-bases/overlays/sas-files/file-custom-db-transformer.yaml
    - site-config/sas-files/azure/blob/transformers.yaml
    ...
  5. Use the deployment commands described in SAS Viya Platform Deployment Guide to apply the new settings.

Configuration Settings for SAS Workload Orchestrator Service

Overview

The SAS Workload Orchestrator Service manages the workload which is started on demand by the launcher service. The SAS Workload Orchestrator Service has manager pods in a StatefulSet and server pods in a DaemonSet.

This README file describes the changes that can be made to the SAS Workload Orchestrator Service settings for pod resource requirements, for user-defined resource scripts, for the initial configuration of the service, and for specifying a different Prometheus Pushgateway URL than the default.

IMPORTANT: It is strongly recommended that deployments of SAS Workload Orchestrator also have the ClusterRoleBinding. For details, see the README located at $deploy/sas-bases/overlays/sas-workload-orchestrator/README.md (for Markdown format) or at $deploy/sas-bases/docs/cluster_privileges_for_sas_workload_orchestrator_service.htm (for HTML format).

Pod Resource Requests and Limits

Kubernetes pods have resource requests and limits for CPU and memory.

Manager pods handle all the REST API calls and manage all of the processing of host, job, and queue information. The more jobs you process at the same time, the more memory and cores you should assign to the StatefulSet pods. For manager pods, the current default resource request and limit for CPU and memory is 1 core and 4GB of memory.

Server pods interact with Kubernetes to manage the resources and jobs running on a particular node. Their memory and core requirements depend on how jobs are allowed to concurrently run on a node and how many pods not started by the SAS Workload Orchestrator Service are also running on a node. For server pods, the current default resource request and limit for CPU and memory is 0.1 core and 250MB of memory.

Generally, manager pods use more resources than server pods, with the resource request amount equaling the limit amount.

Pod User-Defined Script Volume

SAS Workload Orchestrator allows user-defined resources to be used for scheduling. User-defined resources can be a specified value or can be a value returned by executing a script.

Manager pods handle the running of user-defined resource scripts for resources that affect the scheduling on a global scale. An example of a global resource would be the number of licenses across all pods started by SAS Workload Orchestrator.

Server pods also handle the running of user-defined resource scripts for resources that reflect values about an individual node that a pod would run on. An example of a host resource could be number of GPUs on a host (for the case of a static resource) or the amount of disk space left on a mount (for the case of a dynamic resource).

To set these values, SAS Workload Orchestrator looks for scripts in a volume mounted at “/scripts”. To place a script in that directory, add the script to a volume and specify that volume in the StatefulSet or DaemonSet definition with the name ‘scripts’.

SAS Workload Orchestrator Initial Configuration

As of the 2024.09 cadence, the default SAS Workload Orchestrator configuration is loaded from the sas-workload-orchestrator-initial-configuration ConfigMap. If the initial configuration needs to be modified, the ConfigMap can be modified by a patch transformer.

Custom Prometheus Pushgateway

As of the 2024.09 cadence, the Prometheus Pushgateway used by SAS Workload Orchestrator can be specified by an environment variable allowing customers to change where SAS Workload Orchestrator sends its metric information. A patch transformer is provided to allow a custom URL to be set in the SAS Workload Orchestrator Daemonset configuration. If the environment variable is not specified, the metrics are sent to http://prometheus-pushgateway:9091.

Installation

Based on the following descriptions of available example files, determine if you want to use any example file in your deployment. If so, copy the example file and place it in your site-config directory.

The example files described in this README file are located at ‘/$deploy/sas-bases/examples/sas-workload-orchestrator/configure’.
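For example, to copy the StatefulSet resources example file into your site-config directory (the target path shown here matches the transformer references used later in this README), you could run:

mkdir -p $deploy/site-config/sas-workload-orchestrator/configure
cp $deploy/sas-bases/examples/sas-workload-orchestrator/configure/sas-workload-orchestrator-statefulset-resources.yaml $deploy/site-config/sas-workload-orchestrator/configure/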

StatefulSet Pods Requests and Limits

The values for memory and CPU resources for the SAS Workload Orchestrator Service manager pods are specified in sas-workload-orchestrator-statefulset-resources.yaml.

To update the defaults, replace the {{ MEM-REQUIRED }} and {{ CPU-REQUIRED }} variables with the values you want to use.

Note: It is important that the values for the requests and limits be identical to get Guaranteed Quality of Service for the SAS Workload Orchestrator Service pods.

Here is an example:

  - op: replace
    path: /spec/template/spec/containers/0/resources/requests/memory
    value: 6Gi
  - op: replace
    path: /spec/template/spec/containers/0/resources/limits/memory
    value: 6Gi
  - op: replace
    path: /spec/template/spec/containers/0/resources/requests/cpu
    value: "2"
  - op: replace
    path: /spec/template/spec/containers/0/resources/limits/cpu
    value: "2"

Note: For details on the value syntax used in the code, see Resource units in Kubernetes.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-statefulset-resources.yaml

DaemonSet Pods Requests and Limits

The values for memory and CPU resources for the SAS Workload Orchestrator Service server pods are specified in sas-workload-orchestrator-daemonset-resources.yaml.

To update the defaults, replace the {{ MEM-REQUIRED }} and {{ CPU-REQUIRED }} variables with the values you want to use.

Note: It is important that the values for the requests and limits be identical to get Guaranteed Quality of Service for the SAS Workload Orchestrator Service pods.

Here is an example:

  - op: replace
    path: /spec/template/spec/containers/0/resources/requests/memory
    value: 4Gi
  - op: replace
    path: /spec/template/spec/containers/0/resources/limits/memory
    value: 4Gi
  - op: replace
    path: /spec/template/spec/containers/0/resources/requests/cpu
    value: "1500m"
  - op: replace
    path: /spec/template/spec/containers/0/resources/limits/cpu
    value: "1500m"

Note: For details on the value syntax used in the code, see Resource units in Kubernetes.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-daemonset-resources.yaml

User-Defined Scripts Volume for Manager Pods

The example file sas-workload-orchestrator-global-user-defined-resources-script-storage.yaml mounts an NFS volume as the ‘scripts’ volume.

To update the volume, replace the {{ NFS-SERVER-ADDR }} variable with the fully-qualified domain name of the server and replace the {{ NFS-SERVER-PATH }} variable with the path to the volume on the server. Here is an example:

  - op: replace
    path: /spec/template/spec/volumes/0
    value:
      name: scripts
      nfs:
        path: /path/to/my/scripts
        server: my.nfs.server.mydomain.com

Alternately, you could use any other type of volume Kubernetes supports.

The following example updates the volume to use a PersistentVolumeClaim instead of an NFS mount. This assumes the PVC has already been defined and created.

  - op: replace
    path: /spec/template/spec/volumes/0
    value:
      name: scripts
      persistentVolumeClaim:
        claimName: my-pvc-name
        readOnly: true
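If the PVC does not already exist, a minimal manifest might look like the following sketch. The claim name, access mode, size, and storage class are assumptions that you must adjust for your cluster and storage provider:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: my-pvc-name
  spec:
    # the access mode must be supported by your storage class and allow all pods to read the scripts
    accessModes:
      - ReadOnlyMany
    resources:
      requests:
        storage: 1Gi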

Note: For details on the value syntax used when specifying volumes, see Kubernetes Volumes.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-global-user-defined-resources-script-storage.yaml

DaemonSet Pods User-Defined Script Volume

The example file sas-workload-orchestrator-host-user-defined-resources-script-storage.yaml mounts an NFS volume as the ‘scripts’ volume.

To update the volume, replace the {{ NFS-SERVER-ADDR }} variable with the fully-qualified domain name of the server and replace the {{ NFS-SERVER-PATH }} variable with the path to the volume on the server. Here is an example:

  - op: replace
    path: /spec/template/spec/volumes/0
    value:
      name: scripts
      nfs:
        path: /path/to/my/scripts
        server: my.nfs.server.mydomain.com

Alternately, you could use any other type of volume Kubernetes supports.

The following example updates the volume to use a PersistentVolumeClaim instead of an NFS mount. This assumes the PVC has already been defined and created.

  - op: replace
    path: /spec/template/spec/volumes/0
    value:
      name: scripts
      persistentVolumeClaim:
        claimName: my-pvc-name
        readOnly: true

Note: For details on the value syntax used when specifying volumes, see Kubernetes Volumes.

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-host-user-defined-resources-script-storage.yaml

Custom Initial SAS Workload Orchestrator Configuration

The example file sas-workload-orchestrator-initial-configuration-change.yaml changes the initial SAS Workload Orchestrator configuration to add additional administrators.

To update the initial configuration, replace the {{ NEW_CONFIG_JSON }} variable with the JSON representation of the updated configuration. Here is an example:

  - op: replace
    path: /data/SGMG_CONFIG_JSON
    value: |
        {
          "version" : 1,
          "admins"  : ["SASAdministrators","myAdmin1","myAdmin2"],

          "hostTypes":
          [
              {
                "name"           : "default",
                "description"    : "SAS Workload Orchestrator Server Hosts on Kubernetes Nodes",
                "role"           : "server"
              }
          ]
        }

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-initial-configuration-change.yaml

Note: The SAS Workload Orchestrator configuration in JSON can be exported from the Workload Orchestrator dialog in the SAS Environment Manager application, or it can be retrieved by using the workload-orchestrator plugin to the sas-viya CLI.

Changing Prometheus Pushgateway URL

The example file sas-workload-orchestrator-prometheus-gateway-url.yaml changes the Prometheus Pushgateway URL from the default of http://prometheus-pushgateway:9091 to the value specified by the SGMG_PROMETHEUS_PUSHGATEWAY_URL environment variable.

To update the URL, replace the {{ PROMETHEUS_PUSHGATEWAY_URL }} variable with the URL where SAS Workload Orchestrator should push its metrics. Here is an example:

  - op: add
    path: /spec/template/spec/containers/0/env/-
    value:
        name: SGMG_PROMETHEUS_PUSHGATEWAY_URL
        value: https://my-prometheus-pushgateway.mycompany.com

After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-prometheus-pushgateway-url.yaml

Cluster Privileges for SAS Workload Orchestrator Service

Overview

SAS Workload Orchestrator Service is an advanced scheduler that integrates with SAS Launcher. SAS recommends adding this overlay to allow the SAS Workload Orchestrator service account to retrieve node and pod information so that your deployment runs optimally.

If you choose not to allow the ClusterRoleBinding, be aware of the following limitation:

Without the ability to get node labels as host properties, SAS Workload Orchestrator cannot allocate a new node from the correct node pool when a pod triggers a scale-up. As stated above, SAS Workload Orchestrator uses the host properties (that is, node labels) of the host type to be scaled to create the scaling pod’s nodeAffinity information. Without the host properties, the only label in the nodeAffinity section will be ‘workload.sas.com/class=compute’. If you have only one deployment in a cluster and only one scalable node pool for the deployment, this is not a problem. If you have multiple deployments and each deployment has a scalable host type or multiple scalable host types, this is a problem because the node information cannot be accessed.

Instructions

Enable the ClusterRole

The ClusterRole and ClusterRoleBinding are enabled by adding the file to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

resources:
...
- sas-bases/overlays/sas-workload-orchestrator

Disable the ClusterRole

To disable the ClusterRole and ClusterRoleBinding:

  1. Remove sas-bases/overlays/sas-workload-orchestrator from the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). This also ensures that the ClusterRole option will not be applied in future Kustomize builds.

  2. Perform the following command to remove the ClusterRoleBinding from the namespace:

    kubectl delete clusterrolebinding sas-workload-orchestrator-<your namespace>
  3. Perform the following command to remove the ClusterRole from the cluster.

    kubectl delete clusterrole sas-workload-orchestrator

Build

After you configure Kustomize, continue your SAS Viya platform deployment as documented.

Disabling and Enabling SAS Workload Orchestrator Service

Overview

The SAS Workload Orchestrator Service consists of a set of manager pods controlled by the sas-workload-orchestrator statefulset and a set of server pods controlled by the sas-workload-orchestrator daemonset.

This README file describes how to automatically disable (or enable) the SAS Workload Orchestrator Service by disabling (or enabling) the sas-workload-orchestrator statefulset and daemonset.

Instructions

Automatically Disable the SAS Workload Orchestrator Service

Because the SAS Workload Orchestrator Service is enabled by default, there is no action needed to automatically enable the statefulset and daemonset pods.

You can automatically disable the SAS Workload Orchestrator Service by adding a patch transformer to the main kustomization.yaml file so that no statefulset pods and no daemonset pods are created.

Note: Automatically disabling SAS Workload Orchestrator Service causes it to remain disabled even if an update is made to the deployment.

To automatically disable the service, add a reference to the disable patch transformer file into the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

Here is an example:

transformers:
...
- sas-bases/overlays/sas-workload-orchestrator/enable-disable/sas-workload-orchestrator-disable-patch-transformer.yaml

Manually Disable or Enable the SAS Workload Orchestrator Service

Manually enable or disable the SAS Workload Orchestrator Service statefulset and daemonset pods by using the ‘kubectl patch’ command along with supplied patch files. There are four files, two for enabling the daemonset and statefulset, and two for disabling the daemonset and statefulset.

Because manually disabling or enabling the SAS Workload Orchestrator Service is done from a machine that is running kubectl with access to the cluster, the files from $deploy/sas-bases/overlays/sas-workload-orchestrator/enable-disable must be accessible on that machine, either by mounting the overlays directory to the machine or by copying the files to the machine running the kubectl command.

Note: Manually disabling the SAS Workload Orchestrator Service is temporary. If an update is applied to the deployment, SAS Workload Orchestrator Service will be enabled again.

Both disabling and enabling manually require two kubectl patch commands, one for the sas-workload-orchestrator daemonset and one for the sas-workload-orchestrator statefulset.

Disabling

  1. Terminate the daemonset pods:

    kubectl -n <namespace> patch daemonset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-daemonset-disable.yaml
  2. Wait for daemonset pods to terminate, and then terminate the statefulset pods:

    kubectl -n <namespace> patch statefulset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-statefulset-disable.yaml
  3. Disable SAS Workload Orchestrator through the SAS Launcher:

    kubectl -n <namespace> set env deployment sas-launcher -c sas-launcher SAS_LAUNCHER_SWO_DISABLED="true"

Enabling

  1. Enable the statefulset pods:

    kubectl -n <namespace> patch statefulset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-statefulset-enable.yaml
  2. Wait for both statefulset pods to become running, and then enable the daemonset pods:

    kubectl -n <namespace> patch daemonset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-daemonset-enable.yaml
  3. Enable SAS Workload Orchestrator through the SAS Launcher:

    kubectl -n <namespace> set env deployment sas-launcher -c sas-launcher SAS_LAUNCHER_SWO_DISABLED="false"

SAS SingleStore Cluster Operator

Overview

Note: The SingleStore Operator documentation related to cluster configuration is located at the SingleStore website.

If your order includes SAS with SingleStore, the following is deployed by default:

The SAS Viya platform deployment includes example files to modify the configuration to suit your needs:

Recommendations for SingleStore infrastructure on Azure are located at the SingleStore System Requirements and Recommendations page. SingleStore engineers also require that you use Azure CNI as the Kubernetes network provider and Azure managed-premium storage for your storage. SingleStore also notes that some customer workloads may require Azure Ultra SSD.

Recommendations for SingleStore infrastructure on AWS are located at the AWS EC2 Best Practices page.

Calico CNI is required as the Kubernetes network provider for upstream open source Kubernetes clusters.

SingleStore Cluster Definition

The configuration of the SingleStore cluster is site-specific. To create a SingleStore cluster in your deployment:

  1. Copy $deploy/sas-bases/examples/sas-singlestore into $deploy/site-config.

  2. Edit $deploy/site-config/sas-singlestore/sas-singlestore-secret.yaml:

    • Paste the provided SingleStore license code into the file replacing the string {{ LICENSE-CODE }}
    • Use the following Python code to generate the hashed value for the password you want to use for the admin account. Replace secretpass with your desired admin password, and then paste the resulting output into the sas-singlestore-secret.yaml file, replacing the string {{ HASHED-ADMIN-PASSWORD }}. The hashed password contains an initial asterisk that must be included.
    from hashlib import sha1
    print("*" + sha1(sha1('secretpass'.encode('utf-8')).digest()).hexdigest().upper())
  3. You can also override other cluster attributes, such as the number of leaf nodes, the storage class, or the amount of storage allocated to each node type. In the following example, the leaf node definition in sas-singlestore-cluster-config.yaml has been modified to create four leaf nodes each with 750 GB of storage, to use a scaling height of 1 (defined as 8 vCPU cores and 32 GB of RAM) and to use the “managed” storage class. You may also want to perform similar alterations to the aggregatorSpec. Refer to the SingleStore Cluster Scaling Document for more information.

    - op: replace
      path: /spec/leafSpec/count
      value: 4
    - op: replace
      path: /spec/leafSpec/height
      value: 1
    - op: replace
      path: /spec/leafSpec/storageGB
      value: 750
    - op: replace
      path: /spec/leafSpec/storageClass
      value: managed
  4. To allow certain source ranges to access the load balancer, you must override the cluster attribute loadBalancerSourceRanges to configure optional firewall rules. Refer to the SingleStore Advanced Service Configuration for more information. The following examples demonstrate defining the load balancer source range for:

    • Multiple source ranges
    • A single source range
    • No source range, using an empty array
    - op: replace
      path: /spec/serviceSpec/loadBalancerSourceRanges
      value: [ 100.110.120.130/16, 200.210.220.230/28, {{ IP-RANGE }} ]
    
    ...
    - op: replace
      path: /spec/serviceSpec/loadBalancerSourceRanges
      value: [ {{ IP-RANGE }} ]
    
    ...
    - op: replace
      path: /spec/serviceSpec/loadBalancerSourceRanges
      value: []
  5. Add the following to your base kustomization.yaml ($deploy/kustomization.yaml) file.

    Note: Ensure that the sas-bases/components/sas-singlestore component is added before any TLS components, and that the site-config/sas-singlestore component is added after the sas-bases/components/sas-singlestore component.

    The site-config/sas-singlestore component will merge in your license/secret.

    ...
    components:
    - sas-bases/components/sas-singlestore
    - site-config/sas-singlestore
    ...
    transformers:
    - site-config/sas-singlestore/sas-singlestore-cluster-config.yaml
  6. Determine whether you need to override the cluster OS configuration. For more information, see the README file located at $deploy/sas-bases/examples/sas-singlestore-osconfig/README.md (for Markdown format) or at $deploy/sas-bases/docs/sas_singlestore_cluster_os_configuration.htm (for HTML format).

  7. If you are deploying on Red Hat OpenShift, you must apply a Security Context Constraint to a service account. For the required steps, see the README file located at $deploy/sas-bases/examples/sas-singlestore-osconfig/openshift/README.md (for Markdown format) or at $deploy/sas-bases/docs/security_context_constraint_and_service_account_for_sas_singlestore_cluster_os_configuration.htm (for HTML format).

SAS SingleStore Cluster OS Configuration

Overview

$deploy/sas-bases/examples/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml is a patch transformer that can be used to override the default OS configurations of the SingleStore database nodes used when deploying an integrated SAS Viya platform and SingleStore environment.

Customize the OS Configuration

The sas-singlestore-osconfig.yaml contains OS configuration settings as recommended by SingleStore Documentation. The configuration setting min_free_kbytes controls the amount of memory held in reserve. The default value is 658096, which is appropriate for a cluster node with about 64 GiB of memory. If your cluster nodes’ system RAM is substantially larger than that, you should set the value of min_free_kbytes to either 1% of system RAM or 4194304 (4 GiB), whichever is smaller.

Multiply the available RAM in GiB by 1024 (MiB per GiB), by 1024 again (KiB per MiB), and by 0.01 (1%). For example, if you are running on nodes with 256 GiB of system RAM, you would calculate 256 x 1024 x 1024 x 0.01 = 2684354 and use that as the value for min_free_kbytes, since it is less than 4194304.
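As a quick sketch, the following shell commands perform that calculation from the node’s RAM size in GiB (256 is only an example value):

  ram_gib=256                                         # replace with your node's system RAM in GiB
  one_percent_kib=$(( ram_gib * 1024 * 1024 / 100 ))  # 1% of RAM expressed in KiB
  if [ "$one_percent_kib" -lt 4194304 ]; then
    echo "min_free_kbytes=$one_percent_kib"
  else
    echo "min_free_kbytes=4194304"
  fi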

Unless directed by SAS Technical Support to modify the other configuration values, SAS recommends that you leave them unaltered.

To enable this customization:

  1. Copy the $deploy/sas-bases/examples/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml file to the location of your SingleStore overlay. For example, site-config/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml.

  2. Modify the OS configuration values within site-config/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml.

  3. Add the relative path of the sas-singlestore-osconfig.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml) before the reference to the sas-bases/overlays/required/transformers.yaml file. Here is an example:

    transformers:
    ...
    - site-config/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml
    ...
    - sas-bases/overlays/required/transformers.yaml
    ...

Security Context Constraint and Service Account for SAS SingleStore Cluster OS Configuration

Overview

Note: If your SAS Viya platform is not being deployed on Red Hat OpenShift, you should ignore this README file.

Similar to the way that Role Based Access Control resources control user access, administrators can use Security Context Constraints (SCCs) on Red Hat OpenShift to control permissions for pods. These permissions include actions that a pod can perform and what resources it can access. SCCs are used to define a set of conditions that a pod must run with in order to be accepted into the system.

In an OpenShift environment, each Kubernetes pod starts up with an association to a specific SCC, which limits the privileges that pod can request. An administrator configures each pod to run with a certain SCC by granting the corresponding service account for that pod access to the SCC. For example, if pod A requires its own SCC, an administrator must grant access to that SCC for the service account under which pod A is launched.

This README describes several tasks:

The service account is needed to enable the sas-singlestore-osconfig daemonset with elevated privileges to start sas-singlestore-osconfig pods. The elevated privileges are needed because the pods make changes to the host node’s operating system’s kernel parameters.

Installation

The following steps should be performed before deploying SAS Viya platform.

  1. Apply a security context constraint on an OpenShift cluster:

    The /$deploy/sas-bases/examples/sas-singlestore-osconfig/openshift directory contains a file to apply a security context constraint for deploying SingleStore on an OpenShift cluster.

    A Kubernetes cluster administrator should add this SCC to their OpenShift cluster prior to deploying the SAS Viya platform. Use the following command to apply the SCC.

    kubectl apply -f sas-bases/examples/sas-singlestore-osconfig/openshift/sas-singlestore-osconfig-scc.yaml
  2. Bind the security context constraint to a service account:

    After the SCC has been applied, a Kubernetes cluster administrator must bind the SCC to the sas-singlestore-osconfig service account that will use it.

    Use the following command. Replace the entire variable {{ NAME-OF-NAMESPACE }}, including the braces, with the Kubernetes namespace used for the SAS Viya platform.

    oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-singlestore-osconfig -z sas-singlestore-osconfig
  3. Add the service account to the daemonset:

    Make the following changes to the base kustomization.yaml file in the $deploy directory:

    • Add sas-bases/overlays/sas-singlestore-osconfig/openshift directory to the resources block.
    • Add sas-bases/overlays/sas-singlestore-osconfig/openshift/daemonset-transformer.yaml to the transformers block.

    Here is an example:

    resources:
    - sas-bases/overlays/sas-singlestore-osconfig/openshift
    
    transformers:
    - sas-bases/overlays/sas-singlestore-osconfig/openshift/daemonset-transformer.yaml
  4. After you revise the base kustomization.yaml file and complete all the tasks in the README files that you want, continue your SAS Viya platform deployment as documented in SAS Viya Platform: Deployment Guide.

Additional Resources

Configuring SAS/ACCESS and Data Connectors for SAS Viya 4

Overview

This directory contains files to customize your SAS Viya platform deployment for SAS/ACCESS and Data Connectors. Some SAS/ACCESS products require third-party libraries and configurations. This README describes the steps necessary to make these files available to your SAS Viya platform deployment. It also describes how to set required environment variables to point to these files.

Note: If you re-configure SAS/ACCESS products after the initial deployment, you must restart the CAS server.

Prerequisites

Before you start the deployment, collect the third-party libraries and configuration files that are required for your data sources. Examples of these requirements include the following:

When you have collected these files, place them on storage that is accessible to your Kubernetes deployment. This storage could be a mount or a storage device with a PersistentVolume (PV) configured.

SAS recommends organizing your software in a consistent manner on your mount storage device. The following is an example directory structure:

          access-clients
          ├── hadoop
          │   ├── jars
          │   ├── config
          ├── odbc
          │   ├── sql7.0.1
          │   ├── gplm7.1.6
          │   ├── dd7.1.6
          ├── oracle
          ├── postgres
          └── teradata

Note the details of your specific storage solution, as well as the paths to the configuration files within it. You will need this information before you start the deployment.

You should also create a subdirectory within $deploy/site-config to store your ACCESS configurations. In this documentation, we will refer to a user-created subdirectory called $deploy/site-config/data-access. For more information, refer to the “Directory Structure” section of the “Pre-installation Tasks” in the SAS Viya Platform: Deployment Guide.

Installation

Attach Storage to the SAS Viya Platform

Use Kustomize PatchTransformers to attach the storage with your configuration files to the SAS Viya platform. Within the $deploy/sas-bases/examples/data-access directory, there are four example files to help you with this process: data-mounts-cas.sample.yaml, data-mounts-deployment.sample.yaml, data-mounts-job.sample.yaml, and data-mounts-statefulset.sample.yaml.

Copy these four files into your $deploy/site-config/data-access directory, removing “.sample” from the file names and making changes to each file according to your storage choice. The information should be largely duplicated across the four files, but notice that the path reference in each file is different, as well as the Kubernetes resource type that it targets.
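For example, assuming a bash shell, you could copy and rename the example files like this:

mkdir -p $deploy/site-config/data-access
cp $deploy/sas-bases/examples/data-access/data-mounts-*.sample.yaml $deploy/site-config/data-access/
for f in $deploy/site-config/data-access/data-mounts-*.sample.yaml; do
  mv "$f" "${f/.sample/}"   # drop ".sample" from each file name
done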

When you have created your PatchTransformers, add them to the transformers block in the base kustomization.yaml file located in your $deploy directory.

transformers:
...
- site-config/data-access/data-mounts-cas.yaml
- site-config/data-access/data-mounts-deployment.yaml
- site-config/data-access/data-mounts-job.yaml
- site-config/data-access/data-mounts-statefulset.yaml

Set Environment Variables

Copy $deploy/sas-bases/examples/data-access/sas-access.properties into your $deploy/site-config/data-access directory. Edit the values in the $(VARIABLE) format as they pertain to your data source configuration, un-commenting them as needed. These paths refer to the volumeMount location of the storage you attached within the containers.

As an example, to configure an ODBC connection, the lines within sas-access.properties look like this:

# ODBCINI=$(PATH_TO_ODBCINI)
# ODBCINST=$(PATH_TO_ODBCINST)
# THIRD_PARTY_LIB=$(ODBC_DRIVER_LIB)
# THIRD_PARTY_BIN=$(ODBC_DRIVER_BIN)

They should be un-commented and edited to include values like this, where /access-clients is the volumeMount location defined in Attach Storage to the SAS Viya Platform:

ODBCINI=/access-clients/odbc/odbc.ini
ODBCINST=/access-clients/odbc/odbcinst.ini
THIRD_PARTY_LIB=/access-clients/odbc/lib
THIRD_PARTY_BIN=/access-clients/odbc/bin

Edit the base kustomization.yaml file in the $deploy directory to add the following content to the configMapGenerator block, replacing $(PROPERTIES_FILE) with the relative path to your new file within the $deploy/site-config directory.

configMapGenerator:
...
- name: sas-access-config
  behavior: merge
  envs:
  - $(PROPERTIES_FILE)

For example,

configMapGenerator:
...
- name: sas-access-config
  behavior: merge
  envs:
  - site-config/data-access/sas-access.properties

Also add the following reference to the transformers block of the base kustomization.yaml file. This path references a SAS file that you do not need to edit, and it will apply the environment variables in sas-access.properties to the appropriate parts of your SAS Viya platform deployment.

transformers:
- sas-bases/overlays/data-access/data-env.yaml

Specify External JDBC Drivers

SAS redistributes CData JDBC drivers for Hive, Databricks, SparkSQL, and others. When connecting to these targets, there is generally no need to configure an external JDBC driver. If you have external JDBC drivers that you want to make accessible within the SAS Viya platform, create a volumeMount location that uses the special name of /data-drivers/jdbc. When this directory is present during deployment, this name will be automatically appended to the Java class path used by the JDBC-based SAS/ACCESS products. See the Attach Storage to the SAS Viya Platform section for more information about creating a data mount point.

Restart CAS Server

After the initial deployment of the SAS Viya platform, if you make changes to your SAS/ACCESS configuration, you should restart the CAS server. This will refresh the CAS environment and enable any changes that you’ve made.

IMPORTANT: Performing this task will cause the termination of all active connections and sessions and the loss of any in-memory data.

Set your KUBECONFIG and run the following command:

kubectl -n name-of-namespace delete pods -l app.kubernetes.io/managed-by=sas-cas-operator

You can now proceed with your deployment as described in SAS Viya Platform Deployment Guide.

Database-Specific Configuration

Configuration for ODBC-based Connectors

Configuring ODBC connectivity to your database for the SAS Viya platform requires some or all of the following environment variables to be set. Configure these variables using the sas-access.properties file within your $deploy/site-config directory.

ODBCINI=$(PATH_TO_ODBCINI)
ODBCINST=$(PATH_TO_ODBCINST)
THIRD_PARTY_LIB=$(ODBC_DRIVER_LIB)
THIRD_PARTY_BIN=$(ODBC_DRIVER_BIN)

The THIRD_PARTY_LIB variable is a colon-separated set of directories where your third-party ODBC libraries are located. You must add the location of the ODBC shared libraries to this path so that drivers can be loaded dynamically at run time. This variable will be appended to the LD_LIBRARY_PATH as part of your install. If you need to set binaries on the PATH, you can also use a colon-separated set of bin directories using THIRD_PARTY_BIN.

It is possible to invoke multiple ODBC-based SAS/ACCESS products in the same SAS session. However, you must first define the driver names in a single odbcinst.ini configuration file. Also, if you decide to use DSNs in your SAS/ACCESS connections, the data sources must be defined in a single odbc.ini configuration file. You cannot pass a delimited string of files for the ODBCINST or ODBCINI environment variables. The requirement to use a single initialization file extends to any situation in which you are running multiple ODBC-based SAS/ACCESS products. Always set the ODBCINI and ODBCINST to the full paths to the respective files, including the filenames.

ODBCINI=$(ODBCINI)
ODBCINST=$(ODBCINST)

The $deploy/sas-bases/examples/data-access directory has the odbcinst.ini and odbc.ini files included in your install. SAS recommends using these files to add additional ODBC drivers or set a DSN to ensure that you have the correct configuration for the included ODBC-based SAS/ACCESS products. It is also best to copy odbcinst.sample.ini or odbc.sample.ini from the examples directory to a location on your PersistentVolume.
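For illustration only, entries in a single odbcinst.ini and odbc.ini follow the standard ODBC layout shown below. The driver name, library path, and DSN name are placeholders, not values shipped with your install; start from the sample files described above for the SAS-provided drivers:

# odbcinst.ini
[MyDriver]
Driver=/access-clients/odbc/lib/mydriver.so

# odbc.ini
[mydsn]
Driver=MyDriver
# driver-specific connection attributes (host, port, database, and so on) go here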

SAS/ACCESS Interface to Amazon Redshift

SAS/ACCESS Interface to Amazon Redshift uses an ODBC client (from Progress DataDirect), which is included in your install. By default, the Amazon Redshift connector is set up for non-encrypted DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.

SAS/ACCESS Interface to Google BigQuery

In order to avoid possible connection errors with SAS/ACCESS Interface to Google BigQuery, add the following environment variable to the sas-access.properties file within your $deploy/site-config directory:

GOMEMLIMIT=250MiB
The Google BigQuery documentation has more information about the values that can be used for GOMEMLIMIT.

SAS/ACCESS Interface to DB2

SAS/ACCESS Interface to DB2 uses the installed DB2 client environment that must be accessible from a PersistentVolume. After the initial DB2 client setup, two directories must be created and be accessible to your SAS Viya platform cluster as a PersistentVolume. These directories contain the installed client files (e.g., /db2client) and the configured server definition files (/db2). The following steps need to be executed on the PersistentVolume.

  1. Install the DB2 client files into a designated directory. The “/db2client” directory is used in these instructions.

  2. This step is important. Create (or reuse) a system user that has a uid and gid value of “1001”. A specific owner and group name is not essential (“sas” is used in these instructions), but the uid and gid values need to be set to “1001”. When referenced within a PersistentVolume, these values will be mapped to the predefined “sas” user and group that the SAS Viya platform uses. Once the user is set up on the host system, you should see the expected uid and gid values by using the “id” command.

> sudo groupadd -g 1001 sas
> sudo useradd -u 1001 -g 1001 sas
> id sas
uid=1001(sas) gid=1001(sas) groups=1001(sas)
  3. Prepare the network mounted client environment by sourcing the “db2profile” script. Then run the “db2ccprf” script to copy the global configuration to a local directory. This directory will later be assigned to the INSTHOME environment variable as the DB2_CONFIGURED_DIR value. The “/db2” directory name is used in these instructions.
export DB2_NET_CLIENT_PATH=/db2client/sqllib
# Edit $DB2_NET_CLIENT_PATH/db2profile to set the following environment variable values
#   DB2DIR=/db2client/sqllib
#   DB2INSTANCE=sas
#   INSTHOME=/db2
source $DB2_NET_CLIENT_PATH/db2profile
export DB2_APPL_DATA_PATH=/db2
export DB2_APPL_CFG_PATH=/db2
$DB2_NET_CLIENT_PATH/bin/db2ccprf -f -t /db2
  4. Ensure that all files in the /db2client and /db2 directories are assigned to the user and group that you created earlier.
sudo chown -R sas:sas /db2client
sudo chown -R sas:sas /db2
  5. The following five variables need to be set to your specific client installation values. Substitute these values into your final set of environment variables. In our example:
  * DB2_CLIENT_USER=sas
  * DB2_CLIENT_DIR=/db2client
  * DB2_CONFIGURED_DIR=/db2
  * PATH_TO_DB2_LIBS=/db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32
  * PATH_TO_DB2_BIN=/db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc

Within your sas-access.properties file, use the 5 values above to set the following environment variables. Note that some variables are not assigned a value.

CUR_INSTHOME=
CUR_INSTNAME=
DASWORKDIR=
DB2DIR=$(DB2_CLIENT_DIR)/sqllib
DB2INSTANCE=$(DB2_CLIENT_USER)
DB2LIB=$(DB2_CLIENT_DIR)/sqllib/lib
DB2_HOME=$(DB2_CLIENT_DIR)/sqllib
DB2_NET_CLIENT_PATH=$(DB2_CLIENT_DIR)/sqllib
IBM_DB_DIR=$(DB2_CLIENT_DIR)/sqllib
IBM_DB_HOME=$(DB2_CLIENT_DIR)/sqllib
IBM_DB_INCLUDE=$(DB2_CLIENT_DIR)/sqllib/
IBM_DB_LIB=/dbi/db2/viya4/db2client/sqllib/lib
INSTHOME=$(DB2_CONFIGURED_DIR)
INST_DIR=$(DB2_CLIENT_DIR)/sqllib
PREV_DB2_PATH=
DB2=$(PATH_TO_DB2_LIBS)
DB2_BIN=$(PATH_TO_DB2_BIN)

If you want to use SAS/ACCESS to JDBC to access your DB2 database, then copy your DB2 client installation’s JDBC driver (from $(DB2_CLIENT_DIR)/sqllib/java) to the source location of the /data-drivers/jdbc volumeMount. See the Specify External JDBC Drivers section for more information about creating this data mount point.
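For example, assuming the client install places the JDBC driver JAR files in /db2client/sqllib/java and that /mnt/jdbc-drivers is the storage location backing the /data-drivers/jdbc volumeMount (both paths are illustrative), you could run:

cp /db2client/sqllib/java/db2jcc*.jar /mnt/jdbc-drivers/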

SAS/ACCESS Interface to Greenplum

SAS/ACCESS Interface to Greenplum uses an ODBC client (SAS/ACCESS to Greenplum from Progress DataDirect), which is included in your install. By default, the Greenplum connector is set up for non-encrypted DSN-less connections. To reference a DSN, follow the ODBC configuration steps above to associate your odbc.ini file with your instance.

Bulk-Loading

SAS/ACCESS Interface to Greenplum can use the Greenplum Client Loader Interface for loading large volumes of data. To perform bulk loading, the Greenplum Client Loader Package must be accessible from a PersistentVolume.

SAS recommends using the Greenplum Database parallel file distribution program (gpfdist) for bulk loading. The gpfdist binary and the temporary location gpfdist uses to write data files must be accessible from your Viya platform cluster and a secondary machine. You will need to launch the gpfdist server binary on the secondary machine to serve requests from SAS:

./gpfdist -d $(GPLOAD_HOME) -p 8081 -l $(GPLOAD_HOME)/gpfdist.log &

Within your sas-access.properties file, set the following environment variables. The $(GPLOAD_HOME) environment variable points to the directory where the external tables you want to load will reside. Note that this location must be mounted and accessible to your Viya platform cluster as a PersistentVolume, as well as the secondary machine running gpfdist.

GPHOME_LOADERS=$(PATH_TO_GPFDIST_UTILITY)
GPLOAD_HOST=$(HOST_RUNNING_GPFDIST)
GPLOAD_HOME=$(PATH_TO_EXTERNAL_TABLES_DIR)
GPLOAD_PORT=$(GPFDIST_LISTENING_PORT)
GPLOAD_LIBS=$(GPHOME_LOADERS)/lib

SAS/ACCESS Interface to Hadoop

You must make your Hadoop JARs and configuration file available to SAS/ACCESS Interface to Hadoop on a PersistentVolume or mounted storage. After your SAS Viya platform software is deployed, set the options SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH within your SAS program to point to this location. SAS does not recommend setting these as environment variables within your sas-access.properties file, as they would then be used for any connections from your Viya platform cluster. Instead, within your SAS program, use:

options set=SAS_HADOOP_JAR_PATH=$(PATH_TO_HADOOP_JARs);
options set=SAS_HADOOP_CONFIG_PATH=$(PATH_TO_HADOOP_CONFIG);

SAS/ACCESS Interface to Impala

SAS/ACCESS Interface to Impala requires the ODBC driver for Impala. The Impala ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. You must include the full path to the shared library by setting the IMPALA attribute so that the Impala driver can be loaded dynamically at run time.

The SAS Viya platform provides an internal Impala ODBC driver by default. To use a different driver or configuration, customize the following variables in the sas-access.properties file.

IMPALA=$(PATH_TO_IMPALA_LIBS)
SIMBAIMPALAINI=$(PATH_TO_SIMBA_IMPALA_INI)

To reference a DSN in your connection, follow the instructions in ODBC configuration.

Bulk-Loading

Bulk loading with Impala is accomplished in two ways:

  1. Use the WebHDFS interface to Hadoop to push data to HDFS. The SAS environment variable SAS_HADOOP_RESTFUL must be specified and set to the value of 1. The properties for the WebHDFS location are included in the Hadoop hdfs-site.xml file. In this case, the hdfs-site.xml file must be accessible from a PersistentVolume. Alternatively, you can specify the WebHDFS hostname or the server’s IP address where the external file is stored using the BL_HOST= and BL_PORT= options.

  2. Configure a required set of Hadoop JAR files. JAR files must be in a single location accessible from a PersistentVolume. The SAS environment variables SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH must be specified and set to the location of the Hadoop JAR and configuration files. For a caslib connection, the data source options HADOOPJARPATH= and HADOOPCONFIGDIR= should be used.

SAS/ACCESS Interface to Informix

SAS/ACCESS Interface to Informix uses an ODBC client (from Progress DataDirect) that is included in your install. By default, the Informix connector is set up for non-encrypted DSN-less connections. If you use quotation marks in your Informix SQL statements, set the DELIMIDENT attribute to DELIMIDENT=YES or Informix might reject your statements.

DELIMIDENT=$(YES_OR_NO)

To reference a DSN in your connection, follow the instructions in ODBC configuration.

SAS/ACCESS Interface to JDBC

You must make your JDBC client and configuration file(s) available to SAS/ACCESS Interface to JDBC on a PersistentVolume or mounted storage.

SAS/ACCESS Interface to MongoDB

The SAS/ACCESS Interface to MongoDB requires the MongoDB C API client library (libmongoc). The MongoDB C shared library must be accessible from a PersistentVolume, and the full path to the library must be set using the MONGODB variable.

MONGODB=$(PATH_TO_MONGODB_LIBS)

SAS/ACCESS Interface to Microsoft SQL Server

SAS/ACCESS Interface to Microsoft SQL Server uses an ODBC client (from Progress DataDirect), which is included in your install. By default, the SQL Server connector is set up for non-encrypted DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.

Connecting to Microsoft Azure SQL Database or Microsoft Azure Synapse

When connecting to Microsoft Azure SQL Database or Microsoft Azure Synapse, add the option

EnableScrollableCursors=4

to your DSN configuration in the odbc.ini file, or include it in the CONNECT_STRING libname option or the CONOPTS caslib option.

Bulk-Loading

Bulk-loading is initiated by setting the EnableBulkLoad connection option. Here is an example:

EnableBulkLoad=4

This option can be set in your DSN (odbc.ini file) or with the CONNECT_STRING libname option for DSN-less connections. When connecting via a caslib, use the CONOPTS option for a DSN-less connection.

TLS/SSL server authentication

Depending on how your database administrator has configured the SQL instance, you might need a valid truststore configured for the TLS/SSL connections to Microsoft SQL Server. Failure to specify a valid truststore may result in the following error when connecting through SAS/ACCESS:

ERROR: CLI error trying to establish connection: [SAS][ODBC SQL Server Wire Protocol driver]Cannot load trust store. SSL Error

You can specify a truststore through the DSN definition in odbc.ini or in the CONNECT_STRING libname option for DSN-less connections:

TrustStore=/security/trustedcerts.pem

You may also choose to have the ODBC client ignore the TrustStore by specifying the following option in the odbc.ini or CONNECT_STRING:

ValidateServerCertificate=0

With this option, the ODBC client does not validate the server certificate with the TrustStore contents.

SAS/ACCESS Interface to MySQL

The SAS/ACCESS Interface to MySQL requires the MySQL C API client (libmysqlclient). The MySQL C API client must be accessible from a PersistentVolume, and the full path to the library must be set using the MYSQL variable.

MYSQL=$(PATH_TO_MYSQL_LIBS)

SAS/ACCESS Interface to Netezza

SAS/ACCESS Interface to Netezza requires the ODBC driver for Netezza. The IBM Netezza ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. The NETEZZA variable must be set to the full path of the shared library so that the Netezza driver can be loaded dynamically at run time. IBM’s Netezza client package may contain a “linux-64.tar.gz” archive that contains older files that can conflict with other SAS/ACCESS products. SAS recommends that the following files and symbolic links not be included in the Netezza library path:

* libk5crypto.so.*
* libkrb5.so.*
* libkrb5support.so.*

The libcom_err.so.* files/links must be included.

NETEZZA=$(PATH_TO_NETEZZA_LIBS)

To reference a DSN in your connection, follow the instructions in ODBC configuration.

SAS/ACCESS Interface to ODBC

To configure your ODBC driver to work with SAS/ACCESS Interface to ODBC, follow the instructions in ODBC configuration.

SAS/ACCESS Interface to Oracle

SAS/ACCESS Interface to Oracle uses the Oracle Instant Client, which is included in your install. If you intend to colocate optional Oracle configuration files such as tnsnames.ora, sqlnet.ora or ldap.ora with the Oracle Instant Client, then you must make these files available on a PersistentVolume or mounted storage and set the environment variable TNS_ADMIN to the directory name where these files are located.

TNS_ADMIN=$(PATH_TO_TNS_ADMIN)

If you plan to use a different version of the Oracle Instant Client from the one provided, you must add the following Oracle properties to the sas-access.properties file and apply the change with the kustomize tool.

ORACLE=$(PATH_TO_ORACLE_LIBS)
ORACLE_BIN=$(PATH_TO_ORACLE_BIN)

SAS/ACCESS Interface to the PI System

The SAS/ACCESS Interface to the PI System uses the PI System Web API, so no PI System client software needs to be installed. However, the PI System Web API (PI Web API 2015-R2 or later) must be installed and activated on the host machine to which the user connects.

SSL Certificate

HTTPS requires an SSL (Secure Sockets Layer) certificate to authenticate with the host. Prior to the LIBNAME statement, set the location of the certificate file in a SAS session by using the “options set” command. The syntax is as follows:

options set=SSLCALISTLOC "/usr/mydir/root.pem";

SAS/ACCESS Interface to PostgreSQL

SAS/ACCESS Interface to PostgreSQL uses an ODBC client, which is included in your install. By default, the PostgreSQL connector is set up for DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.

SAS/ACCESS Interface to R/3

The SAS/ACCESS Interface to R/3 requires the SAP NetWeaver RFC Library. The SAP NetWeaver RFC Library must be accessible from a PersistentVolume, and the full path to the library must be set using the R3 variable.

R3=$(PATH_TO_R3_LIBS)

Additional required post-installation tasks are described in Post-Installation Instructions for SAS/ACCESS 9.4 Interface to R/3.

SAS/ACCESS Interface to Salesforce

There are no configuration steps required. SAS/ACCESS Interface to Salesforce connects to Salesforce using version 46.0 of its SOAP API.

SAS/ACCESS Interface to SAP ASE

The SAS/ACCESS Interface to SAP ASE requires the SAP ASE shared libraries. The SAP ASE shared libraries must be accessible from a PersistentVolume, and the full path to the libraries must be set using the SYBASELIBS variable. The SYBASE variable must also be set to the full path of the SAP ASE (Sybase) installation directory, and the SYBASE_BIN variable must be set to the SAP ASE installation bin directory.

SYBASE=$(PATH_TO_SAPASE_INSTALLATION_DIR)
SYBASELIBS=$(PATH_TO_SAPASE_LIBS)
SYBASE_BIN=$(PATH_TO_SAPASE_BIN_DIRECTORY)

Here are optional SAP ASE (Sybase) environment variables that you may want to consider setting:

SYBASE_OCS=$(SAPASE_HOME_DIRECTORY_NAME)
DSQUERY=$(NAME_OF_TARGET_SERVER)

Installing SAP ASE Procedures

The SAP ASE administrator or user must install two SAP ASE (Sybase) stored procedures on the target SAP server. These files are available in a compressed TGZ archive for download from the SAS Support site at https://support.sas.com/downloads/package.htm?pid=2458.

SAS/ACCESS Interface to SAP HANA

SAS/ACCESS Interface to SAP HANA requires the ODBC driver for SAP HANA. The SAP HANA ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. The HANA variable must be set to the full path of the shared library so that the SAP HANA driver can be loaded dynamically at run time.

HANA=$(PATH_TO_HANA_LIBS)

To configure a TLS/SSL connection to SAP HANA, two additional environment variables are required: SECUDIR and SAPCRYPTO_LIB.

SECUDIR=$(PATH_TO_HANA_SECUDIR)
SAPCRYPTO_LIB=$(PATH_TO_SAPCRYPTO_LIB)

To reference a DSN in your connection, follow the instructions in ODBC configuration.

SAS/ACCESS Interface to SAP IQ

The SAS/ACCESS Interface to SAP IQ requires the SAP IQ shared libraries. The SAP IQ shared libraries must be accessible from a PersistentVolume, and the full path to the libraries must be set using the SAPIQ variable. The IQDIR16 variable must also be set to the full path of the SAP IQ installation directory, and the SAPIQ_BIN variable must be set to the SAP IQ installation bin directory.

IQDIR16=$(PATH_TO_SAPIQ_INSTALLATION_DIR)
SAPIQ=$(PATH_TO_SAPIQ_LIBS)
SAPIQ_BIN=$(PATH_TO_SAPIQ_BIN_DIRECTORY)

SAS/ACCESS Interface to SingleStore

There are no additional configuration steps required.

SAS/ACCESS Interface to Snowflake

SAS/ACCESS Interface to Snowflake uses an ODBC client, which is included in your install. To reference a DSN in your connection, follow the instructions in ODBC configuration.

SAS/ACCESS Interface to Spark

You must make your Hadoop JARs and configuration file available to SAS/ACCESS Interface to Spark on a PersistentVolume or mounted storage. After your SAS Viya platform software is deployed, set the options SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH within your SAS program to point to this location. SAS does not recommend setting these as environment variables within your sas-access.properties file, as they would then be used for any connections from your Viya platform cluster. Instead, within your SAS program, use:

options set=SAS_HADOOP_JAR_PATH=$(PATH_TO_HADOOP_JARs);
options set=SAS_HADOOP_CONFIG_PATH=$(PATH_TO_HADOOP_CONFIG);

Connecting to Databricks

SAS redistributes the CData JDBC driver for Databricks, so there is no need to configure an external JDBC driver.

SAS supports bulk loading to Databricks via ADLS when running on Azure. Refer to SAS/ACCESS to Spark documentation for more information on Azure bulk loading to Databricks.

SAS/ACCESS Interface to Teradata

SAS/ACCESS Interface to Teradata requires the Teradata Tools and Utilities (TTU) shared libraries. The TTU libraries must be accessible from a PersistentVolume, and the TERADATA variable must be set to the full path of the TTU libraries.

The SAS Viya platform provides an internal Teradata client by default, but that client does not include TD Wallet. Customers who want to use TD Wallet can apply a kustomization in the sas-access.properties file.

TERADATA=$(PATH_TO_TERADATA_LIBS)

Ensure that the Teradata client encoding is set to UTF-8 in the clispd.dat file. The two lines in the clispd.dat file that need to be set are:

charset_type=N
charset_id=UTF8

Set the COPLIB environment variable to the location of the updated clispd.dat file.

COPLIB=$(TERADATA_COPLIB)

SAS/ACCESS Interface to Vertica

SAS/ACCESS Interface to Vertica requires the ODBC driver for Vertica. The Vertica ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. The VERTICA variable must be set to the full path of the shared library so that the Vertica driver can be loaded dynamically at run time. Also, the VERTICAINI attribute must be set to point to the vertica.ini file on your PersistentVolume. The SAS Viya platform provides an internal Vertica ODBC driver by default; customers can apply a kustomization in the sas-access.properties file.

VERTICA=$(PATH_TO_VERTICA_LIBS)
VERTICAINI=$(PATH_TO_VERTICA_ODBCINI)

Also, the driver manager encoding defined in the vertica.ini file should be set to UTF-8.

DriverManagerEncoding=UTF-8

To reference a DSN in your connection, follow the instructions in ODBC configuration.

SAS/ACCESS Interface to Yellowbrick

SAS/ACCESS Interface to Yellowbrick uses an ODBC client, which is included in your install. By default, the Yellowbrick connector is set up for DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.

Bulk-Loading

SAS/ACCESS Interface to Yellowbrick can use the Yellowbrick bulk loader (ybload) and bulk unloader (ybunload) to move large volumes of data. To perform bulk loading, set the following data set options:

BULKLOAD=YES
BL_YB_PATH='path-to-tool-location'

These tools must be accessible from a PersistentVolume.
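
For example, a DATA step load using these options might look like the following sketch. The libref, table names, and tool path are placeholder assumptions; only BULKLOAD= and BL_YB_PATH= come from the list above.

/* yb is a hypothetical, previously assigned Yellowbrick libref */
data yb.sales_target (bulkload=yes bl_yb_path="/mnt/access-clients/yellowbrick/bin");
   set work.sales;
run;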

Enabling Data Connector Ports

The publishDCServices key enables network connections between CAS and supported databases, such as Teradata and Hadoop, so that data can be transferred in parallel between the database nodes and the CAS nodes. Parallel data transfer is provided by the SAS Data Connector Accelerator for Hadoop and SAS Data Connector Accelerator for Teradata products.

Edit the base kustomization.yaml file in your $deploy directory to add the following lines.

transformers:
...
- sas-bases/overlays/data-access/enable-dc-ports.yaml

Enabling SAS Embedded Process Continuous Session Ports

The publishEPCSService key enables the execution of the SAS Embedded Process for Spark Continuous Session (EPCS) in the Kubernetes cluster. EPCS is an instantiation of a long-lived SAS Embedded Process session on a cluster that can serve one CAS session. EPCS provides tight integration between CAS and Spark by processing multiple execution requests without starting and stopping the SAS Embedded Process for Spark for each request.

Users can improve system performance by using the EPCS and the SAS Data Connector to Hadoop to perform multiple actions within the same CAS session. Users can also use the EPCS to run models in Spark.

Edit the base kustomization.yaml file in your $deploy/site-config directory to add the following lines.

transformers:
...
- sas-bases/overlays/data-access/enable-epcs-port.yaml

Additional Resources

For information about PersistentVolumes, see Persistent Volumes.

Configure SAS/CONNECT Spawner in the SAS Viya Platform

Overview

This README describes how to customize your SAS Viya platform deployment to use SAS/CONNECT Spawner.

Installation

SAS provides example and overlay files for customizations. Read the descriptions of the available tasks in the following sections. If you want to perform a task to customize your deployment, follow the instructions in that section.

Disable Cloud Native Mode

Perform these steps if cloud native mode should be disabled in your environment.

  1. Add the following code to the configMapGenerator block of the base kustomization.yaml file:

    ```
    ...
    configMapGenerator:
    ...
    - name: sas-connect-spawner-config
      behavior: merge
      literals:
        - SASCLOUDNATIVE=0
    ...
    ```
    
  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Enable System Security Services Daemon (SSSD) Container

Perform these steps if SSSD is required in your environment.

  1. Add sas-bases/overlays/sas-connect-spawner/add-sssd-container-transformer.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).

Important: This line must come before any network transformers (that is, transformers starting with “- sas-bases/overlays/network/”) and the required transformer “- sas-bases/overlays/required/transformers.yaml”. Note that your configuration may not have network transformers if security is not configured.

Here is an example for Full-stack TLS. If you are using a different version of TLS, or no TLS at all, the network transformers may be different or not present.

  ```
  ...
  transformers:
  ...
  - sas-bases/overlays/sas-connect-spawner/add-sssd-container-transformer.yaml
  # The following lines are provided as a location reference; do not add them if they do not already appear.
  - sas-bases/overlays/network/ingress/security/transformers/product-tls-transformers.yaml
  - sas-bases/overlays/network/ingress/security/transformers/ingress-tls-transformers.yaml
  - sas-bases/overlays/network/ingress/security/transformers/backend-tls-transformers.yaml
  # The following line is provided as a location reference; it should appear only once and must not be duplicated.
  - sas-bases/overlays/required/transformers.yaml
  ...
  ```
  2. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Add a Custom Configuration for System Security Services Daemon (SSSD)

Use these steps to provide a custom SSSD configuration to handle user authorization in your environment.

  1. Copy the $deploy/sas-bases/examples/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml file to $deploy/site-config/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml.

  2. Modify the copied file according to the comments in it.

  3. Add site-config/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml and sas-bases/overlays/sas-connect-spawner/ext-sssd-volume-transformer.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ```
    ...
    transformers:
    ...
    - site-config/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml
    - sas-bases/overlays/sas-connect-spawner/ext-sssd-volume-transformer.yaml
    ...
    ```
    
  4. Copy your custom sssd configuration file to $deploy/site-config/sas-connect-spawner/external-sssd-config/sssd.conf.

  5. Add the following code to the secretGenerator block of the base kustomization.yaml file:

    ```
    ...
    secretGenerator:
    ...
    - name: sas-sssd-config
      files:
        - SSSD_CONF=site-config/sas-connect-spawner/external-sssd-config/sssd.conf
      type: Opaque
    ...
    ```
    
  6. Deploy the software using the commands in SAS Viya Platform: Deployment Guide.

Provide External Access to sas-connect-spawner via a Load Balancer

LoadBalancer assigns an IP address for the SAS/CONNECT Spawner and allows the standard port number to be used.

  1. Copy the $deploy/sas-bases/examples/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-loadbalancer.yaml file to $deploy/site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-loadbalancer.yaml.

  2. Modify the copied file according to the comments in it.

  3. Add a reference to the copied file to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:

    ```
    ...
    resources:
    ...
    - site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-loadbalancer.yaml
    ...
    ```
    
  4. Deploy the software as described in SAS Viya Platform: Deployment Guide.

  5. Refer to External Client Sign-On to TLS-Enabled SAS Viya SAS/CONNECT Spawner when LoadBalancer is configured.

Provide External Access to sas-connect-spawner via a NodePort

NodePort assigns a port and routes traffic from that port to the SAS/CONNECT Spawner. Select a value from the allowed nodePort range and assign it in the YAML file. If the selected port is already in use or falls outside the allowable nodePort range, the SAS/CONNECT Spawner does not start.

  1. Copy the $deploy/sas-bases/examples/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-nodeport.yaml file to $deploy/site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-nodeport.yaml.

  2. Modify the copied file according to the comments in it.

  3. Add a reference to the copied file to the resources block of the base kustomization.yaml file. Here is an example:

    ```
    ...
    resources:
    ...
    - site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-nodeport.yaml
    ...
    ```
    
  4. Deploy the software as described in SAS Viya Platform: Deployment Guide.

  5. Refer to External Client Sign-On to TLS-Enabled SAS Viya SAS/CONNECT Spawner when NodePort is configured.

Additional Resources

For more information about configurations and using example and overlay files, see SAS Viya Platform: Deployment Guide.




Configuring SAS RFC Solution Workloads

Overview

SAS RFC Solution Workloads provides dynamic management of SAS Business Orchestration projects. It passes messages from a client library through brokers to a Kubernetes controller, enabling projects to be deployed and managed dynamically without administrator intervention.

SAS RFC Solution Workloads requires a running RabbitMQ broker. The instructions in this README describe how to configure the software.

SAS RFC Solution Workloads must be configured if you deploy SAS Business Orchestration projects directly from the UI application.

Instructions

Configure with Initial SAS Viya Platform Deployment

To configure SAS RFC Solution Workloads as part of the initial deployment of SAS Viya platform:

  1. Copy the files in the $deploy/sas-bases/examples/sas-rfc-solution-workloads/install directory to the $deploy/site-config/sas-rfc-solution-workloads/install directory. Create the destination directory if it does not exist.

  2. If you are installing SAS RFC Solution Workloads with SAS Viya platform, add $deploy/site-config/sas-rfc-solution-workloads/install to the resources block of the base kustomization.yaml file. Here is an example:

    resources:
    ...
    - site-config/sas-rfc-solution-workloads/install
    ...
  3. Update the $deploy/site-config/sas-rfc-solution-workloads/install/kustomization.yaml file by replacing the variables with the appropriate values for secrets.

  4. Update the $deploy/site-config/sas-rfc-solution-workloads/install/namespace.yaml file by replacing the {{ NAMESPACE }} value.

  5. Update the $deploy/site-config/sas-rfc-solution-workloads/install/settings.properties file by replacing the variables with the appropriate values for settings properties.

    Specifying an ingress host creates ingress objects for the workloads deployments. Administrators must patch those ingresses or create their own for TLS support.

  6. Update the $deploy/site-config/sas-rfc-solution-workloads/install/runtime.properties file by replacing the variables with the appropriate values for runtime properties.

  7. Update the $deploy/site-config/sas-rfc-solution-workloads/install/runtime-secrets.properties file by replacing the variables with the appropriate values for runtime secrets.

  8. Review the $deploy/site-config/sas-rfc-solution-workloads/install/rbac.yaml file to ensure that the role-based access controls are acceptable.

  9. Update the $deploy/site-config/sas-rfc-solution-workloads/install/deployment.yaml file as instructed by the comments in the file.

  10. Update the $deploy/site-config/sas-rfc-solution-workloads/install/image-pull-secrets.yaml file by replacing the variables with the appropriate values for the imagePullSecrets.

    The imagePullSecret can be found using the SAS Viya platform Kustomize build command:
    
    ```shell
    kustomize build . > site.yaml
    grep '.dockerconfigjson:' site.yaml
    ```
    
    Alternatively, if SAS Viya platform has already been deployed, the imagePullSecret can be found with the kubectl command:
    
    ```shell
    kubectl -n {{ NAMESPACE }} get secret --field-selector=type=kubernetes.io/dockerconfigjson -o yaml | grep '.dockerconfigjson:'
    ```
    
    The output is .dockerconfigjson: <SECRET>.  Replace the {{ IMAGE_PULL_SECRET }} variables with the <SECRET> value returned by the command above.
    
    Replace the {{ NAMESPACE }} values.
    
  11. Update the $deploy/site-config/sas-rfc-solution-workloads/install/validate-properties.json file by adding the rules to be enforced.

    Administrators can enforce values that are specified in the project yaml file by using an accept list. This feature can be used to lock
    down ports, file paths, or any other property values. The example below shows how to restrict logging levels in components to INFO or DEBUG.
    
    ```json
    {
       "rules": [
          {
                "key": "level",
                "acceptList": [
                   "I.*",
                   "D.*"
                ],
                "isKeyRegex": false,
                "isValueRegex": true
          },
          {
                "key": "workloads[0].flows[0].processors[0].log.level",
                "acceptList": [
                   "INFO",
                   "DEBUG"
                ],
                "isKeyRegex": false,
                "isValueRegex": false
          },
          {
                "key": "leve.*",
                "acceptList": [
                   "INFO",
                   "DEBUG"
                ],
                "isKeyRegex": true,
                "isValueRegex": false
          },
          {
                "key": "level",
                "acceptList": [
                   "INFO",
                   "DEBUG"
                ],
                "isKeyRegex": false,
                "isValueRegex": false
          }
       ]
    }
    ```
    
  12. Deploy the software.

Configure after the Initial Deployment

Alternatively, SAS RFC Solution Workloads can be installed separately from the SAS Viya platform. Complete steps 1-10 in Configure with Initial SAS Viya Platform Deployment. Then complete the following steps:

  1. Update the image values that are contained in the $deploy/site-config/sas-rfc-solution-workloads/install/deployment.yaml file.

    In that file, revise the value “sas-rfc-solution-workloads” to include the registry server, relative path, name, and tag. The registry server and relative path are the same as those of other images delivered with the SAS Viya platform.

    The name of the container is ‘sas-rfc-solution-workloads’. The registry relative path, name, and tag values are found in the sas-components-* configmap in the SAS Viya platform deployment.

    Perform the following commands to determine the appropriate information. When you have the information, add it to the appropriate places in the deployment.yaml file.

    # generate the site.yaml file
    kustomize build . > site.yaml
    
    # get the sas-rfc-solution-workloads registry information
    grep 'sas-rfc-solution-workloads:' site.yaml | grep -v -e "VERSION" -e 'image'
    
    # manually update the sas-rfc-solution-workloads-controller images using the gathered information: <container registry>/<container relative path>/sas-rfc-solution-workloads:<container tag>
    
    # apply the site.yaml file
    kubectl apply -f site.yaml

    Perform the following commands to get the required information from a running SAS Viya platform deployment.

    
    # get the registry server, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
    kubectl -n {{ NAMESPACE }} get deployment sas-readiness -o yaml | grep -e "image:.*sas-readiness" | sed -e 's/image: //g' -e 's/\/.*//g'  -e 's/^[ \t]*//'
      <container registry>
    
    # get registry relative path and tag, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
    CONFIGMAP="$(kubectl -n {{ NAMESPACE }} get cm | grep sas-components | tr -s '' | cut -d ' ' -f1)"
    kubectl -n {{ NAMESPACE }} get cm "$CONFIGMAP" -o yaml | grep 'sas-rfc-solution-workloads:' | grep -v "VERSION"
       SAS_COMPONENT_RELPATH_sas-business-orchestration-worker: <container relative path>/sas-rfc-solution-workloads
       SAS_COMPONENT_TAG_sas-rfc-solution-workloads: <container tag>
  2. Perform the following commands:

    kustomize build $deploy/site-config/sas-rfc-solution-workloads/install > sas-rfc-solution-workloads.yaml
    kubectl apply -f sas-rfc-solution-workloads.yaml

Run the SAS AML Provisioning Job after the SAS Viya platform deployment is complete. The SAS Anti-Money Laundering onboarding process determines which onboarding steps need to be performed and runs them. The provisioning job logs its progress in the internal SAS Viya database and, should any errors occur, resumes from the last completed task on restart.

If there is a misconfiguration or if there are changes in the configuration that are external to the SAS Viya platform, administrators can specify specific steps to run.

For instructions and information about SAS Anti-Money Laundering provisioning and configuration, see: https://documentation.sas.com/?softwareId=compcdc&softwareVersion=viya4&softwareContextId=provisioning.
