Generated: 04/21/2025 Order Number: XXXXX Version: Stable 2025.04 Release: 20250421.1745214394396

This page contains all the README files for the SAS Viya platform. When you purchase the SAS Viya platform, you receive a subset of README files specific to your order and cadence.
SAS strongly recommends that you refer to that subset when deploying the SAS Viya platform. For more information, see SAS Viya Platform Operations Guide.
All of the information in the READMEs and in the SAS Viya Platform Operations Guide assumes that you have met all System Requirements for the SAS Viya platform.
If you have any feedback on the contents of these README files, please email the SAS Documentation Feedback team.
The sas-orchestration image includes several tools that help deploy and manage the software. It includes a lifecycle command that can run various lifecycle operations, as well as the recommended versions of both kustomize and kubectl. These latter tools can be used with the --entrypoint option that is available on both the Docker and Podman container runtime CLIs.
Note: The examples use Docker, but the Podman container engine can also be used.
Note: All examples below are auto-generated based on your order.
To run the sas-orchestration image, Docker must be installed.
Pull the sas-orchestration
image:
docker pull cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.141.0-20250403.1743683531608
Replace 'cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.141.0-20250403.1743683531608' with a local tag for ease of use in the examples that will follow:
docker tag cr.sas.com/viya-4-x64_oci_linux_2-docker/sas-orchestration:1.141.0-20250403.1743683531608 sas-orchestration
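As an optional quick check (not part of the documented workflow), you can invoke one of the bundled tools through the --entrypoint option to confirm that the local tag works. kubectl version --client runs without contacting a cluster:

# Optional: confirm the bundled kubectl runs from the local tag
docker run --rm --entrypoint kubectl sas-orchestration version --client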
The examples that follow assume:

- $deploy refers to the directory that will contain the deployment assets. To download the deployment assets for an order from my.sas.com, go to http://my.sas.com, log in, find your order, and select Download Deployment Assets. Extract the downloaded tarball to $deploy.
- A kubeconfig file named config exists in /home/user/kubernetes. The kubeconfig file defines the cluster the lifecycle operations will connect to.
- The $deploy directory is the current working directory. cd to $deploy and use $(pwd) to mount the current directory into the container.

The lifecycle command executes deployment-wide operations over the assets deployed from an order.
See the README file at $deploy/sas-bases/examples/kubernetes-tools/README.md (for Markdown) or $deploy/sas-bases/docs/using_kubernetes_tools_from_the_sas-orchestration_image.htm (for HTML) for lifecycle operation documentation.
Docker uses the following options:

- -v to mount the directories
- -w to define the working directory
- -e to define the needed environment variables

Lifecycle command documentation:

- $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/start-all/README.md
- $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/stop-all/README.md
The list sub-command displays the available operations of a deployment.

lifecycle list example:

cd $deploy
docker run --rm \
-v "$(pwd):/cwd" \
-v /home/user/kubernetes:/kubernetes \
-e "KUBECONFIG=/kubernetes/config" \
-w /cwd \
sas-orchestration \
lifecycle list --namespace {{ NAME-OF-NAMESPACE }}
The run sub-command runs a given operation. Arguments before -- indicate the operation to run and how lifecycle should locate the operation's definition. Arguments after -- apply to the operation itself, and may vary between operations.

lifecycle run example:

cd $deploy
docker run --rm \
-v "$(pwd):/cwd" \
-v /home/user/kubernetes:/kubernetes \
-e "KUBECONFIG=/kubernetes/config" \
sas-orchestration \
lifecycle run \
--operation assess \
--deployment-dir /cwd \
-- \
--manifest /cwd/site.yaml \
--namespace {{ NAME-OF-NAMESPACE }}
As indicated in the example, the run sub-command needs an operation (--operation) and the location of your assets (--deployment-dir). The assess lifecycle operation needs a manifest (--manifest) and the Kubernetes namespace to assess (--namespace). To connect to and assess the Kubernetes cluster, the KUBECONFIG environment variable is set on the container (-e).
To see all possible assess operation arguments, run assess with the --help flag:
docker run --rm \
-v "$(pwd):/cwd" \
sas-orchestration \
lifecycle run \
--operation assess \
--deployment-dir /cwd/sas-bases \
-- \
--help
The example assumes that the $deploy directory contains the kustomization.yaml
and supporting files. Note that the kustomize
call here is a simple example.
Refer to the deployment documentation for full usage details.
cd $deploy
docker run --rm \
-v "$(pwd):/cwd" \
-w /cwd \
--entrypoint kustomize \
sas-orchestration \
build . > site.yaml
This example assumes a site.yaml manifest file exists in $deploy. See the SAS Viya Platform Deployment Guide for instructions on how to create the site.yaml manifest.
Note: The kubectl call here is a simple example. Refer to the deployment documentation for full usage details.
cd $deploy
docker run --rm \
-v "$(pwd):/cwd" \
-v /home/user/kubernetes:/kubernetes \
-w /cwd \
--entrypoint kubectl \
sas-orchestration \
--kubeconfig=/kubernetes/config apply -f site.yaml
The assess lifecycle operation assesses an undeployed manifest file for its eventual use in a cluster. For general lifecycle operation execution details, please see the README file at $deploy/sas-bases/examples/kubernetes-tools/README.md (for Markdown) or $deploy/sas-bases/docs/using_kubernetes_tools_from_the_sas-orchestration_image.htm (for HTML).

Note: $deploy refers to the directory containing the deployment assets.
The following example assumes:

- $deploy refers to the directory that contains the deployment assets. To download the deployment assets for an order from my.sas.com, go to http://my.sas.com, log in, find your order, and select Download Deployment Assets. Extract the downloaded tarball to $deploy.
- A site.yaml manifest file exists in $deploy. See the SAS Viya Platform: Deployment Guide for instructions on how to create the site.yaml manifest.
- A kubeconfig file named config exists in /home/user/kubernetes. The kubeconfig file defines the cluster the lifecycle operations will connect to.
- The $deploy directory is the current working directory. cd to $deploy and use $(pwd) to mount the current directory into the container.
- {{ NAME-OF-NAMESPACE }} is the namespace where the SAS Viya platform deployment described by the manifest file being assessed will be located.

cd $deploy
docker run --rm \
-v "$(pwd):/cwd" \
-v /home/user/kubernetes:/kubernetes \
-e "KUBECONFIG=/kubernetes/config" \
sas-orchestration \
lifecycle run \
--operation assess \
--deployment-dir /cwd \
-- \
--manifest /cwd/site.yaml \
--namespace {{ NAME-OF-NAMESPACE }}
Note: To see the commands that would be executed from the operation without
making any changes to the cluster, add -e "DISABLE_APPLY=true"
to the container.
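For example, here is the same assess run shown above with only that environment variable added:

cd $deploy
docker run --rm \
  -v "$(pwd):/cwd" \
  -v /home/user/kubernetes:/kubernetes \
  -e "KUBECONFIG=/kubernetes/config" \
  -e "DISABLE_APPLY=true" \
  sas-orchestration \
  lifecycle run \
  --operation assess \
  --deployment-dir /cwd \
  -- \
  --manifest /cwd/site.yaml \
  --namespace {{ NAME-OF-NAMESPACE }}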
You can stop and start your SAS Viya platform deployment by using CronJobs or by applying transformers. For details about both methods, see Starting and Stopping a SAS Viya Platform Deployment.
In order to schedule the start-all CronJob, use the start-stop example in $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/schedule-start-stop.
You can stop and start your SAS Viya platform deployment by using CronJobs or by applying transformers. For details about both methods, see Starting and Stopping a SAS Viya Platform Deployment.
In order to schedule the stop-all CronJob, use the start-stop example in $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/schedule-start-stop.
The start-all and stop-all CronJobs can be run on a schedule using the example file in $deploy/sas-bases/examples/kubernetes-tools/lifecycle-operations/schedule-start-stop. Copy the schedule-start-stop.yaml file into the site-config directory and revise it to insert a schedule for start-all and another schedule for stop-all. Add a reference to the file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
transformers:
...
site-config/kubernetes-tools/lifecycle-operations/schedule-start-stop/schedule-start-stop.yaml
Note: This file should be included after the line
- sas-bases/overlays/required/transformers.yaml
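After the deployment is rebuilt and applied, one way to confirm that the schedules were registered is to list the CronJobs in the namespace. The names returned depend on what your schedule file creates, so treat the filter below as an assumption:

# List CronJobs and filter for the start/stop entries (names may differ in your deployment)
kubectl -n {{ NAME-OF-NAMESPACE }} get cronjobs | grep -Ei 'start-all|stop-all'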
The $deploy/sas-bases/examples/deployment-operator directory contains files for deploying and running the SAS Viya Platform Deployment Operator. For information about what the operator is and how to deploy and run it, see the SAS Viya Platform Deployment Guide.
To remove SAS Viya Platform Deployment Operator management of updates to a SAS
Viya platform deployment, you must disassociate the deployment from the
SASDeployment custom resource and then delete the SASDeployment custom resource.
The
$deploy/sas-bases/examples/deployment-operator/disassociate/disassociate-deployment-operator.sh
script performs these actions.
Running the script requires bash, kubectl, and jq. SAS recommends that you save the current SASDeployment custom resource before executing the script because the script deletes it.
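One way to save the custom resource is to export it with kubectl before running the script. The output file name below is arbitrary:

# Back up the SASDeployment custom resource (file name is an example)
kubectl -n <name-of-namespace> get sasdeployment -o yaml > sasdeployment-backup.yaml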
First, make the script executable with the following command.
chmod 755 ./disassociate-deployment-operator.sh
Then execute the script, specifying the namespace which contains the SASDeployment custom resource.
./disassociate-deployment-operator.sh <name-of-namespace>
The script removes the SASDeployment ownerReference from the .metadata.ownerReferences field and the kubectl.kubernetes.io/last-applied-configuration annotation in all resources in the namespace. It then removes the SASDeployment custom resource. The SAS Viya platform deployment is otherwise unchanged.
Note: Running the disassociate script might cause the following message to be displayed. This message can be safely ignored.
Warning: path <API-path-for-URLs> cannot be used with pathType Prefix
If you want to use the SAS Viya Platform Deployment Operator for this SAS Viya platform deployment in the future, a SASDeployment custom resource can be reintroduced into the namespace. See the SAS Viya Platform: Deployment Guide for details.
A mirror registry is a local registry of the software necessary to create your deployment. For the SAS Viya platform, a mirror registry is created with SAS Mirror Manager.
For more information about mirror repositories and SAS Mirror Manager, see Using a Mirror Registry.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
This overlay is used to apply an additional imagePullSecret. This overlay is required for SAS Viya platform deployments on Red Hat OpenShift version 4.16 and later that use the OpenShift Container Registry as a mirror for their deployment assets.
Use these steps to apply the desired property to your SAS Viya platform deployment.
Create the $deploy/site-config/add-imagepullsecret
directory and copy
$deploy/sas-bases/examples/add-imagepullsecret/configuration.env
into it.
Define the property in the configuration.env file. To define the property, update its token value as described in the comments in the file.
Add the following path to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml
):
...
resources:
...
- sas-bases/overlays/add-imagepullsecret/resources
...
Add the following entry to the configMapGenerator block of the base kustomization.yaml file:
...
configMapGenerator:
...
- behavior: merge
  name: add-imagepullsecret-configuration
  envs:
    - site-config/add-imagepullsecret/configuration.env
...
Add the following entry to the transformers block of the base kustomization.yaml file:
...
transformers:
...
- sas-bases/overlays/add-imagepullsecret/transformers.yaml
...
The sitedefault.yaml file specifies configuration properties that will be written to the Consul key value store when the sas-consul-server is started.
Each property in the sitedefault.yaml file will be written to the Consul key value store if it does not already exist.
Example:
The following properties specify the configuration for the LDAP provider and base points from which to search for groups and users.
sas.identities.providers.ldap:
  connection:
    host: ldap.example.com
    password:
    port: 3269
    url: ldaps://${sas.identities.providers.ldap.connection.host}:${sas.identities.providers.ldap.connection.port}
    userDN: cn=AdminUser,dc=example,dc=com
  group:
    baseDN: ou=groups,dc=example,dc=com
  user:
    baseDN: DC=example,DC=com
Caution: The example requires a value for a password. Due to security concerns for providing a value for the required password field, an alternative method is described. Using the sitedefault file to set LDAP properties is not required because an administrator can set the LDAP connection using SAS Environment Manager.
Copy the sitedefault.yaml file from $deploy/sas-bases/examples/configuration
to the site-config directory.
In the file you just copied, provide the values you want to use for your deployment as described in the “Properties” section below.
After you have entered the values for your deployment, revise the base kustomization.yaml file as described in “Add a sitedefault File to Your Deployment”.
Note: There will be an LDAP AuthenticationException in the log for the identities service. It can be safely ignored if you follow the remaining steps.
Log in to SAS Environment Manager as sasboot.
Using SAS Environment Manager, replace the temporary values you used for the ldap.connection password and userDN with real values.
When the changes are picked up by SAS Environment Manager, select the SAS Administrators group under Custom Groups to see the LDAP users. You can add any LDAP user that is listed as an administrator.
Log out of SAS Environment Manager and log back in as a user that was added in the previous step. Use that user to get administrator privileges.
This section describes the properties associated with Lightweight Directory Access Protocol (LDAP) that can be specified in the sitedefault.yaml file. Any required properties must have a value specified in order to have their defaults applied.
For information about all the properties that can be configured in the sitedefault.yaml file, see “Configuration Properties: Reference (Services)”.
sas.identities.providers.ldap: The set of properties that are used to configure the LDAP provider.
sas.identities.providers.ldap.connection: The set of properties that are used to configure the connection to the LDAP provider.
sas.identities.providers.ldap.connection.host: The LDAP server's host name.
Example: ldap.example.com
sas.identities.providers.ldap.connection.password: The password for logging on to the LDAP server.
Example: tempPassword
Caution: SAS recommends setting the password to a temporary string, such as tempPassword. See the Instructions for post-deployment steps to insert a real password.
sas.identities.providers.ldap.connection.port: The LDAP server's port.
Example: 3269
sas.identities.providers.ldap.connection.url: The URL for connecting to the LDAP server.
Example: ldaps://${sas.identities.providers.ldap.connection.host}:${sas.identities.providers.ldap.connection.port}
sas.identities.providers.ldap.connection.userDN: The distinguished name (DN) of the user account for logging on to the LDAP server.
Example: tempUser
Caution: SAS recommends setting the userDN to a temporary string, such as tempUser. See the Instructions for post-deployment steps to insert a real userDN.
sas.identities.providers.ldap.group: The set of properties that are used to configure information for retrieving group information from the LDAP provider.
sas.identities.providers.ldap.group.baseDN: The point from which the LDAP server searches for groups.
Example: ou=groups,dc=example,dc=com
sas.identities.providers.ldap.user: The set of properties that are used to configure additional information for retrieving user information from the LDAP provider.
sas.identities.providers.ldap.user.baseDN: The point from which the LDAP server searches for users.
Example: DC=example,DC=com
For more information about the sitedefault.yaml file, see “Add a sitedefault File to Your Deployment”.
This directory contains files to Kustomize your SAS Viya platform deployment to use a multi-node SAS Cloud Analytic Services (CAS) server, referred to as MPP.
In order to add this CAS server to your deployment, add a reference to the cas-server
overlay
to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
resources:
- sas-bases/overlays/cas-server
On an MPP CAS Server, the number of workers helps determine the processing power
of your cluster. The server is SMP by default which means there are no workers.
The default number of workers in the cas-server overlay (0) can be modified by
using the cas-manage-workers.yaml
example located in the cas examples directory
at /$deploy/sas-bases/examples/cas/configure
. The number of workers cannot exceed
the number of nodes in your k8s cluster, so ensure that you have enough resources
to accommodate the value you choose.
You can make modifications to the overlay through the use of Patch Transformers. Examples are located in /$deploy/sas-bases/examples/cas/configure, including examples of how to add additional volume mounts and data connectors, modify CAS server resource allocation, and change the default PVC access modes.
To be included in the manifest, any yaml files containing Patch Transformers must also be added to the transformers block of the base kustomization.yaml file:
transformers:
- {{ PATCH-FILE-1 }}
- {{ PATCH-FILE-2 }}
If you have an environment where there are untainted nodes, the Kubernetes scheduler may consider them candidates for the CAS Server. You can use an additional overlay to restrict the scheduling of the CAS server to nodes that have the dedicated label.
The dedicated label is workload.sas.com/class=cas.
The label can be applied to a node with this command:
kubectl label nodes node1 workload.sas.com/class=cas --overwrite
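To confirm which nodes carry the dedicated label, you can list nodes by that label selector:

# Show the nodes currently labeled for CAS
kubectl get nodes -l workload.sas.com/class=cas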
To add the label to the CAS Server,
add sas-bases/overlays/cas-server/require-cas-label.yaml
to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- sas-bases/overlays/cas-server/require-cas-label.yaml
...
Alternatively, you can use the sas-bases/overlays/cas-server/require-cas-label-pools.yaml transformer if your deployment meets all of the following conditions:

- You have a node pool labeled workload.sas.com/class=cascontroller to be used exclusively by the controller.
- You have a node pool labeled workload.sas.com/class=casworker for the CAS workers.

If your deployment meets these conditions, add sas-bases/overlays/cas-server/require-cas-label-pools.yaml to the transformers block of the base kustomization.yaml file. Here is an example:
...
transformers:
...
- sas-bases/overlays/cas-server/require-cas-label-pools.yaml
...
The /$deploy/sas-bases/examples/cas/configure
directory contains a file to
grant Security Context Constraints for fsgroup 1001 on an OpenShift cluster. A
Kubernetes cluster administrator should add these Security Context Constraints
to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the
following commands:
Step 1:
kubectl apply -f cas-server-scc.yaml
or
oc create -f cas-server-scc.yaml
Step 2:
After the SCC has been applied, you must link the SCC to the appropriate ServiceAccount that will use it. Perform the following command which corresponds to the appropriate host launch type:
No host launch: oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-cas-server -z sas-cas-server
Host launch enabled: oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-cas-server-host -z sas-cas-server
Note: If you are enabling host launch, use the SecurityContextConstraint file cas-server-scc-host-launch.yaml instead of cas-server-scc.yaml. This file sets the correct capabilities and privilege escalation.
By default, CAS does not automatically restart during version updates performed
by the SAS Viya Platform Deployment Operator. The default prevents the disruption of active
CAS sessions so that tables do not need to be reloaded. This default behavior can be changed by
applying the cas-auto-restart.yaml
example file located at /$deploy/sas-bases/examples/cas/configure
.
The example applies the autoRestart option to the pod spec.
The deployment operator checks for this option on all existing CAS servers during
software updates, and it automatically restarts servers that are tagged in this way.
Copy the /$deploy/sas-bases/examples/cas/configure/cas-auto-restart.yaml
to the site-config directory.
By default, the target for this patch applies to all CAS servers:
target:
  group: viya.sas.com
  kind: CASDeployment
  name: .*
  version: v1alpha1
To target specific CAS servers, list the CAS servers to which the change should be applied in the name field.
target:
  group: viya.sas.com
  kind: CASDeployment
  name: {{ NAME-OF-SERVER }}
  version: v1alpha1
Add the cas-auto-restart.yaml
file to the transformers section of the base
kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example that
assumes the file was copied to the $deploy/site-config/cas/configure
directory:
transformers:
...
- site-config/cas/configure/cas-auto-restart.yaml
...
In order to validate that the auto-restart option has been enabled for a CAS server, this command may be run:
kubectl -n <name-of-namespace> get pods <cas-server-pod-name> --show-labels
If the label sas.com/cas-auto-restart=true
is visible, then the auto-restart option has been applied successfully.
If you subsequently want to disable auto-restart, remove cas-auto-restart.yaml from your transformers list; this disables auto-restart for any future CAS servers. If you want to disable auto-restart on a CAS server that is already running, run the following command to disable auto-restart for that active server:
kubectl -n <name-of-namespace> label pods --selector=app.kubernetes.io/instance=<cas-deployment-name> sas.com/cas-auto-restart=false
Note: You cannot enable both CAS auto-restart and state transfer in the same SAS Viya platform deployment.
Note: Ideally this option should be set as a pre-deployment task. However, it can be applied to an already running CAS server, but that server must be manually restarted in order for the auto-restart option to be turned on.
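A minimal sketch of such a manual restart follows; it assumes the server's label value is default and that deleting the CAS server pods is an acceptable way to restart it in your environment (the CAS operator re-creates the pods). Active sessions are lost, so perform this only on an idle server:

# Assumption: the server label value is 'default'; adjust for other servers
kubectl -n <name-of-namespace> delete pods -l casoperator.sas.com/server=default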
After you configure Kustomize, continue your SAS Viya platform deployment as documented.
For more information about the difference between SMP and MPP CAS, see What is the CAS Server, SMP, and MPP?.
This README describes how to create additional CAS server definitions with the
create-cas-server.sh
script. The script creates a Custom Resource (CR) that
can be added to your manifest and deployed to the Kubernetes cluster.
Running this script creates all of the artifacts that are necessary for
deploying a new CAS server in the Kubernetes cluster in one directory. The
directory can be referenced in the base kustomization.yaml
.
Note: The script does not modify your Kubernetes cluster. It creates the manifests that you can apply to your Kubernetes cluster to add a CAS server.
Run the create-cas-server.sh
script and specify, at a minimum, the instance
name. The instance name is used to label the server and differentiate it from
the default instance that is provided automatically. The default tenant name
is “shared” and provided automatically when multi-tenancy is not enabled in
your deployment.
./create-cas-server.sh -i {{ INSTANCE }}
The sample command creates a top-level directory cas-{{ TENANT }}-{{ INSTANCE }}
that contains everything that is required for a new CAS server instance. For
example, the directory contains the CR, PVC definitions for the permstore and
data PVs, and so on.
Optional arguments:

- -o: the output location, such as $deploy/site-config. If you do not create the output in that directory, you should move the new directory to $deploy/site-config.

In the base kustomization.yaml file, add the new directory to the resources section so that the CAS server is included when the manifest is rebuilt. This server is fully customizable with the use of patch transformers.
resources:
- site-config/cas-{{ TENANT }}-{{ INSTANCE }}
Deploy your software using the steps in Deploy the Software according to the method you are using.
kubectl get pods -l casoperator.sas.com/server={{ TENANT }}-{{ INSTANCE }}
cas-{{ TENANT }}-{{ INSTANCE }}-controller 3/3 Running 0 1m
kubectl get pvc -l sas.com/cas-instance={{ TENANT }}-{{ INSTANCE }}
NAME STATUS ...
cas-{{ TENANT }}-{{ INSTANCE }}-data Bound ...
cas-{{ TENANT }}-{{ INSTANCE }}-permstore Bound ...
Run the script with more options:
./create-cas-server.sh --instance sample --output . --workers 2 --backup 1
This sample command creates a new directory named cas-shared-sample
in the
current location and creates a new CAS distributed server (MPP) CR with 2
worker nodes and a backup controller.
This document describes the customizations that can be made by the Kubernetes administrator for deploying CAS in both symmetric multiprocessing (SMP) and massively parallel processing (MPP) configurations.
An SMP server requires one Kubernetes node. An MPP server requires one Kubernetes node for the server controller and two or more nodes for server workers. The SAS Viya Platform: Deployment Guide provides information to help you decide. A link to the deployment guide is provided in the Additional Resources section.
SAS provides example files for many common customizations. Read the descriptions
for the example files in the following list. If you want to use an example file
to simplify customizing your deployment, copy the file to your
$deploy/site-config
directory.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ NUMBER-OF-WORKERS }}. Replace the entire variable string, including the braces, with the value you want to use.
After you edit a file, add a reference to it in the transformer block of the
base kustomization.yaml
file.
The example files are located at $deploy/sas-bases/examples/cas/configure
. The
following is a list of each example file for CAS settings and the file name.
- mount non-NFS persistentVolumeClaims and data connectors for the CAS server (cas-add-host-mount.yaml)
  Note: To use hostPath mounts on Red Hat OpenShift, see Enable hostPath Mounts for CAS.
- mount NFS persistentVolumeClaims and data connectors for the CAS server (cas-add-nfs-mount.yaml)
- add a backup controller to an MPP deployment (cas-manage-backup.yaml)
  Note: Do not use this example for an SMP CAS server.
- change the user the CAS process runs as (cas-modify-user.yaml)
- modify the storage size for CAS PersistentVolumeClaims (cas-modify-pvc-storage.yaml)
- manage resources for CPU and memory (cas-manage-cpu-and-memory.yaml)
- modify the CPU overhead that is reserved for other daemonsets and pods (cas-manage-cpu-reserve.yaml)
- modify the resource allocation for ephemeral storage (cas-modify-ephemeral-storage.yaml)
- add a configMap to your CAS server (cas-add-configmap.yaml)
- add environment variables (cas-add-environment-variables.yaml)
- add a configMap with an SSSD configuration (cas-sssd-example.yaml)
  Note: This file has no variables. It is an example of how to create a configMap for SSSD.
- modify the accessModes on the CAS permstore and data PVCs (cas-storage-access-modes.yaml)
- disable the sas-backup-agent sidecar from running (cas-disable-backup-agent.yaml)
- add paths to the file system path allowlist for the CAS server (cas-add-allowlist-paths.yaml)
- enable your CAS Services to be externally accessible (cas-enable-external-services.yaml)
- remove the secure computing mode (seccomp) profile for CAS (cas-disable-seccomp.yaml)
- set the secure computing mode (seccomp) profile for CAS, and override the default of "RuntimeDefault" (cas-seccomp-profile.yaml)
- automatically restart CAS servers during Deployment Operator updates (cas-auto-restart.yaml)
- enable host identity session launching (cas-enable-host.yaml)
- disable publish of HTTP Ingress (cas-disable-http-ingress.yaml)
- enable TLS for CAS internode communications (cas-enable-internode-tls.yaml)
- enable a backing store for CAS memory with a size selected by CAS auto-resourcing (cas-enable-default-backing-store.yaml)
- enable a backing store for CAS memory with a size selected at deployment time (cas-enable-backing-store.yaml)
- enable a backing store for CAS memory with a separate backing store for each priority group (cas-enable-backing-store-with-priority-groups.yaml)
- enable a backing store for CAS memory with a separate backing store for priority group 1 (cas-enable-backing-store-with-priority-group-one.yaml)
Note: If you are using an SMP configuration, skip this section.
By default, MPP CAS has two workers. To modify the number of workers, you must modify the cas-manage-workers.yaml transformer file. The file can be modified before or after the initial deployment of your SAS Viya platform. Adding or removing workers does not require a restart, but existing CAS tables will not be load balanced to use the new workers by default. New tables should take advantage of the new workers.
To enable load balancing when changing the number of workers, you should enable CAS Node Scaling, which requires a modification to the cas-add-environment-variables.yaml transformer file. If automatic balancing of tables is desired when adding workers to a running server, the environment variables should be set at the time of the initial deployment, regardless of whether you are changing the number of workers at that time. Setting the variables allows you to use CAS Node Scaling after the software has been deployed without having to change any of the transformers or the base kustomization.yaml file. For details about CAS Node Scaling, see CAS Node Scaling.
To use the cas-manage-workers.yaml transformer, copy the file to the $deploy/site-config subdirectory. Then modify the file as described in the comments of the file itself before adding the file to the transformers block of the base kustomization.yaml file.
To set the environment variables for CAS Node Scaling, copy the cas-add-environment-variables.yaml file to the $deploy/site-config subdirectory. Modify the file to add the following environment variables:
...
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CAS_GLOBAL_TABLE_AUTO_BALANCE
      value: "true"
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CAS_SESSION_TABLE_AUTO_BALANCE
      value: "true"
Ensure that you accurately designate which CAS servers are receiving the new environment variables in the target block of the file. Then add the file to the transformer block of the base kustomization.yaml file.
Perform these steps if cloud native mode should be disabled in your environment.
Add the following code to the configMapGenerator block of the base kustomization.yaml file:
```yaml
...
configMapGenerator:
...
- name: sas-cas-config
  behavior: merge
  literals:
    - CASCLOUDNATIVE=0
...
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
Note: If you are enabling SSSD on an OpenShift cluster, use the SecurityContextConstraint patch cas-server-scc-sssd.yaml instead of cas-server-scc.yaml. This will set the correct capabilities and privilege escalation.
If SSSD is required in your environment, add
sas-bases/overlays/cas-server/cas-sssd-sidecar.yaml as the first entry to
the transformers list of the base kustomization.yaml file
($deploy/kustomization.yaml
).
Here is an example:
```yaml
...
transformers:
...
- sas-bases/overlays/cas-server/cas-sssd-sidecar.yaml
...
```
Note: In the transformers list, the cas-sssd-sidecar.yaml file must precede the entry sas-bases/overlays/required/transformers.yaml and any TLS transformers.
Use these steps to provide a custom SSSD configuration to handle user authorization in your environment.
Copy the $deploy/sas-bases/examples/cas/configure/cas-sssd-example.yaml
file to the location of your
CAS server overlay.
Example: site-config/cas-server/cas-sssd-example.yaml
Add the relative path of cas-sssd-example.yaml to the transformers block of
the base kustomization.yaml file
($deploy/kustomization.yaml
).
Here is an example:
```yaml
...
transformers:
...
- site-config/cas-server/cas-sssd-example.yaml
...
```
Copy your custom SSSD configuration file to sssd.conf
.
Add the following code to the secretGenerator block of the base
kustomization.yaml
file with a relative path to sssd.conf
:
```yaml
...
secretGenerator:
...
- name: sas-sssd-config
  files:
    - SSSD_CONF=site-config/cas-server/sssd.conf
  type: Opaque
...
```
Note: If you use Kerberos in your deployment, or enable SSSD and disable CASCLOUDNATIVE, you must enable host launch.
By default, CAS cannot launch sessions under a user’s host identity. All
sessions run
under the cas service account instead. CAS can be configured to allow for host
identity
launches by including a patch transformer in the kustomization.yaml file. The
/$deploy/sas-bases/examples/cas/configure
directory
contains a cas-enable-host.yaml file, which can be used for this purpose.
Note: If you are enabling host launch on an OpenShift cluster, specify one of the following files to create the SecurityContextConstraint instead of
cas-server-scc.yaml
:
- If SSSD is not configured, use the SecurityContextConstraint patch
cas-server-scc-host-launch.yaml
- If SSSD is configured, use the SecurityContextConstraint patch
cas-server-scc-sssd.yaml
This will set the correct capabilities and privilege escalation.
To enable this feature:
Copy the $deploy/sas-bases/examples/cas/configure/cas-enable-host.yaml
file
to the location of your
CAS server overlay. For example, site-config/cas-server/cas-enable-host.yaml
.
The example file defaults to targeting all CAS servers by specifying a name
component of .*
.
To target specific CAS servers, comment out the name: .*
line and choose which
CAS servers you
want to target. Either uncomment the name: and replace NAME-OF-SERVER with one
particular CAS
server or uncomment the labelSelector line to target only the default
deployment.
Add the relative path of the cas-enable-host.yaml
file to the transformers
block of the base
kustomization.yaml file ($deploy/kustomization.yaml
) before the reference to
the sas-bases/overlays/required/transformers.yaml file and any SSSD
transformers. Here is an example:
```yaml
transformers:
...
- site-config/cas-server/cas-enable-host.yaml
...
- sas-bases/overlays/required/transformers.yaml
...
```
CAS supports encrypting connections between the worker nodes. When internode encryption is configured, any data sent between worker nodes is sent over a TLS connection.
By default, CAS internode communication is not encrypted in any of the SAS Viya platform encryption modes. If required, CAS internode encryption should only be enabled in the “Full-stack TLS” encryption mode.
Before deciding to enable CAS internode encryption, you should be familiar with the content in SAS Viya Platform Encryption: Data in Motion.
Note: Encryption has performance costs. Enabling CAS internode encryption will degrade your performance and increase the amount of CPU time that is required to complete any action. Actions that move large amounts of data are penalized the most. Session start-up time is also impacted negatively. Testing indicates that scenarios that move large blocks of data between nodes can increase elapsed action times by a factor of ten.
Perform these steps to enable CAS internode encryption.
Copy the $deploy/sas-bases/examples/cas/configure/cas-enable-internode-tls.yaml file into your /site-config directory. For example: site-config/cas-server/cas-enable-internode-tls.yaml
The cas-enable-internode-tls.yaml transformer file defaults to targeting all
CAS servers by specifying a name component of .*
. Edit the transformer to
indicate the CAS servers you want to target for CAS internode encryption. For
more information about selecting specific CAS servers, see Targeting CAS
Servers.
Add the relative path of the cas-enable-internode-tls.yaml
file to the
transformers block of the base
kustomization.yaml file ($deploy/kustomization.yaml
) before the reference to
the sas-bases/overlays/required/transformers.yaml
. Here is an example:
```yaml
transformers:
...
- site-config/cas-server/cas-enable-internode-tls.yaml
...
- sas-bases/overlays/required/transformers.yaml
...
```
For the instructions to set up a CAS State transfer, including configuration steps,
see the README file located at $deploy/sas-bases/overlays/cas-server/state-transfer/README.md
(for Markdown format) or at $deploy/sas-bases/docs/state_transfer_for_cas_server_for_the_sas_viya_platform.htm
(for HTML format).
Generally, when CAS allocates memory, it uses memory allocated from the threaded kernel. However, such memory is susceptible to the Linux Out of Memory (OOM) killer, potentially causing the entire deployment of CAS to restart and interrupting the functionality of CAS. To avoid some of the risk, you can enable a backing store for the memory allocation.
One of the following patch transformers can be used to enable the use of a backing store for CAS memory allocation.
If you are using CAS auto-resourcing or have manually specified resource limits for CAS with cas-manage-cpu-and-memory.yaml, use cas-enable-default-backing-store.yaml
to allow the CAS operator to select an appropriate size for the backing store (80% of the memory limit).
If you have not set a limit for the CAS container, or if the 80% ratio is not appropriate in your case, then you can select a specific size for the backing store with cas-enable-backing-store.yaml
.
The transformer in cas-enable-backing-store-with-priority-groups.yaml
selects a specific size for five separate backing stores, one for each priority group. This is appropriate only when CAS resource management is enabled.
The transformer in cas-enable-backing-store-with-priority-group-one.yaml
selects specific sizes for two backing stores, one for users in priority group one, and a second for all other users. This is appropriate only when CAS resource management is enabled.
Follow the instructions in the comments of the patch transformers to replace variables with the appropriate values.
Note: For information about CAS resource management policies, see CAS Resource Management Policies
Each example patch has a target section which tells it what resource(s) it
should apply to.
There are several parameters including object name, kind, version, and
labelSelector. By default,
the examples in this directory use name: .*
which applies to all CAS server
definitions.
If there are multiple CAS servers and you want to target a specific instance,
you can set the
“name” option to the name of that CASDeployment. If you want to target the
default “cas-server”
overlay you can use a labelSelector:
Example:
target:
  name: cas-example
  labelSelector: "sas.com/cas-server-default"
  kind: CASDeployment
Note: When targeting the default CAS server provided explicitly the path option must be used, because the name is a config map token that cannot be targeted.
For more information about CAS configuration and using example files, see the SAS Viya Platform: Deployment Guide.
This directory contains files to Kustomize your SAS Viya platform deployment to enable automatic resource limit allocation.
In order to add this CAS server to your deployment, perform both of the following steps.
First, add a reference to the auto-resources
overlay to the resources block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). This enables the ClusterRole and ClusterRoleBinding for the sas-cas-operator Service Account.
resources:
...
- sas-bases/overlays/cas-server/auto-resources
Next, add the transformer to remove any hardcoded resource requests for CPU and memory from your CAS deployment. This allows the resources to be auto-calculated.
transformers:
...
- sas-bases/overlays/cas-server/auto-resources/remove-resources.yaml
After you configure Kustomize, continue your SAS Viya platform deployment as documented.
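After the deployment is applied, one way to spot-check the auto-calculated values is to inspect the resources on the CAS controller pod. The pod name below is a placeholder, and the output includes every container in the pod, not just the cas container:

# Show the CPU and memory requests/limits on the CAS controller pod's containers
kubectl -n <name-of-namespace> get pod <cas-controller-pod-name> \
  -o jsonpath='{.spec.containers[*].resources}'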
This directory contains files to Kustomize your SAS Viya platform deployment to enable state transfers. Enabling state transfers allows the sessions, tables, and state of a running CAS server to be preserved between a running CAS server and a new CAS server instance which will be started as part of the CAS server upgrade.
Note: You cannot enable both CAS auto-restart and state transfer in the same SAS Viya platform deployment. If you have already enabled auto-restart, disable it before continuing.
To add the new CAS server to your deployment:
Add a reference to the state-transfer
overlay to the resources block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). This overlay adds a PVC to the deployment
to store the temporary state data during a state transfer. This PVC is mounted to both the source and target system
and must be large enough to hold all session and global tables that are loaded at transfer time.
If you need to increase the size of the transfer PVC, consider using the cas-modify-pvc-storage.yaml
example file.
resources:
...
- sas-bases/overlays/cas-server/state-transfer
Add the state-transfer transformer to enable the state transfer feature for the deployment:
transformers:
...
- sas-bases/overlays/cas-server/state-transfer/support-state-transfer.yaml
Determine the method to transfer the state. The 'readonly' model has a shorter window where the server is unresponsive. However, during the transfer, attempts to alter or create global tables will fail. The 'suspend' model has a longer window where the server is unresponsive, and attempts to alter or create global tables will wait until the transfer is complete.
The default state transfer model is ‘suspend’. If you want to specify a model at
deployment time, copy the $deploy/sas-bases/examples/cas/configure/cas-add-environment-variables.yaml
file to $deploy/site-config/cas/configure/cas-add-environment-variables.yaml
, if you have
not already done so. In the copied file, change the value of CASCFG_STATETRANSFERMODEL
to the model you want to use. The model can also be changed by altering the CAS server option stateTransferModel.
Here is an example of the code used to set the state transfer model to ‘readonly’.
...
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CASCFG_STATETRANSFERMODEL
      value: "readonly"
Decide if you want to limit the amount of data in individual sessions to be transferred. The server will be unresponsive while session tables are transferred between the original server and the new server. The length of this period of unresponsiveness can be managed by setting the MAXSESSIONTRANSFERSIZE server option. Any session that has more data loaded than the value of this option will not be transferred to the new session. The default behavior is to impose no limit. Smaller values of this option can reduce the amount of time that the server is unresponsive during a state transfer.
If you want to specify a limit at deployment time, copy the
$deploy/sas-bases/examples/cas/configure/cas-add-environment-variables.yaml
file to
$deploy/site-config/cas/configure/cas-add-environment-variables.yaml
, if you have
not already done so. In the copied file, set the environment variable CASCFG_MAXSESSIONTRANSFERSIZE.
Here is an example of the code used to set the session transfer size limit to 10 million bytes.
...
patch: |-
  - op: add
    path: /spec/controllerTemplate/spec/containers/0/env/-
    value:
      name: CASCFG_MAXSESSIONTRANSFERSIZE
      value: "10000000"
If you have changed the values CASCFG_STATETRANSFERMODEL or CASCFG_MAXSESSIONTRANSFERSIZE, add
a reference to the cas-add-environment-variables.yaml file to the transformers block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/cas/configure/cas-add-environment-variables.yaml
If you have already made some configuration changes for CAS, this entry may already exist in the transformers block.
After you configure Kustomize, continue your SAS Viya platform deployment as documented.
The SAS GPU Reservation Service aids SAS processes in resource sharing and utilization of the Graphic Processing Units (GPUs) that are available in a Kubernetes Pod. It is available by default in every SAS Cloud Analytic Services (CAS) Pod, but it must be enabled in order to take advantage of the GPUs in your cluster.
In MPP CAS server configurations, GPU resources are only used on the CAS worker nodes. Additional CAS server placement configuration can be used to configure distinct node pools for CAS controller pods and CAS worker pods. This allows CAS controller pods to be scheduled on more economical nodes while CAS worker pods are scheduled on nodes that provide GPU resources.
For instructions to set up distinct node pools for CAS Controllers and CAS
Workers, including configuration steps,
see the README file located at
$deploy/sas-bases/overlays/cas-server/auto-resources/README.md
(for Markdown format) or at
$deploy/sas-bases/docs/auto_resources_for_cas_server_for_sas_viya.htm
(for HTML format).
The SAS GPU Reservation Service is supported on all of the supported cloud platforms. If you are deploying on Microsoft Azure, refer to Azure Configuration, Using Azure CLI or Azure Portal, Using SAS Viya Infrastructure as Code for Microsoft Azure, and Using the NVIDIA Device Plug-In. If you are deploying on a provider other than Microsoft Azure, refer to Installing the NVIDIA GPU Operator.
Note: If you are using Kubernetes 1.20 and later and you choose to use Docker as your container runtime, the NVIDIA GPU Operator is not needed.
If you are deploying the SAS Viya platform on Microsoft Azure, before you enable CAS to use GPUs, the Azure Kubernetes Service (AKS) cluster must be properly configured.
The cas
node pool must be configured with a properly sized N-Series Virtual Machine (VM).
The N-Series VMs in Azure have GPU capabilities.
If the cas
node pool already exists, the VM node size cannot be changed. The cas
node
pool must first be deleted and then re-created to the proper VM size and node count.
WARNING: Deleting a node pool on an actively running SAS Viya platform deployment will cause any CAS sessions to be prematurely terminated. These steps should only be performed on an idle deployment. The node pool can be deleted and re-created using the Azure portal or the Azure CLI.
az aks nodepool delete --cluster-name <replace-with-aks-cluster-name> --name cas --resource-group <replace-with-resource-group>
az aks nodepool add --cluster-name <replace-with-aks-cluster-name> --name cas --resource-group <replace-with-resource-group> --node-count <replace with node count> --node-vm-size "<replace with N-Series VM>" [--zones <replace-with-availability-zone-number>]
SAS Viya 4 Infrastructure as Code (IaC) for Microsoft Azure (viya4-iac-azure) contains Terraform scripts to provision Microsoft Azure Cloud infrastructure
resources required to deploy SAS Viya platform products. Edit the terraform.tfvars file and change the
machine_type
for the cas
node pool to an N-Series VM.
node_pools = {
  cas = {
    "machine_type" = "<Change to N-Series VM>"
    ...
  }
},
...
Verify the cas
node pool was created and properly sized.
az aks nodepool list -g <resource-group> --cluster-name <cluster-name> --query '[].{Name:name, vmSize:vmSize}'
An additional requirement in a Microsoft Azure environment is that the
NVIDIA device plug-in must be
installed and configured. The example nvidia-device-plugin-ds.yaml
manifest requires
the following addition to the tolerations block so that the plug-in will be scheduled onto the CAS node pool.
tolerations:
...
- key: workload.sas.com/class
  operator: Equal
  value: "cas"
  effect: NoSchedule
...
Create the gpu-resources
namespace and apply the updated manifest to create the NVIDIA device plug-in DaemonSet.
kubectl create namespace gpu-resources
kubectl apply -f nvidia-device-plugin-ds.yaml
Beginning with Kubernetes version 1.20, Docker was deprecated as the default container runtime in favor of the OCI-compliant containerd. In order to leverage GPUs using this new container runtime, install the NVIDIA GPU Operator into the same cluster as the SAS Viya platform. After the NVIDIA GPU Operator is deployed into your cluster, proceed with enabling the SAS GPU Reservation Service.
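A hedged sketch of installing the NVIDIA GPU Operator with Helm follows. The repository URL, chart name, and namespace are taken from NVIDIA's public Helm repository and may change; consult the NVIDIA GPU Operator documentation for the procedure that matches your cluster:

# Add NVIDIA's Helm repository and install the GPU Operator (names are assumptions)
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install --wait gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace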
CAS GPU patch files are located at $deploy/sas-bases/examples/gpu
.
Copy the appropriate patch files for your CAS Server configuration:
For SMP CAS servers and MPP CAS servers without distinct cascontroller
and casworker
node pool configurations,
copy $deploy/sas-bases/examples/gpu/cas-gpu-patch.yaml
and $deploy/sas-bases/examples/gpu/kustomizeconfig.yaml
to $deploy/site-config/gpu/
.
For MPP CAS servers with distinct cascontroller
and casworker
node pool configurations,
copy $deploy/sas-bases/examples/gpu/cas-gpu-patch-worker-only.yaml
and $deploy/sas-bases/examples/gpu/kustomizeconfig.yaml
to $deploy/site-config/gpu/
.
In the copied cas-gpu-patch.yaml
or cas-gpu-patch-worker-only.yaml
file, make the following changes:
Revise the values for the resource requests and resource limits so that they are the same and do not exceed the maximum number of GPU devices on a single node.
In the cas-vars section, consider whether you require a different level of information from the GPU process. The value for SASGPUD_LOG_TYPE can be info, json, debug, or trace.
After you have made your changes, save and close the revised file.
After you edit the file, add the following references to the base
kustomization.yaml file ($deploy/kustomization.yaml
):
Add the path to the selected cas-gpu-patch file as the first entry in the transformers block.
Add the path to the kustomizeconfig.yaml
file to the configurations block. If the configurations block does not exist yet, create it.
Here are examples of these changes:
...
transformers:
- site-config/gpu/cas-gpu-patch.yaml
...
configurations:
- site-config/gpu/kustomizeconfig.yaml
For more information about using example files, see the SAS Viya Platform: Deployment Guide.
For more information on CAS Workload Placement, see Plan the Workload Placement.
The SAS Compute server provides the ability to execute SAS Refresh Token, which by use of a sidecar works as a silent partner to the main container, refreshing the client token as needed. Using the sidecar is valuable for long-running tasks that exceed the default life of the client token, which in turn inhibits the successful completion of such tasks. The sidecar seamlessly refreshes the token so that these tasks can continue running unimpeded.
The SAS Refresh Token facility is disabled by default. This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server to run the SAS Refresh Token sidecar.
Enable the ability for the pod where the SAS Compute server is running to run SAS Refresh Token. SAS Refresh Token starts when the SAS Compute server is started. It exists for the life of the SAS Compute server.
SAS has provided an overlay to enable SAS Refresh Token in your environment.
To use the overlay:
Add a reference to the sas-programming-environment/refreshtoken
overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
```yaml
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/refreshtoken
- sas-bases/overlays/required/transformers.yaml
...
```
NOTE: The reference to the sas-programming-environment/refreshtoken
overlay MUST come before the required transformers.yaml, as shown in the example above.
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
To disable SAS Refresh Token:
Remove sas-bases/overlays/sas-programming-environment/refreshtoken
from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
This document describes the customizations that can be made by the Kubernetes administrator for managing the settings for the LOCKDOWN feature in the SAS Programming Environment.
For more information about LOCKDOWN, see LOCKDOWN System Option.
Read the descriptions for the example files in the following list. If you
want to use an example file to simplify customizing your deployment, copy
the file to your $deploy/site-config
directory.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ AMOUNT-OF-STORAGE }}. Replace the entire variable string, including the braces, with the value you want to use.
After you edit a file, add a reference to it in the transformers block of the
base kustomization.yaml
file.
Here is an example using the enable LOCKDOWN access methods transformer, saved
to $deploy/site-config/sas-programming-environment/lockdown
:
transformers:
...
- /site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
...
The default behavior allows the following access methods to be enabled via LOCKDOWN:

- HTTP
- EMAIL
- FTP
- HADOOP
- JAVA

These settings can be toggled using the transformers in the example files. The example files are located at $deploy/sas-bases/examples/sas-programming-environment/lockdown.

To enable access methods not included in the list above, such as PYTHON or PYTHON_EMBED, replace {{ ACCESS-METHOD-LIST }} in enable-lockdown-access-methods.yaml. For example,
...
patch : |-
  - op: add
    path: /data/VIYA_LOCKDOWN_USER_METHODS
    value: "python python_embed"
...
NOTE: The names of the access methods are case-insensitive.
To disable access methods that are currently enabled, replace {{ ACCESS-METHOD-LIST }} in disable-lockdown-access-methods.yaml with a list of values to remove. For example,
...
patch : |-
  - op: add
    path: /data/VIYA_LOCKDOWN_USER_DISABLED_METHODS
    value: "java"
...
NOTE: The names of the access methods are case-insensitive.
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
This document describes the customizations that can be made by the Kubernetes administrator for managing the Java security policy file that is generated for the SAS Programming Environment.
By default the deployment of the SAS Programming Environment generates a Java security policy file to prevent SAS programmers from executing Java code that would be deemed unsafe by the administrator directly from SAS code. This README file describes the customizations that can be made by the Kubernetes administrator to manage the Java security policy file that is generated.
The generated Java security policy controls permissions for Java access inside of the SAS Programming Environment. In cases where the application of the policy file is deemed restrictive by the administrator, the generation of the policy file can be disabled.
SAS has provided an overlay to disable the generation of the Java security policy.
To use the overlay:
Add a reference to the sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
Here is an example:
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml
- sas-bases/overlays/required/transformers.yaml
...
NOTE: The reference to the sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml
overlay MUST come before the required transformers.yaml, as seen in the example above.
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
To enable the generation of the Java security policy file:
Remove sas-bases/overlays/sas-programming-environment/java-security-policy/disable-java-policy-file-generation.yaml
from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
This document describes the customizations that can be made by the Kubernetes administrator for managing the Java security policy file that is generated for the SAS Programming Environment.
By default the SAS Programming Environment generates a Java security policy file to prevent SAS programmers from executing Java code directly from SAS code that would be deemed unsafe by the administrator. This README describes the customizations that can be made by the Kubernetes administrator for managing the Java security policy file that is generated for the SAS Programming Environment.
If a class is determined acceptable by the Kubernetes administrator, the following steps allow that class to be added.
The default behavior generates a Java security policy file similar to the following:

grant {
  permission java.lang.RuntimePermission "*";
  permission java.io.FilePermission "<<ALL FILES>>", "read, write, delete";
  permission java.util.PropertyPermission "*", "read, write";
  permission java.net.SocketPermission "*", "connect,accept,listen";
  permission java.io.FilePermission "com.sas.analytics.datamining.servertier.SASRScriptExec", "exec";
  permission java.io.FilePermission "com.sas.analytics.datamining.servertier.SASPythonExec", "exec";
};
The Java security policy file can be modified by using the add-allowed-java-class.yaml file.
Copy the
$deploy/sas-bases/examples/sas-programming-environment/java-security-policy/add-allowed-java-class.yaml
file to the site-config directory.
To add classes with an exec permission to this generated policy file, replace the variables in the copied file as described in the comments in the file. For example,
...
patch: |-
  - op: add
    path: /data/SAS_JAVA_POLICY_ALLOW_TESTCLASS
    value: "my.org.test.testclass"
...
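If you need to allow more than one class, you can repeat this pattern with one op: add entry per class. The following is a minimal sketch only; the second property name (SAS_JAVA_POLICY_ALLOW_SCORER) and both class names are illustrative, not part of the SAS-provided example:

```yaml
# Hypothetical sketch: allowing two classes in the copied add-allowed-java-class.yaml.
# The property names after SAS_JAVA_POLICY_ALLOW_ and the class values are illustrative.
patch: |-
  - op: add
    path: /data/SAS_JAVA_POLICY_ALLOW_TESTCLASS
    value: "my.org.test.testclass"
  - op: add
    path: /data/SAS_JAVA_POLICY_ALLOW_SCORER
    value: "my.org.analytics.scorer"
```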
After you edit the file, add a reference to it in the transformers block of
the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example assuming the file has been saved
to $deploy/site-config/sas-programming-environment/java-security-policy
:
transformers:
...
- /site-config/sas-programming-environment/java-security-policy/add-allowed-java-class.yaml
...
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
The SAS Compute server provides the ability to execute SAS Watchdog, which monitors spawned processes to ensure that they comply with the terms of LOCKDOWN system option.
The LOCKDOWN system option employs an allow list in the SAS Compute server. Only files that reside in paths or folders that are included in the allow list can be accessed by the SAS Compute server. The limitation on the LOCKDOWN system option is that it can only block access to files and folders directly accessed by SAS Compute server processing. The SAS Watchdog facility extends this checking to files and folders that are used by languages that are invoked by the SAS Compute server. Therefore, code written in Python, R, or Java that is executed directly in the SAS Compute server process is checked against the allow list. The configuration of the SAS Watchdog facility replicates the allow list that is configured by the LOCKDOWN system option by default.
Note: For more information about the LOCKDOWN system option, see LOCKDOWN System Option.
The SAS Watchdog facility is disabled by default. This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server to run SAS Watchdog.
Enable the ability for the pod where the SAS Compute Server is running to run SAS Watchdog. SAS Watchdog starts when the SAS Compute server is started, and exists for the life of the SAS Compute server.
SAS has provided an overlay to enable SAS Watchdog in your environment.
To use the overlay:
Add a reference to the sas-programming-environment/watchdog
overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/watchdog
- sas-bases/overlays/required/transformers.yaml
...
NOTE: The reference to the sas-programming-environment/watchdog
overlay MUST come before the required transformers.yaml, as seen in the example above.
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
To disable SAS Watchdog:
Remove sas-bases/overlays/sas-programming-environment/watchdog
from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
As a Kubernetes cluster administrator of the OpenShift cluster, use one of the following commands to apply the Security Context Constraint. An example of the yaml may be found in sas-bases/examples/sas-programming-environment/watchdog/sas-watchdog-scc.yaml
.
kubectl apply -f sas-watchdog-scc.yaml
oc apply -f sas-watchdog-scc.yaml
After the SCC has been applied, bind it to the sas-programming-environment service account with the following command:
oc -n <namespace> adm policy add-scc-to-user sas-watchdog -z sas-programming-environment
Run the following command to remove the service account from the SCC:
oc -n <namespace> adm policy remove-scc-from-user sas-watchdog -z sas-programming-environment
Run one of the following commands to delete the SCC after it has been removed:
kubectl delete scc sas-watchdog
oc delete scc sas-watchdog
NOTE: Do not delete the SCC if there are other SAS Viya platform deployments in the cluster. Only delete the SCC after all namespaces running SAS Viya platform in the cluster have been removed.
The SAS Compute server provides the ability to execute SAS code that can drive requests into the shared CAS server in the cluster. For development purposes in applications such as SAS Studio, you might need to allow data scientists the ability to work with a CAS server that is local to their SAS Compute session.
This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server users access to a personal CAS server. This personal CAS server uses symmetric multiprocessing (SMP) architecture.
Note: The README for Personal CAS Server with GPU is located at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server-with-gpu/README.md
(for Markdown format) or $deploy/sas-bases/doc/configuring_sas_compute_server_to_use_a_personal_cas_server-with-gpu.htm
(for HTML format).
Enable the ability for the pod where the SAS Compute server is running to contain a personal CAS server instance. This CAS server starts when the SAS Compute server is started, and exists for the life of the SAS Compute server. Code executing in the SAS Compute session can then be directed to this personal CAS server.
SAS has provided an overlay to enable the personal CAS server in your environment.
To use the overlay:
Add a reference to the sas-programming-environment/personal-cas-server
overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/personal-cas-server
- sas-bases/overlays/required/transformers.yaml
...
NOTE: The reference to the sas-programming-environment/personal-cas-server
overlay MUST come before the required transformers.yaml, as seen in the example above.
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
To disable the personal CAS Server:
Remove sas-bases/overlays/sas-programming-environment/personal-cas-server
from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
The SAS Compute server provides the ability to execute SAS code that can drive requests into the shared CAS server in the cluster. For development purposes in applications such as SAS Studio, you might need to allow data scientists the ability to work with a CAS server that is local to their SAS Compute session.
This README file describes how to customize your SAS Viya platform deployment to allow SAS Compute server users access to a personal CAS server with GPU. This personal CAS server uses symmetric multiprocessing (SMP) architecture.
Enable the ability for the pod where the SAS Compute server is running to contain a personal CAS server (with GPU) instance. This CAS server starts when the SAS Compute server is started, and exists for the life of the SAS Compute server. Code executing in the SAS Compute session can then be directed to this personal CAS server (with GPU).
Installing this overlay is the same as installing the overlay that adds the Personal CAS Server (without GPU). The only difference is the overlay name.
If you want to add GPU to an existing Personal CAS Server, perform these steps:
Follow the instructions in the README to remove the Personal CAS Server. The README for Personal CAS Server (without GPU) is located at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server/README.md
(for Markdown format) or $deploy/sas-bases/doc/configuring_sas_compute_server_to_use_a_personal_cas_server.htm
(for HTML format).
Use this overlay (and these instructions) to add Personal CAS Server with GPU.
Note: Only one Personal CAS Server may be present in the SAS Compute Server.
SAS has provided an overlay to enable the personal CAS server in your environment.
To use the overlay:
Add a reference to the sas-programming-environment/personal-cas-server-with-gpu
overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/personal-cas-server-with-gpu
- sas-bases/overlays/required/transformers.yaml
...
Note: The reference to the sas-programming-environment/personal-cas-server-with-gpu
overlay must come before the required transformers.yaml, as seen in the example above.
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
To disable the personal CAS Server:
Remove sas-bases/overlays/sas-programming-environment/personal-cas-server-with-gpu
from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
This document describes the customizations that can be made by the Kubernetes administrator for deploying the Personal CAS Server.
The SAS Viya Platform provides example files for many common customizations. Read the descriptions
for the example files in the following list. If you want to use an example file
to simplify customizing your deployment, copy the file to your
$deploy/site-config
directory.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ AMOUNT-OF-STORAGE }}. Replace the entire variable string, including the braces, with the value you want to use.
After you edit a file, add a reference to it in the transformers block of the
base kustomization.yaml
file.
Here is an example using the host path transformer, saved to $deploy/site-config/sas-programming-environment/personal-cas-server
:
```yaml
transformers:
...
- /site-config/sas-programming-environment/personal-cas-server/personal-cas-modify-host-cache.yaml
...
```
The example files are located at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server
.
The following is a list of each example file.
- enable Kerberos support for the Personal CAS server (personal-cas-enable-kerberos.yaml)
- modify the CAS_DISK_CACHE to be a host path for the Personal CAS server (personal-cas-modify-host-cache.yaml)
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
The SAS Viya platform requires the ability to have write access to certain locations in the environment. An example of this is the SASWORK location, where data used at runtime may be created or modified. The SAS Programming Environment container image is set up by default to use an emptyDir volume for this purpose. Depending on workload, you may need to configure different storage classes for these volumes.
A storage class in Kubernetes is defined by a StorageClass resource. Examples of StorageClasses can be found at Storage Classes.
This README describes how to use example files to configure the required storage classes.
The following processes assign their runtime storage locations using the process described above.
The default behavior assigns an emptyDir volume for use for runtime storage by these server applications.
This processing takes place at the initialization of the server application; therefore these changes take effect upon the next launch of a pod for the server application.
The volume storage class for these applications can be modified by using the
transformers in the example file located at
$deploy/sas-bases/examples/sas-programming-environment/storage
.
Copy the
$deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml
file to the site-config directory.
To change the storage class, replace the {{ VOLUME-STORAGE-CLASS }} variable in the copied file with a different volume storage class. The example file provided looks like the following:
- op: add
  path: /template/spec/volumes/-
  value:
    name: viya
    {{ VOLUME-STORAGE-CLASS }}
For example, assume that the storage location you want to use is an NFS volume. That volume may be described in the following way:
nfs:
  server: myserver.mycompany.com
  path: /path/to/my/location
To use this in the transformer, substitute the volume definition for the {{ VOLUME-STORAGE-CLASS }} variable. The result would look like this:
- op: add
  path: /template/spec/volumes/-
  value:
    name: viya
    nfs:
      server: myserver.mycompany.com
      path: /path/to/my/location
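NFS is only one possibility. If you would rather back the viya volume with an existing PersistentVolumeClaim, the same substitution pattern applies. The following sketch assumes a hypothetical claim named sas-viya-runtime-storage that you have already created in the namespace:

```yaml
# Hypothetical alternative: use a pre-existing PersistentVolumeClaim instead of NFS.
# The claim name sas-viya-runtime-storage is illustrative only.
- op: add
  path: /template/spec/volumes/-
  value:
    name: viya
    persistentVolumeClaim:
      claimName: sas-viya-runtime-storage
```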
Note: The transformers defined here delete the previously defined viya volume specification in the associated podTemplates. Any content that may exist in the current viya volume is not affected by these transformers.
After you edit the change-viya-volume-storage-class.yaml file, add it to
the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Note: The reference to the site-config/change-viya-volume-storage-class.yaml
overlay must come before the required transformers.yaml.
Here is an example assuming the file has been saved to
$deploy/site-config
:
transformers:
...
- site-config/change-viya-volume-storage-class.yaml
- sas-bases/overlays/required/transformers.yaml
...
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
A SAS Batch Server has the ability to restart a SAS job using either SAS’s data step checkpoint/restart capability or SAS’s label checkpoint/restart capability. For the checkpoint/restart capability to work properly, the checkpoint information must be stored on storage that persists across all compute nodes in the deployment. When the Batch Server job is restarted, it will have access to the checkpoint information no matter what compute node it is started on.
The checkpoint information is stored in SASWORK, which is allocated in
the volume named viya
. Since a Batch Server is a SAS Viya platform server that
uses the SAS Programming Run-Time Environment, it is possible that the
viya
volume may be set to ephemeral storage by the
$deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml
transformers. If that is the case, the Batch Server’s viya
volume would need
to be changed to persistent storage without changing any other server’s
storage.
Note: For more information about changing the storage for SAS Viya platform servers that use the SAS Programming Run-Time Environment, see the README file at $deploy/sas-bases/examples/sas-programming-environment/storage/README.md
(for Markdown format) or at $deploy/sas-bases/docs/sas_programming_environment_storage_tasks.htm
(for HTML format).
The transformers described in this README set the storage class for the SAS Batch
Server’s viya
volume defined in the SAS Batch Server pod templates without
changing the storage of the other SAS Viya platform servers that use the SAS
Programming Run-Time Environment.
The changes described by this README take place at the initialization of the server application; therefore the changes take effect at the next launch of a pod for the server application.
The volume storage class for these applications can be modified by using the
example file located at $deploy/sas-bases/examples/sas-batch-server/storage
.
Copy the
$deploy/sas-bases/examples/sas-batch-server/storage/change-batch-server-viya-volume-storage-class.yaml
file to the site-config directory.
To change the storage class, replace the {{ VOLUME-STORAGE-CLASS }} variable in the copied file with a different volume storage class. The unedited example file contains a transformer that looks like this:
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: add-batch-viya-volume
patch: |-
  - op: add
    path: /template/spec/volumes/-
    value:
      name: viya
      {{ VOLUME-STORAGE-CLASS }}
target:
  kind: PodTemplate
  labelSelector: "launcher.sas.com/job-type=sas-batch-job"
Assume that the storage location you want to use is an NFS volume. That volume may be described in the following way:
nfs:
  server: myserver.mycompany.com
  path: /path/to/my/location
To use this storage location in the transformer, substitute the volume definition for the {{ VOLUME-STORAGE-CLASS }} variable. The result would look like this:
---
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: add-batch-viya-volume
patch: |-
  - op: add
    path: /template/spec/volumes/-
    value:
      name: viya
      nfs:
        server: myserver.mycompany.com
        path: /path/to/my/location
target:
  kind: PodTemplate
  labelSelector: launcher.sas.com/job-type=sas-batch-job
Note: The first transformer defined in the example file deletes the previously defined viya
volume specification in the associated podTemplates and the second transformer in the example file
adds the viya
volume you defined. Any content that may
exist in the current viya
volume is not affected by these transformers.
After you edit the change-batch-server-viya-volume-storage-class.yaml file, add it to the transformers block
of the base kustomization.yaml file ($deploy/kustomization.yaml
) before the required transformers.yaml.
Note: If the $deploy/sas-bases/examples/sas-programming-environment/storage/change-viya-volume-storage-class.yaml
transformers file is also being used in the base kustomization.yaml file,
ensure the Batch Server transformers file is located after the entry for
the change-viya-volume-storage-class.yaml
patch.
Otherwise the Batch Server patch will have no effect.
Here is an example assuming the file has been saved to
$deploy/site-config
:
transformers:
...
<...other transformers...>
< site-config/change-viya-volume-storage-class.yaml if used>
- site-config/change-batch-server-viya-volume-storage-class.yaml
- sas-bases/overlays/required/transformers.yaml
...
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
This document describes the customizations that can be made by the Kubernetes administrator for controlling the access a user has to change environment variables by way of the SET= System Option.
The SAS language includes the SET= System Option, which allows the user to define or change the value of an environment variable in the session that the user is working in. However, an administrator might want to limit the ability of the user to change certain environment variables. The steps described in this README provide the administrator with the ability to block specific variables from being set by the user.
The list of environment variables that should be blocked for users to change
can be modified by using the transformer in the example file located at
$deploy/sas-bases/examples/sas-programming-environment/options-set
.
Copy the
$deploy/sas-bases/examples/sas-programming-environment/options-set/deny-options-set-variables.yaml
file to the site-config directory.
To add variables that users should be prevented from changing, replace the {{ OPTIONS-SET-DENY-LIST }} variable in the copied file with the list of environment variables to be protected.
NOTE: The environment variables _JAVA_OPTIONS, JAVA_TOOL_OPTIONS, JDK_JAVA_OPTIONS, ODBCINST, and ODBCINI are blocked out of the box because they pose potential security risks if left unblocked. The transformer in the example overwrites this list, so you must include these environment variables along with any additional environment variables that you wish to block.
Here is an example:
...
patch: |-
  - op: add
    path: /data/SAS_OPTIONS_SET_DENY_LIST
    value: "_JAVA_OPTIONS JAVA_TOOL_OPTIONS JDK_JAVA_OPTIONS ODBCINST ODBCINI VAR1 VAR2 VAR3"
...
After you edit the file, add a reference to it in the transformers block of
the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an
example assuming the file has been saved to
$deploy/site-config/sas-programming-environment/options-set
:
transformers:
...
- site-config/sas-programming-environment/options-set/deny-options-set-variables.yaml
...
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
The SAS GPU Reservation Service aids SAS processes in resource sharing and utilization of the Graphic Processing Units (GPUs) that are available in a Kubernetes pod. The SAS Programming Environment container image makes this service available, but it must be enabled in order to take advantage of the GPUs in your cluster.
Note: The following servers create Kubernetes pods using the SAS Programming Environment container image:
The SAS GPU Reservation Service is available on every supported cloud platform. In a Microsoft Azure Kubernetes deployment, additional configuration steps are required.
If you are deploying the SAS Viya platform on Microsoft Azure, before you enable the SAS Programming
Environment to use GPUs, you must configure the Azure Kubernetes Service (AKS) cluster.
The compute
node pool must be configured with a properly sized N-Series Virtual Machine (VM). The N-Series VMs in Azure have GPU capabilities.
If the compute node pool already exists, the VM node size cannot be changed. The compute node pool must be deleted and then recreated with the proper VM size and node count by using the following commands.
WARNING: Deleting a node pool on an actively running SAS Viya platform deployment will cause any active sessions to be prematurely terminated. These steps should only be performed on an idle deployment. The node pool can be deleted and recreated using the Azure portal or the Azure CLI.
az aks nodepool delete --cluster-name <replace-with-aks-cluster-name> --name compute --resource-group <replace-with-resource-group>
az aks nodepool add --cluster-name <replace-with-aks-cluster-name> --name compute --resource-group <replace-with-resource-group> --node-count <replace with node count> --node-vm-size "<replace with N-Series VM>" [--zones <replace-with-availability-zone-number>]
SAS Viya 4 Infrastructure as Code (IaC) for Microsoft Azure (viya4-iac-azure) contains Terraform scripts to provision Microsoft Azure Cloud infrastructure
resources required to deploy SAS Viya platform products. Edit the terraform.tfvars file and change the
machine_type
for the compute
node pool to an N-Series VM.
node_pools = {
  compute = {
    "machine_type" = "<Change to N-Series VM>"
    ...
  }
},
...
Then verify the compute
node pool was created and properly sized.
az aks nodepool list -g <resource-group> --cluster-name <cluster-name> --query '[].{Name:name, vmSize:vmSize}'
An additional requirement in a Microsoft Azure environment is that the NVIDIA device plug-in must be installed and configured. Download the example nvidia-device-plugin-ds.yaml manifest from the Microsoft documentation. Then add the following to the tolerations block of the manifest so that the plug-in will be scheduled onto the compute node pool.
tolerations:
...
- key: workload.sas.com/class
  operator: Equal
  value: "compute"
  effect: NoSchedule
...
Create the gpu-resources
namespace and apply the updated manifest to create the NVIDIA device plug-in DaemonSet.
kubectl create namespace gpu-resources
kubectl apply -f nvidia-device-plugin-ds.yaml
SAS has provided an overlay to enable the SAS GPU Reservation Service for SAS Programming Environment in your environment.
To use the overlay:
Add a reference to the sas-programming-environment/gpu
overlay to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/gpu
- sas-bases/overlays/required/transformers.yaml
...
NOTE: The reference to the sas-programming-environment/gpu
overlay MUST come before the required transformers.yaml, as seen in the example above.
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
To disable the SAS GPU Reservation Service:
Remove sas-bases/overlays/sas-programming-environment/gpu
from the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
The default PostgreSQL server (used by most micro-services) in the SAS Viya platform is called “Platform PostgreSQL”. The SAS Viya platform can handle multiple PostgreSQL servers at once, but only specific micro-services use servers besides the default. Consult the documentation for your order to see if you have products that require their own PostgreSQL in addition to the default.
The SAS Viya platform provides two options for your PostgreSQL servers: internal instances provided by SAS or external PostgreSQL that you would like the SAS Viya platform to utilize. Before deploying, you must select which of these options you want to use for your SAS Viya platform deployment. If you follow the instructions in the SAS Viya Platform Deployment Guide, the deployment includes an internal instance of PostgreSQL.
Note: PostgreSQL servers must be all internally managed or all externally managed. SAS does not support mixing internal and external PostgreSQL servers in the same deployment. For information about moving from an internal PostgreSQL server to an external one, see the PostgreSQL Data Transfer Guide.
Platform PostgreSQL is required in the SAS Viya platform.
Go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the resources block of that file, add the following content, including adding the block if it doesn’t already exist:
resources:
- sas-bases/overlays/postgres/platform-postgres
Then, follow the appropriate subsection to continue installing or configuring Platform PostgreSQL as either internally or externally managed.
Follow the steps in the “Configure Crunchy Data PostgreSQL” README located at $deploy/sas-bases/examples/crunchydata/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_crunchy_data_postgresql.htm
(for HTML format).
Follow the steps in the section “External PostgreSQL Configuration”.
CDS PostgreSQL is an additional PostgreSQL server that some services in your SAS Viya platform deployment may want to utilize, providing a second database that can be configured separately from the default PostgreSQL server.
Go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the resources block of that file, add the following content, including adding the block if it doesn’t already exist:
resources:
- sas-bases/overlays/postgres/cds-postgres
Then, follow the appropriate subsection to continue installing or configuring CDS PostgreSQL as either internally or externally managed.
Follow the steps in the “Configure Crunchy Data PostgreSQL” README located at $deploy/sas-bases/examples/crunchydata/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_crunchy_data_postgresql.htm
(for HTML format).
Follow the steps in the section “External PostgreSQL Configuration”.
External PostgreSQL is configured by modifying the DataServer CustomResource to describe your PostgreSQL server. Follow the steps below separately for each external PostgreSQL server in your SAS Viya platform deployment.
Copy the file $deploy/sas-bases/examples/postgres/postgres-user.env
into your $deploy/site-config/postgres/
directory and make it writable:
chmod +w $deploy/site-config/postgres/postgres-user.env
Rename the copied file to something unique. SAS recommends following the naming convention: {{ POSTGRES-SERVER-NAME }}-user.env
. For example, a copy of the file for Platform PostgreSQL might be called platform-postgres-user.env
.
Note: Take note of the name and path of your copied file. This information will be used in a later step.
Adjust the values in your copied file following the in-line comments.
Go to the base kustomization file ($deploy/kustomization.yaml
). In the secretGenerator block of that file, add the following content, including adding the block if it doesn’t already exist:
secretGenerator:
- name: {{ POSTGRES-USER-SECRET-NAME }}
  envs:
  - {{ POSTGRES-USER-FILE }}
In the added secretGenerator, fill out the user-defined values as follows:
Replace {{ POSTGRES-USER-SECRET-NAME }}
with a unique name for the secret. For example, you might use platform-postgres-user
if specifying the user for Platform PostgreSQL.
Replace {{ POSTGRES-USER-FILE }}
with the path of the file you copied in Step 2. For example, this may be something like site-config/postgres/platform-postgres-user.env
.
Note: Take note of the name you give this secretGenerator. This information will be used in a later step.
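For example, a filled-in secretGenerator for Platform PostgreSQL, assuming the copied file was renamed as suggested above, might look like the following sketch:

```yaml
# Hypothetical filled-in secretGenerator for Platform PostgreSQL.
# The secret name and file path follow the naming convention suggested above.
secretGenerator:
- name: platform-postgres-user
  envs:
  - site-config/postgres/platform-postgres-user.env
```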
Copy the file $deploy/sas-bases/examples/postgres/dataserver-transformer.yaml
into your $deploy/site-config/postgres
directory and make it writable:
chmod +w $deploy/site-config/postgres/dataserver-transformer.yaml
Rename the copied file to something unique. SAS recommends following the naming convention: {{ POSTGRES-SERVER-NAME }}-dataserver-transformer.yaml
. For example, a copy of the transformer targeting Platform PostgreSQL might be called platform-postgres-dataserver-transformer.yaml
, and if you have CDS PostgreSQL, then a copy of the transformer targeting CDS PostgreSQL might be called cds-postgres-dataserver-transformer.yaml
.
Note: Take note of the name and path of your copied file. This information will be used in step 9.
Adjust the values in your copied file following the guidelines in the comments.
In the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml
), add references to the files you renamed in step 7. The following example is based on the deployment using a file named platform-postgres-dataserver-transformer.yaml
for the Platform PostgreSQL instance:
transformers:
- site-config/postgres/platform-postgres-dataserver-transformer.yaml
By default, the SAS Viya platform uses a database named “SharedServices” in each PostgreSQL server.
To set a custom database name, uncomment the surrounding block and replace the {{ DB-NAME }}
variable in your copied dataserver-transformer.yaml
file(s) with the custom database name.
Note: Do not use "postgres" as your custom database name. "postgres" is the default system database for the PostgreSQL server. The Viya Restore utility does not work with "postgres".
SAS strongly recommends the use of SSL/TLS to secure data in transit. You should follow the documented best practices provided by your cloud platform provider for securing access to your database using SSL/TLS. Securing your database server with SSL/TLS entails the use of certificates. Upon securing your database server, your cloud platform provider may provide you with a server CA certificate. In order for the SAS Viya platform to connect directly to a secure database server, you must provide the server CA certificate to the SAS Viya platform prior to deployment. Failing to configure the SAS Viya platform to trust the database server CA certificate results in “Connection refused” errors or in communications falling back to insecure modes. For instructions on how to provide CA certificates to the SAS Viya platform, see the section labeled “Incorporating Additional CA Certificates into the SAS Viya Platform Deployment” in the README file at $deploy/sas-bases/examples/security/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_network_security_and_encryption_using_sas_security_certificate_framework.htm
(for HTML format).
When using an SQL proxy for database communication, it might be possible to secure database communication in accordance with the cloud platform vendor’s best practices without the need to import your database server CA certificate. Some cloud platforms, such as the Google Cloud Platform, allow the use of a proxy server to connect to the database server indirectly in a manner similar to a VPN tunnel. These platform-provided SQL proxy servers obtain certificates directly from the cloud platform. In this case, a database server CA certificate is obtained automatically by the proxy and you do not need to provide it during deployment. To find out more about SQL proxy connections to the database server, consult your cloud provider’s documentation.
If you are using Google Cloud SQL for PostgreSQL, the following steps are required for each PostgreSQL server. For example, if you have both a Platform PostgreSQL server and a CDS PostgreSQL server, then you need a separate sql-proxy for each server.
Copy the file $deploy/sas-bases/examples/postgres/cloud-sql-proxy.yaml
to your $deploy/site-config/postgres/
directory and make it writable:
chmod +w $deploy/site-config/postgres/cloud-sql-proxy.yaml
Rename the copied file to something unique. SAS recommends following the naming convention: {{ POSTGRES-SERVER-NAME }}-cloud-sql-proxy.yaml
. For example, a copy of the transformer targeting Platform PostgreSQL might be called platform-postgres-cloud-sql-proxy.yaml
, and if you have CDS PostgreSQL, then a copy of the transformer targeting CDS PostgreSQL might be called cds-postgres-cloud-sql-proxy.yaml
.
Note: Take note of the name and path of your copied file. This information will be used in step 4.
Adjust the values in your copied file following the guidelines in the file’s comments.
In the resources block of the base kustomization.yaml ($deploy/kustomization.yaml
), add references to the files you renamed in step 2. The following example is based on the deployment using a file named platform-postgres-cloud-sql-proxy.yaml
:
resources:
- site-config/postgres/platform-postgres-cloud-sql-proxy.yaml
The Google Cloud SQL Auth Proxy requires a Google Service Account Key. It retrieves this key from a Kubernetes Secret. To create this secret you must place the Service Account Key required by the Google sql-proxy in the file $deploy/site-config/postgres/ServiceAccountKey.json
(in JSON format).
Go to the base kustomization file ($deploy/kustomization.yaml
). In the secretGenerator block of that file, add the following content, including adding the block if it doesn’t already exist:
secretGenerator:
- name: sql-proxy-serviceaccountkey
  files:
  - credentials.json=site-config/postgres/ServiceAccountKey.json
The file $deploy/sas-bases/overlays/postgres/external-postgres/gcp-tls-transformer.yaml
allows database clients and the sql-proxy pod to communicate in clear text. This transformer must be added after all other security transformers.
transformers:
...
- sas-bases/overlays/postgres/external-postgres/gcp-tls-transformer.yaml
You can add PostgreSQL servers to the SAS Viya platform via the DataServer.webinfdsvr.sas.com CustomResource. This CustomResource is used to inform the SAS Viya platform of the location and credentials for PostgreSQL servers. DataServers can be configured to reference either internally managed Crunchy Data PostgreSQL clusters or externally managed PostgreSQL servers.
Note: DataServer CustomResources will not provision PostgreSQL servers on your behalf.
To view the DataServer CustomResources in your SAS Viya platform deployment, run the following command.
kubectl get dataservers.webinfdsvr.sas.com -n {{ NAME-OF-NAMESPACE }}
Internally managed instances of PostgreSQL use the PostgreSQL Operator and Containers provided by Crunchy Data behind the scenes to create the PostgreSQL servers.
Before installing any Crunchy Data components, you should know which PostgreSQL servers are required by your SAS Viya platform order.
Additionally, you should have followed the steps to configure PostgreSQL in the SAS Viya platform described in the “Configure PostgreSQL” README located at $deploy/sas-bases/examples/postgres/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_postgresql.htm
(for HTML format).
You must install the Crunchy Data PostgreSQL Operator in conjunction with specific PostgreSQL servers.
To install the PostgreSQL Operator, go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the resources block of that file, add the following content, including adding the block if it doesn’t already exist:
resources:
- sas-bases/overlays/crunchydata/postgres-operator
Additionally, you must add content to the components block based on whether you are deploying Platform PostgreSQL or CDS PostgreSQL.
Go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the components block of that file, add the following content, including adding the block if it doesn’t already exist:
components:
- sas-bases/components/crunchydata/internal-platform-postgres
Note: The internal-platform-postgres entry should be listed before any entries that do not relate to Crunchy Data.
Go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the components block of that file, add the following content, including adding the block if it doesn’t already exist:
components:
- sas-bases/components/crunchydata/internal-cds-postgres
Note: The internal-cds-postgres entry should be listed before any entries that do not relate to Crunchy Data.
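Putting the pieces together, a base kustomization.yaml for internally managed PostgreSQL might contain entries like the following sketch. The CDS entries apply only if your order includes products that require CDS PostgreSQL, and the ellipses stand in for the rest of your deployment's entries:

```yaml
# Hypothetical excerpt of $deploy/kustomization.yaml for internal (Crunchy Data) PostgreSQL.
# Include the cds-postgres entries only if your order requires CDS PostgreSQL.
resources:
  ...
  - sas-bases/overlays/postgres/platform-postgres
  - sas-bases/overlays/postgres/cds-postgres
  - sas-bases/overlays/crunchydata/postgres-operator
  ...
components:
  - sas-bases/components/crunchydata/internal-platform-postgres
  - sas-bases/components/crunchydata/internal-cds-postgres
  ...
```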
Crunchy Data supports many PostgreSQL features and configurations. Here are the supported options:
- $deploy/sas-bases/examples/crunchydata/backups/README.md (for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_crunchy_data_pgbackrest_utility.htm (for HTML format)
- $deploy/sas-bases/examples/crunchydata/pod-resources/README.md (for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_postgresql_pod_resources.htm (for HTML format)
- $deploy/sas-bases/examples/crunchydata/replicas/README.md (for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_postgresql_replicas_count.htm (for HTML format)
- $deploy/sas-bases/examples/crunchydata/storage/README.md (for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_postgresql_storage.htm (for HTML format)
- $deploy/sas-bases/examples/crunchydata/tuning/README.md (for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_postgresql_database_tuning.htm (for HTML format)
PostgreSQL is highly configurable, allowing you to tune the server(s) to meet expected workloads. This README describes how to tune and adjust the configuration for your PostgreSQL clusters. Here are the transformers in $deploy/sas-bases/examples/crunchydata/tuning/ with a description of the purpose of each:
- crunchy-tuning-connection-params-transformer.yaml: Change PostgreSQL connection parameters
- crunchy-tuning-log-params-transformer.yaml: Change PostgreSQL log parameters
- crunchy-tuning-patroni-params-transformer.yaml: Change Patroni parameters
- crunchy-tuning-pg-hba-no-tls-transformer.yaml: Set the entry for the pg_hba.conf file to disable TLS
Copy the transformer file (for example, $deploy/sas-bases/examples/crunchydata/tuning/crunchy-tuning-connection-params-transformer.yaml
) into your $deploy/site-config/crunchydata/
directory.
Rename the copied file to something unique. For example, the above transformer targeting Platform PostgreSQL could be named platform-postgres-crunchy-tuning-connection-params-transformer.yaml
.
Adjust the values in your copied file using the in-line comments of the file and the directions in “Customize the Configuration Settings” below.
Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml
). The following example uses an example transformer file named platform-postgres-crunchy-tuning-connection-params-transformer.yaml
:
transformers:
- site-config/crunchydata/platform-postgres-crunchy-tuning-connection-params-transformer.yaml
To change the PostgreSQL parameters, such as a log filename with a timestamp instead of the day of the week, use the crunchy-tuning-log-params-transformer.yaml file as a sample transformer. You can add, remove, or update log parameters and their values following the pattern shown in the sample file. For the complete list of available PostgreSQL configuration parameters, see PostgreSQL Server Configuration.
Deployments that use non-TLS or Front-Door TLS can use the crunchy-tuning-pg-hba-no-tls-transformer.yaml file to make the incoming client connections go through without TLS.
PostgreSQL High Availability (HA) cluster deployments have one primary database node and one or more standby database nodes. Data is replicated from the primary node to the standby node(s). In Kubernetes, a standby node is referred to as a replica. This README describes how to configure the number of replicas in a PostgreSQL HA cluster.
Copy the file $deploy/sas-bases/examples/crunchydata/replicas/crunchy-replicas-transformer.yaml
into your $deploy/site-config/crunchydata/
directory.
Adjust the values in your copied file following the in-line comments.
Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml
), including adding the block if it doesn’t already exist:
transformers:
- site-config/crunchydata/crunchy-replicas-transformer.yaml
For more information, see SAS Viya Platform Deployment Guide.
PostgreSQL backups play a vital role in disaster recovery. Automatically scheduled backups and backup retention policies prevent unnecessary storage accumulation and further support disaster recovery. SAS installs Crunchy Data PostgreSQL servers with automatically scheduled backups and a retention policy. This README describes how to change the configuration settings of these backups.
Note: The backup settings here are for the internal Crunchy Data pgBackRest utility, not for SAS Viya backup and restore utility.
Copy the file $deploy/sas-bases/examples/crunchydata/backups/crunchy-pgbackrest-backup-config-transformer.yaml
into your $deploy/site-config/crunchydata/
directory.
Adjust the values in your copied file following the in-line comments.
Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml
), including adding the block if it doesn’t already exist:
transformers:
- site-config/crunchydata/crunchy-pgbackrest-backup-config-transformer.yaml
Note: Avoid scheduling backups during times when the environment might be shut down, such as Saturday or Sunday if you regularly scale down your Kubernetes cluster on weekends.
For more information about deployment, see SAS Viya Platform Deployment Guide.
For more information about pgBackRest, see pgBackRest User Guide and pgBackRest Command Reference.
PostgreSQL data is stored inside Kubernetes PersistentVolumeClaims (PVCs). This README describes how to adjust PostgreSQL PVC settings such as size and storage classes.
Important: Changing the storage class for PostgreSQL PVCs after the initial SAS Viya platform deployment must use the process described in Change the Storage Class of the Data Pod. Changing the access mode is not allowed after the initial SAS Viya platform deployment. The only supported access mode is ReadWriteOnce (RWO); the access mode setting is a placeholder for future use.
Copy the file $deploy/sas-bases/examples/crunchydata/storage/crunchy-storage-transformer.yaml
into your $deploy/site-config/crunchydata/
directory.
Rename the copied file to something unique. SAS recommends following the naming convention {{ CLUSTER-NAME }}-crunchy-storage-transformer.yaml
. For example, a copy of the transformer targeting Platform PostgreSQL could be named platform-postgres-crunchy-storage-transformer.yaml
.
Adjust the values in your copied file following the in-line comments.
Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml
), including adding the block if it doesn’t already exist. The following example shows the content based on a file named platform-postgres-crunchy-storage-transformer.yaml
:
transformers:
- site-config/crunchydata/platform-postgres-crunchy-storage-transformer.yaml
For reference, SAS uses the following default values:
PostgreSQL PVCs
pgBackrest PVCs
For more information, see SAS Viya Platform Deployment Guide.
This README describes how to adjust the CPU and memory usage of the PostgreSQL-related pods. The minimum for each of these values is described by their request and the maximum for each of these values is described by their limit.
Copy the file $deploy/sas-bases/examples/crunchydata/pod-resources/crunchy-pod-resources-transformer.yaml
into your $deploy/site-config/crunchydata/
directory.
Adjust the values in your copied file following the in-line comments. As a point of reference, the SAS defaults are as follows:
# PostgreSQL values
requests:
  cpu: 150m
  memory: 2Gi
limits:
  cpu: 8000m
  memory: 8Gi
# pgBackrest values
requests:
  cpu: 100m
  memory: 256Mi
limits:
  cpu: 1000m
  memory: 500Mi
Add a reference to the file in the transformers block of the base kustomization.yaml ($deploy/kustomization.yaml
), including adding the block if it doesn’t already exist:
transformers:
- site-config/crunchydata/crunchy-pod-resources-transformer.yaml
For more information, see SAS Viya Platform Deployment Guide.
For more information about Pod CPU and memory resource configuration, see the Kubernetes documentation on managing resources for containers.
Arke is a message broker proxy that sits between all services and RabbitMQ. This README file describes the settings available for deploying Arke.
Based on the following description of the available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ MEMORY-LIMIT }}. Replace the entire variable string, including the braces, with the value you want to use.
After you have edited the file, add a reference to it in the transformers block
of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an
example using the Arke transformers:
transformers:
...
- site-config/arke/arke-modify-cpu.yaml
- site-config/arke/arke-modify-memory.yaml
- site-config/arke/arke-modify-hpa-replicas.yaml
The example files are located at $deploy/sas-bases/examples/arke
.
The following list contains a description of each example file for Arke settings
and the file names.
This README file describes the settings available for deploying RabbitMQ.
Based on the following description of the available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ NUMBER-OF-NODES }}. Replace the entire variable string, including the braces, with the value you want to use.
After you have edited the file, add a reference to it in the transformers block
of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an
example using the RabbitMQ nodes transformer:
transformers:
...
- site-config/rabbitmq/configuration/rabbitmq-node-count.yaml
The example files are located at $deploy/sas-bases/examples/rabbitmq/configuration
.
The following list contains a description of each example file for RabbitMQ settings
and the file names.
Note: The default number of nodes is 3. SAS recommends an odd node count, such as 1, 3, or 5.
Note: The default memory limit is 8Gi, which may not be sufficient under some workloads. If the RabbitMQ pods are restarting on their own or if you notice memory usage above 4Gi, then you should increase the memory limit. RabbitMQ requires the additional 4Gi for garbage collection.
Note: You must delete the RabbitMQ statefulset and PVCs before applying the PVC size change. Use the following procedure:
1. Delete the RabbitMQ statefulset.
kubectl -n <name-of-namespace> delete statefulset sas-rabbitmq-server
2. Wait for all of the pods to terminate before deleting the PVCs. You can check the status of the RabbitMQ pods with the following command:
kubectl -n <name-of-namespace> get pods -l app.kubernetes.io/name=sas-rabbitmq-server
3. When no pods are listed as output for the command in step 2, delete the PVCs:
kubectl -n <name-of-namespace> delete pvc -l app.kubernetes.io/name=sas-rabbitmq-server
(Optional) Enable access to the RabbitMQ Management UI (rabbitmq-enable-management-ui.yaml).
Note: SAS does not recommend leaving the RabbitMQ Management UI enabled. However, the rabbitmq-enable-management-ui.yaml file can be used for that purpose. SAS does not recommend adding it to the base kustomization.yaml file.
Note: Consider the following when you are reducing resources allocated for RabbitMQ:
IMPORTANT: Starving RabbitMQ of CPU, memory, or disk space can cause RabbitMQ to become unstable, affecting the operation of the SAS Viya platform.
Redis is used as a distributed cache for SAS Viya platform services. This README file describes the settings available for deploying Redis.
The redis-modify-memory.yaml
transformer file allows you to change the memory resources for Redis nodes. The default memory request is 90Mi, and the default limit is 500Mi. The Redis ‘maxmemory’ setting is set to 90% of the container memory limit. To change those values:
Copy the $deploy/sas-bases/examples/redis/server/redis-modify-memory.yaml
file to site-config/redis/server/redis-modify-memory.yaml
.
The variables in the copied file are set off by curly braces and spaces, such as {{ MEMORY-LIMIT }}. Replace each variable string, including the braces, with the values you want to use. If you want to use the default for a variable, make no changes to that variable.
After you have edited the file, add a reference to it in the transformers block
of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an
example:
transformers:
...
- site-config/redis/server/redis-modify-memory.yaml
The SAS Viya platform can allow two-way communication between SAS (CAS and Compute engines) and open source environments (Python and R). This README describes the various post-installation steps required to install, configure, and deploy Python and R to enable integration in the SAS Viya platform.
The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python. Before you use those files, you must perform the following tasks:
- $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_sas_micro_analytic_service_to_support_analytic_stores.htm (for HTML format).
Each of the following numbered sections provides details about installation and configuration steps required to enable various open source integration points.
SAS provides the SAS Configurator for Open Source utility, which automates the download and installation of Python from source by creating and executing the sas-pyconfig
job. For details, including the steps to configure one or more Python environments using the SAS Configurator for Open Source, see the README at $deploy/sas-bases/examples/sas-pyconfig/README.md
(for Markdown format) or $deploy/sas-bases/doc/sas_configurator_for_open_source_options.htm
(for HTML format). The example file $deploy/sas-bases/examples/sas-pyconfig/change-configuration.yaml
contains default options that can be run as is or tailored to your environment, including which Python version to install, which collection of Python libraries to install, and whether to install multiple Python environments with different configurations (such as Python libraries or Python versions). Python is installed into a persistent volume that is mounted into the SAS Viya platform pods later (see Step 3: Configure Python and R to Be Visible in the SAS Viya Platform).
SAS recommends that you increase CPU and memory beyond the default values when using the SAS Configurator for Open Source to avoid out-of-memory errors during the installation of Python. See the Resource Management section of the README. Also, per SAS Documentation: Required Updates by Component, #3, you must delete the sas-pyconfig
job after successful completion of the Python installation and before deploying a manual update. Otherwise, you will see an error similar to the following:
Job.batch "sas-pyconfig" is invalid: spec.template: Invalid value: ... field is immutable".
You might also want to turn off the sas-pyconfig
job by setting the global.enabled
value to false
in $deploy/site-config/sas-pyconfig/change-configuration.yaml
file prior to executing future manual deployments, to prevent a restart of the sas-pyconfig
job.
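As a sketch only (the structure of the SAS-provided change-configuration.yaml may differ from what is shown here), disabling the job could look something like the following patch entry in $deploy/site-config/sas-pyconfig/change-configuration.yaml:

```yaml
# Hypothetical sketch: turning off the sas-pyconfig job by setting global.enabled to false.
# Verify the actual patch structure against the SAS-provided example file before use.
patch: |-
  - op: replace
    path: /data/global.enabled
    value: "false"
```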
Note that the SAS Configurator for Open Source requires an internet connection. If your SAS Viya platform environment does not have access to the public internet, you will need to download, install, and configure Python on an internet-accessible device and transfer those files to your deployment environment.
Install R from source in a persistent volume that will be mounted to the SAS Viya platform pods during Step 3: Configure Python and R to be Visible in the SAS Viya Platform. After installing R, you should also download and install all desired R packages (for example, by starting an R session and executing the install.packages(my-desired-package)
command). Two notes of caution:
/lib/[your-linux-distribution]
into the /your-R-parent-directory/lib/R/lib
within the PVC directory where you install R (/your-R-parent-directory
).R
and Rscript
files./r-mount
). You can specify the directory during the configuration of your R installation by setting the --prefix=/{{ your-mountPath }}
option (where you replace {{ your-mountPath }}
with the desired mountPath in your pods) when running ./configure
. Install all R packages within that /r-mount
directory, and copy all shared libraries into the subdirectory /r-mount/lib/R/lib
). Finally, copy or move the entire contents of {{ your-mountPath }}
into your PVC directory of choice.If your SAS Viya platform environment does not have access to the public internet, you will need to download, install, and configure R on an internet-accessible device and transfer those files to your deployment environment.
Add NFS mounts for Python and R directories. Now that Python and R are installed on your persistent storage, you need to mount those directories so that they are available to the SAS Viya platform pods. Do this by copying transformers for Python and R from the $deploy/sas-bases/examples/sas-open-source-config/python
and $deploy/sas-bases/examples/sas-open-source-config/r
directories into your $deploy/site-config/sas-open-source-config
Python and R directories. For details, refer to the following two READMEs:
- $deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_sas_viya_using_a_kubernetes_persistent_volume.htm (for HTML format).
- $deploy/sas-bases/examples/sas-open-source-config/r/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_r_for_sas_viya.htm (for HTML format).
This step makes the installed software visible to the SAS Viya platform pods. You must enable lockdown access methods (for Python) and configure the SAS Viya platform to connect to your open-source packages (both Python and R) to enable users to connect to R or Python from within a SAS Viya platform GUI.
This step opens up communication between Python or R, and the SAS Viya platform. You will need to enable python
and python_embed
methods for most, if not all, Python integration points; the socket
method is also required to enable PROC Python and the Python Code Editor. For details, see $deploy/sas-bases/examples/sas-programming-environment/lockdown/README.md
.
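As an illustration only, following the patch pattern used elsewhere in this document, enabling those access methods might look similar to the sketch below. The VIYA_LOCKDOWN_USER_METHODS key is an assumption, so confirm the actual key and file structure against the lockdown README before use:

```yaml
# Hypothetical sketch: enabling the python, python_embed, and socket lockdown access methods.
# The VIYA_LOCKDOWN_USER_METHODS key is an assumption; check the lockdown README for the real key.
patch: |-
  - op: add
    path: /data/VIYA_LOCKDOWN_USER_METHODS
    value: "python python_embed socket"
```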
These steps tell the SAS Viya platform how to connect to your Python and R binaries that you installed in the mounted directories. For details, see:
- $deploy/sas-bases/examples/sas-open-source-config/python/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_sas_viya_using_a_kubernetes_persistent_volume.htm (for HTML format).
- $deploy/sas-bases/examples/sas-open-source-config/r/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_r_for_sas_viya.htm (for HTML format).
Following the steps in these two READMEs, you will update the Python- and R-specific kustomization.yaml
files (in their respective folders within $deploy/site-config/sas-open-source-config
) to replace the {{ }}
placeholders with your installation’s details (for example, RHOME
path pointing to the parent directory where R is mounted). These kustomization
files create environment variables that are made available in the SAS Viya platform pods. These new environment variables tell the SAS Viya platform where to look for the Python and R executables and associated libraries.
If you have licensed SAS/IML, you also need to create two new environment variables to enable R to be called by PROC IML in a SAS Program (for details, see SAS Documentation on the RLANG system option):
- R_HOME must point to {{ r-parent-directory }}/lib/R within your mounted R directory (for example, /r-mount/lib/R if R is mounted to /r-mount).
- The SASV9_OPTIONS environment variable must be set to =-RLANG.
You can automate the creation of these two environment variables by adding them to $deploy/site-config/sas-open-source-config/r/kustomization.yaml
, or after deploying your updates by adding them within the SAS Environment Manager GUI.
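For example, one way to automate this is to add the two variables as literals to the configMapGenerator in that kustomization.yaml. In the sketch below, the generator name is an assumption (keep whatever name the SAS-provided example file already uses), and the R_HOME value assumes R is mounted at /r-mount as in the example above:

```yaml
# Hypothetical sketch: adding R_HOME and SASV9_OPTIONS as literals.
# The generator name is illustrative; the R_HOME path assumes an /r-mount mountPath.
configMapGenerator:
- name: sas-open-source-config-r
  literals:
  - R_HOME=/r-mount/lib/R
  - SASV9_OPTIONS=-RLANG
```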
For both Python and R, you also need to create a single new XML file with the “External languages settings”. This is required for FCMP and PROC TSMODEL’s EXTLANG package.
By default, CAS resources can be accessed by Python and R from within the cluster, but not external to the cluster. To access CAS resources outside the cluster (such as from an existing JupyterHub deployment elsewhere or from a desktop installation of R-Studio), additional configuration steps are required to enable binary (recommended) access. For details, see the README at $deploy/sas-bases/examples/cas/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm
(for HTML format). See also SAS Viya Platform Operations: Configure External Access to CAS.
External connections to the SAS Viya platform, including CAS, can be made using resources that SAS provides for developers, open-source programmers, and system administrators who want to leverage or manage the computational capabilities of the SAS Viya platform but from open-source coding interfaces. See the SAS Developer Home page for up-to-date information about the different collections of resources, such as code libraries and APIs for building apps with SAS, SAS Viya Platform and CAS REST APIs, and end-to-end example API use cases.
The SAS Viya platform must be configured to enable users to register and publish open-source models in the SAS Viya platform. For details and configuration options, see the following resources:
Deployment READMEs:
- $deploy/sas-bases/examples/sas-model-repository/python/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_sas_model_repository_service.htm (for HTML format)
- $deploy/sas-bases/examples/sas-model-repository/r/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_rpy2_for_sas_model_repository_service.htm (for HTML format)
- $deploy/sas-bases/examples/sas-model-publish/git/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_git_for_sas_model_publish_service.htm (for HTML format)
- $deploy/sas-bases/examples/sas-model-publish/kaniko/README.md (for Markdown format) or at $deploy/sas-bases/docs/configure_kaniko_for_sas_model_publish_service.htm (for HTML format)

The SAS Viya platform allows direct integration with Git within the SAS Studio interface. Follow the steps outlined in the following resources:
- Edit the sas.studio.showServerFiles property.
- To specify the directory path for the root node in the Folders tree in SAS Studio, edit the sas.studio.fileNavigationCustomRootPath and sas.studio.fileNavigationRoot properties.

The configuration properties can be edited within the SAS Environment Manager console, or by using the SAS Viya Platform Command Line Interface tool's Configuration plug-in.
The following links were referenced in this README or provide further useful information:
The following table maps each specific open-source integration point to the relevant resource(s) containing details about configuring that specific integration point.
README | PROC Python | PROC FCMP (Python) | PROC IML (R) | Open Source Code Node (Python) | Open Source Code Node (R) | EXTLANG Package (Python) | EXTLANG Package (R) | SWAT (Python & R)
---|---|---|---|---|---|---|---|---
Python configuration | x | x | | x | | x | |
R configuration | | | x | | x | | x |
Lockdown methods | x | x | | x | | x | |
External access to CAS | | | | | | | | x
Python configuration: see the README at $deploy/sas-bases/examples/sas-open-source-config/python/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_sas_viya_using_a_kubernetes_persistent_volume.htm
(for HTML format).
R configuration: see the README at $deploy/sas-bases/examples/sas-open-source-config/r/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_r_for_sas_viya.htm
(for HTML format).
Lockdown methods: See the README at $deploy/sas-bases/examples/sas-programming-environment/lockdown/README.md
(for Markdown format) or at $deploy/sas-bases/docs/lockdown_settings_for_the_sas_programming_environment.htm
(for HTML format).
External access to CAS: See the README at $deploy/sas-bases/examples/cas/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm
(for HTML format).
The SAS Viya platform can use a customer-prepared environment consisting of a Python installation and any required packages stored on a Kubernetes PersistentVolume. This README describes how to make that volume available to your deployment.
SAS provides a utility, SAS Configurator for Open Source, that facilitates the download and management of Python from source and partially automates the steps to integrate Python with the SAS Viya platform. SAS recommends that you use this utility.
For comprehensive documentation related to the configuration of open-source language integration, including the use of SAS Configurator for Open Source, see SAS Viya Platform: Integration with External Languages.
Note: The examples provided in this README are appropriate for a manual deployment of Python integration. For a deployment that uses SAS Configurator for Open Source, consult SAS Viya Platform: Integration with External Languages.
The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python. Before you use those files, you must perform the following tasks:
Make note of the attributes for the volume where Python and the associated packages are to be deployed. For example, note the server and directory for NFS.
For more information about various types of PersistentVolumes in Kubernetes, see Additional Resources.
If you are deploying on a Red Hat OpenShift cluster, you may need to define permissions for the service account for the volume that you mount for Python. For more information about installing the service account overlay, refer to the README file at $deploy/sas-bases/overlays/sas-microanalytic-score/service-account/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_sas_micro_analytic_service_to_add_service_account.htm (for HTML format).
Install Python and any necessary packages on the volume.
In addition to the volume attributes, you must have the following information:
The Python overlay for sas-microanalytic-score uses a Persistent Volume named astores-volume, which is defined in the astores overlay. The Python and astore overlays are usually installed together. If you choose to install the python overlay only, you still need to install the astores overlay as well. For more information on installing the astores overlay, refer to the “Configure SAS Micro Analytic Service to Support Analytic Stores” README file at $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_sas_micro_analytic_service_to_support_analytic_stores.htm
(for HTML format).
Copy the files in the $deploy/sas-bases/examples/sas-open-source-config/python
directory to the $deploy/site-config/sas-open-source-config/python
directory.
Create the destination directory, if it does not already exist.
Note: If the destination directory already exists, verify that the overlay has been applied.
If the output contains the /python
mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters to use a different Python environment.
The kustomization.yaml file defines all the necessary environment variables. Replace all tags, such as {{ PYTHON-EXE-DIR }}, with the values that you gathered in the Prerequisites step. Then, set the following parameters, according to the SAS products you will be using:
Note: Any environment variables that you define in this example will be set on all pods, although they might not have an effect. For example, setting MAS_PYPATH will not affect the Python executable used by the EXTLANG package. That executable is set in the SAS_EXTLANG_SETTINGS file. However, if you define $MAS_PYPATH you can then use it in the SAS_EXTLANG_SETTINGS file. For example,
<LANGUAGE name="PYTHON3" interpreter="$MAS_PYPATH"></LANGUAGE>
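As an illustration of the finished result, the tag-to-value substitution in the copied kustomization.yaml might look like the excerpt below. The generator name matches the sas-open-source-config-python ConfigMap referenced later in this README; the path is an example only and must be replaced with the values gathered in the Prerequisites step.

```yaml
configMapGenerator:
- name: sas-open-source-config-python
  literals:
    - MAS_PYPATH=/python/python3/bin/python3.8   # was {{ PYTHON-EXE-DIR }}/{{ PYTHON-EXECUTABLE }}
```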
Attach storage to your SAS Viya platform deployment. The python-transformer.yaml file uses PatchTransformers in Kustomize to attach the volume containing your Python installation to the SAS Viya platform. Replace {{ VOLUME-ATTRIBUTES }} with the appropriate volume specification.
For example, when using an NFS mount, the {{ VOLUME-ATTRIBUTES }} tag should be replaced with nfs: {path: /vol/python, server: myserver.sas.com}
where myserver.sas.com
is the NFS server and /vol/python
is the NFS path you recorded in the Prerequisites step.
The relevant code excerpt from python-transformer.yaml file before the change:
patch: |-
# Add Python Volume
- op: add
path: /spec/template/spec/volumes/-
value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
The relevant code excerpt from python-transformer.yaml file after the change:
patch: |-
# Add Python Volume
- op: add
path: /spec/template/spec/volumes/-
value: { name: python-volume, nfs: {path: /vol/python, server: myserver.sas.com} }
Also in the python-transformer.yaml file, there is a PatchTransformer called sas-python-sas-java-policy-allow-list. This PatchTransformer sets paths to the Python executable so that the SAS runtime allows execution of the Python code. Replace the {{ PYTHON-EXE-DIR }} and {{ PYTHON-EXECUTABLE }} tags with the appropriate values. If you are specifying multiple Python environments, set each of them here. Here is an example:
apiVersion: builtin
kind: PatchTransformer
metadata:
name: add-python-sas-java-policy-allow-list
patch: |-
- op: add
path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH
value: /python/python3/bin/python3.8
- op: add
path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH2
value: /python/python2/bin/python2.7
target:
kind: ConfigMap
name: sas-programming-environment-java-policy-config
Python runs in a separate container in the sas-microanalytic-score pod. Default resource limits are defined for the Python container in the python-transformer.yaml file. Depending upon your application requirements, the CPU and memory values can be modified in the resources section of that file.
```yaml
command: ["$(MAS_PYPATH)", "$(MAS_M2PATH)"]
envFrom:
- configMapRef:
name: sas-open-source-config-python
- configMapRef:
name: sas-open-source-config-python-mas
resources:
requests:
memory: 50Mi
cpu: 50m
limits:
memory: 500Mi
cpu: 500m
```
Make the following changes to the base kustomization.yaml file in the $deploy directory:

- Add site-config/sas-open-source-config/python to the resources block.
- Add site-config/sas-open-source-config/python/python-transformer.yaml to the transformers block, before the reference to sas-bases/overlays/required/transformers.yaml.

Here is an example:
resources:
- site-config/sas-open-source-config/python
transformers:
...
- site-config/sas-open-source-config/python/python-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
The Process Orchestration feature requires additional tasks to configure Python. If your deployment includes the Process Orchestration feature, then perform the steps in the README located at $deploy/sas-bases/examples/sas-airflow/python/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_process_orchestration.htm
(for HTML format).
Note: If you are not certain if your deployment includes Process Orchestration, look at the directory path for the README described above. If the README is present, then Process Orchestration is included in your deployment. If the README is not present, Process Orchestration is not included in the deployment, and you should go to the next step.
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run `kustomize build` to create and apply the manifests.
- If the overlay is applied after the initial deployment of the SAS Viya platform, run `kustomize build` to create and apply the manifests.

All affected pods, except the CAS Server pod, are automatically restarted when the overlay is applied. If the overlay is applied after the initial deployment, the CAS Server might need an explicit restart. For information, see Restart CAS Server.
Run the following command to verify whether the overlay has been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts:
/python (r)
The SAS Viya platform can use a customer-prepared environment consisting of a Python installation (and any required packages) that are stored on a Kubernetes PersistentVolume or a Docker image. This README describes how to make a Docker image that contains a Python installation available to your deployment.
Note: Python can be used by the Micro Analytic Score service, Cloud Analytic Services (CAS) and the Compute service. However, accessing Python via a Docker image is currently available as an option only for the Micro Analytic Score service. Therefore, if you use this method and you require Python for CAS or the Compute Server, a Python distribution must also be available via a Kubernetes persistent volume.
Because Python can be used from a Docker image only by the Micro Analytic Score service, until the Docker image is available to other pods, make sure that the Python environment in the Docker image is available in the mounted volume for other pods. The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python. Before you use those files, you must perform the following tasks:
Prepare the Python Docker image with all the necessary Python packages that you will be using. Make note of the Python image URL in the Docker registry ( {{ PYTHON-DOCKER-IMAGE-URL }} parameter in python-transformer.yaml) and the configuration settings for accessing the registry with the Python image ( {{ DOCKER-REGISTRY-CONFIG }} parameter in kustomization.yaml).
Here is a sample Docker registry configuration setting:
{"auths": {"registry.company.com": {"username": "myusername","password": "mypassword","email":"[email protected]","auth":"< mysername:mypassword in base64 encoded form>"}}}
For more information about Python image preparation and registry configuration settings, see Additional Resources.
Make note of the attributes for the volume where Python and the associated packages are to be deployed. For example, note the server and directory for NFS. For more information about various types of PersistentVolumes in Kubernetes, see Additional Resources.
Install Python and any necessary packages on the volume.
In addition to the volume attributes, you must have the following information:
Copy the files in the $deploy/sas-bases/examples/sas-open-source-config/python
directory to the $deploy/site-config/sas-open-source-config/python
directory.
Create the destination directory, if it does not already exist.
Note: If the destination directory already exists, verify that the overlay has been applied.
If the output contains the /mas2py
mount directory path, you do not need to take any further action unless you want to change the overlay parameters to use a different Python environment.
Use the kustomization.yaml file to define the necessary environment variables. Replace all tags, such as {{ PYTHON-EXE-DIR }}, with the values that you gathered in the Prerequisites step. Then set the following parameters according to the SAS products that you will be using:
Note: Any environment variables that you define in this example will be set on all pods, although they might not have an effect. For example, setting MAS_PYPATH will not affect the Python executable used by the EXTLANG package. That executable is set in the SAS_EXTLANG_SETTINGS file. However, if you define $MAS_PYPATH you can then use it in the SAS_EXTLANG_SETTINGS file. Here is an example:
<LANGUAGE name="PYTHON3" interpreter="$MAS_PYPATH"></LANGUAGE>
Attach storage to your SAS Viya platform deployment. The python-image-transformer.yaml file uses PatchTransformers in Kustomize to attach the Python installation volume to the SAS Viya platform. Replace {{ VOLUME-ATTRIBUTES }} with the appropriate volume specification.
For example, when using an NFS mount, the {{ VOLUME-ATTRIBUTES }} tag should be replaced with nfs: {path: /vol/python, server: myserver.sas.com}
where myserver.sas.com
is the NFS server and /vol/python
is the NFS path that you recorded in the Prerequisites step.
Here is the relevant code excerpt from the python-image-transformer.yaml file before the change:
patch: |-
# Add side car Container
- op: add
path: /spec/template/spec/containers/-
value:
name: viya4-mas-python-runner
image: {{ PYTHON-DOCKER-IMAGE-URL }}
patch: |-
# Add Python Volume
- op: add
path: /spec/template/spec/volumes/-
value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
Here is the relevant code excerpt from the python-image-transformer.yaml file after the change:
patch: |-
# Add side car Container
- op: add
path: /spec/template/spec/containers/-
value:
name: viya4-mas-python-runner
image: registry.company.com/python-env:latest
patch: |-
# Add Python Volume
- op: add
path: /spec/template/spec/volumes/-
value: { name: python-volume, nfs: {path: /vol/python, server: myserver.sas.com} }
Here is the relevant code excerpt from the kustomization.yaml file before the change:
secretGenerator:
- name: python-regcred
type: kubernetes.io/dockerconfigjson
literals:
- '.dockerconfigjson={{ DOCKER-REGISTRY-CONFIG }}'
The relevant code excerpt from the kustomization.yaml file after the change:
secretGenerator:
- name: python-regcred
type: kubernetes.io/dockerconfigjson
literals:
- '.dockerconfigjson={"auths": {"registry.company.com": {"username": "myusername","password": "mypassword","email":"[email protected]","auth":"<myusername:mypassword in base64 encoded form>"}}}'
The python-image-transformer.yaml file contains a PatchTransformer called sas-python-sas-java-policy-allow-list. This PatchTransformer sets paths to the Python executable so that the SAS runtime allows execution of the Python code. Replace the {{ PYTHON-EXE-DIR }} and {{ PYTHON-EXECUTABLE }} tags with the appropriate values. If you are specifying multiple Python environments, set each of them here. Here is an example:
apiVersion: builtin
kind: PatchTransformer
metadata:
name: add-python-sas-java-policy-allow-list
patch: |-
- op: add
path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH
value: /python/python3/bin/python3.8
- op: add
path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH2
value: /python/python2/bin/python2.7
target:
kind: ConfigMap
name: sas-programming-environment-java-policy-config
Python runs in a separate container in the sas-microanalytic-score pod. Default resource limits are defined for the Python container in the python-image-transformer.yaml file. Depending on your application requirements, the CPU and memory values can be modified in the resources section of that file. Here is an example:
command: ["$(MAS_PYPATH)", "$(MAS_M2PATH)"]
envFrom:
- configMapRef:
name: sas-open-source-config-python-image-mas
resources:
requests:
memory: 50Mi
cpu: 50m
limits:
memory: 500Mi
cpu: 500m
Make the following changes to the base kustomization.yaml file in the $deploy directory:

- Add site-config/sas-open-source-config/python-image to the resources block.
- Add site-config/sas-open-source-config/python-image/python-image-transformer.yaml to the transformers block, before the reference to sas-bases/overlays/required/transformers.yaml.

Here is an example:
resources:
- site-config/sas-open-source-config/python-image
transformers:
...
- site-config/sas-open-source-config/python-image/python-image-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run `kustomize build` to create and apply the manifests.
- If the overlay is applied after the initial deployment of the SAS Viya platform, run `kustomize build` to create and apply the manifests.

All affected pods, except the CAS Server pod, are automatically restarted when the overlay is applied. If the overlay is applied after the initial deployment, the CAS Server might need an explicit restart. For information, see Restart CAS Server.
Run the following command to verify whether the overlay has been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts:
/mas2py
The SAS Viya platform can use a customer-prepared environment consisting of an R installation and any required packages stored on a Kubernetes Persistent Volume. This README describes how to make that volume available to your deployment.
The SAS Viya platform provides YAML files that the Kustomize tool uses to configure R. Before you use those files, you must perform the following tasks:
Make note of the attributes of the volume where R and the associated packages are to be deployed. For example, note the server and directory for NFS. For more information about various types of persistent volumes in Kubernetes, see Additional Resources.
Install R and any necessary packages on the volume.
In addition to the volume attributes, you must have the following information:
Copy the files in the $deploy/sas-bases/examples/sas-open-source-config/r
directory to the $deploy/site-config/sas-open-source-config/r
directory. Create the target directory, if it does not already exist.
Note: If the destination directory already exists, verify that the overlay has been applied.
If the output contains the /nfs/r-mount
directory path, you do not need to take any further actions, unless you want to change the overlay parameters to use a different R environment.
The kustomization.yaml file defines all the necessary environment variables. Replace all tags, such as {{ R-HOMEDIR }}, with the values that you gathered in the Prerequisites step. Then, set the following parameters, according to the SAS products that you will be using:
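As with the Python configuration, the substitution is a tag-to-value replacement in the literals list. The excerpt below is illustrative only: the generator name is hypothetical, and the RHOME value stands in for the parent directory where R is mounted, as mentioned earlier in this document.

```yaml
configMapGenerator:
- name: sas-open-source-config-r    # hypothetical name; keep the name in your copied kustomization.yaml
  literals:
    - RHOME=/nfs/r-mount            # example value replacing an {{ R-HOMEDIR }}-style tag
```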
Attach storage to your SAS Viya platform deployment. The r-transformer.yaml file uses PatchTransformers in Kustomize to attach the volume containing your R installation to the SAS Viya platform. Replace {{ VOLUME-ATTRIBUTES }} with the appropriate volume specification. For example, when using an NFS mount, the {{ VOLUME-ATTRIBUTES }} tag should be replaced with nfs: {path: /vol/r-mount, server: myserver.sas.com}, where myserver.sas.com is the NFS server and /vol/r-mount is the NFS path that you recorded in the Prerequisites step. Also replace {{ R-MOUNTPATH }} with the path where the R volume is to be mounted (for example, /nfs/r-mount, as shown in the excerpt below).

The relevant code excerpt from the r-transformer.yaml file before the change:
patch: |-
# Add R Volume
- op: add
path: /spec/template/spec/volumes/-
value: { name: r-volume, {{ VOLUME-ATTRIBUTES }} }
# Add mount path for R
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: r-volume
mountPath: {{ R-MOUNTPATH }}
readOnly: true
The relevant code excerpt from r-transformer.yaml file after the change:
patch: |-
# Add R Volume
- op: add
path: /spec/template/spec/volumes/-
value: { name: r-volume, nfs: {path: /vol/r, server: myserver.sas.com} }
# Add mount path for R
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: r-volume
mountPath: /nfs/r-mount
readOnly: true
Also in the r-transformer.yaml file, there is a PatchTransformer called sas-r-sas-java-policy-allow-list. This PatchTransformer sets paths to the R interpreter so that the SAS runtime allows execution of the R script. Replace the {{ R-MOUNTPATH }} and {{ R-HOMEDIR }} tags with the appropriate values. Here is an example:
apiVersion: builtin
kind: PatchTransformer
metadata:
name: add-r-sas-java-policy-allow-list
patch: |-
- op: add
path: /data/SAS_JAVA_POLICY_ALLOW_DM_RHOME
value: /nfs/r/R-3.6.2/bin/Rscript
target:
kind: ConfigMap
name: sas-programming-environment-java-policy-config
Make the following changes to the base kustomization.yaml file in the $deploy directory:

- Add site-config/sas-open-source-config/r to the resources block.
- Add site-config/sas-open-source-config/r/r-transformer.yaml to the transformers block.

Here is an example:
resources:
- site-config/sas-open-source-config/r
transformers:
- site-config/sas-open-source-config/r/r-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
**Note:** This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
* If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run `kustomize build` to create and apply the manifests.
* If the overlay is applied after the initial deployment of the SAS Viya platform, run `kustomize build` to create and apply the manifests.
Run the following command to verify whether the overlay has been applied:
kubectl describe pod sas-cas-server-default-controller -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts:
/nfs/r-mount (r)
The SAS Model Repository service provides support for registering, organizing, and managing models within a common model repository. This service is used by SAS Event Stream Processing, SAS Intelligent Decisioning, SAS Model Manager, Model Studio, SAS Studio, and SAS Visual Analytics.
The Model Repository service also includes support for testing and deploying R models. SAS environments such as CAS and SAS Micro Analytic Service do not support direct execution of R code. Therefore, R models in a SAS environment are executed using Python with the rpy2 package. The rpy2 package enables Python to directly access the R libraries and execute R code.
This README describes how to configure your Python and R environments to use the rpy2 package for executing models.
The SAS Viya platform provides YAML files that the Kustomize tool uses to configure Python and R. Before you use those files, you must perform the following tasks:
Note: For rpy2 to work properly, Python and R must be installed on the same system. They do not have to be mounted in the same volume. However, in order to use the R libraries, Python must have access to the directory that was set for the R_HOME environment variable.
Make note of the attributes for the volumes where Python and R, as well as their associated packages, are to be deployed. For example, for NFS, note the NFS server and directory. For more information about the various types of persistent volumes in Kubernetes, see Additional Resources.
Verify that R 3.4+ is installed on the R volume.
Verify that Python 3.5+ and the requests package are installed on the Python volume.
Verify that the R_HOME environment variable is set.
Verify that rpy2 2.9+ is installed as a Python package.
Note: For information about the rpy2 package and version compatibilities, see the rpy2 documentation.
Verify that both the Python and R open-source configurations have been
completed. For more information, see the README files in
$deploy/sas-bases/examples/sas-open-source-config/
.
Copy the files in the $deploy/sas-bases/examples/sas-model-repository/r
directory to the $deploy/site-config/sas-model-repository/r
directory.
Create the target directory, if it does not already exist.
In rpy2-transformer.yaml replace the {{ R-HOME }} value with the R_HOME
directory path. The value for the R_HOME path is the same as the DM_RHOME
value in the kustomization.yaml file, which was specified as part of the R
open-source configuration. That file is located in
$deploy/site-config/sas-open-source-config/r/.
There are three sections in the rpy2-transformer.yaml file that you must update.
Here is a sample of one of the sections before the change:
patch: |-
# Add R_HOME Path
- op: add
path: /template/spec/containers/0/env/-
value:
name: R_HOME
value: {{ R-HOME }}
target:
kind: PodTemplate
name: sas-launcher-job-config
Here is a sample of the same section after the change:
patch: |-
- op: add
path: /template/spec/containers/0/env/-
value:
name: R_HOME
value: /share/nfsviyar/lib64/R
target:
kind: PodTemplate
name: sas-launcher-job-config
In the cas-rpy2-transformer section of the rpy2-transformer.yaml file, update the CASLLP_99_EDMR value, as shown in this example.
Here is the relevant code excerpt before the change:
- op: add
path: /spec/controllerTemplate/spec/containers/0/env/-
value:
name: CASLLP_99_EDMR
value: {{ R-HOME }}/lib
Here is the relevant code excerpt after the change:
- op: add
path: /spec/controllerTemplate/spec/containers/0/env/-
value:
name: CASLLP_99_EDMR
value: /share/nfsviyar/lib64/R/lib
Add site-config/sas-model-repository/r/rpy2-transformer.yaml to the transformers block of the base kustomization.yaml file in the $deploy directory.
transformers:
- site-config/sas-model-repository/r/rpy2-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, then run `kustomize build` to create and apply the manifests.
- If the overlay is applied after the initial deployment of the SAS Viya platform, run `kustomize build` to create and apply the manifests.

The SAS Viya platform can be deployed as a High Availability (HA) system. In this mode, the SAS Viya platform has redundant stateless and stateful services to handle service outages, such as an errant Kubernetes node.
A kustomize transformer enables High Availability (HA) among the stateless microservices in the SAS Viya platform. Stateful services, with the exception of SMP CAS, are HA-enabled at initial deployment.
Add the sas-bases/overlays/scaling/ha/enable-ha-transformer.yaml
to the
transformers block in your base kustomization.yaml file.
...
transformers:
...
- sas-bases/overlays/scaling/ha/enable-ha-transformer.yaml
After the base kustomization.yaml file is modified, deploy the software using the commands
that are described in Deploy the Software.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
Important: The transformer described in this README can be used to deploy the SAS Viya platform in a mode that is not high availability (HA). A non-HA deployment might be suitable for test environments. However, non-HA deployments are not recommended for production environments.
The SAS Viya platform deploys stateful components in a High Availability configuration by default. Do not perform these steps on an environment that has already been configured.
This feature triggers outages during updates as the single replica components update.
A series of kustomize transformers modifies the appropriate SAS Viya platform deployment components to a single replica mode.
Add sas-bases/overlays/scaling/single-replica/transformer.yaml
to the
transformers block in your base kustomization.yaml file. Here is an example:
...
transformers:
...
- sas-bases/overlays/scaling/single-replica/transformer.yaml
To apply the change, run `kustomize build -o site.yaml`.
Before reading this document, you should be familiar with the content in SAS® Viya® Platform Encryption: Data in Motion. In addition, you should have made the following decisions:
Because the openssl certificate generator is the default, the absence of references to a certificate generator in your site-config directory will result in openssl being used. No additional steps are required.
For information about supported versions of cert-manager, see Kubernetes Cluster Requirements.
In order to use the cert-manager certificate generator, it must be correctly configured prior to deploying the SAS Viya platform.
Create a configMap generator to customize the sas-certframe settings. The steps to create these customizations are located in Configuring Certificate Attributes.
Set the SAS_CERTIFICATE_GENERATOR
environment variable to cert-manager
in the file you created in step 1. Here is an example:
---
apiVersion: builtin
kind: ConfigMapGenerator
metadata:
name: sas-certframe-user-config
behavior: merge
literals:
- SAS_CERTIFICATE_GENERATOR=cert-manager
Cert-manager uses a CA Issuer to create the server identity certificates used by the SAS Viya platform. The cert-manager CA issuer requires an issuing CA certificate. The issuing CA for the issuer is stored in a secret named sas-viya-ca-certificate-secret. Add a reference to the cert-manager issuer to the resources block of the base kustomization.yaml file. Here is an example:
resources:
...
- sas-bases/overlays/cert-manager-issuer
Copy this example file to your /site-config
directory, and modify it as described in the comments:
cd $deploy
cp sas-bases/examples/security/customer-provided-ingress-certificate.yaml site-config/security
vi site-config/security/customer-provided-ingress-certificate.yaml
When you have completed your modifications, add the path to this file to the generators
block of your $deploy/kustomization.yaml
file (see the examples below to add a
generators:
block if one does not already exist).
generators:
- site-config/security/customer-provided-ingress-certificate.yaml # configures the ingress to use a secret that contains customer-provided certificate and key
An example of the code that creates an ingress controller certificate and stores it in a secret is provided in the following file:
sas-bases/examples/security/openssl-generated-ingress-certificate.yaml
Copy the example to your /site-config
directory and modify it as described in the comments that are included in the code.
cd $deploy
cp sas-bases/examples/security/openssl-generated-ingress-certificate.yaml site-config/security
vi site-config/security/openssl-generated-ingress-certificate.yaml
When you have completed your modifications, add the path to this file to the resources block of your base kustomization.yaml file:
resources:
- site-config/security/openssl-generated-ingress-certificate.yaml # causes openssl to generate an ingress certificate and key and store them in a secret
To use cert-manager to generate the ingress certificate, add the following path to the transformers block of your base kustomization.yaml file:
transformers:
- sas-bases/overlays/cert-manager-provided-ingress-certificate/ingress-annotation-transformer.yaml # causes cert-manager to generate an ingress certificate and key and store them in a secret
An example of the code that configures cert-manager to generate the certificate and secret is provided in the following file:
sas-bases/examples/security/cert-manager-pre-created-ingress-certificate.yaml
Copy the example to your /site-config
directory and modify it as described in the comments that are included in the code. Note that you will need to know the network DNS alias of your Kubernetes ingress controller.
cd $deploy
cp sas-bases/examples/security/cert-manager-pre-created-ingress-certificate.yaml site-config/security
vi site-config/security/cert-manager-pre-created-ingress-certificate.yaml
When you have completed your modifications, add the path to this file to the resources block of your base kustomization.yaml file:
resources:
- site-config/security/cert-manager-pre-created-ingress-certificate.yaml # causes cert-manager to generate an ingress certificate and key and store them in a secret
Ensure that any of the following TLS components that are added to the components
block of the base kustomization.yaml file come after any other SAS-provided components, but before any user-provided components. This ensures that TLS customizations are applied to the fully-formed manifests of individual SAS offerings without conflicting with any customizations applied by the user.
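For example, with an NGINX ingress controller and Full-stack TLS (the same component paths used in the examples later in this README), the ordering inside the components block would be:

```yaml
components:
  # SAS-provided TLS components, after any other SAS-provided components
  - sas-bases/components/security/core/base/full-stack-tls
  - sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
  # user-provided components, if any, are added after the TLS components
```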
In Full-stack TLS mode, the ingress controller must be configured to decrypt incoming network traffic and re-encrypt traffic before forwarding it to the back-end SAS servers. Network traffic between SAS servers is encrypted in this mode. To enable Full-Stack TLS, include the customization that corresponds to your ingress controller in the components
block of the base kustomization.yaml file:
components:
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
components:
- sas-bases/components/security/network/route.openshift.io/route/full-stack-tls
components:
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls
components:
- sas-bases/components/security/network/route.openshift.io/route/front-door-tls
Add this component to your kustomization.yaml to configure the SAS Viya platform for Front-door TLS mode and configure CAS and SAS/CONNECT to encrypt network traffic:
IMPORTANT: Do not add more than one component for SAS servers TLS. The component for each TLS mode must be used by itself.
components:
- sas-bases/components/security/core/base/front-door-tls # component to build trust stores for all services and enable back-end TLS for CAS and SAS/CONNECT
Note: TLS for the ingress controller is required if you are using Full-stack TLS.
IMPORTANT: Do not add more than one TLS component. The component for each TLS mode must be used by itself. Include this customization in the components
block of the base kustomization.yaml file:
components:
- sas-bases/components/security/core/base/full-stack-tls # component to support TLS for back-end servers
An example configMap is provided to help you customize configuration settings. To create this configMap with non-default settings, see the comments in the provided example file, $deploy/sas-bases/examples/security/customer-provided-merge-sas-certframe-configmap.yaml
:
cd $deploy
cp sas-bases/examples/security/customer-provided-merge-sas-certframe-configmap.yaml site-config/security/
vi site-config/security/customer-provided-merge-sas-certframe-configmap.yaml
When you have completed your updates, add the path to the file to the generators
block of your $deploy/kustomization.yaml
file. Here is an example:
generators:
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # merges customer-provided configuration settings into the sas-certframe-user-config configmap
Follow these steps to add your proprietary CA certificates to the SAS Viya platform deployment. The certificate files must be in PEM format, and the path to the files must be relative to the directory that contains the kustomization.yaml file. You might have to maintain several files containing CA certificates and update them over time. SAS recommends creating a separate directory for these files.
Place your CA certificate files in the site-config/security/cacerts
directory. Ensure that the user ID that runs the kustomize command has Read access to
the files.
Copy the file $deploy/sas-bases/examples/security/customer-provided-ca-certificates.yaml
into your $deploy/site-config/security
directory.
Edit the site-config/security/customer-provided-ca-certificates.yaml
file and specify the required information.
Instructions for editing this file are provided as comments in the file.
Here is an example:
export deploy=~/deploy
cd $deploy
mkdir -p site-config/security/cacerts
#
# the following line assumes that your CA Certificates are in a file named /tmp/my_ca_certificates.pem
#
cp /tmp/my_ca_certificates.pem site-config/security/cacerts
cp sas-bases/examples/security/customer-provided-ca-certificates.yaml site-config/security
vi site-config/security/customer-provided-ca-certificates.yaml
When you have completed your modifications, add the path to this file to the generators
block of your $deploy/kustomization.yaml
file. Here is an example:
generators:
- site-config/security/customer-provided-ca-certificates.yaml # generates a configmap that contains CA Certificates
In order to add CA certificates to pod trust bundles, add the following component to the components
block of your base kustomization.yaml file:
IMPORTANT: Do not add this component if you have configured Front-door TLS or Full-stack TLS mode.
components:
- sas-bases/components/security/core/base/truststores-only # component to build trust stores when no TLS is desired
# Full-stack TLS with cert-manager certificate generator and cert-Manager generated ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io
components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
- sas-bases/overlays/cert-manager-provided-ingress-certificate/ingress-annotation-transformer.yaml # causes cert-manager to generate the ingress certificate and key and store it in a secret
generators:
- site-config/security/customer-provided-ca-certificates.yaml # This generator is optional. Include it only if you need to add additional CA Certificates
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place
# Full-stack TLS with cert-manager certificate generator and customer-provided ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io
components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place
# Front-door TLS with cert-manager certificate generator and cert-Manager generated ingress certificates
namespace: frontdoortls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io
components:
- sas-bases/components/security/core/base/front-door-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
- sas-bases/overlays/cert-manager-provided-ingress-certificate/ingress-annotation-transformer.yaml # causes cert-manager to generate the ingress certificate and key and store it in a secret
generators:
- site-config/security/customer-provided-ca-certificates.yaml # This generator is optional. Include it only if you need to add additional CA Certificates
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place
# Front-door TLS with cert-manager certificate generator and customer-provided ingress certificates
namespace: frontdoortls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/networking.k8s.io
components:
- sas-bases/components/security/core/base/front-door-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place
# Full-stack TLS with openssl certificate generator and openssl generated ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/network/networking.k8s.io
- site-config/security/openssl-generated-ingress-certificate.yaml
components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
generators:
- site-config/security/customer-provided-ca-certificates.yaml
# Full-stack TLS with openssl certificate generator and customer-provided ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/network/networking.k8s.io
components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
# Front-door TLS with openssl certificate generator and customer-provided ingress certificates
namespace: frontdoortls
resources:
- sas-bases/base
- sas-bases/overlays/network/networking.k8s.io
components:
- sas-bases/components/security/core/base/front-door-tls
- sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/front-door-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
# Full-stack TLS with cert-manager certificate generator and customer-provided ingress certificates
namespace: fullstacktls
resources:
- sas-bases/base
- sas-bases/overlays/cert-manager-issuer
- sas-bases/overlays/network/route.openshift.io
components:
- sas-bases/components/security/core/base/full-stack-tls
- sas-bases/components/security/network/route.openshift.io/route/full-stack-tls
transformers:
- sas-bases/overlays/required/transformers.yaml
generators:
- site-config/security/customer-provided-ingress-certificate.yaml
- site-config/security/customer-provided-ca-certificates.yaml
- site-config/security/customer-provided-merge-sas-certframe-configmap.yaml # make sure edits to the site-config/security/customer-provided-merge-sas-certframe-configmap.yaml file are in place
This README describes the steps necessary to configure the SAS Viya platform for single sign-on using Kerberos.
Before you start the deployment, obtain the Kerberos configuration file and keytab for the HTTP service account. Make sure you have tested the keytab before proceeding with the installation.
Copy the files in the $deploy/sas-bases/examples/kerberos/http
directory to the $deploy/site-config/kerberos/http
directory. Create the target directory, if it does not already exist.
Copy your Kerberos keytab and configuration files into the $deploy/site-config/kerberos/http
directory, naming them keytab
and krb5.conf
respectively.
Modify the parameters in $deploy/site-config/kerberos/http/configmaps.yaml:

- The service principal name is in the form HTTP/<hostname> and may be the same as the principal name in the keytab.

Make the following changes to the base kustomization.yaml file in the $deploy directory.
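As a minimal sketch only (the exact entries are described in the comments of the copied files; verify them there), the copied directory is typically referenced from the base kustomization.yaml, for example:

```yaml
resources:
  ...
  - site-config/kerberos/http    # assumption: the directory copied in the first step is added to the resources block
```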
Use the deployment commands described in SAS Viya Platform Deployment Guide to apply the new settings.
This README describes the steps necessary to configure your SAS Viya platform SAS Servers to use Kerberos.
Before you start the deployment, obtain the Kerberos configuration file (krb5.conf) and keytab file for the HTTP service account.
Edit the krb5.conf file and add renewable = true
under the [libdefaults]
section. This allows renewable Kerberos credentials to be used in the SAS Viya platform. SAS servers
will renew Kerberos credentials prior to expiration up to the renewable lifetime.
Here is an example:
[libdefaults]
...
renewable = true
Obtain a keytab file for the HTTP service account.
If you are using SAS/CONNECT from external clients, such as SAS 9.X, obtain a keytab for the SAS service account. The HTTP service account and SAS service account can be placed in the same keytab file for convenience. If you are using a single keytab file, the SAS service account should be placed before the HTTP service account in the keytab file.
Make sure you have tested the keytab files before proceeding with the installation.
If you want to connect to the CAS Server from external clients through the binary or REST ports, you must also configure the CAS Server to accept direct Kerberos connections.
If SAS/ACCESS Interface to Hadoop will be used with a Hadoop deployment that is Kerberos-protected, either nss_wrapper or System Security Services Daemon (SSSD) must be configured. Unlike SSSD, nss_wrapper does not require running in a privilege elevated container. If you are using OpenShift Container Platform 4.2 or later, neither nss_wrapper nor SSSD are required. If SAS/CONNECT is configured to spawn the SAS/CONNECT Server in the SAS/CONNECT Spawner pod, SSSD must be configured regardless of the container orchestration platform being used.
To configure nss_wrapper, make the following changes to the base kustomization.yaml file in the $deploy
directory. Add the following to the transformers block. These additions must come before
sas-bases/overlays/required/transformers.yaml
.
transformers:
...
- sas-bases/overlays/kerberos/nss_wrapper/add-nss-wrapper-transformer.yaml
To configure SSSD for SAS Compute Server and SAS Batch
Server, follow the instructions in $deploy/sas-bases/examples/kerberos/sssd/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_system_security_services_daemon.htm
(for HTML format). For CAS, follow the instructions in $deploy/sas-bases/examples/cas/configure/README.md
(for Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_cas.htm
(for HTML format).
For SAS/CONNECT, follow the instructions in $deploy/sas-bases/examples/sas-connect-spawner/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_sasconnect_spawner_in_sas_viya.htm
(for HTML format).
The aim of configuring for Kerberos is to allow Kerberos authentication to
flow into, between, and out from the SAS Viya platform environment. Allowing SAS servers
to connect to other SAS Viya platform processes and third-party data sources on behalf of
the user is referred to as delegation
. SAS supports Kerberos Unconstrained
Delegation, Kerberos Constrained Delegation, and Kerberos Resource-based Constrained
Delegation. Delegation should be configured prior to completing the installation steps below.
The HTTP service account must be trusted for delegation. If you are using SAS/CONNECT, the SAS service account must also be trusted for delegation.
As an alternative method to Delegation, external credentials can be stored in an Authentication Domain. SAS uses the stored credentials to generate Kerberos credentials on the user’s behalf. The default Authentication Domain is KerberosAuth. The Authentication Domain, whether default or custom, will need to be created in SAS Environment Manager. SAS recommends creating a Custom Group with shared external credentials and assigning the custom group to the created Authentication Domain.
For more information about creating Authentication Domains, see External Credentials: Concepts.
Note: Stored user credentials take precedence over stored group credentials in the same Authentication Domain. For more information, see How to configure Kerberos stored credentials.
Copy the files in the $deploy/sas-bases/examples/kerberos/sas-servers
directory to the $deploy/site-config/kerberos/sas-servers
directory. Create
the target directory, if it does not already exist.
Copy your Kerberos keytab file and configuration files into the
$deploy/site-config/kerberos/sas-servers
directory, naming them keytab
and
krb5.conf
respectively.
Note: A Kubernetes secret is generated during deployment using the content of the keytab binary file. However, the SAS Viya Platform Deployment Operator and the viya4-deployment project do not support creating secrets from binary files. For these types of deployments, the Kerberos keytab content must be loaded from an existing Kubernetes secret. If you are using either of these deployment types, see Manually Configure a Kubernetes Secret for the Kerberos Keytab for the steps.
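If you do need to pre-create such a secret, one hedged illustration is a plain Opaque Secret that carries the base64-encoded keytab. The secret name below is hypothetical and must match whatever name your deployment is configured to reference; see the documentation linked above for the supported procedure.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sas-servers-kerberos-keytab    # hypothetical name
  namespace: {{ NAME-OF-NAMESPACE }}
type: Opaque
data:
  keytab: <base64-encoded contents of the keytab file>
```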
Replace {{ SPN }} in
$deploy/site-config/kerberos/sas-servers/configmaps.yaml
under the
sas-servers-kerberos-sidecar-config
stanza with the name of the
principal as it appears in the keytab file.
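Purely as an illustration (the stanza name comes from the step above, but the literal key and value shown here are placeholders; keep the keys already present in your copied configmaps.yaml):

```yaml
configMapGenerator:
- name: sas-servers-kerberos-sidecar-config
  literals:
    - SAS_SERVICE_PRINCIPAL=HTTP/mywebserver.company.com   # placeholder key and value replacing {{ SPN }}
```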
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Add site-config/kerberos/sas-servers
to the resources block.
resources:
...
- site-config/kerberos/sas-servers
Add the following to the transformers block. These additions must come
before sas-bases/overlays/required/transformers.yaml
.
If TLS is enabled:
transformers:
...
- sas-bases/overlays/kerberos/sas-servers/sas-kerberos-job-tls.yaml
- sas-bases/overlays/kerberos/sas-servers/sas-kerberos-deployment-tls.yaml
- sas-bases/overlays/kerberos/sas-servers/cas-kerberos-tls-transformer.yaml
If you are deploying the SAS Viya platform with TLS on Red Hat OpenShift
and using SAS/CONNECT, replace `sas-kerberos-deployment-tls.yaml` with
`sas-kerberos-deployment-tls-openshift.yaml`.
If TLS is not enabled:
transformers:
...
- sas-bases/overlays/kerberos/sas-servers/sas-kerberos-job-no-tls.yaml
- sas-bases/overlays/kerberos/sas-servers/sas-kerberos-deployment-no-tls.yaml
- sas-bases/overlays/kerberos/sas-servers/cas-kerberos-no-tls-transformer.yaml
If you are deploying the SAS Viya platform without TLS on Red Hat OpenShift
and using SAS/CONNECT, replace sas-kerberos-deployment-no-tls.yaml
with
sas-kerberos-deployment-no-tls-openshift.yaml
.
Follow the instructions in
$deploy/sas-bases/examples/kerberos/http/README.md
(for Markdown format) or
$deploy/sas-bases/docs/configuring_kerberos_single_sign-on_for_sas_viya.htm
(for
HTML format) to configure Kerberos single sign-on. Specifically, in
$deploy/site-config/kerberos/http/configmaps.yaml
change
SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT
to true
.
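In that file the setting is typically a configMapGenerator literal. Here is an illustrative excerpt (the stanza name is an assumption; use the names already present in your configmaps.yaml):

```yaml
configMapGenerator:
- name: sas-logon-kerberos-config    # hypothetical stanza name
  literals:
    - SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT=true
```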
When all the SAS Servers are configured in the base kustomization.yaml file, use the deployment commands described in SAS Viya Platform Deployment Guide to apply the new settings.
After the deployment is started, enable Kerberos
in SAS Environment
Manager.
1. Sign into SAS Environment Manager as sasboot or as an Administrator. Go
to the Configuration page.
2. On the Configuration page, select Definitions
from the list. Then
select sas.compute
.
3. Click the pencil (Edit) icon.
4. Change kerberos.enabled
to on
.
5. Click Save
.
If you want to connect to the CAS Server from external clients through the binary port, perform the following steps in addition to the section above.
Copy the files in the $deploy/sas-bases/examples/kerberos/cas-server
directory to the $deploy/site-config/kerberos/cas-server
directory. Create
the target directory, if it does not already exist.
Copy your Kerberos keytab and configuration files into the
$deploy/site-config/kerberos/cas-server
directory, naming them keytab
and
krb5.conf
respectively.
Replace {{ SPN }} in
$deploy/site-config/kerberos/cas-server/configmaps.yaml
under the
cas-server-kerberos-config
stanza with the name of the service
principal as it appears in the keytab file without the @DOMAIN.COM.
Replace {{ HTTP_SPN }} with the HTTP SPN used for the krb5 proxy sidecar container without the @DOMAIN.COM. SAS recommends that you use the same keytab file and SPN for both the CAS Server and the krb5 proxy sidecar for consistency and to allow REST port direct Kerberos connections.
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Add site-config/kerberos/cas-server
to the resources block.
resources:
...
- site-config/kerberos/cas-server
Add the following to the transformers block. These additions must come
before sas-bases/overlays/required/transformers.yaml
.
transformers:
...
- sas-bases/overlays/kerberos/sas-servers/cas-kerberos-direct.yaml
Edit your $deploy/site-config/kerberos/cas-server/krb5.conf
file. Add the
following to the [libdefaults]
section:
[libdefaults]
...
dns_canonicalize_hostname=false
If you are using SAS/CONNECT from external clients, such as SAS 9.4, perform the following steps in addition to the section above.
Add a reference to
sas-bases/overlays/kerberos/sas-servers/sas-connect-spawner-kerberos-transformer
.yaml
in the transformers block of the kustomization.yaml file in the $deploy
directory.
The reference must come before
sas-bases/overlays/required/transformers.yaml
. Here is an example:
transformers:
...
- sas-bases/overlays/kerberos/sas-servers/sas-connect-spawner-kerberos-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
Uncomment the sas-connect-spawner-kerberos-secrets
stanza in
$deploy/site-config/kerberos/sas-servers/secrets.yaml
. If you are using
separate keytab files for the HTTP service account and SAS service account,
change the keytab
name to the actual keytab file name in each stanza. The
SAS SPN is required to authenticate the user with SAS/CONNECT from external
clients. The HTTP SPN is required to authenticate the user with SAS Login Manager.
Uncomment the sas-connect-spawner-kerberos-config
stanza in
$deploy/site-config/kerberos/sas-servers/configmaps.yaml
.
Replace {{ SPN }} with the HTTP SPN from the keytab file without the @DOMAIN.COM.
If you are using separate keytab files for the HTTP service account and SAS service account, change the keytab name to the actual keytab file name in each stanza. The keytab file name must match the name used in secrets.yaml for step 2.
Edit your $deploy/site-config/kerberos/sas-servers/krb5.conf file. Add the following to the [libdefaults] section:
[libdefaults]
...
dns_canonicalize_hostname=false
If you are using MIT Kerberos as your KDC, then enabling delegation involves setting the flag ok_as_delegate on the principal. For example, the following command adds this flag to the existing HTTP principal:
kadmin -q "modprinc +ok_as_delegate HTTP/mywebserver.company.com"
If you are using Microsoft Active Directory for your KDC, you must set the delegation option after registering the SPN. The Active Directory Users and Computers GUI tool does not expose the delegation options until at least one SPN is registered against the service account. The HTTP service account must be able to delegate to any applicable data sources. The service account must have Read all user information permissions to the appropriate Domain or Organizational Units in Active Directory.
For the HTTP service account, as a Windows domain administrator, right-click the name and select Properties.
In the Properties dialog, select the Delegation tab.
On the Delegation tab, you must select Trust this user for delegation to any services (Kerberos only).
In the Properties dialog, select OK.
If you are using SAS/CONNECT, repeat the steps in this section for the SAS service account.
In $deploy/site-config/kerberos/http/configmaps.yaml, set SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT to false.
In the sas-servers-kerberos-sidecar-config stanza of $deploy/site-config/kerberos/sas-servers/configmaps.yaml, add the following under literals:
- SAS_CONSTRAINED_DELEG_ENABLED="true"
If you are using SAS/CONNECT, in the sas-connect-spawner-kerberos-config stanza, add the following under literals:
- SAS_SERVICE_PRINCIPAL={{ SAS service account SPN }}
- SAS_CONSTRAINED_DELEG_ENABLED="true"
If you are using MIT Kerberos as your KDC, then enabling delegation involves setting the flag ok_to_auth_as_delegate on the principal. For example, the following command adds the flag to the existing HTTP principal:
kadmin -q "modprinc +ok_to_auth_as_delegate HTTP/mywebserver.company.com"
If you are using Microsoft Active Directory for your KDC, you must set the delegation option after registering the SPN. The Active Directory Users and Computers GUI tool does not expose the delegation options until at least one SPN is registered against the service account. The HTTP service account must be able to delegate to any applicable data sources. The service account must have Read all user information permissions to the appropriate Domain or Organizational Units in Active Directory.
For the HTTP service account, as a Windows domain administrator, right-click the account name and select Properties.
In the Properties dialog, select the Delegation tab.
On the Delegation tab, select Trust this user for delegation to the specified services only and Use any authentication protocol.
Select Add...
In the Add Services panel, select Users and Computers...
In the Select Users or Computers dialog box, complete the following for the Kerberos-protected services that the SAS Servers access:
1. In the `Enter the object names to select` text box, enter the account for the Kerberos-protected services the SAS Server accesses, such as Microsoft SQL Server. Then, select `Check Names`.
2. If the name is found, select `OK`.
3. Repeat the previous two steps to select additional SPNs for the SAS Servers to access.
4. When you are done, select `OK`.
In the Add Services dialog box, select OK.
In the Properties dialog, select OK.
If you are using SAS/CONNECT, repeat the steps in this section for the SAS service account.
In $deploy/site-config/kerberos/http/configmaps.yaml, set SAS_LOGON_KERBEROS_HOLDONTOGSSCONTEXT to false.
In the sas-servers-kerberos-sidecar-config stanza of $deploy/site-config/kerberos/sas-servers/configmaps.yaml, add the following under literals:
- SAS_CONSTRAINED_DELEG_ENABLED="true"
If you are using SAS/CONNECT, in the sas-connect-spawner-kerberos-config stanza, add the following under literals:
- SAS_SERVICE_PRINCIPAL={{ SAS service account SPN }}
- SAS_CONSTRAINED_DELEG_ENABLED="true"
Kerberos Resource-based Constrained Delegation can only be configured using Microsoft PowerShell. Resource-based constrained delegation gives control of delegation to the administrator of the back-end service; therefore, the delegation permissions are applied to the back-end service being accessed.
Note: The examples below demonstrate adding a single identity that is trusted for delegation. To add multiple identities, use the format: ($identity1),($identity2).
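For example, to trust both the HTTP and SAS service accounts for delegation to the same back-end service Computer Object, the multiple-identity form might look like the following sketch (account and host names are placeholders):
$sashttpidentity = Get-ADUser -Identity <HTTP service account>
$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADComputer <back-end service hostname> -PrincipalsAllowedToDelegateToAccount ($sashttpidentity),($sasidentity)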
If the back-end service being accessed is running on Windows under the Local System account, then the front-end service principal is applied to the back-end service Computer Object.
$sashttpidentity = Get-ADUser -Identity <HTTP service account>
Set-ADComputer <back-end service hostname> -PrincipalsAllowedToDelegateToAccount $sashttpidentity
If the back-end service being accessed is running on UNIX/Linux or on Windows under a Domain Account, then the front-end service principal is applied to the Domain Account of the back-end service where the service principal is registered.
$sashttpidentity = Get-ADUser -Identity <HTTP service account>
Set-ADUser <back-end service Domain Account> -PrincipalsAllowedToDelegateToAccount $sashttpidentity
If you are using SAS/CONNECT, the HTTP service account must trust the SAS service account.
$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADUser <HTTP service account> -PrincipalsAllowedToDelegateToAccount $sasidentity
If you are using SAS/CONNECT and the back-end service is running on Windows under the Local System account, then the SAS service principal is applied to the back-end service Computer Object.
$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADComputer <back-end service hostname> -PrincipalsAllowedToDelegateToAccount $sasidentity
If you are using SAS/CONNECT and the back-end service is running on UNIX/Linux or on Windows under a Domain Account, then the SAS service principal is applied to the Domain Account of the back-end service where the principal is registered.
$sasidentity = Get-ADUser -Identity <SAS service account>
Set-ADUser <back-end service Domain Account> -PrincipalsAllowedToDelegateToAccount $sasidentity
Configure the usage of stored credentials:
In the sas-servers-kerberos-sidecar-config block of $deploy/site-config/kerberos/sas-servers/configmaps.yaml, set the desired Authentication Domain to query for stored credentials.
literals:
...
- SAS_KRB5_PROXY_CREDAUTHDOMAIN=KerberosAuth # Name of authentication domain to query for stored credentials
Uncomment these lines in the sas-servers-kerberos-container-config block of $deploy/site-config/kerberos/sas-servers/configmaps.yaml:
literals:
...
- SAS_KRB5_PROXY_CHECKCREDSERVICE="true" # Set to true if SAS should prefer stored credentials over Constrained Delegation
- SAS_KRB5_PROXY_LOOKUPINGROUP="true" # Set to true if SAS should look for a group credential if no user credential is stored
System Security Services Daemon (SSSD) provides access to remote identity providers, such as LDAP and Microsoft Active Directory. SSSD can be used when using SAS/ACCESS Interface to Hadoop with a Kerberos-protected Hadoop deployment where identity lookup is required.
Note: Alternatively, nss_wrapper can be used with SAS/ACCESS Interface to Hadoop. To implement nss_wrapper, follow the instructions in the “nss_wrapper” section of the README file located at $deploy/sas-bases/examples/kerberos/sas-servers/README.md (for Markdown format) or at $deploy/sas-bases/docs/configuring_sas_servers_for_kerberos_in_sas_viya_platform.htm (for HTML format).
Add sas-bases/overlays/kerberos/sssd/add-sssd-container-transformer.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
**Important:** This line must come before any network transformers (transformers that start with “- sas-bases/overlays/network/”) and the required transformer (“- sas-bases/overlays/required/transformers.yaml”). Note that your configuration may not have network transformers if security is not configured. This line must also be placed after any Kerberos transformers (transformers starting with “- sas-bases/overlays/kerberos/sas-servers”).
```yaml
transformers:
...
# Place after any sas-bases/overlays/kerberos lines
- sas-bases/overlays/kerberos/sssd/add-sssd-container-transformer.yaml
# Place before any sas-bases/overlays/network lines and before
# sas-bases/overlays/required/transformers.yaml
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
Use these steps to provide a custom SSSD configuration to handle user authorization in your environment.
Copy the files in the $deploy/sas-bases/examples/kerberos/sssd directory to the $deploy/site-config/kerberos/sssd directory. Create the target directory, if it does not already exist.
Copy your custom sssd.conf configuration file to $deploy/site-config/kerberos/sssd/sssd.conf.
Make the following changes to the base kustomization.yaml file in the $deploy directory.
- Add the following to the generators block.
```yaml
generators:
...
- site-config/kerberos/sssd/secrets.yaml
```
- Add a reference to `sas-bases/overlays/kerberos/sssd/add-sssd-configmap-transformer.yaml` to the transformers block. The new line must come after sas-bases/overlays/kerberos/sssd/add-sssd-container-transformer.yaml.
```yaml
transformers:
...
- sas-bases/overlays/kerberos/sssd/add-sssd-configmap-transformer.yaml
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
You can use the examples found within $deploy/sas-bases/examples/security/web/rate-limiting to enforce rate-limiting at the ingress-nginx controller for SAS Viya platform endpoints. The properties are applied to all Ingress resources deployed with the SAS Viya platform. If you are using any external load balancers or API gateways, enforcing rate-limiting with ingress-nginx is not optimal. Instead, enforce rate limiting through external technology. To read more about the available options in ingress-nginx, see https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#rate-limiting.
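For reference, the transformers in this example ultimately control standard ingress-nginx rate-limiting annotations on each Ingress resource. A rate limit of 10 requests per second with a burst multiplier of 5 corresponds to annotations like the following (illustrative values only; the exact set of annotations managed by the SAS transformers may vary):
metadata:
  annotations:
    # limit each client IP to 10 requests per second
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # allow bursts up to 5x the configured rate
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"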
If you are deploying on Red Hat OpenShift, you must enforce rate-limiting at the OpenShift router instead. The properties are applied to all Route resources deployed with the SAS Viya platform. To read more about the available options in OpenShift, see https://docs.openshift.com/container-platform/4.15/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration.
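Similarly, the OpenShift transformers listed later in this README correspond to the HAProxy router's route-level rate-limit annotations. A sketch with illustrative values (the exact annotations managed by the SAS transformers may vary):
metadata:
  annotations:
    # enable connection rate limiting for the route
    haproxy.router.openshift.io/rate-limit-connections: "true"
    # maximum HTTP requests per IP over a sliding window
    haproxy.router.openshift.io/rate-limit-connections.rate-http: "100"
    # maximum TCP connections per IP over a sliding window
    haproxy.router.openshift.io/rate-limit-connections.rate-tcp: "50"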
Use these steps to apply the desired properties to your SAS Viya platform deployment.
Copy the $deploy/sas-bases/examples/security/web/rate-limiting/ingress-nginx-configmap-inputs.yaml file to the location of your working container security overlay, such as site-config/security/web/rate-limiting.
Define the properties in the ingress-nginx-configmap-inputs.yaml file which match the desired configuration. To define a property, uncomment it and update its token value as described in the example file.
Add the relative path of ingress-nginx-configmap-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
...
resources:
...
- site-config/security/web/rate-limiting/ingress-nginx-configmap-inputs.yaml
...
Add the relative path(s) of the corresponding transformer file(s) to the transformers block of the base kustomization.yaml file. There should be one transformer file added per property defined within the ConfigMap. Here is an example:
...
transformers:
...
- sas-bases/overlays/security/web/rate-limiting/update-ingress-nginx-limit-rps.yaml
- sas-bases/overlays/security/web/rate-limiting/update-ingress-nginx-limit-burst-multiplier.yaml
...
When deploying to Red Hat OpenShift, use these steps to apply the desired properties to your SAS Viya platform deployment. Do not use the steps for the ingress-nginx controller.
Copy the $deploy/sas-bases/examples/security/web/rate-limiting/route-configmap-inputs.yaml file to the location of your working container security overlay, such as site-config/security/web/rate-limiting.
Define the properties in the route-configmap-inputs.yaml file which match the desired configuration. To define a property, uncomment it and update its token value as described in the example file.
Add the relative path of route-configmap-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
...
resources:
...
- site-config/security/web/rate-limiting/route-configmap-inputs.yaml
...
Add the relative path(s) of the corresponding transformer file(s) to the transformers block of the base kustomization.yaml file. There should be one transformer file added per property defined within the ConfigMap. Here is an example:
...
transformers:
...
- sas-bases/overlays/security/web/rate-limiting/update-route-rate-limit-connections.yaml
- sas-bases/overlays/security/web/rate-limiting/update-route-rate-limit-connections-rate-http.yaml
- sas-bases/overlays/security/web/rate-limiting/update-route-rate-limit-connections-rate-tcp.yaml
...
This readme describes how to customize your SAS Viya platform deployment for tasks related to the SAS Programming Environment.
SAS provides the ability for modifications to be made to the scripts that are used for launching processes. The following processes allow for modifications to be set in SAS Environment Manager.
Each server type has multiple configuration instances for modification of configuration files, autoexec code, and startup scripts that are used to launch the servers. Modifications to the startup script configurations for each server are disabled by default.
The system administrator can give the SAS Administrator the ability to have updates made to these configuration scripts processed by the server applications.
Since this processing takes place at the initialization of the server application, changes to these configMaps take effect upon the next launch of the pod.
Included in this folder is an overlay called enable-admin-script-access.yaml. This overlay provides a patchTransformer that gives the SAS Administrator the ability to have script modifications made in SAS Environment Manager processed by the server applications.
To enable this access:
Add sas-bases/overlays/sas-programming-environment/enable-admin-script-access.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
Here is an example:
```
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/enable-admin-script-access.yaml
...
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
Included in this folder is an overlay called disable-admin-script-access.yaml. This overlay provides a patchTransformer that denies the SAS Administrator the ability to have script modifications made in SAS Environment Manager processed by the server applications.
To disable this access:
Add sas-bases/overlays/sas-programming-environment/disable-admin-script-access.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
Here is an example:
```
...
transformers:
...
- sas-bases/overlays/sas-programming-environment/disable-admin-script-access.yaml
...
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
This README describes customizations that can be made by the Kubernetes administrator to modify container security configurations while deploying the SAS Viya platform. An administrator might want or need to change the default container security settings in a SAS Viya platform deployment such as removing, adding, or updating settings in the podSpecs. There are many reasons why an administrator might want to modify these settings.
The steps in this README for fsGroup and seccomp can be used for any platform. However, if you are deploying on Red Hat OpenShift, these settings must be modified in order to take advantage of OpenShift’s built-in security context constraints (SCCs). The title of each section indicates whether it is required for OpenShift.
SCCs are the framework, provided by OpenShift, that controls what privileges can be requested by pods in the cluster. OpenShift provides users with several built-in SCCs. Admins can attach pods to any of these SCCs or they can create dedicated SCCs. Dedicated SCCs are created specifically to address the specs and capabilities required by a certain pod/product. For more information on OpenShift SCCs, see Managing SCCs in OpenShift.
You can use the customizations in this file to accomplish the following required or optional tasks:
(OpenShift only) Adjust your podSpec to use one of the built-in SCCs and avoid creating a dedicated one.
The “restricted” SCC, for example, is the primary built-in SCC that should control all pods. The restricted SCC is classified as the standard, and most pods should be able to run with it and validate against it.
Remove the seccomp profile settings from the podSpec or update its value.
Removal is required for OpenShift and optional for other environments. The restricted SCC does not allow this setting to be included in the podSpec.
Remove the fsGroup setting or update its value.
This step is required for OpenShift and optional for other environments. The restricted SCC prevents you from setting fsGroup to a value outside of the allocated ID range. SAS has set it to a default value that enables the shared service account to access the file system. This shared account is invalid in the OpenShift restricted SCC.
In OpenShift, every namespace/project has a dynamically allocated range of IDs that are used to prevent collisions between separate projects. Replace the fsGroup value with an ID from the allocated range.
In other environments, removing the setting is an option when you are using a storage class provider that grants group-write access by default.
Otherwise, the fsGroup value should be updated rather than removed.
Note: Pods that run with dedicated SCCs for Crunchy Data (the internal PostgreSQL server) or the CAS server do not need the customizations referenced in this README. They have dedicated SCCs that will contain all conditions for the pods without altering the podSpec. You can use some of these customizations for OpenSearch. For more information, see Security Requirements.
The fsGroup field defines a special supplemental group that assigns a GID for all containers in the pod. Volumes that support ownership management are modified to be owned and writable by the GID specified in fsGroup. For more information about using fsGroup, see Configure a Security Context for a Pod or Container.
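As a point of reference, fsGroup is set in the pod-level securityContext. A minimal generic Kubernetes sketch of where the value lands in a podSpec (not a SAS-specific manifest; the pod and image names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    # volumes that support ownership management become group-owned by this GID
    fsGroup: 1000700000
  containers:
  - name: app
    image: registry.example.com/app:latest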
Notes: Crunchy Data currently does not support updating this value. Do not attempt to change this setting for an internal PostgreSQL server. Instead, custom SCCs grant the Crunchy Data pods the ability to run with their specific group ID (GID).
Updating this value for CAS is optional because CAS default settings work in all environments. If you want to update values for CAS, you must uncomment the corresponding PatchTransformer in the update-fsgroup.yaml file. If you are deploying on OpenShift, the corresponding SCC also must be updated to specify the new fsGroup values or be set to “RunAsAny”.
Use these steps to update the fsGroup field for pods in your SAS Viya platform deployment.
Copy the $deploy/sas-bases/examples/security/container-security/configmap-inputs.yaml file to the location of your working container security overlay, such as site-config/security/container-security/.
Update the {{ FSGROUP_VALUE }} token in the configmap-inputs.yaml file to match the desired numerical group value.
Note: For OpenShift, you can get the allocated GID value with the kubectl describe namespace <name-of-namespace> command. The value to use is the minimum value of the openshift.io/sa.scc.supplemental-groups annotation. For example, if the output is the following, you should use 1000700000.
Name: sas-1
Labels: <none>
Annotations: ...
openshift.io/sa.scc.supplemental-groups: 1000700000/10000
...
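If you prefer a one-liner, the annotation can also be read directly (an optional convenience; the jsonpath escaping assumes a standard kubectl client):
kubectl get namespace <name-of-namespace> -o jsonpath="{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}"
The minimum value is the number before the slash in the returned range.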
Add the relative path of configmap-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
...
resources:
...
- site-config/security/container-security/configmap-inputs.yaml
...
Add the relative path of the update-fsgroup.yaml file to the transformers block of the base kustomization.yaml file. Here is an example:
...
transformers:
...
- sas-bases/overlays/security/container-security/update-fsgroup.yaml
...
(Optional) For CAS, add the relative path of the update-cas-fsgroup.yaml file to the transformers block of the base kustomization.yaml file. Here is an example:
...
transformers:
...
- sas-bases/overlays/security/container-security/update-fsgroup.yaml
- sas-bases/overlays/security/container-security/update-cas-fsgroup.yaml
...
(For OpenShift) If you performed the optional configuration for CAS from Step 5, update the dedicated SCC for CAS to allow the desired fsGroup value. This value should match the value from Step 2 above, or it should be set to RunAsAny.
Note: Crunchy Data currently does not support removing this value. Pods for an internal PostgreSQL server will remain unaffected.
To remove the fsGroup field from your deployment specification, add the relative path of the remove-fsgroup-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
...
transformers:
...
- sas-bases/overlays/security/container-security/remove-fsgroup-transformer.yaml
...
Secure computing mode (seccomp) is a security facility that restricts the actions that are available within a container. You can use this feature to restrict your application’s access. For more information about seccomp, see Seccomp security profiles for Docker.
Considerations:
Use these steps to update the seccomp profile enabled for pods in your deployment specification.
Copy the $deploy/sas-bases/examples/security/container-security/update-seccomp.yaml file to the location of your working container security overlay. Here is an example: site-config/security/container-security/update-seccomp.yaml
Update the “{{ SECCOMP_PROFILE }}” tokens in the update-seccomp.yaml file to match the desired seccomp profile value.
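For context, the seccomp profile is ultimately a pod-level securityContext setting. A generic Kubernetes illustration of what the setting controls (not the exact token format used by the SAS transformer; the runtime default profile is a commonly used value):
spec:
  securityContext:
    seccompProfile:
      # restricts syscalls to the container runtime's default allowlist
      type: RuntimeDefault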
Add the relative path of update-seccomp.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
...
transformers:
...
- site-config/security/container-security/update-seccomp.yaml
...
To remove the seccomp profile settings from your deployment specification, add the relative path of the remove-seccomp-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
IMPORTANT: You must make this modification in an OpenShift environment.
Here is an example:
...
transformers:
...
- sas-bases/overlays/security/container-security/remove-seccomp-transformer.yaml
...
The SAS Audit service can be configured to periodically archive audit records to file. If this feature is enabled, then a PersistentVolumeClaim must be created as the output location for these archive files.
Note: Because this task requires the SAS Environment Manager, it can only be performed after a successful deployment.
Archiving is disabled by default, so you must enable the feature to use it. As an administrator, open the Audit service configuration in SAS Environment Manager and change the following settings to the specified values.
| Setting Name | Value |
|---|---|
| sas.audit.archive.process.storageType | local |
Copy all of the files in $deploy/sas-bases/examples/sas-audit/archive to $deploy/site-config/sas-audit, where $deploy is the directory containing your SAS Viya platform installation files. Create the target directory, if it does not already exist.
Edit the resources.yaml file to replace the following parameters with the appropriate values.
| Parameter Name | Description | Example Value |
|---|---|---|
| STORAGE-CLASS | The storage class of the PersistentVolumeClaim. The storage class must support ReadWriteMany. | nfs-client |
| STORAGE-CAPACITY | The size of the PersistentVolumeClaim. | 1Gi |
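After the two parameters are substituted, the PersistentVolumeClaim defined by resources.yaml would look roughly like the following (an illustrative sketch; the claim name and any labels in the shipped example file may differ):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sas-audit-archive-data   # hypothetical name for illustration
spec:
  accessModes:
    - ReadWriteMany              # required access mode
  storageClassName: nfs-client   # STORAGE-CLASS
  resources:
    requests:
      storage: 1Gi               # STORAGE-CAPACITY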
After updating the example files, you should add references to them to the base kustomization.yaml file ($deploy/kustomization.yaml).
* Add a reference to the resources.yaml file to the resources block.
* Add a reference to the archive-transformer.yaml file to the transformers block.
For example, if you made the changes described above, then the base kustomization.yaml file should have entries similar to the following:
resources:
- site-config/sas-audit/resources.yaml
transformers:
- site-config/sas-audit/archive-transformer.yaml
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
Note: Audit service PersistentVolumeClaim data does not participate in the SAS Viya platform backup and restore procedure; therefore, it contains archived data that is never restored to the SAS Viya platform system. As a result, when audit archiving is performed, SAS recommends that the cluster administrator take a backup of the audit archive data and keep that data in a secure location. Steps for backup can be found at $deploy/sas-bases/examples/sas-audit/backup/README.md.
Archived data from the audit process is stored in a persistent volume (PV). Audit and activity data are stored separately.
Audit service PVC data does not participate in the SAS Viya platform backup and restore procedure. Therefore, it contains archived data that is never restored to the SAS Viya platform system. As a result, when audit archiving is performed, SAS recommends that the cluster administrator take a backup of the audit archive data and keep that data at a secure location.
The audit service should be running on both source and target environments and should have the PV attached.
To perform some elements of this task, you must have elevated Kubernetes permissions.
You should follow the steps described on the Hardware and Resource Requirements page, especially the section Persistent Storage Volumes, PersistentVolumeClaims, and Storage Classes.
Take frequent backups of audit archived data during off-peak hours. The frequency of backups should be determined by the frequency of archiving defined by your organization.
The PV that contains archived data is part of the same cluster as the environment. Therefore, SAS recommends that you routinely copy archived data to storage outside of the cluster, such as NFS, in case the PV or the entire cluster fails.
The time required to copy the archived audit data contents varies based on the size of the data, the disk I/O rate of the system, and the type of file system that you are using.
The following steps use a generic method (tar and kubectl exec) and an Audit service pod to copy data between environments. The steps in this generic method are not specific to any one cloud provider. You might follow a slightly different set of steps depending on what type of storage you are using for your data.
Log in to the cluster where you want to keep a backup of the archived data temporarily. You must have root level permissions to copy the archived data.
Determine the temporary location of the data that you wish to copy to and from.
Export or set the source machine kubeconfig file and then get the source audit pod name:
export KUBECONFIG=<source-machine-kubeconfig>
kubectl get pods -n <name-of-namespace> | grep sas-audit
Copy audit archived data from source machine to temporary location:
kubectl -n <name-of-namespace> exec <source-audit-pod-name> -- tar cpf - -C /archive . | tar xf - -C <temp-folder-path>
Here is an example:
kubectl -n sourceTenant exec sas-audit-58cccfb4f7-pd870 -- tar cpf - -C /archive . | tar xf - -C /opt/tmpDir
Note: temp-folder-path is the location that is being used to keep the data temporarily.
Export or set the target machine kubeconfig file on the same system and then get the target audit pod name:
export KUBECONFIG=<target-machine-kubeconfig>
kubectl get pods -n <name-of-namespace> | grep sas-audit
Migrate the audit archived data from the temporary location to the target machine PV:
tar cpf - -C <temp-folder-path> * | kubectl -n <name-of-namespace> exec -i <target-audit-pod-name> -- tar xf - -C /archive
Here is an example:
tar cpf - -C /opt/tmpDir * | kubectl -n targetTenant exec -i sas-audit-555c58c44f-ssjx7 -- tar xf - -C /archive
Note: The temp-folder-path is the location where archived data is kept temporarily.
The files in this directory are used to customize your SAS Viya 4 deployment to run migration. For information about migration and using these files, see SAS Viya Platform Administration: Migration.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
This directory contains overlays to customize your SAS Viya 4 deployment to run migration. For information about migration and using these files, see SAS Viya Platform Administration: Migration.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
This README describes how to revise and apply the settings for configuring migration jobs.
To change the migration job timeout value, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format, where {{ TIMEOUT-IN-MINUTES }} is an integer.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- JOB_TIME_OUT={{ TIMEOUT-IN-MINUTES }}
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
To skip the migration of the configuration definition properties, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_DEFINITION_FILTER={{ RESTORE-DEFINITION-FILTER-CSV }}
The {{ RESTORE-DEFINITION-FILTER-CSV }} is a json string containing the comma-separated list of ‘key:value’ pairs where key is in the form ‘serviceName.definitionName.version’ and value itself can be a comma-separated list of properties to be filtered. If the entire definition is to be excluded, then set the value to ‘*’. If the service name is not present in the definition then only provide ‘definitionName’. Each key and value must be enclosed in double quotes (“). Here is an example:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_DEFINITION_FILTER='{"sas.dataserver.common.1":"*","deploymentBackup.sas.deploymentbackup.1":"*","deploymentBackup.sas.deploymentbackup.2":"*","deploymentBackup.sas.deploymentbackup.3":"*","sas.security.1":"*","vault.sas.vault.1":"*","vault.sas.vault.2":"*","SASDataExplorer.sas.dataexplorer.1":"*","SASLogon.sas.logon.sas9.1":"*","sas.cache.1":"*","sas.cache.2":"*","sas.cache.3":"*","sas.cache.4":"*","identities-SASLogon.sas.identities.providers.ldap.user.1":"accountId,address.country","SASLogon.sas.logon.saml.providers.external_saml.1":"assertionConsumerIndex,idpMetadata"}'
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
To skip the migration of the configuration properties, edit the $deploy/kustomization.yaml file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_CONFIGURATION_FILTER={{ RESTORE-CONFIGURATION-FILTER-CSV }}
The {{ RESTORE-CONFIGURATION-FILTER-CSV }} is a json string containing the comma-separated list of ‘key:value’ pairs where key is in the form ‘serviceName.configurationMediaType’ and value itself can be a comma-separated list of properties to be filtered. If the entire configuration is to be excluded, then set the value to ‘*’. If the service name is not present in the configuration, then use the media type. Each key and value must be enclosed in double quotes (“). Here is an example:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_CONFIGURATION_FILTER='{"postgres.application/vnd.sas.configuration.config.sas.dataserver.conf+json;version=1":"*","maps-reportPackages-webDataAccess.application/vnd.sas.configuration.config.sas.maps+json;version=2":"useArcGISOnlineMaps,localEsriServicesUrl"}'
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
If the default resources are not sufficient for the completion or successful execution of the migration job, modify the resources to the values you desire.
Copy the file $deploy/sas-bases/examples/migration/configure/sas-migration-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/migration.
In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.
In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.
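After the two tokens are replaced, the limits in the copied transformer would contain values along these lines (illustrative values only; the surrounding patch structure comes from the shipped example file):
resources:
  limits:
    cpu: "2"        # replaces {{ CPU-LIMIT }}
    memory: "8Gi"   # replaces {{ MEMORY-LIMIT }}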
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/migration, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/migration/sas-migration-job-modify-resources-transformer.yaml
...
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
This README contains information for customizations potentially required for migrating to SAS Viya 4. These customizations are not used often.
If you change the name of the PostgreSQL service during migration, you must map the new name to the old name. Edit $deploy/kustomization.yaml and add an entry to the sas-restore-job-parameters configMap in the configMapGenerator section. The entry uses the following format:
data-service-{{ NEW-SERVICE-NAME }}={{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }}
To get the value for {{ NEW-SERVICE-NAME }}:
kubectl -n <name-of-namespace> get dataserver -o=custom-columns=SERVICE_NAME:.spec.registrations[].serviceName --no-headers
The command lists all the PostgreSQL clusters in your deployment. Choose the appropriate one from the list.
{{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is the name of the directory in backup where the PostgreSQL backup is stored (for example, 2022-03-02T09_04_11_611_0700/acme/**postgres**).
In the following example, {{ NEW-SERVICE-NAME }} is sas-cdspostgres, and {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is cpspostgres:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
...
- data-service-sas-cdspostgres=cpspostgres
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
If you need to exclude some of the schemas during migration, edit $deploy/kustomization.yaml and add an entry to the sas-restore-job-parameters configMap in the configMapGenerator section. The entry uses the following format:
EXCLUDE_SCHEMAS={schema1, schema2,...}
In the following example, “dataprofiles” and “naturallanguageunderstanding” are schemas that will not be migrated.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
...
- EXCLUDE_SCHEMAS=dataprofiles,naturallanguageunderstanding
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
If the database name on the system you want to restore (the target system) does not match the database name on the system from where a backup has been taken (the source system), then you must provide the appropriate database name as part of the restore operation.
The database name is provided by using an environment variable, RESTORE_DATABASE_MAPPING, which should be specified in the restore job ConfigMap, sas-restore-job-parameters. Use the following format:
RESTORE_DATABASE_MAPPING=<source instance name>.<source database name>=<target instance name>.<target database name>
For example, if the source system has the database name “SharedServices” and the target system database is named “TestDatabase”, then the environment variable would look like this:
RESTORE_DATABASE_MAPPING=postgres.SharedServices=postgres.TestDatabase
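Following the same pattern as the other restore job parameters in this README, the mapping would be merged into the sas-restore-job-parameters configMap, for example (a sketch using the database names above):
configMapGenerator:
- name: sas-restore-job-parameters
  behavior: merge
  literals:
  - RESTORE_DATABASE_MAPPING=postgres.SharedServices=postgres.TestDatabase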
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
If you need to exclude some of the PostgreSQL instances during migration, edit $deploy/kustomization.yaml and add an entry to the sas-restore-job-parameters configMap in the configMapGenerator section. The entry uses the following format:
EXCLUDE_SOURCES={instance1, instance2,...}
In the following example, “sas-cdspostgres” is a PostgreSQL instance that will not be migrated.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
...
- EXCLUDE_SOURCES=sas-cdspostgres
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
You can set a jobs option that reduces the amount of time required to restore the SAS Infrastructure Data server. The time required to restore the database from backup is reduced by restoring the database objects over multiple parallel jobs. The optimal value for this option depends on the underlying hardware of the server, of the client, and of the network (for example, the number of CPU cores). Refer to the --jobs parameter for more information about the parallel jobs.
You can specify the number of parallel jobs using the following environment variable, which should be specified in the sas-restore-job-parameters config map.
SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>
The following section, if not present, can be added to the kustomization.yaml file in your $deploy directory. If it is present, append the properties shown in this example in the literals section.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>
Build the manifest.
kustomize build -o site.yaml
Apply the manifest.
kubectl apply --selector="sas.com/admin in (cluster-api,cluster-wide,cluster-local,namespace)" -f site.yaml --server-side --force-conflicts
For more information about migration, see SAS Viya Platform Administration: Migration.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
The SAS Migration Management service interacts with SAS 9 Content Assessment to migrate applicable content from a SAS 9 system to SAS Viya 4.
The SAS Migration Management service accesses and maintains information about SAS 9 objects and their statuses in the migration process.
The SAS Migration Management service provides the following functions:
1. Upload content from the SAS 9 system captured by SAS Content Assessment.
2. Upload profiling information for an object.
3. Upload code check information for an object.
4. Update or append objects to the content.
5. List content based on a filter.
6. Create migration batches to subset content to be assessed by SAS Content Assessment.
7. Maintain migration batches, including adding and deleting content based on a filter.
8. Download a migration batch as a CSV file.
9. Log migration batch events.
The sas-migration-manager microservice is deployed in an idle state (scale=0) by default to save resources in SAS Viya unless the user wants to use the migration manager. In order to use the migration manager service, it must be activated in the deployment. To activate sas-migration-manager, follow the installation steps in this document.
To activate sas-migration-manager in your deployment, copy the $deploy/sas-bases/examples/sas-migration-manager/scale-migration-on.yaml file to your $deploy/site-config/sas-migration-manager directory.
After you copy the file, add a reference to it in the transformers block of the base kustomization.yaml file.
transformers:
- site-config/sas-migration-manager/scale-migration-on.yaml
For more information about configuration and using example files, see the SAS Viya Platform: Deployment Guide.
This readme describes how to convert SAS Viya 3.x CAS server definitions into SAS Viya 4 Custom Resources (CR) using the sas-migration-cas-converter.sh script.
To convert SAS Viya 3.x CAS servers into compatible SAS Viya 4 CRs, you must first run the inventory playbook to create a migration package. The package will contain a YAML file with the name of each of your CAS servers, such as cas-shared-default.yaml. Instructions to create a migration package using this playbook are given in the SAS Viya Platform Administration Guide.
You perform the conversion process by specifying the name of the YAML file as an argument to the sas-migration-cas-converter.sh script. You can specify the -f or --file argument. You can specify the -o or --output option to specify the location of the output file for the converted custom resource. By default, if no output option is specified, the YAML file is created in the current directory.
When you run the conversion script, a file with the custom resource is created in the format of {{ CAS-SERVER-NAME }}-migration-cr.yaml.
If you have data and permstore content to restore, use the cas-migration.yaml patch in $deploy/sas-bases/examples/migration/cas/cas-components to specify the backup location to restore from. This patch is already included in the kustomization.yaml file in the cas-components directory. To configure this patch:
Open cas-migration.yaml to modify its contents.
Set up the NFS mount by replacing the NFS-MOUNT-PATH and NFS-SERVER tokens with the mounted path to your backup location and the NFS server where it lives:
nfs:
path: {{NFS-MOUNT-PATH}}
server: {{NFS-SERVER}}
To include the newly created CAS custom resource in the manifest, add a reference to it in the resources block of the base kustomization.yaml file in the migration example (there is an example commented out). After you run kustomize build and apply the manifest, your server is created. Your backup content is restored if you included the cas-migration.yaml patch with a valid backup location.
Enabling state transfers allows the sessions, tables, and state of a running CAS server to be preserved and transferred to a new CAS server instance, which is started as part of the CAS server upgrade.
In the base kustomization.yaml file in the migration example (there are examples commented out):
- Uncomment the cas-components/state-transfer/transfer-pvc.yaml line in the resources block.
- Uncomment the cas-components/state-transfer/support-state-transfer.yaml line in the transformers block.
Run the script:
./sas-migration-cas-converter.sh -f cas-shared-default.yaml -o .
The output from this command is a file named cas-shared-default-migration-cr.yaml.
For more information about CAS migration, see SAS Viya Platform Administration: Promotion and Migration.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
The $deploy/sas-bases/overlays/migration/openshift directory contains a file to grant security context constraints (SCCs) for the sas-migration-job pod on an OpenShift cluster.
Note: The security context constraint needs to be applied only if the backup is present on an NFS path.
# using kubectl
kubectl apply -f migration-job-scc.yaml
# using the OpenShift CLI
oc create -f migration-job-scc.yaml
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-migration-job -z sas-viya-backuprunner
The files in this directory are used to create a backup of the SAS Viya platform. You can perform a one-time backup or you can schedule a regular backup of your deployment. For information about performing backups and using these files, see SAS Viya Platform Administration: Backup and Restore.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
This README describes how to revise and apply the settings for configuring backup jobs.
If you want to retain the PersistentVolumeClaim (PVC) used by the backup utility when the namespace is deleted, then use a StorageClass with a ReclaimPolicy of 'Retain' for the backup PVC.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-common-backup-data-storage-class-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
Follow the instructions in the copied sas-common-backup-data-storage-class-transformer.yaml file to change the values in that file as necessary.
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-common-backup-data-storage-class-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
sas-common-backup-data PersistentVolumeClaim
Copy the file $deploy/sas-bases/examples/backup/configure/sas-common-backup-data-storage-size-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
Follow the instructions in the copied sas-common-backup-data-storage-size-transformer.yaml file to change the values in that file as necessary.
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-common-backup-data-storage-size-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, the backup utility is run once per week on Sundays at 1:00 a.m. Use the following instructions to schedule a backup more suited to your resources.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-scheduled-backup-job-change-default-backup-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
Replace {{ SCHEDULE-BACKUP-CRON-EXPRESSION }} with the cron expression for the desired schedule in the copied sas-scheduled-backup-job-change-default-backup-transformer.yaml.
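For example, the documented default schedule (Sundays at 1:00 a.m.) corresponds to the following cron expression; adjust the five fields (minute, hour, day of month, month, day of week) to suit your environment:
0 1 * * 0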
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-scheduled-backup-job-change-default-backup-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, the incremental backup is run daily at 6:00 a.m. Use the following instructions to change the schedule of this additional job to a time more suited to your resources.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-scheduled-backup-incr-job-change-default-schedule.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
In the copied file, replace {{ SCHEDULE-BACKUP-CRON-EXPRESSION }} with the cron expression for the desired schedule.
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-scheduled-backup-incr-job-change-default-schedule.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, the additional job to back up all the data sources (including PostgreSQL) is suspended. When enabled, the job is scheduled to run once per week on Saturdays at 1:00 a.m by default.
Use the following instructions to change the schedule of this additional job to a time more suited to your resources.
This job should not be scheduled at the same time as sas-scheduled-backup-job or sas-scheduled-backup-incr-job.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-scheduled-backup-all-sources-change-default-schedule.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
In the copied file, replace {{ SCHEDULE-BACKUP-CRON-EXPRESSION }} with the cron expression for the desired schedule.
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-scheduled-backup-all-sources-change-default-schedule.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If the default resources are not sufficient for the completion or successful execution of the backup job, modify the resources to the values you desire.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-backup-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.
In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-backup-job-modify-resources-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If the default resources are not sufficient for the completion or successful execution of the backup copy and cleanup job, modify the resources to the values you desire.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-backup-pv-copy-cleanup-job-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.
In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-backup-pv-copy-cleanup-job-modify-resources-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If the default resources are not sufficient for the completion or successful execution of the CAS controller pod, modify the resources of the backup agent container of the CAS controller pod to the values you desire.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-cas-server-backup-agent-modify-resources-transformer.yaml to a location of your choice under $deploy/site-config, such as $deploy/site-config/backup.
In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.
In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.
By default the patch will be applied to all of the CAS servers. If the patch transformer is being applied to a single CAS server, replace {{ NAME-OF-CAS-SERVER }} with the named CAS server in the same file and comment out the lines ‘name: .*’ and ‘labelSelector: “sas.com/cas-server-default”’ with a hashtag (#).
Add the full path of the copied file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). For example, if you moved the file to $deploy/site-config/backup, you would modify the base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-cas-server-backup-agent-modify-resources-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If you need to change the backup job timeout value, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). The entry uses the following format, where {{ TIMEOUT-IN-MINUTES }} is an integer.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- JOB_TIME_OUT={{ TIMEOUT-IN-MINUTES }}
If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
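For example, to set the backup job timeout to 120 minutes (an illustrative value):
configMapGenerator:
- name: sas-backup-job-parameters
  behavior: merge
  literals:
  - JOB_TIME_OUT=120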
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If you need to change the backup retention period, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
The entry uses the following format, where {{ RETENTION-PERIOD-IN-DAYS }} is an integer.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- RETENTION_PERIOD={{ RETENTION-PERIOD-IN-DAYS }}
If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
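For example, to retain backups for 30 days (an illustrative value):
configMapGenerator:
- name: sas-backup-job-parameters
  behavior: merge
  literals:
  - RETENTION_PERIOD=30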
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If you want to back up additional consul properties, keys can be added to the sas-backup-agent-parameters configMap in the base kustomization.yaml file ($deploy/kustomization.yaml
).
To add keys, add a data block to the configMap.
If the sas-backup-agent-parameters configMap is already included in your base kustomization.yaml file, you should add the last line only. If the configMap isn’t included, add the entire block.
configMapGenerator:
- name: sas-backup-agent-parameters
behavior: merge
literals:
- BACKUP_ADDITIONAL_GENERIC_PROPERTIES="{{ CONSUL-KEY-LIST }}"
The {{ CONSUL-KEY-LIST }} should be a comma-separated list of properties to be backed up. Here is an example:
configMapGenerator:
- name: sas-backup-agent-parameters
behavior: merge
literals:
- BACKUP_ADDITIONAL_GENERIC_PROPERTIES="config/files/sas.files/maxFileSize,config/files/sas.files/blockedTypes"
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To exclude specific folders and files during file system backup, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
If the sas-backup-job-parameters configMap is already included in your base kustomization.yaml file, you should add the last line only. If the configMap isn’t included, add the entire block.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- FILESYSTEM_BACKUP_EXCLUDELIST="{{ EXCLUDE_PATTERN }}"
The {{ EXCLUDE_PATTERN }} should be a comma-separated list of patterns for files or folders to be excluded from the backup. Here is an example that excludes all the files with extensions “.tmp” or “.log”:
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- FILESYSTEM_BACKUP_EXCLUDELIST="*.tmp,*.log"
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, the filter list excludes files and folders that match the patterns “.lck”, “.”, and “lost+found” from the file system backup. To change the default filter list for excluding files and folders
during file system backup, add an entry to the sas-backup-job-parameters configMap in the
configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
If the sas-backup-job-parameters configMap is already included in your base kustomization.yaml file, you should add the last line only. If the configMap isn’t included, add the entire block.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- FILESYSTEM_BACKUP_OVERRIDE_EXCLUDELIST="{{ EXCLUDE_PATTERN }}"
The {{ EXCLUDE_PATTERN }} should be a comma-separated list of patterns for files or folders to be excluded from the backup. Here is an example that excludes all the files with extensions “.tmp” or “.log”:
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- FILESYSTEM_BACKUP_OVERRIDE_EXCLUDELIST="*.tmp,*.log"
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, you are notified if the backup job fails. To disable backup job failure notification, add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). Replace {{ ENABLE-NOTIFICATIONS }} with the string “false”.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- ENABLE_NOTIFICATIONS={{ ENABLE-NOTIFICATIONS }}
If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.
To restore the default, change the value of {{ ENABLE-NOTIFICATIONS }} from “false” to “true”.
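For example, with notifications disabled, the entry would look like this:
configMapGenerator:
- name: sas-backup-job-parameters
  behavior: merge
  literals:
  - ENABLE_NOTIFICATIONS=false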
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To include or exclude all PostgreSQL servers registered with SAS Viya in the default backup, add the INCLUDE_POSTGRES variable to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- INCLUDE_POSTGRES="{{ INCLUDE-POSTGRES }}"
To include all the registered PostgreSQL servers, replace {{ INCLUDE-POSTGRES }} in the code with the value ‘true’. To exclude all the registered PostgreSQL servers, replace {{ INCLUDE-POSTGRES }} in the code with the value ‘false’.
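For example, to exclude all the registered PostgreSQL servers from the default backup:
configMapGenerator:
- name: sas-backup-job-parameters
  behavior: merge
  literals:
  - INCLUDE_POSTGRES="false"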
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If using the default fsGroup settings does not result in the completion or successful execution of the backup job, modify the fsGroup resources to the values you desire.
Copy the file $deploy/sas-bases/examples/backup/configure/sas-backup-job-modify-fsgroup-transformer.yaml
to a location of your choice under $deploy/site-config
, such as $deploy/site-config/backup
.
Follow the instructions in the copied sas-backup-job-modify-fsgroup-transformer.yaml file to change the values in that file as necessary.
Add the full path of the copied file to the transformers block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). For example, if you
moved the file to $deploy/site-config/backup
, you would modify the
base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/sas-backup-job-modify-fsgroup-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, resources such as the space available in a PVC are pre-validated against the PVC capacity required to store the data for a backup job. You can disable the resource validations for the backup job if necessary.
Add an entry to the sas-backup-job-parameters configMap with the following command.
kubectl patch cm sas-backup-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/DISABLE_VALIDATION", "value":"true" }]'
If you are running the backup job with this configuration frequently, then add this configuration permanently using the following method. Add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- DISABLE_VALIDATION="true"
If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, resources such as the space available in a PVC are pre-validated against the PVC capacity required to store the data for a backup job, and a proactive notification is sent. You can disable the proactive notification for the backup job resource validations if necessary.
Add an entry to the sas-backup-job-parameters configMap with the following command.
kubectl patch cm sas-backup-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/DISABLE_PROACTIVE_NOTIFICATION", "value":"true" }]'
If you are running the backup job with this configuration frequently, then add this configuration permanently using the following method. Add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- DISABLE_PROACTIVE_NOTIFICATION="true"
If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
The backup progress feature provides real-time updates on the total estimated time for backup completion. This feature is enabled by default but can be disabled if users do not require progress tracking.
Add an entry to the sas-backup-job-parameters configMap with the following command.
kubectl patch cm sas-backup-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/BACKUP_PROGRESS", "value":"false" }]'
If you are running the backup job with this configuration frequently, then add this configuration permanently using the following method. Add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file. Here is an example:
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- BACKUP_PROGRESS="false"
If the sas-backup-job-parameters configMap already exists in the base kustomization.yaml file, add only the last line. If the configMap is not present, include the entire example.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To change the frequency of the updates on backup progress, add an entry to the sas-backup-job-parameters configMap within the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml). The entry uses the following format, where {{ PROGRESS-POLL-TIME-IN-MINUTES }} is an integer. The default and minimum value for backup progress poll time is 2 minutes. The maximum allowed value for backup progress poll time is 60 minutes.
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- PROGRESS_POLL_TIME={{ PROGRESS-POLL-TIME-IN-MINUTES }}
If the sas-backup-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
Note: High-frequency progress updates increase network usage and should be used cautiously for backups with very long durations.
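For example, to poll for progress every 10 minutes (an illustrative value within the allowed range of 2 to 60 minutes):
configMapGenerator:
- name: sas-backup-job-parameters
  behavior: merge
  literals:
  - PROGRESS_POLL_TIME=10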
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
This README describes how to revise and apply the settings for backing up PostgreSQL using the SAS Viya Backup and Restore Utility.
If you need to add or change any option for the PostgreSQL backup command (pg_dump),
add an entry to the sas-backup-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
configMapGenerator:
- name: sas-backup-job-parameters
behavior: merge
literals:
- SAS_DATA_SERVER_BACKUP_ADDITIONAL_OPTIONS={{ OPTION-1-NAME OPTION-1-VALUE }},{{ FLAG-1 }},{{ OPTION-2-NAME OPTION-2-VALUE }}
The {{ OPTION-NAME OPTION-VALUE }} and {{ FLAG }} variables should be a comma-separated list of options to be added, such as -Z 0,--version
.
If the sas-backup-job-parameters configMap is already present in the ($deploy/kustomization.yaml
) file, you should add the last line only. If the configMap is not present, add the entire example.
Note: Do not use --format or -F in SAS_DATA_SERVER_BACKUP_ADDITIONAL_OPTIONS; the backup process defaults to directory format, ensuring compatibility during restoration.
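For example, using the options mentioned above, the entry might look like this:
configMapGenerator:
- name: sas-backup-job-parameters
  behavior: merge
  literals:
  - SAS_DATA_SERVER_BACKUP_ADDITIONAL_OPTIONS=-Z 0,--version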
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To enable a suspended incremental backup job, edit the base kustomization file ($deploy/kustomization.yaml
).
In the transformers block, add /sas-bases/overlays/backup/sas-scheduled-backup-incr-job-enable.yaml
. Here is an example:
...
transformers:
- sas-bases/overlays/backup/sas-scheduled-backup-incr-job-enable.yaml
...
The above transformer also sets INCLUDE_POSTGRES=False in the sas-backup-job-parameters configMap.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To enable a suspended job to back up all sources (including PostgreSQL), edit the base kustomization file ($deploy/kustomization.yaml
).
In the transformers block, add /sas-bases/overlays/backup/sas-scheduled-backup-all-sources-enable.yaml
. Here is an example:
...
transformers:
- sas-bases/overlays/backup/sas-scheduled-backup-all-sources-enable.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
The files in this directory are used to customize your SAS Viya platform deployment to perform a restore. For information about the restore function and using these files, see SAS Viya Platform Administration: Backup and Restore.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
This directory contains overlays to customize your SAS Viya platform deployment to run restore. For information about the restore function and using these files, see SAS Viya Platform Administration: Backup and Restore.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
This README describes how to revise and apply the settings for configuring restore jobs.
To change the restore job timeout value temporarily, edit the sas-restore-job-parameters configMap using the following command, where {{ TIMEOUT-IN-MINUTES }} is an integer.
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[ {"op": "replace", "path": "/data/JOB_TIME_OUT", "value":"{{ TIMEOUT-IN-MINUTES }}" }]'
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
To change the restore job timeout value, edit the $deploy/kustomization.yaml
file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block.
The entry uses the following format, where {{ TIMEOUT-IN-MINUTES }} is an integer.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- JOB_TIME_OUT={{ TIMEOUT-IN-MINUTES }}
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
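For example, to set the restore job timeout to 180 minutes (an illustrative value):
configMapGenerator:
- name: sas-restore-job-parameters
  behavior: merge
  literals:
  - JOB_TIME_OUT=180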
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To skip the restore of the configuration definition properties once, edit the sas-restore-job-parameters configMap using the following command.
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DEFINITION_FILTER", "value":"{{ RESTORE-DEFINITION-FILTER-CSV }}" }]'
The {{ RESTORE-DEFINITION-FILTER-CSV }} is a JSON string containing the comma-separated list of ‘key:value’ pairs where the key is in the form ‘serviceName.definitionName.version’ and the value can be a comma-separated list of properties to be filtered. If the entire definition is to be excluded, then set the value to ‘*’. If the service name is not present in the definition, then only provide ‘definitionName’. Each key and value must be enclosed in double quotes (“). Here is an example:
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DEFINITION_FILTER", "value":"{\"sas.dataserver.common.1\":\"*\",\"deploymentBackup.sas.deploymentbackup.1\":\"*\",\"deploymentBackup.sas.deploymentbackup.2\":\"*\",\"deploymentBackup.sas.deploymentbackup.3\":\"*\",\"sas.security.1\":\"*\",\"vault.sas.vault.1\":\"*\",\"vault.sas.vault.2\":\"*\",\"SASDataExplorer.sas.dataexplorer.1\":\"*\",\"SASLogon.sas.logon.sas9.1\":\"*\",\"sas.cache.1\":\"*\",\"sas.cache.2\":\"*\",\"sas.cache.3\":\"*\",\"sas.cache.4\":\"*\",\"identities-SASLogon.sas.identities.providers.ldap.user.1\":\"accountId,address.country\",\"SASLogon.sas.logon.saml.providers.external_saml.1\":\"assertionConsumerIndex,idpMetadata\"}" }]'
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Edit the $deploy/kustomization.yaml
file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_DEFINITION_FILTER={{ RESTORE-DEFINITION-FILTER-CSV }}
The {{ RESTORE-DEFINITION-FILTER-CSV }} is a JSON string containing the comma-separated list of ‘key:value’ pairs where the key is in the form ‘serviceName.definitionName.version’ and the value can be a comma-separated list of properties to be filtered. If the entire definition is to be excluded, then set the value to ‘*’. If the service name is not present in the definition, then only provide ‘definitionName’. Each key and value must be enclosed in double quotes (“). Here is an example:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_DEFINITION_FILTER='{"sas.dataserver.common.1":"*","deploymentBackup.sas.deploymentbackup.1":"*","deploymentBackup.sas.deploymentbackup.2":"*","deploymentBackup.sas.deploymentbackup.3":"*","sas.security.1":"*","vault.sas.vault.1":"*","vault.sas.vault.2":"*","SASDataExplorer.sas.dataexplorer.1":"*","SASLogon.sas.logon.sas9.1":"*","sas.cache.1":"*","sas.cache.2":"*","sas.cache.3":"*","sas.cache.4":"*","identities-SASLogon.sas.identities.providers.ldap.user.1":"accountId,address.country","SASLogon.sas.logon.saml.providers.external_saml.1":"assertionConsumerIndex,idpMetadata"}'
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
To skip the restore of the configuration properties once, edit the sas-restore-job-parameters configMap using the following command.
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_CONFIGURATION_FILTER", "value":"{{ RESTORE-CONFIGURATION-FILTER-CSV }}" }]'
The {{ RESTORE-CONFIGURATION-FILTER-CSV }} is a JSON string containing the comma-separated list of ‘key:value’ pairs where the key is in the form ‘serviceName.configurationMediaType’ and the value can be a comma-separated list of properties to be filtered. If the entire configuration is to be excluded, then set the value to ‘*’. If the service name is not present in the configuration, then use the media type. Each key and value must be enclosed in double quotes (“). Here is an example:
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_CONFIGURATION_FILTER", "value":"{\"postgres.application/vnd.sas.configuration.config.sas.dataserver.conf+json;version=1\":\"*\",\"maps-reportPackages-webDataAccess.application/vnd.sas.configuration.config.sas.maps+json;version=2\":\"useArcGISOnlineMaps,localEsriServicesUrl\"}" }]'
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Edit the $deploy/kustomization.yaml
file by adding an entry for the sas-restore-job-parameters configMap in the configMapGenerator block. The entry uses the following format.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_CONFIGURATION_FILTER={{ RESTORE-CONFIGURATION-FILTER-CSV }}
The {{ RESTORE-CONFIGURATION-FILTER-CSV }} is a JSON string containing the comma-separated list of ‘key:value’ pairs where the key is in the form ‘serviceName.configurationMediaType’ and the value can be a comma-separated list of properties to be filtered. If the entire configuration is to be excluded, then set the value to ‘*’. If the service name is not present in the configuration, then use the media type. Each key and value must be enclosed in double quotes (“). Here is an example:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- RESTORE_CONFIGURATION_FILTER='{"postgres.application/vnd.sas.configuration.config.sas.dataserver.conf+json;version=1":"*","maps-reportPackages-webDataAccess.application/vnd.sas.configuration.config.sas.maps+json;version=2":"useArcGISOnlineMaps,localEsriServicesUrl"}'
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, you should add the last line only. If the configMap is not present, add the entire example.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
By default, you are notified if the restore job fails. To disable the restore job failure notification once, add an entry to the sas-restore-job-parameters configMap with the following command. Replace {{ ENABLE-NOTIFICATIONS }} with the string “false”.
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/ENABLE_NOTIFICATIONS", "value":"{{ ENABLE-NOTIFICATIONS }}" }]'
To restore the default, change the value of {{ ENABLE-NOTIFICATIONS }} from “false” to “true”.
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Add an entry to the sas-restore-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file. Replace {{ ENABLE-NOTIFICATIONS }} with the string “false”.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- ENABLE_NOTIFICATIONS={{ ENABLE-NOTIFICATIONS }}
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.
To restore the default, change the value of {{ ENABLE-NOTIFICATIONS }} from “false” to “true”.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
In some cases, the default resources may not be sufficient for completion or successful execution of the restore job, resulting in the pod status being marked as OOMKilled. In this case, modify the resources to the values you desire.
Replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.
kubectl patch cronjob sas-restore-job -n name-of-namespace --type json -p '[{"op": "replace", "path": "/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu", "value":"{{ CPU-LIMIT }}" }]'
Replace {{ MEMORY-LIMIT }} with the desired value for memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.
```bash
kubectl patch cronjob sas-restore-job -n name-of-namespace --type json -p '[{"op": "replace", "path": "/spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory", "value":"{{ MEMORY-LIMIT }}" }]'
```
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Copy the file $deploy/sas-bases/examples/restore/configure/sas-restore-job-modify-resources-transformer.yaml
to a location of your choice under $deploy/site-config
, such as $deploy/site-config/restore
.
In the copied file, replace {{ CPU-LIMIT }} with the desired value of CPU. {{ CPU-LIMIT }} must be a non-zero and non-negative numeric value, such as “3” or “5”. You can specify fractional values for the CPUs by using decimals, such as “1.5” or “0.5”.
In the same file, replace {{ MEMORY-LIMIT }} with the desired value of memory. {{ MEMORY-LIMIT }} must be a non-zero and non-negative numeric value followed by “Gi”. For example, “8Gi” for 8 gigabytes.
Add the full path of the copied file to the transformers block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). For example, if you
moved the file to $deploy/site-config/restore
, you would modify the
base kustomization.yaml file like this:
...
transformers:
...
- site-config/restore/sas-restore-job-modify-resources-transformer.yaml
...
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
External PostgreSQL servers can be backed up and restored externally. Point in time recovery performed in such cases creates a new PostgreSQL server with a new host name. To automatically update the host names of the PostgreSQL server after the restore is completed using the SAS Viya Backup and Restore Utility, update the sas-restore-job-parameters config map with the following parameters before performing the restore.
AUTO_SWITCH_POSTGRES: “true”
DATASERVER_HOST_MAP: “{{ DATASERVER_HOST_MAP }}”
{{ DATASERVER_HOST_MAP }} is a comma-separated list of key-value pairs that describes the mapping of dataserver custom resource names to updated host names. The key and value within each pair are separated by a colon (:). Here is an example that switches the host names for the SAS platform PostgreSQL and SAS CDS PostgreSQL servers to the new host names:
DATASERVER_HOST_MAP="sas-platform-postgres:restored-postgres.postgres.azure.com,sas-cds-postgres:restored-cds-postgres.postgres.azure.com"
Here is an example command that adds the AUTO_SWITCH_POSTGRES_HOST and DATASERVER_HOST_MAP parameters to the config map:
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/AUTO_SWITCH_POSTGRES_HOST", "value":"TRUE" }, {"op": "replace", "path": "/data/DATASERVER_HOST_MAP","value":"sas-platform-postgres:restored-postgres.postgres.azure.com,sas-cds-postgres:restored-cds-postgres.postgres.azure.com" }]'
This section is used when SQL proxy is used to interface the external PostgreSQL server. External PostgreSQL servers can be backed up and restored externally. Point in time recovery performed in such cases creates a new PostgreSQL server with a new host name. To automatically update the host names of the PostgreSQL server after the restore is completed using the SAS Viya Backup and Restore Utility, update the sas-restore-job-parameters config map with the following parameters before performing the restore.
AUTO_SWITCH_POSTGRES: “true”
SQL_PROXY_POSTGRES_CONNECTION_MAP: “{{ SQL_PROXY_POSTGRES_CONNECTION_MAP }}”
{{ SQL_PROXY_POSTGRES_CONNECTION_MAP }} is a comma-separated list of key-value pairs that describes the mapping of SQL proxy Kubernetes deployment names to new PostgreSQL connection strings. The key and value within each pair are separated by the first colon (:). Here is an example that switches the connection strings for the SAS platform PostgreSQL and SAS CDS PostgreSQL servers:
SQL_PROXY_POSTGRES_CONNECTION_MAP="platform-postgres-sql-proxy:sub7:us-east1:restored-postgres-default-pgsql-clone,cds-postgres-sql-proxy:restored-cds-postgres-default-pgsql-clone"
Here is an example command that adds the AUTO_SWITCH_POSTGRES_HOST and SQL_PROXY_POSTGRES_CONNECTION_MAP parameters to the config map:
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/AUTO_SWITCH_POSTGRES_HOST", "value":"TRUE" }, {"op": "replace", "path": "/data/SQL_PROXY_POSTGRES_CONNECTION_MAP","value":"platform-postgres-sql-proxy:sub7:us-east1:restored-postgres-default-pgsql-clone,cds-postgres-sql-proxy:restored-cds-postgres-default-pgsql-clone" }]'
By default, resources like CPU and memory are pre-validated to ensure that the restore job can be completed successfully. You can disable the resource validation for the restore job if necessary.
Add an entry to the sas-restore-job-parameters configMap with the following command.
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/DISABLE_VALIDATION", "value":"true" }]'
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method. Add an entry to the sas-restore-job-parameters configMap in the configMapGenerator block of the base kustomization.yaml file.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- DISABLE_VALIDATION="true"
If the sas-restore-job-parameters configMap is already present in the base kustomization.yaml file, add the last line only. If the configMap is not present, add the entire example.
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
This README file contains information about customizations that are potentially required for restoring SAS Viya Platform from a backup. These customizations are not used often.
If the database name on the system you want to restore (the target system) does not match the database name on the system from where a backup has been taken (the source system), then you must provide the appropriate database name as part of the restore operation.
The database name is provided by using an environment variable, RESTORE_DATABASE_MAPPING, which should be specified in the restore job ConfigMap, sas-restore-job-parameters. Use the following command:
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DATABASE_MAPPING", "value":"<source instance name>.<source database name>=<target instance name>.<target database name>" }]'
```
For example, if the source system has the database name “SharedServices” and the target system database is named “TestDatabase”, then the environment variable would look like this:
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/RESTORE_DATABASE_MAPPING", "value":"postgres.SharedServices=postgres.TestDatabase" }]'
```
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
The database name is provided by using an environment variable, RESTORE_DATABASE_MAPPING, which should be specified in the restore job ConfigMap, sas-restore-job-parameters. Use the following format:
RESTORE_DATABASE_MAPPING=<source instance name>.<source database name>=<target instance name>.<target database name>
For example, if the source system has the database name “SharedServices” and the target system database is named “TestDatabase”, then the environment variable would look like this:
RESTORE_DATABASE_MAPPING=postgres.SharedServices=postgres.TestDatabase
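Following the configMapGenerator pattern used elsewhere in this README, the entry would be added like this (add only the last line if the sas-restore-job-parameters configMap is already present):
configMapGenerator:
- name: sas-restore-job-parameters
  behavior: merge
  literals:
  - RESTORE_DATABASE_MAPPING=postgres.SharedServices=postgres.TestDatabase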
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If you change the name of the PostgreSQL service during migration, you must map the new name to the old name. Edit the sas-restore-job-parameters configMap using the following command:
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/data-service-{{ NEW-SERVICE-NAME }}", "value":"{{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }}" }]'
```
To get the value for {{ NEW-SERVICE-NAME }}:
```bash
kubectl -n <name-of-namespace> get dataserver -o=custom-columns=SERVICE_NAME:.spec.registrations[].serviceName --no-headers
```
The command lists all the PostgreSQL clusters in your deployment. Choose the appropriate one from the list. {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is the name of the directory in backup where the
PostgreSQL backup is stored (for example, 2022-03-02T09_04_11_611_0700/acme/**postgres**
).
In the following example, {{ NEW-SERVICE-NAME }} is sas-cdspostgres, and {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is cpspostgres:
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/data-service-sas-cdspostgres", "value":"cpspostgres" }]'
```
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Edit $deploy/kustomization.yaml
and add an entry to the restore_job_parameters configMap in the configMapGenerator section. The entry uses the following format:
data-service-{{ NEW-SERVICE-NAME }}={{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }}
To get the value for {{ NEW-SERVICE-NAME }}:
kubectl -n <name-of-namespace> get dataserver -o=custom-columns=SERVICE_NAME:.spec.registrations[].serviceName --no-headers
The command lists all the PostgreSQL clusters in your deployment. Choose the appropriate one from the list.
{{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is the name of the directory in backup where the PostgreSQL backup is stored (for example, 2022-03-02T09_04_11_611_0700/acme/**postgres**
).
In the following example, {{ NEW-SERVICE-NAME }} is sas-cdspostgres, and {{ DIRECTORY-NAME-OF-POSTGRES-IN-BACKUP }} is cpspostgres:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
...
- data-service-sas-cdspostgres=cpspostgres
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If you need to exclude some of the schemas during migration once, edit the sas-restore-job-parameters configMap using the following command:
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SCHEMAS", "value":"{{ schema1, schema2,... }}" }]'
```
In the following example, “dataprofiles” and “naturallanguageunderstanding” are schemas that will not be restored.
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SCHEMAS", "value":"dataprofiles,naturallanguageunderstanding" }]'
```
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Edit $deploy/kustomization.yaml
by adding an entry to the restore_job_parameters configMap in the configMapGenerator section. The entry uses the following format:
EXCLUDE_SCHEMAS={schema1, schema2,...}
In the following example, “dataprofiles” and “naturallanguageunderstanding” are schemas that will not be restored.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
...
- EXCLUDE_SCHEMAS=dataprofiles,naturallanguageunderstanding
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
If you need to exclude some of the PostgreSQL instances during restore once, edit the sas-restore-job-parameters configMap using the following command:
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SOURCES", "value":"{{ instance1, instance2,... }}" }]'
```
In the following example, “sas-cdspostgres” is a PostgreSQL instance that will not be restored.
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/EXCLUDE_SOURCES", "value":"sas-cdspostgres" }]'
```
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Edit $deploy/kustomization.yaml
by adding an entry to the restore_job_parameters configMap in configMapGenerator section. The entry uses the following format:
EXCLUDE_SOURCES={instance1, instance2,...}
In the following example, “sas-cdspostgres” is a PostgreSQL instance that will not be restored.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
...
- EXCLUDE_SOURCES=sas-cdspostgres
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
You can set a jobs option that reduces the amount of time required to restore the SAS Infrastructure Data server. The time required to restore the database from backup is reduced by restoring the database objects over multiple parallel jobs. The optimal value for this option depends on the underlying hardware of the server, of the client, and of the network (for example, the number of CPU cores). Refer to the --jobs parameter for more information about the parallel jobs.
You can specify the number of parallel jobs once using the following environment variable, which should be specified in the sas-restore-job-parameters configMap.
```bash
kubectl patch cm sas-restore-job-parameters-name -n name-of-namespace --type json -p '[{"op": "replace", "path": "/data/SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT", "value":"{{ number-of-jobs }}" }]'
```
If you are running the restore job with this configuration frequently, then add this configuration permanently using the following method.
Specify the number of parallel jobs using the following environment variable, which should be specified in the sas-restore-job-parameters config map.
SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>
The following section, if not present, can be added to the kustomization.yaml file in your $deploy
directory. If it is present, append the properties shown in this example to the literals section.
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=<number-of-jobs>
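For example, to restore database objects over four parallel jobs (an illustrative value):
configMapGenerator:
- name: sas-restore-job-parameters
  behavior: merge
  literals:
  - SAS_DATA_SERVER_RESTORE_PARALLEL_JOB_COUNT=4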
Build and Apply the Manifest
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Modify Existing Customizations in a Deployment.
This README file contains information about the execution of scripts that are potentially required for restoring the SAS Viya Platform from a backup.
To execute the scripts described in this README, append the execute permission by running the following command.
chmod +x ./sas-backup-pv-copy-cleanup.sh ./scale-up-cas.sh ./sas-backup-pv-copy-cleanup-using-pvcs.sh ./sas-backup-pv-cleanup.sh
Persistent volume claims (PVCs) are used by the CAS server to restore CAS data. To clean up the CAS PVCs after the restore job has completed, execute the sas-backup-pv-copy-cleanup.sh or the sas-backup-pv-copy-cleanup-using-pvcs.sh bash script. Both scripts take three arguments: the namespace, the operation to perform, and a comma-separated list of CAS instances or persistent volume claims. If you are attempting a restore after a successful SAS Viya 3.x to SAS Viya 4 migration, method 2 is recommended.
./sas-backup-pv-copy-cleanup.sh [namespace] [operation] "[CAS instances list]"
Here is an example:
./sas-backup-pv-copy-cleanup.sh viya04 remove "default"
Note: The default CAS instance name is “default” if the user has not changed it.
Use the following command to determine the name of the CAS instances.
kubectl -n name-of-namespace get casdeployment -L 'casoperator.sas.com/instance'
Verify that the output for the command contains the name of the CAS instances. Here is an example of the output:
test.host.com> kubectl -n viya04 get casdeployment -L 'casoperator.sas.com/instance'
NAME AGE INSTANCE
default 14h default
In this example, the CAS instance is named “default”. If the instance value in the output is empty, use “default” as the instance value.
To get the list of persistent volume claims for CAS instances, execute the following command.
kubectl -n name-of-namespace get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'
Verify that the output contains the persistent volume claims.
test.host.com> kubectl -n viya04 get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cas-acme-default-data Bound pvc-6c4b3b65-cc11-4757-ac00-059d8e19f307 8Gi RWX nfs-client 20h
cas-acme-default-permstore Bound pvc-1a7cc621-5770-4e5d-b829-46eaad433460 100Mi RWX nfs-client 20h
cas-cyberdyne-default-data Bound pvc-cd5c173a-9bcf-4649-bea3-ea463930c9b4 8Gi RWX nfs-client 20h
cas-cyberdyne-default-permstore Bound pvc-253ff153-f309-4700-bef1-e041f63a7810 100Mi RWX nfs-client 20h
cas-default-data Bound pvc-52d98061-d296-40f0-92e9-eaa34ca856c5 8Gi RWX nfs-client 21h
cas-default-permstore Bound pvc-cd8c3e86-a848-4029-9456-5841c85b15fd 100Mi RWX nfs-client 21h
Select the data and permstore persistent volume claims for a CAS instance.
./sas-backup-pv-copy-cleanup-using-pvcs.sh [namespace] [operation] "[PVCs]"
Here is an example:
./sas-backup-pv-copy-cleanup-using-pvcs.sh viya04 remove "cas-default-data,cas-default-permstore"
To remove data from the CAS PVCs after the restore job is completed, execute the sas-backup-pv-cleanup.sh script.
To retrieve the list of persistent volume claims (PVCs) for the source data, run the following command:
kubectl -n name-of-namespace get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'
Verify that the output contains the persistent volume claims.
test.host.com> kubectl -n viya04 get pvc -l 'sas.com/backup-role=provider,app.kubernetes.io/part-of=cas'
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
cas-default-data Bound pvc-5feb5df5-daf9-4100-b998-64d48e221861 8Gi RWX nfs-client <unset> 2d1h
cas-default-permstore Bound pvc-29d9ba36-7da5-4870-b7ec-719811f41caa 100Mi RWX nfs-client <unset> 2d1h
In the command below, replace “[PVCs]” with the PVC names from the NAME column in the list above.
./sas-backup-pv-cleanup.sh [namespace] "[PVCs]"
Here is an example:
./sas-backup-pv-cleanup.sh viya04 "cas-default-data,cas-default-permstore"
You can also use a Kubernetes job (sas-backup-pv-copy-cleanup-job) to copy backup data to and from the backup persistent volume claims like sas-common-backup-data and sas-cas-backup-data.
To create a copy job from the cronjob sas-backup-pv-copy-cleanup-job, execute the sas-backup-pv-copy-cleanup.sh script with three arguments: namespace, operation to perform, and a comma-separated list of CAS instances.
./sas-backup-pv-copy-cleanup.sh [namespace] [operation] "[CAS instances list]"
Here is an example:
./sas-backup-pv-copy-cleanup.sh viya04 copy "default"
Note: The default CAS instance name is “default” if the user hasn’t changed it.
The script creates a copy job for each CAS instance that is included in the comma-separated list of CAS instances. Check for the sas-backup-pv-copy-job pod that is created for each individual CAS instance.
kubectl -n name-of-namespace get pod | grep -i sas-backup-pv-copy
If you do not see the results you expect, see the console output of the sas-backup-pv-copy-cleanup.sh script.
To create a copy job from the cronjob sas-backup-pv-copy-cleanup-job, execute the sas-backup-pv-copy-cleanup-using-pvcs.sh script with three arguments: namespace, operation to perform, and the backup persistent volume claim particular to the CAS instance.
To get the list of backup persistent volume claims for CAS instances, execute the following command.
kubectl -n name-of-namespace get pvc -l 'sas.com/backup-role=storage,app.kubernetes.io/part-of=cas'
Verify that the output contains the names of the backup persistent volume claims for the CAS instances.
test.host.com> kubectl -n viya04 get pvc -l 'sas.com/backup-role=storage,app.kubernetes.io/part-of=cas'
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sas-cas-backup-data Bound pvc-3b16a5c0-b4af-43a1-95f7-53aa30103a59 8Gi RWX nfs-client 21h
sas-cas-backup-data-acme-default Bound pvc-ceb3f86d-c0da-419b-bc06-825a6cddb5d9 4Gi RWX nfs-client 21h
sas-cas-backup-data-cyberdyne-default Bound pvc-306f6b28-7d5a-4769-885c-b21d3b734207 4Gi RWX nfs-client 21h
Select the backup persistent volume claim for a CAS instance.
./sas-backup-pv-copy-cleanup-using-pvcs.sh [namespace] [operation] "[PVC]"
Here is an example:
./sas-backup-pv-copy-cleanup-using-pvcs.sh viya04 copy "sas-cas-backup-data"
The script creates a copy job that mounts the CAS-specific backup persistent volume claim and the sas-common-backup-data persistent volume claim. Check for the sas-backup-pv-copy-job pod that is created.
kubectl -n name-of-namespace get pod | grep -i sas-backup-pv-copy
If you do not see the results you expect, see the console output of the sas-backup-pv-copy-cleanup.sh script.
The copy job pod mounts two persistent volume claims per CAS instance. The ‘sas-common-backup-data’ PVC is mounted at ‘/sasviyabackup’ and the ‘sas-cas-backup-data’ PVC is mounted at ‘/cas’.
To scale up the CAS deployments that are used to restore CAS data for each CAS instance, execute the scale-up-cas.sh bash script with two arguments: namespace and a comma-separated list of CAS instances.
./scale-up-cas.sh [namespace] "[CAS instances list]"
Here is an example:
./scale-up-cas.sh viya04 "default"
Note: The default CAS instance name is “default” if the user has not changed it.
Ensure that all the required sas-cas-controller pods are scaled up, especially if you have multiple CAS controllers.
The $deploy/sas-bases/examples/restore/scripts/openshift
directory contains a file to grant security context constraints (SCCs) for the sas-backup-pv-copy-cleanup-job pod on an OpenShift cluster.
If you enable host launch on an OpenShift cluster, use the sas-backup-pv-copy-cleanup-job-scc.yaml
SCC.
If you did not enable host launch on an OpenShift cluster and are facing issues related to file deletion, use the sas-backup-pv-copy-cleanup-job-scc-fsgroup.yaml
SCC.
Note: The security context constraint needs to be applied only if CAS is configured to allow for host identity.
Use one of the following commands to apply the SCCs.
Using kubectl
kubectl apply -f sas-backup-pv-copy-cleanup-job-scc.yaml
or
kubectl apply -f sas-backup-pv-copy-cleanup-job-scc-fsgroup.yaml
Using the OpenShift CLI
oc create -f sas-backup-pv-copy-cleanup-job-scc.yaml
or
oc create -f sas-backup-pv-copy-cleanup-job-scc-fsgroup.yaml
Use the following command to link the SCCs to the appropriate Kubernetes service account. Replace the entire variable {{ NAME-OF-NAMESPACE }}, including the braces, with the Kubernetes namespace used for the SAS Viya platform.
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-backup-pv-copy-cleanup-job -z sas-viya-backuprunner
The SAS Model Repository service provides support for registering, organizing, and managing models within a common model repository. This service is used by SAS Event Stream Processing, SAS Intelligent Decisioning, SAS Model Manager, Model Studio, SAS Studio, and SAS Visual Analytics.
Analytic store (ASTORE) files are extracted from the analytic store’s CAS table in the ModelStore caslib and written to the ASTORES persistent volume, when the following actions are performed:
When Python models (or decisions that use Python models) are published to the SAS Micro Analytic Service or CAS, the Python score resources are copied to the ASTORES persistent volume. Score resources for project champion models that are used by SAS Event Stream Processing are also copied to the persistent volume.
During the migration process, the analytic store models and Python models are restored in the common model repository, along with their associated resources and analytic store files in the ASTORES persistent volume.
Note: The Python score resources from a SAS Viya 3.5 to SAS Viya 4 environment are not migrated with the SAS Model Repository service. For more information, see Promoting and Migrating Content in SAS Model Manager: Administrator’s Guide.
This README describes how to make the restore job parameters available to the
sas-model-repository container within your deployment, as part of the backup and
restore process. The restore process is performed during start-up of the
sas-model-repository container if the SAS_DEPLOYMENT_START_MODE parameter is
set to RESTORE or MIGRATION.
No prerequisite steps are required.
Copy the files in the
$deploy/sas-bases/examples/sas-model-repository/restore
directory to the
$deploy/site-config/sas-model-repository/restore
directory. Create the
target directory, if it does not already exist.
Make a copy of the kustomization.yaml file to recover after temporary changes are made: cp kustomization.yaml kustomization.yaml.save
Add site-config/sas-model-repository/restore/restore-transformer.yaml to the
transformers block of the base kustomization.yaml file in the $deploy
directory.
transformers:
- site-config/sas-model-repository/restore/restore-transformer.yaml
Excerpt from the restore-transformer.yaml file:
patch: |-
# Add restore job parameters
- op: add
path: /spec/template/spec/containers/0/envFrom/-
value:
configMapRef:
name: sas-restore-job-parameters
Add the sas-restore-job-parameters code below to the configMapGenerator
section of kustomization.yaml, and remove the configMapGenerator line if it
is already present in the default kustomization.yaml:
configMapGenerator:
- name: sas-restore-job-parameters
behavior: merge
literals:
- SAS_BACKUP_ID={{ SAS-BACKUP-ID-VALUE }}
- SAS_DEPLOYMENT_START_MODE=RESTORE
Here are more details about the previous code.
Replace {{ SAS-BACKUP-ID-VALUE }} with the ID of the backup that is selected for restore.
For more information, see Backup and Restore: Perform a Restore in SAS Viya Platform Operations.
If you need to rerun a migration, you must remove the RestoreBreadcrumb.txt
file from the /models/resources/viya
directory.
Here is example code for removing the file:
kubectl get pods -n <namespace> | grep model-repository
kubectl exec -it -n <namespace> <podname> -c sas-model-repository -- bash
rm /models/resources/viya/RestoreBreadcrumb.txt
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
Run kustomize build to create and apply the manifests.
The Update Checker cron job builds a report comparing the currently deployed release with available releases in the upstream repository. The report is written to the stdout of the launched job pod and indicates when new content related to the deployment is available.
This example includes the following kustomize transform that defines proxy environment variables for the report when it is running behind a proxy server:
$deploy/sas-bases/examples/update-checker/proxy-transformer.yaml
For information about using the Update Checker, see View the Update Checker Report.
Note: Ensure that the version indicated by the version selector for the document matches the version of your SAS Viya platform software.
You can use the examples found within $deploy/sas-bases/examples/ingress-configuration/
to set general configuration values for Ingress resources.
The INGRESS_CLASS_NAME specifies the name of the IngressClass which SAS Viya Platform Ingress resources should use for this deployment. By default, SAS Viya Platform Ingress resources will use the nginx
IngressClass. For more information about IngressClass resources, see Ingress class and Using IngressClasses.
The corresponding transformer file to override the ingressClassName field in Ingress resources is found at sas-bases/overlays/ingress-configuration/update-ingress-classname.yaml
.
Use these steps to apply the desired properties to your SAS Viya platform deployment.
Copy the $deploy/sas-bases/examples/ingress-configuration/ingress-configuration-inputs.yaml
file to the location of your ingress configuration overlays,
such as site-config/ingress-configuration/
.
Define the properties in the ingress-configuration-inputs.yaml file which match the desired configuration. To define a property, uncomment it and update its token value as described in the comments in the file.
Add the relative path of ingress-configuration-inputs.yaml to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
...
resources:
...
- site-config/ingress-configuration/ingress-configuration-inputs.yaml
...
Add the relative path(s) of the corresponding transformer file(s) to the transformers block of the base kustomization.yaml file. There should be one transformer file added per option defined within the ConfigMap. Here is an example:
...
transformers:
...
- sas-bases/overlays/ingress-configuration/update-ingress-classname.yaml
...
The Inventory Collector is a CronJob that contains two Jobs.
They are available to run after deployment is fully up and running.
The first Job creates inventory tables and the second Job creates an inventory
comparison table. Tables are created in the protected SystemData caslib
and used by SAS Inventory Reports located in the
Content/Products/SAS Environment Manager/Dashboard Items
folder.
Access to the tables and reports is restricted to users who are
members of the SAS Administrators group.
For more information, see SAS Help Center Documentation
The Inventory Collector Job must be run before the Inventory Comparison Job. It collects an inventory of artifacts created by various SAS Viya platform services. It also creates the SASINVENTORY4 and SASVIYAINVENTORY4_CASSVRDETAILS CAS tables in the SystemData caslib that are referenced by the SAS Viya 4 Inventory Report.
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-job
Set the TENANT environment variable to the tenant name, then create and run the Job. Here is an example for a tenant named “acme”:
kubectl set env cronjob/sas-inventory-collector TENANT=acme
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-job
Set the TENANT environment variable to “provider”, then create and run the Job. Here is an example:
kubectl set env cronjob/sas-inventory-collector TENANT=provider
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-job
To unset the TENANT environment variable after the Job completes, run this command:
kubectl set env cronjob/sas-inventory-collector TENANT-
The sas-inventory-collector CronJob is disabled by default. To enable it, run this command:
kubectl patch cronjob sas-inventory-collector -p '{"spec":{"suspend": false}}'
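To confirm the change, you can check the suspend field of the CronJob. This is a simple verification sketch; output of false indicates that the CronJob is enabled.

kubectl get cronjob sas-inventory-collector -o jsonpath='{.spec.suspend}'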
A schedule can be set in the CronJob Kubernetes resource by using the kubectl patch command. For example, to run once a day at midnight, run this command:
kubectl patch cronjob sas-inventory-collector -p '{"spec":{"schedule": "0 0 * * *"}}'
Scheduling the CronJob in the cluster is permitted for single-tenant environments.
Multi-tenant environments should run CronJobs outside the cluster on a machine where the admin can run kubectl commands. This approach allows multi-tenant Jobs to run independently and simultaneously. Here is an example that runs the provider tenant at midnight and the acme tenant five minutes later:
Add a crontab to a server with access to kubectl and the cluster namespace
$ crontab -e
Crontab entries
0 0 * * * /PATH_TO/inventory-collector.sh provider
5 0 * * * /PATH_TO/inventory-collector.sh acme
This sample script can be called by a crontab entry on a server running outside the cluster.
#!/bin/bash
TENANT=$1
export KUBECONFIG=/PATH_TO/kubeconfig
# unset the COMPARISON environment variable if set
/PATH_TO/kubectl set env cronjob/sas-inventory-collector COMPARISON-
# set the TENANT environment variable
/PATH_TO/kubectl set env cronjob/sas-inventory-collector TENANT=$TENANT
# delete any previously run job
/PATH_TO/kubectl delete job sas-inventory-collector-$TENANT
# run the job
/PATH_TO/kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-collector-$TENANT
The inventory comparison job compares two inventory tables. The resulting table is used by the SAS Viya Inventory Comparison report.
kubectl set env cronjob/sas-inventory-collector COMPARISON=true
kubectl delete job sas-inventory-comparison-job
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-
Here is an example:
kubectl set env cronjob/sas-inventory-collector TENANT=provider
kubectl set env cronjob/sas-inventory-collector COMPARISON=true
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-
kubectl set env cronjob/sas-inventory-collector TENANT=<tenant-name>
kubectl set env cronjob/sas-inventory-collector COMPARISON=true
kubectl delete job sas-inventory-comparison-job
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-
Inventory collection, or scanning as it is referred to in SAS Viya 3, is typically run before a migration. The first time a collection and then a comparison are run following a migration, the comparison is between pre-migration and post-migration artifacts. Subsequent collection and comparison runs compare post-migration to post-migration artifacts. To re-run a pre-migration to post-migration comparison, set the COMPARISON="migration" environment variable.
kubectl set env cronjob/sas-inventory-collector TENANT=<tenant-name>
kubectl set env cronjob/sas-inventory-collector COMPARISON=migration
kubectl delete job sas-inventory-comparison-job
kubectl create job --from=cronjob/sas-inventory-collector sas-inventory-comparison-job
kubectl set env cronjob/sas-inventory-collector COMPARISON-
The Model Publish service uses the sas-model-publish-git dedicated PersistentVolume Claim (PVC) as a workspace. When a user publishes a model to a Git destination, sas-model-publish creates a local repository under /models/git/publish/, which is then mounted from the sas-model-publish-git PVC in the start-up process.
In order for the Model Publish service to successfully publish a model to a Git
destination, the user must prepare and adjust the following file, which is
located in the $deploy/sas-bases/examples/sas-model-publish/git
directory:
storage.yaml - defines a PVC for the Git local repository.
The following file is located in the
$deploy/sas-bases/overlays/sas-model-publish/git
directory and does not need
to be modified:
git-transformer.yaml - adds the sas-model-publish-git PVC to the sas-model-publish deployment object.
Copy the files in the $deploy/sas-bases/examples/sas-model-publish/git
directory to the $deploy/site-config/sas-model-publish/git
directory.
Create the target directory, if it does not already exist.
Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /models/git/ mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directory.
Modify the parameters in storage-git.yaml. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Here is an example:
resources:
- site-config/sas-model-publish/git
transformers:
- sas-bases/overlays/sas-model-publish/git/git-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Run the following command to verify whether the overlays have been applied:
kubectl describe pod <sas-model-publish-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts: /models/git/publish
Kaniko is a tool to build container images from a Dockerfile without depending on a Docker daemon. The Kaniko container can load the build context from cloud storage or a local directory, and then push the built image to the container registry for a specific destination.
The Model Publish service uses the sas-model-publish-kaniko dedicated PersistentVolume Claim (PVC) as a workspace, which is shared with the Kaniko container. When a user publishes a model to a container destination, sas-model-publish creates a temporary folder (publish-xxxxxxxx) on the volume (/models/kaniko/), which is then mounted from the sas-model-publish-kaniko PVC in the start-up process.
The publishing process generates the following content:
Note: The “xxxxxxxx” part of the folder names is a system-generated alphanumeric string and is 8 characters in length.
The Model Publish service then loads a pod template from the
sas-model-publish-kaniko-job-config (as defined in podtemplate.yaml) and
dynamically constructs a job specification. The job specification helps mount
the directories in the Kaniko container. The default pod template uses the
official Kaniko image URL gcr.io/kaniko-project/executor:latest
. Users can
replace this image URL in the pod template, if the user wants to host the Kaniko
image in a different container registry or use a Kaniko debug image.
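As a sketch of that kind of change, the image reference in the copied podtemplate.yaml might be edited along these lines. This is an excerpt only; the container name and registry URL shown are assumptions, and the rest of the delivered pod template should be left as is.

# Excerpt only -- keep the remaining fields of podtemplate.yaml as delivered.
template:
  spec:
    containers:
    - name: kaniko   # container name as assumed for this sketch
      # Replace the default gcr.io/kaniko-project/executor:latest reference with an
      # image hosted in your own registry (the URL below is hypothetical).
      image: registry.example.com/kaniko-project/executor:v1.23.2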
The Kaniko container is started after a batch job is executed. The Model Publish service checks the job status every 30 seconds. The job times out after 30 minutes, if it has not completed.
The Model Publish service deletes the job and the temporary directories after the job has completed successfully, completed with errors, or has timed out.
If you are deploying in a Red Hat OpenShift cluster, use this command to link the service account so that it can run as the root user.
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user anyuid -z sas-model-publish-kaniko
In order for the Model Publish service to successfully publish a model to a
container destination, the user must prepare and adjust the following files that
are located in the $deploy/sas-bases/examples/sas-model-publish/kaniko
directory:
storage.yaml - defines a PVC for the Kaniko workspace.
podtemplate.yaml - defines a pod template for the batch job that launches the Kaniko container.
sa.yaml - defines the service account for running the Kaniko job.
The following file is located in the
$deploy/sas-bases/overlays/sas-model-publish/kaniko
directory and does not
need to be modified:
kaniko-transformer.yaml - adds the sas-model-publish-kaniko PVC to the sas-model-publish deployment object.
Copy the files in the $deploy/sas-bases/examples/sas-model-publish/kaniko
directory to the $deploy/site-config/sas-model-publish/kaniko
directory.
Create the destination directory, if it does not already exist.
Note: If the destination directory already exists, verify that the overlay has been applied. If the output contains the /models/kaniko/ mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directory.
Modify the parameters in the podtemplate.yaml file, if you need to implement customized requirements, such as the location of Kaniko image.
Modify the parameters in storage.yaml. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Here is an example:
resources:
- site-config/sas-model-publish/kaniko
transformers:
- sas-bases/overlays/sas-model-publish/kaniko/kaniko-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Run the following command to verify whether the overlays have been applied:
kubectl describe pod <sas-model-publish-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts: /models/kaniko
BuildKit is a tool that is used to build container images from a Dockerfile without depending on a Docker daemon. BuildKit can build a container image in Kubernetes, and then push the built image to the container registry for a specific destination.
The Decisions Runtime Builder service uses the sas-decisions-runtime-builder-buildkit dedicated PersistentVolume Claim (PVC) as a cache. It caches builder images and layers beyond the life cycle of single job execution.
An Update request to the Decisions Runtime Builder service starts a Kubernetes job that builds a new image. The service checks the job status every 30 seconds. If a job is not complete after 30 minutes, it times out.
The Decisions Runtime Builder service deletes the job and the temporary directories after the job has completed successfully, completed with errors, or has timed out.
Copy the files in the $deploy/sas-bases/examples/sas-decisions-runtime-builder/buildkit
directory to the $deploy/site-config/sas-decisions-runtime-builder/buildkit
directory. Create the destination directory, if it does not already exist.
Note: Verify that the overlay has been applied. If the Buildkit daemon deployment already exists, you do not need to take any further action, unless you want to change the overlay parameters for the mounted directory.
Modify the parameters in the files storage.yaml and publish-storage.yaml in the directory $deploy/site-config/sas-decisions-runtime-builder/buildkit. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.
(OpenShift deployments only) Uncomment and update the {{ FSGROUP_VALUE }} token in the $deploy/site-config/sas-decisions-runtime-builder/buildkit/publish-job-template.yaml
and $deploy/site-config/sas-decisions-runtime-builder/buildkit/update-job-template.yaml
files to match the desired numerical group value.
Note: For OpenShift, you can obtain the allocated GID and value by using this command:
kubectl describe namespace <name-of-namespace>
Use the minimum value of the openshift.io/sa.scc.supplemental-groups
annotation. For example, if the output is as follows, you would use 1000700000
.
Name: sas-1
Labels: <none>
Annotations: ...
openshift.io/sa.scc.supplemental-groups: 1000700000/10000
...
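If you want to pull that minimum value out of the namespace description directly, a small sketch using grep (any equivalent filter works):

kubectl describe namespace <name-of-namespace> | grep supplemental-groups
# Example output: openshift.io/sa.scc.supplemental-groups: 1000700000/10000
# Use the value before the slash (1000700000) for {{ FSGROUP_VALUE }}.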
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Here is an example:
resources:
- site-config/sas-decisions-runtime-builder/buildkit
transformers:
- sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

(OpenShift deployments only) Apply a security context constraint (SCC):
kubectl apply -f $deploy/sas-bases/overlays/sas-decisions-runtime-builder/buildkit/service-account/buildkit-scc.yaml
Bind the SCC to the service account with the following command, which includes the name of the SCC that you applied:
oc -n <name-of-namespace> adm policy add-scc-to-user sas-buildkit -z sas-buildkit
The sas-buildkitd deployment typically starts without any issues. However, for some cluster deployments, you might receive the following error:
/proc/sys/user/max_user_namespaces needs to be set to nonzero
If this occurs, use the buildkit-userns-transformer to configure user namespace support. This is done with an init container that is running in privileged mode during start-up.
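To check whether a node is affected before applying the transformer, you can inspect the current value on the node itself (a sketch; requires shell access to the node):

# A value of 0 reproduces the error above; a nonzero value means user namespaces are allowed.
cat /proc/sys/user/max_user_namespaces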
Add ‘sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-userns-transformer.yaml’ to the transformers block after the ‘buildkit-transformer.yaml’ entry. Here is an example:
transformers:
- sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml
- sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-userns-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
If the registry contains SAS Viya platform deployment images or the destination registry is using self-signed certificates, those certificates should be added to the buildkit deployment. If they are not, the image build generates a ‘certificate signed by unknown authority’ error.
If you receive that error, complete the following steps to add self-signed certificates to the Buildkit deployment.
Copy the files in the $deploy/sas-bases/examples/sas-decisions-runtime-builder/buildkit/cert
directory to the $deploy/site-config/sas-decisions-runtime-builder/buildkit/certs
directory. Create the destination directory, if it does not already exist.
Add the self-signed certificates that you want to be trusted to the $deploy/site-config/sas-decisions-runtime-builder/buildkit/certs
directory.
In that directory, edit the kustomization.yaml file to add the certificate files to the files field in the secretGenerator section.
resources: []
secretGenerator:
- name: sas-buildkit-registry-secrets
files:
- registry1.pem
- registry2.pem
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Here is an example:
resources:
- site-config/sas-decisions-runtime-builder/buildkit
- site-config/sas-decisions-runtime-builder/buildkit/certs
transformers:
- sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-transformer.yaml
- sas-bases/overlays/sas-decisions-runtime-builder/buildkit/buildkit-certificate-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Run the following command to verify whether the Buildkit overlay has been applied. It should show at least one pod starting with the prefix ‘buildkitd’.
kubectl -n <name-of-namespace> get pods | grep buildkitd
Note: SAS plans to discontinue the use of Kaniko in the future.
Kaniko is a tool that is used to build container images from a Dockerfile without depending on a Docker daemon. Kaniko can build a container image in Kubernetes and then push the built image to the container registry for a specific destination.
The Decisions Runtime Builder service then loads a pod template from the sas-decisions-runtime-builder-kaniko-job-config (as defined in updateJobtemplate.yaml) and dynamically constructs a job specification. The job specification helps mount the directories in the Kaniko container.
The Kaniko container is started after a batch job is executed. The Decisions Runtime Builder service checks the job status every 30 seconds. The job times out after 30 minutes, if it has not completed.
If you are deploying in a Red Hat OpenShift cluster, use the following command to link the service account to run as the root user.
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user anyuid -z sas-decisions-runtime-builder-kaniko
Copy the files in the $deploy/sas-bases/examples/sas-decisions-runtime-builder/kaniko
directory to the $deploy/site-config/sas-decisions-runtime-builder/kaniko
directory. Create the destination directory, if it does not already exist.
Modify the parameters in the $deploy/site-config/sas-decisions-runtime-builder/kaniko/storage.yaml file. For more information about PersistentVolume Claims (PVCs), see Persistent Volume Claims on Kubernetes.
Make the following changes to the base kustomization.yaml file in the $deploy directory.
resources:
- site-config/sas-decisions-runtime-builder/kaniko
transformers:
- sas-bases/overlays/sas-decisions-runtime-builder/kaniko/kaniko-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Run the following command to verify whether the overlays have been applied. If the overlay is applied, it shows a podTemplate named ‘sas-decisions-runtime-builder-kaniko-job-config’.
kubectl get podTemplates | grep sas-decisions-runtime-builder-kaniko-job-config
Note: This guide applies only to SAS Viya platform deployments in a Red Hat OpenShift environment.
In OpenShift, a security context constraint (SCC) is required for publishing objects
(models or decisions), as well as for updating and validating published objects.
These actions create jobs within the cluster that
must run as user 1001 (sas), must have permission to mount volumes containing
container registry credentials, and must have access to existing image pull secrets.
This README explains how to apply the sas-model-publish SCC to the appropriate service accounts:

- Apply the SCC to the sas-model-publish-buildkit and sas-decisions-runtime-builder-buildkit service accounts for publishing and updating.
- Apply the SCC to the default service account only if you expect validation to be run within that specific OpenShift cluster and namespace. For example, if the SAS Viya platform is deployed on AWS but the validation jobs are executed in an OpenShift cluster, the sas-model-publish SCC must be applied to the default service account in the namespace where validation runs. This ensures those jobs have the necessary permissions in that environment. If validation is not executed in OpenShift, applying the SCC to the default service account is not required.

The /$deploy/sas-bases/overlays/sas-model-publish/service-account directory contains a file to grant the SCC to sas-model-publish and sas-decisions-runtime-builder jobs.
A Kubernetes cluster administrator should add this SCC to their OpenShift cluster prior to deploying the SAS Viya platform. Use the following command:
kubectl apply -f sas-model-publish-scc.yaml
After the SCC has been applied, you must link it to the appropriate service accounts that will use it. Use the following commands:
oc -n {{ NAME-OF-VALIDATION-NAMESPACE }} adm policy add-scc-to-user sas-model-publish -z default
oc -n {{ NAME-OF-VIYA-NAMESPACE }} adm policy add-scc-to-user sas-model-publish -z sas-model-publish-buildkit
oc -n {{ NAME-OF-VIYA-NAMESPACE }} adm policy add-scc-to-user sas-model-publish -z sas-decisions-runtime-builder-buildkit
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.

Run the following command to verify whether the overlay has been applied:
kubectl -n <name-of-namespace> get rolebindings -o wide | grep sas-model-publish
Verify that the sas-model-publish SCC is bound to sas-model-publish-buildkit and sas-decisions-runtime-builder-buildkit service accounts.
OpenSearch is an Apache 2.0 licensed search and analytics suite based on Elasticsearch 7.10.2. The SAS Viya platform provides two options for your search cluster: an internal instance provided by SAS or an external instance you would like the SAS Viya platform to utilize. Before deploying, you must select which of these options you want to use for your SAS Viya platform deployment.
Note: The search cluster must be either internally managed or externally managed. SAS does not support mixing internal and external search clusters in the same deployment. Once deployed, you cannot switch between an internal and external search cluster.
SAS Viya platform support for an internally managed search cluster is provided by a proprietary sas-opendistro
Kubernetes operator.
If you want to use an internal instance of OpenSearch, refer to the README file located at $deploy/sas-bases/overlays/internal-elasticsearch/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_an_internal_opensearch_instance_for_sas_viya.htm
(for HTML format).
If you want to use an external instance of OpenSearch, you should refer to the README file located at $deploy/sas-bases/examples/configure-elasticsearch/external/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_an_external_opensearch_instance.htm
(for HTML format).
Externally managed cloud subscriptions to Elasticsearch and Open Distro for Elasticsearch are not supported.
SAS strongly recommends the use of SSL/TLS to secure data in transit. You should follow the documented best practices provided by OpenSearch and your cloud platform provider for securing access to your external OpenSearch instance using SSL/TLS. Securing your OpenSearch cluster with SSL/TLS entails the use of certificates. In order for the SAS Viya platform to connect directly to a secure OpenSearch cluster, you must provide the OpenSearch cluster’s CA certificate to the SAS Viya platform prior to deployment. Failing to configure the SAS Viya platform to trust the OpenSearch cluster’s CA certificate results in “Connection refused” errors. For instructions on how to provide CA certificates to the SAS Viya platform, see the section labeled “Incorporating Additional CA Certificates into the SAS Viya Platform Deployment” in the README file at $deploy/sas-bases/examples/security/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_network_security_and_encryption_using_sas_security_certificate_framework.htm
(for HTML format).
Note: SAS terminology standards prohibit the use of the term “master.” However, this document refers to the term “master node” to maintain alignment with OpenSearch documentation.
Note: In previous releases, the SAS Viya platform included OpenDistro for Elasticsearch. Many Kubernetes resources keep the name OpenDistro for backwards compatibility.
This README file describes the files used to customize an internally managed instance of OpenSearch using the sas-opendistro operator provided by SAS.
In order to use the internal search cluster instance, you must customize your deployment to point to the required overlay and transformers.
Go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the
resources block of that file, add the following content, including adding
the block if it does not already exist.
resources:
...
- sas-bases/overlays/internal-elasticsearch
...
Go to the base kustomization.yaml file ($deploy/kustomization.yaml
). In the
transformers block of that file, add the following content, including adding
the block if it does not already exist.
transformers:
...
- sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
...
Deploying OpenSearch requires configuration that supports the creation of many memory-mapped areas, which can fail if vm.max_map_count
is set too low.
Several methods are available to configure the sysctl option vm.max_map_count
documented below. Choose a method which is supported for your platform.
| Method | Platforms | Requirements |
|---|---|---|
| Use sas-opendistro-sysctl init container (recommended) | Microsoft Azure Kubernetes Service (AKS) without Microsoft Defender; Amazon Elastic Kubernetes Service (EKS); Google Kubernetes Engine (GKE); Red Hat OpenShift | Privileged containers; allow privilege escalation |
| Use sas-opendistro-sysctl DaemonSet | Microsoft Azure Kubernetes Service (AKS) with Microsoft Defender | Privileged containers; allow privilege escalation; Kubernetes nodes for stateful workloads labeled with workload.sas.com/class as stateful |
| Apply sysctl configuration manually | All platforms | Ability to configure sysctl on stateful Kubernetes nodes |
| Disable mmap support | All platforms | Unable to apply sysctl configuration manually or use privileged containers |
Use sas-opendistro-sysctl init container: If your deployment allows privileged containers, add a reference to sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
to the transformers block of the base kustomization.yaml. The sysctl-transformer.yaml
transformer must be included before the sas-bases/overlays/required/transformers.yaml
transformer. Here is an example:
transformers:
- sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
Use sas-opendistro-sysctl DaemonSet (Microsoft Azure Kubernetes Service with Microsoft Defender only): If your deployment allows privileged containers and you are deploying to an environment secured by Microsoft Defender, add a reference to sas-bases/overlays/internal-elasticsearch/sysctl-daemonset.yaml
to the resources block of the base kustomization file. Here is an example:
resources:
- sas-bases/overlays/internal-elasticsearch/sysctl-daemonset.yaml
Apply sysctl configuration manually: If your deployment does not allow privileged containers, the Kubernetes administrator should set the vm.max_map_count
property to be at least 262144 for stateful workload nodes.
Disable mmap support: If your deployment does not allow privileged containers and you are in an environment where you cannot control the memory map settings, add a reference to sas-bases/overlays/internal-elasticsearch/disable-mmap-transformer.yaml
to the transformers block of the base kustomization.yaml to disable memory mapping instead. The disable-mmap-transformer.yaml
transformer must be included before the sas-bases/overlays/required/transformers.yaml
. Here is an example:
transformers:
- sas-bases/overlays/internal-elasticsearch/disable-mmap-transformer.yaml
- sas-bases/overlays/required/transformers.yaml
Disabling memory mapping is discouraged because doing so negatively impacts performance and may result in out-of-memory exceptions.
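If you choose to apply the sysctl configuration manually instead, a minimal sketch of what a cluster administrator might run on each stateful workload node is shown below. The file name under /etc/sysctl.d is illustrative.

# Apply the setting immediately on the node (requires root).
sysctl -w vm.max_map_count=262144
# Persist the setting across reboots (file name is illustrative).
echo "vm.max_map_count=262144" > /etc/sysctl.d/99-opensearch.conf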
For additional customization options, refer to the following README files:

- Update the storage class used by OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/storage/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_default_storageclass_for_opensearch.htm (for HTML format).
- Configure a default topology for OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/topology/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_default_topology_for_opensearch.htm (for HTML format).
- Configure a run user for OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/run-user/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_run_user_for_opensearch.htm (for HTML format).
- OpenSearch on Red Hat OpenShift: $deploy/sas-bases/examples/configure-elasticsearch/internal/openshift/README.md (for Markdown format) or $deploy/sas-bases/docs/opensearch_on_red_hat_openshift.htm (for HTML format).
- OpenSearch security audit logs: $deploy/sas-bases/examples/configure-elasticsearch/internal/security-audit-logs/README.md (for Markdown format) or $deploy/sas-bases/docs/opensearch_security_audit_logs.htm (for HTML format).
- Configure a temporary directory for JNA in OpenSearch: $deploy/sas-bases/examples/configure-elasticsearch/internal/jna/README.md (for Markdown format) or $deploy/sas-bases/docs/configure_a_temporary_directory_for_jna_in_opensearch.htm (for HTML format).
After you revise the base kustomization.yaml file, continue your SAS Viya platform deployment as documented in SAS Viya Platform: Deployment Guide.
A single cluster is supported with the following topologies:
The operator does not support the following actions:
OpenSearch requires a StorageClass to be configured in the Kubernetes cluster that provides block storage (e.g. virtual disks) or a local file system mount to store the search indices. Remote file systems, such as NFS, should not be used to store the search indices.
By default, the OpenSearch deployment uses the default StorageClass defined in the Kubernetes cluster. If a different StorageClass is required to meet the requirements, this README file describes how to specify a new StorageClass and configure it to be used by OpenSearch.
Note: The default StorageClass should be set according to the target environment and usage requirements. The transformer can reference an existing or custom StorageClass.
In order to specify a default StorageClass to be used by OpenSearch, you must customize your deployment to include a transformer.
If a new StorageClass must be defined in the target cluster to meet the requirements for OpenSearch, consult the documentation for the target Kubernetes platform for details on available storage options and how to configure a new StorageClass.
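To see which StorageClasses already exist in the cluster (and which one is marked as the default), you can run:

kubectl get storageclass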
Copy the StorageClass transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/storage/storage-class-transformer.yaml
into the $deploy/site-config
directory.
Open the storage-class-transformer.yaml file for editing and replace {{ STORAGE-CLASS }}
with the name of the StorageClass to be used by OpenSearch.
Add the storage-class-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/storage-class-transformer.yaml
For more information, see SAS Viya Platform: Deployment Guide.
This README file describes the files used to specify and modify the topology to be used by the sas-opendistro operator.
Note: The default topology should be set according to the target environment and usage requirements. The transformer can reference an existing or custom topology.
Note: SAS terminology standards prohibit the use of the term “master.” However, this document refers to the term “master node” to maintain alignment with OpenSearch documentation.
The default installation topology consists of one OpenSearch node configured as both a master and a data node. Although this topology is acceptable for initial small scale data imports, configuration, and testing, SAS does not recommend that it be used in a production environment.
The recommended production topology should consist of no fewer than three master nodes and no fewer than three data storage nodes. This topology provides the following benefits:
If you wish to migrate your initial data from the initial setup to the production setup, you must modify the cluster topology in such a manner that no data or configuration is lost.
One way of doing this is to transition your topology through an intermediate state into your final production state. Here is an example:

| Initial State | Intermediate State | Final State |
|---|---|---|
| [Master/Data Node] | [Master/Data Node] | |
| | [Master Node 1] | [Master Node 1] |
| | [Master Node 2] | [Master Node 2] |
| | [Master Node 3] | [Master Node 3] |
| | [Data Node 1] | [Data Node 1] |
| | [Data Node 2] | [Data Node 2] |
| | [Data Node 3] | [Data Node 3] |
This example allows the cluster to copy the data stored on the Master/Data Node across to the data nodes. The migration will have to pause in the intermediate state for a period while the data is spread across the cluster. Depending on the volume of data, this should be completed within a few tens of minutes.
Copy the migrate-topology-step1.yaml
file into your site-config directory.
Edit the example topology to reflect your desired topology:
Remove the following line from the transformers block of the base kustomization file ($deploy/kustomization.yaml
) if it is present.
transformers:
...
- sas-bases/overlays/internal-elasticsearch/ha-transformer.yaml
...
Add the topology reference to the transformers block of the base kustomization.yaml file. Here is an example of a modified base kustomization.yaml file with a reference to the custom topology example:
transformers:
...
- site-config/configure-elasticsearch/internal/topology/migrate-topology-step1.yaml
Perform the commands to update the software. These are the same as the commands to originally deploy the software as outlined in SAS Viya Platform: Deployment Guide: Deployment: Installation: Deploy the Software.
The important difference to note is that as you have now modified the $deploy/kustomization.yaml
file to include your topology changes, the deployment process will not
perform a complete rebuild but will instead adapt the existing system to your new configuration.
Once the new configuration has deployed, wait for the new servers to share out all the data.
Repeat steps 1 through 5 using the migrate-topology-step2.yaml
file. Ensure that you make the same modifications to the step2 file as you made in the step1 file.
The custom topology example should be used to define and customize highly available production OpenSearch deployments. See the example file located at
sas-bases/examples/configure-elasticsearch/internal/topology/custom-topology.yaml
.
The single node topology example should not be used in production. The single node topology is intended to minimize resources in development, demonstration, class, and test deployments. See the example file located at
sas-bases/examples/configure-elasticsearch/internal/topology/single-node-topology.yaml
.
In addition to the general cluster topology, properties such as the heap size and disk size of each individual node set can be adjusted depending on the use case for the OpenSearch cluster, expected index sizes, shard numbers, and/or hardware constraints.
When the volume claim’s storage capacity is not specified in the node spec, the operator creates a PersistentVolumeClaim with a capacity of 128Gi for each node in the OpenSearch cluster by default.
Similarly, when the volume claim’s storage class is not specified in the node spec, the operator creates a PersistentVolumeClaim using either the default StorageClass for that OpenSearch cluster (if specified) or the default storage class for the Kubernetes cluster (see sas-bases/examples/configure-elasticsearch/internal/storage/README.md
for instructions for configuring a default storage class for the OpenSearch cluster).
To define your own volume claim template with your desired storage capacity and the Kubernetes storage class that is associated with the persistent volume, see the example file located at sas-bases/examples/configure-elasticsearch/internal/topology/custom-topology-with-custom-volume-claim.yaml
. Replace {{ STORAGE-CLASS }} with the name of the StorageClass and {{ STORAGE-CAPACITY }} with the desired storage capacity for this volume claim.
The amount of heap size dedicated to each node directly impacts the performance of OpenSearch. If the heap is too small, the garbage collection will cause frequent pauses, resulting in reduced throughput and regular small latency spikes. If the heap is too large, on the other hand, full-heap garbage collection may cause infrequent but long latency spikes.
Generally, the heap size value should be up to half of the available physical RAM with a maximum of 32GB.
The maximum heap size also affects the maximum number of shards that can be safely stored on the node without suffering from oversharding and circuit breaker events. As a rule of thumb you should aim for 25 shards or fewer per GB of heap memory with each shard not exceeding 50 GB.
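For example, under this guideline a node with a 16 GB heap should hold no more than roughly 16 x 25 = 400 shards.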
See sas-bases/examples/configure-elasticsearch/internal/topology/custom-topology-with-custom-heap-size.yaml
for an example of how to configure the amount of heap memory dedicated to OpenSearch nodes. Replace {{ HEAP-SIZE }} with the appropriate heap size for your needs.
Copy the example topology file into your site-config directory.
Edit the example topology as directed by comments in the file.
Remove the following line from the transformers block of the base kustomization file ($deploy/kustomization.yaml
) if it is present.
transformers:
...
- sas-bases/overlays/internal-elasticsearch/ha-transformer.yaml
...
Add the topology reference to the transformers block of the base kustomization.yaml file. Here is an example of a modified base kustomization.yaml file with a reference to the custom topology example:
transformers:
...
- site-config/configure-elasticsearch/internal/topology/custom-topology.yaml
For more information, see SAS Viya Platform: Deployment Guide.
In a default deployment of the SAS Viya platform, the OpenSearch JVM process runs under the fixed user ID (UID) of 1000. A fixed UID is required so that files that are written to storage for the search indices can be successfully read after subsequent restarts.
If you do not want OpenSearch to run with UID 1000, you can specify a different UID for the process. You can take the following steps to apply a transformer that changes the UID of the OpenSearch processes to another value.
Note: The decision to change the UID of the OpenSearch processes must be made at the time of the initial deployment. The UID cannot be changed after the SAS Viya platform has been deployed.
To configure OpenSearch to run as a different UID:
Copy the Run User transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/run-user/run-user-transformer.yaml
into the $deploy/site-config
directory.
Open the run-user-transformer.yaml file for editing. Replace {{ USER-ID }}
with the UID under which the OpenSearch processes should run.
Add the run-user-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/run-user-transformer.yaml
For more information, see SAS Viya Platform: Deployment Guide.
Before deploying your SAS Viya platform software, perform the following steps in order to run OpenSearch on OpenShift in that deployment.
An example Security Context Constraints (SCC) file is available at $deploy/sas-bases/examples/configure-elasticsearch/internal/openshift/sas-opendistro-scc.yaml
.
A Kubernetes cluster administrator must add these Security Context Constraints to their OpenShift cluster before deploying the SAS Viya platform.
Consult Common Customizations for information about the additional transformers, which might require changes to the Security Context Constraints.
If modifications are required, place a copy of the sas-opendistro-scc.yaml
file in the site-config directory and apply the changes to the copy.
If you are planning to use run-user-transformer.yaml
to specify a custom UID for the OpenSearch processes, update the uid
property of the runAsUser
option to match the custom UID. For example, if UID 2000 will be configured in the run-user-transformer.yaml
, update the file sas-opendistro-scc.yaml
as follows.
runAsUser:
type: MustRunAs
uid: 2000
If your deployment will use sysctl-transformer.yaml
to apply the necessary sysctl parameters, the sas-opendistro-scc.yaml
file must be modified.
Otherwise, you should skip these steps.
Set the allowPrivilegeEscalation and allowPrivilegedContainer options to true
. This allows a privileged init container to execute and apply the necessary sysctl parameters.
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
Update the runAsUser option to RunAsAny
, using the following example as your guide. This allows the privileged init container to run as a different user to apply the necessary sysctl parameters.
runAsUser:
type: RunAsAny
As a Kubernetes cluster administrator of the OpenShift cluster, use one of the following commands to apply the Security Context Constraints.
kubectl apply -f sas-opendistro-scc.yaml
oc apply -f sas-opendistro-scc.yaml
The sas-opendistro SecurityContextConstraints must be added to the sas-opendistro ServiceAccount within each target deployment namespace to grant the necessary privileges.
Use the following command to configure the ServiceAccount. Replace the entire variable {{ NAME-OF-NAMESPACE }}
, including the braces,
with the Kubernetes namespace used for the SAS Viya platform.
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-opendistro -z sas-opendistro
An example transformer that removes the seccomp property and annotation from the OpenSearch pods through the OpenDistroCluster resource is available at $deploy/sas-bases/overlays/internal-elasticsearch/remove-seccomp-transformer.yaml
.
To include this transformer, add the following to the base kustomization.yaml file ($deploy/kustomization.yaml
).
```yaml
transformers:
...
- sas-bases/overlays/internal-elasticsearch/remove-seccomp-transformer.yaml
```
Security audit logs track a range of OpenSearch cluster events. The OpenSearch audit logs can provide beneficial information for compliance purposes or assist in the aftermath of a security breach.
The audit logs are written to audit indices in the OpenSearch cluster. Audit indices can build up over time and use valuable resources. By default, an Index State Management (ISM) policy named ‘viya_delete_old_security_audit_logs’ is applied by the operator which deletes security audit log indices after seven days with an ISM priority of 50. OpenSearch enables ISM history logs, which are also stored to new indices. By default, ISM history retention is seven days.
The ISM policy can be disabled or configured to retain OpenSearch audit log indices for a specified length of time.
If you have already manually created an ISM policy for OpenSearch audit logs, the policy with the higher priority value will take precedence.
| Configurable Parameter | Description | Default |
|---|---|---|
| enableIndexCleanup | Apply the ISM policy to remove OpenSearch security audit log indices after the length of time specified in indexRetentionPeriod. If you want to retain the indices indefinitely, set to "false". Note: In order to prevent performance issues, SAS recommends that you change the indexRetentionPeriod to a higher period rather than disabling index cleanup. | true |
| indexRetentionPeriod | Period of time an OpenSearch audit log is retained for if the ISM policy is applied. Supported units are d (days), h (hours), m (minutes), s (seconds), ms (milliseconds), and micros (microseconds). | 7d |
| ismPriority | A priority to disambiguate when multiple policies match an index name. OpenSearch takes the settings from the template with the highest priority and applies it to the index. | 50 |
| enableISMPolicyHistory | Additional indices are also created to log ISM history data. Specifies whether ISM audit history is enabled or not. | true |
| ismLogRetentionPeriod | Period of time ISM history indices are kept if they are enabled. Supported units are d (days), h (hours), m (minutes), s (seconds), ms (milliseconds), and micros (microseconds). | 7d |
Copy the audit log retention transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/security-audit-logs/audit-log-retention-transformer.yaml
into the $deploy/site-config
directory. Adjust the value for each parameter listed above that you would like to change.
Add the audit-log-retention-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/audit-log-retention-transformer.yaml
Note: The ISM policy values can be adjusted and reconfigured after the initial deployment.
OpenSearch security audit logging can be disabled completely.
Copy the disable security audit transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/security-audit-logs/disable-security-audit-transformer.yaml
into the $deploy/site-config
directory.
Add the disable-security-audit-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/disable-security-audit-transformer.yaml
For more information on OpenSearch audit logs or Index State Management (ISM) policies, see the OpenSearch Documentation.
This README file describes the files used to configure the SAS Viya platform deployment to use an externally managed instance of OpenSearch.
Before deploying the SAS Viya platform, make sure you have the following prerequisites:
An external instance of OpenSearch 2.5
The following OpenSearch plug-ins:
analysis-icu
analysis-kuromoji
analysis-nori
analysis-phonetic
analysis-smartcn
analysis-stempel
mapper-murmur3
If you are deploying SAS Visual Investigator, the external instance of OpenSearch requires a specific configuration of OpenSearch and its security plugin. For more information, see the README file at $deploy/sas-bases/examples/configure-elasticsearch/external/config/README.md
(for Markdown format) or at $deploy/sas-bases/docs/external_opensearch_configuration_requirements_for_sas_visual_investigator.htm
(for HTML format).
In order to use an external OpenSearch instance, you must customize your deployment to point to the required resources and transformers.
If you are deploying in Front-door or Full-stack TLS modes, copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/client-config-tls.yaml
into your $deploy/site-config/external-opensearch/
directory. Create the $deploy/site-config/external-opensearch/
directory if it does not already exist.
If you are deploying in No TLS mode, copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/client-config-no-tls.yaml
into your $deploy/site-config/external-opensearch/
directory. Create the $deploy/site-config/external-opensearch/
directory if it does not already exist.
Adjust the values in your copied file following the in-line comments.
Copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/secret.yaml
into your $deploy/site-config/external-opensearch/
directory. Adjust the values in your copied file following the in-line comments.
Copy the file $deploy/sas-bases/examples/configure-elasticsearch/external/external-opensearch-transformer.yaml
into your $deploy/site-config/external-opensearch/
directory.
Go to the base kustomization file ($deploy/kustomization.yaml
). In the transformers block of that file, add the following content, including adding the block if it doesn’t already exist:
transformers:
- site-config/external-opensearch/external-opensearch-transformer.yaml
If you are deploying in Full-stack TLS or Front-door TLS mode, add the following content in the resources block of the base kustomization file. Add the resources block if it does not already exist.
resources:
...
- site-config/external-opensearch/client-config-tls.yaml
- site-config/external-opensearch/secret.yaml
...
If you are deploying in Front-door TLS mode and the external instance of OpenSearch is not in the same cluster, add the following content in the resources block of the base kustomization file. Add the resources block if it does not already exist.
resources:
...
- site-config/external-opensearch/client-config-tls.yaml
- site-config/external-opensearch/secret.yaml
...
If you are deploying in Front-door TLS mode and the external instance of OpenSearch is in the same cluster, add the following content in the resources block of the base kustomization file. Add the resources block if it does not already exist.
resources:
...
- site-config/external-opensearch/client-config-no-tls.yaml
- site-config/external-opensearch/secret.yaml
...
If you are not using TLS, add the following content in the resources block of the base kustomization file, including adding the block if it doesn’t already exist.
resources:
...
- site-config/external-opensearch/client-config-no-tls.yaml
- site-config/external-opensearch/secret.yaml
...
To ensure that index creation functions correctly within the SAS Viya platform, make sure that the action section inside the config/opensearch.yml file has auto_create_index set to -sand__*,-viya_catalog__*,-cirrus__*,-viya_cirrus__*,+*
.
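As a sketch, the corresponding entry in config/opensearch.yml would look roughly like this (all other settings omitted):

action:
  auto_create_index: "-sand__*,-viya_catalog__*,-cirrus__*,-viya_cirrus__*,+*"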
This README file describes OpenSearch’s configuration requirements for SAS Visual Investigator.
Note: If your deployment does not include SAS Visual Investigator, this README contains no information that pertains to you.
In the action section inside the config/opensearch.yml file, the destructive_requires_name setting should be set to false.
In the config.dynamic section inside the config/opensearch-security/config.yml file, the do_not_fail_on_forbidden setting should be set to true.
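A minimal sketch of how those two settings might appear in the respective files (all other content omitted):

# config/opensearch.yml
action:
  destructive_requires_name: false

# config/opensearch-security/config.yml
config:
  dynamic:
    do_not_fail_on_forbidden: true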
In the config.dynamic.authc section inside the config/opensearch-security/config.yml file, the following four authentication domains must be defined in this exact order:
Basic authentication with challenge set to false.
OpenID authentication using user_name as subject key.
Configure the openid_connect_url to point to SAS Logon’s OpenID endpoint.
Configure the openid_connect_idp.pemtrustedcas_filepath to point to the certificates needed to connect to SAS Logon.
OpenID authentication using client_id as subject key.
Configure the openid_connect_url to point to SAS Logon’s OpenID endpoint.
Configure the openid_connect_idp.pemtrustedcas_filepath to point to the certificates needed to connect to SAS Logon.
Basic authentication with challenge set to true.
For a security config example, see $deploy/sas-bases/examples/configure-elasticsearch/external/config/config.yaml
.
By default, OpenSearch creates its temporary directory within /tmp using an emptyDir volume mount. However, some hardened installations mount /tmp on emptyDir volumes with the noexec
option, preventing JNA and libffi from functioning correctly. This can cause startup failures with exceptions like java.lang.UnsatisfiedLinkError
or messages indicating issues with mapping segments or allocating closures.
In order to allow JNA loading without relaxing filesystem restrictions, OpenSearch can be configured to use a memory-backed temporary directory.
To configure OpenSearch to use a memory-backed temporary directory:
Copy the JNA Temporary Directory transformer from $deploy/sas-bases/examples/configure-elasticsearch/internal/jna/jna-tmp-dir-transformer.yaml
into the $deploy/site-config
directory.
Add the jna-tmp-dir-transformer.yaml file to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/jna-tmp-dir-transformer.yaml
For more information, see SAS Viya Platform: Deployment Guide.
Process Orchestration is enabled in some of the Risk solutions. As part of this enablement, you can view, monitor, and manage job flow executions.
Process Orchestration uses Apache Airflow.
Apache Airflow requires a dedicated PostgreSQL database. Here are the potential locations for the Airflow database:
An external PostgreSQL server with the SAS Infrastructure Data Server on it.
An external PostgreSQL server with the SAS Common Data Store on it.
A separate external PostgreSQL server with no other SAS Viya data on it.
An internal PostgreSQL server with the SAS Infrastructure Data Server on it.
Note: SAS recommends that the Airflow database be hosted on the PostgreSQL server that hosts the SAS Infrastructure Data Server.
If you choose to host the Apache Airflow database on an external instance of PostgreSQL, when you create the Apache Airflow database, you must also create a special user (such as airflow_user). This is done for security reasons so that the user has access to the Apache Airflow database only.
If you choose to host the Apache Airflow database on the internal instance of PostgreSQL that also hosts the SAS Infrastructure Data Server, then the Apache Airflow database can be automatically created on that instance, along with a secure PostgreSQL user.
For details about the SAS Infrastructure Data Server or SAS Common Data Store, see PostgreSQL Server Requirements in System Requirements for the SAS Viya Platform.
The Apache Airflow database can be hosted on either an external instance of PostgreSQL or an internal instance. Use the section below that corresponds to the type of instance of PostgreSQL that you use.
If your external instance of PostgreSQL already exists, skip to step 2. Otherwise, use the documentation for your PostgreSQL provider to create an external instance of PostgreSQL. This instance must meet the SAS Viya platform system requirements. See PostgreSQL Server Requirements in System Requirements for the SAS Viya Platform for these requirements.
When the external PostgreSQL instance exists, create the Airflow user name and database. See the Apache Airflow documentation.
For reference, here are the necessary commands:
CREATE DATABASE airflow_db;
CREATE USER airflow_user WITH PASSWORD 'airflow_password';
GRANT ALL PRIVILEGES ON DATABASE airflow_db TO airflow_user;
-- PostgreSQL 15 requires additional privileges.
-- Connect to the airflow_db database (for example, with \c airflow_db in psql), then run:
GRANT ALL ON SCHEMA public TO airflow_user;
Where airflow_db is the name of the Airflow database, airflow_user is the Airflow database user, and airflow_password is the password for that user.
TIP: Be sure to enclose the password in single quotation marks.
Configure the PostgreSQL database for use by Apache Airflow. To do so, create the sas-airflow-metadata Secret to specify the location of the database:
Copy the file $deploy/sas-bases/examples/sas-airflow/metadata/metadata.env
into the $deploy/site-config/sas-airflow/metadata
directory.
Issue the following command to make the file writable:
chmod +w $deploy/site-config/sas-airflow/metadata/metadata.env
Edit the file $deploy/site-config/sas-airflow/metadata/metadata.env
.
Replace {{ METADATA-URL }}
with the full PostgreSQL connection URI of the database to be used by Apache Airflow. Follow the example given in the comments of the metadata.env file, being sure to replace the airflow_user, airflow_password, airflow_db_host, airflow_db, and sslmode with the appropriate values. (An illustrative URI is shown after these steps.)
Edit the base kustomization file ($deploy/kustomization.yaml
).
Locate the components block in the file. If the block does not exist, add it. Then, add the following line:
components:
- sas-bases/components/sas-airflow/external-airflow
Locate the secretGenerator block in the file. If the block does not exist, add it. Then, add the following content:
secretGenerator:
- name: sas-airflow-metadata
envs:
- site-config/sas-airflow/metadata/metadata.env
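For illustration, using the placeholder names from the commands above, a {{ METADATA-URL }} value generally takes this form. The comments in metadata.env remain the authoritative reference for the exact format expected by your deployment.

# Illustrative only -- substitute your own user, password, host, database, and sslmode.
postgresql://airflow_user:airflow_password@airflow_db_host:5432/airflow_db?sslmode=require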
Configure the internal PostgreSQL database for use by Apache Airflow.
Edit the base kustomization file ($deploy/kustomization.yaml
).
Locate the following line in the components block in the file.
components:
- sas-bases/components/crunchydata/internal-platform-postgres
Add two lines, using the example that follows. The two new lines must immediately follow the - sas-bases/components/crunchydata/internal-platform-postgres
line.
components:
- sas-bases/components/crunchydata/internal-platform-postgres
- sas-bases/components/crunchydata/internal-platform-airflow
- sas-bases/components/sas-airflow/internal-airflow
The Process Orchestration feature of the SAS Viya platform uses Apache Airflow which uses an instance of Redis. This README file describes how to modify the persistent storage allocation and class used by Airflow Redis.
Copy the files in the $deploy/sas-bases/examples/sas-airflow/sas-airflow-redis
directory to the $deploy/site-config/sas-airflow/sas-airflow-redis
directory.
Create the destination directory if it does not already exist.
Edit the sas-airflow-redis-modify-storage.yaml file to replace the variables with actual values. Do not use quotes in the replacement.
Replace {{ STORAGE-SIZE }} with the desired size. The default is 1Gi. Also replace {{ STORAGE-CLASS }} with the desired storage class. The default is the default storage class in Kubernetes. Replace the entire variable string, including the braces, with the value you want to use.
After you have edited the file, add a reference to it in the transformers block
of the base kustomization.yaml file ($deploy/kustomization.yaml
):
transformers:
...
- site-config/sas-airflow/sas-airflow-redis/sas-airflow-redis-modify-storage.yaml
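After the deployment is applied, one way to confirm that the new size and storage class were picked up is to list the Airflow Redis persistent volume claims (a sketch; the exact PVC names depend on your deployment):
kubectl get pvc -n <name-of-namespace> | grep redis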
Users of SAS Risk products can make use of additional features when an
administrator enables Python integration with the SAS Viya platform. The SAS
Process Orchestration framework provides a set of features for SAS Risk
solutions. Some of these features use PROC PYTHON
in SAS code that runs in
Process Orchestration flows.
SAS Process Orchestration can use a customer-prepared environment consisting of a Python installation and any required packages. Some configuration is required.
The requirements to install and configure Python for the SAS Viya platform are described in the official documentation for open-source language integration, SAS Viya Platform Operations: Integration with External Languages.
SAS recommends that you use the SAS Configurator for Open Source tool to configure the integration with Python. SAS Configurator for Open Source partially automates the download, installation, and ongoing management of Python from source.
SAS has provided the YAML files in the $deploy/sas-bases/examples/sas-airflow/python
directory to assist you in setting up the Python integration for SAS Process
Orchestration. For a full set of instructions for using these files to configure
the integration, see Enabling Python Integration with SAS Process Orchestration.
Risk Reporting Framework Core Service supports the SAS Integrated Regulatory Reporting and SAS Insurance Capital Management solutions with XBRL generation, validation execution, and filing instance template UI services. This README file describes the settings available for deploying Risk Reporting Framework Core Service. The example files described in this README file are located at ‘$deploy/sas-bases/examples/sas-risk-rrf-core/configure’.
Based on the following descriptions of available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.
The default values and maximum values for CPU requests and CPU limits can be specified in an rrf pod template. The risk-rrf-core-cpu-requests-limits.yaml file allows you to change these default and maximum values for the CPU resource. To update the defaults, replace the {{ DEFAULT-CPU-REQUEST }}, {{ MAX-CPU-REQUEST }}, {{ DEFAULT-CPU-LIMIT }}, and {{ MAX-CPU-LIMIT }} variables with the value you want to use. Here is an example:
patch: |-
- op: add
path: /metadata/annotations/launcher.sas.com~1default-cpu-request
value: 50m
- op: add
path: /metadata/annotations/launcher.sas.com~1max-cpu-request
value: 100m
- op: add
path: /metadata/annotations/launcher.sas.com~1default-cpu-limit
value: "2"
- op: add
path: /metadata/annotations/launcher.sas.com~1max-cpu-limit
value: "2"
Note: For details on the value syntax used above, see “Manage Requests and Limits for CPU and Memory” section, located at https://documentation.sas.com/?cdcId=itopscdc&cdcVersion=default&docsetId=itopssrv&docsetTarget=p0wvl5nf1lvyzfn16pqdgf9tybuo.htm.
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
transformers:
...
- site-config/sas-risk-rrf-core/configure/risk-rrf-core-cpu-requests-limits.yaml
Note: The current example PatchTransformer targets only RRF PodTemplate used by Risk Reporting Framework Core Service.
The default values and maximum values for memory requests and memory limits can be specified in an rrf pod template. The risk-rrf-core-memory-requests-limits.yaml file allows you to change these default and maximum values for the memory resource. To update the defaults, replace the {{ DEFAULT-MEMORY-REQUEST }}, {{ MAX-MEMORY-REQUEST }}, {{ DEFAULT-MEMORY-LIMIT }}, and {{ MAX-MEMORY-LIMIT }} variables with the value you want to use. Here is an example:
patch: |-
- op: add
path: /metadata/annotations/launcher.sas.com~1default-memory-request
value: 300M
- op: add
path: /metadata/annotations/launcher.sas.com~1max-memory-request
value: 2Gi
- op: add
path: /metadata/annotations/launcher.sas.com~1default-memory-limit
value: 500M
- op: add
path: /metadata/annotations/launcher.sas.com~1max-memory-limit
value: 2Gi
Note: For details on the value syntax used above, see “Manage Requests and Limits for CPU and Memory” section, located at https://documentation.sas.com/?cdcId=itopscdc&cdcVersion=default&docsetId=itopssrv&docsetTarget=p0wvl5nf1lvyzfn16pqdgf9tybuo.htm.
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
transformers:
...
- site-config/sas-risk-rrf-core/configure/risk-rrf-core-memory-requests-limits.yaml
Note: The current example PatchTransformer targets only RRF PodTemplate used by Risk Reporting Framework Core Service.
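To spot-check that the CPU and memory annotations were applied after the manifests are deployed, you can inspect the pod templates in the namespace (a sketch; resource names vary by deployment):
kubectl get podtemplates -n <name-of-namespace> -o yaml | grep launcher.sas.com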
When SAS Allowance for Credit Loss is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Allowance for Credit Loss solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md
(Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Allowance for Credit Loss: Administrator’s Guide.
Complete steps 1-4 described in the Risk Cirrus Core README.
Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env
configuration file. Because SAS Allowance for Credit Loss uses workflow service tasks, a default service account must be configured for the Risk Cirrus Objects workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable
to “Y” and assign the user ID to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT
variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.
If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.
If you have a $deploy/site-config/sas-risk-cirrus-acl/resources
directory, take note of the values in your acl_transform.yaml
file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the following line from the transformers
section: - site-config/sas-risk-cirrus-acl/resources/acl_transform.yaml
.
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-acl
to the $deploy/site-config/sas-risk-cirrus-acl
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env
and sas-risk-cirrus-acl-secret.env
files, not the old acl_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env
and sas-risk-cirrus-acl-secret.env
files, verify that the overlay settings have been correctly applied to the configmap and to the secret. No further actions are required unless you want to change the connection settings to different overrides.
Modify the configuration.env
file (located in the $deploy/site-config/sas-risk-cirrus-acl
directory). Lines with a #
at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the #
at the beginning of the line and replace the placeholder with the desired value as explained in the following section. Specify, if needed, your settings as follows:
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO) |
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES | Replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts: - The create_cas_lib step creates the default ACLReporting CAS library that is used for reporting in SAS Allowance for Credit Loss.- The create_db_auth_domain step creates an ACLDBAuth domain for the riskcirrusacl schema and assigns default permissions.- The create_db_auth_domain_user step creates an ACLUserDBAuth domain for the riskcirrusacl schema and assigns default group permissions.- The import_main_dataloader_files step uploads the Cirrus_ACL_main_loader.xlsx file into the file service under the Products/SAS Allowance for Credit Loss directory.- The import_sample_data_loader_files step uploads the Cirrus_ACL_sample_data_loader.zip file into the file service under the Products/SAS Allowance for Credit Loss directory.- The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.- The install_riskengine_curves_project step loads the sample ACL Curves project into SAS Risk Engine.- The install_sampledata step loads sample load data into the riskcirrusacl database schema library.- The install_scenarios_sampledata step loads the sample scenarios into SAS Risk Factor Manager.- The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, Roles, RolePermissions, and Positions. It also loads sample object instances, like Attribution Templates, Configuration Sets, Configuration Tables, Cycles, Data Definitions, Models, Rule Sets and Scripts, as well as the Link Instances, Object Classifications, and Workflows associated with those objects. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.- The load_workflows step loads and activates the ACL workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.- The localize_va_reports step imports localized SAS-provided reports created in SAS Visual Analytics.- The manage_cas_lib_acl step sets up permissions for the default ACLReporting CAS library. Users in the ACLUsers, ACLAdministrators and SASAdministrators groups have full access to the tables.- The transfer_sampledata_files step stores a copy of all sampledata files loaded into the environment into the file service under the Products/SAS Allowance for Credit Loss directory. 
This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.- The update_db_sampledata_scripts_pg step stores a copy of the install_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the sample data.WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed: - The check_services step checks if the ACL dependent services are up and running.- The check_solution_existence step checks to see if the ACL solution is already running.- The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.- The create_solution_repo step creates the ACL repository.- The check_solution_running step checks to ensure that the ACL solution is running.- The import_solution step imports the solution in the ACL repository.- The load_app_registry step loads the ACL solution into the SAS application registry.- The load_auth_rules step assigns authorization rules for the ACL solution.- The load_group_memberships step assigns members to various ACL groups.- The load_identities step loads the ACL identities.- The load_main_dataloader_objects step loads the Cirrus_ACL_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.- The setup_code_lib_repo step creates the ACL code library directory.- The share_ia_script_with_solution step shares the Risk Cirrus Core individual assessment script with the ACL solution.- The share_objects_with_solution step shares the Risk Cirrus Core code library with the ACL solution.- The upload_notifications step loads workflow notifications into SAS Workflow Manager. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the “transfer_sampledata” and the “load_sample_data” steps will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided sample data. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “transfer_sampledata,load_sample_data” and then delete the sas-risk-cirrus-acl pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS. WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment. |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable to the IDs of any steps you would like to skip, including those flagged as sample data. (Default is an empty list.) Note: If this variable is empty, all steps will be executed unless the solution has already been deployed successfully, in which case no steps will be executed. If this variable is non-empty, the steps listed in it will be skipped. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME | Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET | Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME . |
The following is an example of a configuration.env
that you could use for SAS Allowance for Credit Loss. This example uses the default values provided by SAS except for the solution input data database user name variable. The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
should be replaced with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME=acluser
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-acl/configuration.env
to the configMapGenerator
block. Here is an example:
configMapGenerator:
...
- name: sas-risk-cirrus-acl-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-acl/configuration.env
...
Save the kustomization.yaml
file.
Modify the sas-risk-cirrus-acl-secret.env file (in the $deploy/site-config/sas-risk-cirrus-acl
directory) and specify your settings as follows:
For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
with the database schema user secret. If the directory already exists and already has the expected .env
file, verify that the overlay settings have been correctly applied to the secret. No further actions are required unless you want to change the secret.
The following is an example of a secret.env
file that you could use for SAS Allowance for Credit Loss.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=aclsecret
Save the sas-risk-cirrus-acl-secret.env
file.
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-acl/sas-risk-cirrus-acl-secret.env
to the secretGenerator
block. Here is an example:
secretGenerator:
...
- name: sas-risk-cirrus-acl-secret
behavior: merge
envs:
- site-config/sas-risk-cirrus-acl/sas-risk-cirrus-acl-secret.env
...
Save the kustomization.yaml
file.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on which deployment method you are using. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial deployment of the SAS Viya platform.
If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Allowance for Credit Loss solution, complete step 9 specified in the Risk Cirrus Core README to verify Risk Cirrus Core.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-acl-parameters -n <name-of-namespace>
Verify that the output contains the desired configurations that you configured.
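For example, to check a single value without scanning the full output, you can query the ConfigMap data directly (a sketch; the key name follows the variable names in configuration.env):
kubectl get configmap sas-risk-cirrus-acl-parameters -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME}'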
To verify that your overrides were applied successfully to the secret, run the following commands:
Find the name of the secret in the namespace:
kubectl describe secret sas-risk-cirrus-acl-secret -n <name-of-namespace>
Retrieve the name of the secret from the “Name:” line of the generated output.
Retrieve the secret data and verify that the output contains the desired database schema user secret that you configured:
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
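Secret values returned by the jsonpath query are base64-encoded. A sketch of decoding a single value (the key name follows the variable names in sas-risk-cirrus-acl-secret.env):
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d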
When SAS Asset and Liability Management is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Asset and Liability Management solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md
(Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Asset and Liability Management: Administrator’s Guide.
Complete steps 1-4 described in the Risk Cirrus Core README.
Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env
configuration file. Because SAS Asset and Liability Management uses workflow service tasks, a user account must be configured for a workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG
variable to “Y” and assign the user account to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT
variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.
If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.
If you have a $deploy/site-config/sas-risk-cirrus-alm/resources
directory, take note of the values in your alm_transform.yaml
file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the following line from the transformers
section: - site-config/sas-risk-cirrus-alm/resources/alm_transform.yaml
.
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-alm/
to the $deploy/site-config/sas-risk-cirrus-alm
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env file, not the old alm_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env
file, verify that overlay settings have been applied successfully to the configmap. No further actions are required unless you want to change the connection settings to different overrides.
Modify the configuration.env
file (located in the $deploy/site-config/sas-risk-cirrus-alm
directory). Lines with a #
at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the #
at the beginning of the line and replace the placeholder with the desired value as explained in the following section. Specify, if needed, your settings as follows:
a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO)
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:
- The transfer_sampledata step stores a copy of all sample data files in the file service under the Products/SAS Asset and Liability Management directory. This directory will include DDLs, sample data, and scripts.
- The install_sample_data step loads the sample portfolio data.
- The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, and Positions. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.
c. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the upload_notifications step will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided notifications to use in your custom workflow definitions. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “upload_notifications” and then delete the sas-risk-cirrus-alm pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.
WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.
d. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data.
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-alm/configuration.env
to the configMapGenerator
block. Here is an example:
configMapGenerator:
...
- name: sas-risk-cirrus-alm-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-alm/configuration.env
...
Save the kustomization.yaml
file.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on which deployment method you are using. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The configuration.env
overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Asset and Liability Management solution, complete step 9 specified in the Risk Cirrus Core README to verify Risk Cirrus Core.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-alm-parameters -n <name-of-namespace>
Verify that the output contains the desired connection settings that you configured.
To deploy SAS Business Orchestration Services, you must create an init container image that includes all the configuration files for SAS Business Orchestration Services. A reference to the init container must be added to the base kustomization.yaml file.
Additionally, you can run SAS Business Orchestration Services in legacy mode.
You must create a SAS Business Orchestration Services init container image that contains configuration files. To create this init container image, follow the instructions at Configuring an Init Container.
To add a SAS Business Orchestration Services init container to a SAS Business Orchestration Services deployment, complete these steps:
Copy the files in the $deploy/sas-bases/examples/sas-boss/init-container
directory to the $deploy/site-config/sas-boss/init-container
directory.
Create the destination directory if it does not exist.
Edit the file add-init-container.yaml
in the
$deploy/site-config/sas-boss/init-container
directory. Replace the image name
sas-boss-hydrator
with the full image name of your SAS Business Orchestration
Services init container.
Add site-config/sas-boss/init-container/add-init-container.yaml
to the
patches block of the base kustomization.yaml file. Create this block if it does
not exist. Here is an example:
patches:
- target:
group: apps
version: v1
kind: Deployment
name: sas-boss
path: site-config/sas-boss/init-container/add-init-container.yaml
...
Deploy the software using the commands described in SAS Viya Platform Deployment Guide.
You can choose to run SAS Business Orchestration Services in Legacy Mode. For more information about Legacy Mode, see Native Mode and Legacy Mode. If you need to run SAS Business Orchestration Services in legacy mode, you must follow the steps below:
Copy the file
$deploy/sas-bases/examples/sas-boss/legacy-mode/enable-legacy-mode.yaml
to the $deploy/site-config/sas-boss/legacy-mode
directory.
Create the destination directory if it does not exist.
Edit the
$deploy/site-config/sas-boss/legacy-mode/enable-legacy-mode.yaml
by
replacing the file name boss-context.xml
with the relative path of your
SAS Business Orchestration Services context file in your SAS Business Orchestration
Services init container.
Add site-config/sas-boss/legacy-mode/enable-legacy-mode.yaml
to the
transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-boss/legacy-mode/enable-legacy-mode.yaml
...
Deploy the software using the commands described in SAS Viya Platform Deployment Guide.
You can choose to run SAS Business Orchestration Services so that Netty endpoint ports are exposed with a Kubernetes type of LoadBalancer. Follow the steps below:
Copy the file
$deploy/sas-bases/examples/sas-boss/netty-service/netty-service-transformer.yaml
to the $deploy/site-config/sas-boss/netty-service
directory.
Create the destination directory if it does not exist.
Follow the comments in the copied netty-service-transformer.yaml
file to
edit the port numbers as needed.
Add sas-bases/overlays/sas-boss/netty-service/netty-service.yaml
to the
resources block and
site-config/sas-boss/netty-service/netty-service-transformer.yaml
to the
transformers block of the base kustomization.yaml file. Here is an example:
resources:
...
- sas-bases/overlays/sas-boss/netty-service/netty-service.yaml
...
transformers:
...
- site-config/sas-boss/netty-service/netty-service-transformer.yaml
...
Deploy the software using the commands described in SAS Viya Platform Deployment Guide.
If SAS Business Orchestration Services is not delivered with other SAS solutions, a patch transformer is provided to scale down unused pods. Follow the steps below:
Copy the file
$deploy/sas-bases/examples/sas-boss/minimal/scale-others-to-zero.yaml
to
the $deploy/site-config/sas-boss/minimal
directory.
Create the destination directory if it does not exist.
In the copied scale-others-to-zero.yaml
file, edit the boss-patch and
readiness-patch transformer blocks, if needed, as directed by the comments in
the file.
Add $deploy/site-config/sas-boss/minimal/scale-others-to-zero.yaml
and $deploy/sas-bases/overlays/startup/disable-startup-transformer.yaml
to the
transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-boss/minimal/scale-others-to-zero.yaml
- sas-bases/overlays/startup/disable-startup-transformer.yaml
...
In the base kustomization.yaml file, comment out or remove any lines with “postgres” in them.
Deploy the software using the commands described in SAS Viya Platform Deployment Guide.
For more information about SAS Business Orchestration Services, see SAS Business Orchestration Services: User’s Guide.
This README file describes the configuration settings for a cloud-native engine that enables users to declare their orchestrations through a set of workloads and flows in YAML format. This version of the product is also referred to as SAS Business Orchestration Worker.
SAS Business Orchestration Services has two versions. The first is the one that has been shipping for some time and uses an engine that is based on Apache Camel. The README for deploying and configuring that version of SAS Business Orchestration Services is located at $deploy/sas-bases/examples/sas-boss/README.md (for Markdown format) or at $deploy/sas-bases/docs/deploying_sas_business_orchestration_services.htm (for HTML format).
Create a copy of the example template in $deploy/sas-bases/examples/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
. Save this copy in $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
.
Placeholders are indicated by curly brackets, such as {{ NAMESPACE }}. Find and replace the placeholders with the values you want for your deployment. After all placeholders have been filled in, apply your deployment YAML either through the SAS Viya platform Kustomize process or directly with kubectl apply commands.
If you are using the SAS Viya platform Kustomize process, add the resource $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
to the
resources block of the base kustomization.yaml file. The use case here is to deploy a SAS Business Orchestration Worker project with SAS Viya platform. Here is an example:
resources:
...
- site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
...
The Deployment Resource sections below describe several TLS configurations for sas-business-orchestration-worker deployments. These configurations must align with SAS Viya security requirements, as specified in the Security Requirements section of the SAS Viya Platform Operations Guide. The specific TLS deployment requirements are described in the Security sections later in this README.
The business-orchestration-worker-deployment.yaml resource has customizable sections.
This section provides a ConfigMap example that mounts the project.yaml into pods. The project.yaml describes the orchestration.
This section provides an image pull secret example that grants access to the container registry images.
The image pull secret can be grepped from the SAS Viya platform Kustomize build command output:
kustomize build . > site.yaml
grep '.dockerconfigjson:' site.yaml
.dockerconfigjson: <SECRET>
Alternatively, if the SAS Viya platform has already been deployed, the image pull secret can be queried:
kubectl -n {{ NAMESPACE }} get secret --field-selector=type=kubernetes.io/dockerconfigjson -o yaml | grep '.dockerconfigjson:'
.dockerconfigjson: <SECRET>
Replace the namespace and image pull secret values in the example.
This section provides an example that configures high availability routing for sas-business-orchestration-worker pods.
This section provides an example that shows pod configuration and behaviors.
When using the ODE processor, you must create a sas-business-orchestration-worker init container that fetches the required SFM JAR files by pulling a Docker image.
Create a docker image that contains the required SFM jar files. Here is a sample Dockerfile.
FROM ubuntu
# Package updates and install dependencies
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y \
curl \
apt-transport-https \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin
# Grab SAS Fraud Management JARs (boss-worker-sb ode plugin dependency, used in performance/component/processor-ode.yaml)
RUN mkdir /sfmlibs
RUN mkdir /sfmlibs/44
RUN cd /sfmlibs/44 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt44/DEVD/ivy-repo/SAS_content/sas.finance.fraud.transaction/404001.0.0.20161020100850_f0rapt44/sas.finance.fraud.transaction.jar
RUN cd /sfmlibs/44 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt44/DEVD/ivy-repo/SAS_content/sas.finance.fraud.engine/404001.0.0.20161020100936_f0rapt44/sas.finance.fraud.engine.jar
RUN mkdir /sfmlibs/61
RUN cd /sfmlibs/61 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt61/DEVD/ivy-repo/SAS_content/sas.finance.fraud.transaction/601002.0.0.20220622174613_f0rapt61/sas.finance.fraud.transaction.jar
RUN cd /sfmlibs/61 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/f0rapt61/DEVD/ivy-repo/SAS_content/sas.finance.fraud.engine/601002.0.0.20220622174651_f0rapt61/sas.finance.fraud.engine.jar
RUN mkdir /sfmlibs/62
RUN cd /sfmlibs/62 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/d4rapt62/DEVD/ivy-repo/SAS_content/sas.finance.fraud.transaction/602000.0.0.20231003221024_d4rapt62/sas.finance.fraud.transaction.jar
RUN cd /sfmlibs/62 && curl -LO http://ivy.fyi.sas.com/Repositories/sds/dev/d4rapt62/DEVD/ivy-repo/SAS_content/sas.finance.fraud.engine/602000.0.0.20231003221203_d4rapt62/sas.finance.fraud.engine.jar
Run the following docker command to create a docker image that is used in the init container.
docker build -t <image_name>:<tag> <path_to_Dockerfile_directory>
Tag the image and push it to a Docker registry.
docker tag <image_name>:<tag> <repository_url>/<image_name>:<tag>
Replace the placeholders with your values. For example:
docker tag myimage:latest myrepository/myimage:latest
Log in to the Docker registry and push the Docker image to the repository.
docker login <registry_url>
docker push <repository_url>/<image_name>:<tag>
For example:
docker push myrepository/myimage:latest
Edit the $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
file. In the Deployment section, uncomment the init container for “fetch-ode-jars”, and replace {{ SFM_JAR_IMAGE }} with the URL to the Docker image generated in Step 2. Here is an example:
initContainers:
- name: fetch-ode-jars
image: myrepository/myimage:latest
command: ["sh", "-c"]
args: ["cp -R /sfmlibs/* /tmp/data"]
imagePullPolicy: Always
volumeMounts:
- name: sfmlibs
mountPath: "/tmp/data"
The sas-business-orchestration-worker container includes categories of environmental properties. The properties include properties for logging, external services (such as Apache Kafka, Redis and RabbitMQ), processing options, and probe options. Optional security-related properties are covered in the Security section.
Update the two image values that are contained in the $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
file. Revise the value “sas-business-orchestration-worker” to include the registry server, relative path, name, and tag. The registry server and relative path are the same as for other SAS Viya platform deployment images.
The name of the container is ‘sas-business-orchestration-worker’. The registry relative path, name, and tag values are found in the sas-components-* configmap in the Viya deployment.
Perform the following commands to determine the appropriate information. When you have the information, add it to the appropriate places in the three files listed above.
$ # generate site.yaml file
$ kustomize build -o site.yaml
## get the sas-business-orchestration-worker registry information
$ cat site.yaml | grep 'sas-business-orchestration-worker:' | grep -v -e "VERSION" -e 'image'
$ # manually update the sas-business-orchestration-worker-example images using the information gathered below: <container registry>/<container relative path>/sas-business-orchestration-worker:<container tag>
$ # apply site.yaml file
$ kubectl apply -f site.yaml
Perform the following commands to get the required information from a running SAS Viya platform deployment.
# get the registry server, kubectl needs to point to the SAS Viya platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
$ kubectl -n {{ NAMESPACE }} get deployment sas-readiness -o yaml | grep -e "image:.*sas-readiness" | sed -e 's/image: //g' -e 's/\/.*//g' -e 's/^[ \t]*//'
<container registry>
# get registry relative path and tag, kubectl needs to point to the SAS Viya platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
$ CONFIGMAP="$(kubectl -n {{ NAMESPACE }} get cm | grep sas-components | tr -s ' ' | cut -d ' ' -f1)"
$ kubectl -n {{ NAMESPACE }} get cm "$CONFIGMAP" -o yaml | grep 'sas-business-orchestration-worker:' | grep -v "VERSION"
SAS_COMPONENT_RELPATH_sas-business-orchestration-worker: <container relative path>/sas-business-orchestration-worker
SAS_COMPONENT_TAG_sas-business-orchestration-worker: <container tag>
The SAS_LOG_LEVEL environment variable specifies the minimum severity level for emitting logs. To control the verbosity of the log output, the level can be set to TRACE, DEBUG, INFO, WARN, or ERROR.
The SAS_LOG_FORMAT environment variable specifies the format of the emitted logs. The format can be set to json or plain.
The SAS_LOG_LOCALE environment variable determines which locale messages should be included in the output. The default value is “en”.
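A minimal sketch of how these logging variables might appear in the env list of the sas-business-orchestration-worker container in the deployment YAML (the values shown are examples only):
env:
  - name: SAS_LOG_LEVEL
    value: "DEBUG"
  - name: SAS_LOG_FORMAT
    value: "json"
  - name: SAS_LOG_LOCALE
    value: "en"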
External services that are used by a workload require defined properties that are specific to the technology in use. See the comments in the $deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml
resource file for specific examples.
Project yaml files can include multiple workloads that scale independently. This means that a pod can run only one workload. Use the WORKLOAD_ENABLED_BY_INDEX environment variable to specify which workload to execute. If the property is missing, the workload at index 0 (the first workload) is executed.
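For example, to run the second workload defined in the project.yaml (index 1), the container environment might include the following (a sketch):
env:
  - name: WORKLOAD_ENABLED_BY_INDEX
    value: "1"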
The sas-business-orchestration-worker container uses a readiness probe, which allows Kubernetes to determine when a pod is ready to receive data. The initialDelaySeconds field specifies how many seconds Kubernetes should wait before performing the initial probe. The periodSeconds field specifies how many seconds Kubernetes should wait between probes.
For more information about readiness probes, see https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/.
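A generic readiness probe stanza of the kind described looks like the following (the path, port, and timings are illustrative; use the values provided in the shipped deployment template):
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10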
The sas-business-orchestration-worker-sb container includes a Spring Boot application sidecar. Some orchestration components need to leverage Java libraries to connect to other Java services, for example when connecting to SAS Fraud Management engines or when parsing certain SWIFT and ISO standardized message formats. See the comments in the resource file for specifics. This sidecar can be removed or commented out if those Java-specific features are not needed by the project orchestration workload being executed.
This section provides an example of a Horizontal Pod Autoscaler.
This section provides an example of an ingress in an OpenShift environment. If you use this ingress, comment out other ingresses in the file.
This section provides an example of NGINX for HTTP traffic using TLS. If you use this ingress, comment out other ingresses in the file.
This section provides an example of a secret that holds TLS certificates and keys.
This section provides an example of a secret that holds the certificate authority certificate and key that are used for two-way TLS (mTLS).
This section provides an example of a secret that holds the certificate authority certificate and key that are used for two-way TLS (mTLS) with external services.
Duplicate this section as needed if multiple external services are used by the orchestration project.
These resources do not require much customization. They require the SUFFIX to be filled in, and the NAMESPACE to be specified, as indicated in the template. The ingresses additionally require the host property be specified.
The services are ClusterIP services, accessed externally via the ingress resources. The ports are already filled in and line up with the prefilled ingress ports.
The ingresses include the host, and rules for directing requests. For the sas-business-orchestration-worker ingress, anything sent with /sas-business-orchestration-worker as the path prefix will use this ingress. The service referenced above uses the ingress in most cases. You might not need ingress if all traffic is within the Kubernetes cluster or if the containers are hosted by another cloud technology.
If you are deploying your SAS Business Orchestration Worker on OpenShift, you will not be able to use the Ingress resource. In this case, replace your ingress resource with an OpenShift Route.
There are optional, commented out sections that may be used to create the secrets containing TLS certificates and keys. The data must be base64 encoded and included in these definitions. These secrets could optionally be created manually via kubectl or kustomize secrets generators. If the secrets are created via some other method, the secret names must still match those referenced in the volumes and ingress definitions.
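If you prefer to create a TLS secret with kubectl instead of embedding base64-encoded data in the template, a sketch (the certificate and key file paths are placeholders; the secret name must match the one referenced by the volumes and ingress definitions):
kubectl -n {{ NAMESPACE }} create secret tls business-orchestration-worker-ingress-tls-config-{{ SUFFIX }} --cert=path/to/tls.crt --key=path/to/tls.key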
To add TLS to your ingress, some annotations and spec fields must be added. These will require certificates either included in this template, or created and supplied previously. The template includes a TLS ingress that is commented out, but the below examples break down what is different in this ingress.
To secure your ingress, the following annotations can be used to add one-way TLS, two-way TLS (mTLS), or both.
annotations:
# Secret that contains the certificate authority (CA) certificate used to verify client certificates
nginx.ingress.kubernetes.io/auth-tls-secret: {{ NAMESPACE }}/business-orchestration-worker-ingress-tls-ca-config-{{ SUFFIX }}
# Used to enable two-way TLS (mTLS) client certificate verification
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
For one-way TLS, fill in the tls field under the spec field. This also includes a secretName, which includes your TLS certificate.
tls:
- hosts:
- {{ PREFIX }}.{{ INGRESS-TYPE }}.{{ HOST }}
secretName: business-orchestration-worker-ingress-tls-config-{{ SUFFIX }}
See the resource comments for more specific details.
Depending on the security configuration, mounting additional trusted certificates in your containers may be necessary. The areas to add these are tagged with SECURITY, and can be uncommented as necessary. The secret names must match whatever secrets have been configured for these certificates.
There are three volume examples created from secrets containing TLS certificates. One volume example is for sas-business-orchestration-worker certificates, one volume is for an external service certificate. These are defined for each container in the Deployment spec.
After being created, these volumes may be mounted in the sas-business-orchestration-worker container. As defined in the template, the business-orchestration-worker certificates are mounted in /var/run/security, the external service certificates are mounted in /var/run/security/
Read through all the inline comments in the deployment resource. There is considerable overlap with the instructions here; however, the actual deployment resource template provides more specifics and a higher degree of detail.
Alternatively, SAS Business Orchestration Worker can be installed separately from the SAS Viya platform. Complete the steps above, except “Deploy the Software” in the “Additional Resources” section. The use case here is to deploy a SAS Business Orchestration Worker project in a Kubernetes namespace that is not a SAS Viya platform deployment. Instead, perform the following command:
kubectl apply -f "$deploy/site-config/sas-business-orchestration-worker/business-orchestration-worker-deployment.yaml"
This directory contains an example transformer that illustrates how to change the StorageClass and size of the PVC used to store the Quality Knowledge Base (CTF) in SAS Viya.
Copy the file sas-bases/examples/sas-clinical-repository/storageclass/sas-clinical-storage-class-transformer.yaml
and place it in your site-config directory.
Replace the {{ CTF-STORAGE-CLASS }} value with your desired StorageClass. Note that the CTF requires that your storage class support the RWX accessMode.
Also replace the {{ CTF-STORAGE-SIZE }} value with the size you wish to allocate to the CTF volume. The recommended size is 8Gi. Note that using a lower value may restrict your ability to add new CTFs to SAS Viya; 1Gi is the absolute minimum required.
After you edit the file, add a reference to it in the transformers block of the base kustomization.yaml file.
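Here is an example, assuming the copied file was placed at site-config/sas-clinical-repository/storageclass/sas-clinical-storage-class-transformer.yaml:
transformers:
...
- site-config/sas-clinical-repository/storageclass/sas-clinical-storage-class-transformer.yaml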
For more information about using example files, see the SAS Viya Deployment Guide.
For more information about Kubernetes StorageClasses, please see the Kubernetes Storage Class Documentation.
The directory $deploy/sas-bases/examples/sas-data-agent-server-colocated
contains files to customize your SAS Viya platform deployment for
a co-located SAS Data Agent. This README describes the steps necessary
to make these files available to your SAS Viya platform deployment. It also describes
how to set required environment variables to point to these files.
Note: If you make changes to these files after the initial deployment, you must restart the co-located SAS Data Agent.
Before you start the deployment you should determine the OAUTH secret that will be used by co-located SAS Data Agent and any remote SAS Data Agents.
You should also create a subdirectory within $deploy/site-config
to store your co-located SAS Data Agent configurations. This README uses a user-created subdirectory called
$deploy/site-config/sas-data-agent-server-colocated
. For more information, refer to the “Directory Structure” section of the “Pre-installation
Tasks” Deployment Guide.
The base kustomization.yaml file ($deploy/kustomization.yaml
) provides configuration properties for the customization process.
The co-located SAS Data Agent requires specific customizations in order to communicate with remote SAS Data Agents and configure server options. Copy the example sas-data-agent-server-colocated-config.properties
and sas-data-agent-server-colocated-secret.properties
files from $deploy/sas-bases/examples/sas-data-agent-server-colocated
to $deploy/site-config/sas-data-agent-server-colocated
.
Note: The default values listed in the descriptions that follow should be suitable for most users.
The sas-data-agent-server-colocated-secret.properties file contains configuration properties for the OAUTH secret. The OAUTH secret value is required and must be specified in order to communicate with a remote SAS Data Agent. There is no default value for the OAUTH secret.
Note: The following example is for illustration only and should not be used.
Enter a string value for the OAUTH secret that will be shared with the remote SAS Data Agent. Here is an example:
SAS_DA_OAUTH_SECRET=MyS3cr3t
The sas-data-agent-server-colocated-config.properties
file contains configuration properties for logging.
Enter a string value to set the level of additional logging.
* `SAS_DA_DEBUG_LOGTYPE=TRACEALL` enables trace level for all log items.
* `SAS_DA_DEBUG_LOGTYPE=TRACEAPI` enables trace level for api calls.
* `SAS_DA_DEBUG_LOGTYPE=TRACE` enables trace level for most log items.
* `SAS_DA_DEBUG_LOGTYPE=PERFORMANCE` enables trace/debug level items for performance debugging.
* `SAS_DA_DEBUG_LOGTYPE=PREFETCH` enables trace/debug level items for prefetch debugging.
* `SAS_DA_DEBUG_LOGTYPE=None` disables additional tracing.
If no value is specified, the default of None is used.
Here is an example:
SAS_DA_DEBUG_LOGTYPE=None
The sas-data-agent-server-colocated-config.properties
file contains configuration properties that restrict drivers from accessing the container filesystem. By default, drivers can only access the directory tree /data
which must be mounted on the co-located SAS Data Agent container.
When set to TRUE, the file access drivers can only access the directory structure specified by SAS_DA_CONTENT_ROOT.
When set to FALSE, the file access drivers can access any directories accessible from within the co-located SAS Data Agent container.
If no value is specified, the default of TRUE is used.
Here is an example:
SAS_DA_RESTRICT_CONTENT_ROOT=TRUE
Enter a string value to specify the directory tree that file access drivers are allowed to access. This value is ignored if SAS_DA_RESTRICT_CONTENT_ROOT=FALSE. If no value is specified, the default of /data
is used.
Here is an example:
SAS_DA_CONTENT_ROOT=/accounting/data
The sas-data-agent-server-colocated-config.properties
file contains configuration properties that control how the server treats client sessions that are unused for long periods of time. By default the server will try to gracefully shut down sessions that have not been used for one hour.
Use this variable to specify how often the server will check for idle connections. This variable has a default of 60 seconds (1 minute).
Here is an example of how to check for idle client sessions every 5 minutes:
SAS_DA_SESSION_CLEANUP=300
Use this variable to specify how long to wait before an unused client session is considered idle, and thus eligible to be killed. This value is only used when the client does not specify a value for SESSION_TIMEOUT when connecting. This variable has a default of 3600 seconds (1 hour).
Here is an example of how to default to a 20 minute wait before an unused client session is considered idle:
SAS_DA_DEFAULT_SESSION_TIMEOUT=1200
Use this variable to specify the maximum time before an unused client session is considered idle, and thus eligible to be killed. This value applies even when SESSION_TIMEOUT or SAS_DA_DEFAULT_SESSION_TIMEOUT are set to longer times. This variable has a default of 0 seconds (meaning no maximum wait time).
Here is an example of how to set the maximum wait time to 18000 seconds (5 hours) before an unused client session is considered idle:
SAS_DA_MAX_SESSION_TIMEOUT=18000
Use this variable to specify the maximum time the server will wait for a database operation to complete when killing idle client sessions. This variable has a default of 0 seconds (meaning no maximum wait time).
Here is an example of how to set the maximum object timeout to 300 seconds (5 minutes) when killing idle client sessions:
SAS_DA_MAX_SESSION_TIMEOUT=300
Use this variable to specify the maximum time a worker pod will remain when there are no active client sessions. This variable has a default of 0 seconds (meaning the worker pod will remain active and available to service future requests). If a worker pod exits a new client request will automatically start another worker pod to service it, but this might result in a slight initialization delay.
Here is an example of how to set the worker pod timeout to 3600 seconds (1 hour):
SAS_DA_WORKER_TIMEOUT=3600
Use this variable to specify whether a worker pod should be launched before the first client request is received. This variable has a default of TRUE if SAS_DA_OAUTH_SECRET has been specified, otherwise the default is FALSE. If a client request is received a worker pod will be automatically started if it is not already running, but this might result in a slight initialization delay.
Here is an example of how to disable worker pod prelaunch:
SAS_DA_PRELAUNCH_WORKERS=FALSE
The sas-data-agent-server-colocated-config.properties
file contains configuration properties for
Java, SAS/ACCESS Interface to Spark and SAS/ACCESS to Hadoop.
If your deployment includes SAS/ACCESS Interface to Spark, you must make your Hadoop JARs and configuration file available on a PersistentVolume or mounted storage.
Set the options SAS_DA_HADOOP_JAR_PATH and SAS_DA_HADOOP_CONFIG_PATH to point to this location.
See the SAS/ACCESS Interface to Spark documentation at $deploy/sas-bases/examples/data-access/README.md
(for Markdown format) or $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm
(for HTML format) for more details. These variables have no default values.
Here are some examples:
SAS_DA_HADOOP_CONFIG_PATH=/clients/hadoopconfig/prod
SAS_DA_HADOOP_JAR_PATH=/clients/jdbc/spark/2.6.22
Use this variable to specify an alternate JAVA_HOME for use by the co-located SAS Data Agent. This variable has no default value.
Here is an example:
SAS_DA_JAVA_HOME=/java/lib/jvm/jre
Add these entries to the base kustomization.yaml file ($deploy/kustomization.yaml
) in order to include
the modified sas-data-agent-server-colocated-config.properties
and sas-data-agent-server-colocated-secret.properties
files.
configMapGenerator:
...
- name: sas-data-agent-server-colocated-config
behavior: merge
envs:
- site-config/sas-data-agent-server-colocated/sas-data-agent-server-colocated-config.properties
...
secretGenerator:
...
- name: sas-data-agent-server-colocated-secrets
behavior: merge
envs:
- site-config/sas-data-agent-server-colocated/sas-data-agent-server-colocated-secret.properties
For more information about configuring SAS/ACCESS, see the README file located at $deploy/sas-bases/examples/data-access/README.md
(for Markdown format) or $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm
(for HTML format).
SAS Common Planning Service, used by SAS Assortment Planning, SAS Demand Planning, and SAS Financial Planning, requires dedicated PersistentVolumeClaims (PVCs) for storing data. During setup the sas-planning-retail PVCs are defined and then mounted in the startup process. This directory contains an example transformer that illustrates how to change the StorageClass and size of the PVCs.
Copy the $deploy/sas-bases/examples/sas-planning/storage.yaml file to the $deploy/site-config directory. Then add a reference to the copied file in the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example that assumes you put the copied file in $deploy/site-config/sas-planning/storage.yaml:
transformers:
...
- site-config/sas-planning/storage.yaml
Continue your SAS Viya platform deployment as documented in SAS Viya Platform Deployment Guide.
To avoid issues related to client timeouts, configure SAS Common Planning Service ingress-nginx timeout.
Copy the $deploy/sas-bases/examples/sas-planning/sas-planning-ingress-patch.yaml file to the $deploy/site-config directory. Then add a reference to the copied file in the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example that assumes you put the copied file in $deploy/site-config/sas-planning/sas-planning-ingress-patch.yaml:
transformers:
...
- site-config/sas-planning/sas-planning-ingress-patch.yaml
Continue your SAS Viya platform deployment as documented in SAS Viya Platform Deployment Guide.
The sas-planning service uses the Common Data Store PostgreSQL database as well as the one provided by the platform.
This README describes the customizations needed for a PersistentVolumeClaim (PVC). It also contains the steps required to configure an ingress-nginx timeout.
If updating from any release prior to 2023.09, please refer to this documentation for additional steps to follow.
For more information on using an internal instance of PostgreSQL, you should
refer to the README file located at
$deploy/sas-bases/examples/postgres/README.md
.
Add the following overlay to the resources block of the base kustomization.yaml
file ($deploy/kustomization.yaml
):
resources:
...
- sas-bases/overlays/sas-planning
...
Add the following overlays to the transformers block of the base kustomization.yaml file:
transformers:
...
- sas-bases/overlays/sas-planning/sas-planning-transformer.yaml
...
A PersistentVolumeClaim (PVC) states the storage requirements from cloud providers. The storage provided by the cloud is mapped to predefined paths across the services that collaborate to handle files.
In the base kustomization.yaml file, immediately after the transformers block, add a patches block with the following content.
...
patches:
- path: site-config/storageclass.yaml
target:
kind: PersistentVolumeClaim
annotationSelector: sas.com/component-name in (sas-planning)
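The patch file referenced above is not shown in this README; here is a minimal sketch of what site-config/storageclass.yaml might contain, assuming a JSON patch that sets the StorageClass and size of the matched PVCs (the class name and size shown are illustrative):

```yaml
# Hypothetical site-config/storageclass.yaml
# Applies to the PVCs selected by the target block in the patches entry.
- op: replace
  path: /spec/storageClassName
  value: sas-rwx                 # replace with your StorageClass
- op: replace
  path: /spec/resources/requests/storage
  value: 20Gi                    # replace with the size you need
```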
After you revise the base kustomization.yaml file, continue your SAS Viya platform deployment as documented in SAS Viya Platform Deployment Guide.
This readme describes the settings available for deploying Compute Server.
Based on the following description of different example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ NUMBER-OF-WORKERS }}. Replace the entire variable string, including the braces, with the value you want to use.
After you have edited the file, add a reference to it in the transformer block of the base kustomization.yaml file.
The example files are located at /$deploy/sas-bases/examples/compute-server/configure.
For information about PersistentVolumes, see Persistent Volumes.
The SAS Compute service makes calls to Compute server processes running in the cluster using HTTP calls. The Compute service uses a default request timeout of 600 seconds. This README describes the customizations that can be made for updating this timeout to control how long the Compute service requests to the servers wait for a response.
The SAS Compute service internal HTTP request timeout can be modified by using the change-sas-compute-http-request-timeout.yaml file.
Copy the
$deploy/sas-bases/examples/compute/client-request-timeout/change-sas-compute-http-request-timeout.yaml
file to the site-config directory.
In the copied file, replace {{ TIMEOUT }} with the number of seconds to use for the timeout. Note that the trailing “s” after {{ TIMEOUT }} should be kept.
Here is an example:
```yaml
...
patch: |-
- op: replace
path: /spec/template/spec/containers/0/env/-
value:
name: SAS_HTTP_CLIENT_TIMEOUT_REQUEST
value: 1200s
...
```
After you edit the file, add a reference to it in the transformers block of
the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example assuming the file has been saved
to $deploy/site-config/compute/client-request-timeout
:
transformers:
...
- /site-config/compute/client-request-timeout/change-sas-compute-http-request-timeout.yaml
...
For more information about deployment and using example files, see the SAS Viya Platform: Deployment Guide.
With open-source language integration, SAS Viya platform users can decide which language they want to use for a given task. They can use either the SAS programming language or an open-source programming language, such as Python, R, Lua, or Java, to develop programs for the SAS Viya platform. This integration requires some additional configuration.
SAS Configurator for Open Source is a utility that simplifies the download, configuration, building, and installation of Python and R from source. The result is a Python or R build that is located in a persistent volume (PV) and referenced by a Persistent Volume Claim (PVC). The PVC and the builds that it contains are then available for pods that require Python and R for their operations.
SAS Configurator for Open Source can build and install multiple Python and R builds or versions in the same PV. It can use profiles to handle multiple builds. Various pods can then reference different versions or builds of Python and R located in the PV.
SAS Configurator for Open Source also includes functionality to reduce downtime associated with updates. A given build is located in the PV and referenced by a pod using a symlink. In an update scenario, the symlink is changed to point to the latest build for that profile.
For system requirements and a full set of steps to use SAS Configurator for Open Source, see SAS Viya Platform: Integration with External Languages.
Building Python or R requires a number of steps. This section describes the steps performed by SAS Configurator for Open Source in its operations to manage Python and R.
SAS Configurator for Open Source only processes configuration changes after the initial execution of the job. For example, packages are reprocessed only if a change occurs in the package list and the respective versions of R or Python remain unchanged. If the version of Python or R changes, then all steps are performed from the download of the source to the updating of symlinks.
1. For Python, downloads the source, signature file, and signer’s key from the configured location. For R, downloads only the source.
2. Verifies the authenticity of the Python source using the signer’s key and signature file. The R source cannot be verified at the time of this writing because signer keys are not generated for R source.
3. Extracts the Python and R sources into a temporary directory for building.
4. Configures and performs a make of the Python and R sources.
5. Installs the Python and R builds within the PV and updates supporting components, such as pip, if applicable.
6. Builds and installs configured packages for Python and R.
Note: Python and R packages that require additional dependencies to be installed within any combination of the SAS Configurator for Open Source container, the SAS Programming Environment container, and the CAS Server container are not supported with the SAS Configurator for Open Source.
7. If everything has completed successfully, creates the symbolic links, or changes the symbolic links’ targets to point to the latest builds for both Python and R.
The SAS Configurator for Open Source utility runs a job named sas-pyconfig. When you enable the utility, the job runs automatically once during the initial SAS Viya platform deployment and runs again with subsequent SAS Viya updates.
The official documentation for SAS Configurator for Open Source, SAS Viya Platform: Integration with External Languages, provides instructions for configuring and enabling the utility.
SAS Configurator for Open Source requires more CPU and memory than most
components. This requirement is largely due to Python and R building-related
operations, such as those performed by configure
and make
. Because SAS
Configurator for Open Source is disabled by default, pod resources are minimized
so that they are not misallocated during scheduling. The default resource values
are as follows:
limits:
cpu: 250m
memory: 250Mi
requests:
cpu: 25m
memory: 25Mi
Important: If the default values are used, pod execution will result in an OOMKilled (Out of Memory Killed) status in the pod list and the job does not complete. You must increase the requests and limits in order for the pod to complete successfully. The official SAS Configurator for Open Source documentation provides instructions.
If the environment does not use resource quotas, a CPU request value of 4000m and a memory request value of 3000Mi with no limits provide a good starting point. Setting no limits allows the pod to use more than the requested resources if they are available, which can result in a shorter time to completion. With these values, the pod should complete its operations in approximately 15 minutes, before the environment is stable enough for widespread use. Differences in hardware specifications affect the time it takes for the pod to complete.
If the environment uses resource quotas, the specified limit values must be equal to or greater than the respective request values for CPU and memory.
The values of requests and limits can be adjusted to meet specific needs of an environment. For example, reduce values to allow scheduling within smaller environments, or increase values to reduce the time required to build multiple versions of Python and R.
A YAML file is provided in your deployment assets to help you increase CPU and memory requests. By default, the recommended CPU and memory requests are specified in the file (change-limits.yaml), and no limits are specified. Below are some examples of updates to this file.
In this example, SAS Configurator for Open Source is configured with a CPU request value of 4000m and a memory request value of 3000Mi. No limit on CPU or memory usage is specified. This configuration should not be used in environments where resource quotas are in use.
---
apiVersion: builtin
kind: PatchTransformer
metadata:
name: sas-pyconfig-limits
patch: |-
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
value:
4000m
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
value:
3000Mi
- op: remove
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
- op: remove
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
target:
group: batch
kind: CronJob
name: sas-pyconfig
version: v1
#---
#apiVersion: builtin
#kind: PatchTransformer
#metadata:
# name: sas-pyconfig-limits
#patch: |-
# - op: replace
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
# value:
# 4000m
# - op: replace
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
# value:
# 3000Mi
# - op: replace
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
# value:
# 4000m
# - op: replace
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
# value:
# 3000Mi
#target:
# group: batch
# kind: CronJob
# name: sas-pyconfig
In this example, both the requests and limits values for CPU and memory have been set to 4000m and 3000Mi, respectively. This configuration can be used in an environment where resource quotas are enabled.
#---
#apiVersion: builtin
#kind: PatchTransformer
#metadata:
# name: sas-pyconfig-limits
#patch: |-
# - op: replace
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
# value:
# 4000m
# - op: replace
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
# value:
# 3000Mi
# - op: remove
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
# - op: remove
# path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
#target:
# group: batch
# kind: CronJob
# name: sas-pyconfig
# version: v1
---
apiVersion: builtin
kind: PatchTransformer
metadata:
name: sas-pyconfig-limits
patch: |-
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/cpu
value:
4000m
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/requests/memory
value:
3000Mi
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/cpu
value:
4000m
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/resources/limits/memory
value:
3000Mi
target:
group: batch
kind: CronJob
name: sas-pyconfig
You can change the configuration and run the sas-pyconfig job again without redeploying the SAS Viya platform. The official SAS Configurator for Open Source documentation describes the steps to run the job manually and install and configure Python or R from source.
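For reference, one common way to trigger the job manually from the existing CronJob is sketched below; the job name suffix is illustrative, and the procedure supported by SAS is described in the official documentation:

```
kubectl create job --from=cronjob/sas-pyconfig sas-pyconfig-manual-run -n <name-of-namespace>
```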
By default, SAS Configurator for Open Source is disabled.
Determine the exact name of the sas-pyconfig-parameters ConfigMap:
kubectl get configmaps -n <name-of-namespace> | grep sas-pyconfig
The name will be something like sas-pyconfig-parameters-abcd1234.
Edit the ConfigMap using the following command:
kubectl edit configmap <sas-pyconfig-parameters-configmap-name> -n <name-of-namespace>
In this example, sas-pyconfig-parameters-configmap-name
is the name of the
ConfigMap from step 1. Change the value of global.enabled
to false
.
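After the edit, the relevant entry in the ConfigMap data section should look like this (a sketch; the surrounding entries are unchanged):

```yaml
data:
  global.enabled: "false"
```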
After this change, SAS Configurator for Open Source does not run during a deployment or update of the SAS Viya platform.
The configuration options used by SAS Configurator for Open Source are referenced from the sas-pyconfig-parameters ConfigMap (provided for you in the change-configuration.yaml file). The official SAS Configurator for Open Source documentation describes the options available in the ConfigMap, their purpose, and their default values.
Configuration options fall into two main categories:
global options
Options that are applied across or related to all profiles and to the application.
profile options
Options that are specific to a profile.
For a description of each global option, including the option to specify an HTTP or HTTPS web proxy server, see the official SAS Configurator for Open Source documentation.
Profiles are references to different versions or builds of Python and R in the PV, enabling SAS Configurator for Open Source to manage multiple builds of Python or R.
The predefined Python profile is named “default_py”, and the predefined R profile is named “default_r”. Profiles are described in detail in the official SAS Configurator for Open Source documentation.
The following example change-configuration.yaml file contains the predefined profiles only:
apiVersion: builtin
kind: PatchTransformer
metadata:
name: sas-pyconfig-custom-parameters
patch: |-
- op: replace
path: /data/global.enabled
value: "false"
- op: replace
path: /data/global.python_enabled
value: "false"
- op: replace
path: /data/global.r_enabled
value: "false"
- op: replace
path: /data/global.pvc
value: "/opt/sas/viya/home/sas-pyconfig"
- op: replace
path: /data/global.python_profiles
value: "default_py"
- op: replace
path: /data/global.r_profiles
value: "default_r"
- op: replace
path: /data/global.dry_run
value: "false"
- op: replace
path: /data/global.http_proxy
value: "none"
- op: replace
path: /data/global.https_proxy
value: "none"
- op: replace
path: /data/default_py.pip_local_packages
value: "false"
- op: replace
path: /data/default_py.pip_index_url
value: "none"
- op: replace
path: /data/default_py.pip_extra_url
value: "none"
- op: replace
path: /data/default_py.configure_opts
value: "--enable-optimizations"
- op: replace
path: /data/default_r.configure_opts
value: "--enable-memory-profiling --enable-R-shlib --with-blas --with-lapack --with-readline=no --with-x=no"
- op: replace
path: /data/default_py.cflags
value: "-fPIC"
- op: replace
path: /data/default_r.cflags
value: "-fPIC"
- op: replace
path: /data/default_py.pip_install_packages
value: "Prophet sas_kernel matplotlib sasoptpy sas-esppy NeuralProphet scipy==1.10 Flask XGBoost TensorFlow pybase64 scikit-learn statsmodels sympy mlxtend Skl2onnx nbeats-pytorch ESRNN onnxruntime opencv-python zipfile38 json2 pyenchant nltk spacy gensim pyarrow hnswlib==0.7.0 sas-ipc-queue great-expectations==0.16.8"
- op: replace
path: /data/default_py.pip_r_packages
value: "rpy2"
- op: replace
path: /data/default_py.pip_r_profile
value: "default_r"
- op: replace
path: /data/default_py.python_signer
value: https://keybase.io/pablogsal/pgp_keys.asc
- op: replace
path: /data/default_py.python_signature
value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz.asc
- op: replace
path: /data/default_py.python_tarball
value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz
- op: replace
path: /data/default_r.r_tarball
value: https://cloud.r-project.org/src/base/R-4/R-4.3.3.tar.gz
- op: replace
path: /data/default_r.packages
value: "dplyr jsonlite httr tidyverse randomForest xgboost forecast arrow logger"
- op: replace
path: /data/default_r.pkg_repos
value: "https://cran.rstudio.com/ http://cran.rstudio.com/ https://cloud.r-project.org/ http://cloud.r-project.org/"
target:
version: v1
kind: ConfigMap
name: sas-pyconfig-parameters
The following example change-configuration.yaml file adds a Python profile called “myprofile” to the global.profiles list and adds profile options for “myprofile”. Note that the default Python profile is still listed and will also be built.
apiVersion: builtin
kind: PatchTransformer
metadata:
name: sas-pyconfig-custom-parameters
patch: |-
- op: replace
path: /data/global.enabled
value: "true"
- op: replace
path: /data/global.python_profiles
value: "default_py myprofile"
- op: add
path: /data/myprofile.configure_opts
value: "--enable-optimizations"
- op: add
path: /data/myprofile.cflags
value: "-fPIC"
- op: add
path: /data/myprofile.pip_install_packages
value: "Prophet sas_kernel matplotlib sasoptpy sas-esppy NeuralProphet scipy==1.10 Flask XGBoost TensorFlow pybase64 scikit-learn statsmodels sympy mlxtend Skl2onnx nbeats-pytorch ESRNN onnxruntime opencv-python zipfile38 json2 pyenchant nltk spacy gensim pyarrow hnswlib==0.7.0 sas-ipc-queue great-expectations==0.16.8"
- op: replace
path: /data/myprofile.pip_local_packages
value: "false"
- op: replace
path: /data/myprofile.pip_r_packages
value: "rpy2"
- op: replace
path: /data/myprofile.pip_r_profile
value: "default_r"
- op: add
path: /data/myprofile.python_signer
value: https://keybase.io/pablogsal/pgp_keys.asc
- op: add
path: /data/myprofile.python_signature
value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz.asc
- op: add
path: /data/myprofile.python_tarball
value: https://www.python.org/ftp/python/3.11.10/Python-3.11.10.tgz
target:
version: v1
kind: ConfigMap
name: sas-pyconfig-parameters
JanusGraph is no longer supported for SAS Data Catalog. Therefore, the contents of this README and the overlay it refers to have been removed.
This directory contains an example transformer that illustrates how to change the StorageClass and size of the PVC used to store the Quality Knowledge Base (QKB) in the SAS Viya platform.
Copy the file sas-bases/examples/data-quality/storageclass/storage-class-transformer.yaml
and place it in your site-config directory.
Replace the {{ QKB-STORAGE-CLASS }} value with your desired StorageClass. Note that the QKB requires that your storage class support the RWX accessMode.
Also replace the {{ QKB-STORAGE-SIZE }} value with the size you wish to allocate to the QKB volume. The recommended size is 8Gi. Note that using a lower value may restrict your ability to add new QKBs to the SAS Viya platform; 1Gi is the absolute minimum required.
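For illustration only, here is a hypothetical excerpt of the copied file after the placeholders are replaced; the surrounding structure comes from the example transformer itself, and the StorageClass name shown is an assumption:

```yaml
# ...
storageClassName: sas-rwx   # replaces {{ QKB-STORAGE-CLASS }}; must support the RWX accessMode
# ...
storage: 8Gi                # replaces {{ QKB-STORAGE-SIZE }}; 8Gi is the recommended size
```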
After you edit the file, add a reference to it in the transformer block of the base kustomization.yaml file.
For more information about using example files, see the SAS Viya Platform Deployment Guide.
For more information about Kubernetes StorageClasses, please see the Kubernetes Storage Class Documentation.
This readme describes the scripts available for maintaining Quality Knowledge Base (QKB) content in the SAS Viya platform. QKBs support the SAS Data Quality product.
These scripts are intended for ad hoc use after deployment. They generate YAML that is suitable for consumption by kubectl. The YAML creates Kubernetes Job objects to perform the specific task designated by the script name. After these jobs have finished running, some jobs will be deleted automatically and the rest can be manually deleted.
containerize-qkb.sh
containerize-qkb.sh "NAME" PATH REPO[:TAG]
This script runs Docker to create a specially formatted container that allows the QKB to be imported into the SAS Viya platform running in Kubernetes.
For the NAME argument, provide the name by which the QKB will be surfaced in the SAS Viya platform. It may include spaces, but must be enclosed with quotation marks.
The PATH argument should be the location on disk where the QKB QARC file is located.
The REPO argument specifies the repository to assign to the Docker container that will be created. TAG may be specified after a colon in standard Docker notation.
After the script runs, a new Docker container with the specified tag is created in the local Docker registry.
$ bash containerize-qkb.sh "My Own QKB" /tmp/myqkb.qarc registry.mycompany.com/myownqkb:v1
Setting up staging area...
Generating Dockerfile...
Running docker...
Docker container generated successfully.
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.mycompany.com/myownqkb v1 8dfb63e527c8 1 second ago 945.3MB
After the script completes, information about the new container is output, as shown above. If the local docker registry is not accessible to your Kubernetes cluster, you should then push the container to one that is.
$ docker push registry.mycompany.com/myownqkb:v1
The push refers to repository [registry.mycompany.com/myownqkb]
f2409fb2f83e: Pushed
076d9dcc6e6a: Mounted from myqkb-image1
ce30860818b8: Mounted from myqkb-image1
dfadf160ceab: Mounted from myqkb-image1
v2: digest: sha256:b9802cff2f81dba87e7bb92355f2eb0fd14f91353574233c4d8f662a0b424961 size: 1360
deploy-qkb.sh
deploy-qkb.sh REPO[:TAG]
This script deploys a containerized QKB into the SAS Viya platform. The REPO argument specifies a Docker repo (and, optionally, tag) from which to pull the container. Note that this script does not make any changes to your Kubernetes configuration directly; instead it generates a Kubernetes Job that can then be piped to the kubectl command.
While the SAS Viya platform persists all deployed QKBs in the sas-quality-knowledge-base PVC, we recommend following the GitOps pattern of storing the generated YAML file in version control, under your $deploy/site-config directory. Doing so allows you to easily re-deploy the same QKB again later, should the PVC be deleted.
Generate a Kubernetes Job to deploy a QKB, and run it immediately:
bash deploy-qkb.sh registry.mycompany.com/myownqkb:v1 | kubectl apply -n name-of-namespace -f -
Generate a Kubernetes Job to deploy a QKB, and write it into your site’s overlays directory:
bash deploy-qkb.sh registry.mycompany.com/myownqkb:v1 >> $deploy/site-config/data-quality/custom-qkbs.yaml
This command appends the job configuration for the new QKB to the file called “custom-qkbs.yaml”. This is a convenient place to store all custom QKB jobs, and is suitable for inclusion into your SAS Viya platform’s base kustomization.yaml file as a resource overlay.
NOTE: The Kubernetes job will be deleted immediately upon successful completion.
If you do not yet have a $deploy/site-config/data-quality directory, you can create and initialize it as follows:
mkdir -p $deploy/site-config/data-quality
cp $deploy/sas-bases/overlays/data-quality/* $deploy/site-config/data-quality
To attach custom-qkbs.yaml to your SAS Viya platform’s configuration, edit your base kustomization.yaml file, and find or create the “resources:” section. Under that section, add the following line:
- site-config/data-quality
You can re-apply these kustomizations to bring the new QKB into your SAS Viya platform.
list-qkbs.sh
list-qkbs.sh
A parameter-less script that generates Kubernetes Job YAML to list the names of all QKBs available on the sas-quality-knowledge-bases volume. Output is sent to the log for the pod created by the job.
$ bash list-qkbs.sh | kubectl apply -n name-of-namespace -f -
job.batch/sas-quality-knowledge-base-list-job-ifvw01lr created
$ kubectl -n name-of-namespace logs job.batch/sas-quality-knowledge-base-list-job-ifvw01lr
QKB CI 31
My Own QKB
$ kubectl -n name-of-namespace delete job.batch/sas-quality-knowledge-base-list-job-ifvw01lr
job.batch "sas-quality-knowledge-base-list-job-ifvw01lr" deleted
If a QKB is in the process of being deployed, or was aborted for some reason, you may see the string “(incomplete)” after that QKB’s name:
$ kubectl -n name-of-namespace logs job.batch/sas-quality-knowledge-base-list-job-ifvw01lr
QKB CI 31
My Own QKB (incomplete)
remove-qkb.sh
remove-qkb.sh NAME
Generates Kubernetes Job YAML that removes a QKB from the sas-quality-knowledge-bases volume. The QKB to remove is specified by NAME, which is returned by list-qkbs.sh
. Any errors or other output is written to the associated pod’s log and can be viewed using the kubectl logs
command.
NOTE: The Kubernetes job will be deleted immediately upon successful completion. The kubectl logs and delete commands below can be used to check logs in case of failures in the job.
$ bash remove-qkb.sh "My Own QKB" | kubectl apply -n name-of-namespace -f -
job.batch/sas-quality-knowledge-base-remove-job-zbl4sxmq created
$ kubectl logs -n name-of-namespace job.batch/sas-quality-knowledge-base-remove-job-zbl4sxmq
Reference data content "My Own QKB" was removed.
$ kubectl delete -n name-of-namespace job.batch/sas-quality-knowledge-base-remove-job-zbl4sxmq
job.batch "sas-quality-knowledge-base-remove-job-zbl4sxmq" deleted
For more information about the QKB, see the SAS Data Quality documentation.
SAS Data Quality for Payment Integrity Health Care (DQHFWA) provides a tool for forensic accountants and data analysts to discover wasteful and fraudulent activity with submittal and payment of medical claims.
An external PostgreSQL database is required. Although SAS Data Quality for Payment Integrity Health Care does not require the PostgreSQL Common Data Store (CDS) database, SAS recommends that the external CDS PostgreSQL database be configured along with the external Platform PostgreSQL database due to the expected data volumes. Data volume includes the size of the temporary (work) tables for merges and joins of transient table data that the application might choose to use, the stage tables where the Data Quality algorithm might process the data, and finally the warehouse tables where pristine datasets might rest.
Strictly using the SAS Viya Platform PostgreSQL database to contain the customer data can negatively affect performance of your Viya platform. For more information, see SAS Common Data Store Requirements.
Platform PostgreSQL is required in the SAS Viya platform. Refer to the
instructions in the README file located at
$deploy/sas-bases/examples/postgres/README.md
(for Markdown format) or at
$deploy/sas-bases/docs/configure_postgresql.htm
(for HTML format) for information about configuring an external
instance of PostgreSQL.
Use of CDS PostgreSQL is optional but recommended for the SAS Data Quality for Payment Integrity Health Care.
Refer to the README file
located at $deploy/sas-bases/examples/postgres/README.md
(for Markdown format) or at
$deploy/sas-bases/docs/configure_postgresql.htm
(for HTML format) for information about configuring an
external instance of PostgreSQL for CDS.
In the top of the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
), add the following entry to allow the Compute
server to refresh authorization tokens.
Note: This entry must be placed above the - sas-bases/overlays/required/transformers.yaml
line.
transformers:
- sas-bases/overlays/sas-programming-environment/refreshtoken
In the transformers block of the base kustomization.yaml, add the following entry to allow the Compute server startup script to run.
transformers:
- sas-bases/overlays/sas-programming-environment/enable-admin-script-access.yaml
In the transformers block of the base kustomization.yaml file, add the following entry to add the required overlays for the sas-data-quality-hfwa application.
Note: This entry must be placed above the - sas-bases/overlays/required/transformers.yaml
line.
transformers:
- sas-bases/overlays/sas-data-quality-hfwa/hfwa-required-transfomers.yaml
The access token validity time needs to be increased for the SAS Compute Server and SAS Studio to handle long-running jobs. This also exposes the file paths to generated application code in SAS Studio.
a. Copy the file $deploy/sas-bases/examples/configuration/sitedefault.yaml
to
the $deploy/site-config
directory if it does not already exist.
b. Add the following content to the $deploy/site-config/sitedefault.yaml
file.
sas.studio:
showServerFiles: true
fileNavigationRoot: "CUSTOM"
fileNavigationCustomRootPath: "/dqhfwa"
oauth2.client:
Services: "cas-shared-default, Compute Service, Credentials service, Job Execution service, Launcher service"
accessTokenValidty: 216000
refreshTokenValidity: 216000
sas.logon.jwt:
policy.accessTokenValiditySeconds: 216000
policy.global.accessTokenValiditySeconds: 216000
policy.global.refreshTokenValiditySeconds: 216000
policy.refreshTokenValiditySeconds: 216000
If you are using the recommended CDS PostgreSQL instance, also perform steps 5 and 6.
In the transformers block of the base kustomization.yaml file, add a reference to the file sas-bases/overlays/sas-data-quality-hfwa/hfwa-server-use-cds-postgres-config-map.yaml.
Note: This entry must be placed above the - sas-bases/overlays/required/transformers.yaml
line.
transformers:
- sas-bases/overlays/sas-data-quality-hfwa/hfwa-server-use-cds-postgres-config-map.yaml
In the generators block of the base kustomization.yaml file, add a reference to the cds-config-map file
sas-bases/overlays/sas-data-quality-hfwa/hfwa-add-cds-config-map.yaml
.
generators:
- sas-bases/overlays/sas-data-quality-hfwa/hfwa-add-cds-config-map.yaml
Before deploying SAS Data Quality for Payment Integrity Health Care, you need to create the necessary directories required by the application on the NFS server, and then assign those directories to the volumes and volume mounts defined in the application, SAS compute server, and the SAS CAS server.
Create the file shares on the NFS server for use by the application.
Note: You will need the SSH private key created for access to the
jumpserver and the user ID and public IP address of the jumpserver. Replace the
indicated values enclosed with {{ }}
in the export statements in the script
with your specific values.
Copy the file $deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh to the $deploy/site-config/sas-data-quality-hfwa directory and make it writable. If the directory $deploy/site-config/sas-data-quality-hfwa does not exist, create it.
chmod +wx $deploy/site-config/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh
Replace the variables in the script $deploy/site-config/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh with values specific to your environment:
- Replace {{ NAMESPACE }} with the Kubernetes namespace of your SAS Viya platform installation.
- Replace {{ SSH_PRIVATE_KEY }} with the path to the SSH private key file used to access the jumpserver.
- Replace {{ JUMP_SERVER }} with the IP address of the jump server.
- Replace {{ JUMP_SERVER_JUMP_USER }} with the username of the user with access to the jump server.

Execute the modified script from a Linux terminal on your deployment server:
$deploy/site-config/sas-data-quality-hfwa/hfwa_create_nfs_directories.sh
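As a sketch of the edits described above, the export statements at the top of the script might end up looking like this (the values are illustrative assumptions; replace the placeholders actually present in the script):

```bash
export NAMESPACE=viya4                              # {{ NAMESPACE }}
export SSH_PRIVATE_KEY=/home/user/.ssh/jump_id_rsa  # {{ SSH_PRIVATE_KEY }}
export JUMP_SERVER=203.0.113.10                     # {{ JUMP_SERVER }}
export JUMP_SERVER_JUMP_USER=jumpuser               # {{ JUMP_SERVER_JUMP_USER }}
```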
Copy the file
$deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa-nfs-config-map.yaml
into your $deploy/site-config/sas-data-quality-hfwa
directory
and make it writable:
chmod +w $deploy/site-config/sas-data-quality-hfwa/hfwa-nfs-config-map.yaml
Replace the value of {{ V4_CFG_RWX_FILESTORE_ENDPOINT }} with the IP address of your cluster’s NFS server. Replace the value of {{ V4_CFG_RWX_FILESTORE_DATA_PATH }} with the path to your NFS server Viya share (for example, /export/mynamespace).
In the transformers block of the base kustomization.yaml, add a reference to the file you just copied.
transformers:
- site-config/sas-data-quality-hfwa/hfwa-nfs-config-map.yaml
Before deploying SAS Data Quality for Payment Integrity Health Care, secrets for the database encryption and, if you are using an SFTP server, the SFTP secrets need to be defined.
Copy the file
$deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa-security-add-secret-database-key.yaml
into your $deploy/site-config/sas-data-quality-hfwa
directory
and make it writable:
chmod +w $deploy/site-config/sas-data-quality-hfwa/hfwa-security-add-secret-database-key.yaml
Edit the file and change the value of {{ DATABASE_ENCRYPTION_KEY }} in the literals section to a phrase with exactly 32 characters (no spaces) of your choice. Here is an example:
## This SecretGenerator creates a Secret containing an AES key used by
## sas-data-quality-hfwa to securely store data
---
apiVersion: builtin
kind: SecretGenerator
metadata:
name: sas-data-quality-hfwa-db-key
literals:
- key=thisisanexample32byteaeskey12345 # Change me
type: Opaque
Copy the file $deploy/sas-bases/examples/sas-data-quality-hfwa/hfwa-security-add-secret-sftp-keys.yaml into your $deploy/site-config/sas-data-quality-hfwa directory and make it writable:
chmod +w $deploy/site-config/sas-data-quality-hfwa/hfwa-security-add-secret-sftp-keys.yaml
Copy the SFTP server private RSA key file to the $deploy/site-config/security directory. If the security directory does not exist, create it in the $deploy/site-config directory. Replace the {{ CONNECTION_NAME }} value with the name of the connection that you will use for an SFTP connection. Replace the {{ RELATIVE_PATH_TO_KEY_FILE }} value with the relative path to the file you just copied (such as site-config/security/sftpkey).
If you have multiple SFTP servers, you can add additional entries under the
files section.
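As an illustration only, the files section of the secret generator might look like this after the placeholders are replaced (the connection names are hypothetical):

```yaml
files:
  - mysftpconnection=site-config/security/sftpkey
  # Additional SFTP servers get additional entries, for example:
  # - othersftpconnection=site-config/security/othersftpkey
```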
Add references to these files under the generators: section of the kustomization.yaml file:
generators:
- site-config/sas-data-quality-hfwa/hfwa-security-add-secret-database-key.yaml
- site-config/sas-data-quality-hfwa/hfwa-security-add-secret-sftp-keys.yaml
SAS code programs execute in the Compute Server service. For performant execution, the Compute Server relies on fast data storage for the transient intermediate data files it generates within SASWORK. Therefore, depending on the anticipated data volume, it is strongly recommended to configure the Compute Servers to have access to fast local storage. On Azure, Ls-series v3 server instances are a good fit and the recommendation. The local NVMe storage available on these instances can be configured to use RAID and striping to provide both disk size/volume and performance for SASWORK utilization.
Note: Adding NVMe storage and changing the SASWORK location is optional. It is only required if the data volume being processed exceeds the capacity of the default location for SASWORK.
If you decide to use a server that has fast local storage for the compute
server nodes, in the resources block of the base kustomization.yaml, add a reference to the file
sas-bases/overlays/sas-data-quality-hfwa/compute-server/compute-nvme-ssd.yaml
.
resources:
- sas-bases/overlays/sas-data-quality-hfwa/compute-server/compute-nvme-ssd.yaml
In the transformers block of the base kustomization.yaml, add a reference to the file
sas-bases/overlays/sas-data-quality-hfwa/compute-server/custom-saswork-location.yaml
.
transformers:
- sas-bases/overlays/sas-data-quality-hfwa/compute-server/custom-saswork-location.yaml
Processing large data volume requires increasing the default SAS Compute
server HTTP timeout setting. To adjust the setting, refer to the README file located
at $deploy/sas-bases/examples/compute/client-request-timeout/README.md
(for Markdown
format) or at $deploy/sas-bases/docs/update_compute_service_internal_http_request_timeout.htm
(for HTML format).
Some of the SAS code programs execute in the SAS CAS Server in a distributed manner across all CAS instances (depending on SMP vs MPP deployment). Similar to the Compute Server, CAS instances also rely on fast data storage for the transient intermediate data files they memory-map and generate within CASCACHE. Therefore, depending on the data volume represented within memory or spilled to disk, it is strongly recommended to configure the CAS Servers to have access to fast local storage. On Azure, Ls-series v3 server instances are a good fit and the recommendation. The local NVMe storage available on these instances can be configured to use RAID and striping to provide both disk size/volume and performance for CASCACHE utilization.
Note: Adding NVMe storage and changing the CASCACHE location is optional and only required if the data volume being processed exceeds the capacity of the default location for CASCACHE.
If you have decided to use a server that has fast local storage for the CAS
server nodes, in the resources block of the base kustomization.yaml file, add a reference to the file
sas-bases/overlays/sas-data-quality-hfwa/cas-server/cas-nvme-ssd.yaml
.
resources:
- sas-bases/overlays/sas-data-quality-hfwa/cas-server/cas-nvme-ssd.yaml
In the transformers block of the base kustomization.yaml, add a reference to the file
sas-bases/overlays/sas-data-quality-hfwa/cas-server/custom-caswork-location.yaml
.
transformers:
- sas-bases/overlays/sas-data-quality-hfwa/cas-server/custom-caswork-location.yaml
Due to large volumes of data being processed, SAS recommends that the number of CAS workers be increased to at least three for increased performance. To increase the number of CAS workers, see the “Manage the Number of Workers” section of the README file located at $deploy/sas-bases/examples/cas/configure/README.md (for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm (for HTML format).
Before deploying SAS Data Quality for Payment Integrity Health Care, the file shares used by the application need to be allowed proper access in CAS.
Copy the file
$deploy/sas-bases/examples/cas/configure/cas-add-allowlist-paths.yaml
into
your $deploy/site-config/sas-data-quality-hfwa
directory and make it writable:
chmod +w $deploy/site-config/sas-data-quality-hfwa/cas-add-allowlist-paths.yaml
Replace the patch: |-
section of the yaml file with the following code.
Note: If you already have this file in your deployment for other
applications, add the code starting at the line following the patch: |-
line
to your existing file in the patch block.
patch: |-
- op: add
path: /spec/appendCASAllowlistPaths/-
value:
/dqhfwa/data/incoming
- op: add
path: /spec/appendCASAllowlistPaths/-
value:
/dqhfwa/sascode/data/module_specific
- op: add
path: /spec/appendCASAllowlistPaths/-
value:
/dqhfwa/job_code/data/module_specific/entity_resolution
In the transformers block of the base kustomization.yaml, add a reference to the file you just copied, or skip this step if the reference already exists.
transformers:
- site-config/sas-data-quality-hfwa/cas-add-allowlist-paths.yaml
For more information about configuration and using example files, see the SAS Viya Platform: Deployment Guide.
This README file describes the configuration settings available for deploying and running SAS Detection Engine. The sections of this README correspond to sections of the full example template, detection-engine-deployment.yaml. In addition to the full template, examples of how to complete each section are also available in /$deploy/sas-bases/examples/sas-detection/
.
Create a copy of the example template in /$deploy/sas-bases/examples/sas-detection/detection-engine-deployment.yaml
. Save this copy in /$deploy/site-config/sas-detection/detection-engine-deployment.yaml
.
Placeholders are indicated by curly brackets, such as {{ DECISION }}. Find and replace the placeholders with the values you want for your deployment. After all placeholders have been filled in, directly apply your deployment yaml via kubectl apply, indicating the file you’ve just filled in.
kubectl apply -f detection-engine-deployment.yaml
The example files are located at /$deploy/sas-bases/examples/sas-detection/
. Each item in the list includes a description of the example and the example file name.
This is the most customizable section of the template. Each container has various environmental options that can be set.
The SAS Container Runtime (SCR) container requires an image to be specified. This image is available in your configured Docker registry and contains the output of your design-time work in the SAS Viya platform.
containers:
- name: sas-sda-scr
# Image from your docker registry
image: {{ DECISION }}
Other than the image, the only required properties for the sas-sda-scr container are SAS_REDIS_HOST and SAS_REDIS_PORT. The other properties are optional security properties covered in detail in the security section. See the container-configuration.yaml file for the minimal required configuration.
The sas-detection container includes a few categories of environmental properties: logging properties, Kafka properties, Redis properties, and processing options. Optional security-related properties are covered further in the security section. See the container-configuration.yaml file for the minimal required configuration.
SAS_LOG_LEVEL can be DEBUG, INFO, WARN, or ERROR. The value determines the verbosity of the log output. WARN or ERROR should be used where performance is important.
SAS_LOG_LOCALE determines which locale messages should be included in the output, where the default is “en”.
There is a property for the bootstrap server to connect, a few properties to indicate the topics sas-detection will use to read/write, and a boolean determining whether reading from Kafka is enabled.
SAS_DETECTION_KAFKA_SERVER is the Kafka bootstrap server.
SAS_DETECTION_KAFKA_TDR_TOPIC is the transaction detection repository (output) topic.
SAS_DETECTION_KAFKA_REJECTTOPIC is the reject topic for when errors occur.
SAS_DETECTION_KAFKA_TOPIC is the input message topic.
SAS_DETECTION_KAFKA_CONSUMER_ENABLED determines whether sas-detection will consume messages from the SAS_DETECTION_KAFKA_TOPIC.
SAS_DETECTION_REDIS_HOST is the Redis host and SAS_DETECTION_REDIS_PORT is the port used to connect to Redis.
SAS_DETECTION_REDIS_POOL_SIZE is the size of the connection pool for the go-redis client. If not specified, this defaults to 10.
For metrics gathering and reporting to work correctly, SAS_DETECTION_DEPLOYMENT_NAME must match your deployment name and SAS_DETECTION_PROCESSING_DISABLEMETRICS must be set to “false”.
SAS_DETECTION_PROCESSING_SLA determines the threshold in milliseconds after which a transaction should fail with an SLA error.
SAS_DETECTION_PROCESSING_SETVERBOSE is an integer between 1 and 13, inclusive, which determines the logging level within the sas-sda-scr container.
SAS_DETECTION_PROCESSING_OUTPUT_FILTER allows the output REST response to be filtered. It is a comma-separated list of variable sets or variables in your message, for example: message.sas.system,message.request,message.sas.decision.
SAS_DETECTION_KAFKA_BYPASS disables Kafka reads and writes if set to “true”.
SAS_DETECTION_RULE_METRICS_BYPASS disables rule metrics reads and writes to Redis if set to “true”.
SAS_DETECTION_WATCHER_INTERVAL_SEC is the interval in seconds at which the watcher will check your docker registry for an update to the image in your sas-sda-scr container.
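To make the property groups above concrete, here is a hypothetical excerpt of the sas-detection container’s env section (all values are illustrative; the minimal required configuration is in the container-configuration.yaml example file):

```yaml
env:
  - name: SAS_LOG_LEVEL
    value: "WARN"
  - name: SAS_DETECTION_KAFKA_SERVER
    value: "kafka.example.com:9092"
  - name: SAS_DETECTION_KAFKA_TOPIC
    value: "detection-input"
  - name: SAS_DETECTION_KAFKA_TDR_TOPIC
    value: "detection-tdr"
  - name: SAS_DETECTION_KAFKA_CONSUMER_ENABLED
    value: "true"
  - name: SAS_DETECTION_REDIS_HOST
    value: "redis.example.com"
  - name: SAS_DETECTION_REDIS_PORT
    value: "6379"
```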
These resources don’t need much customization. They require the SUFFIX to be filled in, and the NAMESPACE to be specified, as indicated in the template. The ingresses additionally require the host property be specified. There is a service and ingress for each of the containers defined in the deployment.
The services are ClusterIP services, accessed externally via the ingress resources. The ports are already filled in and line up with the prefilled ingress ports.
The ingresses include the host, and rules for directing requests. For the sas-detection ingress, anything sent with /detection as the path prefix will use this ingress. The services above are referenced in these ingresses.
See the ingress-setup-insecure.yaml file for an example.
If you are deploying your SAS Detection Engine on OpenShift, you will not be able to use the Ingress resource. In this case, replace your ingress resource with an OpenShift Route.
See the openshift-route.yaml file for an example.
These only require that the NAMESPACE be specified.
The reader role allows the pods in the specified namespace to retrieve info on deployments and pods used to report metrics for all replicas. The SAS Container Runtime (SCR) container also uses this role to read service and endpoint information. The scaler role allows the pods to scale themselves up or down, which is necessary for them to restart themselves upon seeing an update to a decision image. The secretReader role allows the pods to access Kubernetes secrets, in order to get the authorization information required to interact with the tag registry.
The RoleBinding resources add these roles to the service account in your NAMESPACE, in order to attach and enable these Roles.
See the roles-and-rolebinding.yaml file for an example.
The sas-detection container uses a readiness probe, which allows Kubernetes to determine when that pod is ready to receive transactions. The initialDelaySeconds field specifies how many seconds Kubernetes should wait before performing the initial probe. The periodSeconds field specifies how many seconds Kubernetes should wait between probes.
More information on readiness probes is available here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
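A minimal sketch of such a probe is shown below; the endpoint path and port are illustrative assumptions, and the actual values come from the deployment template:

```yaml
readinessProbe:
  httpGet:
    path: /detection/ready   # hypothetical endpoint
    port: 8080               # hypothetical port
  initialDelaySeconds: 10    # wait before the first probe
  periodSeconds: 5           # wait between probes
```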
There are optional, commented out sections that may be used to create the secrets containing TLS certificates and keys. The data must be base64 encoded and included in these definitions. These secrets could optionally be created manually via kubectl, or managed via cert-manager. If the secrets are created via some other method, the secret names must still match those referenced in the volumes and ingress definitions.
An alternative is using the selfsigned-certificates.yaml example file. Placeholders in this file are indicated by curly brackets, such as {{ DNS_NAME }}. Find and replace the placeholders with the values you want for your certificates. This file is optional and may be edited as needed to fit your purposes. As with the detection-engine deployment file, you create these resources directly using kubectl apply. This file must be applied once, and it will generate secrets containing your certificates and keys.
In addition to one-way TLS, the Detection Engine allows the optional configuration of mutual TLS (mTLS) connections to itself, as well as outgoing mutual TLS connections to Redis and Kafka. Mutual TLS allows the server to authenticate the client using a client certificate and client key that the client sends to the server. This certificate and key pair needs to be signed by a CA the server is configured to trust, and then supplied by the client to connect to the server. Examples of client certificates can be found in the /$deploy/sas-bases/examples/sas-detection/selfsigned-certificates
example file, where the usage field includes “client auth” as a value.
To add TLS to your ingress, some annotations and spec fields must be added. These will require certificates either included in this template, or created and supplied previously. The template includes a TLS ingress that is commented out, but the below examples break down what is different in this ingress.
To secure your ingress, the following annotations can be used to add one-way TLS, mutual TLS, or both.
annotations:
# Used to enable TLS
nginx.ingress.kubernetes.io/auth-tls-secret: {{ NAMESPACE }}/detection-ingress-tls-ca-config-{{ ORGANIZATION }}
# used to enable mTLS
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
For one-way TLS, fill in the tls field under the spec field. This also includes a secretName, which includes your TLS certificate.
tls:
- hosts:
- {{ ORGANIZATION }}.{{ INGRESS-TYPE }}.{{ HOST }}
secretName: detection-ingress-tls-config-{{ ORGANIZATION }}
See the ingress-setup-secure.yaml file for an example of where to add these fields to your deployment yaml.
Depending on the security configuration, mounting additional trusted certificates in your containers may be necessary. The areas to add these are tagged with SECURITY, and can be uncommented as necessary. The secret names must match whatever secrets have been configured for these certificates.
There are three volumes created from secrets containing TLS certificates. One volume is for sas-detection certificates, one volume is for Redis certificates, and one volume is for Kafka certificates. These are defined for each container in the Deployment spec.
After being created, these volumes may be mounted in the sas-sda-scr and sas-detection containers. As defined in the template, the detection certificates are mounted in /security, the redis certificates are mounted in /security/redis, and the kafka certificates are mounted in /security/kafka. The sas-sda-scr container does not access Kafka, so it does not require the Kafka mount.
See the container-configuration-secure.yaml file for an example. Note that the volumes are created once outside the container definitions, and then used to create volumeMounts within each container.
The security properties for this container deal with Redis TLS. Not all are required. They cover authentication, one-way TLS, and mutual TLS.
SAS_REDIS_AUTH_USER and SAS_REDIS_AUTH_PASSWORD are required when the Redis service is configured with user password. They can be entered directly, or referenced from a Kubernetes secret.
SAS_REDIS_CA_CERT is the path to the certificate in the container for one-way TLS.
SAS_REDIS_TRUST_CERT_PATH is optional and may be used to add additional trusted certificates.
SAS_REDIS_CLIENT_CERT_FILE and SAS_REDIS_CLIENT_PRIV_KEY_FILE are required only to configure mutual TLS. They contain the client certificate and key used for client verification by the server.
SAS_REDIS_TLS is used with a TLS-enabled Redis. A value of “1”, “Y”, or “T” will allow TLS. A value of “0”, “N”, or “F” will prohibit TLS. If a value is not entered, the default behavior is to prohibit TLS.
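For illustration, the Redis-related env entries of the sas-sda-scr container might look like this for a TLS-enabled Redis (the host, port, and certificate path are hypothetical):

```yaml
env:
  - name: SAS_REDIS_HOST
    value: "redis.example.com"
  - name: SAS_REDIS_PORT
    value: "6380"
  - name: SAS_REDIS_TLS
    value: "Y"                       # allow TLS
  - name: SAS_REDIS_CA_CERT
    value: "/security/redis/ca.crt"  # path to the mounted CA certificate
```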
SAS detection includes properties to enable TLS and mutual TLS for Redis and Kafka.
SAS_DETECTION_REDIS_AUTH_USER allows a user to be entered for Redis. Not required, defaults to “default” user.
SAS_DETECTION_REDIS_AUTH_PASS allows a password to be entered for Redis.
SAS_DETECTION_REDIS_TLS_ENABLED should be set to true if the Redis server has TLS enabled.
SAS_DETECTION_REDIS_TLS_CACERT is optional and may be used to add a trusted CA.
SAS_DETECTION_REDIS_CLIENT_CERT_FILE and SAS_DETECTION_REDIS_CLIENT_PRIV_KEY_FILE are optional and may be used to supply a client certificate and client key if connecting to Redis with mutual TLS enabled.
SAS_DETECTION_REDIS_SERVER_DOMAIN can be used to supply the correct hostname for hostname verification of the certificate.
SAS_DETECTION_KAFKA_SECURITY_PROTOCOL can be PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL to indicate which combination of TLS and authentication Kafka is using. This defaults to PLAINTEXT.
SAS_DETECTION_KAFKA_TRUSTSTORE can be used to add trusted certificates.
SAS_DETECTION_KAFKA_ENABLE_HOSTNAME_VERIFICATION enables DNS verification for TLS, defaulting to true.
SAS_DETECTION_KAFKA_CERTIFICATE_LOCATION is the location of the client certificate used to enable mTLS.
SAS_DETECTION_KAFKA_KEY_LOCATION is the location of the client key used to enable mTLS.
SAS_DETECTION_KAFKA_KEY_PASSWORD is the password for the supplied key, if a password is used.
SAS_DETECTION_KAFKA_SASL_USERNAME and SAS_DETECTION_KAFKA_SASL_PASSWORD define the username and password if authentication is enabled for the Kafka cluster.
This README describes how a service account with defined privileges can be added to the sas-detection-definition pod. A service account is required in an OpenShift cluster if it needs to mount NFS. Models are mounted in the detection-definition container using an NFS mount. To enable use of models, the service account requires NFS volume mounting privilege.
The /$deploy/sas-bases/overlays/sas-detection-definition/service-account
directory contains a file to grant security context constraints for using NFS on an OpenShift cluster.
A Kubernetes cluster administrator should add the security context constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:
kubectl apply -f sas-detection-definition-scc.yaml
or
oc create -f sas-detection-definition-scc.yaml
After the security context constraints have been applied, you must link the security context constraints to the appropriate service account that will use it. Use the following command:
oc -n <name-of-namespace> adm policy add-scc-to-user sas-detection-definition -z sas-detection-definition
Make the following changes to the kustomization.yaml file in the $deploy directory:
Here is an example:
resources:
- sas-bases/overlays/sas-detection-definition/service-account/sa.yaml
transformers:
- sas-bases/overlays/sas-detection-definition/service-account/sa-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
In either case, use kustomize build to create and apply the manifests. Run the following command to verify whether the overlay has been applied:
kubectl -n <name-of-namespace> get pod <sas-detection-definition-pod-name> -o yaml | grep serviceAccount
Verify that the output contains the service-account sas-detection-definition.
serviceAccount: sas-detection-definition
serviceAccountName: sas-detection-definition
When SAS Dynamic Actuarial Modeling is deployed, its content is integrated with the SAS Risk Cirrus platform.
The platform includes a common layer (Cirrus Core) that is used by multiple solutions.
Therefore, in order to fully deploy SAS Dynamic Actuarial Modeling, you must deploy, at minimum, the Cirrus Core content in addition to SAS Dynamic Actuarial Modeling.
Preparing and configuring Cirrus Core for deployment is described in the Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-core/README.md
(Markdown format) or
$deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
The Risk Cirrus Core README also contains information about storage options, such as external databases, for your solution. You must complete steps 1-4 described in the Risk Cirrus Core README before deploying SAS Dynamic Actuarial Modeling. Please read that document for important information about the pre-deployment tasks that should be completed prior to deploying SAS Dynamic Actuarial Modeling.
Complete steps 1-4 described in the Cirrus Core README.
Complete step 4 described in the Cirrus Core README to modify your Cirrus Core configuration file. Because SAS Dynamic Actuarial Modeling uses workflow service tasks, a user account must be configured for a workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG
variable to “Y” and assign the user account to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT
variable.
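For example (the account name shown here is purely illustrative), the two settings in the Cirrus Core configuration file would look like this:
SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT=wfadmin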
If you have a $deploy/site-config/sas-risk-cirrus-pcpricing/resources
directory, delete it and its contents. Remove the reference to this directory from the transformers section of your base kustomization.yaml
file ($deploy/kustomization.yaml
). This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
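For example, if your transformers block previously contained a line similar to the following (the exact path of the old transformer file in your site-config may differ), remove that line:
transformers:
...
- site-config/sas-risk-cirrus-pcpricing/resources/pcpricing_transform.yaml
...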
Copy the files in
$deploy/sas-bases/examples/sas-risk-cirrus-pcpricing
to the
$deploy/site-config/sas-risk-cirrus-pcpricing
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env
file, not the old pcpricing_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected .env
file, verify that the overrides have been correctly applied. No further actions are required unless you want to change the connection settings to different overrides.
Modify the configuration.env
file (located in the $deploy/site-config/sas-risk-cirrus-pcpricing
directory). Lines with a #
at the beginning are commented out; their values will not be applied during deployment. To override a default provided by SAS for a given variable, uncomment the line by removing the #
at the beginning of the line and modify as explained in the following section. Specify, if needed, your settings as follows:
a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO).
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample_step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. For SAS Dynamic Actuarial Modeling, the following are interrelated sample installation steps:
- The transfer_files_sampledata step loads SAS sample data to the file service.
- The transfer_files_csv_sampledata step loads CSV sample data to the file service.
- The install_sample_data step creates the pcprfm CAS library and loads tables into it.
- The manage_cas_lib_acl step sets up permissions for the pcprfm CAS library.
- The install_discovery_agent step creates an agent for data analysis in SAS Information Catalog.
To perform the sample installation steps, set this variable to Y. To skip them, set this variable to N. (Default is Y)
c. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, you should leave this variable blank; the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “transfer_files_sampledata”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is set to N, then set this variable to an empty string to skip sample data and any other steps that are marked as samples. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data. (Default is \<Empty list>).
Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in it will be executed.
d. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is set to N, then the “transfer_files_sampledata” and the “install_sample_data” steps will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided sample data to use. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
to “transfer_files_sampledata,install_sample_data” and then delete the sas-risk-cirrus-pcpricing pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
. IMPORTANT: In your initial deployment, this variable should be an empty string, or you risk an incomplete or failed deployment. If you specify a list of comma-separated steps to run, only those steps are performed. If the environment variable is not set, every step is run except for sample steps if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is set to N. (Default is \<Empty list>).
The following is an example of a configuration.env
that you could use for SAS Dynamic Actuarial Modeling. The uncommented parameters will be added to the solution configuration map.
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER=INFO
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=Y
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
In the base kustomization.yaml file in the $deploy
directory, add
site-config/sas-risk-cirrus-pcpricing/configuration.env
to the configMapGenerator
block. Here is an example:
configMapGenerator:
...
- name: sas-risk-cirrus-pcpricing-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-pcpricing/configuration.env
...
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial deployment of the SAS Viya platform.
Run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Dynamic Actuarial Modeling solution, complete step 6 specified in the Cirrus Core README to verify Cirrus Core.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-pcpricing-parameters -n <name-of-namespace>
Verify that the output contains the desired connection settings that you configured.
Use the $deploy/sas-bases/examples/sas-esp-operator/espconfig/espconfig-properties.yaml
and $deploy/sas-bases/examples/sas-esp-operator/espconfig/espconfig-env-variables.yaml
files to set default settings and environment variables for the SAS Event Stream Processing Kubernetes Operator and all SAS Event Stream Processing servers that start within a Kubernetes environment.
Each default setting and environment variable that is described in these example files represents optional settings that enable you to change the default settings and environment variables.
If no configuration changes are required, do not add these example files to your kustomization.yaml
file.
By default, each default setting or environment variable in the example files is commented out. Start by determining which of the commented settings or environment variables you intend to set. The following list describes the settings and environment variables that can be added and provides information about how to set them.
espconfig-properties.yaml:
- Determine the server.disableTrace value ("true" to avoid log injection).
- Determine the server.mas-threads value ("0" for one thread per CPU).
- Determine the server.store-location value.
- Determine the server.loglevel value (for example, "DF.ESP=trace,DF.ESP.AUTH=info").
- Determine the server.trace value (XML, JSON, or CSV).
- Determine the server.badevents value.
- Determine the server.plugins value.
- Determine the server.pluginsdir value.
- Determine the maximum Kubernetes resource limits that may be allocated to all SAS Event Stream Processing server Kubernetes pod and horizontal pod autoscaling resources: the maxReplicas value, the maxMemory value, and the maxCpu value.
espconfig-env-variables.yaml:
The following environment variables are specified in the example:
- DFESP_QKB - Determine the absolute path to the share directory under the SAS Data Quality installation.
- DFESP_QKB_LIC - Determine the absolute path to the file of the SAS Data Quality license.
- LD_LIBRARY_PATH - Determine paths to append to LD_LIBRARY_PATH. This example transformer enables you to leverage additional content that is located in a mounted path, such as /mnt/path/to/file.
You can add any other environment variable that is not included in this file.
Copy the example files from the $deploy/sas-bases/examples/sas-esp-operator/espconfig
directory to the $deploy/site-config/sas-esp-operator/espconfig
directory.
Create the destination directory if it does not exist.
Use the $deploy/site-config/sas-esp-operator/espconfig/espconfig-properties.yaml
file to specify custom SAS Event Stream Processing default settings.
For each SAS Event Stream Processing default setting that you intend to use, uncomment the op
, path
, and value
lines that are associated with the setting.
Then replace the {{ VARIABLE-NAME }}
variable with the desired value.
Here are some examples:
...
- op: add
path: /spec/espProperties/server.disableTrace
value: "true"
...
- op: add
path: /spec/espProperties/server.loglevel
value: esp=trace
...
- op: replace
path: /spec/limits/maxReplicas
value: "2"
...
Use the $deploy/site-config/sas-esp-operator/espconfig/espconfig-env-variables.yaml
file to specify custom SAS Event Stream Processing default environment variables.
For each SAS Event Stream Processing default environment variable that you intend to use, uncomment the op
, path
, value
, name
, and value
lines that are associated with the environment variable.
Then replace the {{ VARIABLE-NAME }}
variable with the desired value.
If you would like to include additional environment variables that are not in the example file, add new sections for them after the provided examples.
Here are some examples:
...
- op: add
path: /spec/projectTemplate/deployment/spec/template/spec/containers/0/env/-
value:
name: DFESP_QKB_LIC
value: /mnt/data/sas/data/quality/license
...
- op: add
path: /spec/projectTemplate/deployment/spec/template/spec/containers/0/env/-
value:
name: CUSTOM_ENV_VAR_NUMBER
value: "1234"
- op: add
path: /spec/projectTemplate/deployment/spec/template/spec/containers/0/env/-
value:
name: CUSTOM_ENV_VAR_FLAG
value: "true"
...
Add site-config/sas-esp-operator/espconfig/espconfig-properties.yaml
and/or site-config/sas-esp-operator/espconfig/espconfig-env-variables.yaml
to the transformers block of the base kustomization.yaml
file.
Here is an example:
...
transformers:
...
- site-config/sas-esp-operator/espconfig/espconfig-properties.yaml
- site-config/sas-esp-operator/espconfig/espconfig-env-variables.yaml
...
After the base kustomization.yaml
file is modified, deploy the software using
the commands that are described in SAS Viya Platform: Deployment Guide.
SAS Event Stream Processing creates a PersistentVolumeClaim (PVC) with a default storage capacity of 5 GB. Follow these instructions to change that value.
Copy the file $deploy/sas-bases/examples/sas-event-stream-processing-studio-app/storage/esp-storage-size-transformer.yaml
to a location of your choice under $deploy/site-config
, such as $deploy/site-config/sas-event-stream-processing-studio-app/storage
.
Follow the instructions in the copied esp-storage-size-transformer.yaml file to change the values in that file as necessary.
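As a rough sketch only, a storage-size patch of this kind typically resembles the following; the copied esp-storage-size-transformer.yaml documents the exact target name and patch path, which may differ from this illustration.
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: esp-storage-size
# The target name below is illustrative; use the PVC name documented in the copied file.
target:
  kind: PersistentVolumeClaim
  name: sas-event-stream-processing-studio-app
patch: |-
  - op: replace
    path: /spec/resources/requests/storage
    value: 20Gi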
Add the full path of the copied file to the transformers block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). For example, if you
moved the file to $deploy/site-config/backup
, you would modify the
base kustomization.yaml file like this:
...
transformers:
...
- site-config/backup/esp-storage-size-transformer.yaml
...
After the base kustomization.yaml file is modified, deploy the software using the commands described in SAS Viya Platform Deployment Guide.
The $deploy/sas-bases/examples/sas-esp-operator/esp-server-connectors-config
directory contains files to configure the SAS Event Stream Processing Kubernetes Operator to include SAS Event Stream Processing connectors configuration.
For information, see Overview to Connectors.
The example files provided assume the following:
$deploy
refers to the directory that contains the deployment assets.$deploy
directory.$deploy
directory is the current directory.Create the $deploy/sas-config/esp-server-connectors-config
directory. Copy the content from the $deploy/sas-bases/examples/sas-esp-operator/esp-server-connectors-config
directory to the $deploy/site-config/esp-server-connectors-config
directory.
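For example, assuming $deploy is the directory that contains your deployment assets, the following commands create the destination directory and copy the example content into it:
mkdir -p $deploy/site-config/esp-server-connectors-config
cp -r $deploy/sas-bases/examples/sas-esp-operator/esp-server-connectors-config/* $deploy/site-config/esp-server-connectors-config/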
The $deploy/site-config/esp-server-connectors-config/secret.yaml
file contains a Kubernetes secret resource. The secret contains a value for the ESP Server connectors.config
file content. The connectors.config
value should be updated with SAS Event Stream Processing Server connector configuration parameters. For information, see Setting Configuration Parameters in a Kubernetes Environment.
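For orientation only, a secret that carries connectors.config content generally resembles the following sketch. The shipped secret.yaml defines the actual resource name and keys, and the content shown here is a placeholder.
apiVersion: v1
kind: Secret
metadata:
  # The resource name here is illustrative; keep the name defined in the provided secret.yaml.
  name: esp-server-connectors-config
type: Opaque
stringData:
  connectors.config: |
    # SAS Event Stream Processing Server connector configuration parameters go here.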
Make the following changes to the base kustomization.yaml file ($deploy/kustomization.yaml
).
$deploy/site-config/esp-server-connectors-config/secret.yaml
to the resources block.$deploy/site-config/esp-server-connectors-config/patchtransformer.yaml
to the transformers block.The references should look like this:
...
resources:
...
- site-config/esp-server-connectors-config/secret.yaml
...
transformers:
...
- site-config/esp-server-connectors-config/patchtransformer.yaml
...
After you modify the $deploy/kustomization.yaml
file, deploy the software using the commands described in Deploy the Software.
$deploy/site-config/esp-server-connectors-config/project.xml.
To configure SAS Event Stream Processing Studio to use analytic store (ASTORE) files inside the application’s container, a volume mount with a PersistentVolumeClaim (PVC) of sas-microanalytic-score-astores is required in the deployment.
Before proceeding, ensure that a PVC is defined by the SAS Micro Analytic Service Analytic Store Configuration for the sas-microanalytic-score service.
Consult the $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md file.
In the base kustomization.yaml file in the $deploy directory, add sas-bases/overlays/sas-event-stream-processing-studio-app/astores/astores-transformer.yaml to the transformers block. The reference should look like this:
...
transformers:
...
- sas-bases/overlays/sas-event-stream-processing-studio-app/astores/astores-transformer.yaml
...
After the base kustomization.yaml file is modified, deploy the software using the commands described in SAS Viya Platform Deployment Guide.
To configure SAS Event Stream Manager to use analytic store (ASTORE) files inside the application’s container, a volume mount with a PersistentVolumeClaim (PVC) of sas-microanalytic-score-astores is required in the deployment.
Before proceeding, ensure that a PVC is defined by the SAS Micro Analytic Service Analytic Store Configuration for the sas-microanalytic-score service.
Consult the $deploy/sas-bases/examples/sas-microanalytic-score/astores/README.md file.
In the base kustomization.yaml file in the $deploy directory, add sas-bases/overlays/sas-event-stream-manager-app/astores/astores-transformer.yaml to the transformers block. The reference should look like this:
...
transformers:
...
- sas-bases/overlays/sas-event-stream-manager-app/astores/astores-transformer.yaml
...
After the base kustomization.yaml file is modified, deploy the software using the commands described in SAS Viya Platform Deployment Guide.
When SAS Expected Credit Loss is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Expected Credit Loss solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md
(Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Expected Credit Loss: Administrator’s Guide.
Complete steps 1-4 described in the Risk Cirrus Core README.
Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env
configuration file. Because SAS Expected Credit Loss uses workflow service tasks, a default service account must be configured for the Risk Cirrus Objects workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable
to “Y” and assign the user ID to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT
variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.
If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.
If you have a $deploy/site-config/sas-risk-cirrus-ecl/resources
directory, take note of the values in your ecl_transform.yaml
file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the following line from the transformers
section: - site-config/sas-risk-cirrus-ecl/resources/ecl_transform.yaml
.
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-ecl
to the $deploy/site-config/sas-risk-cirrus-ecl
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env
and sas-risk-cirrus-ecl-secret.env
files, not the old ecl_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env
and sas-risk-cirrus-ecl-secret.env
files, verify that the overlay settings have been applied successfully to the configmap and to the secret. No further actions are required unless you want to change the connection settings to different overrides.
Modify the configuration.env
file (located in the $deploy/site-config/sas-risk-cirrus-ecl
directory). Lines with a #
at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the #
at the beginning of the line and replace the placeholder with the desired value as explained in the following section. Specify, if needed, your settings as follows:
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO) |
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES | Replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts: - The create_cas_lib step creates the default ECLReporting CAS library that is used for reporting in SAS Expected Credit Loss.- The create_db_auth_domain step creates an ECLDBAuth domain for the riskcirrusecl schema and assigns default permissions.- The create_db_auth_domain_user step creates an ECLUserDBAuth domain for the riskcirrusecl schema and assigns default group permissions.- The import_main_dataloader_files step uploads the Cirrus_ECL_main_loader.xlsx file into the file service under the Products/SAS Expected Credit Loss directory.- The import_sample_data_loader_files step uploads the Cirrus_ECL_sample_data_loader.zip file into the file service under the Products/SAS Expected Credit Loss directory.- The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.- The install_riskengine_curves_project step loads the sample ECL Curves project into SAS Risk Engine.- The install_sampledata step loads sample load data into the riskcirrusecl database schema library.- The install_scenarios_sampledata step loads the sample scenarios into SAS Risk Factor Manager.- The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, Roles, RolePermissions, and Positions. It also loads sample object instances, like Attribution Templates, Configuration Sets, Configuration Tables, Cycles, Data Definitions, Models, Rule Sets and Scripts, as well as the Link Instances, Object Classifications, and Workflows associated with those objects. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.- The load_workflows step loads and activates the ECL workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.- The localize_va_reports step imports localized SAS-provided reports created in SAS Visual Analytics.- The manage_cas_lib_acl step sets up permissions for the default ECLReporting CAS library. Users in the ECLUsers, ECLAdministrators and SASAdministrators groups have full access to the tables.- The transfer_sampledata_files step stores a copy of all sampledata files loaded into the environment into the file service under the Products/SAS Expected Credit Loss directory. 
This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.- The update_db_sampledata_scripts_pg step stores a copy of the install_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the sample data.WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed: - The check_services step checks if the ECL dependent services are up and running.- The check_solution_existence step checks to see if the ECL solution is already running.- The check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.- The create_solution_repo step creates the ECL repository.- The check_solution_running step checks to ensure the ECL solution is running.- The import_solution step imports the solution in the ECL repository.- The load_app_registry step loads the ECL solution into the SAS application registry.- The load_auth_rules step assigns authorization rules for the ECL solution.- The load_group_memberships step assigns members to various ECL groups.- The load_identities step loads the ECL identities.- The load_main_dataloader_objects step loads the Cirrus_ECL_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.- The setup_code_lib_repo step creates the ECL code library directory.- The share_ia_script_with_solution step shares the Risk Cirrus Core individual assessment script with the ECL solution.- The share_objects_with_solution step shares the Risk Cirrus Core code library with the ECL solution.- The upload_notifications step loads workflow notifications into SAS Workflow Manager. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the “transfer_sampledata” and the “load_sample_data” steps will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided sample data to use. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “transfer_sampledata,load_sample_data” and then delete the sas-risk-cirrus-ecl pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS. WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment. |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data. (Default is \<Empty list>). Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully in which case no steps will be executed. If this step is non-empty, only the steps listed in this variable will be executed. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME | Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET | Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the database schema user secret for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME . |
The following is an example of a configuration.env
that you could use for SAS Expected Credit Loss. This example uses the default values provided by SAS except for the solution input data database user name variable. The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
should be replaced with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME=ecluser
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-ecl/configuration.env
to the configMapGenerator
block. Here is an example:
configMapGenerator:
...
- name: sas-risk-cirrus-ecl-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-ecl/configuration.env
...
Save the kustomization.yaml
file.
Modify the sas-risk-cirrus-ecl-secret.env file (in the $deploy/site-config/sas-risk-cirrus-ecl
directory) and specify your settings as follows:
For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
with the database schema user secret. If the directory already exists and already has the expected .env
file, verify that the overlay settings have been applied successfully to the secret. No further actions are required unless you want to change the secret.
The following is an example of a sas-risk-cirrus-ecl-secret.env file that you could use for SAS Expected Credit Loss.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=eclsecret
Save the sas-risk-cirrus-ecl-secret.env
file.
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-ecl/sas-risk-cirrus-ecl-secret.env
to the secretGenerator
block. Here is an example:
secretGenerator:
...
- name: sas-risk-cirrus-ecl-secret
behavior: merge
envs:
- site-config/sas-risk-cirrus-ecl/sas-risk-cirrus-ecl-secret.env
...
Save the kustomization.yaml
file.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial deployment of the SAS Viya platform.
Run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Expected Credit Loss solution, complete step 9 specified in the Risk Cirrus Core README to verify Risk Cirrus Core.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-ecl-parameters -n <name-of-namespace>
Verify that the output contains the configuration values that you specified.
To verify that your overrides were applied successfully to the secret, run the following commands:
Find the name of the secret on the namespace.
kubectl describe secret sas-risk-cirrus-ecl-secret -n <name-of-namespace>
Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.
Verify that the output contains the desired database schema user secret that you configured.
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
SAS Image Staging ensures images are pulled to and staged properly on respective nodes in an effort to decrease start-up times of various SAS Viya platform components. This README describes how to customize your SAS Viya platform deployment for tasks related to SAS Image Staging.
SAS provides the ability to modify the behavior of the SAS Image Staging application to fit the needs of specific environments.
This README describes two areas that can be configured: the mode of operation and the check interval.
SAS Image Staging requires that Workload Node Placement (WNP) be used. Specifically, at least one node in the Kubernetes cluster must be labeled “workload.sas.com/class=compute” in order for SAS Image Staging to function properly.
If WNP is not used, the SAS Image Staging application will not pre-stage images. Timeouts can occur when images are pulled into the cluster the first time or when the image is removed from the image cache and the image needs to be pulled again for use.
For more information about WNP, see Plan the Workload Placement.
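For example, a cluster administrator can apply the required label to a node with a command such as the following (replace <node-name> with the name of the node):
kubectl label node <node-name> workload.sas.com/class=compute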
The default behavior of SAS Image Staging is to start pods on nodes via a daemonset at interval to ensure that relevant images have been pulled to hosts. While this default behavior accomplishes the goal of pulling images to nodes and decreasing start-up times, some users may want more intelligent and specific control with less churn in Kubernetes.
In order for the non-default option described in this README to function, the SAS Image Staging application must have the ability to list nodes. The nodes resource is cluster-scoped and resides outside of the SAS Viya platform namespace. Requirements may not allow for this sort of access, and default namespace-scoped resources do not provide the view needed for this option to work.
The SAS Image Staging application uses the list of nodes to determine which images are currently pulled to each node and their respective versions. If an image is missing or a different version exists on the node, the SAS Image Staging application will target that node for a pull of the image instead of starting daemonsets to pull images.
Regardless of the mode of operation, it is normal to see a number of pods that contain the word “prepull” in their name. The name and frequency in which these pods show up depend on the mode of operation used. These pods are transient and are used to pull images to respective nodes.
Advantages:
Disadvantages:
Advantages:
Disadvantages:
The $deploy/sas-bases/examples/sas-prepull
directory contains an example file named add-prepull-cr-crb.yaml.
This example provides a resource that permits the namespaced sas-prepull service account
to access the nodes resource with the list verb.
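For orientation only, that access is typically expressed as a ClusterRole and ClusterRoleBinding similar to the following sketch; the shipped add-prepull-cr-crb.yaml is the authoritative definition, and the ClusterRole name shown here is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # The ClusterRole name here is illustrative.
  name: sas-prepull-v2-{{ NAMESPACE }}
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sas-prepull-v2-{{ NAMESPACE }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sas-prepull-v2-{{ NAMESPACE }}
subjects:
- kind: ServiceAccount
  name: sas-prepull
  namespace: {{ NAMESPACE }}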
To enable the Node List Option:
Copy $deploy/sas-bases/examples/sas-prepull/add-prepull-cr-crb.yaml
to
$deploy/site-config/sas-prepull/add-prepull-cr-crb.yaml
.
Modify add-prepull-cr-crb.yaml by replacing all instances of ‘{{ NAMESPACE }}’ with the namespace of the SAS Viya platform deployment where you want node and list access granted for the sas-prepull service account.
Add site-config/sas-prepull/add-prepull-cr-crb.yaml to the resources block of the
base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
resources:
...
- site-config/sas-prepull/add-prepull-cr-crb.yaml
...
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
You should increase the resource limit of the SAS Image Staging deployment if the node list option is used and you plan to use autoscaling in your cluster. The default values for CPU and Memory limits are 1 and 1Gi respectively.
The $deploy/sas-bases/examples/sas-prepull
directory contains an example file named change-resource-limits.yaml.
This example provides a patch that will change the values for resources limits in the SAS Image Staging application pod.
Steps to modify:
Copy $deploy/sas-bases/examples/sas-prepull/change-resource-limits.yaml
to
$deploy/site-config/sas-prepull/change-resource-limits.yaml
.
Modify change-resource-limits.yaml by replacing the resource limit values to match your needs.
Add site-config/sas-prepull/change-resource-limits.yaml to the transformers block of the
base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- site-config/sas-prepull/change-resource-limits.yaml
...
Remove site-config/sas-prepull/add-prepull-cr-crb.yaml from the resources block of the base
kustomization.yaml file ($deploy/kustomization.yaml
). This is to ensure the option does not
get applied in future Kustomize builds.
If there are no other SAS Viya platform deployments in other namespaces in the cluster, execute
kubectl delete -f $deploy/site-config/sas-prepull/add-prepull-cr-crb.yaml
to remove the
ClusterRole and ClusterRoleBinding from the cluster. If there are other SAS Viya platform deployments
in other namespaces in the cluster, execute kubectl delete clusterrolebinding sas-prepull-v2-{{ NAMESPACE }} -n {{ NAMESPACE }}
,
where {{ NAMESPACE }} is the namespace of the deployment in which you want the ClusterRoleBinding
removed.
The check interval is the time the SAS Image Staging application pauses between checks for newer versions of images. By default, the check interval in Daemonset mode is 1 hour and the check interval for Node List mode is 30 seconds. These defaults are reasonable given their operation and impact on an environment. However, you may wish to adjust the interval to further reduce churn in the environment. This section of the README describes how to make those interval adjustments.
The interval is configured via two options located in the sas-prepull-parameters configmap. Those options are called SAS_PREPULL_DAEMON_INT and SAS_PREPULL_CRCRB_INT and control the intervals of Daemon Mode and Node List Mode respectively.
The $deploy/sas-bases/examples/sas-prepull
directory contains an example file named change-check-interval.yaml.
This example provides a patch that will change the values for the intervals in the configmap
referenced by the SAS Image Staging application.
Steps to modify:
Copy $deploy/sas-bases/examples/sas-prepull/change-check-interval.yaml
to
$deploy/site-config/sas-prepull/change-check-interval.yaml
.
Modify change-check-interval.yaml by replacing all instances of ‘{{ DOUBLE-QUOTED-VALUE-IN-SECONDS }}’ with the value in seconds for each respective mode. Note that the value must be wrapped in double quotes in order for Kustomize to appropriately reference the value.
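For example, after the placeholders are replaced, the two interval values might read as follows (one hour for Daemon Mode and thirty seconds for Node List Mode); the surrounding structure of the copied file is left unchanged:
SAS_PREPULL_DAEMON_INT: "3600"
SAS_PREPULL_CRCRB_INT: "30"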
Add site-config/sas-prepull/change-check-interval.yaml to the transformers block of the
base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example:
...
transformers:
...
- site-config/sas-prepull/change-check-interval.yaml
...
If you are deploying on Red Hat OpenShift and are using a mirror registry, SAS Image Staging requires a modification to work properly. The change-relpath.yaml file in the $deploy/sas-bases/overlays/sas-prepull directory contains a patch for the relative path of images that are pre-staged by SAS Image Staging.
To use the patch, add sas-bases/overlays/sas-prepull/change-relpath.yaml
to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Make sure the addition is above the line sas-bases/overlays/required/transformers.yaml
.
Here is an example:
...
transformers:
...
- sas-bases/overlays/sas-prepull/change-relpath.yaml
- sas-bases/overlays/required/transformers.yaml
...
SAS Insurance Capital Management provides a ConfigMap whose values control various aspects of its deployment process. This
includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides
default values for these variables as described in the next section. You can override these default values
by configuring a configuration.env
file with your override values and configuring your kustomization.yaml
file to apply these overrides.
For a list of variables that can be overridden and their default values, see SAS Insurance Capital Management Configuration Parameters.
For the steps needed to override the default values with your own values, see Apply your own overrides to the configuration parameters.
The following table contains a list of parameters that can be specified in the SAS Insurance Capital Management .env
configuration file. These parameters can all be found in the template configuration file (configuration.env
)
but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values
will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you
must uncomment the line by removing the ‘#’ at the beginning of the line.
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your YAML file. For a more verbose level of logging, specify value: "DEBUG" . |
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES | Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts: - The update_db_sampledata_scripts_pg_ics step prepares the ICS sample data scripts into a temporary folder. - The create_db_auth_domain_ics step creates an authentication domain to allow the deployer script to add the ICS sample data to the library. - The create_db_auth_domain_user_ics step adds the install user to the authentication domain. - The update_db_sampledata_scripts_pg_s2 step prepares the Solvency II (SII) sample data scripts into a temporary folder. - The create_db_auth_domain_s2 step creates an authentication domain to allow the deployer script to add the SII sample data to the library. - The load_sample_objects_ics step uploads the sample data resources for ICS to the Cirrus web interface. - The import_sample_dataloader_files_ics step imports the uploaded sample data resources for ICS into Cirrus. - The load_sample_objects_s2 step uploads the sample data resources for SII to the Cirrus web interface. - The import_sample_dataloader_files_s2 step imports the uploaded sample data resources for SII into Cirrus. - The install_sampledata_ics step adds the sample data for ICS to the database. - The install_sampledata_s2 step adds the sample data for SII to the database. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME | Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database. |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Specifies whether you want to skip specific steps during the deployment of SAS Insurance Risk Management. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your YAML file. This means no deployment steps will be explicitly skipped. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Specifies whether you want to run specific steps during the deployment of SAS Insurance Risk Management. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your YAML file. This means all deployment steps will be executed. |
The following table contains a parameter that can be specified in the SAS Insurance Capital Management .env
secret file.
Parameter Name | Description |
---|---|
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET | Specifies the secret to be used for the user specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME above. |
If you want to override any of the SAS Insurance Capital Management configuration parameters rather than using the default values, complete these steps:
If you have a $deploy/site-config/sas-risk-cirrus-icm
directory, take note of the values in
your icm_transform.yaml
file. You may want to use them in the following steps. Once
you have the values you need, delete the directory and its contents.
Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the
following line from the transformers
section:
- site-config/sas-risk-cirrus-icm/resources/icm_transform.yaml
This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
Copy the configuration.env
from $deploy/sas-bases/examples/sas-risk-cirrus-icm
to the
$deploy/site-config/sas-risk-cirrus-icm
directory. Create the destination directory if
one does not exist. If the directory already exists and already has the expected .env
file,
verify that the overrides
have been correctly applied. No further actions are required, unless you want to apply different
overrides.
In the base kustomization.yaml file, add the sas-risk-cirrus-icm-parameters
ConfigMap to the
configMapGenerator
block. If that block does not exist, create it. Here is an example of what
the inserted code block should look like in the kustomization.yaml file:
configMapGenerator:
...
- name: sas-risk-cirrus-icm-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-icm/configuration.env
...
Save the kustomization.yaml file.
Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-icm
directory)
and specify your settings as follows:
a. For the parameter SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-or-DEBUG }}
with the logging level desired.
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-or-N }}
with "Y"
or "N"
. This value determines if the deployment steps that deploy sample artifacts will be executed. If the value is "N"
, the deployment process does not execute the install steps that deploy sample artifacts.
c. For the parameter SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
with the IDs of the steps you want to skip.
d. For the parameter SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
with the IDs of the steps you want to run. Typically, you should leave this variable blank.
Note: If this variable is empty, all steps will be executed unless the solution has already deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in it will be executed.
Save the configuration.env
file.
The following is an example of a .env
file that you could use for SAS Insurance Capital Management. This example will use all of the default values provided by SAS except for the sample artifacts deployment.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-or-DEBUG }}
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
Modify the sas-risk-cirrus-icm-secret.env file (in the $deploy/site-config/sas-risk-cirrus-icm
directory)
and specify your settings as follows:
a. For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
with the database schema user secret.
Save the sas-risk-cirrus-icm-secret.env
file.
The following is an example of a .env
file that you could use for SAS Insurance Capital Management.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=EXAMPLESECRET
In the base kustomization.yaml file, in the $deploy
directory, add sas-risk-cirrus-icm-secret.env
to the
secretGenerator
block. If that block does not exist, create it. Here is an example of what
the inserted code block should look like in the kustomization.yaml file:
secretGenerator:
...
- name: sas-risk-cirrus-icm-secret
behavior: merge
envs:
- site-config/sas-risk-cirrus-icm/sas-risk-cirrus-icm-secret.env
...
Save the kustomization.yaml file.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings.
Note: If you configured overrides during a past deployment, your overrides should be available in the SAS Insurance Risk Management ConfigMap. To verify that your overrides were applied successfully to the ConfigMap, run the following command:
kubectl describe configmap sas-risk-cirrus-icm-parameters -n <name-of-namespace>
Verify that the output contains your configured overrides.
To verify that your overrides were applied successfully to the secret, run the following commands:
Find the name of the secret on the namespace.
kubectl describe secret sas-risk-cirrus-icm-secret -n <name-of-namespace>
Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.
Verify the database schema user secret.
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
Verify that the output contains your configured override. Note that this value will be BASE64 encoded.
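For example, to view the decoded value of a single key (with the kustomize secretGenerator, the key name matches the variable name from the .env file), you can run a command such as:
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d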
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software.
Once the deployment has been completed, SAS recommends reviewing the Administrator Guide for necessary post-deployment instructions (for example, installing Python packages for report generation), suggested site-specific considerations, and performance tuning.
SAS Insurance Contract Valuation Foundation provides a ConfigMap whose values control various aspects of its deployment process. This
includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides
default values for these variables as described in the next section. You can override these default values
by configuring a configuration.env
file with your override values and configuring your kustomization.yaml
file to apply these overrides.
For a list of variables that can be overridden and their default values, see SAS Insurance Contract Valuation Foundation Configuration Parameters.
For the steps needed to override the default values with your own values, see Apply your own overrides to the configuration parameters.
The following table contains a list of parameters that can be specified in the SAS Insurance Contract Valuation Foundation .env
configuration file. These parameters can all be found in the template configuration file (configuration.env
)
but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values
will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you
must uncomment the line by removing the ‘#’ at the beginning of the line.
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your YAML file. For a more verbose level of logging, specify value: "DEBUG" . |
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES | Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts: - The update_db_sampledata_scripts_pg_ifrs17 step prepares the IFRS17 sample data scripts into a temporary folder. - The create_db_auth_domain_ifrs17 step creates an authentication domain to allow the deployer script to add the IFRS17 sample data to the library. - The create_db_auth_domain_user_ifrs17 step adds the install user to the authentication domain. - The load_sample_objects_common step uploads the sample data resources for IFRS17 to the Cirrus web interface. - The import_sample_dataloader_files_common step imports the uploaded sample data resources for IFRS17 into Cirrus. - The install_sampledata_ifrs17 step adds the sample data for IFRS17 to the database. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME | Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database. |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Specifies whether you want to skip specific steps during the deployment of SAS Insurance Risk Management. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your YAML file. This means no deployment steps will be explicitly skipped. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Specifies whether you want to run specific steps during the deployment of SAS Insurance Risk Management. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your YAML file. This means all deployment steps will be executed. |
The following table contains a parameter that can be specified in the SAS Insurance Contract Valuation Foundation .env
secret file.
Parameter Name | Description |
---|---|
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET | Specifies the secret to be used for the user specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME above. |
If you want to override any of the SAS Insurance Contract Valuation Foundation configuration parameters rather than using the default values, complete these steps:
If you have a $deploy/site-config/sas-risk-cirrus-icv/resources
directory, delete it and its contents. Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the following line from the transformers
section:
- site-config/sas-risk-cirrus-icv/resources/icv_transform.yaml
This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
Copy the configuration.env
from $deploy/sas-bases/examples/sas-risk-cirrus-icv
to the $deploy/site-config/sas-risk-cirrus-icv
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env
and sas-risk-cirrus-icv-secret.env
files, not the old icv_transform.yaml file from previous cadences (prior to 2025.02). If the directory already exists and already has the expected configuration.env
and sas-risk-cirrus-icv-secret.env
files, verify that the overlay settings have been applied successfully to the configmap and to the secret. No further actions are required unless you want to change the connection settings to different overrides.
Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-icv
directory)
and specify your settings as follows:
a. For the parameter SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-or-DEBUG }}
with the logging level desired.
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-or-N }}
with "Y"
or "N"
. This value determines if the deployment steps that deploy sample artifacts will be executed. If the value is "N"
, the deployment process does not execute the install steps that deploy sample artifacts.
c. For the parameter SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
with the IDs of the steps you want to skip.
d. For the parameter SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
with the IDs of the steps you want to run. Typically, you should leave this variable blank.
Note: If this variable is empty, all steps will be executed unless the solution has already been deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in this variable will be executed.
Save the configuration.env
file.
The following is an example of a .env
file that you could use for SAS Insurance Contract Valuation Foundation. This example will use all of the default values provided by SAS except for the sample artifacts deployment.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-or-DEBUG }}
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
In the base kustomization.yaml
file, add the sas-risk-cirrus-icv-parameters
ConfigMap to the
configMapGenerator
block. If that block does not exist, create it. Here is an example of what
the inserted code block should look like in the kustomization.yaml file:
configMapGenerator:
...
- name: sas-risk-cirrus-icv-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-icv/configuration.env
...
Save the kustomization.yaml file.
Modify the sas-risk-cirrus-icv-secret.env file (in the $deploy/site-config/sas-risk-cirrus-icv
directory)
and specify your settings as follows:
a. For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
with the database schema user secret.
Save the sas-risk-cirrus-icv-secret.env
file.
The following is an example of a .env
file that you could use for SAS Insurance Contract Valuation Foundation.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=EXAMPLESECRET
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-icv/sas-risk-cirrus-icv-secret.env
to the secretGenerator
block. Here is an example:
secretGenerator:
...
- name: sas-risk-cirrus-icv-secret
behavior: merge
envs:
- site-config/sas-risk-cirrus-icv/sas-risk-cirrus-icv-secret.env
...
Save the kustomization.yaml
file.
Note: If you configured overrides during a past deployment, your overrides should be available in the SAS Insurance Risk Management ConfigMap. To verify that your overrides were applied successfully to the ConfigMap, run the following command:
kubectl describe configmap sas-risk-cirrus-icv-parameters -n <name-of-namespace>
To verify that your overrides were applied successfully to the secret, run the following commands:
Find the name of the secret on the namespace.
kubectl describe secret sas-risk-cirrus-icv-secret -n <name-of-namespace>
Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.
Verify the database schema user secret.
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
Verify that the output contains your configured override. Note that this value will be BASE64 encoded.
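For example, to inspect a single value, you can decode it with base64. The key name used below is an assumption based on the variable name in sas-risk-cirrus-icv-secret.env; use whichever key appears in your secret's data output:
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 --decode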
When you have finished configuring your deployment using the README files that are provided,
complete the deployment steps to apply the new settings. The method by which the manifest is applied
depends on what deployment method is being used. For more information, see
Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, complete all the tasks in the README files that you want to use, and then run kustomize build
to create and apply the manifests.
- If the overlay is applied after the initial deployment of the SAS Viya platform, run kustomize build
to create and apply the manifests.
When the deployment has been completed, SAS recommends that you review the Administrator Guide for suggested site-specific considerations, configurations, and performance tuning.
When SAS Integrated Regulatory Reporting is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Integrated Regulatory Reporting solution successfully, you must deploy the Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-core/resources/README.md
(Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Integrated Regulatory Reporting: Administrator’s Guide.
SAS Integrated Regulatory Reporting provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env
file with your override values and configuring your kustomization.yaml
file to apply these overrides.
For a list of variables that can be overridden and their default values, see SAS Integrated Regulatory Reporting Configuration Parameters and Secrets.
For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters and Secrets.
The following table contains a list of parameters that can be specified in the SAS Integrated Regulatory Reporting .env
configuration file. These parameters can all be found in the template configuration file (configuration.env
) but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG" . |
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES | Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts: - The create_sampledata_folders step creates all sample data folders in the file service under the Products/SAS Integrated Regulatory Reporting directory. - The transfer_sampledata_files step stores a copy of all sample data files in the file service under the Products/SAS Integrated Regulatory Reporting directory. This directory will include DDLs, reports, sample data, and scripts used to load the sample data. - The import_sample_dataloader_files step stores a copy of the Cirrus_EBA_sample_data_loader.xlsx file in the file service under the Products/SAS Integrated Regulatory Reporting directory. Administrators can then download the file from the Data Load page in SAS Integrated Regulatory Reporting and use it as a template to load and unload data. - The install_sampledata step loads the EBA sample data into the database. - The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, and Object Classifications. - The update_db_sampledata_scripts_pg step prepares the EBA sample data scripts into a temporary folder. - The create_db_auth_domain_user_tax_eba step adds the install user to the authentication domain. - The create_db_auth_domain_stg step creates an authentication domain to allow the deployer script to add the sample data of staging tables to the library. WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Specifies whether you want to skip specific steps during the deployment of SAS Integrated Regulatory Reporting. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Specifies whether you want to run specific steps during the deployment of SAS Integrated Regulatory Reporting. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME | Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database. |
The following table contains a parameter that can be specified in the SAS Integrated Regulatory Reporting .env
secret file.
Parameter Name | Description |
---|---|
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET | Specifies the secret to be used for the user specified in SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME above. This parameter can be found in the SAS Integrated Regulatory Reporting secret file (sas-risk-cirrus-eba-secret.env), but it is commented out in that file, so its value is not applied during deployment. To override the SAS-provided default, remove the ‘#’ at the beginning of the line to uncomment it. |
If you want to override any of the SAS Integrated Regulatory Reporting configuration parameters rather than using the default values, complete these steps:
If you have a $deploy/site-config/sas-risk-cirrus-eba
directory, delete it and its contents.
Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the
following line from the transformers
section:
- site-config/sas-risk-cirrus-eba/resources/eba_transform.yaml
This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
Copy the configuration.env
from $deploy/sas-bases/examples/sas-risk-cirrus-eba
to the
$deploy/site-config/sas-risk-cirrus-eba
directory. Create the destination directory if
one does not exist. If the directory already exists and already has the expected .env
file,
verify that the overrides
have been correctly applied. No further actions are required, unless you want to apply different
overrides.
In the base kustomization.yaml file, add the sas-risk-cirrus-eba-parameters
ConfigMap to the
configMapGenerator
block. If that block does not exist, create it. Here is an example of what
the inserted code block should look like in the kustomization.yaml file:
configMapGenerator:
...
- name: sas-risk-cirrus-eba-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-eba/configuration.env
...
Save the kustomization.yaml file.
Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-eba
directory) and specify your settings as follows:
a. For the parameter SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-or-DEBUG }}
with the logging level desired.
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-or-N }}
with "Y"
or "N"
. This value determines if the deployment steps that deploy sample artifacts will be executed. If the value is "N"
, the deployment process does not execute the install steps that deploy sample artifacts.
c. For the parameter SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
with the IDs of the steps you want to skip.
d. For the parameter SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
with the IDs of the steps you want to run. Typically, you should leave this variable blank.
Note: If this variable is empty, all steps will be executed unless the solution has already been deployed successfully, in which case no steps will be executed. If this variable is non-empty, only the steps listed in this variable will be executed.
e. Replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }}
with the user who is intended to own the solution database schema for the SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
parameter. The owner of the SharedServices database is used by default if no value is specified.
Save the configuration.env
file.
The following is an example of a configuration.env
file that you could use for SAS Integrated Regulatory Reporting. This example will use all of the default values provided by SAS.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME={{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }}
Modify the sas-risk-cirrus-eba-secret.env file (in the $deploy/site-config/sas-risk-cirrus-eba
directory). For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
with the input data schema secret for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
.
Save the sas-risk-cirrus-eba-secret.env
file.
The following is an example of a secret.env
file that you could use for SAS Integrated Regulatory Reporting. This example will use the default value provided by SAS.
# SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET={{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-eba-parameters -n <name-of-namespace>
Run the following command to verify whether the overlay has been applied to the secret:
kubectl get secret sas-risk-cirrus-eba-secret -n <name-of-namespace>
Verify that the output contains your configured overrides.
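Because kubectl get secret lists only metadata by default, you may find it easier to confirm the stored values (which are base64 encoded) by adding a jsonpath output, for example:
kubectl get secret sas-risk-cirrus-eba-secret -n <name-of-namespace> -o jsonpath='{.data}'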
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software
This README file describes the settings available for deploying SAS Launcher Service. The example files described in this README file are located at ‘/$deploy/sas-bases/examples/sas-launcher/configure’.
Based on the following descriptions of available example files, determine if you want to use any example file in your deployment. If you do, copy the example file and place it in your site-config directory.
Example files are provided that contain suggested process limits based on your deployment size. There is a file provided for each of the two types of users, regular users and super users.
Regular users (non-super users) have the following suggested defaults according to your deployment size:
* 10 (small)
* 25 (medium)
* 50 (large)
Super users have the following suggested defaults according to your deployment size:
* 15 (small)
* 35 (medium)
* 65 (large)
In the example files, uncomment the value you wish to keep, and comment out the
rest. After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
).
Here is an example using the transformer for regular users:
transformers:
...
- site-config/sas-launcher/configure/launcher-user-process-limit.yaml
The launcher-nfs-mount.yaml file allows you to change the location of the NFS server hosting the user’s home directories. The path is determined by the Identities service.
Create the location site-config/sas-launcher/configure/.
Copy the sas-bases/examples/sas-launcher/configure/launcher-nfs-mount.yaml file to the site-config/sas-launcher/configure/ location.
In the file, replace {{ NFS-SERVER-LOCATION }} with the location of the NFS server. Here is an example:
patch: |-
- op: add
path: /template/metadata/annotations/launcher.sas.com~1nfs-server
value: myserver.nfs.com
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-launcher/configure/launcher-nfs-mount.yaml
Note: If you are performing the tasks in this README before the initial deployment of your SAS Viya software, you should perform the next step after the deployment is completed. If you are updating an existing deployment, you should perform the next step now.
In SAS Environment Manager, set the Identities identifier.homeDirectoryPrefix to the parent path to the home directory location on the NFS server.
The launcher-user-homedirectory-volume.yaml allows you to specify the runtime storage location of the user’s home directory. The path is determined by the Identities service and is mounted using the specified {{ VOLUME-STORAGE-CLASS }}.
Note: Using this feature overrides changes made for the Use NFS Server To Mount Home Directory feature.
Create the location site-config/sas-launcher/configure/
.
Copy the sas-bases/examples/sas-launcher/configure/launcher-user-homedirectory-volume.yaml file to the site-config/sas-launcher/configure/
location.
In the file, replace {{ VOLUME-STORAGE-CLASS }} with the volume storage class of your choice. Here is an example:
patch: |-
- op: add
path: /template/spec/volumes/-
value:
name: sas-launcher-userhome
persistentVolumeClaim:
claimName: home-rwx-claim
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-launcher/configure/launcher-user-homedirectory-volume.yaml
Note: If you are performing the tasks in this README before the initial deployment of your SAS Viya software, you should perform the next step after the deployment is completed. If you are updating an existing deployment, you should perform the next step now.
In SAS Environment Manager, set the Identities identifier.homeDirectoryPrefix to the parent path to mount the home directory location in the pod.
The launcher-locale-encoding-defaults.yaml file allows you to modify the SAS LOCALE and SAS ENCODING defaults. The defaults are stored in a Kubernetes ConfigMap called sas-launcher-init-nls-config, which the Launcher service will use to determine which default values are needed to be set. The LOCALE and ENCODING defaults specified here will affect all consumers of SAS Launcher (SAS Compute Server, SAS/CONNECT, and SAS Batch Server) unless overridden (see below). To update the defaults, replace {{ LOCALE-DEFAULT }} and {{ ENCODING-DEFAULT }}. Here is an example:
patch: |-
- op: replace
path: /data/SAS_LAUNCHER_INIT_LOCALE_DEFAULT
value: en_US
- op: replace
path: /data/SAS_LAUNCHER_INIT_ENCODING_DEFAULT
value: utf8
Note: For a list of the supported values for LOCALE and ENCODING, see LOCALE, ENCODING, and LANG Value Mapping Table.
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-launcher/configure/launcher-locale-encoding-defaults.yaml
The defaults from this ConfigMap can be overridden on individual launcher contexts. For more information on overriding specific launcher contexts, see Change Default SAS Locale and SAS Encoding.
The defaults from this ConfigMap are also overridden by effective LOCALE and ENCODING values derived from an export LANG=langValue statement that is present in a startup_commands configuration instance of sas.compute.server, sas.connect.server, or sas.batch.server. For more information on setting or removing these statements, see Edit Server Configuration Instances.
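For instance, a startup_commands configuration instance that contains a statement such as the following (the locale value is only an illustration) would override the ConfigMap defaults:
export LANG=fr_FR.UTF-8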
Note: When following links to SAS documentation, use the version number selector towards the left side of the header to select your currently deployed release version.
The default values and maximum values for CPU requests and CPU limits can be specified in a Launcher service pod template. The launcher-cpu-requests-limits.yaml allows you to change these default and maximum values for the CPU resource. To update the defaults, replace the {{ DEFAULT-CPU-REQUEST }}, {{ MAX-CPU-REQUEST }}, {{ DEFAULT-CPU-LIMIT }}, and {{ MAX-CPU-LIMIT }} variables with the value you want to use. Here is an example:
patch: |-
- op: add
path: /metadata/annotations/launcher.sas.com~1default-cpu-request
value: 50m
- op: add
path: /metadata/annotations/launcher.sas.com~1max-cpu-request
value: 100m
- op: add
path: /metadata/annotations/launcher.sas.com~1default-cpu-limit
value: "2"
- op: add
path: /metadata/annotations/launcher.sas.com~1max-cpu-limit
value: "2"
Note: For details on the value syntax used above, see Resource units in Kubernetes
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-launcher/configure/launcher-cpu-requests-limits.yaml
Note: The current example PatchTransformer targets all PodTemplates used by sas-launcher. If you wish to target only one PodTemplate, update the PatchTransformer to target a specific PodTemplate name.
The default values and maximum values for memory requests and memory limits can be specified in a Launcher service pod template. The launcher-memory-requests-limits.yaml allows you to change these default and maximum values for the memory resource. To update the defaults, replace the {{ DEFAULT-MEMORY-REQUEST }}, {{ MAX-MEMORY-REQUEST }}, {{ DEFAULT-MEMORY-LIMIT }}, and {{ MAX-MEMORY-LIMIT }} variables with the value you want to use. Here is an example:
patch: |-
- op: add
path: /metadata/annotations/launcher.sas.com~1default-memory-request
value: 300M
- op: add
path: /metadata/annotations/launcher.sas.com~1max-memory-request
value: 2Gi
- op: add
path: /metadata/annotations/launcher.sas.com~1default-memory-limit
value: 500M
- op: add
path: /metadata/annotations/launcher.sas.com~1max-memory-limit
value: 2Gi
Note: For details on the value syntax used above, see Resource units in Kubernetes
After you have edited the file, add a reference to it to the transformers block of the base kustomization.yaml file. Here is an example:
transformers:
...
- site-config/sas-launcher/configure/launcher-memory-requests-limits.yaml
Note: The current example PatchTransformer targets all PodTemplates used by sas-launcher. If you wish to target only one PodTemplate, update the PatchTransformer to target a specific PodTemplate name.
This README describes the steps necessary to disable your SAS Viya platform deployment SAS Launcher Resource Exhaustion protection. Disabling this feature allows users to have no limit to the number of processes they can launch through the SAS Launcher API.
To disable SAS Launcher Resource Exhaustion protection, add sas-bases/overlays/sas-launcher/launcher-disable-user-process-limits.yaml
to the transformers block of the base kustomization.yaml file in the $deploy
directory. Here is an example:
```yaml
transformers:
...
- sas-bases/overlays/sas-launcher/launcher-disable-user-process-limits.yaml
```
When the reference is added to the base kustomization.yaml, use the deployment commands described in SAS Viya Platform: Deployment Guide to apply the new settings.
Tripwires ESP provides real-time notifications for the Investigation Content Pack Tripwires functionality.
The example files provided require SAS Event Stream Processing to be licensed in addition to SAS Law Enforcement Intelligence.
Tripwires ESP comprises an ESP project XML model and an ESP server instance.
The directory $deploy/sas-bases/examples/sas-tripwires-esp
contains the
example project and server definition.
Copy $deploy/sas-bases/examples/sas-tripwires-esp
to
$deploy/site-config/sas-tripwires-esp
.
Add site-config/sas-tripwires-esp
to the resources
block of the
base kustomization.yaml ($deploy/kustomization.yaml
) file.
Here is an example:
resources:
- site-config/sas-tripwires-esp
The $deploy/site-config/sas-tripwires-esp/tripwires.env
file is used to
configure the ESP server instance. The variables in the file should be updated
to reflect the requirements of your deployment.
Here is an example:
# The IP or hostname of the smtp server used to send notifications
SMTPHOST=mailhost
# The tripwire entity configured in SAS Visual Investigator
ENTITY=tripwire
# The interval at which to refresh information from PostgreSQL
PGINTERVAL=60
# The duration to throttle multiple events into a single notification
THROTTLE=10
If the deployment does not use internal TLS then edit
$deploy/site-config/sas-tripwires-esp/tripwires.env
to disable TLS for
RabbitMQ and PostgreSQL.
Here is an example:
RMQSSL=false
PGENCRYPTION=0
If the deployment uses external PostgreSQL:
Edit $deploy/site-config/sas-tripwires-esp/kustomization.yaml
to comment
the sas-tripwires-internal-postgres-config.yaml
transformer and uncomment
the sas-tripwires-external-postgres-config.yaml
transformer.
Here is an example:
```
transformers:
- transformers/sas-tripwires-esp-labels.yaml
- transformers/sas-tripwires-tls-config.yaml
# - transformers/sas-tripwires-internal-postgres-config.yaml
- transformers/sas-tripwires-external-postgres-config.yaml
```
Edit $deploy/site-config/sas-tripwires-esp/transformers/sas-tripwires-external-postgres-config.yaml
to update the two name
properties under secretKeyRef
to match the
name of the secret used for configuring the Platform PostgreSQL instance.
Here is an example:
```
valueFrom:
secretKeyRef:
name: platform-postgres-user
```
Edit $deploy/site-config/sas-tripwires-esp/tripwires.env
to supply the
hostname, port and database of the Platform PostgreSQL server.
Here is an example:
```
PGHOST=viya-postgres.example.com
PGPORT=5432
PGDATABASE=viya
```
The SAS NIBRS Data Loader CronJob runs on a configurable schedule to ingest NIBRS-compliant data files into a PostgreSQL database for consumption by other SAS Viya platform applications.
This README describes the steps necessary to configure the SAS NIBRS Data Loader CronJob.
The SAS NIBRS Data Loader CronJob reads the NIBRS files from a Persistent Volume (PV) in the SAS Viya platform namespace. Create a PV in the SAS Viya platform namespace that supports ReadWriteMany (RWX). See your infrastructure documentation for instructions on how to create a PV. Create a PersistentVolumeClaim (PVC) associated with the PV to mount the PV to the CronJob.
Review the Kubernetes documentation for Persistent Volumes and PersistentVolumeClaims on Kubernetes for more information.
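The following is a minimal sketch of such a PersistentVolumeClaim; the claim name, storage class, and capacity are placeholders that you must adapt to your infrastructure:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nibrs-data-pvc           # placeholder; this is the name you will later use for {{ PVC-NAME }}
spec:
  accessModes:
    - ReadWriteMany              # RWX access is required
  storageClassName: nfs-client   # placeholder; use a storage class that supports RWX
  resources:
    requests:
      storage: 10Gi              # placeholder capacity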
The directory $deploy/sas-bases/examples/sas-nibrs-data-loader
contains the necessary configuration files. Copy $deploy/sas-bases/examples/sas-nibrs-data-loader
to $deploy/site-config/sas-nibrs-data-loader
. Create the destination directory, if it does not already exist.
In the base kustomization.yaml file ($deploy/kustomization.yaml), add a reference to the copied sas-nibrs-data-loader directory in the resources block. Here is an example:
resources:
...
- site-config/sas-nibrs-data-loader
Edit the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/volume-transformer.yaml
file, replacing {{ PVC-NAME }}
with the name of the PVC configured as part of the prerequisites.
Review the Kubernetes documentation for CronJob schedule syntax.
Update the CronJob to the required schedule by editing the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/schedule-transformer.yaml
file, replacing {{ CRON-SCHEDULE }}
with the required schedule.
Note that this transformer will also set the suspend
value of the CronJob to false
, enabling the CronJob to run on the specified schedule. If you want the CronJob to remain suspended, set this value to true
.
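For illustration only, replacing {{ CRON-SCHEDULE }} with a standard five-field cron expression such as the following would run the CronJob every day at 02:00:
0 2 * * *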
In some cases, certain US states do not strictly adhere to the FBI’s NIBRS specification. Additional configuration is required for these exceptional cases.
Edit the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/us-state-transformer.yaml
file. Replace {{ NIBRS_STATE }}
with a two-character code representing the required state as specified in ISO 3166-2:US.
Open the CronJob kustomization file located at $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/kustomization.yaml
. In the transformers section, include the following content:
transformers:
...
- us-state-transformer.yaml
This ensures that the data loader correctly handles the unique data format of the specified state.
The following steps create a configuration for a tenant named “tenant1”. Repeat these steps for each tenant that requires a SAS NIBRS Data Loader CronJob using the actual tenant name in place of “tenant1”.
Copy the example tenant configuration directory from $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant-example
to $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant1
Replace the {{ TENANT-NAME }} placeholders in $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant1/kustomization.yaml
with the tenant name.
Replace the {{ TENANT-NAME }} placeholder in $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-tenant1/nibrs-tenant.env
with the tenant name.
Add the tenant-specific resource to the $deploy/site-config/sas-nibrs-data-loader/kustomization.yaml
file.
Here is an example:
resources:
# - sas-nibrs-data-loader-cronjob
- sas-nibrs-data-loader-tenant1
The NIBRS Data Loader will, by default, create a database schema named nibrsdataloader on the first run. It is possible to override this behavior and specify a custom schema name.
Edit the $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/custom-schema-transformer.yaml
file. Replace {{ NIBRS_SCHEMA }}
with the new schema name. Note that this name must comply with PostgreSQL schema naming rules.
Open the CronJob kustomization file located at $deploy/site-config/sas-nibrs-data-loader/sas-nibrs-data-loader-cronjob/kustomization.yaml
. In the transformers section, include the following content:
transformers:
...
- custom-schema-transformer.yaml
Configuring analytic store (ASTORE) directories is required in order to publish analytic store models from SAS Intelligent Decisioning, SAS Model Manager, and Model Studio to a SAS Micro Analytic Service publishing destination.
Configuring SAS Micro Analytic Service to use ASTORE files inside the container requires persistent storage from the cloud provider. A PersistentVolumeClaim (PVC) is defined to state the storage requirements from cloud providers. The storage provided by the cloud provider is mapped to predefined paths across the services that collaborate to handle ASTORE files.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
Storage for the ASTORE files must support ReadWriteMany access permissions.
Note: The STORAGE-CLASS-NAME from the provider is used to determine the STORAGE-CAPACITY that is required for your ASTORE files. The required storage capacity depends on the size and number of ASTORE files.
Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/astores
to the $deploy/site-config/sas-microanalytic-score/astores
directory. Create the destination directory, if it does not already exist.
Note: If the destination directory already exists, verify that the overlays have been applied.
If the output contains the /models/astores/viya
and /models/resources/viya
mount directory paths, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directories.
The resources.yaml file in $deploy/site-config/sas-microanalytic-score/astores
contains the parameters of the storage that is required in the PersistentVolumeClaim. For more information about PersistentVolumeClaims, see Additional Resources.
Make the following changes to the base kustomization.yaml file in the $deploy directory.
Here is an example:
resources:
- site-config/sas-microanalytic-score/astores/resources.yaml
transformers:
- sas-bases/overlays/sas-microanalytic-score/astores/astores-transformer.yaml
Complete one of the following deployment steps to apply the new settings:
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlays have been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts:
/models/astores/viya from astores-volume (rw,path="models")
/models/resources/viya from astores-volume (rw,path="resources")
By default, SAS Micro Analytic Service is deployed with 750 MB of memory and 250m CPU.
If your SAS Micro Analytic Service deployment requires different resources, you can use the resources-transformer.yaml file in the $deploy/sas-bases/examples/sas-microanalytic-score/resources
directory to configure different values.
Determine the minimum and maximum value of memory and CPU required for your deployment. The values depend on available resources in the cluster and your desired throughput.
Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/resources
to the $deploy/site-config/sas-microanalytic-score/resources
directory. Create the destination directory if it does not exist.
Note: If the destination directory already exists, verify that the overlay has been applied. You do not need to take any further actions, unless you want to change the CPU and memory parameters to different values.
Modify the resources-transformer.yaml in $deploy/site-config/sas-microanalytic-score/resources
to specify your resource settings. For more information about Kubernetes resources, see Additional Resources.
Note: Kubernetes uses units of measurement that are different from the standard. For memory, use Gi for gigabytes and Ti for terabytes. For cores, Kubernetes uses millicores as its standard, and there are 1000 millicores to a core. Therefore, if you want to use 4 cores, use 4000m as your value. 500m is equivalent to half a core.
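The following snippet illustrates the unit notation only (it is not the transformer file itself); the values match the verification example shown later in this README:
resources:
  requests:
    cpu: 250m     # a quarter of a core
    memory: 750M  # 750 megabytes
  limits:
    cpu: "4"      # four full cores (equivalent to 4000m)
    memory: 2Gi   # 2 gibibytes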
In the base kustomization.yaml in $deploy directory, add site-config/sas-microanalytic-score/resources/resources-transformer.yaml to the transformers block.
Here is an example:
transformers:
- site-config/sas-microanalytic-score/resources/resources-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlay has been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the desired CPU and memory values that you configured:
Limits:
cpu: 4
memory: 2Gi
Requests:
cpu: 250m
memory: 750M
If enabled, the SAS Micro Analytic Service archive feature records the inputs and outputs of step execution to a set of rolling log files. To use the archive feature, SAS Micro Analytic Service must be configured with a persistent volume to use as a location in which to store the log files. This README describes how to configure SAS Micro Analytic Service to use a PersistentVolumeClaim to define storage for the archive logs.
By default, the archive feature is not enabled. This README also provides a link to where you can find more information about how to enable the archive feature in SAS Micro Analytic Service.
The archive feature requires storage with ReadWriteMany access mode for storing transaction logs. A PersistentVolumeClaim is defined to specify the storage required.
Note: The STORAGE-CLASS-NAME from the cloud provider is used to determine the STORAGE-CAPACITY that is required for your archives. The required storage capacity depends on the expected transaction volume, the size of your payloads, and your backup strategy.
Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/archive
to the $deploy/site-config/sas-microanalytic-score/archive
directory. Create the destination directory if it does not exist.
Note: If the destination directory already exists, verify that the overlay has been applied.
If the output contains the /opt/sas/viya/config/var/log/microanalyticservice/default/archive
mount directory path, you do not need to take any further actions, unless you want to change the overlay parameters for the mounted directory.
The resources.yaml file in $deploy/site-config/sas-microanalytic-score/archive
contains the parameters of the storage that is required in the PersistentVolumeClaim. For more information about PersistentVolumeClaims, see Additional Resources.
Make the following changes to the kustomization.yaml file in the $deploy directory:
Here is an example:
resources:
- site-config/sas-microanalytic-score/archive/resources.yaml
transformers:
- sas-bases/overlays/sas-microanalytic-score/archive/archive-transformer.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlay has been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory path:
Mounts:
/opt/sas/viya/config/var/log/microanalyticservice/default/archive from archives-volume (rw)
After the deployment is complete, the SAS Micro Analytic Service archive feature must be enabled in SAS Environment Manager. For more information, see Archive Feature Configuration in SAS Micro Analytic Service: Programming and Administration Guide.
This document describes the customizations that can be made by the Kubernetes administrator for deploying, tuning, and troubleshooting SAS Micro Analytic Service.
SAS provides example files for many common customizations. Read the descriptions for the example files in the examples section. Follow these steps to use transformers from examples to customize your deployment.
Copy the example transformer file in $deploy/sas-bases/examples/sas-microanalytic-score/config
to the $deploy/site-config/sas-microanalytic-score/config
directory. Create the destination directory if it does not exist.
Each file has information about its content. The variables in the file are set off by curly braces and spaces, such as {{ VARIABLE-NAME }}. Replace the entire variable string, including the braces, with the value you want to use.
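As a rough sketch only, assuming the example patch adds an environment variable to the pod template (the actual mas-add-environment-variables.yaml file defines the exact structure and placeholder names), a filled-in patch might resemble the following:
patch: |-
  - op: add
    path: /template/spec/containers/0/env/-   # assumed path; confirm against the example file
    value:
      name: my-variable
      value: my-value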
In the base kustomization.yaml file in the $deploy directory, add the path of the copied transformer file (under site-config/sas-microanalytic-score/config/) to the transformers block. Here is an example:
transformers:
- site-config/sas-microanalytic-score/config/mas-add-environment-variables.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: These transformers can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the transformers during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the transformers after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
The example files are located at $deploy/sas-bases/examples/sas-microanalytic-score/config
. The
following is a list of each example file for SAS Micro Analytic Service settings and the file name.
- Add environment variables (mas-add-environment-variables.yaml)
Run the following command to verify whether the transformer has been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the values that you configured:
Environment:
my-variable: my-value
This README describes how privileges can be added to the sas-microanalytic-score pod service account. Security context constraints are required in an OpenShift cluster if the sas-microanalytic-score pod needs to mount an NFS volume. If the Python environment is made available through an NFS mount, the service account requires NFS volume mounting privileges.
Note: For information about using NFS to make Python available, see the README file at /$deploy/sas-bases/examples/sas-open-source-config/python/README.md
(for Markdown format) or /$deploy/sas-bases/docs/configure_python_for_sas_viya.htm
(for HTML format).
The /$deploy/sas-bases/overlays/sas-microanalytic-score/service-account
directory contains a file to grant security context constraints for using NFS on an OpenShift cluster.
A Kubernetes cluster administrator should add these security context constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:
kubectl apply -f sas-microanalytic-score-scc.yaml
or
oc create -f sas-microanalytic-score-scc.yaml
After the security context constraints have been applied, you must link the security context constraints to the appropriate service account that will use it. Use the following command:
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-microanalytic-score -z sas-microanalytic-score
Run this command to restart pod with new privileges added to the service account:
kubectl rollout restart deployment sas-microanalytic-score -n <name-of-namespace>
This document describes customizations that must be performed by the Kubernetes administrator for deploying SAS Micro Analytic Service to enable access to a DB2 database.
SAS Micro Analytic Service uses the installed DB2 client environment. This environment must be accessible from a PersistentVolume.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
The DB2 Client must be installed. After the initial DB2 Client setup, two directories (for example, /db2client and /db2) must be created and accessible to SAS Micro Analytic Service. Ensure that the two directories contain the installed client files (for example, /db2client) and the configured server definition files (/db2).
Copy the files in $deploy/sas-bases/examples/sas-microanalytic-score/db2-config
to the $deploy/site-config/sas-microanalytic-score/db2-config
directory. Create the destination directory, if it does not already exist.
Modify the three files in the site-config/sas-microanalytic-score/db2-config directory to specify your settings:
Modify the $deploy/site-config/sas-microanalytic-score/db2-config/data-mount-mas.yaml
file:
Modify the $deploy/site-config/sas-microanalytic-score/db2-config/etc-hosts-mas.yaml
file:
Modify the $deploy/site-config/sas-microanalytic-score/db2-config/db2-environment-variables-mas.yaml
file:
Make the following changes to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
Here is an example:
transformers:
- site-config/sas-microanalytic-score/db2-config/data-mount-mas.yaml # patch to setup mount for mas
- site-config/sas-microanalytic-score/db2-config/etc-hosts-mas.yaml # Host aliases
- site-config/sas-microanalytic-score/db2-config/db2-environment-variables-mas.yaml # patch to inject environment variables for DB2
Complete one of the following deployment steps to apply the new settings:
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlays have been applied:
kubectl describe pod <sas-microanalyticscore-pod-name> -n <name-of-namespace>
Verify that the output contains the following mount directory paths:
Mounts:
/db2 from db2 (rw)
/db2client from db2client (rw)
Verify that the output shows that each environment variable is assigned the appropriate value. Here is an example:
Environment:
SAS_K8S_DEPLOYMENT_NAME: sas-microanalytic-score
DB2DIR: /db2client/sqllib
DB2INSTANCE: sas
DB2LIB: /db2client/sqllib/lib
DB2_HOME: /db2client/sqllib
DB2_NET_CLIENT_PATH: /db2client/sqllib
IBM_DB_DIR: /db2client/sqllib
IBM_DB_HOME: /db2client/sqllib
IBM_DB_INCLUDE: /db2client/sqllib/
IBM_DB_LIB: /db2client/sqllib/lib
INSTHOME: /db2
INST_DIR: /db2client/sqllib
DB2: /db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32
DB2_BIN: /db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc
SAS_EXT_LLP_ACCESS: /db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32
SAS_EXT_PATH_ACCESS: /db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc
This README describes how a service account with defined privileges can be added to the sas-model-repository pod. A service account is required in an OpenShift cluster if the sas-model-repository pod needs to mount an NFS volume. If the Python environment is made available through an NFS mount, the service account requires NFS volume mounting privilege.
Note: For information about using NFS to make Python available, see the
README file at
/$deploy/sas-bases/examples/sas-open-source-config/python/README.md
(for
Markdown format) or /$deploy/sas-bases/docs/configure_python_for_sas_viya.htm
(for HTML format).
The /$deploy/sas-bases/overlays/sas-model-repository/service-account
directory
contains a file to grant security context constraints for using NFS on an
OpenShift cluster.
A Kubernetes cluster administrator should add these security context constraints to their OpenShift cluster prior to deploying the SAS Viya platform. Use one of the following commands:
kubectl apply -f sas-model-repository-scc.yaml
or
oc create -f sas-model-repository-scc.yaml
After the security context constraints have been applied, you must link the security context constraints to the appropriate service account that will use it. Use the following command:
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-model-repository -z sas-model-repository
Complete the deployment steps to apply the new settings. See Deploy the Software in SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlay has been applied:
kubectl -n <name-of-namespace> get pod <sas-model-repository-pod-name> -oyaml | grep serviceAccount
Verify that the output contains the service-account sas-model-repository.
serviceAccount: sas-model-repository
serviceAccountName: sas-model-repository
When SAS Model Risk Management is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by all SAS Risk Cirrus solutions. Therefore, in order to deploy the SAS Model Risk Management solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/docs/preparing_and_configuring_risk_cirrus_core_for_deployment.htm
(HTML format) or at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md
(Markdown format).
The Risk Cirrus Core README also contains information about storage options, such as external databases, for your solution. You must complete the pre-deployment described in the Risk Cirrus Core README before deploying SAS Model Risk Management. Please read that document for important information about the pre-installation tasks that should be completed prior to deploying SAS Model Risk Management.
IMPORTANT: You must complete the step described in the Cirrus Core README to modify your Cirrus Core configuration file. SAS Model Risk Management uses workflow service tasks, so a user account must be configured for a workflow client. If you know before your deployment which user account you will use and you want to have it configured during installation, then you should set the {{SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG}} variable to Y and assign the user account to the {{SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT}} variable. The Cirrus Core README contains more information about these two environment variables.
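For example, assuming these values are set in your Cirrus Core configuration.env file as described in the Cirrus Core README, the two entries might look like the following (the user ID is a placeholder):
SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT=wf_service_user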
For more information about deploying Risk Cirrus Core, you can also read Deployment Tasks in the SAS Risk Cirrus: Administrator’s Guide.
For more information about the tasks that should be completed prior to deploying SAS Model Risk Management, see Deployment Tasks in the SAS Model Risk Management: Administrator’s Guide.
SAS Model Risk Management provides a ConfigMap whose values control various aspects of its deployment process. It includes variables such as the logging level for the deployment, deployment steps to skip, etc. SAS provides default values for these variables as described in the next section. You can override these default values by configuring a configuration.env
file with your override values and then configuring your kustomization.yaml
file to apply those overrides.
For a list of variables that can be overridden and their default values, see SAS Model Risk Management Configuration Parameters.
For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters.
The following list describes the parameters that can be specified in the SAS Model Risk Management .env
configuration file. These parameters can be found in the template configuration file (configuration.env
), but they are commented out in that file. Lines that begin with #
will not be applied during deployment. If you want to use one of those skipped variables, remove the #
at the beginning of the line.
The SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
parameter specifies a logging level for the deployment. The logging level INFO
is used if the variable is not overridden by your configuration.env
file. For a more verbose level of logging, specify the value DEBUG
.
The SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
parameter specifies whether you want to include steps flagged as sample artifacts. The value N
is used if the variable is not overridden by your configuration.env
file. That means steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts:
- The load_workflows step loads and activates the SAS-provided workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
- The upload_notifications step loads notification templates that are used with the SAS-provided workflow definitions. If you are not using SAS-provided workflow definitions, then you do not need these templates.
- The load_sample_data step loads sample Class Members, Class Member Translations, NamedTreePaths, Roles, RolePermissions, Positions, ReportFacts, ReportObjectRegistrations, and ReportExtractConfigurations. It also loads sample object instances, like models and findings, as well as the LinkInstances, ObjectClassifications, and Workflows associated with those objects. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- The import_main_data_loader_files step imports the Cirrus_MRM_loader.zip file into the file service. Administrators can then download the file from the Data Load page in SAS Model Risk Management and use it as a template to load and unload data.
- The import_sample_data_loader_files step imports the Cirrus_MRM_sample_data_loader.xlsx and Cirrus_MRM_sample_data_workflow_change_state_loader.xlsx files into the files service. Administrators can then download the files from the Data Load page in SAS Model Risk Management and use them as a template to load and unload data.
- The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.
- The localize_va_reports step imports localized labels for SAS-provided reports created in SAS Visual Analytics.
WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.
The SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
parameter specifies whether you want to skip specific steps during the deployment of SAS Model Risk Management. The value ""
is used if the variable is not overridden by your configuration.env
file. This means none of the deployment steps will be skipped explicitly. Typically, the only use case for overriding this value would be to load some sample artifacts, like workflows, but skip the loading of sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data. If you want to skip the loading of sample data, for example, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data.
The SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
parameter specifies an explicit list of steps you want to run during a deployment. The value ""
is used if the variable is not overridden by your configuration.env
file. This means all of the deployment steps will be run except steps flagged as sample artifacts (if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is N
) or steps skipped in SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the upload_notifications step will be skipped during deployment. After the deployment finishes, you decide you want to include the SAS-provided notifications to use in your custom workflow definitions. You can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “upload_notifications” and then trigger the sas-risk-cirrus-mrm CronJob to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.
WARNING: This list is absolute; the deployment will only run the steps included in this list. This variable should be an empty string if you are deploying this environment for the first time, or if you are upgrading from a previous version. Otherwise you risk a failed or incomplete deployment.
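A sketch of the upload_notifications scenario described above, assuming you have already rebuilt and applied your manifests so that the updated ConfigMap is in place (the kubectl job name is arbitrary):
# In site-config/sas-risk-cirrus-mrm/configuration.env
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS=upload_notifications
# Then trigger the sas-risk-cirrus-mrm CronJob manually:
kubectl create job --from=cronjob/sas-risk-cirrus-mrm sas-risk-cirrus-mrm-manual -n <name-of-namespace>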
Note: If you configured overrides during a previous deployment, those overrides should already be available in the SAS Model Risk Management ConfigMap. You can verify this by following the verification steps below.
If you want to override any of the SAS Model Risk Management configuration properties rather than using the default values, complete these steps:
If you have a $deploy/site-config/sas-risk-cirrus-mrm
directory, take note of the values in your mrm_transform.yaml
file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the following line from the transformers
section: - site-config/sas-risk-cirrus-mrm/resources/mrm_transform.yaml.
Create a $deploy/site-config/sas-risk-cirrus-mrm
directory if one does not exist. Then copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-mrm
to that directory.
IMPORTANT: If the destination directory already exists, confirm it contains the configuration.env
file, not the mrm_transform.yaml
file that was used for cadences prior to 2025.02. If the directory already exists, and it has the configuration.env
file, then verify that the overlay connection settings have been applied correctly. No further actions are required unless you want to change the connection settings to different values.
In the base kustomization.yaml file, add the sas-risk-cirrus-mrm-parameters
ConfigMap to the configMapGenerator
block. If that block does not exist, create it. Here is an example:
configMapGenerator:
- name: sas-risk-cirrus-mrm-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-mrm/configuration.env
Save any changes you made to the kustomization.yaml file.
If you want to change the default settings provided by SAS or update overridden values from previous cadences, modify the configuration.env
file (located in the
$deploy/site-config/sas-risk-cirrus-mrm
directory). If there are any parameters for which you want to override the default value, remove the #
at the beginning of that variable’s
line in your configuration.env
file and replace the placeholder with the desired value. You can read more about each step in SAS Model Risk Management Configuration Parameters.
The following is an example of a configuration.env
file you could use for SAS Model Risk Management. This example uses the default values provided by SAS except for the SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
and SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
variables. In this case, it will run the sample steps described in SAS Model Risk Management Configuration Parameters except the step that loads sample data (load_sample_data
). That means your deployment will contain workflows, notifications, localized reports, and links to data loaders; but it will not contain roles, positions, object instances, or other sample data.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=Y
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS=load_sample_data
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
6. Save any changes you made to the configuration.env
file.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Before verifying the settings for the SAS Model Risk Management solution, you should first verify Risk Cirrus Core’s settings. Those instructions can be found in the Risk Cirrus Core README. To verify the settings for SAS Model Risk Management, do the following:
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-mrm-parameters -n <name-of-namespace>
Verify that the output contains the desired connection settings that you configured.
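For example, to spot-check a single override without reading the entire ConfigMap, you can filter the output. The parameter name used here is one of the variables documented earlier in this README:
kubectl describe configmap sas-risk-cirrus-mrm-parameters -n <name-of-namespace> | grep -A 2 SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS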
For administration information related to the SAS Model Risk Management solution, see SAS Model Risk Management: Administrator’s Guide.
For more generalized deployment information, see SAS Viya Platform: Deployment Guide.
The SAS RFC Solution Configuration Service installs three Kubernetes resources that define how Fraud solutions communicate with Apache Kafka.
This README file describes how to replace the placeholders in the files with values and secret data for a specific Apache Kafka cluster.
Copy all of the files in $deploy/sas-bases/examples/sas-rfc-solution-config/configure-kafka
to $deploy/site-config/sas-rfc-solution-config
, where $deploy is the directory containing your SAS Viya platform installation files. Create the target directory, if it does not already exist.
Edit the $deploy/site-config/sas-rfc-solution-config/kafka-configuration-patch.yaml
file. Update the properties, especially the server, protocol, and topics. Add any additional properties recommended by product documentation or customer support. Here is an example:
- op: replace
path: /data
value:
SAS_KAFKA_SERVER: "fsi-kafka-kafka-bootstrap.kafka.svc.cluster.local:9093"
SAS_KAFKA_CONSUMER_DEBUG: ""
SAS_KAFKA_PRODUCER_DEBUG: ""
SAS_KAFKA_OFFSET: earliest
SAS_KAFKA_ACKS: "2"
SAS_KAFKA_BATCH: ""
SAS_KAFKA_LINGER: ""
SAS_KAFKA_AUTO_CREATE_TOPICS: "true"
SAS_KAFKA_SECURITY_PROTOCOL: "sasl_ssl"
SAS_KAFKA_HOSTNAME_VERIFICATION: "false"
SAS_DETECTION_KAFKA_TOPIC: "input-transactions"
SAS_DETECTION_KAFKA_TDR_TOPIC: "tdr-topic"
SAS_DETECTION_KAFKA_REJECTTOPIC: "transaction-reject"
SAS_TRIAGE_KAFKA_TDR_TOPICS: "tdr-topic"
SAS_TRIAGE_KAFKA_OUTBOUND_TOPIC: "sas-triage-topic-outbound"
SAS_TRIAGE_KAFKA_QUEUE_CHANGED_TOPIC: "sas-triage-notification-queue-changed"
SAS_TRANSACTION_MARK_TOPIC: "transaction-topic-outbound"
SAS_RWS_KAFKA_BROKERS: "fsi-kafka-kafka-bootstrap.kafka.svc.cluster.local:9093"
SAS_RWS_KAFKA_INPUT_TOPIC: "rws-input-transactions"
SAS_RWS_KAFKA_OUTPUT_TOPIC: "rws-output-transactions"
SAS_RWS_KAFKA_ERROR_TOPIC: "rws-error-transactions"
SAS_RWS_KAFKA_REJECT_TOPIC: "rws-reject-transactions"
Edit the $deploy/site-config/sas-rfc-solution-config/kafka-cred-patch.yaml
file. If the security protocol for Apache Kafka includes SASL, then modify the patch to include a base64 representation of the user ID and password. Here is an example:
- op: replace
path: /data
value:
username: ...
password: ...
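The username and password values must be base64 encoded before they are added to the patch. The following is a minimal sketch, assuming a Linux shell; the credential values shown are placeholders:
echo -n '<kafka-username>' | base64
echo -n '<kafka-password>' | base64
Using echo -n prevents a trailing newline from being included in the encoded value.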
Edit the $deploy/site-config/sas-rfc-solution-config/kafka-truststore-patch.yaml
file. If the security protocol for Apache Kafka includes SSL, then update the patch to use the correct certificate. Here is an example:
- op: replace
path: /data
value:
ca.crt: LS0tLS1CRU...
ca.p12: MIIGogIBAz...
ca.password: ...
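The certificate and truststore data are also base64 encoded. The following is an illustrative sketch, assuming the CA certificate and PKCS#12 truststore files are available locally; the file names are placeholders:
base64 -w0 ca.crt
base64 -w0 ca.p12
The -w0 option (GNU coreutils) disables line wrapping so that each value is produced as a single line.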
After updating the example files, add references to them to the base kustomization.yaml file ($deploy/kustomization.yaml
). Add a reference to the kafka-configuration-patch.yaml file as a patch.
For example, if you made the changes described above, then the base kustomization.yaml file should have entries similar to the following:
patches:
- target:
version: v1
kind: ConfigMap
name: sas-rfc-solution-config-kafka-config
path: site-config/sas-rfc-solution-config/kafka-configuration-patch.yaml
- target:
version: v1
kind: Secret
name: sas-rfc-solution-config-kafka-creds
path: site-config/sas-rfc-solution-config/kafka-cred-patch.yaml
- target:
version: v1
kind: Secret
name: sas-rfc-solution-config-kafka-ca-cert
path: site-config/sas-rfc-solution-config/kafka-truststore-patch.yaml
As an administrator with cluster permissions, apply the edited files to your deployment by performing the steps described in Deploy the Software.
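After the deployment completes, you can optionally confirm that the patched values are present. This check is only illustrative and assumes the ConfigMap keeps the name targeted by the patch above:
kubectl -n <name-of-namespace> get configmap sas-rfc-solution-config-kafka-config -o yaml | grep SAS_KAFKA_SERVER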
The configuration information in this README applies to both SAS Real-Time Watchlist Screening for Entities and SAS Real-Time Watchlist Screening for Payments.
SAS Real-Time Watchlist Screening requires running an Apache Kafka message broker and a PostgreSQL database. The instructions in this README describe how to configure the product.
The Configure with Initial SAS Viya Platform Deployment section below describes several TLS configurations. These configurations must align with SAS Viya security requirements, as specified in SAS Viya Platform Operations guide Security Requirements. Here are the specific TLS deployment requirements:
To configure SAS Real-Time Watchlist Screening:
Copy the files in the $deploy/sas-bases/examples/sas-watch-config/sample
directory to the $deploy/site-config/sas-watch-config/install
directory.
Create the destination directory if it does not exist.
If you are installing SAS Real-Time Watchlist Screening with SAS Viya platform, add $deploy/site-config/sas-watch-config/install
to the
resources block of the base kustomization.yaml file. Here is an example:
resources:
...
- site-config/sas-watch-config/install
...
Update the $deploy/site-config/sas-watch-config/install/list/watchlist.xml
file by replacing the variables with the appropriate values for configuring the watchlist.
Update the $deploy/site-config/sas-watch-config/install/kustomization.yaml
file by replacing the variables with the appropriate values for secrets. The secrets generator
can be removed if the equivalent secrets are created prior to installing SAS Real-Time Watchlist Screening.
Update the $deploy/site-config/sas-watch-config/install/namespace.yaml
file by replacing the variable with the appropriate value for the targeted namespace.
Update the $deploy/site-config/sas-watch-config/install/settings.properties
file by replacing the variables with the appropriate values for the properties.
Update the $deploy/site-config/sas-watch-config/install/base/rws-settings.yaml
file by replacing the variables with the appropriate values for the ConfigMap.
Update the $deploy/site-config/sas-watch-config/install/base/rws-image-pull-secrets.yaml
file by replacing the variables with the appropriate values for the image pull secrets.
The image pull secret can be found using the SAS Viya platform Kustomize build command:
kustomize build . > site.yaml
grep '.dockerconfigjson:' site.yaml
Alternatively, if SAS Viya platform has already been deployed, the image pull secret can be found with the kubectl command:
kubectl -n {{ NAMESPACE }} get secret --field-selector=type=kubernetes.io/dockerconfigjson -o yaml | grep '.dockerconfigjson:'
The output for either command is .dockerconfigjson: <SECRET>
. Replace the {{ IMAGE_PULL_SECRET }} variables with the value returned by the command you used.
Replace the {{ NAMESPACE }} value.
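As an optional convenience, the placeholder can be substituted from the command line instead of editing the file manually. This is only a sketch; <SECRET> stands for the value returned by the command above:
sed -i 's|{{ IMAGE_PULL_SECRET }}|<SECRET>|' site-config/sas-watch-config/install/base/rws-image-pull-secrets.yaml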
If you are deploying to Red Hat OpenShift, update configurations by following the instructions in the comments of each of the following files:
$deploy/site-config/sas-watch-config/install/base/kustomization.yaml
$deploy/site-config/sas-watch-config/install/base/rws-admin-route.yaml
$deploy/site-config/sas-watch-config/install/base/rws-rt-route.yaml
If you are not deploying to Red Hat OpenShift, update the $deploy/site-config/sas-watch-config/install/base/rws-ingress.yaml
file by replacing the variables with the appropriate values for the ingress host and namespace.
Update the five image values that are contained in these three files:
$deploy/site-config/sas-watch-config/install/base/rws-admin-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-async-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-rt-deployment.yaml
In those files, revise the value “sas-business-orchestration-worker” to include the registry server, relative path, name, and tag. The registry server and relative path are the same as those of other SAS Viya platform delivered images.
The name of the container is ‘sas-business-orchestration-worker’. The registry relative path, name, and tag values are found in the sas-components-* ConfigMap in the SAS Viya platform deployment.
Perform the following commands to determine the appropriate information. When you have the information, add it to the appropriate places in the three files listed above.
# generate site.yaml file
kustomize build -o site.yaml
# get the sas-business-orchestration-worker registry information
cat site.yaml | grep 'sas-business-orchestration-worker:' | grep -v -e "VERSION" -e 'image'
# manually update the sas-business-orchestration-worker images using the information gathered above: <container registry>/<container relative path>/sas-business-orchestration-worker:<container tag>
# apply site.yaml file
kubectl apply -f site.yaml
Perform the following commands to get the required information from a running SAS Viya platform deployment.
# get the registry server, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
kubectl -n {{ NAMESPACE }} get deployment sas-readiness -o yaml | grep -e "image:.*sas-readiness" | sed -e 's/image: //g' -e 's/\/.*//g' -e 's/^[ \t]*//'
<container registry>
# get registry relative path and tag, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
CONFIGMAP="$(kubectl -n {{ NAMESPACE }} get cm | grep sas-components | tr -s ' ' | cut -d ' ' -f1)"
kubectl -n {{ NAMESPACE }} get cm "$CONFIGMAP" -o yaml | grep 'sas-business-orchestration-worker:' | grep -v "VERSION"
SAS_COMPONENT_RELPATH_sas-business-orchestration-worker: <container relative path>/sas-business-orchestration-worker
SAS_COMPONENT_TAG_sas-business-orchestration-worker: <container tag>
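For illustration only, the full image value is composed as <container registry>/<container relative path>/sas-business-orchestration-worker:<container tag>. After editing the three deployment files, a quick check can confirm that each image value has been fully qualified; the paths below assume the directory layout used earlier in this README:
grep -n 'image:' site-config/sas-watch-config/install/base/rws-admin-deployment.yaml site-config/sas-watch-config/install/base/rws-async-deployment.yaml site-config/sas-watch-config/install/base/rws-rt-deployment.yaml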
If you are enabling TLS, follow the instructions in the appropriate comments section in each of the following files, based on the TLS mode you are deploying with:
$deploy/site-config/sas-watch-config/install/base/rws-admin-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-async-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-rt-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-ingress.yaml
$deploy/site-config/sas-watch-config/install/base/rws-tls.yaml
$deploy/site-config/sas-watch-config/install/base/bdsl/bdsl.yaml
If you are integrating with SAS Visual Investigator, perform the following steps:
Populate the fields in the $deploy/site-config/sas-watch-config/install/datastore/sas-watchlist-datastore-connection.json
file using the same values that exist in the sas-watchlist-db-credentials secret referenced in $deploy/site-config/sas-watch-config/install/kustomization.yaml
. Do not alter the “name” field within sas-watchlist-datastore-connection.json
.
Uncomment the following entries in the $deploy/site-config/sas-watch-config/install/kustomization.yaml
file:
secretGenerator:
...
# - name: sas-watchlist-datastore-connection
# files:
# - datastore/sas-watchlist-datastore-connection.json
# patches:
# - path: datastore/rfc-solution-config-datastore-patch.yaml
If you are deploying SAS Real-Time Watchlist Screening separately from the SAS Viya platform, make sure to supply these entries to the base kustomization.yaml file ($deploy/kustomization.yaml
) used to deploy the SAS Viya platform.
The SAS license must be applied to the deployment artifacts in order to successfully screen requests. The way that you reference the license secret depends on how SAS Real-Time Watchlist Screening is being deployed.
If you are deploying in the SAS Viya platform namespace and SAS Viya platform is already deployed, update the secret volume mount with the secretName of the existing sas-license-* secret, which can be found with the following command:
kubectl get secret -n <namespace> | grep "sas-license"
The secretName must be updated in each of the following files:
$deploy/site-config/sas-watch-config/install/base/rws-async-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-rt-deployment.yaml
$deploy/site-config/sas-watch-config/install/base/rws-admin-deployment.yaml
If you are not deploying in the SAS Viya platform namespace or you are deploying in the SAS Viya platform namespace but SAS Viya platform has not been deployed yet, you must create a license secret. Provide your license JSON web token as input to a Kubernetes secret and replace {{ NAMESPACE }} with the namespace value:
kubectl create secret generic sas-license --from-file=SAS_LICENSE={{ your license jwt file }} -n {{ NAMESPACE }}
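To confirm that the secret was created in the target namespace, you can run a check such as the following:
kubectl get secret sas-license -n {{ NAMESPACE }}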
Alternatively, SAS Real-Time Watchlist Screening can be installed separately from the SAS Viya platform. Complete steps 1-13 in Configure with Initial SAS Viya Platform Deployment. Instead of step 14, perform the following commands:
kustomize build $deploy/site-config/sas-watch-config/install > sas-watch.yaml
kubectl apply -f sas-watch.yaml
When SAS Regulatory Capital Management is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS
Regulatory Capital Management solution successfully, you must deploy the Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at
$deploy/sas-bases/examples/sas-risk-cirrus-core/resources/README.md
(Markdown
format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Regulatory Capital Management: Administrator’s Guide.
SAS Regulatory Capital Management provides a ConfigMap whose values control various aspects of its deployment process. This includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides
default values for these variables as described in the next section. You can override these default values
by configuring a configuration.env
file with your override values and configuring your kustomization.yaml
file to apply these overrides.
For a list of variables that can be overridden and their default values, see SAS Regulatory Capital Management Configuration Parameters and Secrets.
For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters and Secrets.
The following table contains a list of parameters that can be specified in the SAS Regulatory Capital Management .env
configuration file. These parameters can all be found in the template configuration file (configuration.env
)
but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values
will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you
must uncomment the line by removing the ‘#’ at the beginning of the line.
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG" . |
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES | Specifies whether you want to include deployment steps that relate to sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts: - The create_sampledata_folders step creates all sample data folders in the file service under the Products/SAS Regulatory Capital Management directory.- The transfer_sampledata_files step stores a copy of all sample data files in the file service under the Products/SAS Regulatory Capital Management directory. This directory will include DDLs, reports, sample data, and scripts used to load the sample data.- The import_sample_dataloader_files step stores a copy of the Cirrus_RCM_sample_data_loader.xlsx file in the file service under the Products/SAS Regulatory Capital Management directory. Administrators can then download the file from the Data Load page in SAS Regulatory Capital Management and use it as a template to load and unload data.- The install_sampledata step loads the sample data into RCM.- The load_sampledata_dataloader_objects step loads sample Class Members, Class Member Translations, NamedTreePaths, Named Tree Path Translations, and Object Classifications.- The import_va_reports step imports SAS-provided reports created in SAS Visual Analytics.WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Specifies whether you want to skip specific steps during the deployment of SAS Regulatory Capital Management. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Specifies whether you want to run specific steps during the deployment of SAS Regulatory Capital Management. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed. |
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME | Specifies the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the SharedServices database. |
The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET parameter specifies the secret for the database user who is intended to own the solution database schema. It is specified in the SAS Regulatory Capital Management .env
secret file (sas-risk-cirrus-rcm-secret.env
). It is commented out in the file with a ‘#’ at the beginning, so its value will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you must uncomment the line by removing the ‘#’ at the beginning of the line.
If you want to override any of the SAS Regulatory Capital Management configuration parameters rather than using the default values, complete these steps:
If you have a $deploy/site-config/sas-risk-cirrus-rcm
directory, delete it and its contents.
Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the
following line from the transformers
section:
- site-config/sas-risk-cirrus-rcm/resources/rcm_transform.yaml
This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
Copy the configuration.env
and sas-risk-cirrus-rcm-secret.env
from $deploy/sas-bases/examples/sas-risk-cirrus-rcm
to the
$deploy/site-config/sas-risk-cirrus-rcm
directory. Create the destination directory if
one does not exist. If the directory already exists and already has the expected .env
files,
verify that the overrides
have been correctly applied. No further actions are required, unless you want to apply different
overrides.
In the base kustomization.yaml file, add the sas-risk-cirrus-rcm-parameters
ConfigMap to the
configMapGenerator
block and sas-risk-cirrus-rcm-secret.env
to the
secretGenerator
block. If those blocks do not exist, create them. Here is an example of what
the inserted code block should look like in the kustomization.yaml file:
configMapGenerator:
...
- name: sas-risk-cirrus-rcm-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-rcm/configuration.env
...
secretGenerator:
...
- name: sas-risk-cirrus-rcm-secret
behavior: merge
envs:
- site-config/sas-risk-cirrus-rcm/sas-risk-cirrus-rcm-secret.env
...
Save the kustomization.yaml file.
Modify the configuration.env file (in the $deploy/site-config/sas-risk-cirrus-rcm
directory). If
there are any parameters for which you want to override the default value, uncomment that variable’s
line in your configuration.env
file and replace the placeholder with the desired value.
The following is an example of a configuration.env
file that you could use for SAS Regulatory Capital Management. This example will use all of the default values provided by SAS.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME={{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }}
For a list of variables that can be overridden and their default values, see SAS Regulatory Capital Management Configuration Parameters and Secrets.
Save the configuration.env
file.
Modify the sas-risk-cirrus-rcm-secret.env file (in the $deploy/site-config/sas-risk-cirrus-rcm
directory). If you want to override the default value of the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, uncomment the variable’s line in your sas-risk-cirrus-rcm-secret.env
file and replace the placeholder with the desired value.
The following is an example of a secret.env
file that you could use for SAS Regulatory Capital Management. This example will use the default value provided by SAS.
# SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET={{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
For the variable SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET and its default value, see SAS Regulatory Capital Management Configuration Parameters and Secrets.
Save the sas-risk-cirrus-rcm-secret.env
file.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-rcm-parameters -n <name-of-namespace>
Verify that the output contains your configured overrides.
Find the name of the secret on the namespace.
kubectl describe secret sas-risk-cirrus-rcm-secret -n <name-of-namespace>
Retrieve the name of the secret on the namespace from the “Name:” line on the generated output.
Get the database schema user secret.
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
Verify that the output contains your configured overrides.
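If you want to inspect a single value rather than the entire data block, you can decode it directly. This is only a sketch; the key name shown is the secret parameter documented earlier and might differ in your deployment:
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d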
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software
SAS Risk Cirrus Builder Microservice is the Go service that backs the Solution Builder. The service manages the lifecycle of solutions and their customizations.
By default, SAS Risk Cirrus Builder Microservice is deployed with some
default settings. These settings can be overridden via the
sas_risk_cirrus_builder_transform.yaml file. There is a template (in
$deploy/sas-bases/examples/sas-risk-cirrus-builder/resources
) that
should be used as a starting point.
There is no requirement to configure this transform. Currently, all fields in the transform are optional (with the documented default value used if no value is supplied).
Note: For more information about the SAS Risk Cirrus Builder Microservice, see Introduction to SAS Risk Cirrus in the SAS Risk Cirrus: Administrator’s Guide.
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-builder/resources
to the $deploy/site-config/sas-risk-cirrus-builder/resources
directory. Create a
destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, verify that the overlay default settings have been correctly applied. No further actions are required, unless you want to change the default settings to different values.
Modify the sas_risk_cirrus_builder_transform.yaml file (located in the
$deploy/site-config/sas-risk-cirrus-builder/resources
directory) to specify your
settings as follows:
For RISK_CIRRUS_UI_SAVE_ENABLED, replace {{ ENABLE-ARTIFACTS-SAVE }} with the desired value.
Use ‘true’ to enable saving the UI artifacts in the solution builder UI. Use ‘false’ to disable saving the UI artifacts.
Note: In ‘production’ or ‘test’ systems, this should be set to ‘false’ so that the UI artifacts cannot be
accidentally updated in the configured GIT repository.
If not configured, the default is ‘true’.
For SAS_LOG_LEVEL, replace {{ INFO-OR-DEBUG }} with the desired logging level. If not configured, the default is ‘INFO’. Note: Setting this to DEBUG also results in logging for all the other SAS microservices that SAS Risk Cirrus Builder communicates with, thereby increasing the size of the log.
In the base kustomization.yaml file in the $deploy directory, add site-config/risk-cirrus-builder/resources/sas_risk_cirrus_builder_transform.yaml to the transformers block. Here is an example:
transformers:
- site-config/risk-cirrus-builder/resources/sas_risk_cirrus_builder_transform.yaml
Complete the deployment steps to apply the new settings. See Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl -n <name-of-namespace> get configmap | grep sas-risk-cirrus-builder
The above will return the ConfigMap defined for sas-risk-cirrus-builder. Here is an example:
sas-risk-cirrus-builder-parameters-<id> 9 6d19h
Execute the following:
kubectl describe configmap sas-risk-cirrus-builder-parameters-<id> -n <name-of-namespace>
Verify that the output contains the settings that you configured.
Name: sas-risk-cirrus-builder-parameters-<id>
Namespace: <name-of-namespace>
Labels: sas.com/admin=cluster-local
sas.com/deployment=sas-viya
Annotations: <none>
Data
====
SAS_LOG_LEVEL_RISKCIRRUSBUILDER:
----
INFO
SAS_LOG_LEVEL_RISKCIRRUSCOMMONS:
----
INFO
RISK_CIRRUS_UI_SAVE_ENABLED:
----
true
DEFAULT_EMAIL_ADDRESS:
----
[email protected]
Before you can deploy a SAS Risk Cirrus solution, it is important to understand that your solution content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer (Risk Cirrus Core) that is used by all SAS Risk Cirrus solutions. Therefore, in order to fully deploy your solution, you must deploy, at minimum, the Risk Cirrus Core content in addition to your solution.
In order to deploy Risk Cirrus Core, you must first complete the following pre-deployment tasks:
(For deployments that use external PostgreSQL databases) Deploy and stage an external PostgreSQL database.
Deploy an additional PostgreSQL cluster for the SAS Common Data Store.
Specify a Persistent Volume Claim for Risk Cirrus Core by updating the SAS Viya platform customization file (kustomization.yaml).
Review any solution README files for additional deployment-related tasks.
Verify that the configuration overrides have been applied successfully.
Before you deploy Risk Cirrus Core, ensure that you review the Risk Cirrus Objects README file. This
file contains important pre-deployment instructions that you must follow to make changes to the
sas_risk_cirrus_objects_transform.yaml, as part of the overall SAS Viya platform deployment. See the
Risk Cirrus Objects README file located at
$deploy/sas-bases/examples/sas-risk-cirrus-objects/resources/README.md
(for Markdown-formatted
instructions) and
$deploy/sas-bases/docs/configure_environment_id_settings_for_sas_risk_cirrus_builder_microservice.htm
(for HTML-formatted instructions).
IMPORTANT: This task is required only if you are deploying an external PostgreSQL database instance for a solution that supports its use.
If your solution supports the use of an external PostgreSQL database instance, ensure that you have completed the following pre-deployment tasks:
- Verify that the LTREE extension is available on the external PostgreSQL instance. You can check the available extensions by running the following SQL query: select * from pg_available_extensions;
- Verify that the database locale is set to C (for example, the database was created with the --locale=C parameter). You can validate the database locale by running the following command in your Linux terminal: psql -l, or by running the following SQL query: select * from pg_database; or show LC_COLLATE;
The process for configuring the LTREE extension and setting the database locale varies depending on the cloud provider and operating system.
For specific instructions on performing these tasks, consult your cloud provider documentation.
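The following commands are an illustrative sketch of how those checks could be run from a Linux terminal with the psql client; the host, user, and database names are placeholders for your external PostgreSQL instance:
# check that the LTREE extension is available
psql -h <postgres-host> -U <admin-user> -d <database> -c "select * from pg_available_extensions where name = 'ltree';"
# check the database locale
psql -h <postgres-host> -U <admin-user> -l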
The Risk Data Service requires the deployment of an additional PostgreSQL cluster called SAS Common Data Store (also called CDS PostgreSQL). This cluster is configured separately from the required platform PostgreSQL cluster that supports the SAS Infrastructure Data Server.
Note: Your SAS Common Data Store must match the state (external or internal) of the SAS Infrastructure Data Server. So if the SAS Infrastructure Data Server is on an external PostgreSQL instance, an external PostgreSQL instance must also be used for the SAS Common Data Store cluster (and vice versa).
For more information about configuring the SAS Common Data Store cluster, see the README file
located at $deploy/sas-bases/examples/postgres/README.md
(for Markdown-formatted instructions) or $deploy/sas-bases/docs/configure_postgresql.htm
(for HTML-formatted instructions).
The best option for storing any code that is needed for SAS programming run-time environment
sessions is a Network File System (NFS) server that all programming run-time Kubernetes pods can
access. In order for SAS Risk Cirrus solutions to operate properly, you must specify a Persistent
Volume Claim (PVC) for Risk Cirrus Core in the SAS Viya platform. This is done by adding
sas-risk-cirrus-core
to the comma-separated set of PVCs in the annotationSelector
section of
configuration code in your top-level kustomization.yaml file.
The following is a sample excerpt from that file with sas-risk-cirrus-core
added to the
comma-separated list of PVCs.
patches:
- path: site-config/storageclass.yaml
target:
kind: PersistentVolumeClaim
annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,
sas-commonfiles,sas-cas-operator,sas-pyconfig,sas-risk-cirrus-core)
For additional information about this process, see Specify PersistentVolumeClaims to Use ReadWriteMany StorageClass.
Risk Cirrus Core provides a ConfigMap whose values control various aspects of its deployment process. This
includes variables such as logging level for the deployment, deployment steps to skip, etc. SAS provides
default values for these variables as described in the next section. You can override these default values
by configuring a configuration.env
file with your override values and configuring your kustomization.yaml
file to apply these overrides.
For a list of variables that can be overridden and their default values, see Risk Cirrus Core Configuration Parameters.
For the steps needed to override the default values with your own values, see Apply your own overrides to the configuration parameters.
The following table contains a list of parameters that can be specified in the Risk Cirrus Core .env
configuration file. These parameters can all be found in the template configuration file (configuration.env
)
but are commented out in the template file. Lines with a ‘#’ at the beginning are commented out, and their values
will not be applied during deployment. If you want to override a SAS-provided default for a given variable, you
must uncomment the line by removing the ‘#’ at the beginning of the line.
Parameter Name | Description |
---|---|
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER | Specifies a logging level for the deployment. The logging level value: "INFO" is used if the variable is not overridden by your .env file. For a more verbose level of logging, specify value: "DEBUG" . |
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS | Specifies whether you want to skip specific steps during the deployment of SAS Risk Cirrus Core. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your .env file. This means no deployment steps will be explicitly skipped. |
SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS | Specifies whether you want to run specific steps during the deployment of SAS Risk Cirrus Core. Note: Typically, you should set this value blank: "" . The value: "" is used if the variable is not overridden by your .env file. This means all deployment steps will be executed. |
SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG | Specifies whether the value of the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT variable is used to set SAS Workflow Manager default service account. If the value is "N" , the deployment process does not set the workflow default service account. The value: "N" is used if the variable is not overridden by your .env file. This means the deployment will not set a default service account for SAS Workflow Manager. You can still set a default service account after deployment via SAS Environment Manager. |
SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT | The user account to be configured in the SAS Workflow Manager in order to use workflow service tasks (if SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG is set to "Y" ). Using the SAS administrator user account for this purpose is not advised because it might allow file access rights that are not secure enough for the workflow client account. IMPORTANT: Make sure to review the information about configuring the workflow client default service account in the section “Configuring the Workflow Client” in the SAS Workflow Manager: Administrator’s Guide. It contains important information to secure a successful deployment. The value: "" is used if the variable is not overridden by your .env file. |
If you want to override any of the Risk Cirrus Core configuration parameters rather than using the default values, complete these steps:
If you have a $deploy/site-config/sas-risk-cirrus-core
directory, delete it and its contents.
Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the
following line from the transformers
section:
- site-config/sas-risk-cirrus-core/resources/core_transform.yaml
This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
Copy the configuration.env
from $deploy/sas-bases/examples/sas-risk-cirrus-rcc
to the
$deploy/site-config/sas-risk-cirrus-rcc
directory. Create the destination directory if
one does not exist. If the directory already exists and already has the expected .env
file,
verify that the overrides
have been correctly applied. No further actions are required, unless you want to apply different
overrides.
In the base kustomization.yaml file, add the sas-risk-cirrus-core-parameters
ConfigMap to the
configMapGenerator
block. If that block does not exist, create it. Here is an example of what
the inserted code block should look like in the kustomization.yaml file:
configMapGenerator:
...
- name: sas-risk-cirrus-core-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-rcc/configuration.env
...
Save the kustomization.yaml file.
Modify the configuration.env file in the $deploy/site-config/sas-risk-cirrus-rcc
directory. If
there are any parameters for which you want to override the default value, uncomment that variable’s
line in your configuration.env
file and replace the placeholder with the desired value.
The following is an example of a configuration.env
file that you could use for Risk Cirrus Core.
This example will use all of the default values provided by SAS except for the two workflow-related
variables. In this case, it will set a default service account in SAS Workflow to the user
workflowacct
during deployment.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-or-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT=workflowacct
For a list of variables that can be overridden and their default values, see Risk Cirrus Core Configuration Parameters.
Save the configuration.env
file.
After you have completed your pre-deployment configurations for Risk Cirrus Core, ensure that you review the solution README files for any Cirrus applications that you are deploying. These files contain additional pre-deployment instructions that you must follow to make changes to the kustomization.yaml file as well as to solution-specific configuration files, as part of the overall SAS Viya platform deployment. You can also refer to the solution-specific administrative documentation for further details as needed.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software.
When deploying Risk Cirrus Core, you can determine whether to enable Linux Access Control Lists (ACL) to set permissions on Analysis Run directories. By default, when Risk Cirrus Core is deployed, the ‘requireACL’ flag in SAS Environment Manager is set to OFF. If you are upgrading from an existing deployment and had previously set ‘requireACL=ON’, that setting will remain. When ‘requireACL=ON’, users might encounter issues when executing an analysis run, depending upon the setup of their analysis run folders and security permissions. If you do not require ACL security, turn it off to avoid these issues.
To turn ACL security off, perform the following steps:
Log into SAS Environment Manager.
Click on the Configuration menu item.
In the search bar, enter “risk cirrus”.
Select the Risk Cirrus Core service.
In the Configuration pane on the right, update the requireACL field to OFF.
Save your changes.
Using ‘requireACL=ON’ enables restricted sharing mode. This mode guarantees that only the user/owner (including group) running the analysis run has write permissions to the analysis run directory in the PVC. Using ‘requireACL=OFF’ enables unrestricted sharing mode. This mode allows any user/owner (including group and others) running the analysis run to have write permissions to the analysis run directory in the PVC. For more information about configuration settings in SAS Environment Manager, see Configuration Page
Note: If you configured overrides during a past deployment, your overrides should be available in the SAS Risk Cirrus Core ConfigMap. To verify that your overrides were applied successfully to the ConfigMap, run the following command:
kubectl describe configmap sas-risk-cirrus-core-parameters -n <name-of-namespace>
Verify that the output contains your configured overrides.
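For example, to check only the workflow-related overrides shown in the sample configuration.env file earlier in this README, you can filter the output:
kubectl describe configmap sas-risk-cirrus-core-parameters -n <name-of-namespace> | grep -A 2 WORKFLOW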
The SAS Risk Cirrus KRM service provides a REST API for starting and managing KRM runs associated with Risk Cirrus analysis runs and comes with default settings that may be changed. The template in $deploy/sas-bases/examples/sas-risk-cirrus-krm/resources
should be used as a starting point.
Some portions of the SAS Risk Cirrus KRM service use cluster-level privileges for reading Node and Pod information to do their work. Those privileges are provided by adding an overlay to the service.
IMPORTANT: It is strongly recommended that SAS Risk Cirrus KRM
be deployed with cluster-level read privileges. For details, see the README located at
$deploy/sas-bases/overlays/sas-risk-cirrus-krm/cluster-role-binding/README.md
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-krm/resources
to the
$deploy/site-config/sas-risk-cirrus-krm/resources
directory. Create a destination
directory if one does not exist.
IMPORTANT: If the destination directory already exists, verify that the
overlay connection settings
have been correctly applied. No further actions are required, unless you want to change
the connection settings to different values.
Modify the krm_transform.yaml file (located in the
$deploy/site-config/sas-risk-cirrus-krm/resources
directory) to
specify your settings as follows:
a. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-OR-N }} to specify whether you want to include sample artifacts. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, sample artifacts will not be included.
WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N.
b. For KRM_RUN_PROGRESS_TTL
, replace {{ RUN-PROGRESS-REPORTED-FOR-RUNS-LESS-THAN-THIS-OLD-SECONDS }} with the number of seconds for which run progress should be reported (for example, on ALM’s Calculation Monitor page). By default, this value is set to about a week, after which runs are no longer reported.
c. For MIN_KRM_POD_COUNT
, replace {{ MINIMUM-KRMD-POD-COUNT }} with the minimum number of KRMD pods that you want to keep alive so that runs start more quickly by avoiding pod startup time. By default, this value is set to 1.
d. For MAX_KRM_POD_COUNT
, replace {{ MAXIMUM-KRMD-POD-COUNT }} with the maximum number of KRMD pods that you want alive at once. Having more pods alive at once consumes resources that could otherwise be used by other pods, including other KRMD pods running large runs. By default, this value is set to 3.
e. For IDLE_KRM_POD_TTL
, replace {{ SHUTS-DOWN-KRMD-PODS-IDLE-FOR-THESE-SECONDS }} with the number of seconds that a KRMD pod can remain idle before it is considered for shutdown. Shutting down idle pods saves CPU usage, while keeping idle pods alive allows faster execution because a new pod does not have to be created. By default, this value is set to 120 seconds.
f. For NULLSNOTDISTINCT
, replace “NULLS NOT DISTINCT” with “” when the CDS PostgreSQL server version is less than 15. You can use the following SQL statement to check the version of your PostgreSQL server: select version();
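As an illustration, the version check can be run with the psql client; the connection details are placeholders for your CDS PostgreSQL instance:
psql -h <cds-postgres-host> -U <user> -d <database> -c 'select version();'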
In the base kustomization.yaml file ($deploy/kustomization.yaml), add
site-config/sas-risk-cirrus-krm/resources/krm_transform.yaml
to
the transformers block. Here is an example:
transformers:
- site-config/sas-risk-cirrus-krm/resources/krm_transform.yaml
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-krm-config -n <name-of-namespace>
Verify that the output contains the desired connection settings that you configured.
Some portions of the SAS Risk Cirrus KRM service use cluster-level privileges for reading Node and Pod information to do their work. If these privileges are not provided by adding the overlay described below, which adds a ClusterRoleBinding and a ClusterRole object to the deployment, some features of the service will not be enabled. Not deploying the overlay can affect the features and functionality of downstream products that require the use of this service, such as SAS Asset and Liability Management.
The cluster-level privileges are enabled by adding the cluster-role-binding directory to the resources block of the base kustomization.yaml file
($deploy/kustomization.yaml
). Here is an example:
resources:
...
- sas-bases/overlays/sas-risk-cirrus-krm/cluster-role-binding
To disable cluster-level privileges:
Remove sas-bases/overlays/sas-risk-cirrus-krm/cluster-role-binding
from the resources block of the
base kustomization.yaml file ($deploy/kustomization.yaml
). This also ensures that this overlay
will not be applied in future Kustomize builds.
Perform the following command to remove the ClusterRoleBinding associated with the namespace:
kubectl delete clusterrolebinding sas-risk-cirrus-krm-<your namespace>
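To confirm that the binding has been removed, you can list the remaining ClusterRoleBindings and filter on the name pattern shown above:
kubectl get clusterrolebinding | grep sas-risk-cirrus-krm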
After you configure Kustomize, continue your SAS Viya platform deployment as documented.
The SAS Risk Cirrus Objects Microservice stores information related to business object definitions and business object instances such as analysis data, analysis runs, models, or model reviews. Cirrus Objects also stores and retrieves items related to business objects, such as attachments. These business objects are associated with the Risk Cirrus Platform that underlies most Risk offerings.
SAS Risk Cirrus Objects is deployed with some default settings. These settings can be overridden by using
the $deploy/sas-bases/examples/sas-risk-cirrus-objects/resources/sas_risk_cirrus_objects_transform.yaml
file
as a starting point.
There is no requirement to configure this transform. Currently, all fields in the transform are optional (with the default value documented here used if no value is supplied).
Note: For more information about the SAS Risk Cirrus Objects Microservice, see Administrator’s Guide: Cirrus Objects.
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-objects/resources
to the $deploy/site-config/sas-risk-cirrus-objects/resources
directory. Create a
destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, verify that the overlay default settings have been correctly applied. No further actions are required, unless you want to change the default settings to different values.
Modify the new copy of sas_risk_cirrus_objects_transform.yaml
to specify your
settings as follows:
For JAVA_OPTION_ENVIRONMENT_ID, replace {{ MY-ENVIRONMENT-ID }} with the identifier you have chosen for this particular environment.
If not configured, the system will default to no environment identifier.
In the base kustomization.yaml
file in the $deploy
directory, add
site-config/risk-cirrus-objects/resources/sas_risk_cirrus_objects_transform.yaml
to the transformers
block. Here is an example:
transformers:
- site-config/risk-cirrus-objects/resources/sas_risk_cirrus_objects_transform.yaml
Complete the deployment steps to apply the new settings. See Deployment Tasks: Deploy SAS Risk Cirrus.
Note: This overlay can be applied during the initial deployment of the SAS Viya platform or after the deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl -n <name-of-namespace> get configmap | grep sas-risk-cirrus-objects
The command returns the ConfigMaps defined for sas-risk-cirrus-objects. Here is an example:
sas-risk-cirrus-objects-parameters-<id> 9 6d19h
sas-risk-cirrus-objects-config-<id> 9 6d19h
Execute the following:
kubectl describe configmap sas-risk-cirrus-objects-config-<id> -n <name-of-namespace>
Verify that the output contains the settings that you configured.
Name: sas-risk-cirrus-objects-config-g5dg72m87g
Namespace: d89282
Labels: sas.com/admin=cluster-local
sas.com/deployment=sas-viya
Annotations: <none>
Data
====
JAVA_OPTION_CIRRUS_ENVIRONMENT_ID:
----
-Dcirrus.environment.id=MY_DEV_123
JAVA_OPTION_XMX:
----
-Xmx512m
JAVA_OPTION_XPREFETCH:
----
-Dsas.event.consumer.prefetchCount=8
SEARCH_ENABLED:
----
true
JAVA_OPTION_JAVA_LOCALE_USEOLDISOCODES:
----
-Djava.locale.useOldISOCodes=true
JAVA_OPTION_XSS:
----
-Xss1048576
BinaryData
====
Events: <none>
Tip: Use filtering to focus on a specific setting:
kubectl describe configmap sas-risk-cirrus-objects-config-<id> -n <name-of-namespace> | grep environment
Result:
JAVA_OPTION_CIRRUS_ENVIRONMENT_ID:
-Dcirrus.environment.id=MY_DEV_123
SAS Risk Engine integrates with Python running in a CAS session. This Python interface enables you to write your evaluation methods in Python instead of using the SAS Function Compiler.
The Python interface has additional benefits:
This README describes the deployment updates that are required for you to enable the Python interface.
Review the following documents before proceeding to the Steps To Configure section:
SAS Configurator for Open Source - SAS provides the SAS Configurator for Open Source utility, which automates the download and installation of Python from source by creating and executing the sas-pyconfig
job. For details, see the README at $deploy/sas-bases/examples/sas-pyconfig/README.md
(for Markdown format) or at $deploy/sas-bases/docs/sas_configurator_for_open_source_options.htm
(for HTML format).
Configure Python and R Integration with the SAS Viya Platform - SAS Risk Engine leverages the SAS Viya platform, allowing two-way
communication between SAS (CAS and Compute engines) and the Python open source environment. For details, see the README at
$deploy/sas-bases/examples/sas-open-source-config/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_python_and_r_integration_with_the_sas_viya_platform.htm
(for HTML format) that describes the steps required to install, configure, and deploy
Python to enable integration in the SAS Viya platform.
Configure External Languages Access Control Configuration - The SAS Viya platform uses a configuration file to enable Python integration in CAS. This is required when using Python called directly by the CAS actions that are provided with SAS Risk Engine. For details, see External Languages Access Control Configuration.
Configure Python for the SAS Viya Platform Using a Kubernetes Persistent Volume - The SAS Configurator for Open Source prepares a complete run-time
environment consisting of a configured Python installation as well as the required packages stored on a Kubernetes PersistentVolume. For details, see the
README at $deploy/sas-bases/examples/sas-open-source-config/python/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_python_for_the_sas_viya_platform_using_a_kubernetes_persistent_volume.htm
(for HTML format) that describes how to make that persistent volume available to your CAS and
Compute resources in the SAS Viya deployment.
LOCKDOWN Settings for the SAS Programming Environment - By default, Python is not included in the allowlist as an access method for users of the SAS
Programming Environment when running in LOCKDOWN. For details, see the README at $deploy/sas-bases/examples/sas-programming-environment/lockdown/README.md
(for Markdown format) or at $deploy/sas-bases/docs/lockdown_settings_for_the_sas_programming_environment.htm
(for HTML format) that describes how to make the Python interpreter available to users of the SAS Programming Environment.
Configuration Settings for Compute Server - The SAS Programming Environment provides access to the file system for the manipulation of user Python files.
For details, see the README at $deploy/sas-bases/examples/sas-compute-server/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_compute_server.htm
(for HTML format) that describes how to make Network File System (NFS)
resources available in the SAS Programming Environment sessions that execute in the Compute Server.
Configuration Settings for CAS - For details, see the README at $deploy/sas-bases/examples/cas/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm
(for HTML format) that describes how to enable host identity session launching as well as adding an NFS persistentVolumeClaim mount for the CAS server.
The SAS Viya platform provides the configuration YAML files (ending with a .yaml extension) that the Kustomize tool uses to configure the various software components. Before you modify any of these configuration files, you must perform the following tasks to collect information:
Make note of the attributes for the volume where the Python interpreter will expect to find the customer-provided Python programs. For example, note the server and directory for NFS as well as the full path to the directory location.
Make a list of the users who will use the Python interface that is provided with SAS Risk Engine to write and execute methods. This list of users will need to be added to the CasHostAccountRequired custom group in the SAS Environment Manager.
The following sections focus on a specific configurable software component. Each section discusses specific steps to create or modify the configuration files.
To configure the sas-pyconfig component, complete the following instructions to copy and modify the change-configuration.yaml and change-limits.yaml files.
If the $deploy/site-config/sas-pyconfig/
directory does not already exist, create it. If the $deploy/site-config/sas-pyconfig/change-configuration.yaml
file does not already exist, create it by copying the file from the
$deploy/sas-bases/examples/sas-pyconfig/
directory.
In the copied change-configuration.yaml file, update the /data/global.enabled
and /data/global.python_enabled
entries to enable the Python
interpreter by replacing “false” with “true”:
...
- op: replace
path: /data/global.enabled
value: "true"
- op: replace
path: /data/global.python_enabled
value: "true"
...
The set of packages for the Python interpreter is already initialized. If additional packages are needed, add them by package name to the
/data/default_py.pip_install_packages
entry. For example, to add the “tf-quant-finance”, “quantlib”, and “numba” packages:
...
- op: replace
path: /data/default_py.pip_install_packages
value: "Prophet sas_kernel matplotlib sasoptpy sas-esppy NeuralProphet scipy Flask XGBoost TensorFlow pybase64 scikit-learn statsmodels sympy mlxtend
Skl2onnx nbeats-pytorch ESRNN onnxruntime opencv-python zipfile38 json2 pyenchant nltk spacy gensim pyarrow hnswlib sas-ipc-queue great-expectations==0.16.8
tf-quant-finance quantlib numba"
...
Ensure that the site-config/sas-pyconfig/change-configuration.yaml
entry
is in the transformers block of the base $deploy/kustomization.yaml
file.
Here is an example:
...
transformers:
...
- site-config/sas-pyconfig/change-configuration.yaml
If the $deploy/site-config/sas-pyconfig/change-limits.yaml
file does not already exist, create it by copying the file from the
$deploy/sas-bases/examples/sas-pyconfig/
directory.
SAS Risk Engine does not require any modifications to the change-limits.yaml file. Before making any changes to the limit adjustments for CPU and memory, refer to the Resource Management section in the README at $deploy/sas-bases/examples/sas-pyconfig/README.md
(for Markdown format) or at $deploy/sas-bases/docs/sas_configurator_for_open_source_options.htm
(for HTML format).
Regardless of whether any changes were made for step 6, ensure that the site-config/sas-pyconfig/change-limits.yaml
entry is included in the transformers block of the base kustomization.yaml file.
Here is an example:
...
transformers:
...
- site-config/sas-pyconfig/change-limits.yaml
To configure the sas-open-source-config/python component, complete the following instructions to copy and modify the kustomization.yaml and python-transformer.yaml files.
If the $deploy/site-config/sas-open-source-config/python/
directory does not already exist, create it. If the $deploy/site-config/sas-open-source-config/python/kustomization.yaml
file does not already exist, create it by copying the file from
$deploy/sas-bases/examples/sas-open-source-config/python/
directory.
Add the following entry in the $deploy/site-config/sas-open-source-config/python/kustomization.yaml
file.
RISK_PYUSERPATH=/repyeval/usercode/
Replace the following placeholders with the appropriate values: {{ PYTHON-EXE-DIR }}, {{ PYTHON-EXECUTABLE }}, and {{ SAS-EXTLANG-SETTINGS-XML-FILE }}. Here is an example:
- PROC_PYPATH=/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3
- PROC_M2PATH=/opt/sas/viya/home/SASFoundation/misc/tk
- SAS_EXTLANG_SETTINGS=/repyeval/extlang.xml
- RISK_PYUSERPATH=/repyeval/usercode/
If the SAS Micro Analytic Service is not required for your environment, comment out the MAS_PYPATH and MAS_M2PATH entries.
If the Open Source Code node in SAS Visual Data Mining and Machine Learning is not required for your environment, comment out the DM_PYPATH entry.
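For example, if neither the SAS Micro Analytic Service nor the Open Source Code node is needed, the corresponding entries can be commented out as shown here. The paths are illustrative only and mirror the PROC_PYPATH example above; your file may use different values:
# - MAS_PYPATH=/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3
# - MAS_M2PATH=/opt/sas/viya/home/SASFoundation/misc/tk
# - DM_PYPATH=/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3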
If the SAS Micro Analytic Service is not required for your environment, comment out the following entry.
- name: sas-open-source-config-python-mas
literals:
- MAS_PYPORT= 31100
If the site-config/sas-open-source-config/python
entry is not already in
the resources block of the base kustomization.yaml file, add it. Here is an
example:
resources:
...
- site-config/sas-open-source-config/python
...
If the $deploy/site-config/sas-open-source-config/python/python-transformer.yaml
file does not already exist, create it by copying the file from the
$deploy/sas-bases/examples/sas-open-source-config/python/
directory.
Edit the following sections in the copied
$deploy/site-config/sas-open-source-config/python/python-transformer.yaml
file. There are three sections to be edited.
...
---
apiVersion: builtin
kind: PatchTransformer
metadata:
name: cas-python-transformer
patch: |-
# Add python volume
- op: add
path: /spec/controllerTemplate/spec/volumes/-
value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
# Add mount path for python
- op: add
path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
value:
name: python-volume
mountPath: /python
readOnly: true
# Add python-config configMap
- op: add
path: /spec/controllerTemplate/spec/containers/0/envFrom/-
value:
configMapRef:
name: sas-open-source-config-python
target:
group: viya.sas.com
kind: CASDeployment
name: .*
version: v1alpha1
---
...
...
---
apiVersion: builtin
kind: PatchTransformer
metadata:
name: launcher-job-python-transformer
patch: |-
# Add python volume
- op: add
path: /template/spec/volumes/-
value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
# Add mount path for python
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: python-volume
mountPath: /python
readOnly: true
# Add python-config configMap
- op: add
path: /template/spec/containers/0/envFrom/-
value:
configMapRef:
name: sas-open-source-config-python
target:
kind: PodTemplate
name: sas-launcher-job-config
version: v1
---
...
...
---
apiVersion: builtin
kind: PatchTransformer
metadata:
name: compute-job-python-transformer
patch: |-
# Add python volume
- op: add
path: /template/spec/volumes/-
value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
# Add mount path for python
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: python-volume
mountPath: /python
readOnly: true
# Add python-config configMap
- op: add
path: /template/spec/containers/0/envFrom/-
value:
configMapRef:
name: sas-open-source-config-python
target:
kind: PodTemplate
name: sas-compute-job-config
version: v1
---
...
In each section that you edited, replace the lines for the python volume and mount path with the specific attributes and values. For example, replace
# Add python volume
- op: add
path: /template/spec/volumes/-
value: { name: python-volume, {{ VOLUME-ATTRIBUTES }} }
# Add mount path for python
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: python-volume
mountPath: /python
readOnly: true
with:
# Add python volume
- op: add
path: /spec/controllerTemplate/spec/volumes/-
value:
name: sas-pyconfig
persistentVolumeClaim:
claimName: sas-pyconfig
# Add mount path for python
- op: add
path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
value:
name: sas-pyconfig
mountPath: /opt/sas/viya/home/sas-pyconfig
readOnly: true
If the SAS Micro Analytic Service is not required for your environment, comment out the following entry.
apiVersion: builtin
kind: PatchTransformer
metadata:
name: mas-python-transformer
patch: |-
# Add side car Container
...
target:
group: apps
kind: Deployment
name: sas-microanalytic-score
version: v1
---
If the Open Source Code node in SAS Visual Data Mining and Machine Learning is not required for your environment, comment out the following entry.
...
apiVersion: builtin
kind: PatchTransformer
metadata:
name: add-python-sas-java-policy-allow-list
patch: |-
- op: add
path: /data/SAS_JAVA_POLICY_ALLOW_DM_PYPATH
value: /python/{{ PYTHON-EXE-DIR }}/{{ PYTHON-EXECUTABLE }}
target:
kind: ConfigMap
name: sas-programming-environment-java-policy-config
Ensure that the site-config/sas-open-source-config/python/python-transformer.yaml
entry is in the transformers block of the base kustomization.yaml file.
Here is an example:
...
transformers:
...
- site-config/sas-open-source-config/python/python-transformer.yaml
To configure the sas-compute-server/configure component, complete the following instructions to copy and modify the compute-server-add-nfs-mount.yaml file.
If the $deploy/site-config/sas-compute-server/configure
folder does not already exist, create it. If the $deploy/site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml
file does not exist, create it by copying it from the
$deploy/sas-bases/examples/sas-compute-server/configure/
directory.
Edit the following entry in the $deploy/site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml
file.
...
- op: add
path: /template/spec/volumes/-
value:
name: {{ MOUNT-NAME }}
nfs:
path: {{ PATH-TO-BE-MOUNTED }}
server: {{ HOST }}
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: {{ MOUNT-NAME }}
mountPath: {{ PATH-TO-BE-MOUNTED }}
...
Replace the following placeholders with the appropriate values: {{ MOUNT-NAME }}, {{ HOST }}, and {{ PATH-TO-BE-MOUNTED }}. Here is an example:
...
- op: add
path: /template/spec/volumes/-
value:
name: repyeval-volume
nfs:
path: /export/repyeval
server: 192.168.2.4
- op: add
path: /template/spec/containers/0/volumeMounts/-
value:
name: repyeval-volume
mountPath: /repyeval
...
Ensure that the site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml
entry is in the transformers block of the base kustomization.yaml file.
Here is an example:
...
transformers:
...
- site-config/sas-compute-server/configure/compute-server-add-nfs-mount.yaml
...
To configure the sas-programming-environment/lockdown component, complete the following steps to copy and modify the enable-lockdown-access-methods.yaml file.
If the $deploy/site-config/sas-programming-environment/lockdown/
directory does not already exist, create it. If the $deploy/site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
file does not already exist, create it by copying it
from $deploy/sas-bases/examples/sas-programming-environment/lockdown/
directory.
Replace the placeholder {{ ACCESS-METHOD-LIST }} with the appropriate values. Here is an example:
...
- op: add
path: /data/VIYA_LOCKDOWN_USER_METHODS
value: "PYTHON PYTHON_EMBED SOCKET"
...
Alternatively, in the case where the $deploy/site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
file pre-exists, modify the VIYA_LOCKDOWN_USER_METHODS
entry to include “PYTHON PYTHON_EMBED SOCKET”.
Ensure that the site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
entry is in the transformers block of the base kustomization.yaml file.
Here is an example:
...
transformers:
...
- site-config/sas-programming-environment/lockdown/enable-lockdown-access-methods.yaml
...
If the extlang.xml configuration file that is specified in the SAS_EXTLANG_SETTINGS entry in the
$deploy/site-config/sas-open-source-config/python/kustomization.yaml file already exists, add the RISK_PYUSERPATH
environment variable
setting to the PYTHON3 language block in the file. If the extlang.xml file does not have a ‘PYTHON3’ language block, add it to the existing extlang.xml file. Here is an example:
...
<LANGUAGE name="PYTHON3" interpreter="/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3">
<ENVIRONMENT name="RISK_PYUSERPATH" value="/repyeval/usercode/" />
</LANGUAGE>
...
If the extlang.xml file does not exist, create it in an editor session. Save it to the NFS share location. Here is an example:
<EXTLANG version="1.0" mode="ALLOW" allowAllUsers="ALLOW">
<DEFAULT scratchDisk="/tmp" diskAllowlist="/tmp:/repyeval/usercode/">
<LANGUAGE name="PYTHON3"
interpreter="/opt/sas/viya/home/sas-pyconfig/default_py/bin/python3">
<ENVIRONMENT name="RISK_PYUSERPATH" value="/repyeval/usercode/"/>
</LANGUAGE>
</DEFAULT>
</EXTLANG>
Ensure that your users have only Read permission for the extlang.xml configuration file.
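For example, assuming the extlang.xml file resides at the root of the NFS export used in the earlier examples, permissions could be restricted on the NFS server as follows (a sketch; adjust the path, mode, and ownership to your environment):
chmod 644 /export/repyeval/extlang.xml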
If the $deploy/site-config/cas/configure
directory does not exist, create it.
By default, CAS cannot launch sessions under a user’s host identity. All
sessions run under the cas service account instead. CAS can be configured to
allow for host identity launches by including a patch transformer in the
kustomization.yaml file. The /$deploy/sas-bases/examples/cas/configure
directory contains a cas-enable-host.yaml file, which can be used for this
purpose.
If the $deploy/site-config/cas/configure/cas-enable-host.yaml
does not exist, create it by copying it from $deploy/sas-bases/examples/cas/configure/
directory.
SAS Risk Engine does not require any modifications to the cas-enable-host.yaml file. If you have modified it or intend to modify it for other reasons, those changes will not affect SAS Risk Engine.
The example file defaults to targeting all CAS servers by specifying a name component of .*.
To target specific CAS servers, comment out the name: .* line and choose which CAS servers you
want to target: either uncomment the name: line and replace NAME-OF-SERVER with one particular CAS
server, or uncomment the labelSelector line to target only the default deployment.
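Here is a sketch of what the target block in cas-enable-host.yaml might look like when targeting one particular CAS server. The group, kind, and version values match the CAS transformer examples earlier in this README; the commented labelSelector line uses a placeholder value:
target:
  group: viya.sas.com
  kind: CASDeployment
  # name: .*
  name: {{ NAME-OF-SERVER }}
  # labelSelector: {{ LABEL-SELECTOR }}
  version: v1alpha1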
Ensure that the $deploy/site-config/cas/configure/cas-enable-host.yaml
file
is in the transformers block of the base kustomization.yaml file. Here is an example:
...
transformers:
- site-config/cas/configure/cas-enable-host.yaml
...
If the $deploy/site-config/cas/configure/cas-add-nfs-mount.yaml
file does not exist, create it by copying it from $deploy/sas-bases/examples/cas/configure/
directory.
Edit the following entry in the existing $deploy/site-config/cas/configure/cas-add-nfs-mount.yaml
file:
...
- op: add
path: /spec/controllerTemplate/spec/volumes/-
value:
name: {{ MOUNT-NAME }}
nfs:
path: {{ NFS-PATH-TO-BE-MOUNTED }}
server: {{ HOST }}
- op: add
path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
value:
name: {{ MOUNT-NAME }}
mountPath: {{ CONTAINER-MOUNT-PATH }}
...
Replace the following placeholders with the appropriate values: {{ MOUNT-NAME }}, {{ HOST }}, and {{ CONTAINER-MOUNT-PATH }}. Here is an example:
...
- op: add
path: /spec/controllerTemplate/spec/volumes/-
value:
name: repyeval-volume
nfs:
path: /export/repyeval
server: 192.168.2.4
- op: add
path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
value:
name: repyeval-volume
mountPath: /repyeval
...
Ensure that the $deploy/site-config/cas/configure/cas-add-nfs-mount.yaml
entry
is in the transformers block of the base kustomization.yaml file.
Here is an example:
...
transformers:
...
- site-config/cas/configure/cas-add-nfs-mount.yaml
...
When SAS Risk Factor Manager is deployed, its content is integrated with the SAS
Risk Cirrus platform. The platform includes a common layer, Cirrus Core, that is
used by multiple solutions. Therefore, in order to deploy the SAS Risk Factor
Manager solution successfully, you must deploy the Cirrus Core content in
addition to the solution content. Preparing and configuring Cirrus Core for
deployment is described in the Cirrus Core README at
$deploy/sas-bases/examples/sas-risk-cirrus-rcc/resources/README.md
(Markdown
format) or
$deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For storage options for your solution, such as internal databases, refer to the Cirrus Core README.
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Risk Cirrus: Administrator’s Guide.
SAS Risk Factor Manager provides a ConfigMap whose values control various
aspects of its deployment process. It includes variables such as the logging
level for the deployment, deployment steps to skip, etc. SAS provides default
values for these variables as described in the next section. You can override
these default values by configuring a configuration.env
file with your
override values and then configuring your kustomization.yaml
file to apply
those overrides.
For a list of variables that can be overridden and their default values, see SAS Risk Factor Manager Configuration Parameters.
For the steps needed to override the default values with your own values, see Apply Overrides to the Configuration Parameters.
The following list describes the parameters that can be specified in the SAS
Risk Factor Manager .env
configuration file. These parameters can be found in
the template configuration file (configuration.env
), but they are commented
out in that file. Lines that begin with #
will not be applied during
deployment. If you want to use one of those skipped variables, remove the #
at
the beginning of the line.
The SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
parameter specifies a logging level
for the deployment. The logging level INFO
is used if the variable is not
overridden by your .env
file. For a more verbose level of logging, specify
the value DEBUG
.
The SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
parameter
specifies whether you want to skip specific steps during the deployment of
SAS Risk Factor Manager. The value ""
is used if the variable is not
overridden by your .env
file. This means none of the deployment steps will
be skipped explicitly.
The SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
parameter specifies
an explicit list of steps you want to run during a deployment. The value ""
is used if the variable is not overridden by your .env
file. This means all
of the deployment steps will be run except steps skipped in
SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
.
Note: If you configured overrides during a previous deployment, those overrides should already be available in the SAS Risk Factor Manager ConfigMap. You can verify this by examining the ConfigMap as described in the verification steps later in this README.
If you want to override any of the SAS Risk Factor Manager configuration properties rather than using the default values, complete these steps:
If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.
If you have a $deploy/site-config/sas-risk-cirrus-rfm
directory, take note
of the values in your rfm_transform.yaml
file. You may want to use them in
the following steps. Once you have the values you need, delete the directory
and its contents. Then, edit your base kustomization.yaml
file
($deploy/kustomization.yaml
) to remove the following line from the
transformers
section:
- site-config/sas-risk-cirrus-rfm/resources/rfm_transform.yaml
Create a $deploy/site-config/sas-risk-cirrus-rfm
directory if one does not
exist. Then copy the files in
$deploy/sas-bases/examples/sas-risk-cirrus-rfm
to that directory.
IMPORTANT: If the destination directory already exists, confirm it
contains the configuration.env
file, not the rfm_transform.yaml
file that
was used for cadences prior to 2025.02. If the directory already exists, and
it has the configuration.env
file, then
verify that the overlay connection settings
have been applied correctly. No further actions are required unless you want
to change the connection settings to different values.
In the base kustomization.yaml file, add the sas-risk-cirrus-rfm-parameters
ConfigMap to the configMapGenerator
block. If that block does not exist,
create it. Here is an example:
configMapGenerator:
- name: sas-risk-cirrus-rfm-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-rfm/configuration.env
Save the kustomization.yaml file.
Modify the configuration.env
file (located in the
$deploy/site-config/sas-risk-cirrus-rfm
directory). Lines that begin with
#
will not be applied during deployment. If you want to use one of those
skipped variables, remove the #
at the beginning of the line. You can read
more about each step in
SAS Risk Factor Manager Configuration Parameters.
If you want to override the default settings provided by SAS, specify your
settings as follows:
a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-OR-DEBUG }}
with the logging level desired. The default value is INFO.
b. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace
{{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to
skip. The default value is an empty string ""
.
c. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace
{{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to
run. The default value is an empty string ""
.
WARNING: This list is absolute; the deployment will only run the steps included in this list. This variable should be an empty string if you are deploying this environment for the first time, or if you are upgrading from a previous version. Otherwise you risk a failed or incomplete deployment.
Save your changes to the configuration.env
file.
The following is an example of a configuration.env
file you could use for
SAS Risk Factor Manager. This example uses the default values provided by
SAS:
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial
deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Risk Factor Manager solution, complete step 7 specified in the Cirrus Core README to verify Cirrus Core.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-rfm-parameters -n <name-of-namespace>
Verify that the output contains the desired connection settings that you configured.
When SAS Risk Modeling is deployed, its content is integrated with the SAS Risk
Cirrus platform. The platform includes a common layer, Cirrus Core, that is used
by multiple solutions. Therefore, in order to deploy the SAS Risk Modeling
solution successfully, you must deploy the Cirrus Core content in addition to
the solution content. Preparing and configuring Cirrus Core for deployment is
described in the Cirrus Core README at
$deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md
(Markdown
format) or
$deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
The Risk Cirrus Core README also contains information about storage options, such as external databases, for your solution. You must complete the pre-deployment tasks described in the Risk Cirrus Core README before deploying SAS Risk Modeling. Read that document for important information about the deployment tasks that should be completed prior to deploying SAS Risk Modeling.
IMPORTANT: You must complete the step Modify the Configuration for Risk Cirrus Core
. Because SAS Risk Modeling uses workflow service tasks, a user account must be configured for a workflow client. If you know which user account you want to use and want to configure it during installation, set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG
variable to “Y” and specify the user account in the value of the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT
variable.
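For example, these two variables could be set in the Risk Cirrus Core .env configuration file as follows (a sketch; the service account value is a placeholder that you must replace with an actual user ID):
SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG=Y
SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT={{ WORKFLOW-SERVICE-ACCOUNT-USER-ID }}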
For more information about deploying Risk Cirrus Core, you can also read Deployment Tasks in the SAS Risk Cirrus: Administrator’s Guide.
If you have a $deploy/site-config/sas-risk-cirrus-rm/resources
directory, delete it and its contents. Remove the reference to this directory from the transformers section of your base kustomization.yaml
file ($deploy/kustomization.yaml
). This step should only be necessary if you are upgrading from a cadence prior to 2025.02.
Copy the files in
$deploy/sas-bases/examples/sas-risk-cirrus-rm
to the
$deploy/site-config/sas-risk-cirrus-rm
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, confirm it contains the .env
file, not the rm_transform.yaml
file that was used for cadences prior to 2025.02. If the directory already exists, and it has the .env
file, then verify that the overlay connection settings have been applied correctly. No further actions are required unless you want to change the connection settings to different values.
Modify the configuration.env
file (located in the $deploy/site-config/sas-risk-cirrus-rm
directory). Lines with a #
at the beginning are commented out; their values will not be applied during deployment. To override a default provided by SAS for a given variable, uncomment the line by removing the #
at the beginning of the line and modify as explained in the following section. Specify, if needed, your settings as follows:
a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO).
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, the default value is "N". Currently, SAS Risk Modeling does not include sample artifacts, so this parameter defaults to "N". Do not modify this parameter. In the future, any items marked as sample artifacts will be listed here.
c. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Currently, SAS Risk Modeling requires users to complete all the steps. Set this variable to an empty string.
d. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment. For example, if you have deleted the prepackaged monitoring plans or KPIs from your environment, then you can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to “load_objects” and then delete the sas-risk-cirrus-rm pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS. WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.
The following is an example of a configuration.env
that you could use for SAS Risk Modeling. The uncommented parameters will be added to the solution configuration map.
SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER=INFO
SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES=N
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
In the base kustomization.yaml file in the $deploy
directory, add
site-config/sas-risk-cirrus-rm/configuration.env
to the configMapGenerator
block. Here is an example:
configMapGenerator:
...
- name: sas-risk-cirrus-rm-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-rm/configuration.env
...
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Risk Modeling solution, you should first verify Risk Cirrus Core's settings. Those instructions can be found in the Risk Cirrus Core README. To verify the settings for SAS Risk Modeling, do the following:
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-rm-parameters -n <name-of-namespace>
Verify that the output contains the desired connection settings that you configured.
The SAS Start-Up Sequencer is configured to start pods in a predetermined, ordered sequence to ensure that pods are started efficiently and effectively in a manner that improves startup time. This design ensures that certain components start before others and allows Kubernetes to pull container images in a priority-based sequence. It also provides a degree of resource optimization, in that resources are spent more efficiently during SAS Viya platform start-up, with priority given to starting essential components first.
However, there may be cases where an administrator does not want this optimization. For these cases, SAS provides the ability to disable the feature by applying a transformer that updates the components in your cluster so that the start sequencer functionality does not execute.
Add sas-bases/overlays/startup/disable-startup-transformer.yaml to the transformers block in your base kustomization.yaml file ($deploy/kustomization.yaml). Ensure that disable-startup-transformer.yaml is listed after sas-bases/overlays/required/transformers.yaml.
Here is an example:
...
transformers:
...
- sas-bases/overlays/required/transformers.yaml
- sas-bases/overlays/startup/disable-startup-transformer.yaml
To apply the change, perform the appropriate steps at Deploy the Software.
The SAS Stress Testing solution contains three different offerings: SAS Stress Testing, SAS Climate Stress Testing, and SAS Credit Stress Testing. The SAS Stress Testing offering is the enterprise offering, which includes the Climate Stress Testing and Credit Stress Testing offerings in addition to other features such as financial statement projection. The Climate Stress Testing offering is tailored toward evaluating your business's exposure to climate risk. The Credit Stress Testing offering is tailored toward evaluating your business's exposure to credit risk.
When SAS Stress Testing is deployed, its content is integrated with the SAS Risk Cirrus platform. The platform includes a common layer, Risk Cirrus Core, that is used by multiple solutions. Therefore, in order to deploy the SAS Stress Testing solution successfully, you must deploy the Risk Cirrus Core content in addition to the solution content. Preparing and configuring Risk Cirrus Core for deployment is described in the Risk Cirrus Core README at $deploy/sas-bases/examples/sas-risk-cirrus-rcc/README.md
(Markdown format) or $deploy/sas-bases/docs/preparing_and_configuring_cirrus_core_for_deployment.htm
(HTML format).
For storage options for your solution, such as external databases, refer to the Risk Cirrus Core README.
For more information about the pre-installation tasks that should be completed prior to deploying your solution, see Performing Pre-Installation Tasks in the SAS Stress Testing: Administrator’s Guide.
Complete steps 1-4 described in the Risk Cirrus Core README.
Complete step 5 described in the Risk Cirrus Core README to modify your Risk Cirrus Core .env
configuration file. Because SAS Stress Testing uses workflow service tasks, a default service account must be configured for the Risk Cirrus Objects workflow client. If you know which user account to use before installation and prefer having it configured during installation, you should set the SAS_RISK_CIRRUS_SET_WORKFLOW_SERVICE_ACCOUNT_FLG variable
to “Y” and assign the user ID to the SAS_RISK_CIRRUS_WORKFLOW_DEFAULT_SERVICE_ACCOUNT
variable. If you choose not to configure this during installation, you can set the default service account after deployment via SAS Environment Manager.
If you are upgrading from a cadence prior to 2025.02, you should complete this step. Otherwise, you can skip to the next step.
If you have a $deploy/site-config/sas-risk-cirrus-st/resources
directory, take note of the values in your st_transform.yaml
file. You may want to use them in the following steps. Once you have the values you need, delete the directory and its contents. Then, edit your base kustomization.yaml
file ($deploy/kustomization.yaml
) to remove the following line from the transformers
section: - site-config/sas-risk-cirrus-st/resources/st_transform.yaml
.
Copy the files in $deploy/sas-bases/examples/sas-risk-cirrus-st
to the $deploy/site-config/sas-risk-cirrus-st
directory. Create a destination directory if one does not exist.
IMPORTANT: If the destination directory already exists, make sure it has the expected configuration.env and sas-risk-cirrus-st-secret.env files, not the old st_transform.yaml file that was used for cadences prior to 2025.02. If the directory already exists and already has the expected configuration.env and sas-risk-cirrus-st-secret.env files, verify that the overlay settings have been applied successfully to the ConfigMap and to the secret. No further actions are required unless you want to change the settings to different overrides.
Modify the configuration.env
file (located in the $deploy/site-config/sas-risk-cirrus-st
directory). Lines with a #
at the beginning are commented out; their values will not be applied during deployment. If there are any parameters for which you want to override the default value, uncomment that variable’s line by removing the #
at the beginning of the line and replace the placeholder with the desired value as explained in the following section. Specify, if needed, your settings as follows:
a. For SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER
, replace {{ INFO-OR-DEBUG }} with the logging level desired. (Default is INFO)
b. For SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
, replace {{ Y-OR-N }} to specify whether you want to include steps flagged as sample artifacts. If this value is N, then steps marked as sample step = “true” will be skipped during deployment. For example, you may want to deploy sample artifacts on your ‘DEV’ environment, so you set this variable to Y for that environment; however, you probably do not want to deploy sample artifacts on your ‘PROD’ environment, so you set this variable to N for that environment. If you do not set this variable, or if you leave it blank, steps marked as sample artifacts will be skipped. The following steps have been marked as sample artifacts and listed by product:
SAS Stress Testing
- create_cas_lib step creates the default STReporting CAS library that is used for reporting in SAS Stress Testing.
- create_db_auth_domain step creates an STDBAuth domain for the riskcirrusst schema and assigns default permissions.
- create_db_auth_domain_user step creates an STUserDBAuth domain for the riskcirrusst schema and assigns default group permissions.
- import_dataloader_files_climate step uploads the Cirrus_Climate_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
- import_dataloader_files_credit step uploads the Cirrus_Credit_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
- import_dataloader_files_ewst step uploads the Cirrus_EWST_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
- import_sample_dataloader_files_common step uploads the Cirrus_ST_sample_data_loader.zip file into the file service under the Products/SAS Stress Testing directory.
- import_templates_common step uploads the Business Evolution Template used for import/export of BEP growth projections to the file service under the Products/SAS Stress Testing directory.
- import_va_reports_climate step imports SAS-provided Climate reports created in SAS Visual Analytics.
- install_riskengine_project_climate step loads the sample Climate project into SAS Risk Engine.
- install_riskengine_project_credit step loads the sample Credit project into SAS Risk Engine.
- load_ado_linked_objects_ewst step loads the Link Instances between the Business Evolution Plans (BEP) and Analysis Data Objects as well as linking the BEP to the Risk Scenarios in SAS Risk Factor Manager.
- load_objects_climate step data loads the Cirrus_ST_Climate_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_objects_credit step data loads the Cirrus_ST_Credit_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_objects_ewst step data loads the Cirrus_ST_EWST_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_sample_objects_common step data loads the Cirrus_ST_Sample_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_workflows_climate step loads and activates the ST Climate workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
- load_workflows_credit step loads and activates the ST Credit workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
- load_workflows_ewst step loads and activates the ST workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
- localize_va_reports_climate step imports localized SAS-provided Climate reports created in SAS Visual Analytics.
- localize_va_reports_credit step imports localized SAS-provided Credit reports created in SAS Visual Analytics.
- manage_cas_lib_acl step sets up permissions for the default STReporting CAS library. Users in the STUsers, STAdministrators and SASAdministrators groups have full access to the tables.
- transfer_sampledata_files_climate step stores a copy of all Climate sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
- transfer_sampledata_files_common step stores a copy of all common sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, reports and a BEP template.
- transfer_sampledata_files_credit step stores a copy of all Credit sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
- transfer_sampledata_files_ewst step stores a copy of all sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
- update_db_sampledata_scripts_pg_climate step stores a copy of the install_climate_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Climate sample data.
- update_db_sampledata_scripts_credit step stores a copy of the install_credit_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Credit sample data.
- update_db_sampledata_scripts_pg_ewst step stores a copy of the install_ewst_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the sample data.
WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:
- check_services step checks if the ST dependent services are up and running.
- check_solution_existence step checks to see if the ST solution is already running.
- check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
- create_solution_repo step creates the ST repository.
- check_solution_running step checks to ensure the ST solution is running.
- import_solution step imports the solution into the ST repository.
- load_app_registry step loads the ST solution into the SAS application registry.
- load_auth_rules_common step assigns authorization rules for the ST solution.
- load_group_memberships_common step assigns members to various ST groups.
- load_identities_common step loads the ST identities.
- load_main_objects_common step loads the Cirrus_ST_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- setup_code_lib_repo step creates the ST code library directory.
- share_objects_with_solution step shares the Risk Cirrus Core code library with the ST solution.
SAS Climate Stress Testing
- create_cas_lib step creates the default STReporting CAS library that is used for reporting in SAS Stress Testing.
- create_db_auth_domain step creates an STDBAuth domain for the riskcirrusst schema and assigns default permissions.
- create_db_auth_domain_user step creates an STUserDBAuth domain for the riskcirrusst schema and assigns default group permissions.
- import_dataloader_files_climate step uploads the Cirrus_Climate_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
- import_sample_dataloader_files_common step uploads the Cirrus_ST_sample_data_loader.zip file into the file service under the Products/SAS Stress Testing directory.
- import_templates_common step uploads the Business Evolution Template used for import/export of BEP growth projections to the file service under the Products/SAS Stress Testing directory.
- import_va_reports_climate step imports SAS-provided Climate reports created in SAS Visual Analytics.
- install_riskengine_project_climate step loads the sample Climate project into SAS Risk Engine.
- load_objects_climate step data loads the Cirrus_ST_Climate_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_sample_objects_common step data loads the Cirrus_ST_Sample_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_workflows_climate step loads and activates the ST Climate workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
- localize_va_reports_climate step imports localized SAS-provided Climate reports created in SAS Visual Analytics.
- manage_cas_lib_acl step sets up permissions for the default STReporting CAS library. Users in the STUsers, STAdministrators and SASAdministrators groups have full access to the tables.
- transfer_sampledata_files_climate step stores a copy of all Climate sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
- transfer_sampledata_files_common step stores a copy of all common sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, reports and a BEP template.
- update_db_sampledata_scripts_pg_climate step stores a copy of the install_climate_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Climate sample data.
WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:
- check_services step checks if the ST dependent services are up and running.
- check_solution_existence step checks to see if the ST solution is already running.
- check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
- create_solution_repo step creates the ST repository.
- check_solution_running step checks to ensure the ST solution is running.
- import_solution step imports the solution into the ST repository.
- load_app_registry step loads the ST solution into the SAS application registry.
- load_auth_rules_common step assigns authorization rules for the ST solution.
- load_group_memberships_common step assigns members to various ST groups.
- load_identities_common step loads the ST identities.
- load_main_objects_common step loads the Cirrus_ST_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- setup_code_lib_repo step creates the ST code library directory.
- share_objects_with_solution step shares the Risk Cirrus Core code library with the ST solution.
SAS Credit Stress Testing
- create_cas_lib step creates the default STReporting CAS library that is used for reporting in SAS Stress Testing.
- create_db_auth_domain step creates an STDBAuth domain for the riskcirrusst schema and assigns default permissions.
- create_db_auth_domain_user step creates an STUserDBAuth domain for the riskcirrusst schema and assigns default group permissions.
- import_dataloader_files_credit step uploads the Cirrus_Credit_loader.xlsx file into the file service under the Products/SAS Stress Testing directory.
- import_sample_dataloader_files_common step uploads the Cirrus_ST_sample_data_loader.zip file into the file service under the Products/SAS Stress Testing directory.
- import_templates_common step uploads the Business Evolution Template used for import/export of BEP growth projections to the file service under the Products/SAS Stress Testing directory.
- import_va_reports_credit step imports SAS-provided Credit reports created in SAS Visual Analytics.
- install_riskengine_project_credit step loads the sample Credit project into SAS Risk Engine.
- load_objects_credit step data loads the Cirrus_ST_Credit_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_sample_objects_common step data loads the Cirrus_ST_Sample_loader.zip sample object instances. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- load_workflows_credit step loads and activates the ST Credit workflow definitions. Once a workflow definition has been activated, it cannot be deleted from the environment.
- localize_va_reports_credit step imports localized SAS-provided Credit reports created in SAS Visual Analytics.
- manage_cas_lib_acl step sets up permissions for the default STReporting CAS library. Users in the STUsers, STAdministrators and SASAdministrators groups have full access to the tables.
- transfer_sampledata_files_credit step stores a copy of all Credit sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, models, reports, sample loan data, scenarios and scripts to load the sample loan data.
- transfer_sampledata_files_common step stores a copy of all common sampledata files loaded into the environment into the file service under the Products/SAS Stress Testing directory. This directory will include DDLs, reports and a BEP template.
- update_db_sampledata_scripts_pg_credit step stores a copy of the install_credit_sample_data.sas script called install_sample_data_user_executable.sas that contains the PostgreSQL database connection information for users to execute to reinstall the Credit sample data.
WARNING: You can always load sample data after a deployment has been completed, but it can be very difficult to remove sample data once it has been deployed. In some cases, your only option is to re-deploy the environment without sample data. If you are unsure about whether you want sample data on your environment, then set this variable to N. The following steps have not been marked as sample artifacts and will always be deployed:
- check_services step checks if the ST dependent services are up and running.
- check_solution_existence step checks to see if the ST solution is already running.
- check_solution_deployment step checks for the successful deployment of Risk Cirrus Core.
- create_solution_repo step creates the ST repository.
- check_solution_running step checks to ensure the ST solution is running.
- import_solution step imports the solution into the ST repository.
- load_app_registry step loads the ST solution into the SAS application registry.
- load_auth_rules_common step assigns authorization rules for the ST solution.
- load_group_memberships_common step assigns members to various ST groups.
- load_identities_common step loads the ST identities.
- load_main_objects_common step loads the Cirrus_ST_main_loader.xlsx file which contains required object instances, like Source System codes, Sequence Definitions and Code Libraries. Once this data has been deployed, you must unload it to remove it from the environment. In some cases, it is impossible to unload the data fully without redeploying.
- setup_code_lib_repo step creates the ST code library directory.
- share_objects_with_solution step shares the Risk Cirrus Core code library with the ST solution.
c. For SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }} with the IDs of the steps you want to run. Typically, this is intended to be used after a deployment has completed successfully, and you need to re-run a specific step without redeploying the entire environment.
For example, if SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES is set to N, then the transfer_sampledata_files_common and load_sample_objects_common steps will be skipped during deployment. If, after the deployment finishes, you decide you want to include the SAS-provided sample data, you can set SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS to "transfer_sampledata_files_common,load_sample_objects_common" and then delete the sas-risk-cirrus-st pod to force a redeployment. Doing so will only run the steps listed in SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS.
WARNING: This list is absolute; the deployment will only run the steps included in this list. If you are deploying this environment for the first time, this variable should be an empty string, or you risk an incomplete or failed deployment.
d. For SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS
, replace {{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }} with the IDs of the steps you want to skip. Typically, the only use case for this would be skipping the load of sample data. To skip the load of sample data, set this variable to “load_sample_data”. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is set to N, then set this variable to an empty string to skip load_sample_data and any other steps that are marked as sample data. If SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES
is set to Y, then set this variable with the IDs of any steps you would like to skip, including those flagged as sample data.
e. For SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-NAME }} with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.
f. For SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ BASE64-ENCODED-SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }} with the base64 encoded version of the database user for the user name that was used for SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
.
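If you need to produce the base64-encoded value, one option is a standard shell utility; for example (a sketch, assuming a Linux shell and replacing the placeholder with the actual value):
echo -n '<value-to-encode>' | base64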
The following is an example of a configuration.env
that you could use for SAS Stress Testing. This example uses the default values provided by SAS except for the solution input data database user name variable. The SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME
should be replaced with the user who is intended to own the solution database schema. If a value is not specified, it defaults to the owner of the Shared Services database.
# SAS_LOG_LEVEL_RISKCIRRUSDEPLOYER={{ INFO-OR-DEBUG }}
# SAS_RISK_CIRRUS_DEPLOYER_INCLUDE_SAMPLES={{ Y-OR-N }}
# SAS_RISK_CIRRUS_DEPLOYER_SKIP_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-SKIP }}
# SAS_RISK_CIRRUS_DEPLOYER_RUN_SPECIFIC_INSTALL_STEPS={{ COMMA-SEPARATED-STEPS-IDS-TO-RUN }}
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_NAME=stuser
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-st/configuration.env
to the configMapGenerator
block. Here is an example:
configMapGenerator:
...
- name: sas-risk-cirrus-st-parameters
behavior: merge
envs:
- site-config/sas-risk-cirrus-st/configuration.env
...
Save the kustomization.yaml
file.
Modify the sas-risk-cirrus-st-secret.env file (in the $deploy/site-config/sas-risk-cirrus-st
directory) and specify your settings as follows:
For the parameter SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET
, replace {{ SOLUTION-INPUT-DATA-SCHEMA-USER-SECRET }}
with the database schema user secret. If the directory already exists and already has the expected .env
file, verify that the overlay settings have been applied successfully to the secret. No further actions are required unless you want to change the secret.
The following is an example of a sas-risk-cirrus-st-secret.env
file that you could use for SAS Stress Testing.
SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET=stsecret
Save the sas-risk-cirrus-st-secret.env
file.
In the base kustomization.yaml
file, add site-config/sas-risk-cirrus-st/sas-risk-cirrus-st-secret.env
to the secretGenerator
block. Here is an example:
secretGenerator:
...
- name: sas-risk-cirrus-st-secret
behavior: merge
envs:
- site-config/sas-risk-cirrus-st/sas-risk-cirrus-st-secret.env
...
Save the kustomization.yaml
file.
When you have finished configuring your deployment using the README files that are provided, complete the deployment steps to apply the new settings. The method by which the manifest is applied depends on what deployment method is being used. For more information, see Deploy the Software in the SAS Viya Platform: Deployment Guide.
Note: The .env
overlay can be applied during or after the initial deployment of the SAS Viya platform.
- If you are applying the overlay during the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
- If you are applying the overlay after the initial deployment of the SAS Viya platform, run kustomize build to create and apply the manifests.
Before verifying the settings for the SAS Stress Testing solution, complete step 9 specified in the Risk Cirrus Core README to verify Risk Cirrus Core.
Run the following command to verify whether the overlay has been applied to the configuration map:
kubectl describe configmap sas-risk-cirrus-st-parameters -n <name-of-namespace>
Verify that the output contains the desired settings that you configured.
To verify that your overrides were applied successfully to the secret, run the following commands:
Find the name of the secret on the namespace:
kubectl describe secret sas-risk-cirrus-st-secret -n <name-of-namespace>
Retrieve the name of the secret from the "Name:" line of the generated output.
Then retrieve the secret data and verify that it contains the desired database schema user secret that you configured:
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data}'
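To inspect a single value, the stored data can be decoded with base64. Here is a sketch that assumes the data key matches the variable name used in sas-risk-cirrus-st-secret.env:
kubectl get secret <name-of-the-secret> -n <name-of-namespace> -o jsonpath='{.data.SAS_RISK_CIRRUS_SOLUTION_INPUT_DATA_DB_USER_SECRET}' | base64 -d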
The SAS Viya platform files service uses PostgreSQL to store file metadata and content. However, in PostgreSQL, upload time is slower for large objects. To overcome this limitation, you can choose to store the file content in other data storage, such as Azure Blob Storage. If you choose Azure Blob Storage as the storage back end, then the file content is stored in Azure Blob Storage and the file metadata remains in PostgreSQL.
The steps necessary to configure the SAS Viya platform files service to use Azure Blob Storage as the back end for file content are listed below.
Before you start, create or obtain a storage account and record the name of the storage account and its access key.
Copy the files in the $deploy/sas-bases/examples/sas-files/azure/blob
directory to the
$deploy/site-config/sas-files/azure/blob
directory. Create the target directory if it does
not already exist.
Create a file named account_key
in the $deploy/site-config/sas-files/azure/blob
directory, and paste the storage account key into the file. The file should
only contain the storage account key.
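For example, assuming the storage account key is available in an environment variable named STORAGE_ACCOUNT_KEY (a placeholder), you could create the file as follows; printf is used so that no trailing newline is written:
printf '%s' "$STORAGE_ACCOUNT_KEY" > $deploy/site-config/sas-files/azure/blob/account_key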
In the $deploy/site-config/sas-files/azure/blob/configmaps.yaml
file, replace
{{ STORAGE-ACCOUNT-NAME }}
with the name of the storage account to be used by
the files service.
Make the following changes to the base kustomization.yaml file in the $deploy
directory.
4.1. Add sas-bases/overlays/sas-files
and site-config/sas-files/azure/blob
to the resources block.
Here is an example:
resources:
...
- sas-bases/overlays/sas-files
- site-config/sas-files/azure/blob
...
4.2. Add site-config/sas-files/azure/blob/transformers.yaml
and sas-bases/overlays/sas-files/file-custom-db-transformer.yaml
to the transformers block.
Here is an example:
transformers:
...
- sas-bases/overlays/sas-files/file-custom-db-transformer.yaml
- site-config/sas-files/azure/blob/transformers.yaml
...
Use the deployment commands described in SAS Viya Platform Deployment Guide to apply the new settings.
The SAS Workload Orchestrator Service manages the workload which is started on demand by the launcher service. The SAS Workload Orchestrator Service has manager pods in a StatefulSet and server pods in a DaemonSet.
This README file describes the changes that can be made to the SAS Workload Orchestrator Service settings for pod resource requirements, for user-defined resource scripts, for the initial configuration of the service, and for specifying a different Prometheus Pushgateway URL than the default.
IMPORTANT: It is strongly recommended that deployments of SAS Workload Orchestrator
also have the ClusterRoleBinding. For details, see the README located at
$deploy/sas-bases/overlays/sas-workload-orchestrator/README.md
(for Markdown format) or at
$deploy/sas-bases/docs/cluster_privileges_for_sas_workload_orchestrator_service.htm
(for HTML format).
Kubernetes pods have resource requests and limits for CPU and memory.
Manager pods handle all the REST API calls and manage all of the processing of host, job, and queue information. The more jobs you process at the same time, the more memory and cores you should assign to the StatefulSet pods. For manager pods, the current default resource request and limit for CPU and memory is 1 core and 4GB of memory.
Server pods interact with Kubernetes to manage the resources and jobs running on a particular node. Their memory and core requirements depend on how jobs are allowed to concurrently run on a node and how many pods not started by the SAS Workload Orchestrator Service are also running on a node. For server pods, the current default resource request and limit for CPU and memory is 0.1 core and 250MB of memory.
Generally, manager pods use more resources than server (daemon) pods. In both cases, the resource request amount should equal the limit amount.
SAS Workload Orchestrator allows user-defined resources to be used for scheduling. User-defined resources can be a specified value or can be a value returned by executing a script.
Manager pods handle the running of user-defined resource scripts for resources that affect the scheduling on a global scale. An example of a global resource would be the number of licenses across all pods started by SAS Workload Orchestrator.
Server pods also handle the running of user-defined resource scripts for resources that reflect values about an individual node that a pod would run on. An example of a host resource could be number of GPUs on a host (for the case of a static resource) or the amount of disk space left on a mount (for the case of a dynamic resource).
In order to set these values, SAS Workload Orchestrator looks for a script in a volume mount named “/scripts”. To place a script in that directory, the script must be placed in a volume and that volume specified in the StatefulSet or DaemonSet definition as a volume with the name ‘scripts’.
As of the 2024.09 cadence, the default SAS Workload Orchestrator configuration is loaded from the sas-workload-orchestrator-initial-configuration ConfigMap. If the initial configuration needs to be modified, the ConfigMap can be modified by a patch transformer.
As of the 2024.09 cadence, the Prometheus Pushgateway used by SAS Workload Orchestrator
can be specified by an environment variable allowing customers to change where
SAS Workload Orchestrator sends its metric information. A patch transformer is provided
to allow a custom URL to be set in the SAS Workload Orchestrator Daemonset configuration.
If the environment variable is not specified, the metrics are sent to
http://prometheus-pushgateway:9091
.
Based on the following descriptions of available example files, determine if you want to use any example file in your deployment. If so, copy the example file and place it in your site-config directory.
The example files described in this README file are located at ‘/$deploy/sas-bases/examples/sas-workload-orchestrator/configure’.
The values for memory and CPU resources for the SAS Workload Orchestrator Service manager pods
are specified in sas-workload-orchestrator-statefulset-resources.yaml
.
To update the defaults, replace the {{ MEM-REQUIRED }}
and {{ CPU-REQUIRED }}
variables
with the values you want to use.
Note: It is important that the values for the requests and limits be identical to get Guaranteed Quality of Service for the SAS Workload Orchestrator Service pods.
Here is an example:
- op: replace
path: /spec/template/spec/containers/0/resources/requests/memory
value: 6Gi
- op: replace
path: /spec/template/spec/containers/0/resources/limits/memory
value: 6Gi
- op: replace
path: /spec/template/spec/containers/0/resources/requests/cpu
value: "2"
- op: replace
path: /spec/template/spec/containers/0/resources/limits/cpu
value: "2"
Note: For details on the value syntax used in the code, see Resource units in Kubernetes.
After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-statefulset-resources.yaml
The values for memory and CPU resources for the SAS Workload Orchestrator Service server pods
are specified in sas-workload-orchestrator-daemonset-resources.yaml
.
To update the defaults, replace the {{ MEM-REQUIRED }}
and {{ CPU-REQUIRED }}
variables
with the values you want to use.
Note: It is important that the values for the requests and limits be identical to get Guaranteed Quality of Service for the SAS Workload Orchestrator Service pods.
Here is an example:
- op: replace
path: /spec/template/spec/containers/0/resources/requests/memory
value: 4Gi
- op: replace
path: /spec/template/spec/containers/0/resources/limits/memory
value: 4Gi
- op: replace
path: /spec/template/spec/containers/0/resources/requests/cpu
value: "1500m"
- op: replace
path: /spec/template/spec/containers/0/resources/limits/cpu
value: "1500m"
Note: For details on the value syntax used in the code, see Resource units in Kubernetes.
After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-daemonset-resources.yaml
The example file sas-workload-orchestrator-global-user-defined-resources-script-storage.yaml
mounts an NFS volume as the ‘scripts’ volume.
To update the volume, replace the {{ NFS-SERVER-ADDR }}
variable with the fully-qualified
domain name of the server and replace the {{ NFS-SERVER-PATH }}
variable with the path to
the volume on the server. Here is an example:
- op: replace
path: /spec/template/spec/volumes/0
value:
name: scripts
nfs:
path: /path/to/my/scripts
server: my.nfs.server.mydomain.com
Alternately, you could use any other type of volume Kubernetes supports.
The following example updates the volume to use a PersistentVolumeClaim instead of an NFS mount. This assumes the PVC has already been defined and created.
- op: replace
path: /spec/template/spec/volumes/0
value:
name: scripts
persistentVolumeClaim:
claimName: my-pvc-name
readOnly: true
Note: For details on the value syntax used when specifying volumes, see Kubernetes Volumes.
After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-global-user-defined-resources-script-storage.yaml
The example file sas-workload-orchestrator-host-user-defined-resources-script-storage.yaml
mounts an NFS volume as the ‘scripts’ volume.
To update the volume, replace the {{ NFS-SERVER-ADDR }}
variable with the fully-qualified
domain name of the server and replace the {{ NFS-SERVER-PATH }}
variable with the path to
the volume on the server. Here is an example:
- op: replace
path: /spec/template/spec/volumes/0
value:
name: scripts
nfs:
path: /path/to/my/scripts
server: my.nfs.server.mydomain.com
Alternately, you could use any other type of volume Kubernetes supports.
The following example updates the volume to use a PersistentVolumeClaim instead of an NFS mount. This assumes the PVC has already been defined and created.
- op: replace
path: /spec/template/spec/volumes/0
value:
name: scripts
persistentVolumeClaim:
claimName: my-pvc-name
readOnly: true
Note: For details on the value syntax used when specifying volumes, see Kubernetes Volumes.
After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-host-user-defined-resources-script-storage.yaml
The example file sas-workload-orchestrator-initial-configuration-change.yaml
changes the initial SAS Workload Orchestrator configuration to add
additional administrators.
To update the initial configuration, replace the {{ NEW_CONFIG_JSON }}
variable with the
JSON representation of the updated configuration. Here is an example:
- op: replace
path: /data/SGMG_CONFIG_JSON
value: |
{
"version" : 1,
"admins" : ["SASAdministrators","myAdmin1","myAdmin2"],
"hostTypes":
[
{
"name" : "default",
"description" : "SAS Workload Orchestrator Server Hosts on Kubernetes Nodes",
"role" : "server"
}
]
}
After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-initial-configuration-change.yaml
Note: The SAS Workload Orchestrator configuration in JSON can be exported from the Workload Orchestrator dialog in the SAS Environment Manager application, or it can be retrieved by using the workload-orchestrator plug-in for the sas-viya CLI.
The example file sas-workload-orchestrator-prometheus-gateway-url.yaml
changes the
Prometheus Pushgateway URL from the default of http://prometheus-pushgateway:9091
to the value specified by the SGMG_PROMETHEUS_PUSHGATEWAY_URL
environment variable.
To update the URL, replace the {{ PROMETHEUS_PUSHGATEWAY_URL }}
variable with the
URL where SAS Workload Orchestrator should push its metrics. Here is an example:
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: SGMG_PROMETHEUS_PUSHGATEWAY_URL
value: https://my-prometheus-pushgateway.mycompany.com
After you have edited the file, add a reference to it to the transformers
block of the base kustomization.yaml file ($deploy/kustomization.yaml
). Here is an example:
transformers:
...
- site-config/sas-workload-orchestrator/configure/sas-workload-orchestrator-prometheus-pushgateway-url.yaml
SAS Workload Orchestrator Service is an advanced scheduler that integrates with SAS Launcher. SAS recommends adding this overlay to allow the SAS Workload Orchestrator service account to retrieve the following node and pod information so that your deployment will run optimally.
Node presence. This tells us whether the node is known by the Kubernetes server. If the node information cannot be accessed, SAS Workload Orchestrator assumes the node is viable from a status point of view even though kubelet might be dead.
Node allocatable cores and memory. These are used to determine how much memory and how many cores are available to use for scheduling. If the node information cannot be accessed, SAS Workload Orchestrator only knows what the hardware contains, not what is available to Kubernetes to use for pods. Only kubelet knows those amounts.
Node labels (referred to by SAS Workload Orchestrator as ‘host properties’). If the node information cannot be accessed:
Node ‘unschedulable’ value. This is used to determine whether a node has been cordoned off so that no pods can be scheduled on it. If the node information cannot be accessed, SAS Workload Orchestrator still schedules pods to a cordoned-off node. Note: The ability for SAS Workload Manager to recognize that a node has been cordoned is new in release 2024.01.
Resources used by pods from other namespaces running on the node. This is used to reduce the cores and memory available for scheduling. If the pod information about pods from other namespaces cannot be accessed, the amount of cores and memory available for scheduling is incorrect. This causes OutOfmemory and OutOfcpu launch errors.
If you choose not to allow the ClusterRoleBinding, you must perform the following tasks:
Prevent pods from other namespaces from running on the compute nodes that are used by the current namespace. This can be done by making both of these changes:
Group hosts by host name using a regular expression instead of host properties. The cloud vendor might generate node host names based in part on the node pool name.
Close hosts in SAS Workload Orchestrator instead of using Kubernetes cordoning action.
Limit the number of pods, jobs, or both that can be started on a node so that the total pod resource requirements fit within the node’s available memory and cores.
Prepare for kubelet to stop responding when it has a problem.
Without the ability to get node labels as host properties, SAS Workload Orchestrator cannot allocate a new node from the correct node pool when a pod triggers a scale-up. As stated above, SAS Workload Orchestrator uses the host properties (that is, node labels) of the host type to be scaled to create the scaling pod’s nodeAffinity information. Without the host properties, the only label in the nodeAffinity section will be ‘workload.sas.com/class=compute’. If you have only one deployment in a cluster and only one scalable node pool for the deployment, this is not a problem. If you have multiple deployments and each deployment has a scalable host type or multiple scalable host types, this is a problem because the node information cannot be accessed.
The ClusterRole and ClusterRoleBinding are enabled by adding the file to the resources block of the base kustomization.yaml file
($deploy/kustomization.yaml
). Here is an example:
resources:
...
- sas-bases/overlays/sas-workload-orchestrator
To disable the ClusterRole and ClusterRoleBinding:
Remove sas-bases/overlays/sas-workload-orchestrator
from the resources block of the
base kustomization.yaml file ($deploy/kustomization.yaml
). This also ensures that the
ClusterRole option will not be applied in future Kustomize builds.
Perform the following command to remove the ClusterRoleBinding from the namespace:
kubectl delete clusterrolebinding sas-workload-orchestrator-<your namespace>
Perform the following command to remove the ClusterRole from the cluster.
kubectl delete clusterrole sas-workload-orchestrator
After you configure Kustomize, continue your SAS Viya platform deployment as documented.
The SAS Workload Orchestrator Service consists of a set of manager pods controlled by the sas-workload-orchestrator statefulset and a set of server pods controlled by the sas-workload-orchestrator daemonset.
This README file describes how to automatically disable (or enable) the SAS Workload Orchestrator Service by disabling (or enabling) the sas-workload-orchestrator statefulset and daemonset.
Because the SAS Workload Orchestrator Service is enabled by default, there is no action needed to automatically enable the statefulset and daemonset pods.
You can automatically disable the SAS Workload Orchestrator Service by adding a patch transformer to the main kustomization.yaml file so that no statefulset pods and no daemonset pods are created.
Note: Automatically disabling SAS Workload Orchestrator Service causes it to remain disabled even if an update is made to the deployment.
To automatically disable the service, add a reference to the disable patch
transformer file into the transformers block of the base kustomization.yaml
file ($deploy/kustomization.yaml
).
Here is an example:
transformers:
...
- sas-bases/overlays/sas-workload-orchestrator/enable-disable/sas-workload-orchestrator-disable-patch-transformer.yaml
Manually enable or disable the SAS Workload Orchestrator Service statefulset and daemonset pods by using the ‘kubectl patch’ command along with supplied patch files. There are four files, two for enabling the daemonset and statefulset, and two for disabling the daemonset and statefulset.
Since manually disabling or enabling of the SAS Workload Orchestrator Service
is done from a machine that is running kubectl with access to the cluster,
the files from $deploy/sas-bases/overlays/sas-workload-orchestrator/enable-disable
must be accessible on that machine either by mounting the overlays directory to the
machine or copying the files to the machine running the kubectl command.
Note: Manually disabling the SAS Workload Orchestrator Service is temporary. If an update is applied to the deployment, SAS Workload Orchestrator Service will be enabled again.
Both disabling and enabling manually require two kubectl patch commands, one for the sas-workload-orchestrator daemonset and one for the sas-workload-orchestrator statefulset.
Terminate the daemonset pods:
kubectl -n <namespace> patch daemonset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-daemonset-disable.yaml
Wait for daemonset pods to terminate, and then terminate the statefulset pods:
kubectl -n <namespace> patch statefulset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-statefulset-disable.yaml
Disable SAS Workload Orchestrator through the SAS Launcher:
kubectl -n <namespace> set env deployment sas-launcher -c sas-launcher SAS_LAUNCHER_SWO_DISABLED="true"
Enable the statefulset pods:
kubectl -n <namespace> patch statefulset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-statefulset-enable.yaml
Wait for both statefulset pods to become running, and then enable the daemonset pods:
kubectl -n <namespace> patch daemonset sas-workload-orchestrator --patch-file /<path>/<to>/sas-workload-orchestrator-patch-daemonset-enable.yaml
Enable SAS Workload Orchestrator through the SAS Launcher:
kubectl -n <namespace> set env deployment sas-launcher -c sas-launcher SAS_LAUNCHER_SWO_DISABLED="false"
Note: The SingleStore Operator documentation related to cluster configuration is located at the SingleStore website.
If your order includes SAS with SingleStore, the following is deployed by default:
A SingleStore cluster that has the following attributes:
The SAS Viya platform deployment includes example files to modify the configuration to suit your needs:
$deploy/sas-bases/examples/sas-singlestore/sas-singlestore-secret.yaml
is a secret generator that must be updated with deployment secrets, including the SingleStore license and the admin password.
$deploy/sas-bases/examples/sas-singlestore/sas-singlestore-cluster-config.yaml
is a patch transformer that can be used to override the default configuration of the SingleStore database created when deploying an integrated SAS Viya platform and SingleStore environment.
Recommendations for SingleStore infrastructure on Azure are located at the SingleStore System Requirements and Recommendations page. SingleStore engineers also require that you use Azure CNI as the Kubernetes network provider and Azure managed-premium storage for your storage. SingleStore also notes that some customer workloads may require Azure Ultra SSD.
Recommendations for SingleStore infrastructure on AWS are located at the AWS EC2 Best Practices.
Calico CNI is required as the Kubernetes network provider for upstream open source Kubernetes clusters.
The configuration of the SingleStore cluster is site-specific. To create a SingleStore cluster in your deployment:
Copy $deploy/sas-bases/examples/sas-singlestore
into $deploy/site-config
.
Edit $deploy/site-config/sas-singlestore/sas-singlestore-secret.yaml
as follows:
Replace {{ LICENSE-CODE }}
with your SingleStore license code.
Run the following Python commands, replacing secretpass
with your desired admin password. Paste the resulting output into the sas-singlestore-secret.yaml
file, replacing the string {{ HASHED-ADMIN-PASSWORD }}
. The hashed password contains an initial asterisk that must be included.
from hashlib import sha1
print("*" + sha1(sha1('secretpass'.encode('utf-8')).digest()).hexdigest().upper())
You can also override other cluster attributes, such as the number of leaf nodes, the storage class, or the amount of storage allocated to each node type.
In the following example, the leaf node definition in sas-singlestore-cluster-config.yaml
has been modified to create four leaf nodes each with 750 GB of storage, to use a scaling height of 1 (defined as 8 vCPU cores and 32 GB of RAM) and to use the “managed” storage class. You may also want to perform similar alterations to the aggregatorSpec. Refer to the SingleStore Cluster Scaling Document for more information.
- op: replace
path: /spec/leafSpec/count
value: 4
- op: replace
path: /spec/leafSpec/height
value: 1
- op: replace
path: /spec/leafSpec/storageGB
value: 750
- op: replace
path: /spec/leafSpec/storageClass
value: managed
To allow certain source ranges to access the load balancer, you must override the cluster attribute loadBalancerSourceRanges to configure optional firewall rules. Refer to the SingleStore Advanced Service Configuration for more information. The following examples demonstrate defining the load balancer source range for multiple IP ranges, for a single IP range, and for clearing the source ranges (an empty list):
- op: replace
path: /spec/serviceSpec/loadBalancerSourceRanges
value: [ 100.110.120.130/16, 200.210.220.230/28, {{ IP-RANGE }} ]
...
- op: replace
path: /spec/serviceSpec/loadBalancerSourceRanges
value: [ {{ IP-RANGE }} ]
...
- op: replace
path: /spec/serviceSpec/loadBalancerSourceRanges
value: []
Add the following to your base kustomization.yaml ($deploy/kustomization.yaml
) file.
Note: Ensure that the sas-bases/components/sas-singlestore component is added before any TLS components.
Note: Ensure that the site-config/sas-singlestore component is added after the sas-bases/components/sas-singlestore component.
The site-config/sas-singlestore component will merge in your license/secret.
...
components:
- sas-bases/components/sas-singlestore
- site-config/sas-singlestore
...
transformers:
- site-config/sas-singlestore/sas-singlestore-cluster-config.yaml
Determine whether you need to override the cluster OS configuration. For more information, see the README file located at $deploy/sas-bases/examples/sas-singlestore-osconfig/README.md
(for Markdown format) or at $deploy/sas-bases/docs/sas_singlestore_cluster_os_configuration.htm
(for HTML format).
If you are deploying on Red Hat OpenShift, you must apply a Security Context Constraint to a service account. For the required steps, see the README file located at $deploy/sas-bases/examples/sas-singlestore-osconfig/openshift/README.md
(for Markdown format) or at $deploy/sas-bases/docs/security_context_constraint_and_service_account_for_sas_singlestore_cluster_os_configuration.htm
(for HTML format).
$deploy/sas-bases/examples/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml
is a patch transformer that can be used to override the default OS configurations of the SingleStore database nodes used when deploying an integrated SAS Viya platform and SingleStore environment.
The sas-singlestore-osconfig.yaml
file contains OS configuration settings as recommended by the SingleStore documentation. The configuration setting min_free_kbytes
controls the amount of memory held in reserve. The default value is 658096, which is appropriate for a cluster node with about 64 GiB of memory. If your cluster nodes’ system RAM is substantially larger than that, you should set the value of min_free_kbytes to either 1% of system RAM or 4194304
(4 GiB), whichever is smaller.
To compute 1% of system RAM in kilobytes, multiply the available GiB of RAM by 1024 (MiB per GiB), by 1024 again (KiB per MiB), and by 0.01 (1%). For example, if you are running on nodes with 256 GiB of system RAM, you would calculate: 256 x 1024 x 1024 x 0.01 = 2684354
, and use that as the value for min_free_kbytes
since it is less than 4194304
.
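The following shell sketch performs the same calculation; the 256 GiB value is only an example:
# Compute min_free_kbytes as the smaller of 1% of system RAM (in KiB) and 4194304 (4 GiB)
ram_gib=256
one_percent_kib=$(( ram_gib * 1024 * 1024 / 100 ))
cap_kib=4194304
min_free_kbytes=$(( one_percent_kib < cap_kib ? one_percent_kib : cap_kib ))
echo "$min_free_kbytes"   # prints 2684354 for a 256 GiB node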
Unless directed by SAS Technical Support to modify the other configuration values, SAS recommends that you leave them unaltered.
To enable this customization:
Copy the $deploy/sas-bases/examples/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml
file to the location of your
SingleStore overlay. For example, site-config/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml
.
Modify the OS configuration values within site-config/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml
.
Add the relative path of the sas-singlestore-osconfig.yaml
file to the transformers block of the base
kustomization.yaml file ($deploy/kustomization.yaml
) before the reference to
the sas-bases/overlays/required/transformers.yaml file. Here is an example:
transformers:
...
- site-config/sas-singlestore-osconfig/sas-singlestore-osconfig.yaml
...
- sas-bases/overlays/required/transformers.yaml
...
Note: If your SAS Viya platform is not being deployed on Red Hat OpenShift, you should ignore this README file.
Similar to the way that Role Based Access Control resources control user access, administrators can use Security Context Constraints (SCCs) on Red Hat OpenShift to control permissions for pods. These permissions include actions that a pod can perform and what resources it can access. SCCs are used to define a set of conditions that a pod must run with in order to be accepted into the system.
In an OpenShift environment, each Kubernetes pod starts up with an association to a specific SCC, which limits the privileges that pod can request. An administrator configures each pod to run with a certain SCC by granting the corresponding service account for that pod access to the SCC. For example, if pod A requires its own SCC, an administrator must grant access to that SCC for the service account under which pod A is launched.
This README describes several tasks:
The service account is needed to enable the sas-singlestore-osconfig daemonset with elevated privileges to start sas-singlestore-osconfig pods. The elevated privileges are needed because the pods make changes to the host node’s operating system’s kernel parameters.
The following steps should be performed before deploying SAS Viya platform.
Apply a security context constraint on an OpenShift cluster:
The /$deploy/sas-bases/examples/sas-singlestore-osconfig/openshift
directory
contains a file to apply a security context constraint for deploying SingleStore on an
OpenShift cluster.
A Kubernetes cluster administrator should add this SCC to their OpenShift cluster prior to deploying the SAS Viya platform. Use the following command to apply the SCC.
kubectl apply -f sas-bases/examples/sas-singlestore-osconfig/openshift/sas-singlestore-osconfig-scc.yaml
Bind the security context constraint to a service account:
After the SCC has been applied, a Kubernetes cluster administrator must bind the SCC to the sas-singlestore-osconfig service account that will use it.
Use the following command. Replace the entire variable {{ NAME-OF-NAMESPACE }}, including the braces, with the Kubernetes namespace used for the SAS Viya platform.
oc -n {{ NAME-OF-NAMESPACE }} adm policy add-scc-to-user sas-singlestore-osconfig -z sas-singlestore-osconfig
Add the service account to the daemonset:
Make the following changes to the base kustomization.yaml file in the $deploy directory:
Here is an example:
resources:
- sas-bases/overlays/sas-singlestore-osconfig/openshift
transformers:
- sas-bases/overlays/sas-singlestore-osconfig/openshift/daemonset-transformer.yaml
After you revise the base kustomization.yaml file and complete all the tasks in the README files that you want, continue your SAS Viya platform deployment as documented in SAS Viya Platform: Deployment Guide.
This directory contains files to customize your SAS Viya platform deployment for SAS/ACCESS and Data Connectors. Some SAS/ACCESS products require third-party libraries and configurations. This README describes the steps necessary to make these files available to your SAS Viya platform deployment. It also describes how to set required environment variables to point to these files.
Note: If you re-configure SAS/ACCESS products after the initial deployment, you must restart the CAS server.
Before you start the deployment, collect the third-party libraries and configuration files that are required for your data sources. Examples of these requirements include the following:
When you have collected these files, place them on storage that is accessible to your Kubernetes deployment. This storage could be a mount or a storage device with a PersistentVolume (PV) configured.
SAS recommends organizing your software in a consistent manner on your mount storage device. The following is an example directory structure:
access-clients
├── hadoop
│ ├── jars
│ ├── config
├── odbc
│ ├── sql7.0.1
│ ├── gplm7.1.6
│ ├── dd7.1.6
├── oracle
├── postgres
└── teradata
Note the details of your specific storage solution, as well as the paths to the configuration files within it. You will need this information before you start the deployment.
You should also create a subdirectory within $deploy/site-config
to store your ACCESS configurations. In this documentation, we will refer to a user-created subdirectory called
$deploy/site-config/data-access
. For more information, refer to the “Directory Structure” section of the “Pre-installation
Tasks” Deployment Guide.
Use Kustomize PatchTransformers to attach the storage with your configuration files to the SAS Viya platform. Within the $deploy/sas-bases/examples/data-access
directory, there are four example files to help you with this process: data-mounts-cas.sample.yaml
, data-mounts-deployment.sample.yaml
, data-mounts-job.sample.yaml
, and data-mounts-statefulset.sample.yaml
.
Copy these four files into your $deploy/site-config/data-access
directory, removing “.sample” from the file names and making changes to each file according to your storage choice. The information should be largely duplicated across the four files, but notice that the path reference in each file is different, as well as the Kubernetes resource type that it targets.
When you have created your PatchTransformers, add them to the transformers block
in the base kustomization.yaml
file located in your $deploy
directory.
transformers:
...
- site-config/data-access/data-mounts-cas.yaml
- site-config/data-access/data-mounts-deployment.yaml
- site-config/data-access/data-mounts-job.yaml
- site-config/data-access/data-mounts-statefulset.yaml
Copy $deploy/sas-bases/examples/data-access/sas-access.properties
into your $deploy/site-config/data-access
directory. Edit the values in the $(VARIABLE) format as they pertain to your data source configuration, un-commenting them as needed. These paths refer to the volumeMount location of the storage you attached within the containers.
As an example, to configure an ODBC connection, the lines within sas-access.properties look like this:
# ODBCINI=$(PATH_TO_ODBCINI)
# ODBCINST=$(PATH_TO_ODBCINST)
# THIRD_PARTY_LIB=$(ODBC_DRIVER_LIB)
# THIRD_PARTY_BIN=$(ODBC_DRIVER_BIN)
They should be un-commented and edited to include values like this, where /access-clients is the volumeMount location defined in Attach Storage to the SAS Viya Platform:
ODBCINI=/access-clients/odbc/odbc.ini
ODBCINST=/access-clients/odbc/odbcinst.ini
THIRD_PARTY_LIB=/access-clients/odbc/lib
THIRD_PARTY_BIN=/access-clients/odbc/bin
Edit the base kustomization.yaml file in the $deploy
directory to add the following content to the configMapGenerator block, replacing $(PROPERTIES_FILE) with the relative path to your new file within the $deploy/site-config
directory.
configMapGenerator:
...
- name: sas-access-config
behavior: merge
envs:
- $(PROPERTIES_FILE)
For example,
configMapGenerator:
...
- name: sas-access-config
behavior: merge
envs:
- site-config/data-access/sas-access.properties
Also add the following reference to the transformers block of the base kustomization.yaml file. This path references a SAS file that you do not need to edit, and it will apply the environment variables in sas-access.properties
to the appropriate parts of your SAS Viya platform deployment.
transformers:
- sas-bases/overlays/data-access/data-env.yaml
SAS redistributes CData JDBC drivers for Hive, Databricks, SparkSQL, and others. When connecting to these targets, there is generally no need to configure an external JDBC driver. If you have external JDBC drivers that you want to make accessible within the SAS Viya platform, create a volumeMount location that uses the special name of /data-drivers/jdbc
. When this directory is present during deployment, then this name will be automatically appended to the Java class path used by the JDBC-based SAS/ACCESS products. See the Attach Storage to the SAS Viya Platform section for more information about creating a data mount point.
After the initial deployment of the SAS Viya platform, if you make changes to your SAS/ACCESS configuration, you should restart the CAS server. This will refresh the CAS environment and enable any changes that you’ve made.
IMPORTANT: Performing this task will cause the termination of all active connections and sessions and the loss of any in-memory data.
Set your KUBECONFIG and run the following command:
kubectl -n name-of-namespace delete pods -l app.kubernetes.io/managed-by=sas-cas-operator
You can now proceed with your deployment as described in SAS Viya Platform Deployment Guide.
Configuring ODBC connectivity to your database for the SAS Viya platform requires some or all of the following environment variables to be set. Configure these variables using the sas-access.properties
file within your $deploy/site-config
directory.
ODBCINI=$(PATH_TO_ODBCINI)
ODBCINST=$(PATH_TO_ODBCINST)
THIRD_PARTY_LIB=$(ODBC_DRIVER_LIB)
THIRD_PARTY_BIN=$(ODBC_DRIVER_BIN)
The THIRD_PARTY_LIB variable is a colon-separated set of directories where your third-party ODBC libraries are located. You must add the location of the ODBC shared libraries to this path so that drivers can be loaded dynamically at run time. This variable will be appended to the LD_LIBRARY_PATH as part of your install. If you need to set binaries on the PATH, you can also use a colon-separated set of bin directories using THIRD_PARTY_BIN.
It is possible to invoke multiple ODBC-based SAS/ACCESS products in the same SAS session. However, you must first define the driver names in a single odbcinst.ini configuration file. Also, if you decide to use DSNs in your SAS/ACCESS connections, the data sources must be defined in a single odbc.ini configuration file. You cannot pass a delimited string of files for the ODBCINST or ODBCINI environment variables. The requirement to use a single initialization file extends to any situation in which you are running multiple ODBC-based SAS/ACCESS products. Always set the ODBCINI and ODBCINST to the full paths to the respective files, including the filenames.
ODBCINI=$(ODBCINI)
ODBCINST=$(ODBCINST)
The $deploy/sas-bases/examples/data-access
directory has the odbcinst.ini and odbc.ini files included in your install. SAS recommends using these files to add additional ODBC drivers or set a DSN to ensure that you have the correct configuration for the included ODBC-based SAS/ACCESS products. It is also best to copy odbcinst.sample.ini or odbc.sample.ini from the examples directory to a location on your PersistentVolume.
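For illustration, a minimal pair of entries might look like the following sketch; the driver name, description, and library path are placeholders and must match the drivers that are actually present on your PersistentVolume:
# odbcinst.ini
[MyDriver]
Description=Example ODBC driver entry
Driver=/access-clients/odbc/lib/libexampledriver.so

# odbc.ini
[MyDSN]
Driver=MyDriver
Description=Example data source definition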
SAS/ACCESS Interface to Amazon Redshift uses an ODBC client (from Progress DataDirect), which is included in your install. By default, the Amazon Redshift connector is set up for non-encrypted DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.
In order to avoid possible connection errors with SAS/ACCESS Interface to Google BigQuery, add the following environment variable to the sas-access.properties
file within your $deploy/site-config
directory:
GOMEMLIMIT=250MiB
The Google BigQuery documentation has more information about the values that can be used for GOMEMLIMIT
.
SAS/ACCESS Interface to DB2 uses the installed DB2 client environment that must be accessible from a PersistentVolume. After the initial DB2 client setup, two directories must be created and be accessible to your SAS Viya platform cluster as a PersistentVolume. These directories contain the installed client files (e.g., /db2client) and the configured server definition files (/db2). The following steps need to be executed on the PersistentVolume.
Install the DB2 client files into a designated directory. The “/db2client” directory is used in these instructions.
This step is important. Create (or reuse) a system user that has a uid and gid value of “1001”. A specific owner and group name is not essential (“sas” is used in these instructions), but the uid and gid values need to be set to “1001”. When referenced within a PersistentVolume, these values will be mapped to the predefined “sas” user and group that the SAS Viya platform uses. Once the user is set up on the host system, you should see the expected uid and gid values using the “id” command.
> sudo groupadd -g 1001 sas
> sudo useradd -u 1001 -g 1001 sas
> id sas
uid=1001(sas) gid=1001(sas) groups=1001(sas)
export DB2_NET_CLIENT_PATH=/db2client/sqllib
# Edit $DB2_NET_CLIENT_PATH/db2profile to set the following environment variable values
# DB2DIR=/db2client/sqllib
# DB2INSTANCE=sas
# INSTHOME=/db2
source $DB2_NET_CLIENT_PATH/db2profile
export DB2_APPL_DATA_PATH=/db2
export DB2_APPL_CFG_PATH=/db2
$DB2_NET_CLIENT_PATH/bin/db2ccprf -f -t /db2
sudo chown -R sas:sas /db2client
sudo chown -R sas:sas /db2
* DB2_CLIENT_USER=sas
* DB2_CLIENT_DIR=/db2client
* DB2_CONFIGURED_DIR=/db2
* PATH_TO_DB2_LIBS=/db2client/sqllib/lib64:/db2client/sqllib/lib64/gskit:/db2client/sqllib/lib32
* PATH_TO_DB2_BIN=/db2client/sqllib/bin:/db2client/sqllib/adm:/db2client/sqllib/misc
Within your sas-access.properties file, use the 5 values above to set the following environment variables. Note that some variables are not assigned a value.
CUR_INSTHOME=
CUR_INSTNAME=
DASWORKDIR=
DB2DIR=$(DB2_CLIENT_DIR)/sqllib
DB2INSTANCE=$(DB2_CLIENT_USER)
DB2LIB=$(DB2_CLIENT_DIR)/sqllib/lib
DB2_HOME=$(DB2_CLIENT_DIR)/sqllib
DB2_NET_CLIENT_PATH=$(DB2_CLIENT_DIR)/sqllib
IBM_DB_DIR=$(DB2_CLIENT_DIR)/sqllib
IBM_DB_HOME=$(DB2_CLIENT_DIR)/sqllib
IBM_DB_INCLUDE=$(DB2_CLIENT_DIR)/sqllib/
IBM_DB_LIB=$(DB2_CLIENT_DIR)/sqllib/lib
INSTHOME=$(DB2_CONFIGURED_DIR)
INST_DIR=$(DB2_CLIENT_DIR)/sqllib
PREV_DB2_PATH=
DB2=$(PATH_TO_DB2_LIBS)
DB2_BIN=$(PATH_TO_DB2_BIN)
If you want to use SAS/ACCESS to JDBC to access your DB2 database, then copy your DB2 client installation’s JDBC driver (from $(DB2_CLIENT_DIR)/sqllib/java
) to the source location of the /data-drivers/jdbc
volumeMount. See the Specify External JDBC Drivers section for more information about creating this data mount point.
SAS/ACCESS Interface to Greenplum uses an ODBC client (SAS/ACCESS to Greenplum from Progress DataDirect), which is included in your install. By default, the Greenplum connector is set up for non-encrypted DSN-less connections. To reference a DSN, follow the ODBC configuration steps above to associate your odbc.ini file with your instance.
SAS/ACCESS Interface to Greenplum can use the Greenplum Client Loader Interface for loading large volumes of data. To perform bulk loading, the Greenplum Client Loader Package must be accessible from a PersistentVolume.
SAS recommends using the Greenplum Database parallel file distribution program (gpfdist) for bulk loading. The gpfdist binary and the temporary location gpfdist uses to write data files must be accessible from your Viya platform cluster and a secondary machine. You will need to launch the gpfdist server binary on the secondary machine to serve requests from SAS:
./gpfdist -d $(GPLOAD_HOME) -p 8081 -l $(GPLOAD_HOME)/gpfdist.log &
Within your sas-access.properties file, set the following environment variables. The $(GPLOAD_HOME) environment variable points to the directory where the external tables you want to load will reside. Note that this location must be mounted and accessible to your Viya platform cluster as a PersistentVolume, as well as the secondary machine running gpfdist.
GPHOME_LOADERS=$(PATH_TO_GPFDIST_UTILITY)
GPLOAD_HOST=$(HOST_RUNNING_GPFDIST)
GPLOAD_HOME=$(PATH_TO_EXTERNAL_TABLES_DIR)
GPLOAD_PORT=$(GPFDIST_LISTENING_PORT)
GPLOAD_LIBS=$(GPHOME_LOADERS)/lib
You must make your Hadoop JARs and configuration file available to SAS/ACCESS Interface to Hadoop on a PersistentVolume or mounted storage. After your SAS Viya platform software is deployed, set the options SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH within your SAS program to point to this location. SAS does not recommend setting these as environment variables within your sas-access.properties file, as they would then be used for any connections from your Viya platform cluster. Instead, within your SAS program, use:
options set=SAS_HADOOP_JAR_PATH=$(PATH_TO_HADOOP_JARs);
options set=SAS_HADOOP_CONFIG_PATH=$(PATH_TO_HADOOP_CONFIG);
SAS/ACCESS Interface to Impala requires the ODBC driver for Impala. The Impala ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. You must include the full path to the shared library by setting the IMPALA attribute so that the Impala driver can be loaded dynamically at run time.
The SAS Viya platform provides an internal Impala ODBC driver by default. Customers can customize this in the sas-access.properties file.
IMPALA=$(PATH_TO_IMPALA_LIBS)
SIMBAIMPALAINI=$(PATH_TO_SIMBA_IMPALA_INI)
To reference a DSN in your connection, follow the instructions in ODBC configuration.
Bulk loading with Impala is accomplished in two ways:
Use the WebHDFS interface to Hadoop to push data to HDFS. The SAS environment variable SAS_HADOOP_RESTFUL must be specified and set to the value of 1. The properties for the WebHDFS location are included in the Hadoop hdfs-site.xml file. In this case, the hdfs-site.xml file must be accessible from a PersistentVolume. Alternatively, you can specify the WebHDFS hostname or the server’s IP address where the external file is stored using the BL_HOST= and BL_PORT= options.
Configure a required set of Hadoop JAR files. The JAR files must be in a single location accessible from a PersistentVolume. The SAS environment variables SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH must be specified and set to the location of the Hadoop JAR and configuration files. For a caslib connection, the data source options HADOOPJARPATH= and HADOOPCONFIGDIR= should be used.
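For example, within a SAS program you might set these variables as follows; this mirrors the options set= form used elsewhere in this README, and the paths are placeholders:
/* Example only: enable WebHDFS and point to the Hadoop JAR and configuration locations */
options set=SAS_HADOOP_RESTFUL=1;
options set=SAS_HADOOP_JAR_PATH=$(PATH_TO_HADOOP_JARs);
options set=SAS_HADOOP_CONFIG_PATH=$(PATH_TO_HADOOP_CONFIG);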
SAS/ACCESS Interface to Informix uses an ODBC client (from Progress DataDirect) that is included in your install. By default, the Informix connector is set up for non-encrypted DSN-less connections. If you use quotation marks in your Informix SQL statements, set the DELIMIDENT attribute to DELIMIDENT=YES or Informix might reject your statements.
DELIMIDENT=$(YES_OR_NO)
To reference a DSN in your connection, follow the instructions in ODBC configuration.
You must make your JDBC client and configuration file(s) available to SAS/ACCESS Interface to JDBC on a PersistentVolume or mounted storage.
The SAS/ACCESS Interface to MongoDB requires the MongoDB C API client library (libmongoc). The MongoDB C shared library must be accessible from a PersistentVolume, and the full path to the library must be set using the MONGODB variable.
MONGODB=$(PATH_TO_MONGODB_LIBS)
SAS/ACCESS Interface to Microsoft SQL Server uses an ODBC client (from Progress DataDirect), which is included in your install. By default, the SQL Server connector is set up for non-encrypted DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.
When connecting to Microsoft Azure SQL Database or Microsoft Azure Synapse, add the option
EnableScrollableCursors=4
to your DSN configuration in the odbc.ini file, or include it in the CONNECT_STRING libname option or the CONOPTS caslib option.
Bulk loading is initiated by setting the connection option EnableBulkLoad to one.
EnableBulkLoad=1
This option can be set in your DSN (odbc.ini file) or with the CONNECT_STRING libname option for DSN-less connections. When connecting via a caslib, use the CONOPTS option for DSN-less connection.
Depending on how your database administrator has configured the SQL instance, you might need a valid truststore configured for the TLS/SSL connections to Microsoft SQL Server. Failure to specify a valid truststore may result in the following error when connecting through SAS/ACCESS:
ERROR: CLI error trying to establish connection: [SAS][ODBC SQL Server Wire Protocol driver]Cannot load trust store. SSL Error
You can specify a truststore through the DSN definition in odbc.ini or in the CONNECT_STRING libname option for DSN-less connections:
TrustStore=/security/trustedcerts.pem
You may also choose to have the ODBC client ignore the TrustStore by specifying the following option in the odbc.ini or CONNECT_STRING:
ValidateServerCertificate=0
With this option, the ODBC client does not validate the server certificate with the TrustStore contents.
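For illustration, a DSN entry in the odbc.ini file that combines the options discussed above might look like the following sketch; the driver library path, host, port, and database values are placeholders:
[sqlserver_dsn]
Driver=/access-clients/odbc/lib/sqlserverwireprotocol.so
Description=Example Microsoft SQL Server DSN
HostName=mysqlserver.example.com
PortNumber=1433
Database=mydb
EnableScrollableCursors=4
EnableBulkLoad=1
TrustStore=/security/trustedcerts.pem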
The SAS/ACCESS Interface to MySQL requires the MySQL C API client (libmysqlclient). The MySQL C API client must be accessible from a PersistentVolume, and the full path to the library must be set using the MYSQL variable.
MYSQL=$(PATH_TO_MYSQL_LIBS)
SAS/ACCESS Interface to Netezza requires the ODBC driver for Netezza. The IBM Netezza ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. The NETEZZA variable must be set to the full path of the shared library so that the Netezza driver can be loaded dynamically at run time. IBM’s Netezza client package may contain a “linux-64.tar.gz” archive which contains older files that can cause a conflict with other SAS/ACCESS products. SAS recommends that the following files and symbolic links not be included in the Netezza library path:
* libk5crypto.so.*
* libkrb5.so.*
* libkrb5support.so.*
The libcom_err.so.* files/links must be included.
NETEZZA=$(PATH_TO_NETEZZA_LIBS)
To reference a DSN in your connection, follow the instructions in ODBC configuration.
To configure your ODBC driver to work with SAS/ACCESS Interface to ODBC, follow the instructions in ODBC configuration.
SAS/ACCESS Interface to Oracle uses the Oracle Instant Client, which is included in your install. If you intend to colocate optional Oracle configuration files such as tnsnames.ora, sqlnet.ora or ldap.ora with the Oracle Instant Client, then you must make these files available on a PersistentVolume or mounted storage and set the environment variable TNS_ADMIN to the directory name where these files are located.
TNS_ADMIN=$(PATH_TO_TNS_ADMIN)
If you plan to use a different version of the Oracle Instant Client from the one provided, you must add the following Oracle properties to the sas-access.properties file and use the kustomize tool.
ORACLE=$(PATH_TO_ORACLE_LIBS)
ORACLE_BIN=$(PATH_TO_ORACLE_BIN)
The SAS/ACCESS Interface to the PI System uses the PI System Web API. No PI System client software is required to be installed. However, the PI System Web API (PI Web API 2015-R2 or later) must be installed and activated on the host machine where the user connects.
HTTPS requires an SSL (Secure Sockets Layer) certificate to authenticate with the host. Prior to the libname statement, set the location to the certificate file in a SAS session using the “option set” command. The syntax is as follows:
options set=SSLCALISTLOC "/usr/mydir/root.pem";
SAS/ACCESS Interface to PostgreSQL uses an ODBC client, which is included in your install. By default, the PostgreSQL connector is set up for DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.
The SAS/ACCESS Interface to R/3 requires the SAP NetWeaver RFC Library. The SAP NetWeaver RFC Library must be accessible from a PersistentVolume, and the full path to the library must be set using the R3 variable.
R3=$(PATH_TO_R3_LIBS)
Additional required post-installation tasks are described in Post-Installation Instructions for SAS/ACCESS 9.4 Interface to R/3.
There are no configuration steps required. SAS/ACCESS Interface to Salesforce connects to Salesforce using version 46.0 of its SOAP API.
The SAS/ACCESS Interface to SAP ASE requires the SAP ASE shared libraries. The SAP ASE shared libraries must be accessible from a PersistentVolume, and the full path to the libraries must be set using the SYBASELIBS variable. The SYBASE variable must also be set to the full path of the SAP ASE (Sybase) installation directory, and the SYBASE_BIN variable must be set to the SAP ASE installation bin directory.
SYBASE=$(PATH_TO_SAPASE_INSTALLATION_DIR)
SYBASELIBS=$(PATH_TO_SAPASE_LIBS)
SYBASE_BIN=$(PATH_TO_SAPASE_BIN_DIRECTORY)
Here are optional SAP ASE (Sybase) environment variables that you may want to consider setting:
SYBASE_OCS=$(SAPASE_HOME_DIRECTORY_NAME)
DSQUERY=$(NAME_OF_TARGET_SERVER)
The SAP ASE administrator or user must install two SAP ASE (Sybase) stored procedures on the target SAP server. These files are available in a compressed TGZ archive for download from the SAS Support site at https://support.sas.com/downloads/package.htm?pid=2458.
SAS/ACCESS Interface to SAP HANA requires the ODBC driver for SAP HANA. The SAP HANA ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. The HANA variable must be set to the full path of the shared library so that the SAP HANA driver can be loaded dynamically at run time.
HANA=$(PATH_TO_HANA_LIBS)
To configure a TLS/SSL connection to SAP HANA, two additional environment variables are required: SECUDIR and SAPCRYPTO_LIB.
SECUDIR=$(PATH_TO_HANA_SECUDIR)
SAPCRYPTO_LIB=$(PATH_TO_SAPCRYPTO_LIB)
To reference a DSN in your connection, follow the instructions in ODBC configuration.
The SAS/ACCESS Interface to SAP IQ requires the SAP IQ shared libraries. The SAP IQ shared libraries must be accessible from a PersistentVolume, and the full path to the libraries must be set using the SAPIQ variable. The IQDIR16 variable must also be set to the full path of the SAP IQ installation directory, and the SAPIQ_BIN variable must be set to the SAP IQ installation bin directory.
IQDIR16=$(PATH_TO_SAPIQ_INSTALLATION_DIR)
SAPIQ=$(PATH_TO_SAPIQ_LIBS)
SAPIQ_BIN=$(PATH_TO_SAPIQ_BIN_DIRECTORY)
There are no additional configuration steps required.
SAS/ACCESS Interface to Snowflake uses an ODBC client, which is included in your install. To reference a DSN in your connection, follow the instructions in ODBC configuration.
You must make your Hadoop JARs and configuration file available to SAS/ACCESS Interface to Spark on a PersistentVolume or mounted storage. After your SAS Viya platform software is deployed, set the options SAS_HADOOP_JAR_PATH and SAS_HADOOP_CONFIG_PATH within your SAS program to point to this location. SAS does not recommend setting these as environment variables within your sas-access.properties file, as they would then be used for any connections from your Viya platform cluster. Instead, within your SAS program, use:
options set=SAS_HADOOP_JAR_PATH=$(PATH_TO_HADOOP_JARs);
options set=SAS_HADOOP_CONFIG_PATH=$(PATH_TO_HADOOP_CONFIG);
SAS redistributes the CData JDBC driver for Databricks, so there is no need to configure an external JDBC driver.
SAS supports bulk loading to Databricks via ADLS when running on Azure. Refer to SAS/ACCESS to Spark documentation for more information on Azure bulk loading to Databricks.
SAS/ACCESS Interface to Teradata requires the Teradata Tools and Utilities (TTU) shared libraries. The TTU libraries must be accessible from a PersistentVolume, and the TERADATA variable must be set to the full path of the TTU libraries.
The SAS Viya platform provides an internal Teradata client by default, but it does not contain TD Wallet. Customers who want to use TD Wallet can customize this in the sas-access.properties file.
TERADATA=$(PATH_TO_TERADATA_LIBS)
Ensure that the Teradata client encoding is set to UTF-8 in the clispd.dat file. The two lines in the clispd.dat file that need to be set are:
charset_type=N
charset_id=UTF8
Set the COPLIB environment variable to the location of the updated clispd.dat file.
COPLIB=$(TERADATA_COPLIB)
SAS/ACCESS Interface to Vertica requires the ODBC driver for Vertica. The Vertica ODBC driver is an API-compliant shared library that must be accessible from a PersistentVolume. The VERTICA variable must be set to the full path of the shared library so that the Vertica driver can be loaded dynamically at run time. Also, the VERTICAINI attribute must be set to point to the vertica.ini file on your PersistentVolume. The SAS Viya platform provides an internal Vertica ODBC driver by default. Customers can customize this in the sas-access.properties file.
VERTICA=$(PATH_TO_VERTICA_LIBS)
VERTICAINI=$(PATH_TO_VERTICA_ODBCINI)
Also, the driver manager encoding defined in the vertica.ini file should be set to UTF-8.
DriverManagerEncoding=UTF-8
To reference a DSN in your connection, follow the instructions in ODBC configuration.
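For illustration, the relevant portion of a vertica.ini file would contain the following; additional driver settings may be required, as described in the Vertica client documentation:
[Driver]
DriverManagerEncoding=UTF-8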
SAS/ACCESS Interface to Yellowbrick uses an ODBC client, which is included in your install. By default, the Yellowbrick connector is set up for DSN-less connections. To reference a DSN, follow the ODBC configuration steps to associate your odbc.ini file with your instance.
SAS/ACCESS Interface to Yellowbrick can use the Yellowbrick bulk loader (ybload) and bulk unloader (ybunload) to move large volumes of data. To perform bulk loading, set the following data set options:
BULKLOAD=YES
BL_YB_PATH='path-to-tool-location'
These tools must be accessible from a PersistentVolume.
The publishDCServices key enables network connections between CAS and supported databases, such as Teradata and Hadoop, to transfer data in parallel between the database nodes and CAS nodes. Parallel data transfer is a functionality provided by the SAS Data Connector Accelerator for Hadoop or Teradata products.
Edit the base kustomization.yaml
file in your $deploy
directory to add the following lines.
transformers:
...
- sas-bases/overlays/data-access/enable-dc-ports.yaml
The publishEPCSService key enables the execution of the SAS Embedded Process for Spark Continuous Session (EPCS) in the Kubernetes cluster. EPCS is an instantiation of a long-lived SAS Embedded Process session on a cluster that can serve one CAS session. It provides tight integration between CAS and Spark by processing multiple execution requests without having to start and stop the SAS Embedded Process for Spark every time an execution request is made.
Users can improve system performance by using the EPCS and the SAS Data Connector to Hadoop to perform multiple actions within the same CAS session. Users can also use the EPCS to run models in Spark.
Edit the base kustomization.yaml
file in your $deploy
directory to add the following lines.
transformers:
...
- sas-bases/overlays/data-access/enable-epcs-port.yaml
For information about PersistentVolumes, see Persistent Volumes.
This README describes how to customize your SAS Viya platform deployment to use SAS/CONNECT Spawner.
SAS provides example and overlay files for customizations. Read the descriptions of the available tasks in the following sections. If you want to perform a task to customize your deployment, follow the instructions in that section.
Perform these steps if cloud native mode should be disabled in your environment.
Add the following code to the configMapGenerator block of the base kustomization.yaml file:
```
...
configMapGenerator:
...
- name: sas-connect-spawner-config
behavior: merge
literals:
- SASCLOUDNATIVE=0
...
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
Perform these steps if SSSD is required in your environment.
Add sas-bases/overlays/sas-connect-spawner/add-sssd-container-transformer.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
Important: This line must come before any network transformers (that is, transformers starting with "- sas-bases/overlays/network/") and the required transformer "- sas-bases/overlays/required/transformers.yaml". Note that your configuration may not have network transformers if security is not configured.
Here is an example for Full-stack TLS. If you are using a different version of TLS, or no TLS at all, the network transformers may be different or not present.
```
...
transformers:
...
- sas-bases/overlays/sas-connect-spawner/add-sssd-container-transformer.yaml
# The following lines are provided as a location reference, they should not be added if they don't appear.
- sas-bases/overlays/network/ingress/security/transformers/product-tls-transformers.yaml
- sas-bases/overlays/network/ingress/security/transformers/ingress-tls-transformers.yaml
- sas-bases/overlays/network/ingress/security/transformers/backend-tls-transformers.yaml
# The following line is provided as a location reference, it should appear only once and not be duplicated.
- sas-bases/overlays/required/transformers.yaml
...
```
Use these steps to provide a custom SSSD configuration to handle user authorization in your environment.
Copy the $deploy/sas-bases/examples/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml file to $deploy/site-config/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml.
Modify the copied file according to the comments in it.
Add site-config/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml and sas-bases/overlays/sas-connect-spawner/ext-sssd-volume-transformer.yaml to the transformers block of the base kustomization.yaml file ($deploy/kustomization.yaml).
Here is an example:
```
...
transformers:
...
- site-config/sas-connect-spawner/external-sssd-config/add-sssd-configmap-transformer.yaml
- sas-bases/overlays/sas-connect-spawner/ext-sssd-volume-transformer.yaml
...
```
Copy your custom SSSD configuration file to $deploy/site-config/sas-connect-spawner/external-sssd-config/sssd.conf. (A generic skeleton is shown after these steps.)
Add the following code to the secretGenerator block of the base kustomization.yaml file:
```
...
secretGenerator:
...
- name: sas-sssd-config
files:
- SSSD_CONF=site-config/sas-connect-spawner/external-sssd-config/sssd.conf
type: Opaque
...
```
Deploy the software using the commands in SAS Viya Platform: Deployment Guide.
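For reference, a minimal generic sssd.conf skeleton is shown below. This is not SAS-provided content; the domain, provider, and LDAP values are hypothetical placeholders that must be replaced with the settings required by your identity provider:
```
[sssd]
config_file_version = 2
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com:636
ldap_search_base = dc=example,dc=com
```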
LoadBalancer assigns an IP address for the SAS/CONNECT Spawner and allows the standard port number to be used.
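For orientation only (this is not the content of the SAS-provided example file), a Kubernetes Service of type LoadBalancer generally has the following shape; the name, selector, and port values are hypothetical:
```
apiVersion: v1
kind: Service
metadata:
  name: sas-connect-spawner-loadbalancer   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: sas-connect-spawner                # hypothetical selector
  ports:
    - name: spawner
      port: 17551                            # hypothetical; use the value from the SAS-provided file
      targetPort: 17551
```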
Copy the $deploy/sas-bases/examples/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-loadbalancer.yaml file to $deploy/site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-loadbalancer.yaml.
Modify the copied file according to the comments in it.
Add a reference to the copied file to the resources block of the base kustomization.yaml file ($deploy/kustomization.yaml). Here is an example:
```
...
resources:
...
- site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-loadbalancer.yaml
...
```
Deploy the software as described in SAS Viya Platform: Deployment Guide.
Refer to External Client Sign-On to TLS-Enabled SAS Viya SAS/CONNECT Spawner when LoadBalancer is configured.
NodePort assigns a port and routes traffic from that port to the SAS/CONNECT Spawner. A value can be selected from the allowed nodePort range and assigned in the yaml. This assignment prevents the SAS/CONNECT Spawner from starting if the selected port is already in use or is outside the allowable nodePort range.
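Again for orientation only (not the SAS-provided file), a NodePort Service typically looks like the following. In a default Kubernetes configuration the allowable nodePort range is 30000-32767; all values here are hypothetical:
```
apiVersion: v1
kind: Service
metadata:
  name: sas-connect-spawner-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: sas-connect-spawner            # hypothetical selector
  ports:
    - name: spawner
      port: 17551                        # hypothetical
      targetPort: 17551
      nodePort: 30551                    # must be within the cluster's allowed nodePort range
```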
Copy the $deploy/sas-bases/examples/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-nodeport.yaml file to $deploy/site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-nodeport.yaml.
Modify the copied file according to the comments in it.
Add a reference to the copied file to the resources block of the base kustomization.yaml file. Here is an example:
```
...
resources:
...
- site-config/sas-connect-spawner/enable-external-access/sas-connect-spawner-enable-nodeport.yaml
...
```
Deploy the software as described in SAS Viya Platform: Deployment Guide.
Refer to External Client Sign-On to TLS-Enabled SAS Viya SAS/CONNECT Spawner when NodePort is configured.
For more information about configurations and using example and overlay files, see SAS Viya Platform: Deployment Guide.
SAS RFC Solution Workloads provides dynamic management of SAS Business Orchestration projects. It passes messages from a client library through brokers to a Kubernetes controller. This enables dynamic deployment and management of projects without administrator intervention.
SAS RFC Solution Workloads requires a running RabbitMQ broker. The instructions in this README describe how to configure the software.
SAS RFC Solution Workloads needs to be configured if you are deploying SAS Business Orchestration projects from the UI application directly.
To configure SAS RFC Solution Workloads as part of the initial deployment of SAS Viya platform:
Copy the files in the $deploy/sas-bases/examples/sas-rfc-solution-workloads/install
directory to the $deploy/site-config/sas-rfc-solution-workloads/install
directory. Create the destination directory if it does not exist.
If you are installing SAS RFC Solution Workloads with SAS Viya platform, add $deploy/site-config/sas-rfc-solution-workloads/install
to the resources block of the base kustomization.yaml file. Here is an example:
resources:
...
- site-config/sas-rfc-solution-workloads/install
...
Update the $deploy/site-config/sas-rfc-solution-workloads/install/kustomization.yaml
file by replacing the variables with the appropriate values for secrets.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/namespace.yaml
file by replacing the {{ NAMESPACE }} value.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/settings.properties
file by replacing the variables with the appropriate values for settings properties.
Specifying an ingress host creates ingress objects for the workload deployments. Administrators must patch those ingresses or create their own for TLS support.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/runtime.properties
file by replacing the variables with the appropriate values for runtime properties.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/runtime-secrets.properties
file by replacing the variables with the appropriate values for runtime secrets.
Review the $deploy/site-config/sas-rfc-solution-workloads/install/rbac.yaml file to ensure that the role-based access controls are acceptable.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/deployment.yaml
file as instructed by the comments in the file.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/image-pull-secrets.yaml
file by replacing the variables with the appropriate values for the imagePullSecrets.
The imagePullSecret can be found using the SAS Viya platform Kustomize build command:
```shell
kustomize build . > site.yaml
grep '.dockerconfigjson:' site.yaml
```
Alternatively, if SAS Viya platform has already been deployed, the imagePullSecret can be found with the kubectl command:
```shell
kubectl -n {{ NAMESPACE }} get secret --field-selector=type=kubernetes.io/dockerconfigjson -o yaml | grep '.dockerconfigjson:'
```
The output is .dockerconfigjson: <SECRET>. Replace the {{ IMAGE_PULL_SECRET }} variables with the <SECRET> value returned by the command above.
Replace the {{ NAMESPACE }} values.
Update the $deploy/site-config/sas-rfc-solution-workloads/install/validate-properties.json
file by adding the rules to be enforced.
Administrators can enforce values that are specified in the project YAML file by using an accept list. This feature can be used to lock down ports, file paths, or any other property values. The example below shows how to restrict logging levels in components to INFO or DEBUG.
```json
{
"rules": [
{
"key": "level",
"acceptList": [
"I.*",
"D.*"
],
"isKeyRegex": false,
"isValueRegex": true
},
{
"key": "workloads[0].flows[0].processors[0].log.level",
"acceptList": [
"INFO",
"DEBUG"
],
"isKeyRegex": false,
"isValueRegex": false
},
{
"key": "leve.*",
"acceptList": [
"INFO",
"DEBUG"
],
"isKeyRegex": true,
"isValueRegex": false
},
{
"key": "level",
"acceptList": [
"INFO",
"DEBUG"
],
"isKeyRegex": false,
"isValueRegex": false
}
]
}
```
Alternatively, SAS RFC Solution Workloads can be installed separately from the SAS Viya platform. Complete steps 1-10 in Configure with Initial SAS Viya Platform Deployment. Then complete the following steps:
Update the image values that are contained in the $deploy/site-config/sas-rfc-solution-workloads/install/deployment.yaml
file.
In that file, revise the value “sas-rfc-solution-workloads” to include the registry server, relative path, name, and tag. The registry server and relative path are the same as for other SAS Viya platform images.
The name of the container is ‘sas-rfc-solution-workloads’. The registry relative path, name, and tag values are found in the sas-components-* configmap in the SAS Viya deployment.
Perform the following commands to determine the appropriate information. When you have the information, add it to the appropriate places in the deployment.yaml file.
# generate the site.yaml file
kustomize build -o site.yaml
# get the sas-rfc-solution-workloads registry information
cat site.yaml | grep 'sas-rfc-solution-workloads:' | grep -v -e "VERSION" -e 'image'
# manually update the sas-rfc-solution-workloads-controller images using the gathered information, in the form: <container registry>/<container relative path>/sas-rfc-solution-workloads:<container tag>
# apply the site.yaml file
kubectl apply -f site.yaml
Perform the following commands to get the required information from a running SAS Viya platform deployment.
# get the registry server, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
kubectl -n {{ NAMESPACE }} get deployment sas-readiness -o yaml | grep -e "image:.*sas-readiness" | sed -e 's/image: //g' -e 's/\/.*//g' -e 's/^[ \t]*//'
<container registry>
# get registry relative path and tag, kubectl needs to point to the SAS Viya Platform deployment namespace, and replace {{ NAMESPACE }} with the namespace value
CONFIGMAP="$(kubectl -n {{ NAMESPACE }} get cm | grep sas-components | tr -s ' ' | cut -d ' ' -f1)"
kubectl -n {{ NAMESPACE }} get cm "$CONFIGMAP" -o yaml | grep 'sas-rfc-solution-workloads:' | grep -v "VERSION"
SAS_COMPONENT_RELPATH_sas-business-orchestration-worker: <container relative path>/sas-rfc-solution-workloads
SAS_COMPONENT_TAG_sas-rfc-solution-workloads: <container tag>
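Putting the pieces together, the image reference in the deployment.yaml file takes the form <container registry>/<container relative path>/sas-rfc-solution-workloads:<container tag>. Here is a hedged sketch with entirely hypothetical values:
```
containers:
  - name: sas-rfc-solution-workloads
    # Hypothetical values only; substitute the registry, relative path, and tag
    # returned by the commands above.
    image: registry.example.com/viya-4/sas-rfc-solution-workloads:1.0.0-20250101.1735689600000
```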
Perform the following commands:
kustomize build $deploy/site-config/sas-rfc-solution-workloads/install > sas-rfc-solution-workloads.yaml
kubectl apply -f sas-rfc-solution-workloads.yaml
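As an optional, hedged verification step (not part of the documented procedure), you can confirm that the workloads pods started:
```shell
# Replace {{ NAMESPACE }} with your deployment namespace.
kubectl -n {{ NAMESPACE }} get pods | grep sas-rfc-solution-workloads
```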
Run the SAS AML Provisioning Job after a SAS Viya platform deployment is complete. The SAS Anti-Money Laundering onboarding process determines which onboarding steps need to be performed and runs them. The provisioning job logs its progress in the internal SAS Viya database and, if any errors occur, resumes from the last completed task on restart.
If there is a misconfiguration or if there are changes in the configuration that are external to the SAS Viya platform, administrators can specify which steps to run.
For instructions and information about SAS Anti-Money Laundering provisioning and configuration, see: https://documentation.sas.com/?softwareId=compcdc&softwareVersion=viya4&softwareContextId=provisioning.