
Learning to deploy Documentum on Kubernetes
A tutorial using Docker Desktop Kubernetes
José María Sotomayor

1
Presentation

With this tutorial we are providing you with:
• Instructions on how to have your own Kubernetes cluster on your laptop.
• A D2 Helm Chart already configured. You can deploy a complete Documentum stack with a single command.
• As much practical information as possible about how this is done, so you can use it as a foundation for further learning.

Benefits:
• Quickly deploy the 23.2 Documentum stack with a single command.
• Learn how you can do it yourself in the future, if needed.
• Familiarize yourself with Kubernetes technology.
• No need to pay for a cloud Kubernetes service just for learning purposes.
• Quickly deploy new versions of the Documentum stack as soon as they are released.

Slides go straight to the point, but I’ve added extensive footnotes with additional information when possible.

OpenText ©2023 All rights reserved 2

2
Steps

Preparing the environment:
1. Have Docker Desktop running with Kubernetes enabled.
2. Configure WSL to use all CPUs and 16GB RAM.
3. Install Helm with Chocolatey.

Preparing Docker images:
4. Download software images.
5. Create a Docker image with the D2-Base configuration.

Deploying the provided* charts AS-IS:
6. Deploy NGINX ingress to the cluster.
7. Create a namespace called “d2”.
8. Deploy the Helm Chart.

* In this tutorial we will refer to the Helm charts already configured and ready to be deployed as “the provided” charts, as opposed to “the original” charts, that is, the charts as you would download them from OpenText Support.

OpenText ©2023 All rights reserved 3

3
Getting the tutorial package

• Download the file named “Tutorial K8s D2 23_2.7z” accompanying this tutorial and extract it.
• The package contains three elements:
  • “d2” folder – the 23.2 Helm chart already configured to be deployed in your Kubernetes cluster with a single command. We will refer to it as “the provided” charts, for the purpose of this tutorial.
  • “d2-base” folder – contains some artifacts required to deploy the D2 Base configuration as part of your deployment.
  • get_images.cmd – a very simple script that pulls the required Docker images from OpenText’s Docker registry.

OpenText ©2023 All rights reserved 4

4
Preparing the environment
Setting up your laptop to run Kubernetes using Docker Desktop.

OpenText ©2023 All rights reserved 5

5
Installing Docker Desktop with Kubernetes Support (1)

• Docker Desktop has an option to enable Kubernetes. It deploys a complete k8s cluster on your laptop.
• A prerequisite for this Kubernetes flavour is to have WSL2 enabled. Complete instructions can be found here: https://learn.microsoft.com/en-us/windows/wsl/install
• In most cases, opening an elevated Powershell and issuing the command wsl --install will suffice (you may need to enable the feature first under “Add Windows Features”).
• Instructions to install Docker Desktop can be found here: https://docs.docker.com/desktop/install/windows-install/
• No need to install additional Linux distros. Docker Desktop will install the required ones.

OpenText ©2023 All rights reserved 6

6
Installing Docker Desktop with Kubernetes Support (2)

• Once it’s installed and running, click Settings, select “Kubernetes” in the left pane and click “Enable Kubernetes”. You should see both Docker and k8s in green after a while.
• You can double-check your cluster is running with the kubectl get nodes command (a sample of the expected output is shown below):
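The original slide showed the output as a screenshot. On a Docker Desktop cluster the single node is named docker-desktop, and the output should look roughly like this (AGE will differ; v1.25.4 is the version shipped with Docker Desktop 4.17.1 mentioned later in this tutorial):

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   2d    v1.25.4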

OpenText ©2023 All rights reserved 7

7
Configuring WSL for additional CPU & RAM
• The default CPU and RAM resources of WSL2 are not enough for deploying the Documentum platform.
• We recommend a host system with 32GB RAM, so at least 16GB can be easily assigned to WSL2’s VM.
• Shut down Docker Desktop.
• Shut down WSL2 by issuing wsl --shutdown
• Create a new file under C:\Users\<your_user_name> called .wslconfig (dot wslconfig) with the following content (copy and paste from the slide notes below):
• Next time you launch Docker Desktop, the new values (all vCPUs & 16GB RAM) will be picked up.
OpenText ©2023 All rights reserved 8

----------------Source for .wslconfig--------------------------

# Settings apply across all Linux distros running on WSL 2
[wsl2]

# Limits VM memory to use no more than 16 GB; this can be set as whole numbers using GB or MB
memory=16GB

# Comment out (or do not include) the following line so the VM uses all available Logical Processors
#processors=6

8
Installing Helm with Chocolatey

• Helm will be used to automatically deploy our workload to the k8s cluster using Helm charts.
• The easiest way to install Helm on Windows is via the Chocolatey package manager.
• Chocolatey can be downloaded from: https://chocolatey.org/install
• Once Chocolatey is installed, issue the following command from an elevated Powershell:
• choco install kubernetes-helm
• Deploy mysql to your cluster to test that Helm is working as expected:
• helm repo add bitnami https://charts.bitnami.com/bitnami
• helm repo update
• helm install bitnami/mysql --generate-name
• Test that mysql is running by issuing:
• kubectl get pods
• Once it is working, uninstall it to save resources (see the commands below):
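The uninstall command was shown as a screenshot in the original slide. A minimal sketch, assuming the release name was auto-generated by --generate-name: list the releases first, then uninstall the mysql one by name.

helm list
helm uninstall <mysql_release_name>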

OpenText ©2023 All rights reserved 9

9
Preparing Docker images
Downloading Documentum binary images and creating an image
with the D2 Base configuration

OpenText ©2023 All rights reserved 10

10
Downloading software images

• Binaries are downloaded as Docker images from OpenText’s Docker registry.
• Log in to OpenText’s Docker registry with your OpenText username and password:
docker login registry.opentext.com
• Pull the images from the registry. For your convenience, use a cmd script such as the one in the slide notes below.
• This operation may take quite some time (only once), depending on your internet connection.
• A sample script, valid for 23.2, is provided (get_images.cmd). Not all images are used.
OpenText ©2023 All rights reserved 11

The downloading (pulling) process will take some hours, but you can leave it totally unattended. Compare this to downloading every single binary installer one by one from OpenText Support.
The Postgres database is pulled directly from Docker Hub (no need to be logged in). It is tagged as registry.opentext.com/cs/pg for convenience (there’s a variable in the Helm chart for the repository name – registry.opentext.com – and it has been historically named cs/pg by Engineering).

----------------------------Source for get_images.cmd:-------------------------------

docker pull postgres:15.1
docker tag postgres:15.1 registry.opentext.com/cs/pg:15.1
docker pull registry.opentext.com/dctm-d2pp-classic-ol:23.2
docker pull registry.opentext.com/dctm-d2pp-config-ol:23.2
docker pull registry.opentext.com/dctm-d2pp-installer-ol:23.2
docker pull registry.opentext.com/dctm-d2pp-rest-ol:23.2
docker pull registry.opentext.com/dctm-d2pp-smartview-ol:23.2
docker pull registry.opentext.com/dctm-d2pp-ijms-ol:23.2
docker pull registry.opentext.com/dctm-tomcat:23.2
docker pull registry.opentext.com/dctm-server:23.2
docker pull registry.opentext.com/dctm-xcp-installer:23.2
docker pull registry.opentext.com/dctm-xcp-apphost:23.2
docker pull registry.opentext.com/dctm-workflow-designer:23.2
docker pull registry.opentext.com/otds-server:23.1.1
docker pull registry.opentext.com/dctm-rest:23.2
docker pull registry.opentext.com/dctm-xplore-indexserver:22.1.2
docker pull registry.opentext.com/dctm-xplore-indexagent:22.1.2
docker pull registry.opentext.com/dctm-xplore-cps:22.1.2
docker pull registry.opentext.com/dctm-admin:23.2
docker pull registry.opentext.com/dctm-content-connect:23.2
docker pull registry.opentext.com/dctm-content-connect-dbinit:23.2
docker logout

11
Downloaded images are in your local Docker registry

• Docker keeps all downloaded images in a local registry.
• Select “Images” in the Docker Desktop app to see them.
• Filter by “opentext” to see all the images you’ve downloaded.

OpenText ©2023 All rights reserved 12

12
Creating a Docker image with the D2-Base configuration

• You need to create a Docker image containing the D2-Base application DAR and ZIP files.
• The provided Helm charts are already configured to use it.
• In Powershell, cd to the directory named “d2-base”. Inside there’s a Dockerfile that will create the image for you.
• Issue the following command to create the Docker image (one line, the dot at the end must remain):
docker build -f Dockerfile -t d2customdar:latest --build-arg CUSTOM_FILE_NAME=D2-Base-Export-Config.zip --build-arg CUSTOM_DAR_FILE=D2-Base.dar --no-cache .
• The provided Helm charts are already configured to leverage the image and deploy the configuration.

OpenText ©2023 All rights reserved 13

Why is this step needed?

When the CS and D2-Config pods boot up, several initialization and installation scripts are run. Some of these scripts are in charge of deploying DAR files and D2 configurations, and they expect to find the DAR and config files in some preconfigured directories. But obviously, OpenText ships the Docker images without our configurations. How can we include them?
We create this small Docker image containing (mostly) just our DAR and ZIP files.
During deployment, we create a container from our image and make it part of the CS and D2-Config pods. This way, we mount the directories containing our files inside the original pod’s filesystem, so the scripts can pick them up for installation.

-------Source for Dockerfile --------

FROM busybox:1.28
ARG CUSTOM_FILE_NAME
ARG CUSTOM_DAR_FILE
RUN adduser -D -H dmadmin && \
    mkdir -p /opt/D2-install/custom && \
    chown -R dmadmin:dmadmin /opt/D2-install/custom
COPY --chown=dmadmin:dmadmin $CUSTOM_FILE_NAME /opt/D2-install/custom/
COPY --chown=dmadmin:dmadmin $CUSTOM_DAR_FILE /opt/D2-install/custom/
CMD sh

13
Deploying the provided chart AS-IS
All you need is love Helm

OpenText ©2023 All rights reserved 14

14
Deploying NGINX ingress to the cluster

• Docker Desktop’s Kubernetes doesn’t provide an ingress service by default.
• For POC/Demo purposes, a regular NGINX service will suffice (no need for NGINX+).
• To deploy the NGINX service, issue this command (one line):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/cloud/deploy.yaml
• This version is proven to work with the 23.2 stack and k8s version 1.25.4 (as installed by Docker Desktop v. 4.17.1). In the future you may need to find a more recent NGINX version.
• You can test that your ingress controller is working, for example with the commands shown below.
• Important: see the slide notes below.
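The original slide showed the verification command as a screenshot. A minimal sketch, assuming the default deployment created by the manifest above (which installs everything into the ingress-nginx namespace): check that the controller pod is running and that its service exists.

kubectl get pods -n ingress-nginx
kubectl get services -n ingress-nginx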


OpenText ©2023 All rights reserved 15

By default NGINX is configured to accept request body sizes of maximum 1MB. This means that if you try to upload to D2 a document bigger than this size, an error will occur.
Please edit values.yaml and modify the dctm-ingress definition to include the annotations shown below (highlighted in bold in the original slide) before deploying D2’s Helm chart. The line is around 1092.

dctm-ingress:
  enabled: true
  #prefix for the ingress name
  ingressPrefix: dctm
  ingress:
    #No need to configure host: and clusterDomainName: if configureHost is false.
    configureHost: true
    #Domain name of the ingress controller in the cluster namespace
    host: dctm-ingress
    clusterDomainName: *ingress_domain
    #To accommodate cluster 1.22
    class: nginx
    #annotations for the ingress object
    #annotations: {}
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: 50m

15
Creating a namespace for D2

• In k8s, a namespace acts as a sort of logical cluster. We will create one for our D2 deployment.
• Issue the following command:
kubectl create namespace d2
• Check that your new namespace has been created with:
kubectl get namespaces

• Note that the provided Helm charts assume that you have created the “d2” namespace.
• From now on, when you issue kubectl and Helm commands, you’ll have to append the -n d2 switch, to specify you are working inside the “d2” namespace, e.g. as shown below.
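The original slide showed the example as a screenshot. A minimal illustration of the -n switch, using a command that appears later in this tutorial:

kubectl get pods -n d2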

OpenText ©2023 All rights reserved 16

16
Deploying the Helm chart

• If you have completed the following prerequisites, you can deploy the provided Helm chart without any modification:
  • Have Docker Desktop running with Kubernetes enabled.
  • Configured WSL to use 16GB RAM and all vCPUs.
  • Installed Helm.
  • Downloaded the software Docker images.
  • Created a Docker image with the D2-Base configuration.
  • Deployed NGINX.
  • Created the “d2” namespace.
• In an elevated Powershell, cd to the directory called “d2” (it’s the directory containing the file “values.yaml”).
• Issue the following command (single line):
helm install d2 . --values=dockerimages-values.yaml --values=d2-resources-values-test-small.yaml --namespace d2
• This command will deploy a complete Documentum 23.2 stack automatically.
OpenText ©2023 All rights reserved 17

helm install d2 . : install the chart found in the current directory (.) and deploy it with the name “d2”.
--values=dockerimages-values.yaml --values=d2-resources-values-test-small.yaml : these files provide information about which Docker images to use and sizing information. Note that values.yaml is also picked up by default. You have more information about these files in the following slides.
--namespace d2 : specifies that elements must be deployed into the d2 namespace.

17
Monitoring the Helm chart deployment (1) – Tips

• The first time you deploy this chart it can take between one and two hours, depending on your laptop specs, especially for the content server (but once it’s installed, subsequent restarts will take no more than 10 minutes).
• As a rule of thumb, pods are synchronized during deployment, and they patiently wait for their dependencies, i.e. the Content Server will wait for the docbroker to become alive, D2 will wait for the content server, etc.
• As soon as you launch the Helm chart, start issuing kubectl get pods -n d2 commands every 10 seconds or so. You should see their status going into “Running”. The most common error you may find is “ImagePullBackOff”, meaning that a Docker image can’t be found. Double check that you’ve included all the correct names in dockerimages-values.yaml. In theory, if you used all the provided files without changes, no errors of this type are expected.
• If you experience these errors, it’s better to cancel the deployment, correct the issue in the yaml files, and redeploy until you see all pods going happily into “Running” status.
• To cancel and erase the deployment (everything inside the d2 namespace):
  • helm uninstall d2 -n d2
  • kubectl delete pvc --all -n d2 (wait a little after this command so all bound PVs are deleted)

OpenText ©2023 All rights reserved 18

18
Monitoring the Helm chart deployment (2) - Tips

• Even if pods are in “Running” status, they won’t become available until they show “1/1” under the “Ready” column.
• Each pod will periodically test itself until its liveness and readiness probes are positive. At this point it will appear as really available to the rest of the pods via its exposed services.
• Be patient, as some of these probes are tested several minutes apart, especially during the first installation.
• Especially during the first installation, you may see pods restarting themselves after several minutes. This is expected behaviour. If their dependencies take time to be ready, they restart hoping for these dependencies to be available next time. Eventually they will be.
• See the next slide for several commands that you may use to obtain more details during deployment.
• Be aware that, if you use “describe” commands, you will be able to read errors at the end of the output. Note that the problem may have disappeared even if there’s no “Problem fixed” message.

OpenText ©2023 All rights reserved 19

19
Monitoring the Helm chart deployment (3) - Commands

You can use the following commands to monitor how things are going with your deployment.
• kubectl get pods -n d2
Provides information about pods’ status. You can “get” anything else (services, nodes, ingress…)
• kubectl logs <pod name> -n d2
Provides the log for a given pod.
• kubectl describe pod <pod name> -n d2
Provides detailed information about the pod configuration, detailed readiness and liveness information, etc. You can “describe” anything else (services, nodes, ingress…)
• kubectl exec --stdin --tty -n d2 <pod name> -- /bin/bash
Opens a Linux shell into a pod so you can browse for additional information. Most of the platform files are typically located under /opt/dctm and /opt/dctm_docker. From here you can cat the docbase log or any other file of your interest.

OpenText ©2023 All rights reserved 20

With “kubectl get” and “kubectl describe” you can obtain information about any component. For instance:

kubectl get services -n d2 : list all deployed services
kubectl get ingress -n d2 : list all deployed ingresses
kubectl get pvc -n d2 : list all persistent volume claims

kubectl describe service <service name> -n d2 : describe the service called <service name>
kubectl describe ingress <ingress name> -n d2 : describe the ingress called <ingress name>

20
Monitoring the Helm chart deployment (4) - Commands

• kubectl logs <pod name> -n d2
Prints the log of the specified pod. Add a -f switch to tail the log.
• kubectl get deployments -n d2
Shows all deployments in the specified namespace (d2).
• kubectl get statefulsets -n d2
Shows all statefulsets in the specified namespace (d2).

OpenText ©2023 All rights reserved 21


21
Checking all pods are launched correctly

Screenshot captions (left to right)*: Some pods are running, some initializing, only postgres is ready. More pods are reaching the Running state… Finally everyone is up and running.

Until the docbase initialization process is complete, CS (dcs-pg-0) won’t be ready and most of the pods will restart from time to time, until CS is ready. This may take more than 90 minutes to finish. When all pods are 1/1 Ready and in Running status, the deployment has been completed and we can access our apps.

* These screenshots have been taken in different sessions.

OpenText ©2023 All rights reserved 22

22
Inspecting correct docbroker initialization

OpenText ©2023 All rights reserved 23

23
Inspecting correct Content Server Initialization

OpenText ©2023 All rights reserved 24

24
Inspecting docbase initialization

The CS pod will show as “Ready 0/1” for a while because the docbase installation keeps going. This is where most of the deployment time goes (+1 hour).

You can cat the log every few minutes, or tail it, to inspect how the docbase initialization progresses. Lunch time!

OpenText ©2023 All rights reserved 25

25
Accessing the applications

OpenText ©2023 All rights reserved 26

26
Updating the hosts file

• To simulate a proper domain name without a DNS, we will include our domain in the hosts file.
• Open values.yaml and find the line containing the ingress host name.
• Copy this value (dctm-ingress.d2.jm.net).
• Edit C:\Windows\System32\drivers\etc\hosts and add an entry with your current IP and the domain name you’ve copied (an example is shown below).
• You must update the file again every time your IP changes.
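The original slide showed the hosts entry as a screenshot. A minimal sketch, with a placeholder IP address (use your laptop’s current IP instead):

192.168.1.50    dctm-ingress.d2.jm.net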

OpenText ©2023 All rights reserved 27

27
Application endpoints

• Applications and services are exposed to the world outside the cluster via an ingress configuration. Issue this command to see how they are exposed:
kubectl describe ingress dctm-ingress -n d2

• For instance, if you want to access Documentum Administrator, you just have to point your browser to: https://dctm-ingress.d2.jm.net/da

OpenText ©2023 All rights reserved 28

28
Accessing our applications

• Browse to the application endpoint, i.e. https://dctm-ingress.d2.jm.net/da/
• You will get a warning about the security certificate not being valid. Click on “Advanced” and proceed to the site anyway. This is due to the certificate being self-signed (normal for demo environments).

OpenText ©2023 All rights reserved 29

29
Manual steps – OTDS Configuration

• DA will work with a simple inline user dmadmin:password. But to access D2, Smartview, etc., you will need to configure OTDS.
• OTDS must be configured following the steps described on page 24 of “OpenText Documentum D2 CE 23.2 - Cloud Deployment Guide”
(https://support.opentext.com/csm?id=kb_article_view&sys_kb_id=9030388947de6510f3f9da7a436d431f)
• The URL suggested in step 3c is valid unless you’ve modified service names in values.yaml:
http://dcs-pg-jms-service:9080/dmotdsrest
• Step 6, create an OAuth client; use “d2_oauth_client” as its name, unless you modified it in values.yaml.
• Step 6a, you can find your ingress URL around line 23 of values.yaml, or by issuing a kubectl describe ingress dctm-ingress -n d2 command. If you have not modified the provided charts, it will be: https://dctm-ingress.d2.jm.net

OpenText ©2023 All rights reserved 30

• These steps could also be automated in the Helm charts, but I have not had the time to try it out yet.
• These steps must be done only once, after the first installation is complete.

30
Modifying the configuration
Overview of the changes made to the original Helm charts

OpenText ©2023 All rights reserved 31

31
Configuring Docker images

• In dockerimages-values.yaml, provide the Docker repository name

• The following variables may be left as default:

• For each component, ensure the Docker image name and tag matches the one you’ve downloaded:

OpenText ©2023 All rights reserved 32

A complete list of the image names and tags can be obtained from the Cloud Deployment Guide.
For those components you won’t be using (i.e. Graylog, fluentd, etc.) you can leave them as they are. If the component is disabled, the image won’t be used.
I’ve commented out some sections, as, for instance, I’m not going to use Process Engine:

# JM COMMENTED
#- name: "peinstaller-init"
#  image: "registry.opentext.com/dctm-xcp-installer:23.2"
#  imagePullPolicy: *pull_policy_type
#  command: ['/bin/sh', '-c', 'yes | sudo cp -Rf /pescripts/* /opt/dctm_docker/customscriptpvc/']
#  volumeMounts:
#  - name: dcs-data-pvc
#    mountPath: /opt/dctm_docker/customscriptpvc
#    subPath: initcontainercustomscripts/dcs-pg

Also, you’re responsible for adding some extra init containers with the image you’ve created with the D2 configuration DAR and ZIP files, i.e. under d2config:

- name: init
  image: d2customdar:latest
  imagePullPolicy: *pull_policy_type
  command: ['/bin/sh', '-c', 'yes | cp -rf /opt/D2-install/custom/* /customdir/']
  volumeMounts:
  - name: customconfig
    mountPath: /customdir

32
Modifying variables to fit your environment

Deployment is configured in the file values.yaml. You don’t have to configure everything, just some global values and the elements you want to deploy.
• rwoStorage & rwmStorage: change them from trident-nfs to hostpath
• Find and replace all occurrences of <namespace> with your namespace (i.e. d2)
• Find and replace all occurrences of <docbase_name> with your docbase (i.e. docbase1)
• Find and replace all occurrences of cfcr-lab.bp-paas.otxlab.net with your own domain (i.e. jm.net)
• You will find that these replacements indirectly configure several values across the file.
OpenText ©2023 All rights reserved 33

values.yaml does not ship empty by default, but contains most of the values engineering uses to deploy and test the charts.
It’s a matter of time and practice to develop your eye to identify the values you must change, the ones that you can leave as they are, the ones that you don’t need, etc.
In my experience, once you have managed to configure and deploy a Helm chart once and become familiar with the key values, the next one will be significantly easier. You can just compare your latest working values.yaml file (i.e. 23.2) with the new one you have downloaded (i.e. 23.4) and transpose the correct values to the new one. Always keep an eye on new sections or configurations that may have been added.
A good exercise you could do is to compare my provided values.yaml with the original one (see the specific slide for comparing files). That will give you an idea of the changes I’ve made to values.yaml to fit my needs.
Also, there are no shortcuts here: we must know Documentum and learn Kubernetes as much as possible.

• trident-nfs is the storage class used by engineering in their own environments. With Docker Desktop we only have hostpath available by default. All persistent information will be stored in our laptop’s filesystem. This is not a good practice if we had more than one node in our cluster, but for a single node demo environment it is good enough.
• cfcr-lab.bp-paas.otxlab.net is the domain name used by engineering in their own environments. We can choose any domain we want. By including this domain in our Windows hosts file, we can create the illusion of using a real domain name exposed on the Internet.

33
Enabling or disabling features

• Deploying Documentum features is just a matter of enabling (true) or disabling (false) them, and in some cases providing some additional configuration parameters.
• Some of them are enabled at variable level. Let’s say you don’t want to use Graylog: turn it to false.
• Some others are enabled at the beginning of their own section. There’s always an “enabled” attribute that you can set to true or false. Let’s say you want to enable Documentum REST services: just turn it to true (a sketch is shown after this list).
• The provided sample Helm chart deploys postgres, content server, docbroker, administrator, REST, xplore, OTDS, D2 classic, Smartview and Config.
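The original slide showed the values.yaml snippets as screenshots. A minimal sketch of the pattern only; the section name below is hypothetical, so check the real section names in your own values.yaml:

# hypothetical section name – look up the real one in values.yaml
documentum-rest:
  enabled: true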

OpenText ©2023 All rights reserved 34

If instead of using the provided Helm chart you want to learn to configure it yourself, I highly recommend starting simple: postgres, docbroker, content server and DA. Also disable the Graylog, Kafka and OTDS components. Once you have this process mastered and you are able to log in to Documentum Administrator, you can start enabling other features one at a time.
If you think you’ve broken your environment, don’t hesitate to reset your Kubernetes cluster using the option in Docker Desktop settings and start again. The fewer components you try to deploy until you get the hang of it, the faster you will progress.

34
Setting up resources
• In the original Helm charts, there are several files named “d2-resources-values-[sample_description].yaml”.
• These files basically describe:
  • Sizes of volume claims
  • CPU requests and limits
  • Memory requests and limits
  • Starting number of pods per element (i.e. 2x Content Server)
• Chances are that even the smallest of the environments (test_small) is too much for your laptop.
• In the provided “d2-resources-values-test-small.yaml” we have stripped out the requests and limits sections, and downsized all deployments to just one pod each.
• By doing this, k8s will allocate CPU and memory freely instead of using the requests and limits specified in the original charts. This will be enough for most demo / learning use cases.
OpenText ©2023 All rights reserved 35

This approach is acceptable for POC/Demo purposes, not for production. But on a laptop we have limited resources (my laptop uses 11GB just to show the Desktop). By default, our Helm charts deploy the components as deployments or statefulsets with at least 2 pods per service, which is great to provide HA out of the box. But on our laptop we don’t have enough memory to have a minimum of “two of everything”, so this is why we have downsized all components to just one pod.

35
Setting up resources – downscaling and upscaling

• If you don’t want a given component to be deployed, just set enabled=false in values.yaml.
• For temporary adjustments, you can downscale or upscale the number of pods.
• Documentum components are deployed in the form of statefulsets or deployments.
• You can downscale them to 0 if you want to temporarily “shut down” a component and save memory / CPU.
• Conversely, you can scale to 1 again (example commands are shown below):
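The original slide showed the scale commands as screenshots. A minimal sketch, assuming the Content Server statefulset is named dcs-pg (the pod dcs-pg-0 seen earlier suggests this, but verify with kubectl get statefulsets -n d2):

kubectl scale statefulset dcs-pg --replicas=0 -n d2
kubectl scale statefulset dcs-pg --replicas=1 -n d2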

OpenText ©2023 All rights reserved 36

36
Comparing documents with VS Code

• Right click on any file and select “Select for Compare”.
• Right click on a different file and select “Compare with Selected”.
• This is a very powerful feature for upgrades, as we can quickly transpose our values from our current chart to a new version.

Screenshot: Original Helm chart VS modified Helm chart (as provided for this tutorial)

OpenText ©2023 All rights reserved 37

37
WIP Slides
The following procedures have not been fully tested yet, but you may find them helpful.

OpenText ©2023 All rights reserved 38

38
Architecture

This diagram needs rework, as it is from a previous D2 version. The backend (content server, docbroker, etc.) is missing.

OpenText ©2023 All rights reserved 39

39
Deploy Kubernetes Dashboard
• Deploy the application by issuing (single line):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
• Create a file named dashboard-admin.yaml with the contents shown below.
• Create the user by issuing:
kubectl apply -f .\dashboard-admin.yaml
• Create a token by issuing:
kubectl -n kubernetes-dashboard create token admin-user
• Issue kubectl proxy and navigate to the URL shown below. Paste the token from the previous command.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Contents of dashboard-admin.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
OpenText ©2023 All rights reserved 40

40
Kubernetes Dashboard showing “d2” namespace

OpenText ©2023 All rights reserved 41

41
Enable metrics server

• The metrics server provides detailed graphical information about CPU and memory usage.
• Enable the metrics server by issuing:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

• It won’t launch correctly because of the TLS certs. You must patch the metrics-server deployment. To do so:
• In Kubernetes Dashboard, select “Deployments”. In the top namespace selector, select “All”. In the list, click “metrics-server”.
• Edit the deployment by clicking the pencil icon at the right of the blue header.
• Edit the highlighted line and click “Update” (see the note below for an equivalent command-line patch).
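The screenshot with the highlighted line is not reproduced here. A common workaround for the TLS issue on local clusters (an assumption, not taken from the original slide) is to add the --kubelet-insecure-tls argument to the metrics-server container, which can also be done from the command line (quoting shown for a bash-like shell; adjust for Powershell):

kubectl patch deployment metrics-server -n kube-system --type=json -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'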

OpenText ©2023 All rights reserved 42

42
OpenText ©2023 All rights reserved 43

43
Enable metrics server (2)

• The information shown by the metrics server is very useful to know the real memory and CPU consumption.
• You can adjust the “resources” section of your yaml files to adapt to your environment, as a small demo environment with one user (hey, that’s you!) will consume even less than the smallest of the environments included with the charts.

OpenText ©2023 All rights reserved 44

44
Resetting the cluster
• If you want to start fresh, you can use “Reset Kubernetes cluster”. This will erase all deployments but will keep your Docker images.
• Sometimes you may need to delete all data in the WSL VM to recover space and performance. Using “Clean/Purge Data” will do it. You will lose your downloaded images. See the next slide before doing this.

OpenText ©2023 All rights reserved 45

45
Keeping your Docker images
• Docker images are downloaded (pulled) to your local Docker registry. This takes a long time to complete. This registry, the images and their contents will disappear if you purge Docker’s data.
• For testing purposes, purging this data is sometimes necessary (several install / uninstall operations of the Helm chart may have a severe impact on performance when using the hostpath storageclass).
• Once you have downloaded the images at least once and they are in your Docker registry, you can save them as TAR files by issuing this command:
docker save --output dctm-rest_23.2.tar registry.opentext.com/dctm-rest:23.2

• Conversely, you can load the images from the TAR files much quicker than downloading them again:
docker load --input dctm-rest_23.2.tar

• “save_images.cmd” and “load_images.cmd” scripts are provided for your convenience (a sketch of what they might look like is shown below).
• Execute “save_images” from a directory not synchronized with Core Share or OneDrive.
• By using this procedure you can re-push the images to your Docker registry in about 20 minutes instead of 2 hours.
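The provided scripts are not reproduced in these slides. A minimal sketch of what save_images.cmd and load_images.cmd might contain, following the docker save/load pattern above (only two images shown; the real scripts cover the whole 23.2 image list):

rem save_images.cmd (sketch)
docker save --output dctm-server_23.2.tar registry.opentext.com/dctm-server:23.2
docker save --output dctm-rest_23.2.tar registry.opentext.com/dctm-rest:23.2

rem load_images.cmd (sketch)
docker load --input dctm-server_23.2.tar
docker load --input dctm-rest_23.2.tar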
OpenText ©2023 All rights reserved 46

The proper way to do this is by keeping our own Docker image registry, but for testing purposes this is cumbersome and requires more administration.

46
Deploying Eventhub (1)

• Pull the required images:
docker pull registry.opentext.com/fluentd-4.4.2-1:23.2
docker pull registry.opentext.com/kafka-2.13-3.4.0:23.2
• Add them to dockerimages-values.yaml (already correct in the 23.2 Helm chart).
• Enable kafka & fluentd in values.yaml

OpenText ©2023 All rights reserved 47

47
Deploying Eventhub (2)

• In the 23.2 Helm chart the Kafka deployment is configured to use the trident-nfs storageclass. Add storageclass information to values.yaml so Kafka is deployed using hostpath storage.

When Eventhub is deployed, fluentd is added as a container to the dcs-pg pod, dctm-rest, etc. To open a shell into these pods now, you need to specify the container you want to connect to, i.e.:

kubectl exec --stdin --container dcs-pg --tty -n d2 dcs-pg-0 -- /bin/bash

Otherwise you will connect to fluentd’s container by default.

OpenText ©2023 All rights reserved 48

48
Deploying a browser for Kafka data (1)

• Download the Helm Chart from: https://github.com/obsidiandynamics/kafdrop
• Encode the following kafka properties as base64 (https://www.base64encode.org/, or see the Powershell one-liner below):
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka-user" password="kafka-password";

• This string gets encoded as:
c2VjdXJpdHkucHJvdG9jb2w9U0FTTF9QTEFJTlRFWFQKc2FzbC5tZWNoYW5pc209U0NSQU0tU0hBLTUxMgpzYXNsLmphYXMuY29uZmlnPW9yZy5hcGFjaGUua2Fma2EuY29tbW9uLnNlY3VyaXR5LnNjcmFtLlNjcmFtTG9naW5Nb2R1bGUgcmVxdWlyZWQgdXNlcm5hbWU9ImthZmthLXVzZXIiIHBhc3N3b3JkPSJrYWZrYS1wYXNzd29yZCI7

• Include this encoded string as the value for “properties” in values.yaml.
• Include kfk-0.kfk.d2.svc.cluster.local:9092 as brokerConnect
OpenText ©2023 All rights reserved 49

49
Deploying a browser for Kafka data (2)

• Create a namespace for Kafdrop


kubectl create namespace kafdrop

• From the directory containing the Helm Chart


Deploy Kafkadrop with
helm install kafdrop . --namespace kafdrop

• To access Kafkadrop, issue:


kubectl proxy
Browse to http://localhost:8001/api/v1/namespaces/kafdrop/services/http:kafdrop:9000/proxy
OpenText ©2023 All rights reserved 50

50
Deploying RabbitMQ

• RabbitMQ is useful for managing messaging queues. It can be installed quickly into your cluster.
• Deploy RabbitMQ by issuing:
kubectl create namespace rmq
helm install rabbitmq oci://registry-1.docker.io/bitnamicharts/rabbitmq --set persistence.enabled=false -n rmq

• No PVCs are configured, so data won’t be persisted.
• Port-forward the ports for the AMQP API and the Management console:
kubectl port-forward --namespace rmq svc/rabbitmq 5672:5672
kubectl port-forward --namespace rmq svc/rabbitmq 15672:15672

• Decoded default password (from this example deployment): eVnaXY67NEZLEu7f
• Decoded default cookie (from this example deployment): kzEfLHmq58FplhktDslWUCMt1yAkcH4H
(See the note below on retrieving the values generated in your own deployment.)
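The password and Erlang cookie are generated at install time, so yours will differ. A sketch of how to retrieve them, assuming the Bitnami chart’s default secret name for a release called rabbitmq (verify with kubectl get secrets -n rmq); the returned values are base64-encoded and must be decoded:

kubectl get secret rabbitmq -n rmq -o jsonpath="{.data.rabbitmq-password}"
kubectl get secret rabbitmq -n rmq -o jsonpath="{.data.rabbitmq-erlang-cookie}"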

OpenText ©2023 All rights reserved 51

51
Accessing RabbitMQ

• Once the port is forwarded, browse to http://127.0.0.1:15672/

OpenText ©2023 All rights reserved 52

52
RabbitMQ: Sending and receiving messages
Screenshots: Send.js and Receive.js

OpenText ©2023 All rights reserved 53

53
Thank you

twitter.com/opentext

linkedin.com/company/opentext

opentext.com

54
