The aim here is to have a minimalist container with the usual Documentum clients idql, iapi, dmawk and dmqdocbroker.
Documentum Administrator (DA) could also be a useful addition to our toolbox; we will show how to containerize it in a future article. A word of warning is in order here: the title’s catchy “small footprint” qualifier is relative. Don’t forget that said container will contain a certified O/S, a JRE, the DFCs and a few supporting shared libraries. Under these conditions, at a bit more than 600 MB, it is about as small as it can get.
Since those clients are part of the content server (CS) and not packaged separately, we must first start with a full installation of the content server binaries and next we’ll remove all the non-essential parts. We chose the latest available CS version, currently v16.4.
The cute little utility dctm-wrapper discussed in Connecting to a Repository via a Dynamically Edited dfc.properties File (part I) is also included because it makes it unnecessary to manually edit the container’s dfc.properties file each time a new repository on a different machine needs to be accessed; this will be done dynamically by the utility itself.
The instructions to create the image are given in a dockerfile. We chose to base the image on the Centos distribution because, as a Red Hat derivative, it is implicitly certified and does not require any subscription. However, any other Documentum-certified Linux distribution such as Red Hat, SUSE or Ubuntu will do. Some adaptation may be in order though, e.g. adding libraries to $LD_LIBRARY_PATH or symlinking libraries under another version suffix. Check the installation manual for details.
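As an illustration of such an adaptation, here is a hedged sketch; the library name, path and version suffixes below are purely hypothetical examples, not actual requirements of any particular distribution:

```shell
# hypothetical adaptation for a non-CentOS distribution; paths are illustrative only;
export LD_LIBRARY_PATH=${DM_HOME}/bin:${LD_LIBRARY_PATH}
# symlink an available library under the version suffix the Documentum binaries expect:
sudo ln -s /usr/lib/x86_64-linux-gnu/libsomelib.so.1.1 ${DM_HOME}/bin/libsomelib.so.1.0
```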
The working directory is quite flat:

dmadmin@dmclient:~/builds/documentum/clients$ ll -R
.:
total 24
-rw-rw-r-- 1 dmadmin dmadmin 68 Aug 1 10:28 dctm-secrets
drwxrwxr-x 2 dmadmin dmadmin 4096 Aug 2 11:55 files
-rw-rw-r-- 1 dmadmin dmadmin 7107 Aug 28 23:49 Dockerfile
 
./files:
total 2151304
-rw-rw-r-- 1 dmadmin dmadmin 872 Aug 1 10:28 linux_install_properties
-rwxrwxr-x 1 dmadmin dmadmin 1393568256 Aug 1 10:28 content_server_16.4_linux64_oracle.tar
-rwxrwxr-x 1 dmadmin dmadmin 809339975 Aug 1 10:28 CS_16.4.0080.0129_linux_ora_P08.tar.gz
-rw-rw-r-- 1 dmadmin dmadmin 349 Aug 1 10:28 CS_patch08.properties
-rwxrwxr-x 1 dmadmin dmadmin 7090 Aug 2 11:55 dctm-wrapper

There is only one sub-directory, files. Read on for more details.

The clients dockerfile

Let’s take a look at the clients dockerfile:

# syntax = docker/dockerfile:1.0-experimental
# we are using the secret mount type;
# cec - dbi-services - July 2019

FROM centos:latest
MAINTAINER "cec@dbi"

RUN yum install -y sudo less unzip gzip tar iputils hostname gawk wget bind-utils net-tools && yum clean all

ARG soft_repo
ARG INSTALL_OWNER
ARG INSTALL_OWNER_UID
ARG INSTALL_OWNER_GROUP
ARG INSTALL_OWNER_GID
ARG INSTALL_HOME
ARG INSTALL_TMP
ARG PRODUCT_MAJOR_VERSION
ARG JBOSS
ARG DOCUMENTUM
ARG DOCUMENTUM_SHARED
ARG DM_HOME
ARG MNT_SHARED_FOLDER
ARG SCRIPTS_DIR

ENV LC_ALL C

USER root
RUN mkdir -p ${INSTALL_TMP} ${SCRIPTS_DIR} ${DOCUMENTUM} ${MNT_SHARED_FOLDER}

# copy the uncompressed files;
COPY ${soft_repo}/CS_patch08.properties ${soft_repo}/linux_install_properties ${soft_repo}/dctm-wrapper ${INSTALL_TMP}/.

# copy and expand the packages;
ADD ${soft_repo}/content_server_16.4_linux64_oracle.tar ${INSTALL_TMP}/.
ADD ${soft_repo}/CS_16.4.0080.0129_linux_ora_P08.tar.gz ${INSTALL_TMP}/.

RUN groupadd --gid ${INSTALL_OWNER_GID} ${INSTALL_OWNER_GROUP}                                                                                       && \
    useradd --uid ${INSTALL_OWNER_UID} --gid ${INSTALL_OWNER_GID} --shell /bin/bash --home-dir /home/${INSTALL_OWNER} --create-home ${INSTALL_OWNER} && \
    usermod -a -G wheel ${INSTALL_OWNER}                                                                                                             && \
    chown -R ${INSTALL_OWNER}:${INSTALL_OWNER_GROUP} ${INSTALL_HOME} ${INSTALL_TMP} ${SCRIPTS_DIR} ${DOCUMENTUM} ${MNT_SHARED_FOLDER}                && \
    chmod 775 ${INSTALL_HOME} ${INSTALL_TMP} ${SCRIPTS_DIR} ${DOCUMENTUM} ${MNT_SHARED_FOLDER}

# set the $INSTALL_OWNER's password passed in the secret file;
RUN --mount=type=secret,id=dctm-secrets,dst=/dctm-secrets . /dctm-secrets && echo ${INSTALL_OWNER}:"${INSTALL_OWNER_PASSWORD}" | /usr/sbin/chpasswd

# make the CLI comfortable again;
USER ${INSTALL_OWNER}
RUN echo >> ${HOME}/.bash_profile                                          && \
    echo "set -o vi" >> ${HOME}/.bash_profile                              && \
    echo "alias ll='ls -alrt'" >> ${HOME}/.bash_profile                    && \
    echo "alias psg='ps -ef | grep -i'" >> ${HOME}/.bash_profile           && \
    echo "export DM_HOME=${DM_HOME}"           >> ${HOME}/.bash_profile    && \
    echo "export DOCUMENTUM=${DOCUMENTUM}"     >> ${HOME}/.bash_profile    && \
    echo "export PATH=.:${SCRIPTS_DIR}:\$PATH" >> ${HOME}/.bash_profile    && \
    echo >> ${HOME}/.bash_profile                                          && \
    echo "[[ -f ${DM_HOME}/bin/dm_set_server_env.sh ]] && . ${DM_HOME}/bin/dm_set_server_env.sh > /dev/null 2>&1 || true" >> ${HOME}/.bash_profile && \
    echo >> ${HOME}/.bash_profile                                          && \
    mv ${INSTALL_TMP}/dctm-wrapper ${SCRIPTS_DIR}/. && chmod +x ${SCRIPTS_DIR}/dctm-wrapper && \
    ln -s ${SCRIPTS_DIR}/dctm-wrapper ${SCRIPTS_DIR}/widql && ln -s ${SCRIPTS_DIR}/dctm-wrapper ${SCRIPTS_DIR}/wiapi && ln -s ${SCRIPTS_DIR}/dctm-wrapper ${SCRIPTS_DIR}/wdmawk

# install and then cleanup useless stuff from image;
WORKDIR ${DOCUMENTUM}
RUN . ${HOME}/.bash_profile && cd ${INSTALL_TMP} && ./serverSetup.bin -f ./linux_install_properties && echo $? && \
    tar xvf CS_16.4.0080.0129_linux_ora_P08.tar && \
    chmod +x ./patch.bin && ./patch.bin LAX_VM $DOCUMENTUM/java64/JAVA_LINK/jre/bin/java -f CS_patch08.properties && echo $? && \
    cd ${DOCUMENTUM} && rm -r ${INSTALL_TMP} /tmp/install* && \
    rm -r tcf && rm -r tools && rm -r ${JBOSS} && rm -r jmsTools && rm -r uninstall && rm -r dba && rm -r java64/1.8.0_152/db && rm -r java64/1.8.0_152/include && rm java64/1.8.0_152/javafx-src.zip && \
    rm java64/1.8.0_152/src.zip && rm -r java64/1.8.0_152/man && rm -r temp/* && \
    cd ${DOCUMENTUM}/product/16.4/ && \
    rm -r diagtools && rm -r lib && rm -r convert && rm -r unsupported && rm -r oracle && rm -r install && mkdir smaller_bin && mv bin/dm_set_server_env.sh smaller_bin/. && mv bin/iapi* smaller_bin/. && \
    mv bin/idql* smaller_bin/. && mv bin/dmqdocbroker* smaller_bin/. && mv bin/dmawk* smaller_bin/. && mv bin/libkmclient_shared.so* smaller_bin/. && mv bin/libdmcl*.so* smaller_bin/. && \
    mv bin/libsm_sms.so* smaller_bin/. && mv bin/libsm_clsapi.so* smaller_bin/. && mv bin/libsm_env.so* smaller_bin/. && mv bin/java.ini smaller_bin/. && rm -r bin && mv smaller_bin bin && \
    cd ${DOCUMENTUM}/dfc/ && for f in *; do mv $f _${f}; done && mv _aspectjrt.jar aspectjrt.jar && mv _commons-lang-2.6.jar commons-lang-2.6.jar && mv _dfc.jar dfc.jar && mv _log4j.jar log4j.jar && rm -r _*

# keep the container rolling;
CMD bash -c "while true; do sleep 60; done"

# build the image;
# copy/paste the commented lines below starting with the cat command uncommented:
# cat - <<'eot' | gawk '{gsub(/#+ */, ""); print}'
# export INSTALL_OWNER=dmadmin
# export INSTALL_OWNER_UID=1000 
# export INSTALL_OWNER_GROUP=${INSTALL_OWNER}
# export INSTALL_OWNER_GID=1000
# export INSTALL_HOME=/app
# export INSTALL_TMP=${INSTALL_HOME}/tmp
# export SCRIPTS_DIR=${INSTALL_HOME}/scripts
# export PRODUCT_MAJOR_VERSION=16.4
# export JBOSS=wildfly9.0.1
# export DOCUMENTUM=${INSTALL_HOME}/dctm 
# export DOCUMENTUM_SHARED=${DOCUMENTUM}
# export DM_HOME=${DOCUMENTUM}/product/${PRODUCT_MAJOR_VERSION}
# export MNT_SHARED_FOLDER=${INSTALL_HOME}/shared
# time DOCKER_BUILDKIT=1 docker build --squash --no-cache --progress=plain --secret id=dctm-secrets,src=./dctm-secrets \
#  --build-arg soft_repo=./files                              \
#  --build-arg INSTALL_OWNER=${INSTALL_OWNER}                 \
#  --build-arg INSTALL_OWNER_UID=${INSTALL_OWNER_UID}         \
#  --build-arg INSTALL_OWNER_GROUP=${INSTALL_OWNER_GROUP}     \
#  --build-arg INSTALL_OWNER_GID=${INSTALL_OWNER_GID}         \
#  --build-arg INSTALL_HOME=${INSTALL_HOME}                   \
#  --build-arg INSTALL_TMP=${INSTALL_TMP}                     \
#  --build-arg PRODUCT_MAJOR_VERSION=${PRODUCT_MAJOR_VERSION} \
#  --build-arg JBOSS=${JBOSS}                                 \
#  --build-arg DOCUMENTUM=${DOCUMENTUM}                       \
#  --build-arg DOCUMENTUM_SHARED=${DOCUMENTUM_SHARED}         \
#  --build-arg DM_HOME=${DM_HOME}                             \
#  --build-arg MNT_SHARED_FOLDER=${MNT_SHARED_FOLDER}         \
#  --build-arg SCRIPTS_DIR=${SCRIPTS_DIR}                     \
#  --tag="dbi/dctm-clients:v1.0"                              \
#  .
# eot
# retag the squashed image as it has no tag:
# docker tag <image_id> dbi/dctm-clients:v1.0

# run the image and remove the container on exit;
# docker run -d --rm --hostname=container-clients dbi/dctm-clients:v1.0

# run the image and keep the container on exit;
# docker run -d --name container-clients --hostname=container-clients dbi/dctm-clients:v1.0

# for trans-host container access, connect the container to an existing overlay or macvlan network, e.g.:
# docker network connect dctmolnet01 container-clients

The pragma on line 1 says that we will be using an experimental feature that needs a special syntax, see below for details.
On line 5, we pull a Centos image from Docker’s online registry and base our own image on it. We also throw in a few utilities that will be helpful later.
On lines 10 to 23, we use ARG to receive the parameters passed to the build. We could have used ENV instead and hard-coded the values in the dockerfile, but those environment variables would then persist in every container derived from the image; since they are not used outside the build, they would just pollute the environment unnecessarily. Docker currently has no way to remove an environment variable; it can be overridden with an empty value, but the variable itself remains.
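To make the difference concrete, here is a minimal sketch, separate from the clients dockerfile, contrasting the two instructions (the variable names are illustrative):

```dockerfile
FROM centos:latest
ARG BUILD_ONLY=foo      # available during the build only
ENV PERSISTED=bar       # baked into every container derived from the image
# ${BUILD_ONLY} resolves here, at build time...
RUN echo "building with ${BUILD_ONLY}"
# ...but at run time, "docker run <image> env" lists PERSISTED, not BUILD_ONLY
```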
On line 31, the needed files, i.e. the CS and patch installation archives along with their property files plus anything else, are imported into the image from the local host’s directory ${soft_repo}. This directory must be local to the current one and both should contain only the exact files needed, in order to keep the build context small. Indeed, during the context creation, the current directory and everything under it is read recursively, which is quite a slow process.
On line 39, the user dmadmin is added to the wheel group; this is a convenient trick on Centos, as members of that group are allowed to sudo any command.
On line 44, the experimental “secrets” feature is finally used. This enhancement lets us pass a file containing confidential information, typically passwords, without leaving any trace in the image’s overlay filesystems. The RUN statement is executed atomically and, at the end, the mounted file is dismounted silently. Outside that RUN statement, it is as if nothing had happened, yet we quietly managed to change the dmadmin user’s password. Here is the file’s content:

dmadmin@dmclient:~/builds/documentum/da$ cat dctm-secrets
export INSTALL_OWNER=dmadmin
export INSTALL_OWNER_PASSWORD=dmadmin

It is a bash file that is sourced in the RUN statement to give access to the key-value tuples through environment variables.
On lines 59 and 60, dctm-wrapper is installed as explained in the aforementioned article, i.e. by symlinking it to widql, wiapi and wdmawk. When any of these programs is executed, it will be resolved to the wrapper, which knows what to do next. Please, refer to that article for details.
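While the wrapper’s details are in the referenced article, the dispatch principle can be sketched as follows; the script below is a hypothetical stand-in for dctm-wrapper, not its actual code, and simply shows how a program can tell which of its symlinks invoked it:

```shell
tmpdir=$(mktemp -d)
# a stand-in wrapper that dispatches on its invocation name, i.e. $0:
cat > "$tmpdir/dctm-wrapper" <<'EOF'
#!/bin/bash
case $(basename "$0") in
   widql)  tool=idql;;
   wiapi)  tool=iapi;;
   wdmawk) tool=dmawk;;
   *)      tool=unknown;;
esac
echo "would invoke: $tool $*"
EOF
chmod +x "$tmpdir/dctm-wrapper"
ln -s "$tmpdir/dctm-wrapper" "$tmpdir/widql"
result=$("$tmpdir/widql" dmtest02:container02:1489)
echo "$result"
```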
On line 64, the Documentum content server is installed in silent mode; the parameter file is the default sample one:

dmadmin@dmclient:~/builds/documentum/clients$ cat files/linux_install_properties
# silent install response file
# used to install the binaries;
 
INSTALLER_UI=silent
KEEP_TEMP_FILE=false
 
####installation
##default documentum home directory, on linux, this attribute is supposed to be set in the env so don't need
##SERVER.DOCUMENTUM=/opt/documentum
##app server port
APPSERVER.SERVER_HTTP_PORT=9080
##app server password
APPSERVER.SECURE.PASSWORD=jboss
##enable cas as default
SERVER.CAS_LICENSE=XXXXXXXXX
 
# provide the lockboxpassphrase for existing lockbox files. the file name and lockboxpassphrase must be start with 1 and sequentially increased and cannot skip any number
# for example, SERVER.LOCKBOX_FILE_NAME1=lockbox1.lb, SERVER.LOCKBOX_FILE_NAME2=lockbox2.lb, SERVER.LOCKBOX_FILE_NAME3=lockbox3.lb
# same rule applies to SERVER.LOCKBOX_PASSPHRASE.PASSWORD=
SERVER.LOCKBOX_FILE_NAME1=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD1=

Since we do not create repositories but just use a minimalist set of files needed by the clients, that file could be simplified to:

# silent install response file
INSTALLER_UI=silent
KEEP_TEMP_FILE=false

On line 66, the patch P08, the latest available one for CS v16.4 as of this writing, is extracted and on the next line it is installed in silent mode too. Its edited property file CS_patch08.properties looks like this:

dmadmin@dmclient:~/builds/documentum/clients$ cat files/CS_patch08.properties
# Sun Apr 28 13:24:15 CEST 2019
# Replay feature output
# ---------------------
# This file was built by the Replay feature of InstallAnywhere.
# It contains variables that were set by Panels, Consoles or Custom Code.
 
#all
#---
INSTALLER_UI=silent
USER_SELECTED_PATCH_ZIP_FILE=CS_16.4.0080.0129_linux_ora.tar.gz
common.installLocation=/app/dctm

Lines 68 to 74 delete all the files from the content server package that are not used by the clients. Small files are not worth the effort so they are left behind.
The DFCs have been empirically kept to a minimum. Normally, there should not be any surprise because the iapi, idql and dmawk are legacy clients that only scratch the surface of the DFCs and don’t use any other jars.
On line 77, an endless loop is defined in order to keep the container running. Indeed, once no process is running inside a container any more, that container is shut down. Here, the container does nothing per se, it just provides tools to be invoked externally, and it would exit immediately if it weren’t for the sleeping loop.
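Any long-lived foreground process serves the same purpose; the following commonly used alternatives to the sleep loop are equivalent:

```dockerfile
# keep PID 1 alive without a busy loop:
CMD ["sleep", "infinity"]
# or:
# CMD ["tail", "-f", "/dev/null"]
```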
Starting with line 79, the commented-out instructions to build and run the image are listed, for we feel it is handy to have them all in one place. They could also be wrapped up in a script file if that is more convenient.

Building the image

To build the image, use the command below:

export INSTALL_OWNER=dmadmin
export INSTALL_OWNER_UID=1000
export INSTALL_OWNER_GROUP=${INSTALL_OWNER}
export INSTALL_OWNER_GID=1000
export INSTALL_HOME=/app
export INSTALL_TMP=${INSTALL_HOME}/tmp
export SCRIPTS_DIR=${INSTALL_HOME}/scripts
export PRODUCT_MAJOR_VERSION=16.4
export JBOSS=wildfly9.0.1
export DOCUMENTUM=${INSTALL_HOME}/dctm
export DOCUMENTUM_SHARED=${DOCUMENTUM}
export DM_HOME=${DOCUMENTUM}/product/${PRODUCT_MAJOR_VERSION}
export MNT_SHARED_FOLDER=${INSTALL_HOME}/shared
time DOCKER_BUILDKIT=1 docker build --squash --no-cache --progress=plain --secret id=dctm-secrets,src=./dctm-secrets \
--build-arg soft_repo=./files                              \
--build-arg INSTALL_OWNER=${INSTALL_OWNER}                 \
--build-arg INSTALL_OWNER_UID=${INSTALL_OWNER_UID}         \
--build-arg INSTALL_OWNER_GROUP=${INSTALL_OWNER_GROUP}     \
--build-arg INSTALL_OWNER_GID=${INSTALL_OWNER_GID}         \
--build-arg INSTALL_HOME=${INSTALL_HOME}                   \
--build-arg INSTALL_TMP=${INSTALL_TMP}                     \
--build-arg PRODUCT_MAJOR_VERSION=${PRODUCT_MAJOR_VERSION} \
--build-arg JBOSS=${JBOSS}                                 \
--build-arg DOCUMENTUM=${DOCUMENTUM}                       \
--build-arg DOCUMENTUM_SHARED=${DOCUMENTUM_SHARED}         \
--build-arg DM_HOME=${DM_HOME}                             \
--build-arg MNT_SHARED_FOLDER=${MNT_SHARED_FOLDER}         \
--build-arg SCRIPTS_DIR=${SCRIPTS_DIR}                     \
--tag="dbi/dctm-clients:v1.0"                              \
.

Don’t forget the tiny little dot at the bottom 😉
The instructions are also given on lines 81 to 112 in the dockerfile. Just uncomment the first line (the cat command), copy/paste all the lines up to and including the dot, and add a line containing only the word “eot” to close the here-document; afterwards, paste the generated text into the shell and Bob’s your uncle.
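The gawk one-liner in that trick simply strips the leading comment markers, turning the commented block back into executable shell. Here is the filter applied to a single line (using awk for portability; gawk behaves identically here):

```shell
# strip the leading "# " exactly as the here-document trick does:
uncommented=$(echo '# export INSTALL_OWNER=dmadmin' | awk '{gsub(/#+ */, ""); print}')
echo "$uncommented"
```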
The image creation takes a little under 5.5 minutes on my old, faithful machine, mostly spent in the Documentum installer (3 min) and next in the --squash option on line 14 (42 s). This too is an experimental option; it merges all the file system layers into a single one and drops the deleted files. The net result is a much more compact image: it goes from 5.06 GB down to 625 MB. We *have* to use it here, otherwise there would be no point in going through the effort of removing unnecessary files to produce as compact an image as possible.
The resulting size now closely matches the space actually taken up by the files in the image, as shown from within a running container:

[dmadmin@container-clients dctm]$ sudo du -ms /
...
622 /

There are pros and cons to using --squash, but the option has no perceptible drawback in our case.
Let’s now look at the produced images’ sizes from the outside:

dmadmin@dmclient:~/builds/documentum/clients$ docker image ls
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
<none>                <none>              85b59eb66c00        3 minutes ago       625MB
dbi/dctm-clients      v1.0                bb0038f8a577        4 minutes ago       5.06GB

Note that the --squash option produced a new image, here with the id 85b59eb66c00. We would expect it to replace the original image rather than create a new one, but it’s fine too: we now have both the original and the squashed one. Note also how the squashed image’s size has shrunk almost ten-fold. Let’s tag it the way it should have been:

dmadmin@dmclient:~/builds/documentum/clients$ docker tag 85b59eb66c00 dbi/dctm-clients:v1.0
dmadmin@dmclient:~/builds/documentum/clients$ docker image ls
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
dbi/dctm-clients      v1.0                85b59eb66c00        About an hour ago   625MB
...

Running the image

To run the image as a temporary container, use the command below:

dmadmin@dmclient:~/builds/documentum/clients$ docker run -d --rm --hostname=container-clients dbi/dctm-clients:v1.0

To run the image and keep the container under the name container-clients on exit:

dmadmin@dmclient:~/builds/documentum/clients$ docker run -d --name container-clients --hostname=container-clients dbi/dctm-clients:v1.0
# check it:
dmadmin@dmclient:~/builds/documentum/clients$ docker container ls
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
97dd2f14e3ef        dbi/dctm-clients:v1.0   "/bin/sh -c 'bash -c…"   5 seconds ago       Up 3 seconds                                 container-clients
...

Good. Let’s now move on and test the container. Note that we usually specify a network for the container to be connected to, either when starting it or later, as shown in the example below with a custom bridge network named dctmbrnet:

# connect the container to a custom bridge network after it has been started:
docker network connect dctmbrnet container-clients
 
# connect the container to a custom bridge network when starting it up:
dmadmin@dmclient:~/builds/documentum/clients$ docker run -d --name container-clients --hostname=container-clients --network dctmbrnet dbi/dctm-clients:v1.0
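The custom bridge network is assumed to already exist; if it doesn’t yet, create it once on the host beforehand:

```shell
# create the user-defined bridge network used in the examples above:
docker network create --driver bridge dctmbrnet
```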

Testing the container-clients container from the inside

In this scenario, we need first to enter the container and then to run the usual command-line clients.
Let’s suppose the container is connected to some network that allows it to access remote containerized repositories. Such a network can be a simple bridge network if the repositories’ containers and the container-clients container are on the same host. It can also be an overlay or macvlan network if the containers are distributed across several hosts, or any non-Docker network, e.g. one managed by Kubernetes. So, let’s first enter the container-clients container and then test the tools:

# enter container-clients from its host:
dmadmin@dmclient:~/builds/documentum/clients$ docker exec -it container-clients /bin/bash -l
 
# connect to the container02's dmtest02 repository:
[dmadmin@container-clients dctm]$ widql dmtest02:container02:1489 -Udmadmin -Pdmadmin
 
 
OpenText Documentum idql - Interactive document query interface
Copyright (c) 2018. OpenText Corporation
All rights reserved.
Client Library Release 16.4.0070.0035
 
 
Connecting to Server using docbase dmtest02
[DM_SESSION_I_SESSION_START]info: "Session 0100c35080002a52 started for user dmadmin."
 
 
Connected to OpenText Documentum Server running Release 16.4.0080.0129 Linux64.Oracle
1> quit
Bye
Connection to dmclient closed.

Remember that widql/wiapi/etc… end up calling the native Documentum tools idql/iapi/etc… So, the containerized clients do work as expected.

Testing the container-clients container from its host

Since we are logged on the container’s host, we can pass commands to the container on the command-line, as shown below:

dmadmin@dmclient:~/builds/documentum/clients$ docker exec -it container-clients bash -l widql dmtest02:container02:1489 -Udmadmin -Pdmadmin
...
Connected to OpenText Documentum Server running Release 16.4.0080.0129 Linux64.Oracle
1> quit
Bye
Connection to dmclient closed.

Note that we use widql here because we don’t want to manually edit in advance the container-clients’ dfc.properties file; widql will do it on-the-fly for us.
The above command can be shortened by defining the following aliases:

alias widqlc='docker exec -it container-clients bash -l widql'
alias wiapic='docker exec -it container-clients bash -l wiapi'
alias dmawkc="docker exec -it container-clients bash -l wdmawk"
alias wdmqc='docker exec -it container-clients bash -l dmqdocbroker'

Their usage would be as simple as:

dmadmin@dmclient:~/builds/documentum/clients$ widqlc dmtest02:container02:1489 -Udmadmin -Pdmadmin
dmadmin@dmclient:~/builds/documentum/clients$ wdmqc -t container02 -p 1489 -i
dmadmin@dmclient:~/builds/documentum/clients$ dmawkc -v dmtest02:container02:1489 '{print}' ~/.bash_profile

container-clients makes the connection to the repositories on the host’s behalf; as such, it acts as a proxy for the host.

Testing the container-clients container from a different host

In this scenario, we are logged on a machine different from the container’s host. Since we did not install ssh in the container, it is not possible to run something like “ssh dmadmin@container-clients widql dmtest02:container02:1489”, even supposing the container is accessible through the network (we’d need to connect it to a macvlan network, for example). But, if allowed to, we could access its host via ssh and politely ask it to send a command to the container, as exemplified below:

dmadmin@dmclient2:~$ ssh -t dmadmin@dmclient docker exec -it container-clients bash -l widql dmtest02:container02:1489 -Udmadmin -Pdmadmin
...
Connecting to Server using docbase dmtest02
[DM_SESSION_I_SESSION_START]info: "Session 0100c35080002b1a started for user dmadmin."
 
 
Connected to OpenText Documentum Server running Release 16.4.0080.0129 Linux64.Oracle
1> quit
Bye
Connection to dmclient closed.

which again could be simplified through a local alias:

alias widqlc="ssh -t dmadmin@dmclient docker exec -it container-clients bash -l widql"

and invoked thusly:

dmadmin@dmclient2:~$ widqlc dmtest02:container02:1489 -Udmadmin -Pdmadmin
...
Connected to OpenText Documentum Server running Release 16.4.0080.0129 Linux64.Oracle
1> quit
Bye
Connection to dmclient closed.

Testing the container-clients container from another container

A similar scenario would be for a container (instead of a normal host) to use container-clients to access a remote repository. In this case, an ssh client is needed in the source container and an ssh server on the host dmclient (which generally has one already); supposing they have been installed, the same “ssh -t dmadmin@dmclient docker exec…” command as above, or an alias thereof, could be used.
Of course, if container-clients has the whole ssh package installed and enabled, there is no need any more to go through its host machine to invoke the tools; a command like “ssh -t dmadmin@container-clients widql dmtest02:container02:1489”, or again a shortcut for it, could be used directly from any node on the network, be it a container or a traditional host. The dmadmin account’s credentials are then needed, but this is not a big deal.
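As a hedged sketch of that alternative, the dockerfile could be extended along the following lines; the package names are the standard CentOS ones, but key management and hardening are left out and would need attention in a real deployment:

```dockerfile
# optional: direct ssh access into the clients container;
RUN yum install -y openssh-server openssh-clients && yum clean all && \
    ssh-keygen -A
EXPOSE 22
# run sshd in the foreground so it also keeps the container alive:
CMD ["/usr/sbin/sshd", "-D"]
```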

Conclusion

The container-clients container makes it unnecessary to install the Documentum clients over and over again on administrative machines, when that is possible at all. Having such a container available also saves us from logging on servers just to use their local clients, again when that is possible at all, and reduces the risk of polluting or corrupting those installations. We have used it only with containers so far but, network permitting, it could also be used to access remote, non-containerized repositories just as easily and transparently. Although quite useful as-is, it could be made even more so by adding Documentum Administrator to it. This is shown in the next article, Documentum Administrator in a container.