I like to expand my knowledge, I like meditation, I like what I do, even if I do it wrong.

SOPS and GPG

SOPS is a tool that facilitates encrypting and decrypting files; it supports the key management services of all major cloud providers as well as GPG.

Prerequisites

You need sops and a working GnuPG (gpg) installation on your machine.

How to use it

First things first, let's create a new gpg key (skip this step if you already have one):

The gpg --full-generate-key command will ask you some questions; just keep the defaults and continue.

After the key has been generated, take its fingerprint and pass it as an argument to sops:

sops --pgp [your fingerprint] pgpfile.yaml
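
A minimal end-to-end sketch (the fingerprint and file name are placeholders):

# Show your keys and copy the 40-character fingerprint
gpg --list-secret-keys --keyid-format LONG

# Create or edit an encrypted file using that fingerprint as master key
sops --pgp [your fingerprint] pgpfile.yaml

# Decrypt to stdout
sops -d pgpfile.yaml

# Decrypt and write back in place
sops -d -i pgpfile.yaml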

SOPS man

NAME:
   sops - sops - encrypted file editor with AWS KMS, GCP KMS, Azure Key Vault and GPG support

USAGE:
   sops is an editor of encrypted files that supports AWS KMS and PGP

   To encrypt or decrypt a document with AWS KMS, specify the KMS ARN
   in the -k flag or in the SOPS_KMS_ARN environment variable.
   (you need valid credentials in ~/.aws/credentials or in your env)

   To encrypt or decrypt a document with GCP KMS, specify the
   GCP KMS resource ID in the --gcp-kms flag or in the SOPS_GCP_KMS_IDS
   environment variable.
   (you need to setup google application default credentials. See
    https://developers.google.com/identity/protocols/application-default-credentials)


   To encrypt or decrypt a document with HashiCorp Vault's Transit Secret Engine, specify the
   Vault key URI name in the --hc-vault-transit flag or in the SOPS_VAULT_URIS environment variable (eg. https://vault.example.org:8200/v1/transit/keys/dev
      where 'https://vault.example.org:8200' is the vault server, 'transit' the enginePath, and 'dev' is the name of the key )
   environment variable.
   (you need to enable the Transit Secrets Engine in Vault. See
      https://www.vaultproject.io/docs/secrets/transit/index.html)

   To encrypt or decrypt a document with Azure Key Vault, specify the
   Azure Key Vault key URL in the --azure-kv flag or in the SOPS_AZURE_KEYVAULT_URL
   environment variable.
   (authentication is based on environment variables, see
    https://docs.microsoft.com/en-us/go/azure/azure-sdk-go-authorization#use-environment-based-authentication.
    The user/sp needs the key/encrypt and key/decrypt permissions)

   To encrypt or decrypt using PGP, specify the PGP fingerprint in the
   -p flag or in the SOPS_PGP_FP environment variable.

   To use multiple KMS or PGP keys, separate them by commas. For example:
       $ sops -p "10F2...0A, 85D...B3F21" file.yaml

   The -p, -k, --gcp-kms, --hc-vault-transit and --azure-kv flags are only used to encrypt new documents. Editing
   or decrypting existing documents can be done with "sops file" or
   "sops -d file" respectively. The KMS and PGP keys listed in the encrypted
   documents are used then. To manage master keys in existing documents, use
   the "add-{kms,pgp,gcp-kms,azure-kv,hc-vault-transit}" and "rm-{kms,pgp,gcp-kms,azure-kv,hc-vault-transit}" flags.

   To use a different GPG binary than the one in your PATH, set SOPS_GPG_EXEC.

   To select a different editor than the default (vim), set EDITOR.

   For more information, see the README at github.com/mozilla/sops

VERSION:
   3.6.1

AUTHORS:
   AJ Bahnken <ajvb@mozilla.com>
   Adrian Utrilla <adrianutrilla@gmail.com>
   Julien Vehent <jvehent@mozilla.com>

COMMANDS:
     exec-env    execute a command with decrypted values inserted into the environment
     exec-file   execute a command with the decrypted contents as a temporary file
     publish     Publish sops file or directory to a configured destination
     keyservice  start a SOPS key service server
     groups      modify the groups on a SOPS file
     updatekeys  update the keys of a SOPS file using the config file
     help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --decrypt, -d                            decrypt a file and output the result to stdout
   --encrypt, -e                            encrypt a file and output the result to stdout
   --rotate, -r                             generate a new data encryption key and reencrypt all values with the new key
   --kms value, -k value                    comma separated list of KMS ARNs [$SOPS_KMS_ARN]
   --aws-profile value                      The AWS profile to use for requests to AWS
   --gcp-kms value                          comma separated list of GCP KMS resource IDs [$SOPS_GCP_KMS_IDS]
   --azure-kv value                         comma separated list of Azure Key Vault URLs [$SOPS_AZURE_KEYVAULT_URLS]
   --hc-vault-transit value                 comma separated list of vault's key URI (e.g. 'https://vault.example.org:8200/v1/transit/keys/dev') [$SOPS_VAULT_URIS]
   --pgp value, -p value                    comma separated list of PGP fingerprints [$SOPS_PGP_FP]
   --in-place, -i                           write output back to the same file instead of stdout
   --extract value                          extract a specific key or branch from the input document. Decrypt mode only. Example: --extract '["somekey"][0]'
   --input-type value                       currently json, yaml, dotenv and binary are supported. If not set, sops will use the file's extension to determine the type
   --output-type value                      currently json, yaml, dotenv and binary are supported. If not set, sops will use the input file's extension to determine the output format
   --show-master-keys, -s                   display master encryption keys in the file during editing
   --add-gcp-kms value                      add the provided comma-separated list of GCP KMS key resource IDs to the list of master keys on the given file
   --rm-gcp-kms value                       remove the provided comma-separated list of GCP KMS key resource IDs from the list of master keys on the given file
   --add-azure-kv value                     add the provided comma-separated list of Azure Key Vault key URLs to the list of master keys on the given file
   --rm-azure-kv value                      remove the provided comma-separated list of Azure Key Vault key URLs from the list of master keys on the given file
   --add-kms value                          add the provided comma-separated list of KMS ARNs to the list of master keys on the given file
   --rm-kms value                           remove the provided comma-separated list of KMS ARNs from the list of master keys on the given file
   --add-hc-vault-transit value             add the provided comma-separated list of Vault's URI key to the list of master keys on the given file ( eg. https://vault.example.org:8200/v1/transit/keys/dev)
   --rm-hc-vault-transit value              remove the provided comma-separated list of Vault's URI key from the list of master keys on the given file ( eg. https://vault.example.org:8200/v1/transit/keys/dev)
   --add-pgp value                          add the provided comma-separated list of PGP fingerprints to the list of master keys on the given file
   --rm-pgp value                           remove the provided comma-separated list of PGP fingerprints from the list of master keys on the given file
   --ignore-mac                             ignore Message Authentication Code during decryption
   --unencrypted-suffix value               override the unencrypted key suffix.
   --encrypted-suffix value                 override the encrypted key suffix. When empty, all keys will be encrypted, unless otherwise marked with unencrypted-suffix.
   --unencrypted-regex value                set the unencrypted key suffix. When specified, only keys matching the regex will be left unencrypted.
   --encrypted-regex value                  set the encrypted key suffix. When specified, only keys matching the regex will be encrypted.
   --config value                           path to sops' config file. If set, sops will not search for the config file recursively.
   --encryption-context value               comma separated list of KMS encryption context key:value pairs
   --set value                              set a specific key or branch in the input document. value must be a json encoded string. (edit mode only). eg. --set '["somekey"][0] {"somevalue":true}'
   --shamir-secret-sharing-threshold value  the number of master keys required to retrieve the data key with shamir (default: 0)
   --verbose                                Enable verbose logging output
   --output value                           Save the output after encryption or decryption to the file specified
   --enable-local-keyservice                use local key service
   --keyservice value                       Specify the key services to use in addition to the local one. Can be specified more than once. Syntax: protocol://address. Example: tcp://myserver.com:5000
   --help, -h                               show help
   --version, -v                            print the version
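
Instead of passing fingerprints on every invocation, sops can also read them from a .sops.yaml configuration file in the project root. A minimal sketch (the fingerprint is a placeholder):

cat > .sops.yaml <<'EOF'
creation_rules:
  - pgp: '[your fingerprint]'
EOF

New files created with sops in this directory tree will then pick up the key without the --pgp flag.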

Helm

Add a new helm repository

Adding the Bitnami repository as an example:

helm repo add bitnami https://charts.bitnami.com/bitnami

List installed repositories

helm repo list

Search for a package to install

helm search repo [your application name] [--versions]


Common Actions for Helm

  • helm search: search for charts
  • helm pull: download a chart to your local directory to view
  • helm install: upload the chart to Kubernetes
  • helm list: list releases of charts
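
A typical sequence combining these commands might look like the following (chart, release, and namespace names are illustrative; --create-namespace needs Helm 3.2+):

helm repo update
helm search repo bitnami/nginx --versions
helm pull bitnami/nginx --untar          # optional: inspect the chart locally
helm install my-nginx bitnami/nginx --namespace web --create-namespace
helm list --namespace web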

Environment variables:

Name                                Description
$HELM_CACHE_HOME                    set an alternative location for storing cached files.
$HELM_CONFIG_HOME                   set an alternative location for storing Helm configuration.
$HELM_DATA_HOME                     set an alternative location for storing Helm data.
$HELM_DRIVER                        set the backend storage driver. Values are: configmap, secret, memory, postgres
$HELM_DRIVER_SQL_CONNECTION_STRING  set the connection string the SQL storage driver should use.
$HELM_NO_PLUGINS                    disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
$KUBECONFIG                         set an alternative Kubernetes configuration file (default "~/.kube/config")

Helm stores cache, configuration, and data based on the following configuration order:

  • If a HELM_*_HOME environment variable is set, it will be used
  • Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
  • When no other location is set a default location will be used based on the operating system

The default directories depend on the operating system and are listed below:

Operating System  Cache Path                 Configuration Path              Data Path
Linux             $HOME/.cache/helm          $HOME/.config/helm              $HOME/.local/share/helm
macOS             $HOME/Library/Caches/helm  $HOME/Library/Preferences/helm  $HOME/Library/helm
Windows           %TEMP%\helm                %APPDATA%\helm                  %APPDATA%\helm

Usage:
helm [command]

Available Commands:
completion generate autocompletions script for the specified shell
create create a new chart with the given name
dependency manage a chart's dependencies
env helm client environment information
get download extended information of a named release
help Help about any command
history fetch release history
install install a chart
lint examine a chart for possible issues
list list releases
package package a chart directory into a chart archive
plugin install, list, or uninstall Helm plugins
pull download a chart from a repository and (optionally) unpack it in local directory
repo add, list, remove, update, and index chart repositories
rollback roll back a release to a previous revision
search search for a keyword in charts
show show information of a chart
status display the status of the named release
template locally render templates
test run tests for a release
uninstall uninstall a release
upgrade upgrade a release
verify verify that a chart at the given path has been signed and is valid
version print the client version information

Flags:
--add-dir-header If true, adds the file directory to the header
--alsologtostderr log to standard error as well as files
--debug enable verbose output
-h, --help help for helm
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-context string name of the kubeconfig context to use
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--logtostderr log to standard error instead of files (default true)
-n, --namespace string namespace scope for this request
--registry-config string path to the registry config file (default "/home/pj/.config/helm/registry.json")
--repository-cache string path to the file containing cached repository indexes (default "/home/pj/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/home/pj/.config/helm/repositories.yaml")
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging

Golang arrays performance

The scope of this article is simply to illustrate the performance of arrays in Go while iterating over arrays of different sizes, compared with computing the same number of items directly without an array.

The interesting thing is that:

  1. Accessing memory on small arrays (100 items) is slower than accessing memory on a single var in a for loop of the same size,
  2. Accessing memory on bigger arrays (10,000 items) is faster than accessing memory on a single var in a for loop of the same size,
  3. Accessing memory on even bigger arrays (100,000 items) is faster than accessing memory on a single var in a for loop of the same size.

Let's start by illustrating the code that I'm using as the base example. This program is very simple: it prints out the times table of the numbers that you pass to it.

I've created two constants to pass the dimension of the arrays:

  • NUM i.e. the multiplicand
  • MULTIPLIERS i.e. the multipliers

If I set the NUM and MULTIPLIERS constants to the value of 10 (100 items) I get:

Execution with array took 268.772µs
Execution without array took 180.132µs

If I set the NUM and MULTIPLIERS constants to the value of 100 (10,000 items) I get:

Execution with array took 133.748902ms
Execution without array took 145.693206ms

If I set NUM to the value of 1000 and MULTIPLIERS to the value of 100 (100,000 items) I get:

Execution with array took 11.424318589s
Execution without array took 11.463020857s

Here is the code:

package main

import (
    "fmt"
    "time"
)

func main() {

    // Define our parameters here.
    // Change the values to see how performance change
    const NUM int = 10
    const MULTIPLIERS int = 10

    arrayStart := time.Now()
    fmt.Println(arrayTimesTable(NUM, MULTIPLIERS))
    elapsedArray := time.Since(arrayStart)
    //fmt.Printf("Execution took %s\n", elapsedArray)

    fmt.Println("--------------------------------------------------------------------------")

    start := time.Now()
    fmt.Println(timesTable(NUM, MULTIPLIERS))
    elapsed := time.Since(start)

    fmt.Printf("Execution with array took %s\n", elapsedArray)
    fmt.Printf("Execution without array took %s\n", elapsed)
}

// arrayTimesTable pre-fills two slices with the operands, then multiplies
// every pair while building the output string.
func arrayTimesTable(num, multipliers int) string {
    m := ""
    N := num
    M := multipliers

    arr := make([]int, N)
    for i := range arr {
        arr[i] = i + 1
    }

    timesTable := make([]int, M)
    for k := range timesTable {
        timesTable[k] = k + 1
    }

    for i := 0; i < len(arr); i++ {
        for t := 0; t < len(timesTable); t++ {
            res := [...]int{arr[i] * timesTable[t]}
            m += fmt.Sprintf("%v * %v: %v\n", i + 1, t + 1, res)
        }
    }
    return m
}

// timesTable computes the same products directly from the loop counters,
// without any backing array.
func timesTable(num int, multipliers int) string {
    m := ""
    for i := 0; i <= num; i++ {
        for t := 0; t <= multipliers; t++ {
            res := i * t
            m += fmt.Sprintf("%d * %d: %d\n", i, t, res)
        }
    }
    return m
}
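
To try it yourself (assuming the snippet is saved as main.go):

go run main.go

# after editing NUM and MULTIPLIERS, keep only the two timing lines
go run main.go | tail -n 2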

PostgreSQL Docker image with the PostGIS extension

This image extends the official Postgres image, specifically the one tagged 9.6.

All the customizations (bash scripts and DBs) are copied into the /docker-entrypoint-initdb.d/ directory.

Here is an example of the Dockerfile configuration:

FROM postgres:9.6

ENV POSTGIS_MAJOR 2.4
ENV POSTGIS_VERSION 2.4.4+dfsg-4.pgdg80+1

RUN apt-get update \
      && apt-cache showpkg postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR \
      && apt-get install -y --no-install-recommends \
           postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
           postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION \
           postgis=$POSTGIS_VERSION \
      && rm -rf /var/lib/apt/lists/*


COPY init-db-postgis.sh /docker-entrypoint-initdb.d/init-db-postgis.sh
COPY update-postgis.sh /usr/local/bin/update-postgis.sh
COPY init-user-db.sh /docker-entrypoint-initdb.d/init-user-db.sh
COPY gis.sql.gz /docker-entrypoint-initdb.d/z.sql.gz

EXPOSE  5432

This way, when the container starts, users and DBs are created from scratch. It's very important to remember that resources copied into the /docker-entrypoint-initdb.d directory are executed in alphabetical order. I have renamed gis.sql.gz to z.sql.gz to be sure the my-user user is created before the DB creation and restore phase.
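
A quick way to build and try the image (image and container names are illustrative):

docker build -t postgres-postgis:9.6 .
docker run -d --name pg-gis -e POSTGRES_PASSWORD=mysecret -p 5432:5432 postgres-postgis:9.6

# follow the init scripts running in alphabetical order
docker logs -f pg-gis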

The following is an example of the init-user-db.sh script:

#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    CREATE USER "my-user";
    ALTER ROLE "my-user" WITH PASSWORD 'my-password';
    CREATE DATABASE "my-db-name";
    GRANT ALL PRIVILEGES ON DATABASE "my-db-name" TO "my-user";
EOSQL

Here is an example of the init-db-postgis.sh script:

#!/bin/sh

set -e

# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"

# Create the 'template_postgis' template db
# (the psql array is defined by the official postgres entrypoint when it sources this script)
"${psql[@]}" <<- 'EOSQL'
CREATE DATABASE template_postgis;
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';
EOSQL

# Load PostGIS into both template_database and $POSTGRES_DB
for DB in template_postgis "$POSTGRES_DB"; do
        echo "Loading PostGIS extensions into $DB"
        "${psql[@]}" --dbname="$DB" <<-'EOSQL'
                CREATE EXTENSION IF NOT EXISTS postgis;
                CREATE EXTENSION IF NOT EXISTS postgis_topology;
                CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
                CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;
EOSQL
done

And the update-postgis.sh script file:

#!/bin/sh

set -e

# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"

POSTGIS_VERSION="${POSTGIS_VERSION%%+*}"

# Load PostGIS into both template_database and $POSTGRES_DB
for DB in template_postgis "$POSTGRES_DB" "${@}"; do
    echo "Updating PostGIS extensions '$DB' to $POSTGIS_VERSION"
    psql --dbname="$DB" -c "
        -- Upgrade PostGIS (includes raster)
        CREATE EXTENSION IF NOT EXISTS postgis VERSION '$POSTGIS_VERSION';
        ALTER EXTENSION postgis  UPDATE TO '$POSTGIS_VERSION';
        -- Upgrade Topology
        CREATE EXTENSION IF NOT EXISTS postgis_topology VERSION '$POSTGIS_VERSION';
        ALTER EXTENSION postgis_topology UPDATE TO '$POSTGIS_VERSION';
        -- Install Tiger dependencies in case not already installed
        CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
        -- Upgrade US Tiger Geocoder
        CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder VERSION '$POSTGIS_VERSION';
        ALTER EXTENSION postgis_tiger_geocoder UPDATE TO '$POSTGIS_VERSION';
    "
done

List of Git and GitHub commands

These are common Git commands used in various situations:

start a working area (see also: git help tutorial)
   clone     Clone a repository into a new directory
   init      Create an empty Git repository or reinitialize an existing one

work on the current change (see also: git help everyday)
   add       Add file contents to the index
   mv        Move or rename a file, a directory, or a symlink
   restore   Restore working tree files
   rm        Remove files from the working tree and from the index

examine the history and state (see also: git help revisions)
   bisect    Use binary search to find the commit that introduced a bug
   diff      Show changes between commits, commit and working tree, etc
   grep      Print lines matching a pattern
   log       Show commit logs
   show      Show various types of objects
   status    Show the working tree status

grow, mark and tweak your common history
   branch    List, create, or delete branches
   commit    Record changes to the repository
   merge     Join two or more development histories together
   rebase    Reapply commits on top of another base tip
   reset     Reset current HEAD to the specified state
   switch    Switch branches
   tag       Create, list, delete or verify a tag object signed with GPG

collaborate (see also: git help workflows)
   fetch     Download objects and refs from another repository
   pull      Fetch from and integrate with another repository or a local branch
   push      Update remote refs along with associated objects

'git help -a' and 'git help -g' list available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.
See 'git help git' for an overview of the system.

These GitHub commands are provided by hub:

   api            Low-level GitHub API request interface
   browse         Open a GitHub page in the default browser
   ci-status      Show the status of GitHub checks for a commit
   compare        Open a compare page on GitHub
   create         Create this repository on GitHub and add GitHub as origin
   delete         Delete a repository on GitHub
   fork           Make a fork of a remote repository on GitHub and add as remote
   gist           Make a gist
   issue          List or create GitHub issues
   pr             List or checkout GitHub pull requests
   pull-request   Open a pull request on GitHub
   release        List or create GitHub releases
   sync           Fetch git objects from upstream and update branches

Configure extended data types in Oracle

Prior to Oracle 12c, regardless of the character semantics used, the maximum sizes of VARCHAR2, NVARCHAR2 and RAW columns in a database are as follows.

  • VARCHAR2 : 4000 bytes
  • NVARCHAR2 : 4000 bytes
  • RAW : 2000 bytes

With the introduction of Extended Data Types, Oracle 12c optionally increases these maximum sizes.

  • VARCHAR2 : 32767 bytes
  • NVARCHAR2 : 32767 bytes
  • RAW : 32767 bytes

Remember, these figures are in bytes, not characters. The total number of characters that can be stored will depend on the character sets being used.

Prerequisites

An instance of Oracle 12c release 2

Connect to Oracle with SQL*Plus

First things first, set the correct ORACLE_SID environment variable pointing to the DB that we want to upgrade.

ORACLE_SID=[YOUR DATABASE SID]; export ORACLE_SID

Now we can connect to the DB:

sqlplus sys as sysdba

Enter the password when asked. Once we are logged in, let's execute the instructions to extend the default max_string_size parameter.

These instructions will modify max_string_size on all pluggable databases as well.

ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;
EXIT;

cd $ORACLE_HOME/rdbms/admin/
$ORACLE_HOME/perl/bin/perl catcon.pl -d $ORACLE_HOME/rdbms/admin -l /tmp -b utl32k_output utl32k.sql

sqlplus sys as sysdba
SHUTDOWN IMMEDIATE;
STARTUP;
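
To check that the change took effect after the restart, you can query the parameter from SQL*Plus (a quick sketch; the VALUE column should read EXTENDED):

sqlplus sys as sysdba
SHOW PARAMETER max_string_size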

Info

For more detailed information, have a look at this article on extended data types:

https://oracle-base.com/articles/12c/extended-data-types-12cR1

JBoss domain setup

Cluster configuration

Env setup

We will use a single installation and different configuration folders to simulate remote hosts:

unzip jboss-eap-7.1.0.zip -d $HOME/JBossDomain

Then we will create the virtual hosts directory structure:

export EAP_DOMAIN=$HOME/JBossDomain
cd $EAP_DOMAIN
mkdir host0 host1 host2
cp -r jboss-eap-7.1/domain host0/
cp -r jboss-eap-7.1/domain host1/
cp -r jboss-eap-7.1/domain host2/

Domain user configuration

We will create a management user only for the host0 node:

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./add-user.sh -dc $EAP_DOMAIN/host0/domain/configuration

Choose the following options:

Management User (mgmt-users.properties)
userName: admin
userPassword: Admin01#
GroupList: Empty, the user will be added by default to the ManagementRealm
Is this user going to be used for one AS process to connect etc etc: no

Configure host authentication

We will create a second user for host authentication. We will need to repeat this step for all hosts.

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./add-user.sh -dc $EAP_DOMAIN/host0/domain/configuration

Choose the following options:

Management User (mgmt-users.properties)
userName: slave
userPassword: Slave01#
GroupList: Empty, the user will be added by default to the ManagementRealm: yes
Is this user going to be used for one AS process to connect etc etc: yes

When the add-user script completes, an encrypted password is generated in the output. Keep the generated encrypted password and replace the default secret value for the host1 and host2 server identities.
Create the same user for host1 and host2, this time in batch mode:

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./add-user.sh -dc $EAP_DOMAIN/host1/domain/configuration -r ManagementRealm -u slave -p Slave01# -ro admin,manager
./add-user.sh -dc $EAP_DOMAIN/host2/domain/configuration -r ManagementRealm -u slave -p Slave01# -ro admin,manager

Replace the default secret value for host1 and host2 in:

host1/domain/configuration/host-slave.xml
host2/domain/configuration/host-slave.xml

<server-identities>
    <secret value="[your secret value]"/>
</server-identities>
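
If you lost the add-user output, the secret value can be regenerated: it is simply the base64 encoding of the slave user's password (a quick sketch):

echo -n 'Slave01#' | base64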

Set up the domain host and interfaces

We will update the host0/domain/configuration/host-master.xml file to set a proper hostname, preventing confusion in the domain configuration dashboard:

cd host0/domain/configuration/
vi host-master.xml

Edit the host name to be host0-master:

<host xmlns="urn:jboss:domain:4.1" name="host0-master">

Now we have to indicate that this host is the domain controller. We will locate the domain-controller section in the configuration file and make sure its content matches the following structure:

<domain-controller>
    <local/>
</domain-controller>

The local tag indicates that this is a domain controller.

Configure slaves (host-slave.xml)

We will set up communications links between the domain controller and the hosts. For each host (host1, host2) edit the domain/configuration/host-slave.xml configuration file:

cd host1/domain/configuration
vi host-slave.xml
<host xmlns="urn:jboss:domain:4.1" name="host1">
cd host2/domain/configuration
vi host-slave.xml
<host xmlns="urn:jboss:domain:4.1" name="host2">

Change the management interface default port to 19999 for host1 and 29999 for host2. Port 9999 is already used by the domain controller (host0):

host1

<native-interface security-realm="ManagementRealm">
    <socket interface="management" port="${jboss.management.native.port:19999}"/>
</native-interface>

host2

<native-interface security-realm="ManagementRealm">
    <socket interface="management" port="${jboss.management.native.port:29999}"/>
</native-interface>

Set up the right configuration so that the host can join the domain controller(repeat for host1 and host2):

<domain-controller>
    <remote security-realm="ManagementRealm">
        <discovery-options>
            <static-discovery name="primary" protocol="${jboss.domain.master.protocol:remote}" host="${jboss.domain.master.address:127.0.0.1}" port="${jboss.domain.master.port:9999}"/>
        </discovery-options>
    </remote>
</domain-controller>

Remove the content of the servers tag (host1 and host2):

<servers> </servers>

Start the domain

Let's start the domain controller and the hosts:

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./domain.sh -Djboss.domain.base.dir=../host0/domain/ --host-config=host-master.xml

Start host1 from another terminal or tab:

./domain.sh -Djboss.domain.base.dir=../host1/domain/ --host-config=host-slave.xml

Start host2 from another terminal or tab:

./domain.sh -Djboss.domain.base.dir=../host2/domain/ --host-config=host-slave.xml

You can see the hosts registering in the domain controller logs.

Now you can connect to the domain controller:
http://127.0.0.1:9990/console

login: admin, Password: Admin01#

Create a server group

We will connect to the CLI and create a lab server group with the ha profile and the associated ha-sockets binding group.

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./jboss-cli.sh -c
[domain@localhost:9999 /] /server-group=lab:add(profile=ha,socket-binding-group=ha-sockets)
{
    "outcome" => "success",
    "result" => "undefined",
    "server.groups" => "undefined"
}

Create some server instances

A server instance is created on a host and belongs to one server group. Since we want to have many server instances on the same host, we will shift the instances' port offset: we will add 100 to the base offset while moving from one server to another.

Host1 server creation

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./jboss-cli.sh -c
[domain@localhost:9999 /] /host=host1/server-config=node11:add(group=lab,socket-binding-port-offset=100)

Host2 server creation

cd $EAP_DOMAIN/jboss-eap-7.1/bin
./jboss-cli.sh -c
[domain@localhost:9999 /] /host=host2/server-config=node21:add(group=lab,socket-binding-port-offset=200)

In this way we will have node11 on port 8180 and node21 on port 8280.

Now let's start the servers:

[domain@localhost:9999 /] /server-group=lab:start-servers
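
Once the group is up, a quick way to confirm that both instances answer is to hit their offset HTTP ports (a sketch, assuming the default HTTP port 8080 plus the 100/200 offsets above):

curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8180
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8280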

Configure Datasources

We will configure the JBoss domain for PostgreSQL:

cd $EAP_DOMAIN/jboss-eap-7.1/modules/system/layers/base/
mkdir -p org/postgresql/jdbc/main
cp $HOME/Downloads/postgresql-9.4.1211.jar org/postgresql/jdbc/main/
vi org/postgresql/jdbc/main/module.xml

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.postgresql.jdbc">
   <resources>
          <resource-root path="postgresql-9.4.1211.jar"/>
   </resources>
   <dependencies>
          <module name="javax.api"/>
          <module name="javax.transaction.api"/>
   </dependencies>
</module>
Then register the JDBC driver from the CLI:

[domain@localhost:9999 /] /profile=ha/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)

Restart the domain.

JNDI configuration in the JBoss web console

We will configure two datasources for the scope of this lab:
java:jboss/datasources/entandoDbPort2 and java:jboss/datasources/entandoDbServ2

Access http://localhost:9990 (user: admin, password: Admin01#) and go to:

configuration->profiles->ha->datasources->Non-XA->Add

and select these options to create the entandoDbPort2 JNDI datasource:

  • Postgresql Datasource
  • Name: entandoDbPort2
  • JNDI Name: java:jboss/datasources/entandoDbPort2
  • Detected Driver: postgresql
  • Connection URL: jdbc:postgresql://localhost:5432/entandoDbPort2
  • user name: entando
  • password: entando

Select these options to create the entandoDbServ2 JNDI datasource:

  • Postgresql Datasource
  • Name: entandoDbServ2
  • JNDI Name: java:jboss/datasources/entandoDbServ2
  • Detected Driver: postgresql
  • Connection URL: jdbc:postgresql://localhost:5432/entandoDbServ2
  • user name: entando
  • password: entando
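
Alternatively, the same non-XA datasource can be added from the CLI instead of the console. A sketch for the first one (adapt the names and URL for entandoDbServ2):

[domain@localhost:9999 /] /profile=ha/subsystem=datasources/data-source=entandoDbPort2:add(jndi-name=java:jboss/datasources/entandoDbPort2,driver-name=postgresql,connection-url=jdbc:postgresql://localhost:5432/entandoDbPort2,user-name=entando,password=entando,enabled=true)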

Done.

PostgreSQL dump and restore

Dumping with pg_dump

pg_dump -U [your username] -h localhost -p 5433 [your database] > [your-dump-file-name]_"$(date '+%F').sql"

Restoring with psql

psql -U postgres -h localhost -p 5433 -d [your database] < [your-dump-file].sql

Reload configuration without restarting

SELECT pg_reload_conf();

Restore from a bz2 archive

bunzip2 your_dump.bz2
pg_restore -d [db-to-restore] -e [name-of-extracted-archive] -h [host] -U [user]

Backup and restore from a pod in a Kubernetes cluster

kubectl exec -it [your pod name] -- pg_dumpall -c -U postgres > /home/user/dump_db.sql

cat your_dump.sql | kubectl exec -it [your pod name] -- psql -U postgres

Change owner recursively

select 'ALTER TABLE ' || t.tablename || ' OWNER TO [new user];'
 from  pg_tables t
 where schemaname = 'public';
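
To actually execute the generated statements instead of just printing them, one option is to pipe the output back into psql (a sketch; database, user, and owner names are placeholders):

psql -U postgres -d [your database] -At \
  -c "select 'ALTER TABLE public.' || quote_ident(tablename) || ' OWNER TO [new user];' from pg_tables where schemaname = 'public'" \
  | psql -U postgres -d [your database]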

Change owner of sequences recursively

select 'ALTER SEQUENCE ' || sequence_name || ' OWNER TO [your user];'
 from  information_schema.sequences t
 where sequence_schema = '[name of the schema]';

Delete all tables from a schema

select 'drop table if exists ' || tablename || ' cascade;' 
  from pg_tables
 where schemaname = '[name of the schema]'; 

Useful queries

https://gist.github.com/anvk/475c22cbca1edc5ce94546c871460fdd

Tweaks for JMeter

This post highlights a few tips that may be necessary to identify the maximum concurrent throughput of one or more application servers with JMeter. They include TCP / IP tuning, load balancer tuning, and garbage collection tuning.

TCP / IP (Red Hat Enterprise Linux / RHEL)

When an HTTP request is made, an ephemeral port is allocated for the TCP / IP connection. The ephemeral port range is 32768 – 61000. After the client closes the connection, the connection is placed in the TIME_WAIT state for 60 seconds.

If JMeter (HttpClient) is sending thousands of HTTP requests per second and creating new TCP / IP connections, the system will run out of available ephemeral ports for allocation.

When JMeter is run, the following message may appear in the jmeter-server.log file if the JMeter server is unable to allocate a port to create a connection to the JMeter client to return the samples.

java.net.NoRouteToHostException: Cannot assign requested address

Otherwise, the following messages may appear in the JMeter JTL files:

Non HTTP response code: java.net.BindException
Non HTTP response message: Address already in use

The solution is to enable fast recycling and reuse of TIME_WAIT sockets:

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

Other options include tcp_fin_timeout to reduce how long a connection is placed in the TIME_WAIT state and tcp_tw_reuse to allow the system to reuse connections placed in the TIME_WAIT state. See this article for more information.
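
The echo commands above only last until reboot. One way to make the tuning persistent is to append the keys to /etc/sysctl.conf (a sketch; note that tcp_tw_recycle no longer exists on recent kernels, so check yours first):

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl -p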

Other resources: https://mapr.com/docs/52/AdvancedInstallation/SettingResourceLimitsOnCentOS.html