recap of a cloud migration, p. 1
This post provides a high-level summary of what I did to migrate and deploy an instance of Kibana (Elastic) in my job's AWS account.
Because this is based off of real-life proprietary and/or classified work, details will be omitted. (In fact, most of the information here is already available in Elastic's official documentation.) The purpose of this post is merely to provide proof of work and experience with AWS and the Elastic stack.
$ some background
To preface, this project is relatively small and not complex. The nature of the migration itself involves a small network of devices (mostly virtual machines) and low complexity configurations. The on-premise network is also a lab/playground, so the data is unimportant/noncritical; risk is generally low.
Again, for the sake of keeping this high-level, I will not identify any other tools in the migration's tool stack other than the Elastic stack and Splunk, because the overall idea is to feed data from Splunk into the Elastic/Kibana instance. That part of this project will not be written here, as it is not 100% implemented yet, as of today.
On the Elastic side, because of the size of this project, it will only be a one-node deployment. Furthermore, because this is smaller than even a typical small enterprise environment, Beats and Logstash will not be incorporated into this solution.
$ launching an instance
Because of the size of this deployment, a T3/T3a instance type felt most appropriate. Per AWS documentation, T3/T3a are low-cost, general-purpose instance types that provide a baseline level of CPU performance. The constraints of this project did not call for instances optimized for compute, memory, storage, etc.
I specifically chose a 2xlarge instance for the greater vCPU count and memory. For storage, I allocated 100 GB. For the OS, I chose a RHEL 8-based AMI.
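As a rough sketch, launching a comparable instance from the AWS CLI might look like the following. The AMI ID, key pair name, and security group ID below are placeholders, not the actual values used:

```shell
# Hypothetical example: launch a t3.2xlarge RHEL 8 instance with 100 GB of root storage.
# The AMI ID, key name, and security group are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.2xlarge \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"gp3"}}]'
```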
$ configuring the OS
After launching the instance, I remoted into it via SSH. As a first step, I ran a quick update, configured network time synchronization with chrony, validated the synchronization configuration, and created an elastic user:
sudo yum update
sudo systemctl status chronyd
sudo timedatectl set-ntp true
sudo systemctl restart chronyd
chronyc sourcestats
sudo adduser elastic
getent passwd | grep elastic
sudo passwd elastic
sudo usermod -aG wheel elastic
cat /etc/group | grep wheel
Then I modified the /etc/default/grub file to add cgroup options to the kernel boot arguments and enabled the overlay kernel module (used by Docker's overlay2 storage driver). After that I installed Docker:
sudo yum config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache --timer
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
docker --version
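The grub and overlay module changes mentioned above can be sketched roughly as follows. The exact cgroup arguments depend on the ECE version's prerequisites, so treat these as illustrative:

```shell
# Illustrative only: append cgroup memory accounting options to the kernel boot line,
# then regenerate the grub config (path may differ on EFI systems).
sudo vi /etc/default/grub   # add cgroup options (e.g. swapaccount=1) to GRUB_CMDLINE_LINUX
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# Load the overlay kernel module now and make it persistent across reboots.
sudo modprobe overlay
echo "overlay" | sudo tee /etc/modules-load.d/overlay.conf
```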
Adding the exact version of Docker works as well. The elastic user was then added to the docker group:
sudo usermod -aG docker elastic
cat /etc/group | grep elastic
Next, I set up the XFS quota and formatted the filesystem if necessary. The lsblk command can be used to identify which drive ECE will be installed on. I then formatted the partition, if necessary, using mkfs.xfs.
I created the /mnt/data directory as a mount point, added quotas to the filesystem path /mnt/data and regenerated the mount files:
sudo install -o elastic -g elastic -d -m 700 /mnt/data
sudo vi /etc/fstab   # add the pquota,prjquota mount options for /mnt/data
sudo systemctl daemon-reload
sudo systemctl restart local-fs.target
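For reference, the resulting /etc/fstab entry might look something like this; the device path is an assumption and will vary by instance:

```
/dev/nvme1n1  /mnt/data  xfs  defaults,nofail,pquota,prjquota  0  2
```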
Next I updated the system configuration for RHEL by first stopping the Docker service and modifying the kernel parameters in /etc/sysctl.conf. I applied the settings with sysctl and used systemctl to restart NetworkManager. Then I modified /etc/security/limits.conf to adjust the system limits.
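These changes roughly follow Elastic's documented ECE host prerequisites. The values below are representative of those recommendations, not an exact copy of my configuration:

```shell
# Representative sysctl settings for an ECE host (see Elastic's docs for the full list).
sudo tee -a /etc/sysctl.conf <<'EOF'
vm.max_map_count = 262144
net.ipv4.tcp_keepalive_time = 1800
EOF
sudo sysctl -p

# Representative /etc/security/limits.conf entries for the elastic user:
#   elastic  soft  nofile  1024000
#   elastic  hard  nofile  1024000
```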
Next I created a directory (/mnt/data/docker) for Docker service storage.
I then configured the Docker daemon options. I created the docker.service.d directory in /etc/systemd/system and created /etc/systemd/system/docker.service.d/docker.conf to hold the Docker daemon configuration. The daemon was then reloaded, and the Docker service was restarted and enabled.
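A minimal sketch of such a systemd drop-in, assuming the overlay2 storage driver and /mnt/data/docker as the data root, might look like this; the flags shown are illustrative, not the complete set ECE requires:

```shell
# Hypothetical drop-in file; ECE's documentation lists the full set of daemon flags.
sudo tee /etc/systemd/system/docker.service.d/docker.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --data-root /mnt/data/docker --storage-driver=overlay2
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
```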
Lastly, I turned on network settings by adding configurations to /etc/sysctl.d/70-enterprise.conf.
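The network settings in question enable forwarding for Docker container networking; a one-line sketch:

```shell
# Illustrative: enable IP forwarding so Docker container networking works.
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/70-enterprise.conf
```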
The system then needed to be rebooted. Afterward, I verified the settings persisted as intended using: sudo docker info | grep Root, which should return the correct Docker root directory (/mnt/data/docker).
$ installing ECE
I followed the steps below to install ECE. First I created "ece_install" and "elastic" directories under /mnt/data/, then changed into the "ece_install" working directory. I executed docker pull commands to pull the Elastic images. In my deployment, I pulled the latest versions of ECE, Elasticsearch, and Kibana.
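The pulls look roughly like the following; the version tags are placeholders for whatever the current releases were at the time:

```shell
# Version tags are placeholders; substitute the current releases from Elastic's docs.
docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:3.x.y
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.x.y
docker pull docker.elastic.co/kibana/kibana:8.x.y
```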
I used curl to download the latest installation script and modified any variable definitions (e.g., ENABLE_DEBUG_LOGGING) in the script as necessary. I changed the mode of the file to make it executable, then executed it. When it finished, I saved all pertinent information from the output, which should include a link to the ECE admin console.
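Per Elastic's documentation, downloading and running the installer goes roughly like this; run it as the elastic user, and note that additional install flags may be needed depending on the deployment:

```shell
# Download the ECE installation script (URL per Elastic's documentation),
# make it executable, and run the install.
curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh -o elastic-cloud-enterprise.sh
chmod +x elastic-cloud-enterprise.sh
./elastic-cloud-enterprise.sh install
```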
For me, a lot of additional configuration specific to the solution designed for my job was required to complete the ECE setup, so I will not speak further about it. But these were essentially the steps I took to deploy ECE on AWS. The next part of this series will discuss data ingestion into Elasticsearch and Kibana.
Written: October 14, 2023