Docker – Install Other Open-Source DBs

We can quickly install almost any database on Docker using images from hub.docker.com.

PostgreSQL:

$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

MariaDB:

docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -d mariadb/server:10.3 --log-bin --binlog-format=MIXED
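To verify that either container is up and to connect to it, something like the following should work (container names and passwords are taken from the commands above; the `psql` and `mysql` clients ship in the official images):

```shell
# Check that both containers are running
docker ps --filter name=some-postgres --filter name=mariadbtest

# Connect with the clients bundled in the official images
docker exec -it some-postgres psql -U postgres
docker exec -it mariadbtest mysql -uroot -pmypass
```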

Informix on Docker

Today we will discuss how to install and configure Informix on Docker:

>docker run -it --name ifx --privileged -p 9088:9088 -p 9089:9089 -p 27017:27017 -p 27018:27018 -p 27883:27883 -e LICENSE=accept ibmcom/informix-developer-database:latest

 

>docker exec -it ifx bash
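Once inside the container, we can sanity-check the server; a sketch using the standard Informix `onstat` utility (the exact status string depends on the instance state):

```shell
# Inside the ifx container (after: docker exec -it ifx bash)
onstat -        # prints the server mode, e.g. "On-Line", if the instance is up
```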

 

Other downloads to consider (e.g., for building a PHP client with the Informix PDO driver):

http://www.php.net/distributions/php-7.3.1.tar.gz
http://apache.osuosl.org/httpd/httpd-2.4.37.tar.gz
wget https://pecl.php.net/get/PDO_INFORMIX-1.3.1.tgz
find . -type f -name '*.tar'


Important Linux Commands
1. uname -a: print the kernel name, version, and machine architecture
2. hostname: system name
3. ifconfig: IP/network interface information
4. ping -c 3 <host>: ping the host 3 times

5. chmod: change mode – permissions (read/write/execute) – e.g. 777 – 7(user) 7(group) 7(others)

N    Description                ls     binary
0    No permissions at all      ---    000
1    Only execute               --x    001
2    Only write                 -w-    010
3    Write and execute          -wx    011
4    Only read                  r--    100
5    Read and execute           r-x    101
6    Read and write             rw-    110
7    Read, write, and execute   rwx    111

eg: sudo chmod 777 vin
First Number 7 – Read, write, and execute for user.
Second Number 7 – Read, write, and execute for group.
Third Number 7 – Read, write, and execute for other.
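The octal digits map directly to the permission strings in the table; a small self-contained example with a scratch file (uses GNU `stat`):

```shell
# 750 = rwx (7, owner) + r-x (5, group) + --- (0, others)
touch demo_file
chmod 750 demo_file
stat -c '%a %A' demo_file   # prints: 750 -rwxr-x---
```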

6. chown: Change owner
eg: chown -f root vin

7. df -h: disk free space in human-readable form

8. du -sh: disk usage in human-readable form; -s summarizes totals
9. whoami: returns the current user running command
10. pwd: present working directory
11. vi: Editor – :w (save) :q (quit)
12. touch : create empty file
13. mv: rename or move
14. cp:copy
15. rm:remove
16. ls -ltrh: long listing format, sorted by time (reversed), human-readable sizes; -R recurses into subdirectories
17. grep: search/contains
18. top: resource utilization
19. tail -f <file>: tail with follow – live log information
20. dpkg -l |grep mysql : software package installed list
21. scp <source> <target> : Secure copy
22. wget: retrieve files/software from the web (cloud)
23. find /home -name '*.txt': search /home and all its subdirectories for .txt files
24. mkdir, cd, rmdir: make, change into, and remove a directory
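Several of these commands compose naturally; a small self-contained sketch of the find/grep/ls usage described in the list above:

```shell
# Build a tiny directory tree, then search it
mkdir -p demo/sub
echo "hello world" > demo/sub/notes.txt
find demo -type f -name '*.txt'     # prints: demo/sub/notes.txt
grep -l hello demo/sub/notes.txt    # prints the file name, since it contains "hello"
ls -ltrh demo/sub                   # long, time-sorted, human-readable listing
```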


Configure Kubernetes with minikube

We discussed Docker Swarm earlier to see how to use Docker as a load balancer. Today we will discuss Kubernetes.

Wiki:

“Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation”

I will be using Romin Irani's blog as a reference to configure Minikube and Kubernetes.

I am using Windows 10 Pro, with the same "MySwitch" external network created earlier in Hyper-V Manager.

Install and configure Minikube as follows:

>C:\Windows\system32>minikube start --vm-driver hyperv --hyperv-virtual-switch "MySwitch"

Starting local Kubernetes v1.13.2 cluster…
Starting VM…
Downloading Minikube ISO
181.48 MB / 181.48 MB [============================================] 100.00% 0s
Getting VM IP address…
Moving files into cluster…
Downloading kubelet v1.13.2
Downloading kubeadm v1.13.2
Finished Downloading kubeadm v1.13.2
Finished Downloading kubelet v1.13.2
Setting up certs…
Connecting to cluster…
Setting up kubeconfig…
Stopping extra container runtimes…
Starting cluster components…
Verifying kubelet health …
Verifying apiserver health …
Kubectl is now configured to use the cluster.
Loading cached images from config file.
Everything looks great. Please enjoy minikube!

>C:\Windows\system32>minikube version

minikube version: v0.33.1

>C:\Windows\system32>kubectl cluster-info

Kubernetes master is running at https://10.0.0.116:8443
KubeDNS is running at https://10.0.0.116:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use ‘kubectl cluster-info dump’.

minikube ip address:

>C:\Windows\system32>minikube ip

10.0.0.116

C:\Windows\system32>minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 10.0.0.116

 

 

C:\Windows\system32>kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}

 

C:\Windows\system32>minikube dashboard
Enabling dashboard …
Verifying dashboard health …
Launching proxy …
Verifying proxy health …
Opening http://127.0.0.1:50096/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser…

This opens the Kubernetes dashboard in your browser:

http://127.0.0.1:50096/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/deployment?namespace=default

The following items are shown on the dashboard:

Node: number of nodes

Deployment: deployment status

Pod: number of pods

Service: services deployed

Replica sets: number of replica sets

Similar information can be retrieved from the command prompt:

C:\Windows\system32>kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 5m v1.13.2

 

C:\Windows\system32>kubectl.exe run hello-nginx --image=nginx --port=80
deployment.apps “hello-nginx” created

 

C:\Windows\system32>kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-nginx-79cd57d7cd-4whlz 1/1 Running 0 32s

 

C:\Windows\system32>kubectl.exe expose deployment hello-nginx --type=NodePort
service “hello-nginx” exposed

 

C:\Windows\system32>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-nginx NodePort 10.111.96.190 <none> 80:31251/TCP 53s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m

 

C:\Windows\system32>kubectl scale --replicas=3 deployment/hello-nginx
deployment.extensions “hello-nginx” scaled

 

C:\Windows\system32>kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-nginx 3 3 3 3 10m

 

C:\Windows\system32>kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 19m v1.13.2

 

C:\Windows\system32>kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-nginx-79cd57d7cd-4whlz 1/1 Running 0 11m
hello-nginx-79cd57d7cd-czf9l 1/1 Running 0 1m
hello-nginx-79cd57d7cd-smxdk 1/1 Running 0 1m
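To actually reach the nginx service from outside the cluster, we can use the NodePort (31251 above) on the Minikube IP; a sketch using the IP and port from the outputs above:

```shell
# Ask minikube for the service URL (minikube IP + NodePort), then hit it
minikube service hello-nginx --url
curl http://10.0.0.116:31251       # IP and NodePort from the outputs above
```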

 

In this post we have successfully configured a Kubernetes (Minikube) cluster using Hyper-V on Windows 10, with a 3-replica set of nginx pods deployed as a load-balanced service.


Docker Swarm

We have studied the basics of Docker, containers, and containerization.

Docker is an open-source platform for running multiple lightweight containers (applications/services).

Docker Swarm: to use Docker for high availability or load balancing, we need multiple servers/VMs/hosts – here, Hyper-V virtual machines.

Architecture:

Manager node: initializes the Docker Swarm, controls the worker nodes, and runs services

Worker nodes: run services

In this post I will configure Docker Swarm on Windows 10 with Hyper-V.

Enable a Hyper-V switch:

Open Hyper-V Manager -> select "Virtual Switch Manager" and create a new external virtual switch named "MySwitch".

Once the switch is created, you can create the Docker machines (Hyper-V VMs) as follows:

cmd>docker-machine create --driver hyperv manager1

Running pre-create checks…
Creating machine…
(manager1) Copying C:\Users\hi2vi\.docker\machine\cache\boot2docker.iso to C:\Users\hi2vi\.docker\machine\machines\manager1\boot2docker.iso…
(manager1) Creating SSH key…
(manager1) Creating VM…
(manager1) Using switch “MySwitch”
(manager1) Creating VHD
(manager1) Starting VM…
(manager1) Waiting for host to start…
Waiting for machine to be running, this may take a few minutes…
Detecting operating system of created instance…
Waiting for SSH to be available…
Detecting the provisioner…
Provisioning with boot2docker…
Copying certs to the local machine directory…
Copying certs to the remote machine…
Setting Docker configuration on the remote daemon…
Checking connection to Docker…
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager1

Similarly, we can create Docker machines for the workers:

cmd>docker-machine create --driver hyperv worker1

cmd>docker-machine create --driver hyperv worker2

You can stop or start a Docker machine with:

cmd> docker-machine stop manager1

cmd>docker-machine start manager1

we can do the same using Hyper-V Manager GUI/Console as well.

Since my laptop has only 8 GB of memory, I go with a minimum-memory setup for the Docker machines.

Stop/turn off each Docker machine, un-check "Enable Dynamic Memory", and set the RAM to 678 MB (minimum).

Once that is done we are all set. We now have 3 Docker machines, and we have to configure/init the Docker Swarm.

Get all the IP addresses (especially the manager1 machine's) before SSHing into manager1:

cmd>docker-machine ls

C:\Windows\system32>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
manager1 – hyperv Running tcp://10.0.0.109:2376 v18.09.1

cmd>docker-machine ssh manager1

docker@manager1:~$ docker swarm init --advertise-addr 10.0.0.109
Swarm initialized: current node (4yu1xzgbqpekhy1g4alou4oyp) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5qvgw5jlvdmucvmsms6hp8cyh4pnumm50ukwg4bs4uyoxdyvc1-23ua9ls98jtz0suxbf6b7xmjc 10.0.0.109:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As you can see, the swarm is initialized with this node as the manager, and you get a command to run on each worker to join the swarm.

SSH into the worker machines and run that command to join the swarm as workers.

cmd> docker-machine ssh worker1

docker swarm join --token SWMTKN-1-5qvgw5jlvdmucvmsms6hp8cyh4pnumm50ukwg4bs4uyoxdyvc1-23ua9ls98jtz0suxbf6b7xmjc 10.0.0.109:2377

docker@worker1:~$ docker swarm join --token SWMTKN-1-5qvgw5jlvdmucvmsms6hp8cyh4pnumm50ukwg4bs4uyoxdyvc1-23ua9ls98jtz0suxbf6b7xmjc 10.0.0.109:2377
This node joined a swarm as a worker.

Docker Swarm is now configured with a manager and workers.

So if you run the following on the manager:

docker@manager1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
4yu1xzgbqpekhy1g4alou4oyp * manager1 Ready Active Leader 18.09.1
ao5ldknmk9k8vg5h8ywfg6nr5 worker1 Ready Active 18.09.1
b1vkisbeol9v053vumugvd4xm worker2 Ready Active 18.09.1

Now, on the manager, we deploy the services (containers); with --replicas we can scale to as many replicas as we want, and they will be replicated/deployed evenly across the managers and workers.

docker@manager1:~$ docker service create --name backend-app-swarm -p 8080:80 --replicas 3 nginx
r6iqvj4bn0kmz8m6vwnqu9nt0
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
docker@manager1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
r6iqvj4bn0km backend-app-swarm replicated 3/3 nginx:latest *:8085->80/tcp
docker@manager1:~$ docker service ps backend-app-swarm
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uwqyrgabjfxa backend-app-swarm.1 nginx:latest worker2 Running Running 16 seconds ago
k4zphrkin419 backend-app-swarm.2 nginx:latest manager1 Running Running 15 seconds ago
mohentpe1oi5 backend-app-swarm.3 nginx:latest worker1 Running Running 18 seconds ago
docker@manager1:~$
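A swarm service can also be rescaled later; a sketch (service name from above, replica count chosen just as an example):

```shell
# On the manager: scale the service to 5 replicas, then check task placement
docker service scale backend-app-swarm=5
docker service ps backend-app-swarm
```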

This way we have deployed nginx across multiple containers.

We can reach it from a browser via any node's IP:

http://10.0.0.109:8080/

It works.


Error – Hung Docker

Sometimes, when you have not been actively working on Docker or on the system for a while, the Docker service can go into a hung state and you will get an error like the following:

Starting containers fails with: Error response from daemon: Cannot restart container my_container: driver failed programming external connectivity on endpoint my_container (782f444833c57027050a58f8c0302473f76d9029a50944960d13c6ed940a4392): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:49161:tcp:172.17.0.2:1521: input/output error

The solution is to restart the Docker service or restart the system.

You can also restart dockerd.
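A sketch of the usual remedies, assuming a systemd-based Linux host (on Docker Desktop for Windows, restart Docker from the system-tray menu instead); `my_container` is the container name from the error message above:

```shell
sudo systemctl restart docker    # restarts the Docker service, and dockerd with it
sudo systemctl status docker     # confirm the daemon came back up
docker start my_container        # then start the previously failing container again
```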

I observed this error today and thought of writing it up.

Reference – here


Containerization

Docker uses containers to run processes or applications.

Now, Docker containers have some limitations, since they use resources from the host system and those resources are shared.

Docker Compose: groups all related applications/services (containers) on one system – I would call it a package of related containers.
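As a sketch, a minimal (hypothetical) docker-compose.yml packaging two related containers might look like this – the service names are illustrative, and the images are the ones used elsewhere in these posts:

```yaml
# Hypothetical example – a web front end plus a database as one package
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: mysecretpassword
```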

Docker Swarm: a high-availability feature provided by Docker for containerization; the architecture includes a MANAGER node and multiple WORKER nodes.

Kubernetes: similar to Docker Swarm, originally by Google; a well-known product for high availability. It uses PODs to group multiple related containers and maintains high availability across them; more details are here.

Comparison is here.

I would like to provide details on each topic in future.


MongoDB Replica-Set

The most important feature of MongoDB is the easy setup and use of replica sets.

In MongoDB you can set one up easily; here is the link to the guide by Soham Kamani that I used.

Create a network:

docker network create my-mongo-cluster-net

Then create three containers (mongo1, mongo2, and mongo3) on that network, each started with the replica set name my-mongo-replset:

docker run -p 30001:27017 --name mongo1 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset

docker run -p 30002:27017 --name mongo2 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset

docker run -p 30003:27017 --name mongo3 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset

Note: the replica set name is case-sensitive.

All three commands should be run in separate windows.

Once all three containers with the replica set name are created, we have to configure the replica set; after initiation, one member is automatically elected primary.

Connect to one container and configure it as follows; here we use mongo1 (which will typically become the primary):

docker exec -it mongo1 mongo

config = { "_id" : "my-mongo-replset", "members" : [ { "_id" : 0, "host" : "mongo1:27017" }, { "_id" : 1, "host" : "mongo2:27017" }, { "_id" : 2, "host" : "mongo3:27017" } ] }

 

rs.initiate(config)

Once you initiate with this config, the replica set is configured and starts working.

To test it, stop any of the mongo containers:

docker stop mongo1

Then either mongo2 or mongo3 will be elected primary:

my-mongo-replset:PRIMARY>

And when mongo1 comes back online, it will rejoin the replica set (as a secondary).

We can set member priorities in the replica set configuration to prefer a particular member as PRIMARY.

To check the replication information:
db.printSlaveReplicationInfo()

You will observe that all replica set members are in sync.

By default, READ operations happen only on the primary.
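By way of a sketch (container names from above; `rs.slaveOk()` is the legacy shell helper), reads can be permitted on a secondary like this:

```shell
# Open a mongo shell against a secondary member
docker exec -it mongo2 mongo
# Then, inside the shell on the secondary:
#   rs.slaveOk()                               // allow reads on this secondary (legacy helper)
#   db.getMongo().setReadPref("secondary")     // or set an explicit read preference
```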

more about replication can be read here


MongoDB – Storage

MongoDB's default storage engine is WiredTiger; MongoDB generates some standard .wt (WiredTiger) files. WiredTiger uses MultiVersion Concurrency Control (MVCC) and can provide point-in-time recovery using the journal (a write-ahead log with checkpoints), supporting the ACID properties and concurrency.

MongoDB stores data in BSON format (Binary JSON).

To disable journaling in mongod:

set storage.journal.enabled to false
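As a sketch, the corresponding mongod.conf (YAML) fragment would look like this – journaling should normally stay enabled; this is only to illustrate the option:

```yaml
# mongod.conf fragment – illustrative only
storage:
  journal:
    enabled: false
```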

There is a good Q&A on it here.


MongoDB – DBs and Configurations

Continuing our last discussion, today we will discuss MongoDB's default databases and more…

When you install MongoDB, by default you get the following databases:

test: when you open the mongo shell, "test" is the default database in use.

admin: stores user- and role-related information.

local: used for the replication process and other instance-specific data.

config: created when you configure sharding; stores sharding and other configuration metadata.

Starting in MongoDB 3.2, the default storage engine is "WiredTiger"; with it, write-ahead transaction information is stored in the "journal" folder, helping maintain the ACID properties.

Configuration option values are stored in a configuration file. When mongod starts, it reads its parameters from that file: on Linux the default is /etc/mongod.conf, and on Windows it is <install directory>/bin/mongod.cfg. Parameters can also be changed at run time.

To get the current values of all configuration parameters:

>db.adminCommand( { getParameter : '*' } )
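A sketch of reading a single parameter and changing one at run time, using the standard getParameter/setParameter admin commands (the container name `some-mongo` and the `logLevel` parameter are chosen just for illustration):

```shell
# Read one parameter instead of all of them
docker exec some-mongo mongo --eval 'db.adminCommand({ getParameter: 1, logLevel: 1 })'
# Change it at run time
docker exec some-mongo mongo --eval 'db.adminCommand({ setParameter: 1, logLevel: 2 })'
```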
