MSDB Size growing

msdb is growing because of the Service Broker transmission queue (sys.sysxmitqueue).

There is a great blog post that helps reduce this:

How to reduce MSDB size from 42Gb to 200Mb

Query to get object sizes:

USE msdb
GO

SELECT
      obj = SCHEMA_NAME(o.[schema_id]) + '.' + o.name
    , o.[type]
    , i.total_rows
    , i.total_size
FROM sys.objects o
JOIN (
    SELECT
          i.[object_id]
        , total_size = CAST(SUM(a.total_pages) * 8. / 1024 AS DECIMAL(18,2))
        , total_rows = SUM(CASE WHEN i.index_id IN (0, 1) AND a.[type] = 1 THEN p.[rows] END)
    FROM sys.indexes i
    JOIN sys.partitions p ON i.[object_id] = p.[object_id] AND i.index_id = p.index_id
    JOIN sys.allocation_units a ON p.[partition_id] = a.container_id
    WHERE i.is_disabled = 0
      AND i.is_hypothetical = 0
    GROUP BY i.[object_id]
) i ON o.[object_id] = i.[object_id]
WHERE o.[type] IN ('V', 'U', 'S')
ORDER BY i.total_size DESC


If sys.sysxmitqueue shows up as the largest object, the Service Broker transmission queue is the culprit.

Run the following command to clear it (the standard fix is to create a new Service Broker in msdb, which empties the transmission queue):

USE master
GO
ALTER DATABASE msdb SET NEW_BROKER WITH ROLLBACK IMMEDIATE
GO

Bingo! Restart SQL Server to resolve the issue.



Posted in Experts, Others, Performance Tuning, Troubleshooting

Always On: Troubleshooting Consolidated

While working on Always On and configuring/setting up Availability Groups (AG), we have observed many errors,

especially when configuring the listener, which fails with error 19471 or the AG cannot be configured:

Creating availability group listener resulted in an error.
Create failed for Availability Group Listener 'L********0'. (Microsoft.SqlServer.Smo)
An exception occurred while executing a Transact-SQL statement or batch.
The WSFC cluster could not bring the Network Name resource with DNS name 'U*********0' online. The DNS name may have been taken or have a conflict with existing name services, or the WSFC cluster service may not be running or may be inaccessible. Use a different DNS name to resolve name conflicts, or check the WSFC cluster log for more information.
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator. (Microsoft SQL Server, Error:


AG configuration requires WSFC, and the WSFC cluster holds the static IP for the listener, which points to the local nodes when you ping it (to validate that it is working). The listener does not yet have a DNS name associated with it; we can provide any listener name, and the only check is that the name must not already exist in Active Directory. Reading the error above you might think the name being given to the listener already exists – the error is confusing that way – although sometimes that really is the case.

Following are the troubleshooting steps that have helped resolve the issue:

  1. Log in with "sa" in SSMS and try to configure the AG from the beginning again; also try from a different node. – this works.
  2. Log on with "sa" from the local system or a system other than the WSFC nodes. – this works in environments with multiple domain forests and parent-child domains.
  3. Validate the Cluster Access Point (CAP) and ensure its IP and DNS are tested (repair it if needed). – this helps with validation.
  4. For SQL Server 2012 it can be a hotfix issue – apply the hotfix described in article 2838043.
  5. If the event viewer shows an access error, contact the Domain Controller team and ask them to grant the cluster computer account permission to create virtual computer objects (VCOs).
  6. This is weird but sometimes works: I have observed that towards the end of the setup wizard, the AG port on the other node gets restarted automatically and the AG configuration fails with a similar error.
     • A workaround I used is to log on to SSMS remotely and configure the AG before the node comes back online after the reboot. It worked for me – please try this at your own risk; I am not sure of the reason (please share your input for such cases).
  7. Validate any policy exceptions (violations due to AG).
  8. Sometimes you get quorum-related errors – validate the WSFC report; everything should be clean.
  9. Finally, rebuild the WSFC if the above steps bring no luck.
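Since the error text above often points at a DNS name conflict, one quick check from any Linux box on the network is whether the candidate listener name already resolves. This is my own sketch, not part of the original list, and "aglistener01" is a placeholder name:

```shell
#!/bin/sh
# Check whether a proposed listener name is already registered in DNS/hosts.
# "aglistener01" is a placeholder; substitute your candidate listener name.
check_listener_name() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "$1 already resolves - pick another listener name"
  else
    echo "$1 does not resolve - no obvious DNS conflict"
  fi
}
check_listener_name aglistener01
```

On Windows nodes the equivalent check is a simple nslookup of the candidate name.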

I will try to keep this updated for other findings.

Happy Troubleshooting!!!

Posted in Disaster Recovery, High Avaliability, Installation, Others, Troubleshooting, What I learned today

Docker – Install Other Open source DB

We can quickly spin up almost any open-source database on Docker. For example, PostgreSQL:

$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

And MariaDB:

$ docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -d mariadb/server:10.3 --log-bin --binlog-format=MIXED
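Both commands follow the same docker run pattern; a tiny helper makes that explicit. This is a sketch of my own – the echo makes it a dry run (remove it to actually start the containers), and the names/passwords are the sample values from above:

```shell
#!/bin/sh
# Dry-run generator for "docker run" commands (echo = print, don't execute).
launch_db() {
  name=$1; env_var=$2; shift 2
  echo "docker run --name $name -e $env_var -d $*"
}
launch_db some-postgres POSTGRES_PASSWORD=mysecretpassword postgres
launch_db mariadbtest MYSQL_ROOT_PASSWORD=mypass mariadb/server:10.3 --log-bin --binlog-format=MIXED
```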
Posted in Kubernetes/Docker/Container, Open Source, Others, RDBMS

Informix on Docker

Today we will discuss how to install and configure Informix on Docker.

>docker run -it --name ifx --privileged -p 9088:9088 -p 9089:9089 -p 27017:27017 -p 27018:27018 -p 27883:27883 -e LICENSE=accept ibmcom/informix-developer-database:latest


>docker exec -it ifx bash


Other information to consider:
find . -type f -name '*.tar'


Posted in Kubernetes/Docker/Container, Others, RDBMS

Imp Linux Commands

Important Linux Commands
1. uname -a: To know what version of unix we are using
2. hostname: System name
3. ifconfig:      ip information
4. ping -c 3 :    ping 3 times

5. chmod:        change mode – permissions (read/write/execute) – 777 – 7(me)7(group)7(others)

N    Description             ls     binary
0    No permissions at all   ---    000
1    Only execute            --x    001
2    Only write              -w-    010
3    Write and execute       -wx    011
4    Only read               r--    100
5    Read and execute        r-x    101
6    Read and write          rw-    110
7    Read, write, and exe    rwx    111

e.g.: sudo chmod 777 vin
First digit 7 – read, write, and execute for the user.
Second digit 7 – read, write, and execute for the group.
Third digit 7 – read, write, and execute for others.

6. chown: Change owner
eg: chown -f root vin

7. df -h : disk free space in human-readable form

8. du -kh: disk usage in human-readable form (-s to summarize)
9. whoami: returns the current user running command
10. pwd: present working directory
11. vi: Editor – :w (save) :q (quit)
12. touch : create empty file
13. mv: rename or move
14. cp:copy
15. rm:remove
16. ls -ltrh -R: (long listing format, sorted by time in reverse, human readable) (-R – recursive, include subfolders)
17. grep: search/contains
18. top: resource utilization
19. tail -f <no>: tail follows – live log information
20. dpkg -l |grep mysql : software package installed list
21. scp <source> <target> : Secure copy
22. wget: retrieve files from the web
23. find /home -name '*.txt' : search /home and its subfolders for all .txt files
24. mkdir, cd, rmdir: make, change into, remove a directory
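The numeric modes in item 5 can be derived mechanically: each chmod digit is the sum of read(4), write(2) and execute(1), exactly as in the table above. A small shell sketch of my own to illustrate the rule:

```shell
#!/bin/sh
# Convert a three-character symbolic triplet (e.g. rwx, rw-, r--) to its
# octal digit: read adds 4, write adds 2, execute adds 1.
to_octal() {
  n=0
  case $1 in r??) n=$((n + 4));; esac
  case $1 in ?w?) n=$((n + 2));; esac
  case $1 in ??x) n=$((n + 1));; esac
  echo $n
}
# owner rwx, group rw-, other r--  ->  chmod 764
echo "$(to_octal rwx)$(to_octal rw-)$(to_octal r--)"   # prints 764
```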


Posted in General, Linux, Others

Configure Kubernetes with minikube

We discussed Docker Swarm earlier to see how Docker can be used for load balancing. Today we will discuss Kubernetes:


"Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation."

I will be using Romin Irani's blog to configure Minikube and Kubernetes.

I am using Windows 10 Pro.

I am using the same MySwitch network created in Hyper-V Manager.

Install and start minikube as:

>C:\Windows\system32>minikube start --vm-driver hyperv --hyperv-virtual-switch "MySwitch"

Starting local Kubernetes v1.13.2 cluster…
Starting VM…
Downloading Minikube ISO
181.48 MB / 181.48 MB [============================================] 100.00% 0s
Getting VM IP address…
Moving files into cluster…
Downloading kubelet v1.13.2
Downloading kubeadm v1.13.2
Finished Downloading kubeadm v1.13.2
Finished Downloading kubelet v1.13.2
Setting up certs…
Connecting to cluster…
Setting up kubeconfig…
Stopping extra container runtimes…
Starting cluster components…
Verifying kubelet health …
Verifying apiserver health …
Kubectl is now configured to use the cluster.
Loading cached images from config file.
Everything looks great. Please enjoy minikube!

>C:\Windows\system32>minikube version

minikube version: v0.33.1

>C:\Windows\system32>kubectl cluster-info

Kubernetes master is running at
KubeDNS is running at
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

minikube ip address:

>C:\Windows\system32>minikube ip

C:\Windows\system32>minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at



C:\Windows\system32>kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}


C:\Windows\system32>minikube dashboard
Enabling dashboard …
Verifying dashboard health …
Launching proxy …
Verifying proxy health …
Opening in your default browser…

This will open the Kubernetes dashboard in the browser.

The following items can be observed on the dashboard:

Nodes: number of nodes

Deployments: deployment status

Pods: number of pods

Services: services deployed

Replica sets: number of replica sets

Similar information can be retrieved from the command prompt:

C:\Windows\system32>kubectl get nodes
minikube Ready master 5m v1.13.2


C:\Windows\system32>kubectl.exe run hello-nginx --image=nginx --port=80
deployment.apps “hello-nginx” created


C:\Windows\system32>kubectl get pods
hello-nginx-79cd57d7cd-4whlz 1/1 Running 0 32s


C:\Windows\system32>kubectl.exe expose deployment hello-nginx --type=NodePort
service “hello-nginx” exposed


C:\Windows\system32>kubectl get services
hello-nginx NodePort <none> 80:31251/TCP 53s
kubernetes ClusterIP <none> 443/TCP 13m


C:\Windows\system32>kubectl scale --replicas=3 deployment/hello-nginx
deployment.extensions “hello-nginx” scaled


C:\Windows\system32>kubectl get deployment
hello-nginx 3 3 3 3 10m


C:\Windows\system32>kubectl get nodes
minikube Ready master 19m v1.13.2


C:\Windows\system32>kubectl get pods
hello-nginx-79cd57d7cd-4whlz 1/1 Running 0 11m
hello-nginx-79cd57d7cd-czf9l 1/1 Running 0 1m
hello-nginx-79cd57d7cd-smxdk 1/1 Running 0 1m


In this blog we have successfully configured a Kubernetes (minikube) cluster using Hyper-V on Windows 10, with the nginx service scaled to 3 replica pods behind a load-balancing NodePort service.
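When you are done experimenting, the demo resources can be removed again. A dry-run sketch of my own (the echo just prints each command; remove it to actually execute, and the names assume the hello-nginx demo above):

```shell
#!/bin/sh
# Dry-run teardown for the hello-nginx demo (echo = print, don't execute).
cleanup_demo() {
  echo "kubectl delete service hello-nginx"
  echo "kubectl delete deployment hello-nginx"
  echo "minikube stop"
}
cleanup_demo
```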








Posted in Cloud, DevOps, High Avaliability, Kubernetes/Docker/Container, Others

Docker Swarm

We have already studied the basics of Docker, containers and containerization.

Docker is an open-source platform for running multiple lightweight containers (applications/services).

Docker Swarm: if we want to use Docker for high availability or as a load balancer, we need multiple servers/VMs/hosts/Hyper-V systems.


Manager node: initializes the Docker swarm, controls the worker nodes, and runs configured services.

Worker nodes: run configured services.

In this post I will configure Docker Swarm on my Windows 10 machine with Hyper-V.

Enable a Hyper-V switch:

Open Hyper-V Manager -> select "Virtual Switch Manager" and create a new external virtual switch named "MySwitch".

Once the switch is created you can create the Docker machines (Hyper-V VMs) as follows:

cmd>docker-machine create --driver hyperv manager1

Running pre-create checks…
Creating machine…
(manager1) Copying C:\Users\hi2vi\.docker\machine\cache\boot2docker.iso to C:\Users\hi2vi\.docker\machine\machines\manager1\boot2docker.iso…
(manager1) Creating SSH key…
(manager1) Creating VM…
(manager1) Using switch “MySwitch”
(manager1) Creating VHD
(manager1) Starting VM…
(manager1) Waiting for host to start…
Waiting for machine to be running, this may take a few minutes…
Detecting operating system of created instance…
Waiting for SSH to be available…
Detecting the provisioner…
Provisioning with boot2docker…
Copying certs to the local machine directory…
Copying certs to the remote machine…
Setting Docker configuration on the remote daemon…
Checking connection to Docker…
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager1

Similarly we can create Docker machines for the workers:

cmd>docker-machine create --driver hyperv worker1

cmd>docker-machine create --driver hyperv worker2

You can stop or start a Docker machine with:

cmd>docker-machine stop manager1

cmd>docker-machine start manager1

We can do the same using the Hyper-V Manager GUI/console as well.

Since my laptop has only 8GB of memory, I go with a minimum-memory setup for the Docker machines: stop/turn off each Docker machine, un-check "Enable Dynamic Memory", and set the RAM to 678MB (the minimum).

Once that is done we are all set. We now have 3 Docker machines; next we have to configure/init the Docker swarm.

Get all the IP addresses (especially of the manager1 machine) before SSH-ing into it:

cmd>docker-machine ls

C:\Windows\system32>docker-machine ls
manager1 - hyperv Running tcp:// v18.09.1

cmd>docker-machine ssh manager1

docker@manager1:~$ docker swarm init --advertise-addr
Swarm initialized: current node (4yu1xzgbqpekhy1g4alou4oyp) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5qvgw5jlvdmucvmsms6hp8cyh4pnumm50ukwg4bs4uyoxdyvc1-23ua9ls98jtz0suxbf6b7xmjc
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As you can see, the Docker swarm is initialized with this node as the manager, and you get a command to run on each worker to join the swarm.

SSH into the worker machines and run that command to join the swarm as workers:

cmd>docker-machine ssh worker1

docker@worker1:~$ docker swarm join --token SWMTKN-1-5qvgw5jlvdmucvmsms6hp8cyh4pnumm50ukwg4bs4uyoxdyvc1-23ua9ls98jtz0suxbf6b7xmjc
This node joined a swarm as a worker.

Docker is now configured with a manager and workers.

So if you run this on the manager:

docker@manager1:~$ docker node ls
4yu1xzgbqpekhy1g4alou4oyp * manager1 Ready Active Leader 18.09.1
ao5ldknmk9k8vg5h8ywfg6nr5 worker1 Ready Active 18.09.1
b1vkisbeol9v053vumugvd4xm worker2 Ready Active 18.09.1

Now on the manager we have to deploy the services (containers); they can be scaled with --replicas to as many as we want, and the replicas are deployed evenly across the managers and workers.

docker@manager1:~$ docker service create --name backend-app-swarm -p 8080:80 --replicas 3 nginx
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
docker@manager1:~$ docker service ls
r6iqvj4bn0km backend-app-swarm replicated 3/3 nginx:latest *:8085->80/tcp
docker@manager1:~$ docker service ps backend-app-swarm
uwqyrgabjfxa backend-app-swarm.1 nginx:latest worker2 Running Running 16 seconds ago
k4zphrkin419 backend-app-swarm.2 nginx:latest manager1 Running Running 15 seconds ago
mohentpe1oi5 backend-app-swarm.3 nginx:latest worker1 Running Running 18 seconds ago

This way we have deployed nginx across multiple containers.

We can browse to any of the nodes' IPs on the published port – it works.




Posted in Containerization, Docker Swarm, Kubernetes/Docker/Container, Others

Error – Hung Docker

Sometimes, when you have not been actively working on Docker or on the system for a while, the Docker service can go into a hung state and you will get an error like the following:

Starting containers fails with: Error response from daemon: Cannot restart container my_container: driver failed programming external connectivity on endpoint my_container (782f444833c57027050a58f8c0302473f76d9029a50944960d13c6ed940a4392): Error starting userland proxy: mkdir /port/tcp: input/output error

The solution is to restart the Docker service (for example, sudo systemctl restart docker on a systemd-based Linux host) or to restart the system.

You can also restart the dockerd daemon directly.

I observed this error today and thought of writing it up.

Reference – here

Posted in Kubernetes/Docker/Container, Others, What I learned today


Docker has containers to run processes or applications.

Docker containers have some limitations, since they use resources from the host system and those resources are shared.

Docker Compose: grouping all related applications/services or containers on one system is called composing – I would call it a package of related containers.

Docker Swarm: a high-availability feature provided by Docker for containerization; the architecture includes a MANAGER and multiple WORKER nodes.

Kubernetes: similar to Docker Swarm, created by Google for high availability; it is a well-known product in this space. It uses Pods to group multiple related containers and maintains high availability over them; more details are here.

A comparison is here.

I would like to provide details on each topic in the future.


Posted in Containerization, Docker Swarm, Kubernetes/Docker/Container, Others

MongoDB Replica-Set

One of the most appealing features of MongoDB is how easy it is to set up and use a replica set.

In MongoDB you can set it up easily; here is the link to the guide by Soham Kamani that I used.

Create a network:

docker network create my-mongo-cluster-net

Then create three containers (mongo1, mongo2 and mongo3) on that network, all with the replica set name my-mongo-replset:

docker run -p 30001:27017 --name mongo1 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset


docker run -p 30002:27017 --name mongo2 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset


docker run -p 30003:27017 --name mongo3 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset
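The three near-identical commands above differ only in the node number, so they can be generated from one template. A dry-run sketch of my own (the echo just prints each command; pipe through sh or remove the echo to execute):

```shell
#!/bin/sh
# Dry-run generator for the three mongod containers (echo = print, don't execute).
# Node $1 gets host port 3000$1 mapped to the container's 27017.
gen_mongo_cmd() {
  echo "docker run -p 3000$1:27017 --name mongo$1 --net my-mongo-cluster-net mongo mongod --replSet my-mongo-replset"
}
for i in 1 2 3; do gen_mongo_cmd "$i"; done
```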

The replica set name in these commands is case sensitive.

All three commands should be run in separate windows, since the containers run in the foreground.

Once all three containers are running with the replica set option, we have to configure the replica set.

Connect to one of the containers and configure it there; here we use mongo1, which will become the primary:

docker exec -it mongo1 mongo

config = { "_id" : "my-mongo-replset", "members" : [ { "_id" : 0, "host" : "mongo1:27017" }, { "_id" : 1, "host" : "mongo2:27017" }, { "_id" : 2, "host" : "mongo3:27017" } ] }

rs.initiate(config)

Once rs.initiate(config) is run, the replica set is configured and starts working.

To test it, stop one of the mongo containers:

docker stop mongo1

Then either mongo2 or mongo3 will become the primary.

When mongo1 comes back online, it rejoins the replica set.

We can also set member priorities in the replica set configuration so that a preferred member is elected PRIMARY.

Checking the replication information (for example with rs.status() in the mongo shell), you will observe that all the replica set members are in sync.

By default, read operations only happen on the primary.

More about replication can be read here.




Posted in Disaster Recovery, High Avaliability, MongoDB, Others, Replication