

Top 20 Business Analyst Interview Questions and Answers

1) What is a flowchart? Why is it important?

A flowchart shows the complete flow of a system through symbols and diagrams. It is important because it makes the system easy to understand for developers as well as non-technical stakeholders.

2) Define the Use Case Model

A use case model shows a sequence of events and the stream of actions an actor performs in a process.

3) What does UML stand for?

It stands for Unified Modeling Language.

4) Do you think Activity Diagram is important?

As the name implies, an activity diagram is all about system activities. The main purpose of the activity diagram is to show various events taking place in an organization in different departments.

5) Name two types of diagrams used by a Business Analyst

The two diagrams are the Use Case Diagram and the Collaboration Diagram.

6) What is meant by an alternate flow in a use case?

It is the alternative solution or activity in a use case that should be followed in case of any failure in the system.

7) What are exceptions?

These are unexpected situations or results in an application.

8) What are extends?

Extends is a relationship shown by a dotted line. It is usually used to specify optional behavior that has no independent meaning. Example: Help on “Sign on” extends the use case “Sign on”.

9) Name the two documents related to a use case

The two documents are:

  • FRD (Functional Requirement Document)
  • SDD (System Design Document).

10) What is the difference between Business Analyst and Business Analysis?

Business Analysis is the process performed by the Business Analyst.

11) As a business analyst, which tools are most helpful to you?

There are many tools, but the most commonly used are: 1) MS Visio, 2) MS Word, 3) MS Excel, 4) PowerPoint, 5) MS Project.

12) In your previous experience, what kind of documents have you created?

I have worked on Functional Specification Documents, Technical Specification Documents, Business Requirements Documents, Use Case Diagrams, etc.

13) Explain the term INVEST

INVEST means Independent, Negotiable, Valuable, Estimable, Sized appropriately, Testable. It can assist project managers and technical teams in delivering quality products/services.

14) Define SaaS

SaaS means Software as a Service. It is related to cloud computing. It differs from other software bundles in that you don’t need this type of software to be installed on your machine; all you need is an Internet connection and a web browser to use it.

15) What steps are required to develop a product from an idea?

You have to perform Market Analysis, Competitor Analysis, SWOT Analysis, Personas, Strategic Vision and Feature Set, Prioritize Features, Use Cases, SDLC, Storyboards, Test Cases, Monitoring, and Scalability.

16) What do you think is better, the Waterfall Model or Spiral Model?

It all depends on the type and scope of the project. A life cycle model is selected based on organizational culture and various other scenarios to develop the system.

17) How can you explain a user-centered design methodology?

It all depends on the end users. In such a scenario, we develop the system from the user’s point of view: who the end users are, what they require, and so on. Personas are helpful in this process.

18) How do you define Personas?

Personas are used in place of real users to help developers and the technical team judge user behavior in different scenarios. A persona is a social role performed by an actor or character; the word derives from a Latin word meaning “character.” In marketing terminology, it represents a group of customers/end users.

19) Define the term Application Usability

Application usability is the quality of a system that makes it useful for its end users. A system’s usability is good if it is capable of achieving users’ goals.

20) What is a database transaction?

Any unit of activity performed against a database, such as addition, deletion, modification, or searching of records, is said to be a database transaction.
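As a rough illustration, a transaction groups such operations so that they either all succeed or all fail. Here is a minimal sketch using Python's built-in sqlite3 module; the table and account names are made up for illustration.

```python
import sqlite3

# Hypothetical "accounts" table, used only to illustrate a transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # Both updates form one transaction: either both apply or neither does.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # on any failure, undo the partial changes

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

If either UPDATE failed, the rollback would leave both balances unchanged, which is exactly the "all or nothing" property a transaction provides.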

Call us for a Free Demo on AWS, VMware, Citrix, Azure, DevOps, Python, Realtime Projects.
Calls will be forwarded to our trainers for the demo.

Top 20 Machine learning Interview Questions and Answers

1)      What is Machine learning?

Machine learning is a branch of computer science that deals with programming systems to automatically learn and improve with experience. For example, robots are programmed so that they can perform tasks based on the data they gather from sensors; they automatically learn programs from data.

2)      Mention the difference between Data Mining and Machine learning?

Machine learning relates to the study, design, and development of algorithms that give computers the capability to learn without being explicitly programmed. Data mining, by contrast, can be defined as the process of extracting knowledge or unknown interesting patterns from unstructured data. During this process, machine learning algorithms are used.

3)      What is ‘Overfitting’ in Machine learning?

In machine learning, ‘overfitting’ occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting is normally observed when a model is excessively complex, having too many parameters with respect to the number of training data points. A model that has been overfit exhibits poor predictive performance.

4)      Why overfitting happens?

The possibility of overfitting exists because the criterion used for training the model is not the same as the criterion used to judge its efficacy.

5)      How can you avoid overfitting?

Overfitting can be avoided by using a lot of data; overfitting tends to happen when you have a small dataset and try to learn from it. If you have only a small dataset and are forced to build a model from it, you can use a technique known as cross validation. In this method the dataset is split into two sections, a testing and a training dataset: the testing dataset only tests the model, while the training dataset’s data points are used to build the model.

In this technique, a model is usually given a dataset of known data on which training is run (the training dataset) and a dataset of unknown data against which the model is tested. The idea of cross validation is to define a dataset to “test” the model during the training phase.
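The splitting idea above can be sketched in plain Python without any ML library; the choice of k = 5 folds below is an illustrative assumption.

```python
# A minimal sketch of k-fold cross validation: each fold serves once as the
# test set while the remaining folds form the training set.
def k_fold_splits(data, k):
    """Yield (training_set, test_set) pairs, one per fold."""
    folds = [data[i::k] for i in range(k)]  # round-robin assignment to k folds
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

data = list(range(10))
splits = list(k_fold_splits(data, 5))
# Across the 5 splits, every data point appears in exactly one test fold.
all_test = sorted(x for _, test in splits for x in test)
print(all_test)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

A real project would use a library implementation (e.g. scikit-learn's KFold), but the mechanics are the same: rotate which slice of the data is held out for testing.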

6)      What is inductive machine learning?

Inductive machine learning is the process of learning by example, where a system tries to induce a general rule from a set of observed instances.

7)      What are the five popular algorithms of Machine Learning?

a)      Decision Trees

b)      Neural Networks (back propagation)

c)       Probabilistic networks

d)      Nearest Neighbor

e)      Support vector machines

8)      What are the different Algorithm techniques in Machine Learning?

The different types of techniques in Machine Learning are

a)      Supervised Learning

b)      Unsupervised Learning

c)       Semi-supervised Learning

d)      Reinforcement Learning

e)      Transduction

f)       Learning to Learn

9)      What are the three stages to build the hypotheses or model in machine learning?

a)      Model building

b)      Model testing

c)       Applying the model

10)   What is the standard approach to supervised learning?

The standard approach to supervised learning is to split the set of examples into the training set and the test set.

11)   What is ‘Training set’ and ‘Test set’?

In various areas of information science, such as machine learning, the set of data used to discover a potentially predictive relationship is known as the ‘training set’. The training set is the set of examples given to the learner, while the ‘test set’, used to test the accuracy of the hypotheses generated by the learner, is the set of examples held back from the learner. The training set is distinct from the test set.
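A minimal sketch of this split in plain Python; the 80/20 ratio and the fixed shuffle seed are illustrative assumptions, not requirements.

```python
import random

# Split a list of examples into a training set and a held-back test set.
def train_test_split(examples, test_fraction=0.2, seed=0):
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)   # shuffle a copy, leave input intact
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]   # training set, test set

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 80 20
```

Shuffling before splitting matters: if the data is ordered (e.g. by class), an unshuffled split would give the learner a biased training set.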

12)   List down various approaches for machine learning?

The different approaches in Machine Learning are

a)      Concept Vs Classification Learning

b)      Symbolic Vs Statistical Learning

c)       Inductive Vs Analytical Learning

13)   What is not Machine Learning?

a)      Artificial Intelligence

b)      Rule based inference

14)   Explain what is the function of ‘Unsupervised Learning’?

a)      Find clusters of the data

b)      Find low-dimensional representations of the data

c)       Find interesting directions in data

d)      Interesting coordinates and correlations

e)      Find novel observations/ database cleaning

15)   Explain what is the function of ‘Supervised Learning’?

a)      Classifications

b)      Speech recognition

c)       Regression

d)      Predict time series

e)      Annotate strings

16)   What is algorithm independent machine learning?

Machine learning in which the mathematical foundations are independent of any particular classifier or learning algorithm is referred to as algorithm-independent machine learning.

17)   What is the difference between artificial intelligence and machine learning?

Designing and developing algorithms according to behaviours found in empirical data is known as machine learning, while artificial intelligence, in addition to machine learning, also covers other aspects like knowledge representation, natural language processing, planning, robotics, etc.

18)   What is classifier in machine learning?

A classifier in machine learning is a system that inputs a vector of discrete or continuous feature values and outputs a single discrete value, the class.
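As a toy illustration of this definition, here is a one-nearest-neighbour classifier in plain Python that maps a feature vector to a single discrete class label; the training data and labels are made up, and 1-NN is just one illustrative kind of classifier.

```python
# Classify a feature vector by the label of its closest training example.
def nearest_neighbor_classify(x, training_data):
    """training_data: list of (feature_vector, class_label) pairs."""
    def squared_distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(training_data, key=lambda pair: squared_distance(pair[0], x))
    return label

examples = [((0.0, 0.0), "cold"), ((10.0, 10.0), "hot")]
print(nearest_neighbor_classify((1.0, 2.0), examples))  # cold
```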

19)   What are the advantages of Naive Bayes?

A Naïve Bayes classifier will converge quicker than discriminative models like logistic regression, so you need less training data. Its main limitation is that it can’t learn interactions between features.

20)   In what areas Pattern Recognition is used?

Pattern Recognition can be used in

a)      Computer Vision

b)      Speech Recognition

c)       Data Mining

d)      Statistics

e)      Information Retrieval

f)       Bio-Informatics

21)   What is Genetic Programming?

Genetic programming is one of the techniques used in machine learning. The model is based on testing and selecting the best choice among a set of results.
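The "test and select the best" idea can be sketched as a deliberately tiny genetic algorithm in plain Python. The fitness function (count of 1-bits) and all parameters are illustrative assumptions, not part of the original answer.

```python
import random

# Evolve bit strings toward all ones by repeatedly testing (fitness),
# selecting the best half, and mutating them.
def fitness(individual):
    return sum(individual)  # count of 1-bits; the maximum is len(individual)

def evolve(bits=8, population=20, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]   # selection: keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(bits)] ^= 1  # mutation: flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of", len(best), "bits set in the best individual")
```

Full genetic programming evolves programs (typically expression trees) rather than fixed-length bit strings, but the evaluate/select/vary loop is the same.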

22)   What is Inductive Logic Programming in Machine Learning?

Inductive Logic Programming (ILP) is a subfield of machine learning which uses logic programming to represent background knowledge and examples.

23)   What is Model Selection in Machine Learning?

The process of selecting models among different mathematical models, which are used to describe the same data set is known as Model Selection. Model selection is applied to the fields of statistics, machine learning and data mining.

24)   What are the two methods used for the calibration in Supervised Learning?

The two methods used for predicting good probabilities in Supervised Learning are

a)      Platt Calibration

b)      Isotonic Regression

These methods are designed for binary classification, and extending them to more than two classes is not trivial.

25)   Which method is frequently used to prevent overfitting?

When there is sufficient data, ‘Isotonic Regression’ is used to prevent overfitting issues.

26)   What is the difference between heuristic for rule learning and heuristics for decision trees?

The difference is that the heuristics for decision trees evaluate the average quality of a number of disjoint sets, while rule learners only evaluate the quality of the set of instances that is covered by the candidate rule.

27)   What is Perceptron in Machine Learning?

In machine learning, the perceptron is an algorithm for the supervised classification of an input into a single binary output.
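As a small illustration, a classic perceptron can learn the logical AND function; the learning rate and epoch count below are illustrative choices.

```python
# A minimal perceptron sketch. The classic perceptron is a binary classifier:
# here it learns logical AND from four labelled examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted          # -1, 0, or +1
            w[0] += lr * error * x1             # classic perceptron update rule
            w[1] += lr * error * x2
            b += lr * error
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_gate = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
print([and_gate(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop finds a separating line in finitely many updates.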

28)   Explain the two components of Bayesian logic program?

A Bayesian logic program consists of two components. The first component is a logical one; it consists of a set of Bayesian clauses, which capture the qualitative structure of the domain. The second component is a quantitative one; it encodes the quantitative information about the domain.

29)   What are Bayesian Networks (BN) ?

A Bayesian Network is used to represent a graphical model for the probability relationships among a set of variables.

30)   Why is an instance-based learning algorithm sometimes referred to as a Lazy learning algorithm?

Instance-based learning algorithms are also referred to as Lazy learning algorithms because they delay the induction or generalization process until classification is performed.


Top 20 RPA Interview Questions and Answers

1. What is RPA?

Robotic Process Automation (RPA) allows organizations to automate tasks, just as an employee of your organization would perform them, across applications and systems.

2. What are the different applications of RPA?

Some popular applications of RPA are

  • Barcode Scanning
  • Enter PO to receive invoices
  • Match PO and Invoice
  • Complete Invoice Processing.

3. Give three advantages of RPA tool

Here are three benefits of using RPA tools.

  • RPA offers real time visibility into bug/defect discovery
  • RPA allows regular compliance process, with error-free auditing.
  • It allows you to automate a large number of processes.

4. What are the things you should remember in the process of RPA Implementation?

  • Define and focus on the desired ROI
  • You should target to automate important and highly impactful processes
  • Combine attended and unattended RPA

5. Which RPA offers an open platform for automation?

UiPath is an RPA tool with an open platform that allows you to design and deploy a robotic workforce for your organization.

6. Explain important characteristics of RPA

Three most important characteristics of RPA are:

  • Code-free
  • User-Friendly
  • Non-Disruptive

7. What are Popular RPA tools? Describe each one in detail

There are mainly three popular RPA tools.

Blue Prism:

Blue Prism software offers business operations to be agile and cost-effective by automating rule-based, repetitive back-office processes.

Automation Anywhere:

Automation Anywhere offers powerful and User- friendly Robotic Process Automation tools to automate tasks of any complexity.


UiPath:

UiPath is a Windows desktop software used for the automation of various types of web and desktop-based applications.

8. What are the steps you should follow to implement Robotic Process Automation?

Six steps to be followed for a successful RPA implementation are:

  • Identify the Automation Opportunities
  • Optimize the Identified Processes
  • Build a Business Case
  • Select the RPA Vendor of your choice
  • Model RPA Development
  • Start developing RPA bots and continue building expertise

9. Can you audit the RPA process? What are the benefits of doing so?

Yes, it is possible to audit the RPA process. Auditing brings several new strategies that can easily be adopted.

10. State the difference between a Thin Client and a Thick Client

  • Thick client: an application that requires certain attribute features when using RPA tools, e.g., computer, calculator, Internet Explorer.
  • Thin client: an application that never acquires the specific properties when using RPA tools.

11. How long does a robot automation project take?

Generally, such projects are measured in weeks. However, a complex project might take more time depending on the level of object re-use available.

12. Does Blue Prism need Coding?

No, Blue Prism is code-free and can automate any software. This digital workforce can be applied to automate processes in any department where clerical or administrative work is performed across an organization.

13. What is the main difference between Blue Prism And UiPath?

Blue Prism uses C# for coding and UiPath uses Visual Basic for coding.

14. What is the future scope of RPA?

The future of Robotic Process Automation is very bright, as there are plenty of human actions that can be automated using RPA tools and technology.

15. Does handling RPA operations need special skills?

RPA is an approach that doesn’t require programming skills. Anyone can become an RPA-certified professional with some basic knowledge or training, which is also of short duration. Everything can be managed easily using a flowchart or in a stepwise manner.

16. Name two scripting standards which you will consider during automation testing?

Two scripting standards that you need to consider during automation testing are

  • Adequate indentation
  • Uniform naming convention

17. What are the key metrics which you should consider to map the success of automation testing?

Two key metrics to measure the success of automation testing are:

  • Reduction in cost of various modules
  • Defect Detection Ratio

18. Explain the use of PGP

PGP allows you to encrypt and decrypt a file by assigning a passphrase.

19. What is meant by Bot?

A bot is a set of commands used to automate a task.

20. Name different types of bots

Different types of Bots used in RPA process are:

  • TaskBot
  • MetaBot
  • IQ Bot
  • Chatbot



Docker Interview questions

  1. What are a container and Docker?
  2. What is virtualization, and how is Docker different from virtualization?
  3. What are the advantages and disadvantages of Docker?
  4. What are a container and an image?
  5. Explain the Docker container lifecycle
  6. What are the networking adapters supported by Docker?
  7. How can we persist container data?
  8. How can we enter a container, or how can we run a command in a container?
  9. What does the volume parameter do in a docker run command?
  10. What is the main difference between the approaches of Docker and standard hypervisor virtualization?
  11. What are the docker save and docker load commands?
  12. What is the default Docker network driver?
  13. What are a Docker container’s possible states, and what do they mean? How can you change it when running a Docker image?
  14. What is a Docker image? What is a Docker image registry?
  15. What is container orchestration and why should we use it?
  16. What features are provided by Docker Enterprise Edition instead of Docker Community Edition?
  17. Is there any problem with just using the latest tag in a container orchestration environment? 
  18. What is considered best practice for image tagging?
  19. What is Docker Swarm and which network driver should be used with it?
  20. What are the possible ways of using insecure Docker image registries?
  21. What is Docker Compose? What can it be used for?
  22. How do you scale your Docker containers?
  23. How to build environment-agnostic systems with Docker?
  24. What are the most common instructions in Dockerfile?
  25. What type of applications – Stateless or Stateful are more suitable for Docker Container?
  26. Explain basic Docker usage workflow?
  27. How will you monitor Docker in production?
  28. What is an orphaned volume and how to remove it?
  29. How is Docker different from a virtual machine?
  30. Can you explain Dockerfile ONBUILD instruction?
  31. Is it good practice to run stateful applications on Docker? What are the scenarios where Docker best fits in?
  32. Can you run Docker containers natively on Windows?
  33. How does Docker run containers in non-Linux systems?
  34. How do containers work at a low level?
  35. Name some limitations of containers vs VMs?
  36. Why does Docker Compose not wait for a container to be ready before moving on to start the next service in dependency order?
  37. What is a Dockerfile, and how can we create an image from a Dockerfile?
  38. What are the differences between Dockerfile and Docker Compose?
  39. Tell the important instructions used in a Dockerfile
  40. What are the differences between COPY and ADD in a Dockerfile?
  41. What are the differences between CMD and RUN?
  42. What are the differences between ENTRYPOINT and CMD?
  43. What are Docker Hub and DTR?
  44. What are the disadvantages of using Docker Hub?
  45. What is Amazon ECR? How to push an image to ECR?
  46. What is DTR, and how can we secure image pushes to a Docker registry?
  47. What are the differences between docker pull and docker push?
  48. What is the docker commit command? How can we create an image from a container?
  49. What is a Docker service? Give the command to create a Docker service
  50. What is a Docker stack? How to create a stack?
  51. What are the differences between Docker Compose and stack?
  52. What is a .dockerignore file? Why do we use it alongside a Dockerfile?
  53. What is Docker Machine? Explain the command using drivers for AWS and Azure
  54. What is Docker Swarm? How can we create a Docker swarm, and how can we add nodes?
  55. Does Docker Swarm support a multi-manager swarm cluster?
  56. How to promote a Docker worker to a manager in Docker Swarm?
  57. How to list nodes in Docker Swarm?
  58. How to attach a volume to a Docker container and service?
  59. What is a Docker tmpfs volume?
  60. What are multistage Dockerfiles?
  61. What are the advantages of using a multistage Dockerfile?
  62. How can we perform an update and rollback in Docker Swarm?
  63. What are the differences between Docker Swarm and Kubernetes?
  64. Explain the Docker Compose / Docker stack file format
  65. How can we link two containers?
  66. Explain how we can integrate Docker with Jenkins for CI and CD

Azure and Azure Devops Interview Questions

1. I have some private servers on my premises, and I have also distributed some of my workload on the public cloud; what is this architecture called? Which servers do we normally keep in the private cloud, and which in the public cloud?
2. What is Microsoft Azure and why is it used?
3. Which service in Azure is used to manage resources in Azure?
4. Which of the following web applications can be deployed with Azure?
5. What are Roles and why do we use them?
6. Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?
7. What are virtual machine scale sets in Azure?
8.What is an Availability Set and an Availability Zone?
9.What are Fault Domains?
10.What are Update Domains?
11.What are Network Security Groups?
12.Do scale sets work with Azure availability sets?
13. What is a break-fix issue?
14. Why is Azure Active Directory used?
15.What happens when you exhaust the maximum failed attempts for authenticating yourself via Azure AD?
17.How can I use applications with Azure AD that I’m using on-premises?
18. What is a VNet?
19.What are the differences between Subscription Administrator and Directory Administrator?
20. What is the difference between Service Bus Queues and Storage Queues?
21. What is Azure Redis Cache?
22. Why doesn’t Azure Redis Cache have an MSDN class library reference like some of the other Azure services?
23.What are Redis databases?
24.Is it possible to add an existing VM to an availability set?
25. What are the username requirements when creating a VM?
26.What are the password requirements when creating a VM?
27. How much storage can I use with a virtual machine?
28. How can one create a Virtual Machine in Powershell?
29.How to create a Network Security Group and a Network Security Group Rule?
30. How to create a new storage account and container using Power Shell?
31. How can one create a VM in Azure CLI?
32.What are the various power states of a VM?
33. How can you stop a VM using Power Shell?
34.Why was my client disconnected from the cache?
36. What are the expected values for the Startup File section when I configure the runtime stack?
37.How are Azure Marketplace subscriptions priced?
38. What is the difference between “price,” “software price,” and “total price” in the cost structure for Virtual Machine offers in the Azure Marketplace?
39. What are stateful and stateless microservices for Service Fabric?
40.What is the meaning of application partitions?
42.Name some important applications of Microsoft Azure
43.What is Azure as PaaS?
44.Explain the crucial benefits of Traffic Manager
45.What are Break-fix issues in Microsoft Azure?
46. State the difference between repetitive and minimal monitoring.
47.Explain command task in Microsoft Azure
48.What are unconnected lookups?
49.Explain Cmdlet command of Microsoft Azure
50.What is the use of the Migration Assistant tool in Azure Websites?
51.What is the use of the Database Migration tool in Azure Databases?
52.Explain role instance in Microsoft Azure
53. What are the important drawbacks of using Microsoft Azure?
54.What is MOSS?
55. What is the step you need to perform when drive failure occurs?
56. What it’s the difference between PROC MEANS and PROC SUMMARY?
57. State the difference between a library and a list
58.What are the important requirements when creating a new Virtual Machine?
59.What are the three main components of the Windows Azure platform?
60. Explain cspack in Microsoft Azure
61. What is Azure Container Instances?
62.How to Push images to Azure container Registry?
63.What is Azure devops and Pipelines?
64.What is Azure function app and App Services
65.Explain the crucial benefits of Traffic Manager
66.Explain Diagnostics in Windows Azure and Azure Monitor
67.What is Windows Azure AppFabric?
68.What are VNet, subnet, route tables, and NAT gateway?
69.How to install Azure PowerShell on Windows PowerShell?
70.What is Cloud shell ? and what are the advantages of Cloudshell?
71.What is a Network Security Group, and how to create it from bash and PowerShell?
72.How to create a resource group, users, AD, VM, storage account, and scale set using bash and PowerShell?
73.What are the regions supported by Azure ?
74.What are ARM Templates?
75.How we can deploy ARM Templates from Gui?
76.How we can deploy ARM Templates from powershell and bash?
77.How ARM Templates and Devops can be integrated?
78.How to deploy dotnet project using devops?
79.What is Azure pipeline?
80.What are the sizes of azure vm and based on which factors we can choose azure vm types?
81.What is your understanding of Service fabric?
82.How to restore data from disk failure?how to take backup of vm
83.API in Azure and Api functions?
84.Explain serverless features of Azure
85.What are the different types of storage in Azure
86.What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?What is the dead letter queue?
87.What is SQL Azure database?What are the databases supported by Azure
88.How can you create an HDInsight Cluster in Azure from portal,ARM,Cloudshell?
89.What is Azure Service Level Agreement (SLA)?
90.What is Azure monitor and what are the metrics we will use in AzureVm ,RDS?
91.Name various power states of a Virtual Machine.
92.What is azure active directory and AD B2C
93.What is Azure API Management and how we can publish API?
94.What is Azure Site Recovery and how we can restore services from diaster?
95.What is azure migrate and Azure database migration service?
96.What is Azure Boards,Azure Repos,Azure Artifacts,Azure Test Plans
97.What is Azure DNS and how we can host websites using Azure DNS
98.Azure Scheduler:how we can Run your jobs on simple or complex recurring schedules
99.Azure Policy:how we can Implement corporate governance and standards at scale for Azure resources?
100.Azure Resource Manager templates: how can we deliver infrastructure as code for all our Azure resources using Resource Manager? What are the advantages of ARM templates?
101.How to create a backup and copy files in CI/CD pipelines?
102.What is federation in SQL Azure?
103.What is TFS build system in Azure?
104.What is Azure App Service?
105.What is the Text Analytics API in Azure Machine Learning?
106.What is the Migration Assistant tool in Azure Websites?
107. Which factors should I consider for choosing one from Azure DevOps Services and Azure DevOps Server?
108. What are containers in DevOps, and which container platforms does Azure DevOps support?
109.What is the role of Azure Artifacts?
110.What are Azure Test Plans?
111.How to deploy AKS into my existing virtual network?
112.Can I limit who has access to the Kubernetes API server?
113.Can I have different VM sizes in a single cluster?
114.Are security updates applied to AKS agent nodes?
115.Why are two resource groups created with AKS?
116.Can I provide my own name for the AKS node resource group?
117.What are the Kubernetes admission controllers? Does AKS support them?
118.Is Azure Key Vault integrated with AKS?
119.Can I run Windows Server containers on AKS?
120.How can we set max pods in AKS?
121.Can I move/migrate my cluster between Azure tenants?
122.How to take a backup of AKS clusters?
123.How to delete a single cluster in AKS?
124.If I have a cluster with one or more nodes in an Unhealthy state or shut down, can I perform an upgrade?
125.I ran an upgrade, but now my pods are in crash loops, and readiness probes fail?
126.My cluster was working, but suddenly can not provision LoadBalancers, mount PVCs, etc.?
127.Can I use virtual machine scale sets to manually scale to 0 nodes?
128.Can I use custom VM extensions?
129.Can I stop or de-allocate all my VMs?
130.How to get access token for multi resource
131.How to connect two on-premise domain controllers (not in the same network) to a single AzureAD?
132.Do we have the authority to download Office programs?
133.Azure domain deployment failed: what errors have you faced?
134.How to log in to Azure AD B2C without redirecting to the b2clogin Microsoft page?
135.How to change or upgrade the PowerShell versions?
136.Can I delete Azure Active Directory?
137.What is CI and CD in Devops
138.How to Deploy a Docker container app to Azure Kubernetes Service
139.How to Build, test and deploy Javascript and Node.js apps in Azure Pipelines?
140.How to Start monitoring your Java Web Application using Azure Devops?
141.How we can integrate selenium Testing with Azure
142.How we can Create a complete Linux virtual machine infrastructure in Azure with Terraform?
143.How we can create Vm Cluster with terraform modules?
144.How we can Build java app with Azure pipeline and how we can deploy to Azure function?
145.How we can deploy apps to Vms using Azure pipeline?
146.How can we build and push images to Azure Container Registry?
147.How we can troubleshoot Azure Build and Release,Deployment issues?
148.How to create backlog in Azure Boards?
149.What is Azure DevOps Server?
150. How to Install ,Upgrade,Migrate and Manage Azure devops server?
151.What is Azure Artifacts?
152.How can we run automated tests in Azure DevOps Test Plans?
153.How to integrate Azure DevOps with Visual Studio?
154.Describe sample yaml schema for azure pipeline?
155.How to Continuously deploy to Azure Functions with DevOps Projects?
156.How to Deploy ASP.NET Core apps to Azure Kubernetes Service with Azure DevOps Projects?Write yaml for this.
157.What is Azure Service Fabric?
158.How to deploy your ASP.NET Core app to Azure Service Fabric by using Azure DevOps Projects?
159.How to continuously deploy to Azure Functions with DevOps Projects?
160.How We Deploy ASP.NET app and Azure SQL Database code by using Azure DevOps Projects?


Selenium Interview Questions


Write code to capture a screenshot.

Write Selenium code to assert that all links are working and there are no broken links on the page.

How to switch within iframes?

Write relative XPath using the ancestor and following-sibling axes.

Why do we use WebDriverManager?

What are the methods available in the WebDriver event listener interface?

Explain the internal architecture of Selenium WebDriver.

Why and how do you use Selenium Grid?

How to get details of multiple windows, and how to switch between windows?

How do you get data from web tables?

What are desired capabilities and why are they used?

What are the different types of waits available in Selenium?

What is the difference between implicit and explicit wait?

What are the common exceptions found during Selenium automation, and how do you handle them?

Explain your automation framework and the features in it.

Write a few utility code snippets, such as an Excel reader, a CSV reader, and JDBC connection code.


TestNG is one of the most popular frameworks in the industry, so most interviewers like to deep dive into it.

TestNG Interview Questions:

Why is TestNG used in an automation framework?

What is the difference between hard assert and soft assert?

How do you pass parameters from testng.xml?

How do you run test classes in parallel, and how test methods?

Write the structure of a testng.xml file.

How do you run a TestNG XML suite from the command line?

What annotations are available in TestNG, and what is their sequence of execution?

How do you ignore a test in TestNG?

How do you use DataProviders in TestNG?

How do you run failed test cases using testng-failed.xml?

What is the difference between Assert and verify commands?

What is the syntax of XPath?

What is the difference between findElement() and findElements()?

What are the different exceptions present in Selenium WebDriver?

What are the different ways to refresh browser using Selenium?

What is frame and how can you switch between frames in Selenium?

How can you find the total number of frames in a web page?

How can you move to some element using Actions class?

Can CAPTCHA and barcode readers be automated using Selenium?

What are BreakPoints and StartPoints in Selenium?

What is the difference between getWindowHandles() and getWindowHandle()?

What is FluentWait in Selenium?


Linux Commands

1. pwd command
Use the pwd command to find out the path of the current directory (folder) you’re
in. The command will return an absolute (full) path, which is basically a path that
starts with a forward slash (/). An example of an absolute path
is /home/username.
2. cd command
To navigate through the Linux filesystem, use the cd command. It requires either
the full path or the name of the directory, depending on the current directory
you’re in.
Let’s say you’re in /home/username/Documents and want to go to Photos, a
subdirectory of Documents. To do so, simply type cd Photos.
Another scenario is if you want to switch to a completely new directory,
say /home/username/Movies. In this case, you have to type cd followed by the
directory’s absolute path.
There are some shortcuts if you want to navigate quickly. Use cd .. (with two dots)
to move one directory up, or go straight to the home folder with cd on its own. To
move to your previous directory, type cd - (with a hyphen).
On a side note, Linux's shell is case sensitive. Hence, you have to type the
directory's name exactly as it is.
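The shortcuts above can be sketched as a small session; the directory names are illustrative, and /tmp is assumed to be writable:

```shell
# Directory names are illustrative; run from any writable location
cd /tmp && pwd                # pwd prints the absolute path of the current directory
mkdir -p demo/Photos          # create a small tree to walk around in
cd demo/Photos && pwd         # now in /tmp/demo/Photos
cd ..                         # two dots: up one level, into /tmp/demo
cd -                          # hyphen: back to the previous directory (Photos)
```

Note that cd .. needs a space between the command and the dots, and cd - also prints the directory it switched back to.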
3. ls command
ls command is used to view the contents of a directory. By default, this command
will display the contents of your current directory.
If you want to see the content of other directories, type ls and then the directory’s
path. For example, enter ls /home/username/Documents to view the content
of Documents.

4. cat command
cat is one of the most frequently used commands in Linux. It is used to view the
content of a file on the standard output (stdout). To run this command,
type cat followed by the file’s name and its extension. For instance: cat file.txt.
5. cp command
Use the cp command to copy files from the present directory. For instance, the
command cp scenery.jpg /home/username/Pictures would create a copy
of scenery.jpg to the Pictures directory.
6. mv command
The primary use of the mv command is to move files, although it can also be used
to rename files.
The arguments in this command are similar to the cp command. You need to
type mv, the file's name, and the destination directory. For example: mv file.txt /home/username/Documents
To rename files, the syntax is mv oldname.ext newname.ext
7. mkdir command
Use mkdir command to make a new directory — like mkdir Music will create a new
directory called Music.
8. rmdir command
If you need to delete a directory, use the rmdir command. However, rmdir only
allows you to delete empty directories.
9. rm command
The rm command is used to delete files. To delete a directory along with the
contents within it (as an alternative to rmdir), use rm -r.
10. touch command
The touch command allows you to create blank new files through the command
line. As an example, enter touch /home/username/Documents/Web.html to
create an HTML file entitled Web under the Documents directory.
11. locate command
You can use this command to locate a file, just like the search command in
Windows. What’s more, using the -i argument along with this command will make

it case-insensitive, so you can search for a file even if you don’t remember its
exact name.
To search for a file whose name contains two or more words, use an asterisk (*). For
example, the locate -i school*note command will search for any file whose name contains
the words "school" and "note", no matter if it is uppercase or lowercase.
12. find command
Similar to the locate command, using find also searches for files. The difference is,
you use the find command to locate files within a given directory.
As an example, find /home/ -name notes.txt command will search for a file
called notes.txt within the home directory and its subdirectories.
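A minimal find session, using a small throwaway tree instead of the real home directory (all names are illustrative):

```shell
# Build a tiny tree to search (names are illustrative)
mkdir -p home/docs && touch home/docs/notes.txt
find home -name notes.txt          # search by exact name, recursively
find home -type f -name "*.txt"    # -type f: regular files only; quote wildcards
find home -type d                  # -type d: directories only
```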
13. grep command
Another command that is undoubtedly very useful for everyday use. grep lets you
search through the text in a given file.
To illustrate, grep blue notepad.txt will search for the word blue in the notepad
file. Lines that contain the searched word will be displayed fully.
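The notepad example can be tried end to end with a generated sample file (contents are illustrative):

```shell
printf 'the sky is blue\nBlue is calm\ngrass is green\n' > notepad.txt
grep blue notepad.txt         # lines containing "blue" (case sensitive)
grep -i blue notepad.txt      # -i: ignore case, so "Blue" matches too
grep -in blue notepad.txt     # -n: prefix each match with its line number
grep -ic blue notepad.txt     # -c: print only the count of matching lines
```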
14. sudo command
Short for “SuperUser Do”, this command enables you to perform tasks that require
administrative or root permissions. However, it is not advisable to use this
command for daily use because it might be easy for an error to occur if you did
something wrong.
15. df command
Use df command to get a report on the system’s disk space usage, shown in
percentage and KBs. If you want to see the report in megabytes, type df -m.
16. du command
If you want to check how much space a file or a directory takes, the du (Disk
Usage) command is the answer. However, the disk usage summary will show disk
block numbers instead of the usual size format. If you want to see it in bytes,
kilobytes, and megabytes, add the -h argument to the command line.
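The df and du variants described above, in one short sketch (the --max-depth option is GNU du specific):

```shell
df -h                    # per-filesystem usage with human-readable sizes
df -m                    # the same report in megabytes
du -sh .                 # -s: a single summary line for this directory
du -h --max-depth=1 .    # per-subdirectory totals, one level deep (GNU du)
```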
17. head command
The head command is used to view the first lines of any text file. By default, it
will show the first ten lines, but you can change this number to your liking. For
example, if you only want to show the first five lines, type head -n 5 filename.ext

18. tail command
This one has a similar function to the head command, but instead of showing the
first lines, the tail command will display the last ten lines of a text file.
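Both commands can be demonstrated on a generated 20-line file (the file name is illustrative):

```shell
seq 1 20 > numbers.txt    # a 20-line sample file
head numbers.txt          # first 10 lines (the default)
head -n 5 numbers.txt     # only the first 5 lines
tail -n 5 numbers.txt     # the last 5 lines: 16 through 20
# tail -f some.log        # "follow" mode: keep printing new lines as they arrive
```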
19. diff command
Short for difference, the diff command compares the content of two files line by
line. After analyzing the files, it will output the lines that do not match.
Programmers often use this command when they need to make some program
alterations instead of rewriting the entire source code.
The simplest form of this command is diff file1.ext file2.ext
20. tar command
The tar command is the most widely used command to archive multiple files into
a tarball — a common Linux file format that is similar to zip format, but
compression is optional.
This command is quite complex with a long list of functions such as adding new
files into an existing archive, listing the content of an archive, extracting the
content from an archive, and many more. Check out some practical examples to
know more about other functions.
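As a starting point for the practical examples mentioned above, here is a minimal create / list / extract round trip (file names are illustrative):

```shell
mkdir -p project && echo "hello" > project/a.txt
tar -czf project.tar.gz project      # -c create, -z gzip-compress, -f archive name
tar -tzf project.tar.gz              # -t list the contents without extracting
mkdir -p restore
tar -xzf project.tar.gz -C restore   # -x extract, -C choose the target directory
cat restore/project/a.txt
```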
21. chmod command
chmod is another essential command, used to change the read, write, and execute
permissions of files and directories. As this command is rather complicated, you
can read the full tutorial in order to execute it properly.
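The two common chmod notations, octal and symbolic, on a throwaway file (the file name is illustrative):

```shell
touch deploy.sh
chmod 755 deploy.sh        # octal: rwxr-xr-x (owner full; group/others read+execute)
chmod u+x,go-w deploy.sh   # symbolic: add execute for the user, remove write for group/others
ls -l deploy.sh            # the first column shows the resulting permission bits
```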
22. chown command
In Linux, all files are owned by a specific user. The chown command enables you
to change or transfer the ownership of a file to the specified username. For
instance, chown linuxuser2 file.ext will make linuxuser2 as the owner of
the file.ext.
23. jobs command
jobs command will display all current jobs along with their statuses. A job is
basically a process that is started by the shell.
24. kill command
If you have an unresponsive program, you can terminate it manually by using
the kill command. It will send a certain signal to the misbehaving app, instructing
it to terminate itself.

There are a total of sixty-four signals that you can use, but people usually only use
two signals:
 SIGTERM (15) — requests a program to stop running and gives it some time
to save all of its progress. If you don’t specify the signal when entering the
kill command, this signal will be used.
 SIGKILL (9) — forces programs to stop immediately. Unsaved progress will
be lost.
Besides knowing the signals, you also need to know the process identification
number (PID) of the program you want to kill. If you don’t know the PID, simply
run the command ps ux.
After knowing what signal you want to use and the PID of the program, enter the
following syntax:
kill [signal option] PID.
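The syntax can be tried safely against a disposable background process instead of a real application:

```shell
sleep 300 &         # start a long-running process in the background
pid=$!              # $! holds the PID of the most recent background job
kill -15 "$pid"     # SIGTERM: polite request to exit (the default signal)
# kill -9 "$pid"    # SIGKILL: force-stop, only when SIGTERM is ignored
```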
25. ping command
Use the ping command to check your connectivity status to a server. For example,
by simply entering ping, the command will check whether you’re able
to connect to Google and also measure the response time.
26. wget command
The Linux’s command line is super useful — you can even download files from the
internet with the help of the wget command. To do so, simply type wget followed
by the download link.
27. uname command
The uname command, short for Unix Name, will print detailed information about
your Linux system like the machine name, operating system, kernel, and so on.
28. top command
As a terminal equivalent to Task Manager in Windows, the top command will
display a list of running processes and how much CPU each process uses. It’s very
useful to monitor the system resource usage, especially knowing which process
needs to be terminated because it consumes too many resources.
29. history command
When you’ve been using Linux for a certain period of time, you’ll quickly notice
that you can run hundreds of commands every day. As such,
running history command is particularly useful if you want to review the
commands you’ve entered before.

30. man command
Confused about the function of certain commands? Don’t worry, you can easily
learn how to use them right from Linux’s shell by using the man command. For
instance, entering man tail will show the manual instruction of the tail command

Monitoring commands:

Top – Linux Process Monitoring
The Linux top command is a performance monitoring program which is frequently used by many system administrators to monitor Linux performance, and it is available under many Linux/Unix-like operating systems. The top command displays all the running and active real-time processes in an ordered list and updates the list regularly. It displays CPU usage, memory usage, swap memory, cache size, buffer size, process PID, user, commands and much more. It also shows the memory and CPU utilization of running processes.
# top

Vmstat – Virtual Memory Statistics

The Linux vmstat command is used to display statistics of virtual memory, kernel threads, disks, system processes, I/O blocks, interrupts, CPU activity and much more. On minimal installations vmstat may not be present by default; it is provided by the procps package on most distributions (the related iostat and mpstat tools come from the sysstat package). The common usage of the command is:
# vmstat
Lsof – List Open Files

The lsof command is used on many Linux/Unix-like systems to display a list of all open files and the processes that opened them. The open files include disk files, network sockets, pipes and devices. One of the main reasons for using this command is when a disk cannot be unmounted and displays an error that files are being used or opened. With this command you can easily identify which files are in use. The most common format for this command is:
# lsof

Tcpdump – Network Packet Analyzer

Tcpdump is one of the most widely used command-line network packet analyzers (packet sniffers). It is used to capture or filter TCP/IP packets that are received or transferred on a specific interface over a network. It also provides an option to save the captured packets to a file for later analysis. tcpdump is available in almost all major Linux distributions.
# tcpdump -i eth0

Netstat – Network Statistics

Netstat is a command-line tool for monitoring incoming and outgoing network packet statistics as well as interface statistics. It is a very useful tool for every system administrator to monitor network performance and troubleshoot network-related problems.
# netstat -a | more

Htop – Linux Process Monitoring

Htop is a much more advanced, interactive, real-time Linux process monitoring tool. It is very similar to the Linux top command but has some rich features like a user-friendly interface to manage processes, shortcut keys, and vertical and horizontal views of the processes. Htop is a third-party tool and isn't included in Linux systems by default; you need to install it using your package manager (for example, YUM).

# htop

Iotop – Monitor Linux Disk I/O

Iotop is also very similar to the top command and the Htop program, but it has an accounting function to monitor and display real-time disk I/O per process. This tool is very useful for finding the exact processes with high disk read/write usage.
# iotop

Iostat – Input/Output Statistics

Iostat is a simple tool that collects and shows system input and output storage device statistics. It is often used to trace storage device performance issues, including local disks and remote disks such as NFS.

# iostat

ps – Displays the Linux processes

The ps command reports a snapshot of the current processes. To select all processes use the -A or -e option:
# ps -A

Print All Process On The Server
# ps ax
# ps axu

Want To Print A Process Tree?
# ps -ejH
# ps axjf
# pstree
Top 10 memory-consuming processes
# ps -auxf | sort -nr -k 4 | head -10
Top 10 CPU-consuming processes
# ps -auxf | sort -nr -k 3 | head -10

free – Show Linux server memory usage
The free command shows the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.
# free

mpstat – Monitor multiprocessor usage on Linux
The mpstat command displays activities for each available processor, processor 0 being the first one. Run mpstat -P ALL to display average CPU utilization per processor:
# mpstat -P ALL

pmap – Monitor process memory usage on Linux
The pmap command reports the memory map of a process. Use this command to find out the causes of memory bottlenecks.
# pmap -d PID
To display process memory information for pid # 47394, enter:
# pmap -d 47394

/proc file system – Various Linux kernel statistics
The /proc file system provides detailed information about various hardware devices and other Linux kernel internals. See the Linux kernel /proc documentation for further details. Common /proc examples:
# cat /proc/cpuinfo
# cat /proc/meminfo
# cat /proc/zoneinfo
# cat /proc/mounts

Networking Commands:


The ifconfig utility is used to configure network interface parameters.
Mostly we use this command to check the IP address assigned to the system.
# ifconfig

PING Command
The PING (Packet INternet Groper) command is the best way to test connectivity between two nodes, whether on a Local Area Network (LAN) or a Wide Area Network (WAN). Ping uses ICMP (Internet Control Message Protocol) to communicate with other devices. You can ping a host name or IP address using the command below.
# ping <host-or-IP>

7. ROUTE Command

The route command shows and manipulates the IP routing table. To see the default routing table in Linux, type the following command.
# route

Kernel IP routing table
Destination     Gateway        Genmask          Flags  Metric  Ref  Use  Iface
192.168.1.0     *              255.255.255.0    U      0       0    0    eth0
link-local      *              255.255.0.0      U      1002    0    0    eth0
default         192.168.1.1    0.0.0.0          UG     0       0    0    eth0
(sample output; the addresses will vary on your system)

Add and delete routes and the default gateway with the following commands.

Route adding
# route add -net <network> netmask <mask> gw <gateway-IP>
Route deleting
# route del -net <network> netmask <mask> gw <gateway-IP>
Adding the default gateway
# route add default gw <gateway-IP>

8. HOST Command

The host command finds the name-to-IP or IP-to-name mapping in IPv4 or IPv6 and can also query DNS records.
# host <domain>
The output lists the domain's A records ("<domain> has address ...") and its AAAA record, e.g. "has IPv6 address 2404:6800:4003:802::1014".
Using the -t option we can find out DNS resource records like CNAME, NS, MX, SOA etc.
# host -t CNAME <domain>

9. ARP Command

ARP (Address Resolution Protocol) is useful for viewing and adding the contents of the kernel's ARP tables. To see the default table use the command:
# arp -e

Address          HWtype  HWaddress          Flags Mask  Iface
<gateway-IP>     ether   00:50:56:c0:00:08  C           eth0

10. ETHTOOL Command

ethtool is a replacement for mii-tool. It is used to view and set the speed and duplex of your Network Interface Card (NIC). You can set the duplex permanently in /etc/sysconfig/network-scripts/ifcfg-eth0 with the ETHTOOL_OPTS variable.
# ethtool eth0

Settings for eth0:
Current message level: 0x00000007 (7)
Link detected: yes

11. IWCONFIG Command
The iwconfig command in Linux is used to configure a wireless network interface. You can see and set basic Wi-Fi details like the SSID, channel and encryption. You can refer to the man page of iwconfig to know more.

# iwconfig [interface]

12. HOSTNAME Command
hostname identifies a machine on a network. Execute the hostname command to see the hostname of your box. You can set the hostname permanently in /etc/sysconfig/network; you need to reboot the box once a proper hostname is set.

traceroute prints the route packets take to a network host.
A destination host or IP is a mandatory parameter for this utility.
# traceroute <host>

dig (Domain Information Groper) is a flexible tool for interrogating DNS name servers.
It performs DNS lookups and displays the answers that are returned from the name servers.

telnet connects to a destination host:port via the telnet protocol; if the connection establishes, connectivity between the two hosts is working fine.
telnet <host> 443

nslookup is a program to query Internet domain name servers.

[root@localhost ~]# nslookup

The netstat command gives you a simple way to review each of your network connections and open sockets.

netstat piped through head is very helpful when performing web server troubleshooting.

[root@localhost ~]# netstat


nmap is one of the most powerful commands; it checks for open ports on a server.

Usage example:
nmap $server_name

Enable/Disable Network Interface

You can enable or disable a network interface by using the ifup/ifdown commands with the ethernet interface as a parameter.

To enable eth0

#ifup eth0

To disable eth0
#ifdown eth0

Some Useful commands:

  • arpwatch – Ethernet Activity Monitor.
  • bmon – bandwidth monitor and rate estimator.
  • bwm-ng – live network bandwidth monitor.
  • curl – transferring data with URLs. (or try httpie)
  • darkstat – captures network traffic, usage statistics.
  • dhclient – Dynamic Host Configuration Protocol Client
  • dig – query DNS servers for information.
  • dstat – replacement for vmstat, iostat, mpstat, netstat and ifstat.
  • ethtool – utility for controlling network drivers and hardware.
  • gated – gateway routing daemon.
  • host – DNS lookup utility
  • hping – TCP/IP packet assembler/analyzer.
  • ibmonitor – shows bandwidth and total data transferred.
  • ifstat –  report network interfaces bandwidth.
  • iftop – display bandwidth usage.
  • ip – a command with more features than ifconfig (net-tools).
  • iperf3 – network bandwidth measurement tool.
  • iproute2 – collection of utilities for controlling TCP/IP.
  • iptables – take control of network traffic.
  • IPTraf – An IP Network Monitor.
  • iputils – set of small useful utilities for Linux networking.
  • iw – a new nl80211 based CLI configuration utility for wireless devices.
  • jwhois (whois) – client for the whois service.
  • lsof -i – reveal information about your network sockets.
  • mtr – network diagnostic tool.
  • net-tools – utilities include: arp, hostname, ifconfig, netstat, rarp, route, plipconfig, slattach, mii-tool, iptunnel and ipmaddr.
  • ncat – improved re-implementation of the venerable netcat.
  • netcat – networking utility for reading/writing network connections.
  • nethogs – a small ‘net top’ tool.
  • Netperf – Network bandwidth Testing.
  • netsniff-ng – Swiss army knife for daily Linux network plumbing.
  • netstat – Print network connections, routing tables, statistics, etc.
  • netwatch – monitoring Network Connections.
  • ngrep – grep applied to the network layer.
  • nload – display network usage.
  • nmap – network discovery and security auditing.
  • nmcli – a command-line tool for controlling NetworkManager and reporting network status.
  • nmtui – provides a text interface to configure networking by controlling NetworkManager.
  • nslookup – query Internet name servers interactively.
  • ping – send icmp echo_request to network hosts.
  • slurm – network load monitor.
  • snort – Network Intrusion Detection and Prevention System.
  • smokeping –  keeps track of your network latency.
  • socat – establishes two bidirectional byte streams and transfers data between them.
  • speedometer – Measure and display the rate of data across a network.
  • speedtest-cli – test internet bandwidth using speedtest.net.
  • ss – utility to investigate sockets.
  • ssh –  secure system administration and file transfers over insecure networks.
  • tcpdump – command-line packet analyzer.
  • tcptrack – Displays information about tcp connections on a network interface.
  • telnet – user interface to the TELNET protocol.
  • tracepath – very similar function to traceroute.
  • traceroute – print the route packets trace to network host.
  • vnStat – network traffic monitor.
  • websocat – Connection forwarder from/to web sockets to/from usual sockets, in style of socat.
  • wget –  retrieving files using HTTP, HTTPS, FTP and FTPS.
  • Wireless Tools for Linux – includes iwconfig, iwlist, iwspy, iwpriv and ifrename.
  • Wireshark – network protocol analyzer.

DevOps Course Material

DevOps is a term that refers to a set of practices which emphasize the communication and collaboration of software developers and the rest of the IT professionals, while automating the software integration and delivery process. While Agile has been in the industry for a while, DevOps is a relatively new concept which has turned into a movement in the technical arena. Simply put, DevOps refers to Development + Operations, where operations is a blanket term for everyone apart from developers, such as system engineers, system administrators etc. It aims at enhancing the collaboration between the two fields to produce a high-performing and reliable end product. In order to be successful in the DevOps arena, one should be thorough with various IT concepts:

⦁    Operating System (Linux – one of the most widely adopted open-source OSes)
⦁    Version Control System (Git – leading market shareholder)
⦁    Build tool – Maven
⦁    CI & CD tool – Jenkins
⦁    Configuration Management tool – Chef
⦁    Cloud Services – AWS
⦁    Containerization services – Docker


The Linux booting process proceeds through the following stages: BIOS → MBR → bootloader (GRUB) → kernel → init → runlevel programs.

The basic concept to be understood in Linux is the runlevels of a Linux OS. There are a total of 7 runlevels:
⦁    0 – halt
⦁    1 – single user
⦁    2 – multiuser without NFS
⦁    3 – full multiuser mode
⦁    4 – unused
⦁    5 – full multiuser with networking and GUI (X11)
⦁    6 – reboot

An inode, also called an index number, identifies a file and its attributes – use the -i flag of ls to display it. A few other important commands in Linux include:

To search for files and directories – find . -name "string" -type f/d

To communicate with another host using the Telnet protocol – telnet [host [port]] (if the telnet server is running, it will be listening on TCP 23 by default)
To kill a process – kill -3 PID
To log in to a remote machine and execute commands – ssh remote_host
To securely copy files between remote hosts – scp source_file_path user@dest_host:dest_path
To check the file systems' disk space usage (DISK FREE) – df -h (-T type, -m MB, -k KB)
To check the DISK USAGE of directories, subdirectories and files – du -h (-s summarize)
To find the top 10 files using the most disk space – du -a | sort -nr | head -n 10
To monitor processes based on CPU/memory usage – top (M – memory, P – CPU usage, N – by PID, T – running time, i – idle processes)
To automate routine tasks – crontab -e, -r (delete), -l (list)




To preserve the time stamp while copying – cp -p
To display the firewall status – iptables -L -n, -F (to clear)
To get the CPU info – cat /proc/cpuinfo
To run a program in the background: nohup node server.js > /dev/null 2>&1 &
nohup means: do not terminate this process even when the stty is cut off.
> /dev/null means: stdout goes to /dev/null (a dummy device that does not record any output).
2>&1 means: stderr also goes to stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g. 2> /tmp/myLog. The & at the end means: run this command as a background task.

To check which top 10 processes are using the most memory – ps aux --sort=-%mem | awk 'NR<=10{print $0}'
Disk partition commands: fdisk /dev/sda (m – help, n – new, d – delete, w – apply changes, p – print, s – size, x followed by f – fix the order of the partition table)
To transfer files remotely and locally – rsync [options] source destination (v – verbose, r – recursive, a – archive (r + symbolic links, file permissions, user & group ownerships and timestamps), z – compress, h – human-readable, --exclude, --include, -e to specify the protocol (ssh), --progress, --delete, --max-size, --remove-source-files, --dry-run)

To analyze packets (Linux packet sniffer) – tcpdump -i eth0 (-c to specify the number of packets, -D display available interfaces, -w capture & save, -r read captured, -n don't resolve addresses)
Linux /proc contents: apm, cpuinfo, devices, dma, filesystems, iomem, loadavg, locks, meminfo, mounts, partitions, swaps, uptime.
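A minimal local rsync sketch, assuming rsync is installed; the directory names and the remote host in the commented line are hypothetical:

```shell
mkdir -p src && echo "data" > src/file.txt
# -a archive mode (recursion, permissions, timestamps), -v verbose, -h human-readable
rsync -avh src/ backup/                        # trailing slash: copy src's contents into backup/
# rsync -avz -e ssh src/ user@host:/backup/    # same idea over ssh (hypothetical host)
ls backup
```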


Both ping and telnet can be used for diagnosing network problems. Ping simply gets a response, and that is all it can do, while with telnet it is possible to probe a given port.

Telnet sends usernames and passwords over the network in clear-text format, so it is open to eavesdropping and considered very insecure.

The main advantage of SSH over Telnet is that all the communication is encrypted; sensitive user data travels encrypted over the internet, so it is not possible to extract these credentials easily.


It is done by basically monitoring the following subsystems: CPU, memory, I/O and network. Four critical performance metrics for the CPU are:
1. Context switch – during multitasking, the CPU stores the current state of a process before switching to another; a high rate of this causes performance issues.
2. Run queue – refers to the number of active processes queued for the CPU; the higher the number, the lower the performance.
3. CPU utilization – observed through the top command; a high percentage of utilization causes performance issues.
4. Load average – the average load on the CPU, displayed for the last 1, 5 and 15 minutes.
Network – things like the number of packets received, sent and dropped need to be monitored for the network interfaces.
I/O – a higher I/O wait time indicates a disk subsystem problem. Reads per second and writes per second need to be monitored in blocks, referred to as bi/bo (blocks in/blocks out). TPS = RPS + WPS.
Memory – virtual memory = swap space available on the disk + physical memory.
Remember the 80/20 rule – 80% of the performance improvement comes from tuning the application, and the remaining 20% comes from tuning the infrastructure components.
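A quick snapshot of the CPU, memory and I/O subsystems discussed above can be taken with standard tools (availability of vmstat and free depends on the installed packages):

```shell
top -b -n 1 | head -15   # -b batch mode: one non-interactive snapshot of processes
vmstat 1 3               # run queue, swap, I/O blocks and CPU, sampled 3 times
free -m                  # physical and swap memory usage in megabytes
```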

Performance tuning: The parameters altered to tune a Linux machine live under the /proc/sys folder. They include: the shared memory segment (an inter-process communication mechanism that enables a memory segment to be visible to all processes in a single namespace), swappiness (how aggressively memory pages are swapped to disk), and disabling the kernel's ability to respond to ICMP broadcast ping requests.


Passwordless SSH from Server A to Server B:
⦁ Generate the ssh keys on A:

ssh-keygen -t rsa
⦁ Change the password authentication to yes on A:

sudo vi /etc/ssh/sshd_config
⦁ From the .ssh folder:

ssh-copy-id user@host

⦁    Check folder permissions in case of any issue: 700 for the user home directory and the .ssh folder, 600 for the files in it.
⦁    Once done, change the password authentication back to no in the sshd_config file.
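The key-generation step can be sketched non-interactively; the key file name and the remote host in the commented lines are hypothetical, and -N "" (empty passphrase) is for the demo only:

```shell
# Key file name is illustrative; -N "" sets an empty passphrase (demo only)
ssh-keygen -t rsa -b 4096 -N "" -f ./id_rsa_demo -q
cat ./id_rsa_demo.pub                            # the public half: this one line is what
                                                 # lands in ~/.ssh/authorized_keys on B
# ssh-copy-id -i ./id_rsa_demo.pub user@hostB    # then: ssh -i ./id_rsa_demo user@hostB
```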
OPENING PORTS
To open a port: sudo ufw allow portno/porttype – for example sudo ufw allow 22/tcp
The service-specific syntax is as follows, to open the http and https service ports:
sudo ufw allow http
sudo ufw allow https
OR
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
Advanced examples:
To allow an IP address access to port 22 for all protocols: sudo ufw allow from <ip-address> to any port 22
To open port 443 (SSL for an nginx/apache/lighttpd server) for all, enter: sudo ufw allow from any to any port 443 proto tcp
To allow a subnet access to Samba services, enter:
sudo ufw allow from <subnet> to any app Samba
You can find service info as follows:
sudo ufw app list


By default, the sshd daemon is configured to refuse direct connections by the root user, so you won't be able to log in over SSH as root with a password. For security reasons, avoid enabling direct SSH access for the root user. Instead, connect by using the user ID associated with your operating system (for example, "ec2-user" for many Linux distributions) and a key pair.
If you need to add a root user password temporarily:
⦁    Connect to your EC2 instance running Linux by using SSH.
⦁    Assume root user permissions by running the following command: sudo su
passwd root – to create a temporary password
passwd -l root – to lock the password again


To create a user and set up key-based login for them (one command per line):
sudo adduser user_name
sudo su - user_name
mkdir .ssh
chmod 700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
exit
ssh-keygen -t rsa (copy the generated public key)
sudo su - user_name (paste the key into .ssh/authorized_keys)
sudo userdel -r user_name (to delete the user afterwards)


Routine server administration tasks:
⦁    Verify whether your backups are working or not
⦁    Check disk usage
⦁    Monitor RAID alarms
⦁    Keep on top of your OS updates
⦁    Check your application updates
⦁    Check for hardware errors (through log files)
⦁    Check server utilization (sysstat)
⦁    Review user accounts
⦁    Change passwords
⦁    Periodically audit server security

To display all users: getent passwd

SLOW SERVER DEBUGGING:

Courtesy: Scout Monitoring


  sudo su
  passwd root
  enter password:
  re-enter password:
  passwd -l root


Git is the most widely used distributed version control system, used for storing source code. Unlike other, centralized VCSs, in Git every developer's working copy of the code is also a repository that can contain the full history of all changes.


A typical VCS has two trees, viz. the working copy and the repository. While the working copy stores the copy of the code on which you are currently working, the repository stores the versions of the code. Checking out is the process of getting files from the repository to the working copy, and committing is the process of placing work from the working copy into the repository. Git has a third tree, referred to as staging, which sits between the repository and the working copy and is the place where all changes to be committed are prepared. Thus, you have a chance to review your changes once more before committing them.

Server-client: Although GIT is a DVCS, it uses the server-client concept to avoid ambiguity among developers and to ensure the availability of the code even when a developer’s system is offline. The source code is stored centrally on a server from which each developer checks out and to which each developer commits.


GitHub, Bitbucket and GitLab are repository hosting platforms that enable the management of repositories. Out of these three, only GitLab has an open-source version, and GitHub is the largest repository host, with more than 38 million projects.


⦁ GIT is a distributed VCS while SVN is a centralized VCS (in SVN the version history is on the server-side copy).
⦁ In the case of large binary files, GIT is problematic in storing them, while with SVN checkout times are faster since only the latest changes are checked out.
⦁ Branching is considered lighter in GIT than in SVN, as in GIT a commit to a local repository is referred to as a branch and it is easy to create and publish branches at discretion.
⦁ While GIT assumes all contributors have the same permissions, SVN allows specifying read and write access controls.
⦁ While SVN calls for a network connection for every commit, GIT doesn’t.


The basic Git workflow goes something like this:
⦁    You clone the files from your remote repository to your working tree.
⦁    You modify files in your working tree.
⦁    You stage the files, adding snapshots of them to your staging area.
⦁    You do a commit, which takes the files as they are in the staging area and stores that snapshot permanently in your Git directory.
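The last three steps above can be sketched in a throwaway repository (the clone step is replaced by `git init` so the sketch is self-contained; the file name and identity are placeholders):

```shell
# modify -> stage -> commit in a fresh repo; paths/identity are illustrative
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "hello" > app.txt            # modify a file in the working tree
git add app.txt                   # stage it: a snapshot goes to the staging area
git commit -q -m "add app.txt"    # store the staged snapshot in the Git directory
```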


To initialize a git repository – git init .

To get a local working copy of repository (for the first time) – git clone path

To create a branch – git checkout -b branch_name

To add a file to repository – git add file_name

To commit the changed files – git commit -m “message”

To get the changes on to the local repository (.git) – git fetch

To update the working copy with new commits on repository – git pull = fetch + merge

To merge a branch to master (on master) – git merge branch_name

To check the link between the local and remote repository – git remote -v

To add a link – git remote add path_to_repository

To remove a file from the index and stop tracking it – git rm --cached file_name

To go back to a commit – git revert commit_hash

To go back to a commit and erase the history too – git reset commit_hash (--soft resets HEAD only; --mixed resets HEAD and staging; --hard resets HEAD, staging and the working directory)

To delete untracked files – git clean -f

To update the last commit message – git commit --amend -m “message”

To choose a commit from a branch and apply it – git cherry-pick commit_hash

To record the current state of working directory and index and keep the working directory clean – git stash save “message”

To get the stashed changes back – git stash apply stash@{n}

To see commits – git log

To compare modified files – git diff (--color-words to highlight changes)

To denote specific release versions of the code – git tag -a name -m “message”

To squash the last N commits into a single commit – git rebase -i HEAD~N

To list files changed in a particular commit – git diff-tree -r commit_hash
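A quick, self-contained illustration of the reset modes listed above (repository contents are made up): a soft reset drops the last commit from history but keeps its changes staged.

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "v1" > file.txt; git add file.txt; git commit -q -m "first"
echo "v2" > file.txt; git add file.txt; git commit -q -m "second"

git reset --soft HEAD~1   # "second" disappears from history...
git status --short        # ...but file.txt is still staged
```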


A merge conflict happens when two branches both modify the same region of a file and are subsequently merged. Git can’t know which of the changes to keep, and thus needs human intervention to resolve the conflict.

Merge conflicts can be resolved either by using git mergetool, which walks us through each conflict in a GUI, or by using the diff3 merge conflict style, enabled with git config merge.conflictStyle diff3. It separates the conflicted region into three parts: the first part is the destination of the merge, the second is the common ancestor, and the third is the source.
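A minimal sketch of how a conflict arises (file and branch names are invented): both branches change the same line, the merge stops, and Git leaves conflict markers in the file for a human to resolve.

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "base" > greeting.txt; git add .; git commit -q -m "base"
git checkout -q -b feature
echo "feature version" > greeting.txt; git commit -qam "feature edit"
git checkout -q -                      # back to the default branch
echo "default version" > greeting.txt; git commit -qam "default edit"

git merge feature || true              # the merge stops with a conflict
grep '<<<<<<<' greeting.txt            # conflict markers are now in the file
```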


When we want the changes from master in our working copy, it can be done in two ways: merging and rebasing. If we merge, the changes on master are applied onto the working copy with a new merge commit. The second option is to rebase the entire feature branch onto the tip of the master branch after fetching the changes, thus rewriting the actual history of commits.

The benefits of rebasing are that it avoids the unnecessary merge commits required by git merge and also yields a linear project history.
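A self-contained sketch of the rebase option (branch and file names are invented): the feature commit is replayed on top of the updated default branch, leaving a linear history.

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "a" > a.txt; git add .; git commit -q -m "initial"
main=$(git symbolic-ref --short HEAD)   # default branch name varies by setup
git checkout -q -b feature
echo "f" > f.txt; git add .; git commit -q -m "feature work"
git checkout -q "$main"
echo "b" > b.txt; git add .; git commit -q -m "main moves on"

git checkout -q feature
git rebase -q "$main"                   # replay the feature commit onto the tip
git log --oneline                       # linear history, no merge commit
```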


Main branches: master and develop, with unlimited lifetime. In master, the HEAD of the source code always reflects a production-ready state, while in develop it reflects a state with the latest delivered development changes. The develop branch is also called the integration branch. When the source code in the develop branch reaches a stable point, it is merged into master and tagged with a release number, and an automatic deployment process takes place through git hooks.

Supporting branches: feature, release and hotfix branches have a limited lifetime and are removed eventually. Feature branches are used to develop new features of the product and should originate from and merge into the develop branch only. They exist in developer repositories only as long as that particular feature is under development. Release branches support the preparation of a new production release and allow for minor bug fixes and metadata preparation. A release branch is created when the develop branch reflects the desired state of the new release, thus freeing the develop branch to receive new features for the next release. At this stage, the version number for the upcoming release is assigned. It originates from the develop branch and merges into the master and develop branches. When a critical bug in a production version must be resolved immediately, a hotfix branch may be branched off from the corresponding tag on the master branch that marks the production version. It originates from master and merges into develop and master; when there is a live release branch, it has to be merged into the release branch instead of develop.
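The feature-branch flow described above can be sketched with plain git commands (branch and file names are illustrative):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "v0" > version.txt; git add .; git commit -q -m "production state"
git checkout -q -b develop               # long-lived integration branch
git checkout -q -b feature/login         # feature branches come off develop...
echo "login" > login.txt; git add .; git commit -q -m "add login"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login   # ...and merge back into develop only
git branch -d feature/login              # feature branches are removed eventually
```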


git svn clone repository_path destination_path – this will migrate everything to the destination path.

SVN is a centralized VCS and GIT is a distributed VCS, which gives more flexibility, is useful for everyone and allows parallel work. GitHub licensing: 10 users $2500, or on average $250 per user.

We can have GitHub in an organization in three modes – hosted, on-premises and cloud. On-premises: for this mode, GitHub provides us with an OVA file, which is the software appliance. Using VMware (with the vSphere client), we deploy the OVA file in vSphere, which installs GitHub on-premises, and we then integrate any of the following: Active Directory, LDAP, SAML or built-in authentication. With built-in authentication, an account (user name and domain name) must be created every time.

Github uses the Debian distro.

For the on-premises web management console we have a monitoring tab, settings tab, upgrade tab and maintenance tab (we can enable scheduled maintenance under the maintenance tab while the server is under maintenance to intimate the users of the same). Under the Monitoring tab, we have GitHub Enterprise backend processes, IOPS, memory, code deployed/retrieved, and statistics. Under the Settings tab, we have the SSH connection (only to the administrator workstation), authentication, the SSL certificate (usually provided by a third party), domain name setup for GitHub access (we need to update DNS records in vSphere or Active Directory for this sake), and upgrade details about the version (current version 2.9.2 as of 2017) and the updates available (we always update only to the n-1 version, where n is the latest version). We can set up a proxy server in the settings tab.

We install the NTP service on the main server and the proxy server to keep their clocks in sync and ensure compatibility.

Once the domain name is set, then we will login to the website using our credentials. Then we will create an organization (each for a team) and will create repositories in it and assign users and collaborators to it.
Post migration, we will hold the repositories in svn/git until two successful releases in git.

The migration takes about one month; we do a PoC by migrating a test repository first, then migrate the remaining repositories phase-wise, two or three repositories at a time.


⦁   Retrieve a list of all Subversion committers….
From the root of your local Subversion checkout, run this command:
svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors-transform.txt
This will create an authors-transform.txt file with author names, which we need to edit to add the email addresses of the authors.

⦁   Clone the Subversion repository using git-svn….
git svn clone [SVN repo URL] --no-metadata -A authors-transform.txt --stdlayout ~/temp

⦁   Convert svn:ignore properties to .gitignore….
If your svn repo was using svn:ignore properties, you can easily convert this to a .gitignore file using:
cd ~/temp
git svn show-ignore > .gitignore
git add .gitignore
git commit -m ‘Convert svn:ignore properties to .gitignore.’

⦁   Push repository to a bare git repository….
First, create a bare repository and make its default branch match svn’s “trunk” branch name.
git init --bare ~/new-bare.git
cd ~/new-bare.git
git symbolic-ref HEAD refs/heads/trunk
Then push the temp repository to the new bare repository.

cd ~/temp
git remote add bare ~/new-bare.git
git config remote.bare.push 'refs/remotes/*:refs/heads/*'
git push bare
You can now safely delete the ~/temp repository.

⦁   Rename “trunk” branch to “master”…
Your main development branch will be named “trunk” which matches the name it was in Subversion. You’ll want to rename it to Git’s standard “master” branch using:
cd ~/new-bare.git
git branch -m trunk master

⦁   Clean up branches and tags….
git-svn makes all of Subversion’s tags into very short branches in Git of the form “tags/name”. You’ll want to convert all those branches into actual Git tags using:
cd ~/new-bare.git
git for-each-ref --format='%(refname)' refs/heads/tags | cut -d / -f 4 |
while read ref
do
  git tag "$ref" "refs/heads/tags/$ref"
  git branch -D "tags/$ref"
done

This is just for one single repository; if you have multiple repositories to migrate, put the paths of all the Subversion repositories in a single file and have them invoked by a shell script which does the migration work for you.
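One possible shape for such a driver script (the file name repos.txt, the URLs, and the use of `echo` as a dry-run stand-in for the real `git svn clone` are all assumptions of this sketch):

```shell
workdir=$(mktemp -d)
cd "$workdir"

# one SVN repository URL per line (hypothetical URLs)
printf '%s\n' \
  "http://svn.example.com/repo-a" \
  "http://svn.example.com/repo-b" > repos.txt

# dry run: print the migration command for each repository; replace the
# `echo` with the real invocation to perform the migration
while read -r url; do
  name=$(basename "$url")
  echo "git svn clone $url --no-metadata -A authors-transform.txt --stdlayout $name"
done < repos.txt > migrate.log
cat migrate.log
```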


Apache Maven is a project management and build tool which converts human-readable code into a machine-readable format.


There are three built-in build lifecycles: default, clean and site. The default lifecycle handles your project deployment, the clean lifecycle handles project cleaning, and the site lifecycle handles the creation of your project’s site documentation. In the default lifecycle, Maven has phases like validate, compile, test, package, verify, install and deploy.
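Invoking any phase runs all earlier phases of the default lifecycle as well (e.g. `mvn package` runs validate through package). A sketch of that ordering:

```shell
# default-lifecycle phases in the order Maven executes them
order="validate compile test package verify install deploy"
for phase in $order; do
  echo "$phase"
done
```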


Ant vs Maven:
⦁ Ant doesn’t have formal conventions, so we need to provide information about the project structure in the build.xml file. Maven has conventions for placing source code, compiled code etc., so we don’t need to provide information about the project structure in the pom.xml file.
⦁ Ant is procedural: you need to provide information about what to do and when to do it through code, and you need to provide the order. Maven is declarative: everything is defined in the pom.xml file.
⦁ There is no lifecycle in Ant. There is a lifecycle in Maven.
⦁ Ant is a toolbox. Maven is a framework.
⦁ Ant is mainly a build tool. Maven is mainly a project management tool.
⦁ Ant scripts are not reusable. Maven plugins are reusable.


A goal represents a specific task which contributes to the building and managing of a project. A build phase is made up of plugin goals, and goals provided by plugins can be bound to different phases of the lifecycle.


The dependency management section is a mechanism for centralizing dependency information. When you have a set of projects that inherits a common parent it’s possible to put all information about the dependency in the common POM and have simpler references to the artifacts in the child POMs. A second, and very important use of the dependency management section is to control the versions of artifacts used in transitive dependencies.
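A sketch of the two halves of this mechanism (the group/artifact/version values are illustrative): the parent POM pins the version once, and child POMs reference the artifact without repeating it.

```xml
<!-- parent POM: versions pinned centrally (coordinates are illustrative) -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>example-lib</artifactId>
      <version>1.2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- child POM: no <version> element; it is inherited from the parent -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>example-lib</artifactId>
  </dependency>
</dependencies>
```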


This plugin is used to release a project with Maven, saving a lot of repetitive, manual work. Releasing a project is made in two steps: prepare and perform.
⦁    release:clean – clean up after a release preparation.
⦁    release:prepare – prepare for a release in SCM.
⦁    release:prepare-with-pom – prepare for a release in SCM, and generate release POMs that record the fully resolved projects used.
⦁    release:rollback – roll back a previous release.
⦁    release:perform – perform a release from SCM.
⦁    release:stage – perform a release from SCM into a staging folder/repository.
⦁    release:branch – create a branch of the current project with all versions updated.
⦁    release:update-versions – update the versions in the POM(s).
The developerConnection element contains the URL of the Source Control Management system pointing to the folder containing this pom.xml. This URL is prefixed with scm:[scm-provider] so the plugin can pick the right implementation for committing and tagging. It is wise to do a dry run before the actual release.


Maven repositories are of three types: local, central and remote.

The Maven local repository is a folder location on your machine. It gets created when you run any Maven command for the first time. The Maven local repository keeps all your project’s dependencies (library jars, plugin jars etc.). It is referred to as the .m2 repository.
The Maven central repository is a repository provided by the Maven community. It contains a large number of commonly used libraries. When Maven does not find a dependency in the local repository, it starts searching the central repository. Maven also provides the concept of a remote repository, which is the developer’s own custom repository containing required libraries or other project jars, used when Maven doesn’t find the requisite dependency in either the local or the central repository.


This file contains elements used to define values which configure Maven execution in various ways, such as the local repository location, alternate remote repository servers, and authentication information. It is like the pom.xml, but should not be bundled with any specific project or distributed to an audience.

Local repository, plugin registry and groups, servers, mirrors, proxies, profiles etc.


The Project Object Model is an XML file which resides in the project base directory and contains information about the project and the various configuration details used for building it. It also contains the goals and plugins. The configuration details include project dependencies, plugins, goals, build profiles, project version, developers and mailing list.

All POMs inherit from a parent (default POM of Maven). This base POM is known as the Super POM, and contains values inherited by default.


The POMs of a project inherit the configuration specified in the super POM. This can be achieved by specifying a parent. While the super POM is one example of project inheritance, your own parent POMs can be introduced by specifying the parent element in the POM. Project aggregation is similar to project inheritance, but instead of specifying the parent POM from the module, it specifies the modules from the parent POM. By doing so, the parent project knows its modules, and if a Maven command is invoked against the parent project, that Maven command is then executed for the parent’s modules as well.

pom.xml contents: [Coordinates: groupId, artifactId, version, packaging, classifier] [POM relationships: dependencies (groupId, artifactId, version, type, scope: compile/provided/runtime/test/system, exclusions), parent, dependency management] Modules, [Build: default goal, directory, final name] Resources, Plugins, Plugin management, Reporting, [Organization, developers, contributors], [Environment settings: issue management, CI management, Mailing list, SCM], Repositories, Plugin repositories, Distribution management, Profiles, Activation.


Jenkins is a Java-based CI and CD application that increases your productivity and is helpful in building and testing software projects continuously.

Jenkins Installation:

Install Jenkins from the official page (version 2.46.2 at the time). Prerequisites: Java and Maven.

Set the path: add JAVA_HOME (the path to the Java installation) in the system and user variables, and add the path to the java executable to Path in the system variables.
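On Linux the same setup looks roughly like this (the JDK path below is a placeholder; use the directory where your JDK is actually installed):

```shell
# hypothetical JDK location; adjust for your system
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"
```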

Jenkins Master Folder Structure
JENKINS_HOME has a fairly obvious directory structure that looks like the following:


   +- config.xml (Jenkins root configuration)
   +- *.xml (other site-wide configuration files)
   +- userContent (files in this directory will be served under your http://server/userContent/)
   +- fingerprints (stores fingerprint records)
   +- plugins (stores plugins)
   +- workspace (working directory for the version control system)
   +- [JOBNAME] (subdirectory for each job)
   +- jobs
   +- [JOBNAME] (subdirectory for each job)
   +- config.xml (job configuration file)
   +- latest (symbolic link to the last successful build)
   +- builds
   +- [BUILD_ID] (for each build)
   +- build.xml (build result summary)
   +- log (log file)
   +- changelog.xml (change log)


New Item, People, Build History, Project Relationship, Check File Fingerprint, Manage Jenkins, My Views, Credentials


Commit jobs, nightly build jobs, deployment jobs and release jobs.




Maven Integration plugin, Text File Operations plugin, Artifactory plugin, ThinBackup, Job Import, Publish Over SSH, Build Pipeline, Mask Passwords plugin, GitHub plugin, Team Views plugin, HTML Publisher plugin, JDK Parameter, Green Balls, Color Ball, Docker, EC2, CFT, Chef, Rundeck, JaCoCo, Cobertura, SonarQube, SonarQube Quality Gates, Multiple SCMs, SCM Sync Configuration, Job Configuration History, Matrix Authorization, Build Analyzer, Cross Examine, Pipeline.


Jenkins Pipeline is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines “as code” via the Pipeline DSL.

The pipeline methodology is used for job chaining: to automatically start other jobs which depend on a job, and to rebuild jobs when there is a change in one of their dependencies. Let us assume that there are three jobs, Project A, B and C, such that A is dependent on B, which is dependent on C. In this illustration, while B is a downstream job of A, C is a downstream project of B. Inversely, A is an upstream job of B and B is an upstream project of C.


#!/usr/bin/env groovy
stage('compile') {
  node {
    checkout scm
    stash 'everything'
    dir('src/cafe') {
      bat 'dotnet restore'
      bat "dotnet build --version-suffix ${env.BUILD_NUMBER}"
    }
  }
}
stage('test') {
  parallel unitTests: { test('Test') },
           integrationTests: { test('IntegrationTest') },
           failFast: false
}
def test(type) {
  node {
    unstash 'everything'
    dir("test/cafe.${type}") {
      bat 'dotnet restore'
      bat 'dotnet test'
    }
  }
}
stage('publish') {
  parallel windows: { publish('win10-x64') },
           centos: { publish('centos.7-x64') },
           ubuntu: { publish('ubuntu.16.04-x64') }
}
def publish(target) {
  node {
    unstash 'everything'
    dir('src/cafe') {
      bat "dotnet publish -r ${target}"
      archiveArtifacts "bin/Debug/netcoreapp1.1/${target}/publish/*.*"
    }
  }
}

Components in a Jenkinsfile – agent, stage, steps, script


⦁    You can’t run parallel commands in the UI, just sequential ones.
⦁    You can’t commit it to version control and have an approval and promotion process in the UI.
⦁    You can’t know what changes were made to the Pipeline.
⦁    On top of all this, you get more control and options than in the UI:
⦁    Code review/iteration on the Pipeline
⦁    Audit trail for the Pipeline
⦁    Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.

Before pipeline

⦁    Many atomic jobs
⦁    Hard to share variables/state between jobs
⦁    Limited logic
⦁    Job chaining
⦁    Mix of build triggers and parameterized builds


Continuous Integration is an Extreme Programming (XP) development practice where members of a team integrate their work frequently; usually each person commits their code changes at least once daily, leading to multiple integrations per day.


CI allows for:
⦁    Detection of integration errors
⦁    Early identification and fixing of broken builds
⦁    Reduced development times
⦁    Reduced time to fix errors


It is all about compiling/testing/building/packaging your software on a continuous basis. With every check-in, a system triggers the compilation process, runs the unit tests, runs any static analysis tools you use and any other quality-related checks that you can automate.


⦁    Back up Jenkins periodically (the Jenkins home directory)
⦁    Use file fingerprinting to manage dependencies
⦁    Build from source code to ensure consistent builds
⦁    Integrate with an issue management tool
⦁    Take advantage of automated testing
⦁    Never delete a job; if it is really required, make sure that you archive a copy of it
⦁    Have one build for one environment
⦁    Notify the developers of the build results
⦁    Tag, merge or baseline your code after every successful build
⦁    Keep your Jenkins and plugins up to date
⦁    Don’t build on master


⦁    Maven dependency plugin: we have two dependency plugins for Maven, and at times Maven looks for the old mojo which has no analyze goal. In such cases, we need to redefine the goal and re-trigger the build.

⦁    Missing artifact: the reasons could be a typo error; the referenced module has never been built and its artifact was never deployed to the Artifactory, so that module’s build has to be triggered; or the artifact was removed from the cache (which happens once every few months), in which case we need to build that module again.

⦁    Invalid JDK version: debug the JDK issue and add the required JDK.


Build Part

Whenever a developer commits code to the mainline trunk (or master in the case of Git) and pushes the code to GitHub, Jenkins checks out the code using poll SCM and kicks off the Maven scripts, and Maven does the compile, test, package, install and deploy to the artifact repository (we use Nexus in my current project). I configured Nexus with Maven so that whenever we do mvn deploy, artifacts are deployed into the Nexus repository. There are snapshot and release versions; for the continuous integration part we keep using the snapshot version. Whenever the developers think that the development is done and say we are good to go for the release, another build is kicked off, called the release build, which checks out the latest code, builds it and deploys the artifacts to the Nexus release repository. That completes the build part.
Of course, we run code quality checks, unit test cases, unit tests and integration tests, and if everything is good we publish to the Nexus repository.

Deployment Part

Coming to deployment we need to create different environments like QA, UAT, PROD

For the deployment part we also have a job called the deploy job. The deploy job has parameters like environment, component, branch and version. Depending on the environment, Jenkins kicks off the CFT templates from the Git repository, and CFT spins up the instances in AWS.
Chef is integrated with CFT, where Chef takes care of provisioning the nodes, on which we install and configure the different packages.

It downloads the chef-client package, installs chef-client, downloads the required keys like user.pem and validator.pem and the configuration files for authenticating the node to the Chef server into the /etc/chef/ directory, and bootstraps the node to register it with the Chef server. (After that we assign the role consisting of the run lists which have the required cookbooks to configure the node.) It then runs chef-client to converge the node and deploys the artifact into the newly created environment. For example, if I give the parameter as QA, the QA environment is created and the artifacts are deployed into it. We then hand the QA environment over for testing purposes; if the testing is done and everything is good, we promote the same code into the other environments like UAT, Stage and Production.
We have a deployment cookbook to pull the artifacts from the Artifactory and deploy to WebLogic.

Continuous Delivery is a software development discipline where you build software in such a way that it can be released to production at any time.
You achieve continuous delivery by continuously integrating the software done by the development team, building executables, and running automated tests on those executables to detect problems. Furthermore, you push the executables into increasingly production-like environments to ensure the software will work in production.

The principal benefits of continuous delivery are:

⦁    Reduced deployment risk: since you are deploying smaller changes, there’s less to go wrong and it’s easier to fix should a problem appear.
⦁    Believable progress: many folks track progress by tracking work done. If “done” means “developers declare it to be done”, that’s much less believable than if it’s deployed into a production (or production-like) environment.
⦁    User feedback: the biggest risk to any software effort is that you end up building something that isn’t useful. The earlier and more frequently you get working software in front of real users, the quicker you get feedback to find out how valuable it really is (particularly if you use Observed Requirements).


Chef is a powerful automation platform that transforms infrastructure into code and automates how infrastructure is configured, deployed and managed across your network, no matter its size. Chef helps to solve the complexity of infrastructure handling.


Organizations: the independent tenants in Enterprise Chef; they may represent companies, business units or departments. Organizations do not share anything with each other.
Environments: these model the life stages of your application and reflect the patterns of your organization – the stages an application goes through during its development process, like development, testing, staging, production etc. chef generate environment creates an environment folder, within which we can define a file for each environment consisting of the environment name, description, cookbooks and their versions.

Policy: maps business and operational requirements, process, and workflow to settings and objects stored on the Chef server.

Roles: a way of identifying and classifying the different types of servers in the infrastructure like load balancer, application server, DB server etc. Roles may include configuration files (run list) and data attributes necessary for infrastructure configuration.

The workstation is the location from which users interact with Chef and author and test cookbooks using tools such as Test Kitchen and interact with the Chef server using the knife and chef command line tools.

Nodes are the machines—physical, virtual, cloud, and so on—that are under management by Chef. The chef-client is installed on each node and performs the automation on that machine.

The Chef server acts as a hub for configuration data. The Chef server stores cookbooks, the policies that are applied to nodes, and metadata that describes each registered node being managed by the chef-client. Nodes use the chef-client to ask the Chef server for configuration details, such as recipes, templates and file distributions. The chef-client then does as much of the configuration work as possible on the nodes themselves (and not on the Chef server).

One (or more) workstations are configured to allow users to author, test and maintain cookbooks. Cookbooks are uploaded to the Chef server from the workstation. Some cookbooks are custom to the organization and others are based on community cookbooks available from the Chef Supermarket.
Ruby is the programming language that is the authoring syntax for cookbooks.

The Chef Development Kit is a package from Chef that provides a recommended set of tooling, including Chef itself, the chef command-line tool, Test Kitchen, ChefSpec, Berkshelf, and more.

A chef-client is an agent that runs locally on every node that is under management by Chef. When a chef-client is run, it will perform all the steps that are required to bring the node into the expected state

Run list – a run list is an ordered list of roles and/or recipes that are run in the exact defined order; it defines the information required for Chef to configure the node into the desired state.
Chef Supermarket is the location in which community cookbooks are shared and managed. Cookbooks that are part of the Chef Supermarket may be used by any Chef user. It is used not just to store and access cookbooks but also things like tools, plugins, drivers, modules, DSC resources and compliance profiles.
Chef management console is the user interface for the Chef server. It is used to manage data bags, attributes, run lists, roles, environments and cookbooks, and to configure role-based access for users and groups. The Chef Supermarket can resolve dependencies, which means it downloads all the dependent cookbooks when a cookbook is downloaded.

Cookbooks – A container used to describe the configuration data/policies about infrastructure. and contains everything that is required to support that configuration like recipes, attribute values, file distributions, templates etc. They allow code reuse and modularity.

Recipes – A recipe is a collection of resources which ensure that the system is in its desired state.

Resources – a resource is a representation of a piece of the system in the infrastructure and its desired state. Resources are the building blocks; examples are installing packages, running Ruby code, or accessing directories and file systems.
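As an illustration, a minimal recipe composed of two resources might look like the following Chef DSL sketch (the package and service names are invented, not taken from any particular cookbook):

```ruby
# install the web server package
package 'httpd'

# ensure the service is enabled at boot and running now
service 'httpd' do
  action [:enable, :start]
end
```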

Attributes – an attribute is a specific detail about a node, used by the chef-client to understand the current state of the node, the state of the node at the end of the previous chef-client run, and the desired state at the end of each chef-client run.

Definitions – A definition is used to create additional resources by stringing together one (or more) existing resources.

Files – A file distribution is a specific type of resource that tells a cookbook how to distribute files, including by node, by platform, or by file version.

Libraries – A library allows the use of arbitrary Ruby code in a cookbook, either to extend the chef-client language or to implement a new class.

Custom Resources-A custom resource is an abstract approach for defining a set of actions and (for each action) a set of properties and validation parameters.

Metadata – A metadata file is used to ensure that each cookbook is correctly deployed to each node.

Templates – An embedded ruby (erb) template which uses Ruby statements to manage configuration files.

Data bags: generally used to hold global information pertinent to your infrastructure that is not a property of the nodes. They are used to maintain secrets in Chef and are not specific to any cookbook/recipe. They store secure details like credentials, which can be managed by chef-vault/HashiCorp Vault. Chef-vault creates an extra layer of security with an additional encryption mechanism, providing a separate set of keys to the node/workstation to access the data bags.


Through the bootstrap process, in simple terms, we connect the Chef workstation, server and node. When we bootstrap, the following happens in the background:
⦁    We SSH into the node and send the files – the configuration file with details of the Chef server URL, PEM key path etc., and the PEM key.
⦁    Chef-client is installed, configured and run on the node.
⦁    Later, the node gets registered with the Chef server and its details are stored in the database server.


⦁    Foodcritic – linting tool (syntax testing)
⦁    Test Kitchen – CLI
⦁    ChefSpec – unit tests (to test the code – recipes)
⦁    InSpec – integration tests (to test the functionality of the code – recipes), used with Test Kitchen
⦁    Recipes
⦁    Cookbooks
⦁    Knife & Chef CLI
⦁    Ohai – tool used to detect attributes on a node
⦁    Ruby
⦁    Chef-vault
⦁    Berkshelf


⦁    API – Chef API
⦁    Data store – which stores the data pertaining to recipes, nodes, environments, roles etc.
⦁    Search – knife search, which has 5 indexes:
⦁    API client
⦁    Node
⦁    Role
⦁    Environment
⦁    Data bags
⦁    Cookbooks – stored in Bookshelf on the server
⦁    Supermarket
⦁    Run list
⦁    Policy – which is role-based access control


Foodcritic is used to check cookbooks for common problems: style, correctness, syntax, best practices, common mistakes, and deprecations. Foodcritic does not validate the intention of a recipe; rather, it evaluates the structure of the code.
Syntax: foodcritic path_to_the_cookbook


Berkshelf is a dependency manager for Chef cookbooks. With it, you can easily depend on community cookbooks and have them safely included in your workflow. Berkshelf is included in the Chef Development Kit.

You add the dependencies of your cookbook to metadata.rb and then you run berks install to get those dependency cookbooks downloaded from the Supermarket to the cache. A Berksfile describes the set of sources and dependencies needed to use a cookbook. It is used in conjunction with the berks command. By default, a Berksfile has a source pointing to Chef's public Supermarket.
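A Berksfile might look like the following sketch; the cookbook names and version constraint are illustrative, not from the original notes:

```ruby
# Berksfile sketch – the source defaults to Chef's public Supermarket.
source 'https://supermarket.chef.io'

metadata                    # pull in dependencies declared in metadata.rb
cookbook 'apt'              # hypothetical community cookbook
cookbook 'mysql', '~> 8.0'  # hypothetical version constraint
```

Running `berks install` against this file resolves the dependencies and downloads them into the local Berkshelf cache.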


ChefSpec is a framework that tests resources and recipes as part of a simulated chef-client run. ChefSpec tests execute very quickly and are often the first indicator of problems that may exist within a cookbook. ChefSpec is packaged as part of the Chef Development Kit.
Syntax: chef exec rspec
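A minimal ChefSpec unit test might look like the sketch below; the cookbook name `mycookbook`, the platform, and the nginx package are hypothetical examples, and running it requires the chefspec gem from the Chef Development Kit:

```ruby
# spec/unit/recipes/default_spec.rb – ChefSpec sketch for a hypothetical
# cookbook whose default recipe installs the nginx package.
require 'chefspec'

describe 'mycookbook::default' do
  let(:chef_run) do
    # Simulate a chef-client run in memory, without converging a real node.
    ChefSpec::SoloRunner.new(platform: 'centos', version: '7').converge(described_recipe)
  end

  it 'installs nginx' do
    expect(chef_run).to install_package('nginx')
  end
end
```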


Use Test Kitchen to automatically test cookbook data across any combination of platforms and test suites:
⦁    Defined in a .kitchen.yml file
⦁    Uses a driver plugin architecture
⦁    Supports cookbook testing across many cloud providers and virtualization technologies
⦁    Supports all common testing frameworks that are used by the Ruby community
⦁    Uses a comprehensive set of base images provided by Bento
Use a kitchen.yml file to define what is required to run Kitchen, including drivers, provisioners, verifiers, platforms, and test suites. The stages in the Test Kitchen lifecycle are: create, converge, login, verify, destroy, and diagnose.
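A .kitchen.yml sketch under those conventions might look like this; the driver, platform versions, and cookbook name are illustrative assumptions:

```yaml
# .kitchen.yml sketch – driver, platforms, and recipe names are placeholders.
driver:
  name: vagrant

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: centos-7
  - name: ubuntu-18.04

suites:
  - name: default
    run_list:
      - recipe[mycookbook::default]
    verifier:
      inspec_tests:
        - test/integration/default
```

With this in place, `kitchen test` runs the full create → converge → verify → destroy cycle for each platform/suite combination.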




⦁    knife cookbook create cookbook_name – this is deprecated and we should use the chef generate command
drwxr-xr-x. 3 root root   20 Jun 1 09:28 templates
drwxr-xr-x. 2 root root    6 Jun 1 09:28 resources
drwxr-xr-x. 2 root root    6 Jun 1 09:28 providers
drwxr-xr-x. 2 root root    6 Jun 1 09:28 libraries
drwxr-xr-x. 3 root root   20 Jun 1 09:28 files
drwxr-xr-x. 2 root root    6 Jun 1 09:28 definitions
drwxr-xr-x. 2 root root   23 Jun 1 09:28 recipes
drwxr-xr-x. 2 root root   23 Jun 1 09:28 attributes
-rw-r--r--. 1 root root 1472 Jun 1 09:28
-rw-r--r--. 1 root root  282 Jun 1 09:28 metadata.rb
-rw-r--r--. 1 root root  463 Jun 1 09:28

⦁    chef generate cookbook cookbook_name

drwxr-xr-x. 3 root root   24 Jun 1 09:28 test
drwxr-xr-x. 3 root root   38 Jun 1 09:28 spec
drwxr-xr-x. 2 root root   23 Jun 1 09:28 recipes
drwxr-xr-x. 7 root root 4096 Jun 1 09:28 .git
-rw-r--r--. 1 root root  231 Jun 1 09:28 metadata.rb
-rw-r--r--. 1 root root   65 Jun 1 09:28
-rw-r--r--. 1 root root 1067 Jun 1 09:28 chefignore
-rw-r--r--. 1 root root   47 Jun 1 09:28 Berksfile
-rw-r--r--. 1 root root  343 Jun 1 09:28 .kitchen.yml
-rw-r--r--. 1 root root  126 Jun 1 09:28 .gitignore


Sometimes you might use hard-coded values (for example, a directory name, filename, username, etc.) at multiple locations inside your recipes. Later, when you want to change such a value, it becomes a tedious process, as you must browse through all the recipes that contain the value and change them accordingly. Instead, you can define the hard-coded value as a variable inside an attribute file and use the attribute name inside the recipe. This way, when you want to change the value, you change it in only one place: the attribute file.
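A minimal sketch of this pattern, with a hypothetical cookbook `myapp` and an illustrative directory path:

```ruby
# attributes/default.rb – define the value once (names are illustrative)
default['myapp']['install_dir'] = '/opt/myapp'

# recipes/default.rb – reference the attribute instead of hard-coding the path
directory node['myapp']['install_dir'] do
  owner 'root'
  mode  '0755'
  action :create
end
```

Changing `install_dir` in the attribute file now updates every recipe that references it.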
The attribute precedence is as follows:

⦁  A default attribute located in a cookbook attribute file
⦁  A default attribute located in a recipe
⦁  A default attribute located in an environment
⦁  A default attribute located in a role
⦁  A force_default attribute located in a cookbook attribute file
⦁  A force_default attribute located in a recipe
⦁  A normal attribute located in a cookbook attribute file
⦁  A normal attribute located in a recipe
⦁  An override attribute located in a cookbook attribute file
⦁  An override attribute located in a recipe
⦁  An override attribute located in a role
⦁  An override attribute located in an environment
⦁  A force_override attribute located in a cookbook attribute file

⦁  A force_override attribute located in a recipe
⦁  An automatic attribute identified by Ohai at the start of the chef-client run


⦁    Library cookbooks: An existing cookbook, typically an open-source contribution from a user in the Chef community, designed for server configuration purposes. Eg: database, github, libarchive, artifact etc.
⦁    Application cookbooks: Contain at least one recipe which installs a piece of software and shares the same name as the cookbook. Eg: mysql, nginx etc.
⦁    Wrapper cookbooks: Depend on an application cookbook and expose the required recipes from each cookbook.
Provisioning, configuration, and deployment cookbooks


Apt_package: Use the apt_package resource to manage packages for the Debian and Ubuntu platforms. Apt_repository: Use the apt_repository resource to add additional APT repositories. Adding a new repository will update the apt package cache immediately.

Apt_update: Use the apt_update resource to manage Apt repository updates on Debian and Ubuntu platforms. Bash: Use the bash resource to execute scripts using the Bash interpreter. This resource may also use any of the actions and properties that are available to the execute resource.

Batch: Use the batch resource to execute a batch script using the cmd.exe interpreter. The batch resource creates and executes a temporary file (like how the script resource behaves), rather than running the command inline.

Bff_package: Use the bff_package resource to manage packages for the AIX platform using the installp utility. Breakpoint: Use the breakpoint resource to add breakpoints to recipes. Run chef-shell in chef-client mode, and then use those breakpoints to debug recipes. Breakpoints are ignored by the chef-client during an actual chef-client run. That said, breakpoints are typically used to debug recipes only when running them in a non-production environment, after which they are removed from those recipes before the parent cookbook is uploaded to the Chef server.

Chef_gem: Use the chef_gem resource to install a gem only for the instance of Ruby that is dedicated to the chef-client. When a gem is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.
The chef_gem resource works with all the same properties and options as the gem_package resource, but does not accept the gem_binary property because it always uses the current gem environment under which the chef-client is running. In addition to performing actions like the gem_package resource, the chef_gem resource does the following:

Runs its actions immediately, before convergence, allowing a gem to be used in a recipe immediately after it is installed
Runs Gem.clear_paths after the action, ensuring that the gem is aware of changes so that it can be required immediately after it is installed

Chef_handler: Use the chef_handler resource to enable handlers during a chef-client run. The resource allows arguments to be passed to the chef-client, which then applies the conditions defined by the custom handler to the node attribute data collected during the chef-client run, and then processes the handler based on that data.
Chocolatey_package: Use the chocolatey_package resource to manage packages using Chocolatey for the Microsoft Windows platform.

Cookbook_file: Use the cookbook_file resource to transfer files from a sub-directory of COOKBOOK_NAME/files/ to a specified path located on a host that is running the chef-client.

Cron: Use the cron resource to manage cron entries for time-based job scheduling. Properties for a schedule will default to * if not provided. The cron resource requires access to a crontab program, typically cron.

Csh: Use the csh resource to execute scripts using the csh interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.

Deploy: Use the deploy resource to manage and control deployments. This is a popular resource, but is also complex, having the most properties, multiple providers, the added complexity of callbacks, plus four attributes that support layout modifications from within a recipe.

Directory: Use the directory resource to manage a directory, which is a hierarchy of folders that comprises all the information stored on a computer. The root directory is the top level, under which the rest of the directory is organized. The directory resource uses the name property to specify the path to a location in a directory. Typically, permission to access that location in the directory is required.

Recursive Directories: The directory resource can be used to create directory structures, if each directory within that structure is created explicitly. This is because the recursive attribute only applies group, mode, and owner attribute values to the leaf directory.
Dpkg_package: Use the dpkg_package resource to manage packages for the dpkg platform. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.
Dsc_resource: Desired State Configuration (DSC) is a feature of Windows PowerShell that provides a set of language extensions, cmdlets, and resources that can be used to declaratively configure software. The dsc_resource resource allows any DSC resource to be used in a Chef recipe, as well as any custom resources that have been added to your Windows PowerShell environment. Microsoft frequently adds new resources to the DSC resource collection.

Dsc_script: The dsc_script resource is most useful for those DSC resources that do not have a direct counterpart in Chef, such as the Archive resource, a custom DSC resource, an existing DSC script that performs an important task, and so on. Use the dsc_script resource to embed the code that defines a DSC configuration directly within a Chef recipe.

Env: Use the env resource to manage environment keys in Microsoft Windows. After an environment key is set, Microsoft Windows must be restarted before the environment key will be available to the Task Scheduler.

Erl_call: Use the erl_call resource to connect to a node located within a distributed Erlang system. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.

Execute: Use the execute resource to execute a single command. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.
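A minimal guard sketch; the command, paths, and marker file are hypothetical examples of the not_if pattern described above:

```ruby
# Recipe sketch: guard a non-idempotent execute resource with not_if,
# so the command runs only when the marker file is absent (paths illustrative).
execute 'extract_app' do
  command 'tar -xzf /tmp/app.tar.gz -C /opt/app'
  not_if  { ::File.exist?('/opt/app/.installed') }
end
```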

File: Use the file resource to manage files directly on a node.

Freebsd_package: Use the freebsd_package resource to manage packages for the FreeBSD platform. A freebsd_package resource block manages a package on a node, typically by installing it.

Gem_package: Use the gem_package resource to manage gem packages that are only included in recipes. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.

Git: Use the git resource to manage source control resources that exist in a git repository. git version 1.6.5 (or higher) is required to use all the functionality in the git resource.

Group: Use the group resource to manage a local group.

Homebrew_package: Use the homebrew_package resource to manage packages for the macOS platform.
http_request: Use the http_request resource to send an HTTP request (GET, PUT, POST, DELETE, HEAD, or OPTIONS) with an arbitrary message. This resource is often useful when custom callbacks are necessary.

Ifconfig: Use the ifconfig resource to manage interfaces.
Ips_package: Use the ips_package resource to manage packages (using Image Packaging System (IPS)) on the Solaris 11 platform.

Link: Use the link resource to create symbolic or hard links.

Log: Use the log resource to create log entries. The log resource behaves like any other resource: it is built into the resource collection during the compile phase and then run during the execution phase. (To create a log entry that is not built into the resource collection, use Chef::Log instead of the log resource.)

Macports_package: Use the macports_package resource to manage packages for the macOS platform.

Mdadm: Use the mdadm resource to manage RAID devices in a Linux environment using the mdadm utility. The mdadm provider will create and assemble an array, but it will not create the config file that is used to persist the array upon reboot. If the config file is required, it must be created by specifying a template with the correct array layout, and then by using the mount provider to create a file systems table (fstab) entry.
Mount: Use the mount resource to manage a mounted file system.

Ohai: Use the ohai resource to reload the Ohai configuration on a node. This allows recipes that change system attributes (like a recipe that adds a user) to refer to those attributes later during the chef-client run.

Openbsd_package: Use the openbsd_package resource to manage packages for the OpenBSD platform.

Osx_profile: Use the osx_profile resource to manage configuration profiles (.mobileconfig files) on the macOS platform. The osx_profile resource installs profiles by using the uuidgen library to generate a unique ProfileUUID, and then using the profiles command to install the profile on the system.

Package: Use the package resource to manage packages. When the package is installed from a local file (such as with RubyGems, dpkg, or RPM Package Manager), the file must be added to the node using the remote_file or cookbook_file resources.

Pacman_package: Use the pacman_package resource to manage packages (using pacman) on the Arch Linux platform.

Perl: Use the perl resource to execute scripts using the Perl interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.

Powershell_script: Use the powershell_script resource to execute a script using the Windows PowerShell interpreter, much like how the script and script-based resources—bash, csh, perl, python, and ruby—are used. The powershell_script is specific to the Microsoft Windows platform and the Windows PowerShell interpreter.

Python: Use the python resource to execute scripts using the Python interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.
Reboot: Use the reboot resource to reboot a node, a necessary step with some installations on certain platforms. This resource is supported for use on the Microsoft Windows, macOS, and Linux platforms. New in Chef Client 12.0.

Registry_key: Use the registry_key resource to create and delete registry keys in Microsoft Windows.

Remote_directory: Use the remote_directory resource to incrementally transfer a directory from a cookbook to a node. The directory that is copied from the cookbook should be located under COOKBOOK_NAME/files/default/REMOTE_DIRECTORY. The remote_directory resource will obey file specificity.

Remote_file: Use the remote_file resource to transfer a file from a remote location using file specificity. This resource is similar to the file resource.
Route: Use the route resource to manage the system routing table in a Linux environment.

Rpm_package: Use the rpm_package resource to manage packages for the RPM Package Manager platform. Ruby: Use the ruby resource to execute scripts using the Ruby interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.

Ruby_block: Use the ruby_block resource to execute Ruby code during a chef-client run. Ruby code in the ruby_block resource is evaluated with other resources during convergence, whereas Ruby code outside of a ruby_block resource is evaluated before other resources, as the recipe is compiled.
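The compile-time versus converge-time distinction can be sketched as below; the resource name and messages are illustrative:

```ruby
# Recipe sketch: bare Ruby runs while the recipe is compiled; Ruby inside
# a ruby_block runs during convergence, in order with the other resources.
puts 'runs at compile time, while the recipe is evaluated'

ruby_block 'converge_time_message' do
  block do
    puts 'runs at converge time, alongside the other resources'
  end
end
```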

Script: Use the script resource to execute scripts using a specified interpreter, such as Bash, csh,Perl,Python,or Ruby. This resource may also use any of the actions and properties that are available to the execute resource. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence.

Service: Use the service resource to manage a service.

Smartos_package: Use the smartos_package resource to manage packages for the SmartOS platform.

Solaris_package: The solaris_package resource is used to manage packages for the Solaris platform.


Systemd_unit: Use the systemd_unit resource to create, manage, and run systemd units.

Template: A cookbook template is an Embedded Ruby (ERB) template that is used to dynamically generate static text files. Templates may contain Ruby expressions and statements, and are a great way to manage configuration files. Use the template resource to add cookbook templates to recipes; place the corresponding Embedded Ruby (ERB) template file in a cookbook's /templates directory.
To use a template, two things must happen:
⦁    A template resource must be added to a recipe
⦁    An Embedded Ruby (ERB) template must be added to a cookbook
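Outside of a chef-client run, the ERB mechanism itself can be demonstrated with plain Ruby's standard-library erb; the `node` hash below is only a stand-in for Chef's node attributes, not the real node object:

```ruby
require 'erb'

# Stand-in for Chef node attributes (in a real cookbook, values reach the
# template via the node object or the template resource's variables property).
node = { 'port' => 8080 }

template = ERB.new("Listen <%= node['port'] %>\n")
puts template.result(binding)   # renders the ERB expression using local vars
```

The rendered output is the static text `Listen 8080`, which is exactly what the template resource would write into the managed configuration file.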
User: Use the user resource to add users, update existing users, remove users, and to lock/unlock user passwords.

Windows_package: Use the windows_package resource to manage Microsoft Installer Package (MSI) packages for the Microsoft Windows platform.

Windows_service: Use the windows_service resource to manage a service on the Microsoft Windows platform. New in Chef Client 12.0.

Yum_package: Use the yum_package resource to install, upgrade, and remove packages with Yum for the Red Hat and CentOS platforms. The yum_package resource is able to resolve provides data for packages much like Yum can do when it is run from the command line. This allows a variety of options for installing packages, like minimum versions, virtual provides, and library names.

Yum_repository: Use the yum_repository resource to manage a Yum repository configuration file located at /etc/yum.repos.d/repositoryid.repo on the local machine. This configuration file specifies which repositories to reference, how to handle cached data, etc.

Zypper_package: Use the zypper_package resource to install, upgrade, and remove packages with Zypper for the SUSE Enterprise and openSUSE platforms.

Linux Academy – Chef Development:


Cloud services refer to various IT resources provided on demand by a service provider to its clients over the internet. The term also covers the professional services that assist with the selection, deployment, and management of various cloud-based resources. Characteristics include scalability, on-demand availability, self-provisioning etc.
Cloud service providers besides AWS include: Microsoft Azure, Google Cloud Platform, IBM Cloud, Rackspace, Oracle, and Verizon clouds.
Services within AWS used are: EC2, VPC, IAM, Elastic Beanstalk, CloudWatch, Auto Scaling, and Route 53.


An Ec2 instance can be provisioned by the following steps:
⦁  Choose an AMI (Amazon Machine Image), which is a server template – typically an OS with a few packages installed.

⦁ In the next step, choose the instance type, which defines the size of the virtual server that we are launching. The t and m families are for general use, while the c family is compute optimized, the g family is optimized for graphics processing, the r family is memory optimized, and the d and i families are storage optimized.
⦁  The third step allows you to configure various things for the server like:

⦁  Number of instances
⦁  Vpc and subnet
⦁  Auto assign public IP and IAM role
⦁  Shutdown behavior, enable termination protection and option to enable cloud watch

⦁  In step four, we can configure the attached disk volume for the server, where the default is General Purpose SSD, and we can also choose either Provisioned IOPS SSD or magnetic storage. We can provision the size apart from the default 8 GB.

⦁  In the next step we can tag the instance based on the organization’s naming convention like name, department, etc.

⦁  In the next step we will be configuring the security groups which determine which ports are open on the server and the source from which they can be accessed. In this step we can either choose from an existing security group or create a new security group.

⦁  Finally, we will be reviewing and launching the server, after which we will either choose an existing key pair or generate and download a new key pair.
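The console walkthrough above can also be done with the AWS CLI; a hedged sketch in which every ID, name, and tag value is a placeholder and configured AWS credentials are assumed:

```shell
# CLI equivalent of launching an EC2 instance (all IDs/names are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --count 1 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --key-name mykey \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web01}]'
```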


EC2 – refers to the web service which provides scalable computing capacity – literally, servers in Amazon's data centers – that is used to build and host software applications.

VPC – This enables to launch AWS resources into a virtual network defined by us which closely resemble a traditional network with the benefits of scalable AWS resources.

RDS – this service makes it easier to set up, operate, and scale a cost-efficient and resizable relational database in the cloud, and it handles the regular administration tasks.

IAM – refers to the service which assists in securing access to AWS resources for users through authentication and authorization processes, where authentication refers to who can use AWS resources while authorization refers to what resources can be used and in what ways.

Elastic Beanstalk – using which we can easily deploy and manage applications in the AWS cloud without worrying about the infrastructure required, as it reduces management complexity without restricting choice or control. Through this service, we just upload the application and it automatically handles capacity, provisioning, load balancing, scaling, and application health monitoring.

Auto Scaling – refers to the service designed to launch and terminate EC2 instances automatically based on user-defined policies and schedules, which are defined to ensure the smooth functioning of the application. Auto Scaling groups, minimum/desired/maximum instance counts, and a scaling policy will be defined.

Elastic load balancing – this service distributes the incoming application traffic across multiple targets such as EC2 instances through routing the traffic to healthy targets by monitoring their health. It is the single point of contact for clients and enhances the application availability.

S3- It is the simple storage service which can be used to store and retrieve any amount of data at any time and from anywhere on the web. S3 is a scalable, high-speed, low-cost, web-based object storage service designed for online backup and archiving of data and application programs.

Elastic Block Store – EBS provides highly available and reliable block-level storage volumes which can be used with EC2 instances in the same availability zone. EBS is suggested for quick access and long-term persistence.

Route 53 – is a highly available and scalable Domain Name Service (DNS) that performs three main functions
– register domain names, route internet traffic to the resources for your domain, and check the health of your resources.
CloudWatch – provides a reliable, scalable, and flexible monitoring solution through which we can monitor AWS resources and applications run on AWS. Using it, we can either send notifications and/or automatically make changes to the resource according to predefined monitoring-based rules.

CloudFront – web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .php, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

CloudFormation – enables you to create and provision AWS infrastructure deployments predictably and repeatedly. It helps you leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure. AWS CloudFormation enables you to use a template file to create and delete a collection of resources together as a single unit (a stack).

⦁    Launch an instance
⦁    Create an ELB and choose a security group so that it allows port 80, configure health checks, and add the EC2 instance.
⦁    Go to Route 53, get started, and create a hosted zone where we enter the domain name; type: public hosted zone
⦁    This creates two records by default – a start of authority (SOA) record and name server (NS) records
⦁    Now add these name server records in the domain details at your domain registrar.
⦁    Create a naked domain name or apex record (without www)
⦁    Since we don't have a public IP for the load balancer, we create an alias record, choose the routing policy (simple, weighted, latency, failover, or geolocation), and choose whether you wish to evaluate the health of resources or not.


⦁    M – general-purpose
⦁    C – Compute optimized
⦁    R – Memory optimized
⦁    D/I – Storage optimized (D – large storage, I – large I/O)
⦁    G – GPU instances
⦁    T – Micro instances


⦁    Simple Storage Service – object-based storage and a key-value store
⦁    Highly scalable, reliable, and low-latency data storage infrastructure at very low cost
⦁    Per-object limit is 0 – 5 TB
⦁    Objects larger than 100 MB should use multi-part upload, which uploads the parts of a huge object in parallel and then stores them in a compatible manner.
⦁    Data is spread across multiple devices and facilities
⦁    Files are stored in buckets, and bucket names should be unique
⦁    When you upload an object for the first time, it is ready to be read immediately; from then on, updates take time to propagate (which we refer to as eventual consistency).
⦁    The objects in S3 have: key (name), value (data), version ID, metadata, sub-resources, and access control lists.
⦁    99.99% availability and 11 9's durability (the probability of not losing an object)
⦁    S3 storage tiers/classes:
⦁    Standard tier – 99.99% availability, 11 9's durability, and can sustain the loss of 2 facilities
⦁    Infrequently Accessed – lower fee than S3 Standard and 99.9% availability
⦁    Reduced Redundancy Storage – 99.99% durability, for noncritical, reproducible objects
⦁    Glacier – used for archival only; takes 3-5 hours to restore objects from Glacier
⦁    A bucket is a container for objects stored in S3 – in other words, we can refer to it as a folder – and its name has to be unique (and should be in lowercase)
⦁    Buckets have four sections: objects, properties, permissions, and management.
⦁    Under objects we can find the contents/objects of the bucket
⦁    Under properties we will find versioning, logging, static website hosting, tags, and events (notifications)
⦁    Under permissions we have the access control list (manage users and manage public permissions at an object level), bucket policy – an easy way to grant cross-account access to your S3 environment through permissions – and Cross-Origin Resource Sharing (CORS), which controls which origins the bucket should be accessible from (versioning should be enabled).
⦁    Under management we have lifecycle, analysis, metrics, and inventory.
⦁    Versioning allows you to preserve, retrieve, and restore every version of every object in the S3 bucket. Once enabled, it cannot be disabled, only suspended.

⦁    Lifecycle management in S3: We can add a rule either to the whole bucket or to folders within the bucket for lifecycle management of objects, which reduces costs. Lifecycle rules allow objects to be automatically moved between the available storage tiers. The flow is Standard to Infrequently Accessed, from Infrequently Accessed to Glacier, and later deletion from Glacier, with a 30-day minimum life at each tier.

⦁    It can be used in conjunction with versioning
⦁    Applied to current and previous versions

⦁    Encryption: SSL/TLS is used for securing data in transit. For data at rest, we use: server-side encryption with S3-managed keys (SSE-S3), AWS KMS-managed keys – SSE-KMS (additional charges), server-side encryption with customer-provided keys – SSE-C, and finally client-side encryption.

⦁    Storage Gateway: a service which connects on-premise software with cloud-based storage and secures the integration while enabling the storing of data to the cloud. The AWS Storage Gateway software appliance is available for download as a VM image which can be installed and activated in a local data center, and it supports VMware ESXi or Microsoft Hyper-V.

⦁    Three types of storage gateways:
⦁    GW stored volumes: data is on site and is backed up onto Amazon S3; it is durable and inexpensive
⦁    GW cached volumes: the most frequently accessed data is stored locally and the full data set is stored in S3.
⦁    GW virtual tape library: virtual tapes on site can be stored in a virtual tape library backed by S3, or on a virtual tape shelf backed by Amazon Glacier.
⦁    Import/Export: two types – Import/Export Disk and Import/Export Snowball. Through IE Disk, you export your data onto AWS using portable storage devices. Snowball is a petabyte-scale data transport solution by AWS. It supports around TB per Snowball and addresses the challenges of large-scale data transfers: high network costs, long transfer times, and security concerns.


Identity Access Management: allows you to manage users and the access to the AWS Console. It gives:
⦁    Centralized control and shared access of AWS account
⦁  Enables granular permissions; identity federation – where we provide access to the AWS account through temporary security credentials (Active Directory, Facebook etc.); and multi-factor authentication for users
⦁    Provides temporary access for users, devices, and services, and allows you to set your own password rotation policy. Policies, which define one or more permissions, can be applied to users, groups, and roles, and a policy document sits on top of these three.

⦁    IAM is not region specific. We usually provide a sign-in link (a customized one) to all IAM users through which they can access the AWS account. Access Key ID refers to the username and secret access key is the password which is used to access AWS console using CLI.
⦁    We are required to download the security credentials of users and store them in a secure location.
⦁    By default, users will n’t have any permissions.
⦁    Policy documents are authored in JSON format.
⦁    Role types in IAM are: service roles, cross-account access and identity provider access.
⦁    Roles can’t be assigned to groups.
⦁    The default IAM role limit is 250 per account and can be increased on request.
⦁    Three types of policies: managed, custom (customer-managed) and inline policies.
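Since policy documents are written in JSON, a minimal illustrative policy (the bucket name is a placeholder) granting read-only access to an S3 bucket looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attached to a user, group or role, this grants only the listed permissions; everything else stays denied by default.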


We can validate a CloudFormation template (CFT), checking the file for syntax errors, using aws cloudformation validate-template --template-body file://<path-to-template>.
We can be notified when resource creation through a CFT fails by using the cfn-signal option. There are nine major sections in a CFT. They are:
⦁   Format Version – specifies the CFT version to which the template conforms
⦁   Description – describes the template
⦁   Metadata – objects that provide additional information about the template
⦁   Parameters – specifies values that can be passed in at runtime
⦁   Mappings – a set of keys and values that can be used to specify conditional parameter values
⦁   Conditions – specifies the conditions which control resource creation and value assignment during stack creation
⦁   Transform – used to specify the AWS Serverless Application Model (SAM) for serverless applications
⦁   Resources – the only required section (all others are optional) – specifies the stack resources and their properties
⦁   Output – describes the values that are returned when you view your stack’s properties
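A minimal skeleton (illustrative only; resource names and the AMI ID are placeholders) showing how the nine sections sit in a YAML template:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Skeleton showing the nine major CFT sections.
Metadata:
  Author: example
Parameters:
  EnvType:
    Type: String
    Default: test
Mappings:
  RegionAMI:
    us-east-1:
      AMI: ami-12345678        # placeholder AMI ID
Conditions:
  IsProd: !Equals [!Ref EnvType, prod]
Transform: AWS::Serverless-2016-10-31
Resources:                      # the only required section
  MyBucket:
    Type: AWS::S3::Bucket
Outputs:
  BucketName:
    Value: !Ref MyBucket
```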

The best practices of CFT are:

Planning and organizing

⦁   Organize Your Stacks By Lifecycle and Ownership
⦁   Use Cross-Stack References to Export Shared Resources
⦁   Use IAM to Control Access
⦁   Reuse Templates to Replicate Stacks in Multiple Environments

⦁   Verify Quotas for All Resource Types
⦁   Use Nested Stacks to Reuse Common Template Patterns
Creating templates

⦁   Do Not Embed Credentials in Your Templates
⦁   Use AWS-Specific Parameter Types
⦁   Use Parameter Constraints
⦁   Use AWS::CloudFormation::Init to Deploy Software Applications on Amazon EC2 Instances
⦁   Use the Latest Helper Scripts
⦁   Validate Templates Before Using Them

Managing stacks

⦁   Manage All Stack Resources Through AWS CloudFormation
⦁   Create Change Sets Before Updating Your Stacks
⦁   Use Stack Policies
⦁   Use AWS CloudTrail to Log AWS CloudFormation Calls
⦁   Use Code Reviews and Revision Controls to Manage Your Templates
⦁   Update Your Amazon EC2 Linux Instances Regularly


AWS CloudFormation includes a set of helper scripts:
Syntax: cfn-init --stack|-s \
--resource|-r \
--region region \
--access-key access.key \
--secret-key secret.key \
--role rolename \
--credential-file|-f credential.file \
--configsets|-c config.sets \
--url|-u service.url \
--http-proxy HTTP.proxy

cfn-init: This script is used to fetch and parse metadata, install packages, write files to disk and start or stop services, by reading template metadata from the AWS::CloudFormation::Init key.

cfn-signal: This script is used to signal AWS CloudFormation to indicate the progress of creation/update of EC2 instances and/or software applications (if any) when they are ready. WaitOnResourceSignals is used to hold work on the stack until a predefined number of signals is received or until the timeout period is exceeded.

Syntax: cfn-signal --success|-s \
--access-key access.key \
--credential-file|-f credential.file \
--exit-code|-e exit.code \
--http-proxy HTTP.proxy \
--https-proxy HTTPS.proxy \
--id|-i \
--region AWS.region \
--resource resource.logical.ID \
--role \
--secret-key secret.key \
--stack \
--url AWS.CloudFormation.endpoint

cfn-get-metadata: This script is used to fetch a metadata block from CloudFormation and print it to standard output.
Syntax: cfn-get-metadata --access-key access.key \
--secret-key secret.key \
--credential-file|-f credential.file \
--key|-k key \
--stack|-s \
--resource|-r \
--url|-u service.url \
--region region

cfn-hup: This script is a daemon which detects changes in resource metadata and executes user-specified actions when a change is detected; it is mainly used to apply configuration updates.
All these scripts are based on cloud-init. You call these helper scripts from your AWS CloudFormation template to install, configure, and update applications on Amazon EC2 instances that are defined in the same template.
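As an illustrative sketch (the resource name, AMI ID and package are placeholders), a template can call cfn-init from UserData and then use cfn-signal with a CreationPolicy so the stack waits for the instance to finish configuring:

```yaml
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M        # fail the stack if no signal arrives within 15 minutes
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []       # cfn-init reads this block and installs the package
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
    Properties:
      ImageId: ami-12345678   # placeholder AMI
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -s ${AWS::StackName} -r WebServer --region ${AWS::Region}
          # report cfn-init's exit code back to CloudFormation
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
```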


A Virtual Private Cloud (VPC) can simply be described as a private sub-section of AWS controlled by the user, who can place AWS resources within it, thus creating a logically isolated section. The components of a VPC are: subnets, network ACLs, NAT gateways, virtual private gateways, internet gateways, route tables, elastic IPs, endpoints, security groups, VPN connections and customer gateways.

When you create an AWS account, you get a default VPC with: an internet gateway, a route table with predefined routes to the default subnets, a NACL with predefined rules, and subnets to provision resources in.
Internet Gateway: routes the connection between a VPC and the internet. Only one VPC can be attached to one internet gateway, and an IGW cannot be detached while there are active resources in the VPC. Route tables: consist of rules called routes that determine where network traffic is directed, i.e. how traffic reaches AWS resources in the VPC. A route table cannot be deleted while it has active dependencies. Network access control lists (NACLs): an optional layer of security for the VPC that acts as a firewall controlling traffic at the subnet level. NACLs have inbound and outbound rules, which are evaluated in order of rule number. In the default NACL all traffic is allowed. Every NACL ends with a catch-all rule which denies all traffic by default and cannot be modified.


While a NACL operates at the subnet level, a security group (SG) operates at the instance level. In a NACL we can define both allow and deny rules, while in an SG we can define only allow rules, with all other traffic denied by default.
The best practice is to allow only the traffic that is required; for example, for a web server we should allow only HTTP and HTTPS traffic. One subnet can have only one NACL, while one NACL can be associated with multiple subnets.
Availability zones in a VPC: any resource should be in a VPC subnet, and any subnet is located in exactly one availability zone; to ensure high availability and fault-tolerant systems, resources should be spread across multiple AZs. VPCs can span multiple AZs. AZs are designed to contain the failure of applications.
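As a toy illustration (plain Python, not an AWS API) of how NACL rules are evaluated in rule-number order with a final catch-all deny:

```python
# Toy model of NACL evaluation: rules are checked in ascending rule-number
# order; the first match wins, and traffic that matches nothing falls
# through to the implicit, unmodifiable catch-all deny.
def evaluate_nacl(rules, port):
    for rule_number in sorted(rules):
        action, port_range = rules[rule_number]
        if port_range[0] <= port <= port_range[1]:
            return action          # first matching rule decides
    return "DENY"                  # the catch-all rule

# Allow only web traffic, the way a web-server subnet might be configured.
web_nacl = {
    100: ("ALLOW", (80, 80)),      # HTTP
    110: ("ALLOW", (443, 443)),    # HTTPS
    200: ("DENY",  (0, 65535)),    # explicit deny for everything else
}

print(evaluate_nacl(web_nacl, 443))  # ALLOW
print(evaluate_nacl(web_nacl, 22))   # DENY — SSH is caught by rule 200
```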
VPC options:

⦁   VPC with public subnet
⦁   VPC with public and private subnet
⦁   VPC with public and private subnet with a private VPN
⦁   VPC with a private subnet only and Hardware VPN access


This is primarily used to enable communication between servers in different VPCs as if they were in the same VPC; it allows the machines to connect using private IP addresses. The phases of a VPC peering connection are:
⦁   Initiating-request
⦁   Failed (if the request fails)
⦁   Pending-acceptance (otherwise)
⦁   Expired (if not accepted in time)
⦁   Rejected
⦁   Provisioning (if accepted)
⦁   Active
⦁   Deleted


⦁   Cannot be created between VPCs with matching or overlapping IPv4/IPv6 CIDR blocks
⦁   Cannot be created between VPCs in different regions
⦁   Does not support transitive peering
⦁   Cannot have more than one peering connection between the same VPCs


Classic Elastic Load Balancer: distributes incoming application traffic evenly across the available servers in multiple AZs, thus ensuring fault tolerance. In the architecture, the ELB sits ahead of the route tables. It is not available in the free tier.

ELB offers both the ALB and the CLB. A load balancer must be associated with a VPC, and a proper protocol needs to be specified based on the traffic the ELB will handle. We should assign the ELB to a security group. We can also configure health checks for the resources the ELB serves by defining ping parameters such as protocol, port and path, and, under details, the response timeout, interval and unhealthy threshold that define the health of the resources. Finally, we add the resources to the ELB. Cross-zone load balancing ensures that traffic is evenly distributed across all servers in all availability zones; when disabled, traffic is balanced between the zones only. An ELB can be internet-facing or internal, and can be connected to an auto scaling group or to instances directly.


⦁   The CLB operates at layer 4 of the OSI (Open Systems Interconnection) model, which means it routes traffic based on IP address and port number, while the ALB operates at layer 7, routing not just on IP address and port number but also on application-level content.
⦁   Cross-zone load balancing is enabled by default in the ALB; in the CLB it has to be enabled explicitly.
⦁   While HTTP and HTTPS are supported by both load balancers, the CLB additionally supports TCP and SSL, enabling SSL termination at the load balancer level itself.
⦁   The ALB supports path-based routing, which enables a listener to forward requests based on the URL path, while the CLB cannot.
⦁   The ALB supports deletion protection, which prevents accidental deletion of the load balancer, while the CLB doesn’t.


Auto scaling refers to the process of scaling resources up or down based on load, through collections of AWS resources called auto scaling groups, by defining the minimum, desired and maximum number of resources for each group along with an auto scaling policy. A group can span multiple subnets and multiple AZs but remains within one VPC. Auto scaling goes hand in hand with ELB. It has two components: the launch configuration and the auto scaling group.
The launch configuration defines the EC2 specifications, such as which AMI to use, the volume size, etc. The auto scaling group holds the rules and settings that govern the scaling. Auto scaling itself is a free service.
In an auto scaling group we define the minimum, maximum and desired number of instances; scaling actions are driven by CloudWatch alarms.
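A toy sketch (not the AWS API) of the rule an auto scaling group follows: whatever desired capacity a scaling policy requests is always clamped between the group's minimum and maximum:

```python
# Toy model of an auto scaling group's capacity rule: whatever a scaling
# policy (e.g. a CloudWatch alarm action) requests, the group never runs
# fewer than `minimum` or more than `maximum` instances.
def desired_capacity(requested, minimum, maximum):
    return max(minimum, min(requested, maximum))

# Group configured with min=2, max=10.
print(desired_capacity(6, 2, 10))   # 6  — within bounds, honored as-is
print(desired_capacity(15, 2, 10))  # 10 — scale-out capped at the maximum
print(desired_capacity(0, 2, 10))   # 2  — scale-in floored at the minimum
```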


Data storage considerations:
⦁   Data format
⦁   Size
⦁   Query frequency
⦁   Write frequency
⦁   Access speed
Types of storage: unstructured (BLOB) and relational databases. AWS RDS is offered as a managed database service (DBaaS).
The six SQL engines supported by AWS RDS are: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle and SQL Server.
Benefits of a RDS:
⦁   Simple and easy to deploy and Cost effective
⦁   Compatible with your applications
⦁   Manages common administrative tasks and is Secure
⦁   Simple and fast to scale.

Backup options: automatic backups – daily backups with a configurable backup window, point-in-time recovery, geo-replicated – and manual snapshots – user-initiated, persisted in S3, only recoverable to the point the snapshot was taken.
SQL vs NoSQL:
⦁   Data storage: rows and columns vs key-value pairs
⦁   Schemas: fixed vs dynamic
⦁   Querying: using SQL vs focusing on collections of documents, querying individual items
⦁   Scalability: vertical vs horizontal


Resource — default limit:
VPCs per region: 5
Subnets per region: 200
Elastic IPs per region: 5
Internet/virtual private gateways per region: 5
NAT gateways per AZ: 5
NACLs per VPC: 200
Rules per NACL: 20
Route tables per VPC: 200
Routes per route table: 50
Security groups per VPC: 500
Inbound/outbound rules per SG: 50 each
Active VPC peering connections per VPC: 50
Outstanding VPC peering connection requests per VPC: 25
Expiry time of a peering connection request: 1 week (168 hours)
VPC endpoints per region: 20


⦁   Content Delivery Network – a system of distributed servers which delivers web content to users based on the user’s geographic location, the origin of the webpage and the content delivery server.
⦁   Edge Location – a location where content is cached; separate from an AWS Region/AZ.
⦁   Origin – simply refers to the location of the files.
⦁   Distribution – refers to the collection of edge locations.
⦁   CloudFront – handles the delivery of web content (dynamic, static, streaming and interactive) using a global network of edge locations to ensure optimized performance.


⦁   At the top level, data is organized into TABLES, e.g. employees
⦁   Tables contain ITEMS, e.g. each employee
⦁   Items contain ATTRIBUTES, e.g. the details of an employee
⦁   Each table has a primary (partition) key, an optional sort key, and up to five local secondary indexes. Throughput is provisioned at the table level, with read capacity and write capacity provisioned separately.
Queries are performed using the partition key and sort key, optionally using a local secondary index. Operations can be performed through the management console, through a RESTful API, or in code using an SDK. The basic operations available are: create, read, update and query.
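A toy in-memory sketch (plain Python, not the DynamoDB SDK) of the table → item → attribute model, with items addressed by a partition key and sort key:

```python
# Toy model of DynamoDB's data model: a table maps a (partition_key,
# sort_key) pair to an item, and each item is a free-form attribute map.
class ToyTable:
    def __init__(self):
        self.items = {}

    def put_item(self, partition_key, sort_key, attributes):
        self.items[(partition_key, sort_key)] = attributes

    def query(self, partition_key):
        # A query targets one partition and returns its items in sort-key order.
        return [attrs for (pk, sk), attrs in sorted(self.items.items())
                if pk == partition_key]

# Key names below are illustrative placeholders.
employees = ToyTable()
employees.put_item("dept#eng", "emp#002", {"name": "Bob"})
employees.put_item("dept#eng", "emp#001", {"name": "Alice"})
employees.put_item("dept#hr",  "emp#003", {"name": "Carol"})

print(employees.query("dept#eng"))  # [{'name': 'Alice'}, {'name': 'Bob'}]
```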


A phased strategy is the best solution while migrating to cloud. The phases in this strategy are:
Cloud Assessment Phase: In this phase, a business case is developed for the decision to move to the cloud. Assessments conducted during this phase include financial (TCO), security and compliance, and technical assessments. Tools that can be reused and tools that need to be built are identified, and licensed products are migrated. The phase helps to identify the gap between the current traditional legacy architecture and the next-generation cloud architecture. By the end of this phase, we will have identified the applications that can be migrated to the cloud in order of priority, defined the success criteria, and created a roadmap and plan.

Proof of concept:

In this phase, the primary goal is to test the decision to migrate to the cloud by deploying a small application onto it and validating its suitability through a small proof of concept. This can be achieved by testing the critical functionality of the application. In this stage, you can build support in your organization, validate the technology, test legacy software in the cloud, perform necessary benchmarks and set expectations. By the end of this phase, we will have gained far better visibility into the service provider’s offerings and will have had hands-on experience with them.

Data Migration Phase:

As one size doesn’t fit all, a decision on the type of database is made by making the right trade-offs across dimensions such as cost, durability, query-ability, availability, latency, performance (response time), relational capability (SQL joins), size of objects stored (large, small), accessibility, read-heavy vs. write-heavy workloads, update frequency, cache-ability, consistency (strict, eventual) and transience (short-lived data).

The first step is to point your data pipelines from file servers, log servers, storage area networks and other systems to S3, so that new data is stored there while old data is moved across using a batch process. Many organizations use existing encryption tools, such as 256-bit AES for data at rest and 128-bit SSL for data in transit, to encrypt data in Amazon S3. Then migrate your MySQL databases to Amazon RDS, migrate your commercial databases to Amazon EC2 using relational DB AMIs, and move large amounts of data using the Amazon Import/Export service, which allows an organization to load its data onto USB 2.0 or eSATA storage devices and ship them to AWS, which uploads the data into S3.
Application Migration Phase:
There are two main application migration strategies:

1. Forklift migration strategy: stateless applications, tightly coupled applications, or self-contained applications might be better served by the forklift approach, which picks up the application all at once and moves it to the cloud. Self-contained web applications, backup/archival systems and components of 3-tier web applications that call for low latency are the ones suitable for this strategy. Tasks performed include: copying your application binaries, creating and configuring Amazon Machine Images, setting up security groups, elastic IP addresses and DNS, and switching to Amazon RDS databases.
As with any other migration, having a backup strategy and a rollback strategy and performing end-to-end testing are a must when using this strategy.
2. Hybrid migration strategy: a low-risk strategy which moves parts of the application to the cloud while leaving the rest behind, and is optimal for large applications. It calls for designing, architecting and building temporary “wrappers” to enable communication between the parts residing in your traditional data center and those residing in the cloud.

Leverage the Cloud:

In this phase, the additional benefits of the cloud are explored through leveraging other cloud services which in the case of AWS are: Auto scaling, Automate elasticity, Harden security etc.

Optimization Phase:

This aims at reducing the costs and optimizing the applications by understanding usage patterns, terminating the under-utilized instances, leveraging reserved Ec2 instances, Improving efficiency etc.


This methodology is adopted to ensure real-time delivery of new applications and services to customers with near-zero-downtime release and rollback capabilities. It works on the basic idea of switching traffic between two identical environments running different versions of an application, where BLUE refers to the current version and GREEN refers to the future version intended for deployment. Once the green environment is tested and ready, production traffic is redirected to it from the blue environment through a CANARY ANALYSIS approach: around 20% of traffic is first diverted to GREEN, and once it is clear that there are no problems in the GREEN environment, the rest of the traffic is diverted as well. The BLUE environment is then placed on standby for a certain period of time before being terminated. This is mainly to keep the option of rolling back if any issues are found in the green environment.

Benefits of this approach include: the ability to deploy the current and future versions in parallel, isolated environments; the ability to roll back the application; a reduced blast radius through the adoption of canary testing; minimized downtime and impaired operation; a good fit with CI/CD workflows, since the green environment uses entirely new resources and so limits complexity by eliminating dependencies on the existing environment; and cost optimization, by eliminating the need to run an over-provisioned architecture for an extended period of time.
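A toy sketch (illustrative only, not a real routing service) of the canary traffic split described above: green starts with a 20% share, and only after it is confirmed healthy does it receive all traffic:

```python
# Toy model of a blue/green canary shift: route a request by weight, and
# promote green to 100% only once its canary share is confirmed healthy.
def route(request_id, green_weight):
    # Deterministic split: green_weight=20 sends 20% of requests to green.
    return "green" if request_id % 100 < green_weight else "blue"

def promote(green_healthy):
    # Full cutover on success; instant rollback to blue on failure.
    return 100 if green_healthy else 0

green_weight = 20  # canary phase: 20% of traffic to the new version
sent_to_green = sum(route(i, green_weight) == "green" for i in range(100))
print(sent_to_green)            # 20

green_weight = promote(True)    # canary passed: cut all traffic over
print(route(7, green_weight))   # green
```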

The initial step towards approaching this methodology is to define the environmental boundary to get an understanding of the changes and this is defined by factors like application architecture (dependencies), organizational factors (speed and number of iteration), risk and complexity (blast radius & impact), people, process (QA & rollback) and cost.

The tools/services that enable blue/green deployments are Route 53, ELB, Auto Scaling, EBS, OpsWorks, CloudFormation, CloudWatch, etc.

There are multiple techniques for implementing this blue/green deployment approach:
⦁   Update DNS routing with Amazon Route 53
⦁   Swap the auto scaling group behind the Elastic Load Balancer
⦁   Update auto scaling group launch configurations
⦁   Swap the environment of an Elastic Beanstalk application
⦁   Clone a stack in AWS OpsWorks and update DNS
Best practices and recommendations for this approach are:
⦁   To meet data consistency requirements, both the blue and green environments should share the same data stores.
⦁   Decouple schema changes from code changes.

⦁   The approach we followed in my earlier project was to swap the auto scaling groups behind the ELB. In this technique, auto scaling manages the EC2 resources for the blue and green environments, while the ELB handles the production traffic between them. Whenever a new instance is added to an auto scaling group, it is automatically added to the load balancing pool, provided it passes the health checks, which can be simple pings or more complex connection requests that occur at configurable intervals and have defined thresholds.

Through this technique, as usual, the blue environment represents the current version of the application while the new version is staged in the green environment. When it is time to deploy the code, we simply attach the new auto scaling group in the green environment to the load balancer, which uses the least-outstanding-requests routing algorithm, thereby diverting traffic to the green environment. The amount of traffic can be controlled by adding or removing instances in the green environment’s auto scaling group. After scaling up the instances in the green group, we can either terminate the blue instances or place them in standby mode, keeping a rollback option available whenever we find an issue with the new version.



Although the majority of people equate Docker containers with VMs, the two technologies differ in the way they operate. They share some similarities — both are designed to provide an isolated environment in which to run an application — but the key difference is the underlying architecture. Virtual machines have a full OS with its own memory management, and every guest OS runs as an individual entity on the host system, whereas containers run on the host OS itself and use ephemeral storage. Containers are therefore smaller than virtual machines and enable faster start-up with better performance and greater compatibility (though less isolation), made possible by sharing the host’s kernel. Docker is not a virtualization technology; it’s an application delivery technology.

Finally, the benefits of Docker containers include:
⦁   They are lightweight in nature and use less memory
⦁   They take less time to start

Future is about integrating the Docker containers with VMs which will provide them with pros like proven isolation, security properties, mobility, dynamic virtual networking, software-defined storage and massive ecosystem.


Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
The Docker daemon runs the instructions in the Dockerfile one by one, committing the result of each instruction to a new image if necessary, before finally outputting the ID of your new image.
The format of a Dockerfile is: INSTRUCTION arguments

Although instructions are not case-sensitive, the convention is to write them in uppercase to distinguish them from arguments.
The instructions frequently used in a Dockerfile are: FROM, RUN, COPY, ADD, ENV, EXPOSE, WORKDIR, CMD and ENTRYPOINT.

Before the Docker CLI sends the context to the Docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it. This helps to avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY.


Common Docker commands:
docker build -t
docker inspect
docker run -d -it --rm -p --name -P (for all ports) -v --net --net=bridge --ip --link
docker search
docker login and docker logout
docker ps and docker ps -a
docker pull and docker push
docker images
docker attach and docker exec
docker commit -m -a container ID/image ID
docker rmi $(docker images --quiet --filter "dangling=true")
docker rm and docker rmi
docker rm/stop $(docker ps -a -q)
docker system prune
docker --version – displays the Docker version
docker version – displays details of client and server: version, API version, Go version, build number, etc.
docker info – detailed information about the Docker engine
docker network ls – lists the available Docker networks
docker network inspect bridge-name – information about a Docker bridge
docker network create --subnet <> --gateway <> --ip-range <> --driver <> --label <> bridge-name
docker network rm bridge-name – caution: never remove a default network, as you cannot retrieve it

Command – description:

docker attach – Attach to a running container
docker build – Build an image from a Dockerfile
docker checkpoint – Manage checkpoints
docker commit – Create a new image from a container’s changes
docker container – Manage containers
docker cp – Copy files/folders between a container and the local filesystem
docker create – Create a new container
docker deploy – Deploy a new stack or update an existing stack
docker diff – Inspect changes to files or directories on a container’s filesystem
docker events – Get real-time events from the server
docker exec – Run a command in a running container
docker export – Export a container’s filesystem as a tar archive
docker history – Show the history of an image
docker image – Manage images
docker images – List images
docker import – Import the contents from a tarball to create a filesystem image
docker info – Display system-wide information
docker inspect – Return low-level information on Docker objects
docker kill – Kill one or more running containers
docker load – Load an image from a tar archive or STDIN
docker login – Log in to a Docker registry
docker logout – Log out from a Docker registry
docker logs – Fetch the logs of a container
docker network – Manage networks
docker pause – Pause all processes within one or more containers
docker plugin – Manage plugins
docker port – List port mappings or a specific mapping for the container
docker ps – List containers
docker pull – Pull an image or a repository from a registry
docker push – Push an image or a repository to a registry
docker rename – Rename a container
docker restart – Restart one or more containers
docker rm – Remove one or more containers
docker rmi – Remove one or more images
docker run – Run a command in a new container
docker save – Save one or more images to a tar archive (streamed to STDOUT by default)
docker search – Search Docker Hub for images
docker secret – Manage Docker secrets
docker service – Manage services
docker stack – Manage Docker stacks
docker start – Start one or more stopped containers
docker stats – Display a live stream of container(s) resource usage statistics
docker stop – Stop one or more running containers
docker swarm – Manage Swarm
docker system – Manage Docker
docker tag – Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
docker top – Display the running processes of a container
docker unpause – Unpause all processes within one or more containers
docker update – Update configuration of one or more containers
docker version – Show the Docker version information
docker volume – Manage volumes
docker wait – Block until one or more containers stop, then print their exit codes


The Registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker images.

Docker Hub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.


The major difference is that ADD can do more than COPY:
⦁   ADD allows <src> to be a URL
⦁   If the <src> parameter of ADD is an archive in a recognized compression format, it will be unpacked


⦁   docker stop: stop a running container by sending SIGTERM and then SIGKILL after a grace period
⦁   docker kill: kill a running container using SIGKILL or a specified signal
⦁   docker logs <container_ID>
⦁   docker stats <container_ID>
⦁   docker cp <container_ID>:path_to_logs /local/path
⦁   docker exec -it <container_ID> /bin/bash
⦁   docker commit <container_id> my-broken-container && docker run -it my-broken-container /bin/bash


⦁   Containers should be ephemeral

⦁   Use a .dockerignore file
⦁   Avoid installing unnecessary packages
⦁   Each container should have only one concern
⦁   Minimize the number of layers
⦁   Sort multi-line arguments


⦁   Service Discovery
⦁   Load Balancing
⦁   Secrets/configuration/storage management
⦁   Health checks
⦁   Auto-[scaling/restart/healing] of containers and nodes
⦁   Zero-downtime deploys

Kubernetes is a system developed by Google to manage containerized applications in a clustered environment. It is primarily meant to address the gap between modern cluster infrastructure and the assumptions that the majority of applications make about their environments.

The controlling services in a Kubernetes cluster are called the master, or control plane, components. These operate as the main management contact points for administrators, and provide many cluster-wide systems for the relatively dumb worker nodes. These services can be installed on a single machine, or distributed across multiple machines.


Etcd: Kubernetes uses etcd, a distributed key-value store that can be spread across multiple nodes, to store configuration data used by each node in the cluster. It can be configured on a single master server or distributed among several machines, provided it remains reachable from each of the Kubernetes machines.

API Server: It is the management point of the entire cluster which allows for configuration of Kubernetes workloads and organizational units. Acts as a bridge between various components to maintain cluster health.

Controller Manager Service: It is the one which maintains the state of the cluster which reads the latest information and implements the procedure that fulfills the desired state. This can involve scaling an application up or down, adjusting endpoints, etc.

Scheduler Service: This assigns the work load to the nodes and tracks resource utilization on each host to ensure that they are not overloaded.


These are the nodes on which the actual work is done. They have the following requirements in order to communicate with the master components and configure networking for containers:

Docker, running on a dedicated subnet

Kubelet service: the main contact point with the master components. It receives commands and work, and interacts with etcd to read configuration details of the nodes.

Proxy Service: used to deal with individual-host-level subnetting and to make services available to external parties by forwarding requests to the correct containers.

Kubernetes work units: while containers are used to deploy applications, the workloads that define each type of work are specific to Kubernetes.

Pods: the basic unit, which generally represents one or more containers that should be controlled as a single “application”. A pod models an application-specific “logical host” and can contain different application containers which are relatively tightly coupled. Horizontal scaling is generally discouraged at the pod level because other units are better suited to the task.
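A minimal illustrative pod manifest (the name and image are placeholders) showing a pod wrapping a single application container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # placeholder name
  labels:
    app: web               # labels let services/controllers select this pod
spec:
  containers:
    - name: web
      image: nginx:1.25    # placeholder image
      ports:
        - containerPort: 80
```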

Services: a service, when described this way, is a unit that acts as a basic load balancer and ambassador for other containers. A service groups together a logical collection of pods that perform the same function and presents them as a single entity. Services are an interface to a group of containers, so that consumers do not have to worry about anything beyond a single access location.

Replicated Controllers:

A more complex version of a pod is a replicated pod. These are handled by a type of work unit known as a replication controller.

A replication controller is a framework for defining pods that are meant to be horizontally scaled. The work unit is a nested unit. A template is provided, which is basically a complete pod definition. This is wrapped with additional details about the replication work that should be done.

Kubernetes was used for: creating new projects; services for load balancing, adding them to routes to be accessible from outside; creation of pods through new applications and controlling the scaling of pods; troubleshooting pods through ssh and logs; and writing/modifying build configs, templates, image streams, etc.


It is one of the most popular and stable management platforms for Docker containers – it powers Google Container Engine on the Google Cloud Platform.

In Kubernetes, a group of one or more containers is called a pod. Containers in a pod are deployed together, and are started, stopped, and replicated as a group. A pod could represent e.g. a web server with a database that run together as a micro service including shared network and storage resources. Replication controllers manage the

Deployment of pods to the cluster nodes and are responsible for creation, scaling and termination of pods. For example, incase of a node shutdown, there application controller moves the pods to other nodes to ensure the desired number of replicas for this pod is available.

Kubernetes services provide connectivity with a load-balancing proxy for multiple pods that belong to a service. This way clients don't need to know which node runs a pod for the current service request. Each pod can have multiple labels. These labels are used to select resources for operations. For example, replication controllers and services discover pods by label selectors for various operations.
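The pod/service/label relationship described above can be sketched in a single manifest; the names and the nginx image are illustrative, not from these notes:

```shell
# Write a pod and a service that selects it by label (all names are examples)
cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web            # the label the service selects on
spec:
  containers:
  - name: nginx
    image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # the service discovers pods via this label selector
  ports:
  - port: 80
EOF
# On a live cluster: kubectl apply -f web-pod.yaml
grep -c '^kind:' web-pod.yaml   # two objects in one manifest
```

A replication controller would wrap the same pod template with a replica count; the service keeps selecting the pods by label either way.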


⦁   The first prerequisite is to install the ntpd package on the master as well as the minions, and we need to enable and start this service to ensure that all the servers are time-synchronized.

⦁   Name the master and minions accordingly and save that in the /etc/hosts file in order to refer to them by those names rather than public IPs.

⦁   Next is to create a repo to pull the latest docker package in /etc/yum.repos.d/virt7-docker-common-release and add the content with the name and base url of the repo along with gpgcheck=0, and then run yum update to pull packages onto all the servers.

⦁   Note: For our lab, we just need to ensure the iptables and firewalld services are disabled.

⦁  Now we need to install two packages, docker and kubernetes, on all the servers: yum install -y --enablerepo=virt7-docker-common-release kubernetes docker

⦁  Now we need to configure the master server. The first step is to edit the config file in /etc/kubernetes/, where we edit the KUBE_MASTER part to bind it to an interface that we can communicate with: we change its value to the master name that we set in the /etc/hosts file and leave the default port at 8080. We also add KUBE_ETCD_SERVERS="--etcd-servers=http://master_name:2379"

⦁   The next step is to configure etcd on the master by editing the config file in /etc/etcd/, changing the listen-client and advertise-client URLs to listen to all the servers on port 2379.

⦁   The third step is to edit the api server file in /etc/kubernetes/ and change the kube api address to bind to all servers, ensure that the port on the local server is listening on 8080 and the kubelet port is 10250 (the default), and we can edit admission control to restrict additional nodes and kubelets entering our environment.
⦁   Finally, we need to enable the services etcd, kube-api server, kube-controller-manager and kube-scheduler.
⦁   For configuring minions, the first step is to edit the kubernetes config file by changing the kube master to look for the name rather than the IP, and to add the etcd servers value to interact with the master etcd server.

⦁   Next is to edit the kubelet config file, where we change the kubelet address to bind all addresses, enable the kubelet port, set the kubelet hostname to bind to the minion name, and map the kubelet api server to interact with the one in the master.
⦁   Now enable and start the services: kube-proxy, kubelet and docker.
⦁   kubectl is the CLI used to work with k8s:
⦁   kubectl get nodes – to get the nodes registered with the master; use the -o flag to define the output format and filter it
⦁   kubectl describe nodes – details about nodes
⦁   kubectl get pods – to list out pods on the cluster


Terraform is an open source tool that allows you to define infrastructure for a variety of cloud providers (e.g. AWS, Azure, Google Cloud, DigitalOcean, etc.) using a simple, declarative programming language, and to deploy and manage that infrastructure using a few CLI commands.

Why Terraform? To orchestrate multiple services/providers in a single definition: create instances with a cloud provider, create DNS records with a DNS provider, and register key/value entries in Consul, all with a single solution that supports multiple services.

Terraform Components:

Terraform allows you to create infrastructure configurations that affect resources across multiple cloud services and cloud platforms with the following components:

⦁   Configurations: text files which hold the infrastructure resource definitions, in .tf or .tf.json format.
⦁   Providers: Terraform leverages multiple providers to talk to back-end platforms and services, like AWS, Azure, DigitalOcean, Docker, or OpenStack.

⦁   Resources: Resources are the basic building blocks of a Terraform configuration. When you define a configuration, you are defining one or more (typically more) resources. These are provider-specific and are recreated from the configuration during migrations.
⦁   Variables: Terraform supports the use of variables, making configurations more portable and more flexible. A single configuration can be re-used multiple times by changing variable values.

We define all the data for terraform in four files:

⦁ – contains information specific to aws (the provider)
⦁ – contains variables that will be later used by terraform
⦁ – contains the bulk of terraform configuration (resources)
⦁ – specifies the information that should be output

After authoring these files, the first step is to use the terraform plan command. The plan command lets you see what Terraform will do before actually doing it. This is a great way to sanity-check your changes before unleashing them onto the world. The output of the plan command is a little like the output of the diff command: resources with a plus sign (+) are going to be created, resources with a minus sign (-) are going to be deleted, and resources with a tilde sign (~) are going to be modified.
Next is to use terraform apply, which actually does the work.
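A minimal configuration sketch tying the components together; the file layout is collapsed into one file here, and all values (AMI id, instance name) are placeholders, not from these notes:

```shell
# Write a tiny Terraform configuration: a variable, a resource, and an output
cat > main.tf <<'EOF'
variable "instance_name" {
  default = "web-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"    # placeholder AMI id
  instance_type = "t2.micro"
  tags = {
    Name = var.instance_name        # re-use through a variable
  }
}

output "name" {
  value = var.instance_name
}
EOF
# terraform init   # downloads the AWS provider plugin
# terraform plan   # preview: + create, - destroy, ~ modify
# terraform apply  # performs the planned changes
grep -c '^resource' main.tf
```

In a real project the variable and output would live in their own files, but Terraform simply merges every .tf file in the directory.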

NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.

Database: Oracle 11g (MYSQL)

The types of statements in MySQL are:
Data Manipulation Language (DML)

DML statements are used to work with data in an existing database. The most common DML statements are SELECT, INSERT, UPDATE and DELETE.

Data Definition Language (DDL)
DDL statements are used to structure objects in a database. The most common DDL statements are CREATE, ALTER and DROP.


Data Control Language (DCL)

DCL statements are used for database administration. The most common DCL statements are:
⦁   GRANT
⦁   REVOKE
⦁   DENY (SQL Server only)

We used to handle the DMLs, while the DDLs and DCLs were handled by the database admins. I know a little bit about querying databases; for example, I can retrieve data using select statements.

Ex: SELECT Column FROM Table ORDER BY Column


We can develop various shell scripts for the purpose of automating certain tasks. The shell scripts usually developed serve tasks like: deployment of artifacts, log rotation, checking disk space and notifying through email alerts, etc.

⦁   Check disk space and send email alerts:
#!/bin/bash
MAX=95
PART=sda1
EMAIL=admin@example.com   # recipient address (set to your own)
USE=$(df -h | grep $PART | awk '{ print $5 }' | cut -d'%' -f1)
if [ "$USE" -gt "$MAX" ]; then
  echo "Percent used: $USE" | mail -s "Running out of disk space" "$EMAIL"
fi
⦁   Checking server utilization:
#!/bin/bash
date
echo "uptime:"
uptime
echo "Currently connected:"
w
echo " "
echo "Last logins:"
last -a | head -3
echo " "
echo "Disk and memory usage:"
df -h | xargs | awk '{print "Free/total disk: " $11 " / " $9}'
free -m | xargs | awk '{print "Free/total memory: " $17 " / " $8 " MB"}'
echo " "
start_log=$(head -1 /var/log/messages | cut -c 1-12)
oom=$(grep -ci kill /var/log/messages)
echo -n "OOM errors since $start_log :" $oom
echo ""
echo " "
echo "Utilization and most expensive processes:"
top -b | head -34
echo " "
echo "Open TCP ports:"
nmap -p- -T4 127.0.0.1
echo " "
echo "Current connections:"
ss -s
echo " "
echo "processes:"
ps auxf --width=200
echo " "
echo "vmstat:"
vmstat 1 5

Script to clear the cache from RAM:

## Bash Script to clear cached memory on (Ubuntu/Debian) Linux
## By Philipp Klaus
## see <>
if [ "$(whoami)" != "root" ]; then
  echo "You have to run this script as Superuser!"
  exit 1
fi

# Get Memory Information
freemem_before=$(cat /proc/meminfo | grep MemFree | tr -s ' ' | cut -d ' ' -f2) && freemem_before=$(echo "$freemem_before/1024.0" | bc)
cachedmem_before=$(cat /proc/meminfo | grep "^Cached" | tr -s ' ' | cut -d ' ' -f2) && cachedmem_before=$(echo "$cachedmem_before/1024.0" | bc)

# Output Information
echo -e "This script will clear cached memory and free up your ram.\n\nAt the moment you have $cachedmem_before MiB cached and $freemem_before MiB free memory."

# Test sync
sync
if [ "$?" != "0" ]; then
  echo "Something went wrong, it's impossible to sync the filesystem."
  exit 1
fi


Ansible is a simple and powerful IT automation tool whose functions include:

⦁   Provisioning
⦁   Configuration management
⦁   Continuous delivery
⦁   Application deployment
⦁   Security & compliance

All ansible playbooks are written in YAML ("YAML Ain't Markup Language", originally "Yet Another Markup Language"). Ansible playbooks are simple text files. YAML is used to represent configuration data. A sample of data representation in YAML:
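As an illustration (the data itself is invented), key/value pairs, lists and dictionaries are represented like this:

```shell
# A tiny YAML sample: scalar, list and dictionary (illustrative data only)
cat > sample.yml <<'EOF'
server: web01              # key/value pair
packages:                  # list
  - httpd
  - glibc
owner:                     # dictionary (nested key/value pairs)
  name: admin
  contact: ops-team
EOF
grep -c '^  - ' sample.yml   # count the list items
```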

Ansible can work with one to many servers at the same time and uses SSH (or WinRM/PowerShell for Windows) to connect to those servers, thus making it agentless. The information about these servers, which are called target systems, is kept in the inventory file. If we don't create an inventory file, then Ansible uses the default file located at /etc/ansible/hosts. The inventory file is a simple INI file which holds information like hostnames and groups containing hostnames.

The inventory parameters of ansible are:
⦁ ansible_host – to use an alias for the FQDN
⦁ ansible_connection – to indicate the type of connection: SSH/winrm/localhost
⦁ ansible_port – defines which port to connect to, where 22 is the default
⦁ ansible_user – to define the user
⦁ ansible_ssh_pass – to define the password (passwords can be encrypted and stored in ansible vault)
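A sample inventory using these parameters might look like this; the hosts and values are invented:

```shell
# An INI inventory with two groups and per-host connection parameters
cat > inventory.ini <<'EOF'
[webservers]
web1 ansible_host=192.168.1.10 ansible_user=deploy ansible_port=22
web2 ansible_host=192.168.1.11 ansible_user=deploy

[dbservers]
db1 ansible_host=192.168.1.20 ansible_connection=ssh
EOF
# ansible webservers -i inventory.ini -m ping   # would target only that group
grep -c 'ansible_host' inventory.ini
```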


It is a set of instructions that we provide to ansible to do some work on our behalf.

All playbooks are written in YAML. A play defines the set of activities (tasks) to be run on hosts, which include: execute a command, run a script, install a package, shutdown/restart, etc. Syntax:
name: name_of_action
hosts: name_of_host
tasks: define the set of tasks here, using different modules like command, script, yum, service, etc. Each list item is introduced by '-' on a separate line. The hosts that we intend to use in the playbook should be specified in the inventory file.

The different actions run by tasks are called modules, e.g. command, script, yum, service. ansible-doc -l will show all the available modules.
To apply the playbook, we simply need to execute it using the command syntax: ansible-playbook <playbook_name>.yml
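A minimal playbook following the syntax described above; the play and task names are examples:

```shell
# A play targeting the webservers group with two tasks (yum + service modules)
cat > playbook.yml <<'EOF'
- name: configure web hosts
  hosts: webservers
  tasks:
    - name: install httpd
      yum:
        name: httpd
        state: present
    - name: make sure httpd is running
      service:
        name: httpd
        state: started
EOF
# ansible-playbook -i inventory.ini playbook.yml   # applies the play
grep -c 'name: httpd' playbook.yml
```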

Modules are categorized into groups like:

⦁   system – actions to be performed at the system level: user, group, hostname, iptables, lvg, lvol, make, mount, ping, timezone, systemd, service, etc.
⦁   commands – used to execute commands on hosts: command, expect, raw, shell, script, etc.
⦁   files – to work with files: acl, archive, copy, file, find, lineinfile, replace, stat, template, unarchive, etc.
⦁   database – helps in working with databases: mongodb, mssql, mysql, postgresql, proxysql, vertica, etc.
⦁   cloud – modules for cloud service providers: a module in the name of each cloud provider.
⦁   windows – to work with windows hosts: win_copy, win_command, win_domain, win_file, win_iis_website, win_msg, win_msi, win_package, win_ping, win_path, win_robocopy, win_regedit, win_shell, win_service, win_user, etc.

The parameters of the command module include: chdir – to change directory; creates – run the command only if the given file does not already exist; executable – change the shell; free_form – to take a free-form command without the need for parameters; removes – run the command only if the given file exists; warn; etc.
The script module executes a local script on one or more remote hosts. It copies that script on to the hosts and then runs the same.

The service module is used to manage the services on the hosts, like starting, stopping and restarting services. We use parameters like name and state to define the service module, where name defines the service and state defines its action, like started, stopped, restarted. We state actions like this because it ensures the service is started if it is not already started; this is referred to as idempotency, which implies that the result of performing the action once is exactly the same as performing it repeatedly without intervening actions.

The lineinfile module is used to search for a line in a file and either replace it or add it if it is not present.
A variable is used to store information that varies with each host. Examples of variables in the inventory file include ansible_host, ansible_connection and ansible_ssh_pass. We can have all the variables declared in a separate variables file. To use a variable's value, replace the value with the variable mentioned in {{ variable }}. It is advised to define the variables in either the inventory file or a separate host-specific file so that the playbooks can be reused.
Conditionals are used to check upon certain conditions. We use equals (==), or, register, when, etc. register is used to store the output of a module.
with_items is a loop construct which executes the same task multiple times.

Ex:
yum: name='{{ item }}' state=present
with_items:
⦁   httpd
⦁   glibc
⦁   nodejs

We organize our code into packages, modules, classes and functions, and this is implemented in the form of roles, as we have inventory files, variables and playbooks. It is not advisable to mention all the actions in a single large playbook. That's where the include statement and roles come into play. The syntax for include is: - include <playbook_name>. We can even declare variables and tasks in separate files and use them in the playbook by mentioning vars_files for variables and include for tasks.
Roles define a structure for the project and set standards for organizing folders and files in the project. Roles are used to simplify the functionality of a server by grouping servers into web servers, db servers, etc. This way we can easily maintain their configuration by mentioning in the playbook that the host should assume that role. The folder structure of a role is:

⦁   Ansible project
⦁   Inventory.txt
⦁   Setup_applications.yml
⦁   Roles

⦁   Webservers
⦁   Files
⦁   Templates
⦁   Tasks
⦁   Handlers
⦁   Vars
⦁   Defaults
⦁   Meta
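The role layout above can be scaffolded in a few commands; a sketch, using the webservers role name from the structure shown:

```shell
# Recreate the role skeleton described above; `ansible-galaxy init webservers`
# would scaffold a similar layout, including starter main.yml files
mkdir -p roles/webservers/files roles/webservers/templates \
         roles/webservers/tasks roles/webservers/handlers \
         roles/webservers/vars roles/webservers/defaults roles/webservers/meta
ls roles/webservers
```

In a real project, each subdirectory holds a main.yml that the role loads automatically when a play assigns the role to a host.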

The advantage of roles is that we are not required to import tasks and variables into the playbook, as roles take care of it.

The Ansible control machine can only be Linux, not Windows, but we can connect to and control Windows machines using a winrm connection, which is not configured by default.

Some configuration is required to set up winrm on the Windows machine, but nothing needs to be installed. Requirements include:

⦁   Have pywinrm installed on the ansible control machine – pip install "pywinrm>=0.2.2"
⦁   Now set up winrm on the Windows server using the script provided by ansible, called ConfigureRemotingForAnsible, which is a PowerShell script that can be downloaded onto the Windows machine and run to set up winrm.
⦁   We can set different modes of authentication like: basic, certificate, Kerberos, NTLM, CredSSP, etc.

Ansible Galaxy is a free site for downloading, sharing and writing all kinds of community-based ansible roles.

Patterns:
⦁   Host1, Host2, Host3
⦁   Group1, Host1
⦁   Host*

Dynamic inventory: the inventory file we mentioned earlier is static and you need to change the information every time; thus, we can use a dynamic inventory script by giving it as an input to the playbook using the '-i' flag. Syntax: ansible-playbook -i inventory.py playbook.yml
Custom modules: we can develop our own module using a python program and place it in the modules folder.


What is REST?

REST stands for Representational State Transfer. REST is a web-standards-based architecture and uses the HTTP protocol for data communication. It revolves around resources, where every component is a resource and a resource is accessed by a common interface using HTTP standard methods. REST was first introduced by Roy Fielding in 2000.

In REST architecture, a REST Server simply provides access to resources and the REST client accesses and presents the resources. Here each resource is identified by URIs/ Global IDs. REST uses various representations to represent a resource like Text, JSON and XML. JSON is now the most popular format being used in Web Services.

API transaction scripts: with details like endpoint, method,

The following HTTP methods are most commonly used in a REST based architecture.

⦁   GET − Provides read-only access to a resource.
⦁   PUT − Used to create a new resource.
⦁   DELETE − Used to remove a resource.
⦁   POST − Used to update an existing resource or create a new resource.
⦁   OPTIONS − Used to get the supported operations on a resource.


SOAP is most appropriately used for large enterprise applications rather than smaller, more mobile applications. SOAP architecture is most useful when a formal contract must be established to describe the interface that the web service offers, such as details regarding messages, operations, bindings, and the location of the web service. Therefore, SOAP should be used when more capability is needed. For example, providing up-to-date stock prices to subscribing websites is a good time to use SOAP, since a greater amount of program interaction between client and server is required than REST can provide.


REST is implemented most easily using ASP.NET web API in MVC 4.0. REST is most appropriately used for smaller, more mobile applications, rather than large, enterprise applications. This is because REST is best used as a means of publishing information, components, and processes to make them more accessible to other users and machine processes.

An online publisher could use REST to make syndicated content available by periodically preparing and activating a web page that included content and XML statements that described the content.
Overall, if the project requires a high level of security and a large amount of data exchange, then SOAP is the appropriate choice. But if there are resource constraints and you need the code to be written faster, then REST is better. Ultimately it depends on the project and the circumstances which architecture fits best.


Admins use multiple JVMs primarily to solve the following issues:
⦁ Garbage collection inefficiencies
⦁ Resource utilization
⦁ 64-bit issues
⦁ Availability


Stateless web services do not maintain a session between requests. An example of this would be sites like search engines which just take your request, process it, and spit back some data.
Stateful web services maintain a session during your entire browsing experience. An example of this would be logging into your bank’s website or your web based email like GMail.
Nexus 2.14.3-02
Tomcat 7
Jenkins 2.46.3 LTS
Chef
Ant 1.9.7
Java 1.8.0-121
SVN 1.6.4
Maven 3.3.9
DB server – NoSQL
VM – VM Player 12.5.6
CentOS – 6
Docker – 1.8.0
Kubernetes – RHEL – 7.1


Troubleshooting Apache Web server (server not starting/restarting):
⦁   Check for config syntax errors using httpd -t/-s, which returns SYNTAX OK or SYNTAX ERROR AT LINE <so and so>.
⦁   Check the Apache error log file: tail -f /var/log/httpd-error.log
⦁   Check that your server name is set correctly in httpd.conf
⦁   Check for log files over 2GB, because they can cause problems or error 500; make sure that log files are under the limit by moving or removing them out of the log directories through log rotation.
⦁   Check the availability of ports 80 and 443

Directory structure of the Apache HTTP server:

⦁   bin – executables
⦁   conf
⦁   error
⦁   icons
⦁   include
⦁   lib
⦁   logs
⦁   modules


Running multiple web applications in one Tomcat server:

⦁   Go to server.xml in the conf dir and add a new service tag with details for the second application, like: port, service name (webapps2), engine name (webapps2), appBase (webapps2), etc.
⦁   Next, create a directory with the name webapps2 in order to accommodate the second application.
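The first step might look like the following server.xml fragment; the connector port 8081 is an assumption (any free port works), while the webapps2 names come from the steps above:

```shell
# Sketch of the extra <Service> element to add inside server.xml
cat > service-webapps2.xml <<'EOF'
<Service name="webapps2">
  <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000"/>
  <Engine name="webapps2" defaultHost="localhost">
    <Host name="localhost" appBase="webapps2"
          unpackWARs="true" autoDeploy="true"/>
  </Engine>
</Service>
EOF
grep -c 'webapps2' service-webapps2.xml
```

After restarting Tomcat, applications dropped into the webapps2 directory are served on the new connector port.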

Benefits of Tomcat over other servers:
⦁   Lightweight.
⦁   Widely used.
⦁   Much faster than other containers.
⦁   Easy to configure.
⦁   Very flexible.
Application Server Tuning

Tomcat JDBC resource configuration:

Java Database Connectivity (JDBC) allows Java technologies to connect to a wide variety of database types, over a single protocol, without altering the Java source code, through a JDBC driver which translates Java code into database queries.
⦁   Download and install a JDBC driver: there are four driver types:
⦁   JDBC-ODBC (Open Database Connectivity) bridge (inefficient due to the doubled transformation),
⦁   Native-API driver (similar to 1 and uses the client-side OS-specific native API),
⦁   Network-protocol driver (forwards requests to a middleware server, which supports one or several different data formats), and
⦁   Native-protocol driver. The simplest and most efficient of all the driver types, this is a driver written in pure Java that performs a conversion of JDBC API calls to the necessary database format and directly connects to the database in question through a socket.
⦁   Configure your database as a JNDI (Java Naming and Directory Interface) resource:
⦁   The database resource needs to be declared in one of two places:
⦁   If you wish that database to be application-specific, then declare it in the application's META-INF/Context.xml

⦁   If you wish that database to be used by all applications, then declare your resource in server.xml and mention the database as a resource reference in META-INF/Context.xml
⦁   To make your application more portable in the application-specific scenario, we need to provide resource reference information in WEB-INF/web.xml
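As a sketch, an application-specific JNDI datasource declared in META-INF/Context.xml could look like this; the resource name, connection URL and credentials are placeholders, not from these notes:

```shell
# A Tomcat 7-style JNDI datasource declaration (placeholder values throughout)
cat > context.xml <<'EOF'
<Context>
  <Resource name="jdbc/AppDB" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/appdb"
            username="appuser" password="changeme"
            maxActive="20" maxIdle="10"/>
</Context>
EOF
grep -c 'jdbc' context.xml
```

Application code would then look the resource up through JNDI under java:comp/env/jdbc/AppDB instead of hard-coding connection details.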
Tomcat clustering and load balancing with HTTP:

⦁   Download and install mod_jk: mod_jk is the Apache HTTPD module that will be used to provide our cluster with its load balancing and proxy capabilities. It uses the AJP protocol to facilitate fast communication between Tomcat servers and the Apache Web Server that will receive the client requests. Download, unzip and place it in the modules directory of httpd.
⦁   The next step is to configure/set up mod_jk in the httpd.conf file:
# Load module
LoadModule jk_module path/to/apache2/
# Specify path to worker configuration file
JkWorkersFile /path/to/apache2/conf/
# Configure logging and memory
JkShmFile /path/to/desired/log/location/mod_jk.shm
JkLogFile /path/to/desired/log/location/mod_jk.log
JkLogLevel info
# Configure monitoring
JkMount /jkmanager/* jkstatus
<Location /jkmanager>
  Order deny,allow
  Deny from all
  Allow from localhost
</Location>
# Configure applications
JkMount /webapp-directory/* LoadBalancer
⦁   Configure the cluster workers, which refers to the Tomcat servers that process the requests, and the virtual workers of the module which handle load balancing, in workers.properties:
# Define worker names
worker.list=jkstatus,LoadBalancer
# Create virtual workers
worker.jkstatus.type=status
worker.LoadBalancer.type=lb
# Declare Tomcat server workers 1 through n
worker.worker1.type=ajp13
worker.worker1.port=8009
# ...
# Associate real workers with virtual LoadBalancer worker
worker.LoadBalancer.balance_workers=worker1
⦁ Configure tomcat workers – enabling session replication, serializable session attributes, sticky sessions, making your applications distributable, setting the jvm route, keeping your workers in sync, and then configure your clusters.
⦁ Example clustering configuration:
<Engine name="Catalina" defaultHost="" jvmRoute="[worker name]">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
<Channel className="">
<Membership className="org.apache.catalina.tribes.membership.McastService" address="" port="45564" frequency="500" dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver" address="auto" port="4000" autoBind="100" selectorTimeout="5000" maxThreads="6"/>
<Interceptor className=""/>
<Interceptor className=""/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</Engine>


It is a cloud-based computing model that provides the infrastructure needed to develop, run and manage applications. It is a PaaS that transforms three areas of IT: consolidation – of all the tasks; consumerization – providing a user-friendly experience; and automation – process automation: task generation, approval, workflow, etc.


Coverage on new code – < 80%
Maintainability rating on new code – anything 20-100 is good
Reliability rating on new code – A: 0 bugs, B: 1 minor, C: 1 major, D: 1 critical, E: 1 blocker
Security rating on new code – A: 0 vulnerabilities, B: 1 minor, C: 1 major, D: 1 critical, E: 1 blocker

Blocker – Operational/security risk: this issue might make the whole application unstable in production. Ex: calling the garbage collector, not closing a socket, etc.
Critical – Operational/security risk: this issue might lead to unexpected behavior in production without impacting the integrity of the whole application. Ex: NullPointerException, badly caught exceptions, lack of unit tests, etc.
Major – This issue might have a substantial impact on productivity. Ex: too-complex methods, package cycles, etc.
Minor – This issue might have a potential and minor impact on productivity. Ex: naming conventions, a finalizer that does nothing but call the superclass finalizer, etc.
Info – Unknown or not yet well-defined security risk or impact on productivity.


In a J2EE application, modules are packaged as EAR, JAR and WAR based on their functionality.
JAR: EJB modules, which contain enterprise java beans (class files) and an EJB deployment descriptor, are packed as JAR files with the .jar extension.

WAR: Web modules, which contain Servlet class files, JSP files, supporting files, GIF and HTML files, are packaged as a JAR file with the .war (web archive) extension.

EAR: All the above files (.jar and .war) are packaged as a JAR file with the .ear (enterprise archive) extension and deployed into the Application Server.


The main purpose of a proxy service (which is the kind of service either of these two provide) is very similar to what a person aims to achieve when he proxies for another person: to act on behalf of that other person. In our case, a proxy server acts on behalf of another machine – either a client or another server.
When people talk about a proxy server (often simply known as a "proxy"), more often than not they are referring to a forward proxy.

A forward proxy provides proxy services to a client or a group of clients. Oftentimes, these clients belong to a common internal network.
A reverse proxy does the exact opposite of what a forward proxy does. While a forward proxy proxies on behalf of clients (or requesting hosts), a reverse proxy proxies on behalf of servers. A reverse proxy accepts requests from external clients on behalf of servers stationed behind it.

Setting up the reverse proxy:
⦁   Enable the proxy_http Apache module: $ sudo a2enmod proxy_http
⦁   Create a new configuration file in your sites-available folder: $ sudo vim /etc/apache2/sites-available/my-proxy.conf
⦁   Add the following to that file:
<VirtualHost *:80>
    ServerName
    ServerAlias
    ErrorLog ${APACHE_LOG_DIR}/proxy-error.log
    CustomLog ${APACHE_LOG_DIR}/proxy-access.log combined
    ProxyRequests Off
    ProxyPass / http://555.867.530.9/
    ProxyPassReverse / http://555.867.530.9/
</VirtualHost>
⦁   Disable the virtual host for the current site:
$ ls /etc/apache2/sites-enabled
> 000-default.conf
$ sudo a2dissite 000-default
⦁   Enable the virtual host for the reverse proxy and restart apache:
$ sudo a2ensite my-proxy
$ sudo service apache2 restart

⦁   Considerations:
⦁   It applies only to http.
⦁   For https, set up the reverse proxy on 443 in addition to 80 for http.


The way a network operates is to connect computers and peripherals using two pieces of equipment – switches and routers. These two let the devices connected to your network talk with each other as well as talk to other networks.
Switches are used to connect multiple devices on the same network within a building or campus. For example, a switch can connect your computers, printers and servers, creating a network of shared resources.
Routers are used to tie multiple networks together. For example, you would use a router to connect your networked computers to the Internet and thereby share an Internet connection among many  users.

Redundant Array of Independent Disks is traditionally implemented in businesses and organizations where disk fault tolerance and optimized performance are must-haves, not luxuries.
Software RAID means you can setup RAID without need for a dedicated hardware RAID controller.
The one you choose depends on whether you are using RAID for performance or fault tolerance (or both); the type of RAID, software or hardware, also matters.

RAID 0 is used to boost a server's performance. It's also known as "disk striping." With RAID 0, data is written across multiple disks. It needs a minimum of two disks, and the downside is a lack of fault tolerance.
RAID 1 is a fault-tolerance configuration known as "disk mirroring." With RAID 1, data is copied seamlessly and simultaneously from one disk to another, creating a replica, or mirror. The downsides are a drag on performance and that it splits the disk capacity into two equal parts.

RAID 5 is by far the most common RAID configuration for business servers and enterprise NAS devices. This RAID level provides better performance than mirroring as well as fault tolerance. With RAID 5, data and parity (which is additional data used for recovery) are striped across three or more disks. If a disk gets an error or starts to fail, data is recreated from this distributed data and parity block, seamlessly and automatically. The downside is a performance hit for servers that perform many write operations.
RAID 6 is also used frequently in enterprises. It’s identical to RAID 5, except it’s an even more robust solution because it uses one more parity block than RAID 5.

RAID 10 is a combination of RAID 1 and 0 and is often denoted RAID 1+0. It combines the mirroring of RAID 1 with the striping of RAID 0. It's the RAID level that gives the best performance, but it is also costly, requiring twice as many disks as other RAID levels, for a minimum of four. This is the RAID level ideal for highly utilized database servers or any server that performs many write operations.
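On Linux, the software-RAID setups described above are typically built with mdadm; the following is illustration only (device names are placeholders, and because these commands would destroy any data on the disks, they are shown commented out):

```shell
# Illustration only: do NOT run against disks holding data
# RAID 1 (mirror) from two disks:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# RAID 5 (striping with parity) from three disks:
#   mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# RAID 10 (mirror + stripe) from four disks:
#   mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Check array state at any time:
#   cat /proc/mdstat
```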

It functions as a single access point for organizing all binary resources of an organization. One should adopt a binary repository manager for the following reasons:
⦁   Reliable and consistent access to remote artifacts
⦁   Reduce network traffic and optimize builds
⦁   Full support for Docker
⦁   Full integration with your build ecosystem
⦁   Security and access control
⦁   License compliance and open source governance
⦁   Distribute and share artifacts, smart search etc.


Secure Sockets Layer is the standard security technology which ensures the integrity and privacy of the connection and data flow between a browser and a web server. It actually binds the domain name, server name and company's name together. The SSL creation steps using openssl are:
1. Generate a private key – the openssl toolkit is used to generate the private key and CSR. This private key is a 1024-bit key and is stored in pem format.
⦁ openssl genrsa -des3 -out server.key 1024
2. Generate a CSR – generally this CSR is sent to a Certificate Authority, who will verify the identity of the requestor and issue a certificate.
⦁ openssl req -new -key server.key -out server.csr
3. Remove the passphrase from the key – the important reason for removing the passphrase is that Apache will ask for the passphrase every time you start the web server.
⦁ cp server.key server.key.org
⦁ openssl rsa -in server.key.org -out server.key
4. Generate a self-signed certificate - The below command creates an SSL certificate which is temporary and good for 365 days.
⦁ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
5. Install the private key and certificate -
⦁ cp server.crt /usr/local/apache/conf/ssl.crt
⦁ cp server.key /usr/local/apache/conf/ssl.key

⦁ Configure SSL-enabled virtual hosts
⦁ Restart Apache and test
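As a sketch, steps 1, 3 and 4 above can also be collapsed into a single command on modern OpenSSL; the -nodes flag skips the passphrase step entirely, and the subject below is a placeholder:

```shell
# Generate a passphrase-free key and a self-signed certificate in one step.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt -subj "/CN=example.com"

# Inspect the resulting certificate's subject.
openssl x509 -noout -subject -in server.crt
```

For a CA-signed certificate you would still follow the step-by-step flow above, since the CSR has to be sent to the Certificate Authority.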


The steps to integrate JIRA and GitHub are:

⦁   First install the DVCS connector in your JIRA account. You can do this by going into the administrator settings in the toolbar and searching for the DVCS connector under "choose add-ons".
⦁   Next configure a JIRA OAuth application in GitHub. This can be achieved by going to user account settings → applications → application settings, where we register JIRA as a new application and provide the homepage URL and authorization callback URL of our JIRA account. This will provide us with a client ID and a secret. It is recommended to create a new user for this purpose.
⦁   Finally we configure GitHub for JIRA. This can be done by adding GitHub as a new account under the development tools in any of the project admins' accounts, where we provide the client ID as the OAuth key and the secret as the OAuth secret. This step completes the integration.
⦁   It is advised to use smart commits in Git, which link a commit to an issue/ticket in JIRA.


Installing and Configuring JBoss:

JBoss Application Server is the open source implementation of the Java EE suite of services. It comprises a set of offerings for enterprise customers who are looking for preconfigured profiles of JBoss Enterprise Middleware components that have been tested and certified together to provide an integrated experience. Its easy-to-use server architecture and high flexibility make JBoss the ideal choice for users just starting out with J2EE, as well as senior architects looking for a customizable middleware platform.


⦁   Download the JBoss 7 zip file
⦁   Install Java
⦁   Create a jboss user and set its password
⦁   Copy the JBoss zip file to the jboss user's home directory, unzip it, rename it, and change ownership and permissions to 775
⦁   Set the JBoss and Java paths in either .bash_profile or .bashrc
⦁   Apply the changes with the command #source .bash_profile (or .bashrc)
⦁   Run the JBoss application using the .sh file in the JBoss bin directory
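The path setup in steps 5 and 6 typically looks like the fragment below; the exact directories are assumptions for illustration and should match your own layout:

```shell
# Appended to the jboss user's ~/.bash_profile (example paths)
export JAVA_HOME=/usr/lib/jvm/java
export JBOSS_HOME=$HOME/jboss-as-7
export PATH=$PATH:$JAVA_HOME/bin:$JBOSS_HOME/bin
```

After editing the file, apply it with `source ~/.bash_profile` as in step 6.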

Table 3.1. The JBoss top-level directory structure

Directory – Description

bin – All the entry point JARs and start scripts included with the JBoss distribution are located in the bin directory.
client – The JARs that are required for clients that run outside of JBoss are located in the client directory.
server – The JBoss server configuration sets are located under the server directory. The default server configuration set is the server/default set. JBoss ships with the minimal, default and all configuration sets. The subdirectories and key configuration files contained in the default configuration set are discussed in more detail in Chapter 4, The Default Server Configuration File Set.
lib – The lib directory contains startup JARs used by JBoss. Do not place your own libraries in this directory.

Table 3.2, "The JBoss server configuration directory structure" shows the directories inside the server configuration directory and their function.

Table 3.2. The JBoss server configuration directory structure

Directory – Description

conf – The conf directory contains the jboss-service.xml bootstrap descriptor file for a given server configuration. This defines the core services that are fixed for the lifetime of the server.
data – The data directory is available for use by services that want to store content in the file system.
deploy – The deploy directory is the default location the hot deployment service looks to for dynamic deployment content. This may be overridden through the URLDeploymentScanner URLs attribute.
lib – The lib directory is the default location for static Java libraries that should not be hot deployed. All JARs in this directory are loaded into the shared class path at startup.
log – The log directory is the directory log files are written to. This may be overridden through the conf/log4j.xml configuration file.
tmp – The tmp directory is used by JBoss to store temporary files such as unpacked deployments.


⦁ 1XX – Informational
o 100 – Continue
o 101 – Switching Protocols
⦁ 2XX – Successful
o 200 – OK
o 201 – Created
o 202 – Accepted
o 203 – Non-Authoritative Information
o 204 – No Content
o 205 – Reset Content
o 206 – Partial Content
⦁ 3XX – Redirection
o 300 – Multiple Choices
o 301 – Moved Permanently
o 302 – Found
o 303 – See Other
o 304 – Not Modified
o 305 – Use Proxy
o 306 – Unused
o 307 – Temporary Redirect
⦁ 4XX – Client Error
o 400 – Bad Request
o 401 – Unauthorized
o 402 – Payment Required
o 403 – Forbidden
o 404 – Not Found
o 405 – Method Not Allowed
o 406 – Not Acceptable
o 407 – Proxy Authentication Required
o 408 – Request Timeout
o 409 – Conflict
o 410 – Gone
o 411 – Length Required
o 412 – Precondition Failed
o 413 – Request Entity Too Large
o 414 – Request-URI Too Long
o 415 – Unsupported Media Type
o 416 – Requested Range Not Satisfiable
o 417 – Expectation Failed
⦁ 5XX – Server Error
o 501 – Not Implemented
o 502 – Bad Gateway
o 503 – Service Unavailable
o 504 – Gateway Timeout
o 505 – HTTP Version Not Supported
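These codes are available programmatically in Python's standard library, which is a quick way to check a reason phrase while debugging:

```python
from http import HTTPStatus

# Standard reason phrases for a few of the codes listed above.
print(HTTPStatus(404).phrase)   # Not Found
print(HTTPStatus(503).phrase)   # Service Unavailable

# Classify a code by its first digit, matching the 1XX-5XX groups.
def status_class(code):
    return {1: "Informational", 2: "Successful", 3: "Redirection",
            4: "Client Error", 5: "Server Error"}[code // 100]

print(status_class(301))        # Redirection
```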

Call us for a free demo on AWS, VMware, Citrix, Azure, DevOps, Python and realtime projects.
Calls will be forwarded to our trainers for the demo.

Kubernetes Interview Questions

1. Difference between Deployment and DaemonSet?
2. What are the components of the Kubernetes master and node?
3. How many pods or nodes are needed in one project?
4. How many masters are required in your project?
5. Explain Kubernetes architecture.
6. What are the maximums of a Kubernetes cluster?
7. What are Helm charts?
8. Write any deployment YAML file.
9. What are the types of deployments?
10. What are the components of a deployment YAML file?
11. How will pods communicate?
12. What is the node communication process in Kubernetes?
13. How do two clusters communicate in different networks?
14. What are the networks used in k8s?
15. How many services are there in Kubernetes? What are those?
16. How to expose a service within a cluster?
17. What is NodePort and what are its ranges?
18. What are the disadvantages of Kubernetes NodePorts?
19. How is DNS used in k8s?
20. Why do we need to allocate namespaces in k8s?
21.How is Kubernetes different from Docker Swarm
22.How is Kubernetes related to Docker
23.What is Container Orchestration
24.What is the need for Container Orchestration
25.what is Rolling Updates & Rollbacks?
26.What is the difference between deploying applications on hosts and containers
27.What is Kubectl
28.What is Kubelet
29.What are the different components of Kubernetes Architecture
30.What do you understand by Kube-proxy?
31.Can you brief about the Kubernetes controller manager
32.What is ETCD?
33.What is Ingress network, and how does it work?
34.What is the difference between a replica set and replication controller
35.What are the best security measures that you can take while using Kubernetes?
36.The Kubernetes Network proxy runs on which node?
37.What are the responsibilities of Replication Controller?
38.How to define a service without a selector?
39.The handler invoked by Kubelet to check if a container’s IP address is open or not?
40.what is statefulset and difference from deployment
41.What are differences between dockerswarm and k8s
42.What are differences between openshift and k8s
43.What is ingress and Ingress Controller
44.What are CNI supported by k8s?
45. How does multi-master communication happen in Kubernetes?
46. How can we enable multi-master communication in a Kubernetes cluster?
47. What are namespaces in Kubernetes and their types?
48. What are persistent volumes and PVCs?
49. How can we allocate resources to Kubernetes?
50.What is the difference between clusterip and externalip?
51.What is EKS in AWS
52.Where Kubernetes Cluster Data Is Stored?
53. Which Container Runtimes Supported By Kubernetes?
54. How Does fluentd Work?
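For question 8 above, a minimal Deployment manifest might look like the sketch below; the name, labels and image are placeholders, not something prescribed by this list:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25       # example image
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` creates three replica pods managed by the Deployment.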

Docker Interview Questions

Q1. What is Docker?

I suggest you start with a small definition of Docker.
Docker is a containerisation platform which packages your application and all its dependencies together in the form of containers, so as to ensure that your application works seamlessly in any environment, be it development, test or production.

Now you should explain Docker containers:
Docker containers wrap a piece of software in a complete file system that contains everything needed to run: code, runtime, system tools, system libraries etc.

Anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

Containers running on a single machine share the same operating system kernel; they start instantly, as only the apps need to start (the kernel is already running), and they use less RAM.

Note: Unlike a virtual machine, which has its own OS, Docker containers use the host OS.

As you have mentioned Virtual Machines in your previous answer, the next question in this Docker Interview Questions blog will be related to the differences between the two.

Q2. What are the differences between Docker and virtual machines?

Differences: In Docker, each unit of execution is called a container; containers share the kernel of the host OS running on Linux. In hypervisor virtualization, the role of the hypervisor is to emulate the underlying hardware resources for a set of virtual machines running on the host.

Q3. What is Docker image?

A Docker image is the source of a Docker container. In other words, Docker images are used to create containers. Images are created with the build command, and they'll produce a container when started with run. Images are stored in a Docker registry such as Docker Hub; because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
Tip: Be aware of Docker Hub in order to answer questions on pre-available images.

Q4. What is Docker container?

This is a very important question so just make sure you don’t deviate from the topic and I will advise you to follow the below mentioned format:

Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system.

Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.

Now explain how to create a Docker container: Docker containers can be created by either creating a Docker image and then running it, or by using Docker images that are present on Docker Hub. Docker containers are basically runtime instances of Docker images.

Q5 What is Docker hub?

Docker hub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.


Q6. How is Docker different from other container technologies?

In my opinion, the below points should be there in your answer:

Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. You can even share containers with your applications.
If you have some more points to add you can do that, but make sure the above explanation is there in your answer.

Q7. What is Docker Swarm?

You should start this answer by explaining Docker Swarm.
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

I will also suggest you to include some supported tools:
 Dokku
 Docker Compose
 Docker Machine
 Jenkins

Q8. What is a Dockerfile used for?

This answer, in my opinion, should begin by explaining the use of a Dockerfile.
Docker can build images automatically by reading the instructions from a Dockerfile. Now I suggest you give a small definition of a Dockerfile.

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.
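As an illustration, a minimal Dockerfile might look like this; the base image and app.py are assumptions for the example, not something from this article:

```dockerfile
# Build a small image for a hypothetical Python script
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Building it with `docker build -t myapp .` produces an image you can then start with `docker run myapp`.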
Now, the next set of Docker interview questions will test your experience with Docker.

Q9. Can I use json instead of yaml for my compose file in Docker?

You can use JSON instead of YAML for your Compose file. To use a JSON file with Compose, specify the filename to use, for example:

eg: docker-compose -f docker-compose.json up

Q10. Tell us how you have used Docker in your past position?

Explain how you have used Docker to help rapid deployment. Explain how you have scripted Docker and used Docker with other tools like Puppet, Chef or Jenkins.

If you have no past practical experience in Docker and have past experience with other
tools in a similar space, be honest and explain the same. In this case, it makes sense if you can compare other tools to Docker in terms of functionality.

Q11. How to create Docker container?

I suggest you give a direct answer to this.
We can use a Docker image to create a Docker container using the below command:
docker run -t -i <image_name> <command>
This command will create and start a container. You should also add: if you want to check the list of all containers with their status on a host, use the below command:
docker ps -a

Q12. How to stop and restart the Docker container?

In order to stop the Docker container you can use the below command:
docker stop <container_id>

Now to restart the Docker container you can use:
docker restart <container_id>

Q13. How far do Docker containers scale?

Large web deployments like Google and Twitter, and platform providers such as Heroku and dotCloud, all run on container technology, at a scale of hundreds of thousands or even millions of containers running in parallel.

Q14. What platforms does Docker run on?

I will start this answer by saying Docker runs only on Linux and cloud platforms, and then I will mention the below Linux distributions:
 Ubuntu 12.04, 13.04 et al
 Fedora 19/20+
 RHEL 6.5+
 CentOS 6+
 Gentoo
 ArchLinux
 openSUSE 12.3+
 CRUX 3.0+

And the cloud platforms:
 Amazon EC2
 Google Compute Engine
 Microsoft Azure
 Rackspace
Note that Docker does not run on Windows or Mac.

Q15. Do I lose my data when the Docker container exits?

You can answer this by saying, no I won’t lose my data when Docker container exits, any data that your application writes to disk gets preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.

Q16. Mention some commonly used Docker commands.

Below are some commonly used Docker commands:

  • docker run – Runs a command in a new container.
  • docker start – Starts one or more stopped containers
  • docker stop – Stops one or more running containers
  • docker build – Builds an image from a Dockerfile
  • docker pull – Pulls an image or a repository from a registry
  • docker push – Pushes an image or a repository to a registry
  • docker export – Exports a container’s filesystem as a tar archive
  • docker exec – Runs a command in a running container
  • docker search – Searches the Docker Hub for images
  • docker attach – Attaches to a running container
  • docker commit – Creates a new image from a container’s changes

What is Docker?

Docker is a platform to run each application isolated and securely. Internally it achieves it by using kernel containerization feature.

What is the advantage of Docker over hypervisors?

Docker is lightweight and more efficient in terms of resource use because it uses the underlying host kernel rather than creating its own hypervisor.

What is Docker Container?

A Docker container is the instantiation of a Docker image; in other words, it is the runtime instance of an image. An image is a set of files, whereas a container is what runs the image.

Is Container technology new?

No, it is not. Different variations of container technology were out there in the *NIX world for a long time. Examples are: Solaris Containers (aka Solaris Zones), FreeBSD Jails, AIX Workload Partitions (aka WPARs), and Linux OpenVZ.

How is Docker different from other container technologies?

Well, Docker is quite a fresh project. It was created in the era of cloud, so a lot of things are done much more nicely than in other container technologies. The team behind Docker looks to be full of enthusiasm, which is of course very good. I am not going to list all the features of Docker here, but I will mention those which are important to me. Docker can run on any infrastructure: you can run Docker on your laptop, or you can run it in the cloud. Docker has a Container Hub, a repository of containers which you can download and use. You can even share containers with your applications. Docker is quite well documented.

Difference between Docker Image and container?

Docker container is the runtime instance of docker image. Docker Image does not have a state, and its state never changes as it is just set of files whereas docker container has its execution state.

What is the use case for Docker?

Well, I think, docker is extremely useful in development environments. Especially for
testing purposes. You can deploy and re-deploy apps in a blink of an eye.
Also, I believe there are use cases where you can use Docker in production. Imagine you have some Node.js application providing some services on the web. Do you need to run full OS for this?
Ultimately, whether Docker is a good fit should be decided on a per-application basis. For some apps it can be sufficient, for others not.

How exactly are containers (Docker in our case) different from hypervisor virtualization (vSphere)? What are the benefits?

To run an application in a virtualized environment (e.g., vSphere), we first need to create a VM, install an OS inside and only then deploy the application. To run the same application in Docker, all you need is to deploy that application in Docker. There is no need for an additional OS layer. You just deploy the application with its dependent libraries; the Docker engine (kernel, etc.) provides the rest. This table from the official Docker website shows it in a quite clear way.

Another benefit of Docker, from my perspective, is speed of deployment. Let’s imagine a scenario:
ACME inc. needs to virtualize application GOOD APP for testing purposes.
Conditions are:
Application should run in an isolated environment.
Application should be available to be redeployed at any moment in a very fast manner.

Solution 1

In the vSphere world what we would usually do is:
Deploy an OS in a VM running on vSphere.
Deploy the application inside the OS.
Create a template.
Redeploy the template in case of need. Time of redeployment: around 5-10 minutes. Sounds great! Having the app up and running in an hour and then being able to redeploy it in 5 minutes.

Solution 2.

-Deploy Docker.
-Deploy the app GOODAPP in container.
-Redeploy the container with an app when needed.
Benefits: No need of deploying full OS for each instance of the application. Deploying a container takes seconds.

How did you become involved with the Docker project?

I came across Docker not long after Solomon open sourced it. I knew a bit about LXC
and containers (a past life includes working on Solaris Zones and LPAR on IBM hardware too), and so I decided to try it out.

I was blown away by how easy it was to use. My prior interactions with containers had left me with the feeling they were complex creatures that needed a lot of tuning and nurturing. Docker just worked out of the box.
Once I saw that, and then saw the CI/CD-centric workflow that Docker was building on top, I was sold.

Docker is the new craze in virtualization and cloud computing. Why are people so excited about it?

I think it's the lightweight nature of Docker combined with the workflow. It's fast, easy to use and a developer-centric DevOps-ish tool. Its mission is basically: make it easy to package and ship code. Developers want tools that abstract away a lot of the details of that process. They just want to see their code working. That leads to all sorts of conflicts with sysadmins when code is shipped around and turns out not to work somewhere other than the developer's environment. Docker works around that by making your code as portable as possible and making that portability user-friendly and simple.

What, in your opinion, is the most exciting potential use for Docker?
It’s the build pipeline. I mean I see a lot of folks doing hyper-scaling with containers, indeed you can get a lot of containers on a host, and they are blindingly fast. But that doesn’t excite me as much as people using it to automate their dev-test-build pipeline.

How is Docker different from standard virtualization?

Docker is operating-system-level virtualization. Unlike hypervisor virtualization, where virtual machines run on physical hardware via an intermediation layer (the hypervisor), containers instead run in user space on top of an operating system's kernel. That makes them very lightweight and very fast.

Do you think open source development has heavily influenced cloud technology?

I think open source software is closely tied to cloud computing. Both in terms of the
software running in the cloud and the development models that have enabled the cloud.
Open source software is cheap, it’s usually low friction both from an efficiency and a licensing perspective.

How do you think Docker will change virtualization and cloud environments?

Do you think cloud technology has a set trajectory, or is there still room for significant change?

I think there are a lot of workloads that Docker is ideal for, as I mentioned earlier, both in the hyper-scale world of many containers and in the dev-test-build use case. I fully expect a lot of companies and vendors to embrace Docker as an alternative form of virtualization on both bare metal and in the cloud.
As for cloud technology's trajectory, I think we've seen significant change in the last couple of years, and I think there will be a bunch more before we're done. There is also the question of OpenStack and whether it will succeed as an IaaS alternative or DIY cloud solution.

I think we’ve only touched on the potential for PAAS and there’s a lot of room for growth and development in that space. It’ll also be interesting to see how the capabilities of PAAS products develop and whether they grow to embrace or connect with consumer cloud-based products.

Can you give us a quick rundown of what we should expect from your Docker presentation at OSCON this year?

It's very much a crash-course introduction to Docker. It's aimed at developers and sysadmins who want to get started with Docker in a very hands-on way. We'll teach the basics of how to use Docker and how to integrate it into your daily workflow. Your bio says "for a real job" you're the VP of Services for Docker.

Do you consider your other open source work a hobby?

That's mostly a joke related to my partner. Like a lot of geeks, I'm often on my computer, tapping away at a problem or writing something. My partner jokes that I have two jobs: my "real" job and my open source job. Thankfully over the last few years, at places like Puppet Labs and Docker, I've been able to combine my passion with my paycheck.

Why is Docker the new craze in virtualization and cloud computing?

It's OSCON time again, and this year the tech sector is abuzz with talk of cloud infrastructure. One of the more interesting startups is Docker, an ultra-lightweight containerization app that's brimming with potential. I caught up with the VP of Services for Docker, James Turnbull, who'll be running a Docker crash course at the con. Besides finding out what Docker is anyway, we discussed the cloud, open source contributing and getting a real job.

Why do my services take 10 seconds to recreate or stop?

Compose stop attempts to stop a container by sending a SIGTERM. It then waits for a default timeout of 10 seconds. After the timeout, a SIGKILL is sent to the container to kill it forcefully. If you are waiting for this timeout, it means that your containers aren't shutting down when they receive the SIGTERM signal. There has already been a lot written about this problem of processes handling signals in containers.

To fix this issue, try the following:
-Make sure you're using the JSON form of CMD and ENTRYPOINT in your Dockerfile. For example use ["program", "arg1", "arg2"], not "program arg1 arg2". Using the string form causes Docker to run your process using bash, which doesn't handle signals properly. Compose always uses the JSON form, so don't worry if you override the command or entrypoint in your Compose file.
-If you are able, modify the application that you're running to add an explicit signal handler for SIGTERM.
-Set the stop_signal to a signal which the application knows how to handle:
web:
  build: .
  stop_signal: SIGINT
-If you can't modify the application, wrap the application in a lightweight init system (like s6) or a signal proxy (like dumb-init or tini). Either of these wrappers takes care of handling SIGTERM properly.
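The second suggestion, an explicit SIGTERM handler, can be sketched in Python; the handler here just flips a flag, where a real service would close connections and flush state:

```python
import os
import signal
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # A real service would close sockets and flush buffers here.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate `docker stop` delivering SIGTERM to this process.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)
print("clean shutdown:", shutting_down)
```

With a handler like this installed, the container exits promptly on SIGTERM instead of waiting out the 10-second timeout and being killed.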

How do I run multiple copies of a Compose file on the same host?

Compose uses the project name to create unique identifiers for all of a project's containers and other resources. To run multiple copies of a project, set a custom project name using the -p command line option or the COMPOSE_PROJECT_NAME environment variable.

Docker Container Interview Questions

What’s the difference between up, run, and start?

Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you’ll see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “ad-hoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created but were stopped. It never creates new containers.
Can I use JSON instead of YAML for my Compose file?

Yes. YAML is a superset of JSON, so any JSON file should be valid YAML. To use a JSON file with Compose, specify the filename to use, for example:
docker-compose -f docker-compose.json up
Should I include my code with COPY/ADD or a volume?

You can add your code to the image using the COPY or ADD directive in a Dockerfile. This is useful if you need to relocate your code along with the Docker image, for example when you're sending the code to another environment (production, CI, etc.).
You should use a volume if you want to make changes to your code and see them reflected immediately, for example when you're developing code and your server supports hot code reloading or live-reload.
There may be cases where you'll want to use both. You can have the image include the code using a COPY, and use a volume in your Compose file to include the code from the host during development. The volume overrides the directory contents of the image.

Where can I find example compose files?

There are many examples of Compose files on GitHub.
Compose documentation
-Installing Compose
-Get started with Django
-Get started with Rails
-Get started with WordPress
-Command line reference
-Compose file reference
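In the spirit of those examples, a tiny Compose file might look like this; the service names and images are placeholders for illustration:

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:15        # example image
```

Running `docker-compose up` in the directory containing this file builds the web image and starts both services together.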
Are you operationally prepared to manage multiple languages/libraries/repositories?

Last year, we encountered an organization that developed a modular application while allowing developers to “use what they want” to build individual components. It was a nice concept but a total organizational nightmare — chasing the ideal of modular design without considering the impact of this complexity on their operations.

The organization was then interested in Docker to help facilitate deployments, but we strongly recommended that this organization not use Docker before addressing the root issues. Making it easier to deploy these disparate applications wouldn’t be an antidote
to the difficulties of maintaining several different development stacks for long-term
maintenance of these apps.

Do you already have a logging, monitoring, or mature deployment solution?

Chances are that your application already has a framework for shipping logs and backing up data to the right places at the right times. To implement Docker, you not only need to replicate the logging behavior you expect in your virtual machine environment, but you also need to prepare your compliance or governance team for these changes.
New tools are entering the Docker space all the time, but many do not match the stability and maturity of existing solutions. Partial updates, rollbacks, and other common deployment tasks may need to be reengineered to accommodate a containerized deployment.
If it’s not broken, don’t fix it. If you’ve already invested the engineering time required to
build a continuous integration/continuous delivery (CI/CD) pipeline, containerizing
legacy apps may not be worth the time investment.

Will cloud automation overtake containerization?

At AWS Re:Invent last month, Amazon chief technology officer Werner Vogels spent a
significant portion of his keynote on AWS Lambda, an automation tool that deploys
infrastructure based on your code. While Vogels did mention AWS’ container service,
his focus on Lambda implies that he believes dealing with zero infrastructure is
preferable to configuring and deploying containers for most developers.
Containers are rapidly gaining popularity in the enterprise, and are sure to be an
essential part of many professional CI/CD pipelines. But as technology experts and
CTOs, it is our responsibility to challenge new methodologies and services and properly
weigh the risks of early adoption. I believe Docker can be extremely effective for
organizations that understand the consequences of containerization — but only if you
ask the right questions.

You say that Ansible can take up to 20x longer to provision, but why?
Docker uses a cache to speed up builds significantly. Every command in a Dockerfile is built in another Docker container, and its result is stored in a separate layer. Layers are built on top of each other.
Docker scans the Dockerfile and tries to execute each step one after another; before executing a step, it probes whether that layer is already in the cache. When a cache is hit, the building step is skipped, and from the user's perspective it is almost instant.
When you build your Dockerfile in a way that the most frequently changing things, such as the application source code, are at the bottom, you will experience near-instant builds.
You can learn more about caching in docker in this article.
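As a sketch of that ordering, a Dockerfile can install rarely changing dependencies first and copy the frequently changing source code last; the base image, paths, and commands below are illustrative assumptions, not the article's own build:

```dockerfile
# Sketch: order instructions from least to most frequently changed,
# so earlier layers stay cached across rebuilds. Names are illustrative.
FROM python:3

# Dependencies change rarely; this layer is usually a cache hit.
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Application source changes often; keep it at the bottom so only
# the layers from here down are rebuilt on each code change.
COPY . /app
CMD ["python", "/app/main.py"]
```

With this layout, editing a source file invalidates only the final COPY layer, while the pip install layer is served from cache.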

Another way to build Docker images amazingly fast is to use a good base image, which you specify in the FROM instruction. You can then make only the necessary changes rather than rebuilding everything from scratch, so the build is quicker. This is especially beneficial on a host without a cache, such as a Continuous Integration server.
Summing up, building Docker images with a Dockerfile is faster than provisioning with Ansible because of the Docker cache and good base images. Moreover, you eliminate provisioning by using ready-to-run, preconfigured images such as postgres:

$ docker run --name some-postgres -d postgres

No installing Postgres at all: it's ready to run.
Also, you mention that Docker allows multiple apps to run on one server.
It depends on your use case. You should probably split different components into
separate containers; it will give you more flexibility.
Docker is very lightweight and running containers is cheap, especially if you store them
in RAM. It's even possible to spawn a new container for every HTTP request, although
that is not very practical.
At work, I develop using a set of five different types of containers linked together.
In production, some of them are replaced by real machines or even clusters of machines;
however, the application-level settings don't change.
Here you can read more about linking containers.
This is possible because everything communicates over the network. When you specify
links in the docker run command, Docker bridges the containers and injects environment
variables with the IPs and ports of the linked children into the parent container.
This way, in my app's settings file, I can read those values from the environment. In
Python it would be:

import os
VARIABLE = os.environ.get('VARIABLE')
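Building on that snippet, the lookup can fall back to a sensible default when the variable is absent, which is handy for local development outside Docker. This is a sketch, not the article's code: the variable name mimics the style that legacy `docker run --link` injects and is illustrative only.

```python
import os

# Sketch: read a linked container's address from the environment, with a
# fallback for running outside Docker. DB_PORT_5432_TCP_ADDR mimics the
# naming style injected by legacy `--link`; it is an illustrative name.
def database_host(environ=os.environ):
    return environ.get("DB_PORT_5432_TCP_ADDR", "localhost")

print(database_host({"DB_PORT_5432_TCP_ADDR": "172.17.0.2"}))  # 172.17.0.2
print(database_host({}))  # localhost
```

Passing a dict instead of `os.environ` also makes the function easy to test without a running container.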
There is a tool which greatly simplifies working with docker containers, linking included.
It’s called fig, and you can read more about it here.
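For a feel of what fig does (fig was later absorbed into Docker Compose), a minimal fig.yml that links a web container to a database might look like the sketch below; the service names, ports, and build context are illustrative assumptions:

```yaml
# Sketch of a fig.yml; fig later became Docker Compose.
# Service names, ports, and build context are illustrative.
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
```

Running `fig up` from the directory containing this file starts both containers and wires the link, replacing the manual docker run invocations shown earlier.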

Some of the popular Docker interview questions are:

  • What is Docker?
  • What is the difference between Docker image and Docker container?
  • How will you remove an image from Docker?
  • How is a Docker container different from a hypervisor?
  • Can we write a Compose file in JSON instead of YAML?
  • Can we run multiple apps on one server with Docker?
  • What are the common use cases of Docker?
  • What are the main features of Docker-compose?
  • What is the most popular use of Docker?
  • What is the role of open source development in the popularity of Docker?
  • What is the difference between Docker commands: up, run and start?
  • What is Docker Swarm?
  • What are the features of Docker Swarm?
  • What is a Docker Image?
  • What is a Docker Container?
  • What is Docker Machine?
  • Why do we use Docker Machine?
  • How will you create a Container in Docker?
  • Do you think Docker is Application-centric or Machine-centric?
  • Can we lose our data when a Docker Container exits?
  • Can we run more than one process in a Docker container?
  • What are the objects created by Docker Cloud in Amazon Web Services (AWS) EC2?
  • How will you take backup of Docker container volumes in AWS S3?
  • What are the three main steps of Docker Compose?
  • What is Pluggable Storage Driver architecture in Docker based containers?
  • What is Docker Hub?
  • What are the main features of Docker Hub?
  • What are the main security concerns with Docker based containers?
  • What are the security benefits of using Container based system?
  • How can we check the status of a Container in Docker?
  • What are the main benefits of using Docker?
  • How does Docker simplify Software Development process?
  • What is the basic architecture behind Docker?
  • What are the popular tasks that you can do with Docker Command line tool?
  • What type of applications, stateless or stateful, is more suitable for a Docker container?
  • How can Docker run on different Linux distributions?
  • Why do we use Docker on top of a virtual machine?
  • How can Docker container share resources?
  • What is the difference between Add and Copy command in a Dockerfile?
  • What is Docker Entrypoint?
  • What is ONBUILD command in Docker?
  • What is Build cache in Docker?
  • What are the most common instructions in Dockerfile?
  • What is the purpose of EXPOSE command in Dockerfile?
  • What are the different kinds of namespaces available in a Container?
  • How will you monitor Docker in production?
  • What are the Cloud platforms that support Docker?
  • How can we control the startup order of services in Docker compose?
  • Why does Docker Compose not wait for a container to be ready before starting the next
    service in dependency order?
  • How will you customize Docker compose file for different environments?