1. What is the need for DevOps?
Companies need a higher deployment frequency, a lower failure rate for new releases, shorter lead time between fixes, and a faster mean time to recovery when a new release crashes. DevOps addresses all of these requirements and helps achieve seamless software delivery. You can also cite examples of companies that have embraced DevOps to reach levels of performance that were unattainable only a few years ago.
2. What are the advantages of DevOps?
- Continuous software delivery
- Less complexity to manage
- Faster resolution of problems
- Faster delivery of features
- More stable operating environments
- More time available to add value (rather than fix/maintain)
4. What is Test Driven Development (TDD)?
Test Driven Development is a major agile and DevOps practice. It supports quick iterations and continuous integration, with test cases forming the core of the development process. Problems are found rapidly, which helps control risk in a graceful manner.
In TDD, you first write a failing test case and only then write the code that makes it pass. More than a testing methodology, it serves as a design and development methodology.
5. What are Design Patterns?
Design Patterns benefit developers by providing well-proven solutions to problems they commonly face. They capture the best practices used by experienced developers, and even an inexperienced developer can learn from them easily and rapidly.
There are three major types of Design Patterns:
- Creational: address object-creation mechanisms
- Structural: deal with the composition of classes and objects and the relationships between them
- Behavioral: describe how objects interact and communicate with each other
6. What are the foundational pillars of DevOps Testing?
The major pillars of DevOps testing are:
- Test early and often, and proactively verify production readiness every time
- Make use of proven technologies and patterns
- Apply the appropriate amount of testing rigor
7. What is white box and black box testing?
In black box testing, the tester is not aware of the internal structure of the application. In white box testing, the tester knows the design and internal workings of the product.
White box testing is performed at the unit and component levels of testing; black box testing is performed at the system and acceptance levels.
White box testing requires knowledge of programming languages, whereas black box testing may or may not require programming skills.
8. Is Selenium a good testing tool?
Yes. In fact, it is regarded as one of the most efficient tools for DevOps-based operations. It is open-source and free, supports different browsers, supports distributed testing, and has a strong community.
9. How is DevOps different from Agile?
Agile is a set of principles for producing software. However, software built that way might only run on a developer’s laptop or in a test environment. A way is needed to move that software into production infrastructure rapidly, repeatably, and safely, and that is where DevOps tools and techniques come in.
Agile software development methodology concentrates on developing the software. DevOps, on the other hand, covers both developing and deploying the software in the safest and most dependable manner.
10. Which are the top DevOps tools? Which tools have you worked on?
The most famous DevOps tools are listed here:
- Git: Version Control System tool
- Selenium: Continuous Testing tool
- Jenkins: Continuous Integration tool
- Puppet, Chef, Ansible: Configuration Management and Deployment tools
- Docker: Containerization tool
- Nagios: Continuous Monitoring tool
For the second question, answer with the tools you have actually worked on.
11. What are the best practices for DevOps implementation?
- DevOps implementation differs from one company to another. Nowadays, companies are trying to deliver software faster.
- Every company has a vision and goals; the DevOps implementation should align with them, and the change should be well understood.
- Communication and coordination should be encouraged, especially between development and operations.
- Automation is the core element and should be applied meticulously across the SDLC stages.
- CI and CD practices are the key factors of DevOps: continuously integrating code, continuously testing it, and practicing continuous delivery are essential.
- Cultivate the habit of collecting feedback from end users. This drives continuous improvement, which is a major force for enhancing the process and delivering quality software.
13. What are the key components of DevOps?
- Continuous Integration
- Continuous Testing
- Continuous Delivery
- Continuous Monitoring
14. Explain Continuous Integration.
Continuous Integration is essential to the Agile process. Developers typically work on features within a sprint and commit their changes to the version control repository.
Once code is committed, the combined work of all developers is integrated, and a build runs frequently, triggered by every check-in or on a schedule. Continuous integration gives developers early feedback.
15. Explain Continuous Delivery.
As an extension of Continuous Integration, Continuous Delivery helps get the output of the development process to end users as early as possible. The process moves through several stages such as QA and Staging before the release is delivered to the production system.
16. Explain Continuous Testing.
The goal of Continuous Integration described above, getting the application out to end users, is what primarily enables continuous delivery. This cannot be achieved without a sufficient amount of unit testing and automated testing.
Hence, we need to continuously validate that the code produced and integrated by all the developers performs as required.
17. Explain Continuous Monitoring.
An application's performance needs to be monitored as it is developed and deployed. Monitoring is essential because it surfaces defects that might otherwise have been missed.
18. What is Version control?
Version Control is a system that records changes to a file or set of files over time, so that specific versions can be recalled at a later point.
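As a minimal sketch with Git (file names and commit messages are illustrative), recording and recalling versions looks like:

```shell
# Create a repository and record two versions of a file
git init demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo "version 1" > app.txt
git add app.txt && git commit -m "Add app.txt"
echo "version 2" > app.txt
git commit -am "Update app.txt"
git log --oneline                # lists both recorded versions
git checkout HEAD~1 -- app.txt   # recall the earlier version of the file
```

The last command restores the file as it was one commit ago, which is exactly the "recall specific versions" capability described above.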
19. What are containers?
Containers are a form of lightweight virtualization. They provide isolation between processes while sharing the host operating system's kernel, which makes them far lighter than full virtual machines.
20. Describe two-factor authentication.
Two-factor authentication is a security process in which the user provides two means of identification, drawn from separate categories of credentials, for example something they know (a password) and something they have (a phone).
21. How would you explain the concept of “infrastructure as code” (IaC)?
Infrastructure as Code is also sometimes termed programmable infrastructure. As the name implies, the infrastructure is treated the same way as any other code.
Rather than manually making configuration changes or using one-off scripts for infrastructure adjustments, the operations infrastructure is managed using the same rules and structures that govern code development.
DevOps best practices, including version control, continuous monitoring, and virtualized tests, are applied to the underlying code that manages the creation and control of your infrastructure.
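As a hedged sketch in Ansible terms (the host group and package names are illustrative assumptions), infrastructure state expressed as version-controlled code might look like:

```yaml
# playbook.yml - declares the desired state; running it repeatedly
# converges the servers to that state instead of hand-editing them
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Because this file lives in version control, infrastructure changes get the same review, history, and rollback as application code.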
22. Discuss your experience building bridges between IT Ops, QA, and development.
The essence of DevOps is effective communication and collaboration. Talk about how you have handled production concerns from both the operations and development sides, and explain how you worked toward a shared vision instead of engaging in the blame game.
23. How do you expect you would be required to multitask as a DevOps professional?
- Bridge communication gaps between the Development and Operations teams.
- Understand system design from an architect's perspective, software development from a developer's perspective, and operations and infrastructure from an experienced systems administrator's perspective.
- Execute: be in a position to actually carry out what needs to be done.
24. What are the various phases in the lifecycle of DevOps?
The lifecycle of DevOps contains the following phases:
- Plan: Plan the type of application to be developed
- Code: Develop code as per the project and end-user requirements
- Build: Build the application by integrating the code
- Test: Test the developed application and rebuild if required
- Integrate: Integrate code from multiple developers
- Deploy: Deploy the code to a cloud platform for further usage
- Operate: Operate the deployed application, with operations teams involved as necessary
- Monitor: Monitor the application's performance and ensure that the changes made to it meet the client requirements
25. Define configuration management in DevOps
Configuration management enables the management of multiple systems and standardizes the resource configurations that make up the IT infrastructure. It helps administer two or more servers consistently and maintains the integrity of the entire infrastructure.
26. Explain the role of AWS in DevOps
AWS plays the following roles in DevOps:
- Flexible services that provide ready-to-use solutions without the need to install or configure software
- Built for scale: manage a single instance or scale to thousands of instances using AWS services
- Automation of tasks and processes, leaving more time to innovate
- A secured environment that uses AWS Identity and Access Management (IAM) to set user policies and permissions
- A large partner ecosystem that integrates with and extends AWS services
27. What are the important KPIs of DevOps?
There are three important KPIs: mean time to failure recovery, deployment frequency, and percentage of failed deployments. Mean time to failure recovery is the average time taken to recover from a failure. Deployment frequency is how often deployments occur. Percentage of failed deployments is the proportion of deployments that fail.
28. Define Git Stash
When a developer wants to switch branches without committing the unfinished work on the current branch, Git stash is the answer. It takes the modified tracked files and stores them away, so that the unfinished work can be reapplied at any time.
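A minimal sketch of the stash workflow (the repository, branch, and file names are illustrative):

```shell
# Set up a repo with one commit and some unfinished work
git init stash-demo && cd stash-demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo "stable" > app.txt
git add app.txt && git commit -m "Initial commit"
echo "half-finished change" >> app.txt   # unfinished work on a tracked file

git stash                # store the modified tracked file; working tree is clean
git checkout -b hotfix   # now free to switch branches
git checkout -           # return to the original branch
git stash pop            # reapply the unfinished work
```

`git stash list` shows the saved stashes at any time; `pop` reapplies the most recent one and removes it from that list.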
29. Define Jenkinsfile
Jenkinsfile is a text file that contains the definition of a Jenkins pipeline and is checked into the source control repository. It enables code review of and iteration on the pipeline, provides an audit trail, and serves as a single source of truth for the pipeline that can be viewed and edited.
30. What are the key aspects of the Jenkins pipeline?
The key aspects of the Jenkins pipeline are pipeline, node, step, and stage.
- A pipeline is a user-defined model of a continuous delivery pipeline. Its code defines the entire process, including building, testing, and delivering the application.
- A node is a machine that is part of the Jenkins environment and is capable of executing a pipeline.
- A step is a single task that tells Jenkins what to do at a particular point in time.
- A stage is a conceptually distinct subset of the tasks performed through the pipeline; Build, Test, and Deploy are common stages.
31. What are the two types of Jenkins Pipeline? Explain with Syntax.
Scripted Pipeline and Declarative Pipeline are the two types of Pipeline in Jenkins.
Scripted Pipeline is written in Groovy, which serves as its domain-specific language. One or more node blocks do the core work throughout the pipeline:
- Execute the pipeline on any available agent
- Initiate the build stage
- Perform the steps of the build stage
- Initiate the test stage
- Perform the steps of the test stage
- Initiate the deploy stage
- Perform the steps of the deploy stage
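The scripted steps above can be sketched as a Jenkinsfile (the stage bodies are placeholder assumptions):

```groovy
// Scripted Pipeline: Groovy code wrapped in a node block
node {
    stage('Build') {
        echo 'Building..'      // steps of the build stage
    }
    stage('Test') {
        echo 'Testing..'       // steps of the test stage
    }
    stage('Deploy') {
        echo 'Deploying..'     // steps of the deploy stage
    }
}
```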
Declarative Pipeline provides a friendlier, simplified syntax for defining a pipeline. The work is defined within a top-level pipeline block:
- Execute the pipeline on any available agent
- Declare the build stage
- Perform the steps of the build stage
- Declare the test stage
- Perform the steps of the test stage
- Declare the deploy stage
- Perform the steps of the deploy stage
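The equivalent declarative Jenkinsfile (again with placeholder stage bodies) looks like:

```groovy
// Declarative Pipeline: work is defined inside a top-level pipeline block
pipeline {
    agent any                  // run on any available agent
    stages {
        stage('Build') {
            steps { echo 'Building..' }
        }
        stage('Test') {
            steps { echo 'Testing..' }
        }
        stage('Deploy') {
            steps { echo 'Deploying..' }
        }
    }
}
```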
32. How can a build stage be scheduled or run in Jenkins?
A build can be scheduled and executed in Jenkins in four ways: triggered by source control management commits, triggered after other builds complete, scheduled to execute at a specified time, or requested as a manual build.
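For the scheduled and commit-triggered cases, a Declarative Pipeline can declare triggers (the cron expressions here are illustrative assumptions):

```groovy
pipeline {
    agent any
    triggers {
        cron('H 2 * * 1-5')      // build around 02:00, Monday through Friday
        pollSCM('H/5 * * * *')   // poll source control roughly every 5 minutes
    }
    stages {
        stage('Build') {
            steps { echo 'Scheduled build' }
        }
    }
}
```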
33. Why should an SSL certificate be used in Chef Tool?
SSL certificates are used between the Chef server and its clients to ensure that each node can access the proper data. Each node has a public/private key pair, and the public key is stored on the Chef server. When a node contacts the server, its request is signed with the node's private key; the server uses the stored public key to verify the node's identity and then provides access to the required data.
34. What are the resources available in Puppet?
Resources are the basic units of configuration management in Puppet. A resource describes some aspect of a node, such as a software package, a service, a user, or a file. Resources are declared in manifests and compiled into a catalog, and Puppet then performs the actions required to bring each resource to the desired state described in the catalog.
35. Define an Ansible role
An Ansible role is an independent, reusable block of tasks, templates, and files that can be embedded inside a playbook.
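A role is conventionally laid out as a directory tree and then referenced from a playbook (the role name `webserver` is an illustrative assumption):

```
roles/
  webserver/
    tasks/main.yml      # the role's task list
    handlers/main.yml   # handlers notified by tasks
    templates/          # Jinja2 templates
    files/              # static files to copy
    defaults/main.yml   # default variables
```

```yaml
# site.yml - embed the role in a play
- hosts: webservers
  roles:
    - webserver
```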
36. How can you make content reusable and redistributable?
There are three different ways to make content reusable and redistributable in Ansible:
- Roles group tasks, files, and templates in the playbook so they can be easily shared and accessed, for example through Ansible Galaxy.
- "include" adds a submodule or another file to a playbook dynamically, so one playbook can pull in multiple playbooks.
- "import" adds a file statically; it is processed once when the playbook is parsed, which avoids re-executing the same lines repeatedly.
37. What is the architecture of Docker?
- Docker uses a client-server architecture.
- The Docker client is used to run commands, which are translated through a REST API and sent to the Docker daemon.
- The Docker daemon (server) accepts the requests and communicates with the OS to build Docker images and run Docker containers.
- A Docker image is a template of instructions used to create containers.
- A Docker container is an executable package of an application along with its dependencies.
- A Docker registry is a service that hosts Docker images and distributes them among users.
38. List out the advantages of Docker
The notable advantages of Docker are as follows:
- Occupies less memory space
- Requires a short boot-up time
- Performs better than virtual machines, since containers share a single Docker engine
- Scaling Docker containers is easy and simple
- Docker provides high efficiency
- Docker is portable across different platforms
- Docker data volumes can be shared and reused across multiple containers
39. How to execute multiple containers using a single service?
Docker Compose makes it possible to run multiple containers as a single service. Each container runs in isolation but can communicate with the others. Docker Compose files are written in YAML.
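A minimal sketch of a `docker-compose.yml` (the image names, ports, and password are illustrative assumptions):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine      # one container of the service
    ports:
      - "8080:80"
    depends_on:
      - db                   # containers reach each other by service name
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```

`docker-compose up` starts both containers together as one service, and `docker-compose down` stops and removes them.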
40. What is the use of Dockerfile?
- A Dockerfile is used to create Docker images via the build command
- From a Docker image, any user can run the code to create Docker containers
- Once a Docker image is built, it can be uploaded to a Docker registry
- From the Docker registry, users can pull the Docker image and create new containers as required
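A minimal illustrative Dockerfile for a Python application (the file names `requirements.txt` and `app.py` are assumptions):

```dockerfile
# Base image
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code
COPY . .
# Default command when a container starts
CMD ["python", "app.py"]
```

`docker build -t myapp .` creates the image from this file, and `docker run myapp` then starts a container from it.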
41. How to create Docker Container?
The user can build a Docker image or pull an existing image from Docker Hub. Docker then creates a new container, for example from the existing mysql image; at the same time, a writable container layer is created on top of the image layers in the file system.
- The command for creating and starting the container: docker run -t -i mysql
- The command for listing the running containers: docker ps
42. Define Nagios Network Analyzer
Nagios Network Analyzer provides an in-depth look at network traffic sources and security threats.
- It offers a central view of bandwidth data and network traffic
- It enables system admins to collect high-level information on network health
- It allows users to be proactive about abnormal behavior, outages, and threats that affect critical business processes
43. Explain the active and passive checks in Nagios
Active checks in Nagios work as follows:
- Active checks are initiated by the check logic in the Nagios daemon
- Nagios runs a plugin, passing it information about what should be checked
- The plugin checks the operational state of the host or service and reports the result back to the Nagios daemon
- Nagios processes the result and sends notifications as required
Passive checks in Nagios work as follows:
- Passive checks are initiated by external applications that check the status of a host or service
- The results are written to the external command file
- Nagios reads the external command file and places the results of passive checks into a queue for later processing
- Nagios sends alerts and notifications according to the check result information