The first part of this study evaluated the performance of the vSphere Integrated Containers Engine and introduced the concepts of Container, Image, and Volume, along with Runtime and Package as they relate to containers. Part II explored different container deployment scenarios. This final part describes how the vSphere Integrated Containers Engine functions.
The vSphere Integrated Containers Engine builds on the portability of the Docker image format to present itself as an enterprise deployment platform. Developers build container images on one system and push them to a registry. The images are then tested on another system and approved for production. The vSphere Integrated Containers Engine can then pull those images from the registry and deploy them to vSphere.
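This build-push-pull workflow can be sketched with standard Docker client commands; the registry address, image name, and VCH endpoint address below are placeholders for illustration, not values from the text:

```shell
# Developer machine: build the image and push it to a shared registry
# (registry.example.com and myapp:1.0 are hypothetical names)
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# Against the VCH endpoint: pull the approved image from the registry
# and run it as a container VM (vch1.example.com is a placeholder)
docker -H vch1.example.com:2376 --tls pull registry.example.com/myapp:1.0
docker -H vch1.example.com:2376 --tls run -d registry.example.com/myapp:1.0
```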
vSphere Integrated Containers Engine Concepts
If we draw a Venn diagram in which vSphere functionality is one circle and Docker functionality is the other, there is a great deal of overlap between the two. The purpose of the vSphere Integrated Containers Engine is to extend the vSphere circle as far as possible, adding Docker capabilities that vSphere lacks while reusing as much Docker code as possible. The engine preserves the portability of the Docker image format and is designed to be completely transparent to a Docker client. The key concepts and components that make this possible are described below.
Container Virtual Machines
Container virtual machines created by the vSphere Integrated Containers Engine have all the characteristics of a software container:
- A storage layer that can optionally attach a number of persistent volumes.
- A minimal Linux guest designed to act only as a kernel, requiring a container image in order to become a functional system.
- A mechanism for attaching read-only image layers.
- A PID 1 process, known as the Tether, that extends control into the container virtual machine.
- Various well-defined mechanisms for configuration and for data ingress and egress.
- Automatic configuration for all types of network topologies.
A provisioned container VM does not contain a full operating system. Instead:
- The container VM boots from an ISO that contains the Photon Linux kernel. Note that container VMs do not run the full Photon OS.
- The container VM is configured with a container image that is mounted as a disk.
- Container image layers are presented as a hierarchy of read-only disk snapshots, and changes are stored in the topmost layer of this hierarchy.
- Container volumes are formatted VMDKs that are attached as disks and stored on a datastore.
- Networks are vSphere port groups to which the container VMs connect through vNICs.
Virtual Container Host
A Virtual Container Host (VCH) is the functional equivalent of a Linux VM running Docker, with some significant advantages. A VCH comprises the following elements:
- A clustered pool of resources into which container VMs are provisioned.
- A single-tenant container namespace.
- An isolated Docker API endpoint.
- Authorization to use and configure a pre-approved virtual infrastructure.
- A private network to which containers are connected by default.
If a VCH is deployed to a vCenter Server cluster, it spans all the hosts in the cluster and benefits from the flexibility and dynamic use of server resources that the cluster normally provides.
A VCH differs fundamentally from a traditional container host in the following respects:
- A VCH gains clustering and dynamic scheduling naturally by provisioning container VMs to vSphere targets.
- Resource constraints can be reconfigured dynamically with no impact on running containers.
- Containers do not share a kernel.
- There is no local image cache; images are stored on a datastore in the cluster, specified at VCH deployment time.
- There is no shared read-write storage.
A VCH is a multi-functional appliance deployed as a vApp in a vCenter Server cluster or as a resource pool on an ESXi host. The vApp or resource pool provides a useful parent-child relationship in the vSphere Web Client, making it easy to identify the container VMs provisioned into a VCH. Resource limits can also be set on the vApp. Multiple VCHs can be deployed in a single resource pool or vCenter Server cluster.
VCH Endpoint VM
The VCH Endpoint VM is a virtual machine that runs inside the vApp or resource pool. There is a 1:1 relationship between a VCH and a VCH Endpoint VM. The VCH Endpoint VM performs the following functions:
- Runs the services that the VCH requires.
- Provides a secure remote API to the client.
- Receives Docker commands and translates them into vSphere API and infrastructure requests.
- Provides network forwarding so that container-facing ports can be exposed on the VCH Endpoint VM and containers can reach public networks.
- Manages the container lifecycle, including image storage, volume storage, and container state.
- Provides logging and monitoring for its own services and for containers.
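Because the endpoint VM exposes a standard Docker API, an ordinary Docker client can target it directly. The following sketch illustrates this; the hostname and port mapping are placeholder values, not taken from the text:

```shell
# Point a standard Docker client at the VCH endpoint (address is a placeholder)
export DOCKER_HOST=vch1.example.com:2376

# Commands sent here are translated by the endpoint VM into vSphere operations
docker --tls info
docker --tls pull nginx
# -p exposes a container port via the endpoint VM's network forwarding
docker --tls run -d -p 8080:80 nginx
```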
The VCH Endpoint VM lifecycle is managed by a tool called vic-machine.
Introducing the vic-machine Tool
The vic-machine tool is a command-line utility for Windows, Linux, and OSX that manages the lifecycle of VCHs and is designed for use by vSphere administrators. vic-machine takes compute, network, and storage resources and vSphere user credentials as input, and produces a VCH as output. The tool has the following additional capabilities:
- Creates certificates for Docker Client TLS authentication.
- Verifies that the prerequisites for deploying a VCH are met on a cluster or host, for example correctly configured firewalls, licenses, and so on.
- Configures existing VCHs for debugging.
- Lists and deletes VCHs.
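A typical VCH lifecycle driven by vic-machine might look like the following sketch. The target address, credentials, datastore, network, and VCH name are placeholders, and the exact flags should be confirmed against the vic-machine documentation for the release in use:

```shell
# Create a VCH (all addresses, names, and credentials are placeholders)
vic-machine-linux create \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --name vch1 \
  --image-store datastore1 \
  --bridge-network vic-bridge \
  --no-tlsverify

# List the VCHs deployed on the target
vic-machine-linux ls \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local'

# Delete the VCH and its container VMs when no longer needed
vic-machine-linux delete \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --name vch1
```

Equivalent binaries exist per platform (for example, vic-machine-windows and vic-machine-darwin), matching the tool's stated Windows/Linux/OSX support.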