Finally, we describe the storage network subnet. Continue editing the file to include any IP addresses that are already used by existing physical hosts in the environment where OpenStack will be deployed, ensuring that you also include any IP addresses reserved for physical growth.
Include the addresses we have already configured leading up to this section. Single IP addresses, or ranges written as a start and end address either side of a comma, can be placed here. For the example architecture used in this book, output similar to the sketch below can be used. The remaining section of this file describes which server each service runs from. Most of the sections repeat, differing only in the name of the service. As these particular example services run on our controller nodes, and in a production setting there are at least three controllers, you can quickly see why this information repeats.
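As an illustrative sketch only (the subnets, reserved ranges, and host names here are assumptions, not values from your environment), the used IP reservations and repeating controller entries might look like this:

used_ips:
  - "172.29.236.1,172.29.236.50"    # addresses already taken, plus growth reservations
  - "172.29.244.1,172.29.244.50"    # storage network reservations
shared-infra_hosts:                 # repeated once per controller node
  controller-01:
    ip: 172.29.236.11
  controller-02:
    ip: 172.29.236.12
  controller-03:
    ip: 172.29.236.13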
Other sections refer specifically to other services, such as OpenStack Compute. Save and exit the file. We will now need to generate some random passphrases for the various services that run in OpenStack. Each service, such as Nova, Glance, and Neutron (described throughout the book), has to authenticate with Keystone and be authorized to act as a service. To do so, their own user accounts need to have passphrases generated.
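A sketch of this step, assuming the OpenStack-Ansible checkout and deployment directories are in their default locations (/opt/openstack-ansible and /etc/openstack_deploy):

cd /opt/openstack-ansible
python scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml   # fills in the empty passphrase entries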
Finally, there is another file that allows you to fine-tune the parameters of the OpenStack services, such as which backing store Glance (the OpenStack Image service) will use, as well as configure proxy services ahead of the installation. In a typical, highly available deployment—one in which we have three controller nodes—we need to configure Glance to use a shared storage service so that each of the three controllers has the same view of a filesystem, and therefore of the images used to spin up instances.
We can even allow a private cloud environment to connect out over a public network to a public service such as Rackspace Cloud Files. View the other commented-out details in the file to see if they need editing to suit your environment, then save and exit.
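As an illustration only, pointing Glance at a Swift backing store in this fine-tuning file might look like the following; the variable names shown come from the OpenStack-Ansible Glance role and should be verified against your release:

glance_default_store: swift            # keep images in Swift instead of on local controller disk
glance_swift_store_region: RegionOne   # region of the (possibly external) Swift endpoint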
You are now ready to start the installation of OpenStack! Ansible takes a set of configuration files that playbooks (a defined set of steps that get executed on the servers) use to control how they are executed. For OpenStack-Ansible, configuration is split into two areas: describing the physical environment and describing how OpenStack is configured.
Look at the following diagram to see the different networks and subnets. The load balancer also takes an IP address from this range. Each container and compute host that participates in the VXLAN tunnel gets an IP address from this range. The VXLAN tunnel is used when an operator creates a Neutron subnet that specifies the vxlan type, which creates a virtual network over this underlying subnet.
Do not adjust this section. There are three main playbooks in total that we will be using: setup-hosts.yml, setup-infrastructure.yml, and setup-openstack.yml. The first step is to run a syntax check on your scripts and configuration. As we will be executing three playbooks, we run the check against each of them, as shown below. Once the checks pass, execute the first playbook: openstack-ansible setup-hosts.yml. If there are issues, review the output by scrolling back through it and watch out for anything printed in red. Refer to the Troubleshooting the installation recipe further on in this chapter.
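Assuming the playbooks are run from the usual /opt/openstack-ansible/playbooks directory, the syntax checks look like this:

cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml --syntax-check
openstack-ansible setup-infrastructure.yml --syntax-check
openstack-ansible setup-openstack.yml --syntax-check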
If all is OK, we can proceed to run the next playbook for setting up load balancing. At this stage, it is important that the load balancer gets configured. OpenStack-Ansible installs the OpenStack services in LXC containers on each server, and so far we have not explicitly stated which IP address on the container network will have that particular service installed.
This is because we let Ansible manage this for us. So while it might seem counter-intuitive to set up load balancing at this stage, before we know where each service will be installed, Ansible has already generated a dynamic inventory ahead of any future work, so it already knows how many containers are involved and which container will have each service installed. If you are using an F5 LTM, Brocade, or similar enterprise load balancing kit, it is recommended that you use HAProxy temporarily, then view the generated configuration and transfer it manually to the physical setup.
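A sketch of this step, using the HAProxy playbook that ships with OpenStack-Ansible:

openstack-ansible haproxy-install.yml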
If all is OK, we can proceed to run the next playbook, which sets up the shared infrastructure services, as follows: openstack-ansible setup-infrastructure.yml. This step takes a little longer than the first playbook. As before, inspect the output for any failures. At this stage, we should have a number of containers running on each infrastructure node (also known and referred to as controller nodes).
On some of these containers, such as the ones labelled Galera or RabbitMQ, we should see services running correctly, waiting for OpenStack to be configured against them. We can now continue the installation by running the largest of the playbooks—the installation of OpenStack itself. To do this, execute the following command: openstack-ansible setup-openstack.yml. This may take a while to run—potentially hours—so be prepared for this duration by ensuring your SSH session to the deployment host will not be interrupted, and safeguard against any disconnects by running the playbook in something like tmux or screen.
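Before continuing, it can be worth confirming the shared services are healthy. As one example (a common operational check rather than a required step), verify the Galera cluster from the deployment host:

# expect wsrep_cluster_size to match the number of controller nodes
ansible galera_container -m shell \
  -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"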
The first playbook, setup-hosts.yml, prepares the physical hosts and creates the LXC containers that the OpenStack services will be installed into. At this stage, Ansible knows where it will be placing all future services associated with OpenStack, so we use the dynamic inventory information to perform an installation of HAProxy and configure it for all the services used by OpenStack that are yet to be installed.
The next playbook, setup-infrastructure.yml, installs the shared infrastructure services (such as Galera and RabbitMQ) that OpenStack depends on. The final playbook, setup-openstack.yml, is the main event—the playbook that installs all the required OpenStack services we specified in the configuration.
This runs for quite a while, but at the end of the run you are left with an installation of OpenStack. The OpenStack-Ansible project provides a wrapper script, called openstack-ansible, around the ansible command that would ordinarily be used to execute playbooks. In essence, this ensures that the correct inventory and configuration information is passed to the ansible command to ensure the correct running of the OpenStack-Ansible playbooks.

Troubleshooting the installation

Ansible is a tool, written by people, that runs playbooks, written by people, to configure systems that would ordinarily be manually configured by people, and as such, errors can occur.
The end result is only as good as the input. Typical failures either occur quickly, such as connection problems, and are relatively self-evident, or occur after long-running jobs, possibly as a result of load or network timeouts. In any case, the OpenStack-Ansible playbooks provide an efficient mechanism for rerunning playbooks without having to repeat the tasks they have already completed.
When a playbook fails, Ansible writes out a retry file that simply lists the hosts that failed, so it can be referenced when running the playbook again. This targets the single host or small group of hosts that failed, which is far more efficient than rerunning the playbook against a large cluster of machines that completed successfully.

How to do it…

We will step through a problem that caused one of the playbooks to fail.
Note the failed playbook and then invoke it again with the following steps: 1. Now rerun that playbook, but specify the retry file: openstack-ansible setup-openstack.yml --limit @/root/setup-openstack.retry (adjust the path to wherever your retry file was written). In most situations, this will be enough to rectify the situation; however, OpenStack-Ansible has been written to be idempotent—meaning that the whole playbook can be run again, only modifying what it needs to.
Therefore, you can run the playbook again without specifying the retry file. Should there be a failure at this first stage, execute the following: 1. Rerun the setup-hosts.yml playbook. As each service gets installed in LXC containers, it is very easy to wipe an installation and start from the beginning.
To do so, carry out the following steps: 1. Destroy the containers (OpenStack-Ansible ships a lxc-containers-destroy.yml playbook for this purpose) and follow the on-screen prompts. 2. We recommend uninstalling the following package to avoid any conflicts with future runs of the playbooks, and also to clear out any remnants of containers on each host: ansible hosts -m shell -a "pip uninstall -y appdirs" 3. Rerun the playbooks from the beginning.
Sometimes failures occur in the environment due to SSH timeouts or some other transient problem, and despite Ansible trying its best to retry the execution of a task, the result might still be a failure. Failure in Ansible is quite obvious: it is usually indicated by red text output on the screen. In most cases, rerunning the offending playbook may get over a transient problem. Each playbook runs a specific set of tasks, and Ansible will state which task has failed.
Troubleshooting why that particular task failed will eventually lead to a good outcome. Worst case, you can reset your installation from the beginning.

Manually testing the installation

Once the installation has completed successfully, the first step is to test the install.
Testing OpenStack involves both automated and manual checks. Manual tests verify user journeys that may not normally be picked up through automated testing, such as ensuring Horizon is displayed properly. Automated tests can be invoked using a testing framework such as Tempest, or the OpenStack benchmarking tool, Rally.

Getting ready

Ensure that you are root on the first infrastructure controller node.

How to do it…

The installation of OpenStack-Ansible creates several utility containers on each of the infra nodes.
These utility containers provide all the command-line tools needed to try out OpenStack, using the command line of course. Carry out the following steps to get access to a utility container and run various commands in order to verify an installation of OpenStack manually: 1. First, view the running containers by issuing the following command: lxc-ls -f
As you can see, this lists a number of containers, because the OpenStack-Ansible installation uses isolated Linux containers for running each service. Beside each one, its IP address and running state are listed, including the addresses assigned from the container network. 2. Now attach to the utility container, as sketched below. You will then be running a terminal inside this container, with access only to the tools and services belonging to that container.
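A minimal way to attach, assuming the utility container's generated name contains the string utility (OpenStack-Ansible appends a random suffix to container names):

lxc-attach -n $(lxc-ls | grep utility)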
In this case, we have access to the required OpenStack clients. The first thing you need to do is source in your OpenStack credentials. The OpenStack-Ansible project writes out a generated bash environment file with an admin user and project that was set up during the installation.
Load this into your bash environment with the following command: source openrc Tip: you can also use the equivalent dot shorthand in Bash: . openrc
Now you can use the OpenStack CLI to view the services and status of the environment, as well as create networks and launch instances. A few handy commands are listed here:
openstack server list
openstack network list
openstack endpoint list
openstack network agent list

How it works…

The OpenStack-Ansible method of installing OpenStack installs the OpenStack services into isolated containers on our Linux servers.
On each of the controller or infra nodes are about 12 containers, each running a single service such as nova-api or RabbitMQ. You can view the running containers by logging into any of the servers as root and issuing a lxc-ls -f command. The -f parameter gives you a full listing showing the status of the instance such as whether it is running or stopped.
This container has the OpenStack client tools installed, which makes it a great place to start manually testing an installation of OpenStack. Each container has at least an IP address on the container network described in the example used in this chapter. You can SSH to the IP address of this container, or use another lxc command to attach to it: lxc-attach -n <name of container>
Once you have a session inside the container, you can use it like any other system, provided the tools you need are available within the restricted four walls of the container. To use OpenStack commands, however, you first need to source the resource environment file, which is named openrc.
This is a normal bash environment file that has been prepopulated during the installation and provides all the required credentials needed to use OpenStack straight away.

Modifying the OpenStack configuration

It would be ludicrous to think that all of the playbooks would need to run again for a small change such as adjusting the CPU contention (overcommit) ratio. So instead, the playbooks have been developed and tagged so that specific playbooks, associated with that particular project, can be run to reconfigure and restart the associated services to pick up the changes.
The following are the common changes and how they can be made using Ansible. As we are adjusting the configuration, all of these commands are executed from the same host you used to perform the installation. To make changes to the default Nova quotas, carry out steps like the following example.
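A sketch of this workflow, assuming the overrides live in /etc/openstack_deploy/user_variables.yml and that your OpenStack-Ansible release tags its Nova configuration tasks as nova-config (verify against your release's documentation):

# 1. add or edit the quota overrides in the deployment configuration
vi /etc/openstack_deploy/user_variables.yml
# 2. replay only the Nova configuration tasks; the affected services are restarted
cd /opt/openstack-ansible/playbooks
openstack-ansible os-nova-install.yml --tags nova-config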
Adjust the name of the service in the syntax used; for example, to change a configuration item affecting the neutron services, run the equivalent Neutron playbook with its configuration tag. As we were making configuration changes, we notified Ansible of this through the --tags parameter.

Virtual lab - vagrant up!

In an ideal world, each of us would have access to physical servers and the network kit needed to learn, test, and experiment with OpenStack. However, most of the time this isn't the case. Using an orchestrated virtual lab, built with Vagrant and VirtualBox, allows you to experience this chapter on OpenStack-Ansible using your laptop.
This is the architecture of the Vagrant-based OpenStack environment: essentially, there are three virtual machines (a controller node, a compute node, and a client machine), and each host has four network cards, plus an internal bridged interface used by VirtualBox itself. The authors of this book use macOS and Linux, with Windows as the host desktop being the least tested configuration. The virtual machines that provide the infra and compute nodes in this virtual environment are thin provisioned, so this requirement is just a guide depending on your use.
The faster the better, as the installation relies on downloading files and packages directly from the internet. How to do it… To run the OpenStack environment within the virtual environment, we need a few programs installed, all of which are free to download and use: VirtualBox, Vagrant, and Git. VirtualBox provides the virtual servers representing the servers in a normal OpenStack installation; Vagrant describes the installation in a fully orchestrated way; Git allows us to check out all of the scripts that we provide as part of the book to easily test a virtual OpenStack installation.
The following instructions describe an installation of these tools on Ubuntu Linux. We first need to install VirtualBox if it is not already installed. We recommend downloading the latest available releases of the software.
To do so on Ubuntu Linux as root, follow these steps: 1. We first add the virtualbox.org signing key and package repository to the system, and then refresh the apt cache. A sketch of these steps follows.
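These commands are a sketch; the key file name varies by VirtualBox release, so check the current values at https://www.virtualbox.org/wiki/Linux_Downloads:

echo "deb http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" \
  > /etc/apt/sources.list.d/virtualbox.list                                           # add the apt source
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | apt-key add -  # trust the signing key
apt update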
2. Now install VirtualBox with the following command: apt install virtualbox

Follow these steps to install Vagrant: 1. Download the Vagrant package from vagrantup.com; the version we want is the Debian 64-bit build. At the time of writing, this was a 2.x release. 2. We can now install the downloaded file with the following command: dpkg -i <downloaded vagrant .deb file>

Vagrant also needs two plugins. To install these, carry out the following steps: 1. Install vagrant-hostmanager using the vagrant tool: vagrant plugin install vagrant-hostmanager 2. Install vagrant-triggers using the vagrant tool: vagrant plugin install vagrant-triggers

If Git is not currently installed, issue the following commands to install it on an Ubuntu machine: apt update apt install git

Now that we have the required tools, we can use the OpenStackCookbook Vagrant lab environment to perform a fully orchestrated installation of OpenStack in a VirtualBox environment: 1. First, check out the lab environment, as sketched below.
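Assuming the lab lives in the book's GitHub organization under the vagrant-openstack name implied by the next step (verify the exact URL against the book's resources), the checkout would look like:

git clone https://github.com/OpenStackCookbook/vagrant-openstack.git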
2. We will change into the vagrant-openstack directory that was just created: cd vagrant-openstack 3. We can now orchestrate the creation of the virtual machines and installation of OpenStack using one simple command: vagrant up

Tip: This will take quite a while, as it creates the virtual machines and runs through all the same playbook steps described in this chapter.
It allows us to describe what virtual servers need to be created and, using Vagrant's provisioner, to run scripts once a virtual machine has been created.
Vagrant's environment file is called Vagrantfile. You can edit this file to adjust the settings of the virtual machine, for example, to increase the RAM or number of available CPUs. To retrieve the admin password, follow the steps given here and view the file named openrc.
There is a single controller node that has a utility container configured for use in this environment. Attach to it, and view the openrc file, with the following commands:
vagrant ssh controller
sudo -i
lxc-attach -n $(lxc-ls | grep utility)
cat openrc
Once you have retrieved the openrc details, copy these to your openstack-client virtual machine. From here you can operate OpenStack, mimicking a desktop machine accessing an installation of OpenStack using the command line. This means that there is a direct dependency on Python being available on the computer that will be running the clients.
To allow us to interact with any one of the three Controller nodes so that each of them can respond independently, we place these services behind a load balancer. There is one particular service that a user is interested in when configuring their environment for use with OpenStack and that is the Keystone service. It authorizes users to allow them to perform the actions they have requested, as well as provides a catalog of services back to the user.
This catalog is a mapping of the OpenStack service address endpoints. For this reason, we do not need to configure our client to know where to find each and every OpenStack service. When a user configures their environment to use OpenStack from the command line, the only information they need to be aware of is the IP address and port that Keystone has been installed on.
See the following diagram for how this conceptually looks to a user.

This chapter is intended as a quick reference guide for commands that are explained in more detail throughout the book. The following applies to Windows 10.

Getting ready

Ensure that you are logged into your desktop and have the following installed: PowerShell and Python 2.
Follow these instructions to ensure that Python is available in your system path, as well as to set the appropriate environment variables under PowerShell: 1. Open the System section of the Control Panel. 2. Next, choose Advanced system settings from the menu on the left. 3. Now, select Environment Variables from the Advanced tab of the System Properties window, as shown here. 4. Now add a new entry to the Path variable, as shown as follows.
We assume that you did a default installation of Python 2. Now click on OK. When you load up a PowerShell session, you should now be able to test that Python is working as expected, as shown here. 5. You should now be able to install the OpenStack clients as described in the next recipe. We first have to ensure that Python is set up properly, and available for use in a shell.
We then have to have a mechanism for loading environment variables into our shell, which isn't a native feature of Windows. We do this using a PowerShell script. However, because PowerShell's default script execution policy is quite restrictive, we have to remove a restriction to allow this to work. Once we have this all set up correctly, we are able to use the OpenStack environment from our Windows desktop.
Installing the OpenStack clients

There are a number of OpenStack clients available that are used to interact with OpenStack from the command line. Historically, each service in OpenStack has had its own client. For example, the OpenStack Compute project, Nova, has its own nova client. Similarly, the OpenStack networking project, Neutron, also has its own client, called the neutron client.
And so on. Officially, there is a convergence to using one client: the OpenStack client. However, not all commands and features are available under this one tool. Moreover, the OpenStack client still requires each individual project command-line tool installed to function; however, it provides a more consistent interface without the need to remember each individual project name.
Getting ready As we are preparing your desktop for interacting with OpenStack from the command line, you will appreciate that there are a variety of choices you can make for your desktop OS of choice. This section will describe the installation of OpenStack clients.
As we will be installing the OpenStack clients using pip, ensure that this is installed by following these steps: 1. First, load up a Terminal and become the root user with the following command: sudo -i 2. We'll be using pip to install the clients, so install pip itself if it is not already present (for example, with apt install python-pip on Ubuntu).

How to do it…

With pip installed on our system, we are able to install the clients using the following simple steps: 1. To install the OpenStack client, carry out the following command: pip install python-openstackclient
2. To install the individual clients, carry out the following commands:
pip install python-novaclient
pip install python-neutronclient
pip install python-glanceclient
pip install python-heatclient
pip install python-keystoneclient
pip install python-cinderclient
pip install python-swiftclient
Each of the projects has its own client, so the syntax is: pip install python-PROJECTclient

Alternative — use a preconfigured OpenStack client virtual machine

Sometimes the clients are developed at a different pace to the projects installed in your environment, which can make for version incompatibilities.
To use a prebuilt virtual environment, carry out the following steps: 1. Check out the openstack-client environment with Git. 2. Launch the client: cd openstack-client vagrant up 3. Access the virtual machine: vagrant ssh

How it works…

Installing the OpenStack clients is made very simple using the pip command-line tool, which is used to install Python packages.
The main tool for using OpenStack on the command line is called the OpenStack client. This tool is used to control all aspects of OpenStack. However, there are some commands and options that have yet to make it into this one tool. To overcome this, the older legacy project tools can still be used. Alternatively, keep these tools in a small virtual machine so that they are always available.

Configuring your Linux or macOS environment

The OpenStack tools are configured by setting environment variables in your shell or desktop.
Getting ready

Ensure that you have the OpenStack clients installed, as described in the first recipe, Introduction — using OpenStack, in this chapter.

How to do it…

Configuration of your command-line environment is achieved by setting environment variables; however, it is easier and more convenient to place these variables in a file that we can later load into our environment. The openrc file written out during installation is a great starting point for configuring the environment, as it has all the required elements needed to operate your CLI environment.
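As an illustration, a typical openrc file for a Keystone v3 environment contains entries like the following; every value shown here is a placeholder rather than one generated by this installation:

export OS_AUTH_URL=http://192.168.1.117:5000/v3   # public Keystone endpoint
export OS_USERNAME=admin
export OS_PASSWORD=supersecret
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3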
Keep it safe and ensure the permissions don't allow any other users to read this file. You can now use the command-line tools. Once you make any changes to this file, remember to source them back into your shell.
How it works… Essentially, we're just setting some environment variables in our shell, which our client tools use to authenticate into our OpenStack environment. To make it easier though, we store these environment variables in a file.
This makes access to our environment easy, as we just run one command to set all the required credentials.

Getting ready

The following applies to Windows 10. Ensure that you have followed the steps to install Python.

How to do it…

Carry out the following to load the required environment variables into your Windows session: 1. Using the same OpenStack credentials as described in the Configuring your Linux or macOS environment recipe, ensure that the file is named openrc to match the following example, and execute it in PowerShell.
Most Windows 10 desktops appear to have a default policy of Restricted, which excludes the running of PowerShell scripts that aren't signed — even the ones you have created yourself. We have to have a mechanism for loading environment variables into our shell, which isn't a native feature of Windows, so we use a PowerShell script; however, we first have to remove this restriction to allow it to run, as sketched below.
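A sketch of both steps in PowerShell (used here rather than Bash because this recipe is Windows-specific); the RemoteSigned policy and the openrc.ps1 filename are illustrative choices, so review your organization's policy first:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
. .\openrc.ps1    # dot-source the credentials script into the current session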
Common OpenStack networking tasks

This section outlines common OpenStack networking tasks, for quick reference only.

Getting ready

Ensure that you have the OpenStack clients installed, as described in the first recipes in this chapter.

How to do it…

Carry out the following steps to create and modify networks in OpenStack:

Creating a network

There are usually two steps to create a network: creating the equivalent of an L2 network, followed by assigning a subnet and details to it.
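A sketch of the first step; the network name publicNet and the physical network label flatNet are illustrative, not values from your deployment:

openstack network create \
  --external \
  --provider-network-type flat \
  --provider-physical-network flatNet \
  publicNet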
This command assumes that our provider interface (as seen from OpenStack, and configured in Neutron) is using the "flat" network type. Typical deployments in a datacenter would likely use "vlan" as the provider type and device, so adjust to suit your environment.
Now we specify some options of the subnet that make sense for this network to be accessed from outside of OpenStack: openstack subnet create --project admin --subnet-range <external CIDR> --network publicNet publicSubnet

Common OpenStack compute tasks

This section is intended as a quick reference guide only. For more detailed information and explanation of each task, refer to Chapter 5, Nova — OpenStack Compute.

Getting ready

Ensure that you have the OpenStack clients installed as described in the first recipes of this chapter.

How to do it…

1. First, list the images available: openstack image list
2. Now we list the networks available (it will be the UUID of the network that we use): openstack network list 3. We need a flavor; if you need reminding of them, list them with the following command: openstack flavor list
4. If you require specific security groups, list them with the following command: openstack security group list 5. If you need to get the name of the key pair to use, use the following command: openstack keypair list 6. Finally, launch the instance with the openstack server create command, referencing the image, flavor, network, security group, and key pair gathered in the previous steps.

Resizing an instance

We first specify the new flavor size with the following command: openstack server resize --flavor <new flavor> myWebserver1. Next, list the running instances to confirm the state. Then we confirm the action with the following command: openstack server resize --confirm myWebserver1

Creating a flavor

To create a flavor, use the openstack flavor create command, as sketched below.
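A sketch of both commands; myImage, myNetwork, mykey, the m1.tiny name, and the sizes are illustrative values, not ones from this environment:

# launch an instance from the items gathered in the previous steps
openstack server create --image myImage --flavor m1.tiny \
  --network myNetwork --key-name mykey myWebserver1
# create a flavor with 1 vCPU, 512 MB RAM, and a 5 GB disk
openstack flavor create --vcpus 1 --ram 512 --disk 5 m1.tiny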
Common OpenStack image tasks

How to do it…

Carry out the following steps to create and modify images in OpenStack:

Uploading an image to Glance

Uploading an image to OpenStack is achieved with a command of the following form (the container and disk formats depend on your image file): openstack image create --container-format bare --disk-format qcow2 --file <local image file> <image name>

Sharing a private image

A private image is normally visible only to the project that owns it; however, you are able to share it with isolated projects of your choosing. In the following example, we will share the cirros-image, currently only available in the admin project, with the anotherProject project:
1. First, query the project list: openstack project list This will bring back output like the following: 2. We will set the image to be shared: openstack image set cirros-image --shared 3. Now add the receiving project as a member of the image: openstack image add project cirros-image anotherProject 4. Important: as a user in the receiving anotherProject project, execute the following to accept the shared image: openstack image set --accept faf6f-4f2d-9a8a-9d84cec8a60d
5. Now, as that same user, you can confirm that you can see this shared image by executing an image listing: openstack image list This will bring back output like the following, showing the available image:

Common OpenStack identity tasks

This section outlines a number of common steps to take for a number of common actions using the OpenStack Identity service.
This is intended as a quick reference guide only. For more detailed information and explanation of each task, refer to Chapter 3, Keystone — OpenStack Identity Service.

Getting ready

Ensure that you have the OpenStack clients installed, as described in the first recipes of this chapter.
How to do it…

Carry out the following steps to create and modify users and projects in OpenStack:

Creating a new project and user

Creating a new project, and a new user within it, is achieved with commands of the following form (the names here are illustrative): openstack project create --description "My Project" myProject and openstack user create --project myProject --password <password> myUser

Changing a user's password

An administrator can change another user's password. To do this for the developer user, carry out the following command: openstack user set --password cookbook4 developer

Changing your own password

To change your own password to something else, issue the following command: openstack user password set --password cookbook4

Common OpenStack storage tasks

This section outlines a number of common tasks using the OpenStack Block and Object Storage services.
How to do it…

Carry out the following steps to create and modify volumes in OpenStack:

Creating a new Cinder volume

To create a new Cinder block storage volume, carry out the following command. The size is in gigabytes: openstack volume create --size 5 my5GVolume

Attaching a volume

To attach a volume to a running instance, carry out the following command.
The running instance UUID is used and can be found by listing the running instances (note that the server is named before the volume): openstack server add volume 58eabbacdbd70f7c my5GVolume

Detaching a volume

To detach a volume, first unmount it from the running instance as you would normally, then carry out the following command: openstack server remove volume 58eabbacdbd70f7c my5GVolume

Creating a volume snapshot

To make a snapshot of a volume, carry out the following steps.
First, you must detach the volume from the running instance to ensure data consistency; this action is described in the previous task. Then create the snapshot with a command of the following form: openstack volume snapshot create --volume my5GVolume my5GVolumeSnapshot

Common OpenStack orchestration tasks

How to do it…

Carry out the following steps to create and use Heat templates in OpenStack to create orchestrated environments:

Launch a stack from a template and environment file

To launch a stack from a heat orchestration template (HOT), issue a command of the following form: openstack stack create --template myStack.yml --environment myEnvironment.yml myStack
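For reference, a minimal HOT template that such a command could consume might look like the following; the image, flavor, and network names are placeholders for values from your own environment:

heat_template_version: 2016-04-08
description: Minimal single-instance stack
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros-image     # replace with a name from 'openstack image list'
      flavor: m1.tiny         # replace with a name from 'openstack flavor list'
      networks:
        - network: myNetwork  # replace with a name from 'openstack network list'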
The OpenStack Identity service authenticates users and projects by sending a validated authorization token between all OpenStack services. This token is passed to the other services, such as Storage and Compute, to grant user access to specific functionalities.
Therefore, configuration of the OpenStack Identity service must be completed first before using any of the other services. Setting up of the Identity service involves the creation of appropriate roles for users and services, projects, the user accounts, and the service API endpoints that make up our cloud infrastructure.
Since we are using Ansible for deploying our environment (refer to Chapter 1, Installing OpenStack with Ansible, for more details), all the basic configuration is done for us in the Ansible playbooks. In Keystone, we have the concepts of domains, projects, roles, users, and user groups.
A Keystone domain (not to be confused with a DNS domain) is a high-level OpenStack Identity resource that contains projects, users, and groups. A project has resources such as users, images, and instances, as well as networks in it, that can be restricted to that particular project unless explicitly shared with others.
A user can belong to one or more projects and is able to switch between them to gain access to those resources. Users within a project can have various roles assigned. Users can be organized into user groups, and the groups can have roles assigned to them. In the most basic scenario, a user can be assigned either the role of admin or just be a member. When a user has admin privileges within a project, they are able to utilize features that can affect the project (such as modifying external networks), whereas a normal user is assigned the member role.
This member role is generally assigned to perform user-related tasks, such as spinning up instances, creating volumes, and creating isolated, project-specific networks. Projects used to be called tenants in early versions of OpenStack.

Creating OpenStack domains in Keystone

If you need to keep separate organizations or departments apart within your OpenStack deployment, consider using separate domains. Think of domains as separate accounts or departments in large organizations.
For this section, we will create a domain for our project, called bookstore.

Getting ready

Ensure that you are logged on to a correctly configured OpenStack client and can access the OpenStack environment as a user with admin privileges.

How to do it…

We start by creating a domain called bookstore, as follows: openstack domain create --description "Book domain" bookstore The output will look similar to this:

How it works…

In OpenStack, high-level identity resources can be grouped under different domains.
If you have to manage distinct organizations within your OpenStack environment, having separate domains for managing them might be very beneficial. By default, your OpenStack environment most likely has a default domain called "Default". The syntax for creating a new one is as follows: openstack domain create --description <description> <domain name> The description parameter is optional, but highly recommended.
The domain name must be unique among the domains in the environment. In our recipes, we will use the --domain parameter and specify a domain name. If the domain is not specified, the OpenStack command-line client will use the domain set for the current user, as specified in the openrc file.