<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>6lab.cz | RSS Feed</title>
	<atom:link href="http://6lab.cz/author/zahorik/feed/" rel="self" type="application/rss+xml" />
	<link>http://6lab.cz</link>
	<description>Networking, IPv6, Security</description>
	<lastBuildDate>Tue, 24 Oct 2017 08:54:46 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.1</generator>
	<item>
		<title>Virtualisation of Critical Network Services &#8211; Best Practice Document</title>
		<link>http://6lab.cz/virtualisation-of-critical-network-services-best-practice-document/</link>
		<comments>http://6lab.cz/virtualisation-of-critical-network-services-best-practice-document/#comments</comments>
		<pubDate>Mon, 14 Oct 2013 15:18:34 +0000</pubDate>
		<dc:creator><![CDATA[Vladimír Záhořík]]></dc:creator>
				<category><![CDATA[IPv6]]></category>
		<category><![CDATA[Networking]]></category>

		<guid isPermaLink="false">http://6lab.cz/?p=1797</guid>
		<description><![CDATA[This document describes a way to virtualise a number of network servers that are required for the operation of a large campus network. These servers provide services including DHCP, DNS, VPN, email, network monitoring, and RADIUS. Most of these services are so important that the network must operate two or ... <a href="http://6lab.cz/virtualisation-of-critical-network-services-best-practice-document/" class="more-link">Read More</a>]]></description>
				<content:encoded><![CDATA[<p>This document describes a way to virtualise a number of network servers that are required for the operation of a large campus network. These servers provide services including DHCP, DNS, VPN, email, network monitoring, and RADIUS. Most of these services are so important that the network must operate two or more instances of them at the same time, and this leads to an increase in the number of servers. Usually, these services do not require a great deal of computing power, which makes them an excellent opportunity to use virtualisation. The document focuses on the different requirements to be considered when choosing the appropriate hardware for the job, with emphasis on the price/performance ratio, while maintaining all the benefits of the VMware vSphere system, which was selected as the virtualisation platform. The document also describes practical experience and the pitfalls that may be encountered during the installation of the system. It describes the configuration of network devices, the iSCSI storage, and the VMware vSphere hypervisors. The conclusion summarises the results and explains the benefits of virtualisation for the campus network.</p>
<h2>1 Virtualisation of Critical Network Services</h2>
<p>The first part of this document describes the advantages and disadvantages of virtualisation for given types of services and explains the purpose of building a virtualisation cluster in two geographically distant locations. The second part is devoted to a specific configuration of network devices and to the preparation that needs to be done before connecting individual parts of the virtualisation cluster. It describes actual experience gained in the operation of these clusters in a situation where it was necessary to revise the network topology. The next section of the document explains how to select the appropriate hardware, especially in the choice of storage and hypervisor hardware. It describes the required properties, with an emphasis on diversity, and in particular, how the virtualisation cluster differs from the usual VMware vSphere cluster. This is followed by a section devoted to the practical configuration of a VMware vSphere and a vCenter. The final section presents the results of the measurement of consumption before and after the virtualisation of the critical servers as well as other operating statistics.</p>
<p>The term virtualisation is an IT buzzword, referring to technologies that create an abstraction layer between computer hardware and software. This layer creates a transparent, logical structure that masks the physical (real) one. The goals of virtualisation are the simplification of maintenance, easier scalability, higher availability, better utilisation of hardware, and improved security. The memory, the processors, the computer network, the storage, the data, or the whole computer system can be virtualised. The virtualisation of a computer enables one physical server to run multiple operating systems. This is called server virtualisation and can be accomplished in different ways. The most frequently used method of server virtualisation is hardware-assisted virtualisation, which requires special instruction-set support in the CPU (Intel VT-x and AMD-V), but offers the best performance. This method of virtualisation is the focus of this document.</p>
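<p>Whether a given CPU offers these extensions can be checked from a Linux shell before a hypervisor is installed; a non-empty result indicates VT-x or AMD-V support (an illustrative check, not part of the original document):</p>
<pre>grep -E 'vmx|svm' /proc/cpuinfo</pre>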
<h3>1.1 Resilient virtualisation cluster</h3>
<p>To be able to choose the right platform, it is necessary to know and understand the basics of server virtualisation. Everyone who starts with server virtualisation typically installs virtualisation software on a server with large amounts of memory and high-capacity drives for all of the virtual servers. This system works well in a test environment. In a production environment, it is also necessary to provide protection against various types of failures that may occur during operation. Current servers have two power supplies, hot-swap disk drives in a RAID array, multiple CPUs, and many memory modules. However, consideration has to be given to the chance that the server motherboard, the RAID controller, or the power can fail. It is necessary to anticipate infrastructure failures caused by network problems, failure of the cooling system in the server room, or revision of the wiring. All of these failures will result in the unavailability of all of the virtual servers. To counter these potential threats, it is necessary to extend the virtualisation system by adding several elements. First, it is necessary to increase the number of hypervisors.</p>
<p>This, in itself, does not contribute substantially, because live migration of the virtual systems cannot be achieved on two stand-alone servers. Migration of these systems is only possible when all hypervisors have access to shared storage. This storage then becomes a single point of failure, because a failure of the device causes unavailability of the whole system. Therefore, enterprise storage devices and hard disks are designed with this in mind. These types of storage have two independent RAID controllers. Each enterprise hard disk has two storage interfaces to connect it to both of the RAID controllers, and this provides resilience in the event of the failure of one of the controllers. The remaining threats for this type of device are storage power failure, air-conditioning failure, or some other disaster. Maintenance of this device is also very complicated, because all actions are performed at run time.</p>
<p>The solution to this problem is to have two storage devices in two geographically distant locations. All hypervisors have access to both of these storage locations. This makes it possible to move all virtual servers to the first device while maintenance is performed on the second. This ability to move virtual systems to another location is important when there is a planned structural modification of the wiring, and also in the event of a disaster. A remaining weak point of this system is an unexpected failure of the storage that contains the production data. Although this scenario is unlikely, due to the features of enterprise storage, it must be considered. These cases can be solved by restoring the data from a backup server or by using storage devices with cross-data replication. Unfortunately, these functions are only supported on the top brands of storage hardware, where cost and complexity are much higher than for middle-class storage hardware.</p>
<p>This paper describes a virtualisation cluster based on two middle-class arrays (without support for cross-replication) and several hypervisors.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_7_1.png"><img class="aligncenter  wp-image-1786" title="vcns_7_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_7_1.png" alt="" width="445" height="281" /></a></p>
<h3>1.2 Selection of the virtualisation platform</h3>
<p>There are many virtualisation solutions. Each of them has its advantages/disadvantages and developers always claim that their product is the best. Selection of the best may not be completely obvious. Among other requirements, a virtualisation system must permit the installation of any operating system, live migration of virtual systems between hypervisors, and movement of live virtual systems from one storage subsystem to another (live migration of a Virtual Machine from one storage location to another without downtime). Another important feature is the ability to ensure uninterrupted operation of the virtual systems when the hypervisor fails. The following table compares the characteristics of the best-known virtualisation solutions.</p>
<table class="tabulka1px centered">
<tbody>
<tr>
<th></th>
<th>VMware vSphere</th>
<th>VMware ESXi Free</th>
<th>Microsoft Hyper-V</th>
<th>KVM</th>
<th>XEN</th>
<th>OpenVZ</th>
</tr>
<tr>
<td>VM Windows</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>VM Linux</td>
<td>Yes</td>
<td>Yes</td>
<td>Partially</td>
<td>Yes</td>
<td>Yes</td>
<td>Partially**</td>
</tr>
<tr>
<td>VM Unix</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Partially**</td>
</tr>
<tr>
<td>VM Migration</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>VM Storage M.</td>
<td>Yes*</td>
<td>No</td>
<td>Downtime***</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
</tbody>
</table>
<p class="figure-comment">* Storage vMotion is enabled in VMware vSphere Enterprise<br />
** OpenVZ needs a modified kernel for VM<br />
*** Hyper-V online VM storage motion is possible, but with downtime during motion (partially solved in Windows Server 2012)</p>
<p>VMware vSphere is definitely not the most powerful solution, but its flexibility, ease of implementation, and support from hardware vendors set the standard for virtualisation. It provides many features that competitors do not yet offer, such as Distributed Resource Scheduler (DRS), Distributed Power Management (DPM), High Availability (HA), Fault Tolerance (FT), and Network I/O Control. Thus, it is well ahead of its competitors. This dominance entails a disadvantage in terms of price, which is several times higher than the price of other virtualisation platforms.</p>
<h2>2 Hardware Selection</h2>
<p>Most virtualisation clusters focus on maximum performance, memory size, and IOPS. Virtual servers only use the resources related to the services running on them. The performance of some services depends on the speed of the CPU (processing video data, simulation, etc.). Other services require maximum IOPS and extra memory (large database systems, web servers). There are also services that never use more than a fraction of the CPU, and for which the IOPS value is not relevant, because data are rarely loaded from storage. These are precisely the services necessary for the operation of a large campus network: DHCP, DNS, VPN, email, RADIUS, and various monitoring tools or web services. The following section explains the selection of hardware suitable for virtualisation of services of this kind.</p>
<h3>2.1 Optimal hypervisor hardware</h3>
<p>As described in previous sections, the most important hardware parts of the virtualisation cluster are the hypervisors and storage devices. Campus network infrastructure requires their interconnection into one robust VMware vSphere cluster. The selection of hardware must be adapted to the characteristics of the virtualised services, and also to VMware licensing policy. It is necessary to license each physical processor unit and every few gigabytes of memory.</p>
<p>Details about VMware vSphere licensing are available on the website [<a href="#lit_1">1</a>]. The best server is equipped with one powerful CPU and up to 64GB memory. This configuration uses all of the resources of one licence of VMware vSphere ESXi 5 Enterprise.</p>
<p>Today&#8217;s processors are about ten times more powerful than the CPUs of five years ago. This allows one processor to replace several older servers. The best price/performance ratio is offered by the Intel Xeon 5600 or the Xeon E5-2600 family of processors.</p>
<p>A VMware vSphere hypervisor requires about 1GB of disk space for installation. Essentially, this storage is only used at system boot. The optimal solution for hypervisor storage is an enterprise flash memory with a capacity of at least 2GB. Other storage systems are not necessary because the data of the virtual servers are stored on a shared iSCSI storage device. Because there is no need to install additional hard drives, it is possible to fit the server hardware into a small server chassis with a height of 1U (Standard Rack Unit).</p>
<p>The parameters described above correspond to many of the servers from different vendors, and campus networks use servers from HP, IBM, Supermicro, Dell, and others. The best operating characteristics were found in Dell servers, which are better than their competitors in design, operating characteristics, warranty, and low failure rate. Their price for academic institutions is very advantageous, and for larger purchases, discounts of 50-60 percent from the list prices can be obtained. The best offer found was the Dell R610 server in the following configuration:</p>
<ul>
<li>PowerEdge R610;</li>
<li>Intel Xeon X5690 Processor;</li>
<li>48GB Memory for 1CPU;</li>
<li>Internal SD Module with 2GB SD Card;</li>
<li>High Output Redundant Power Supply (2 PSU) 717W;</li>
<li>Intel X520-T2 10GbE Dual Port Server Adapter, Cu, PCIe;</li>
<li>iDRAC6 Enterprise.</li>
</ul>
<p>This is a relatively cheap and powerful solution. This server, with a single licence of VMware vSphere 5.0 Enterprise with a one-year subscription, cost less than €6000 at the end of 2011. It is possible to equip the server with 48GB of DDR3 memory for one CPU socket (6x8GB), and memory size is more important than CPU performance. Therefore, in the case of limited resources, it is better to use a cheaper CPU with more memory. A server with less than 36GB of memory will most likely become rare in the future. If one of the hypervisors fails, all running virtual systems must fit into the memory of the remaining hypervisors. Otherwise, it is impossible to guarantee their uninterrupted operation. The same restriction applies not only during failure of a hypervisor, but also during its upgrade. For this reason, it is advantageous for the virtualisation cluster to have at least three or four hypervisors.</p>
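<p>The memory-headroom rule described above can be sketched as a simple check: the total memory of all virtual machines must still fit after the largest hypervisor is removed. This is an illustrative calculation with hypothetical figures, not measurements from the cluster described here.</p>

```python
def survives_single_failure(host_mem_gb, vm_mem_gb):
    """Return True if all VMs still fit in memory after the largest
    hypervisor fails (worst-case single failure)."""
    if len(host_mem_gb) < 2:
        return False  # nowhere to evacuate VMs to
    remaining = sum(host_mem_gb) - max(host_mem_gb)
    return sum(vm_mem_gb) <= remaining

# Three 48GB hypervisors running 104GB of VMs in total:
hosts = [48, 48, 48]
vms = [16, 16, 8, 8, 4] * 2          # 104GB of configured VM memory
print(survives_single_failure(hosts, vms))          # only 96GB remain: False
print(survives_single_failure(hosts + [48], vms))   # 144GB remain: True
```

This also makes the text's point concrete: adding a fourth hypervisor, rather than making each one larger, is what restores the failure headroom.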
<p>VMware vCenter Server is the simplest, most efficient way to manage VMware vSphere. It provides unified management of all of the hosts and VMs in the datacentre from a single console and also provides aggregate performance monitoring. A VMware vCenter Server gives administrators deep insight into the status and configuration of clusters, hosts, VMs, storage, the guest OS, and other critical components of a virtual infrastructure &#8211; all from one location [<a href="#lit_2">2</a>].</p>
<p>VMware vCenter is a software application designed for Windows. Versions for Linux already exist, but do not yet have all the features of the original Windows version. This application is very important, because without it, it is not possible to migrate either the virtual servers or their virtual disks. The licence for this product is determined by the number of hypervisors in the cluster. If the virtualisation cluster includes three hypervisors and this number is not going to increase in the future, it can be very advantageous to choose a VMware vCenter Academic licence, which provides the full version of central management for up to three hypervisors. In other cases, it is necessary to use VMware vCenter Standard, which can handle an unlimited number of hypervisors. A single licence for VMware vCenter 5.0 Standard with a one-year subscription cost less than €4000 at the end of 2011.</p>
<h3>2.2 Fibre vs. iSCSI storage</h3>
<p>The heart of each storage unit is a host bus adapter (HBA) that manages the physical disk drives and presents them as logical units. Connection to the network is also provided by the host bus adapter. Powerful storage devices have an additional HBA for load-balancing between both controllers, which can ensure the full functionality of the storage system during a failure of one of them. Selection of the HBA depends on the choice of suitable technology. The dominant technologies in the enterprise storage field are Fibre Channel and iSCSI. Both technologies are supported by VMware vSphere and offer adequate performance characteristics. The cost of HBAs for both technologies is comparable. The advantage of Fibre Channel technology is its throughput and overall performance. The disadvantage is that it is more expensive and requires a complex network infrastructure. iSCSI is cheaper because it can run on almost any switch.</p>
<p>Most critical services in the network do not require storage systems with extra performance. Therefore, it is not absolutely necessary to deploy Fibre Channel, although it is better in many respects. iSCSI was chosen primarily because of its lower cost, operational characteristics, and the support from VMware. In addition to the type of HBA, it is also necessary to specify other operating parameters, such as device dimensions, the type and number of power supplies, the speed of iSCSI connectivity, hard-drive form factors and capacity, and software features.</p>
<h3>2.3 Storage parameters</h3>
<p>Storage can be connected to the SAN using either a 1Gb or a 10Gb iSCSI HBA. The difference in price between these HBAs is minimal in comparison with the price of the whole storage system. The advantage of 10Gb is the acceleration of iSCSI traffic, and thereby a higher read/write performance of the virtual servers. Sometimes, it may be useful to reduce the interface speed to 1Gb. This allows the connection of the storage system to the existing 1Gb SAN infrastructure, and the storage can still use the same HBA after an upgrade to a 10Gb SAN.</p>
<p>The disk array must be maximally resilient to hardware failures. For this reason, the array must be equipped not only with two HBAs, but also with several power supplies. The size of the storage equipment is mostly limited by the height that the storage device takes up in the rack, which should not exceed 2U (Standard Rack Units).</p>
<p>Twelve classical 3.5&#8243; discs or twenty-four 2.5&#8243; discs can be accommodated in the space of 2U. 3.5&#8243; drives offer the greatest output and capacity. The total capacity of twelve of these disks is 36TB for SATA disks, or roughly 7.2TB for SAS disks. The total capacity of twenty-four 2.5&#8243; SAS disks is 21.6TB. These smaller disks have a lower tendency to overheat, making them more reliable. A greater number of disks also provides higher IOPS and more flexibility in designing the RAID array [<a href="#lit_3">3</a>]. Modern enterprise SSDs are also 2.5&#8243; in size. The following table compares different disk configurations that can be fitted into a 2U disk array.</p>
<table class="tabulka1px centered">
<tbody>
<tr>
<th colspan="2"></th>
<th>Capacity(GB)</th>
<th>Drives in 2U</th>
<th>2U Capacity(GB)</th>
<th>Active Power(W)</th>
<th>MTBF(Mh)</th>
</tr>
<tr>
<td rowspan="2">3.5&#8243;</td>
<td>SATA/SAS (NL)</td>
<td>3000</td>
<td rowspan="2">12</td>
<td>36000</td>
<td>13</td>
<td>1.2</td>
</tr>
<tr>
<td>SAS</td>
<td>600</td>
<td>7200</td>
<td>18</td>
<td>1.6</td>
</tr>
<tr>
<td rowspan="4">2.5&#8243;</td>
<td>SATA/SAS (NL)</td>
<td>1000</td>
<td rowspan="4">24</td>
<td>24000</td>
<td>7</td>
<td>1.2</td>
</tr>
<tr>
<td>SAS</td>
<td>900</td>
<td>21600</td>
<td>9</td>
<td>1.6</td>
</tr>
<tr>
<td>SATA MLC SSD</td>
<td>512</td>
<td>12288</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>SAS SLC SSD</td>
<td>400</td>
<td>9600</td>
<td>9</td>
<td>2</td>
</tr>
</tbody>
</table>
<p class="figure-comment">* values are actual at the end of 2011; [<a href="#lit_4">4</a>]</p>
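<p>The 2U capacity column in the table follows directly from the per-drive capacity and the number of drive bays; the quick arithmetic sketch below reproduces those figures (values taken from the table, labels shortened for illustration):</p>

```python
# Per-drive capacity (GB) and number of drives that fit into 2U.
configs = {
    "3.5in SATA/SAS (NL)":  (3000, 12),
    "3.5in SAS":            (600, 12),
    "2.5in SATA/SAS (NL)":  (1000, 24),
    "2.5in SAS":            (900, 24),
    "2.5in SATA MLC SSD":   (512, 24),
    "2.5in SAS SLC SSD":    (400, 24),
}

for name, (capacity_gb, drives) in configs.items():
    # Total raw capacity of a fully populated 2U enclosure.
    print(f"{name}: {capacity_gb * drives} GB per 2U")
```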
<p>Therefore, it is best to fill the disk array with a higher number of 2.5&#8243; SAS disks, mostly because they are more reliable and offer a higher total capacity than 3.5&#8243; SAS disks. In some cases, it is even better to use several 2.5&#8243; SSDs. After considering all the above requirements, the Dell PowerVault MD3620i disk array was chosen as the best, with the following configuration:</p>
<ul>
<li>PowerVault MD 3620i;</li>
<li>2x HBA 10Gb iSCSI (2x 10GbE port and 1x 1GbE Management port per HBA);</li>
<li>24x 600GB 10K RPM SAS 6Gbps 2.5&#8243; Hotplug Hard Drive;</li>
<li>2x Redundant Power Supply (2 PSU) 717W (600W peak output);</li>
</ul>
<h2>3 Network Infrastructure</h2>
<p>The first step in setting up a virtualisation cluster is to prepare the network infrastructure. This infrastructure provides two primary functions: connection of the hypervisors to the backbone network and to the disk storage.</p>
<h3>3.1 Storage interconnection</h3>
<p>Switches that connect hypervisors with the disk storage are referred to as the SAN infrastructure. A SAN infrastructure could, theoretically, be shared with the server access network. This option was tested over a period of several months. This testing determined that it is not the correct choice.</p>
<p>Some network technologies that are used in backbone networks to ensure uninterrupted operation (STP, OSPF, etc.) can cause short interruptions of several seconds during their convergence. Another source of potential problems is short-term network peaks, when a link is saturated with a lot of traffic for several seconds (possibly due to DoS attacks or broadcast storms). These problems all produce similar behaviour in the virtual servers: a short-term interruption or significant slowing of the SAN network can lead to serious problems with the virtual servers. Problems were found on all virtual operating systems, but most occurred on Linux servers. Their I/O system is set up to avoid data damage when access to the hard drive fails: if the Linux kernel does not get a response from the hard drive within a predefined interval, it remounts the affected filesystem as read-only, making the virtual server entirely dysfunctional. Other operating systems do not remount the disk read-only, although these too were affected by interruptions: all processes requiring a disk operation at the time had to wait several seconds for the operation to complete. As a result of these problems, it was necessary to set up a separate network infrastructure between the remote locations to allow direct connection of the SAN switches in both locations.</p>
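<p>A common mitigation inside Linux guests (an illustrative workaround, not taken from the original deployment; the device name is an assumption) is to raise the SCSI disk timeout so that a short SAN interruption does not trigger the read-only remount:</p>
<pre># inside the Linux guest; sda is the affected virtual disk
echo 180 > /sys/block/sda/device/timeout</pre>
<p>The setting is not persistent across reboots, so in practice it would be applied via a udev rule or a boot script.</p>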
<p>HP switches are the most used switches on the VUT campus in Brno, which is why the proven HP 2910-24al switch was selected for the SAN infrastructure. This is a standard managed L2 switch that allows the connection of four optical transceivers. This switch has sufficient throughput and supports jumbo frames. Theoretically, the use of jumbo frames is highly suitable, although the increase in throughput realised is only a few percent, and the more complex configuration, without diagnostic tools, represents a disadvantage. This is why jumbo frames are not enabled in the final SAN infrastructure. The following illustration describes the SAN connection in detail. Two independent L2 segments, ensuring iSCSI connectivity, are highlighted in blue and red. Each of these segments uses a different range of IP addresses: 10.255.3.0/24 and 10.255.4.0/24. All of the hypervisors and the HBAs are part of both subnets. This is the only way that resilience against an interruption in one branch can be guaranteed.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_12_1.png"><img class="aligncenter  wp-image-1800" title="vcns_12_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_12_1.png" alt="" width="480" height="225" /></a></p>
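<p>Connectivity over both iSCSI segments should be verified from every hypervisor. On ESXi, the vmkping utility can source the test from a specific VMkernel interface (the -I option is available in newer ESXi releases; the vmk names and the 10.255.4.109 address are assumptions for illustration):</p>
<pre>vmkping -I vmk1 10.255.3.109
vmkping -I vmk2 10.255.4.109</pre>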
<h3>3.2 Backbone interconnection</h3>
<p>In addition to the SAN infrastructure, the backbone connectivity of the hypervisors is also necessary. Each of these must be connected to two redundant access devices. Through the backbone network, these are then connected to another pair of access devices in the remote location. Hypervisors in all locations must have access to the same VLANs, so that a virtual server with a fixed IP address can run on any of the hypervisors. This functionality can be achieved using the RSTP protocol for backing up the two remote locations, while the VRRP protocol is used to back up the default gateways in each network. The following illustration depicts the resulting topology. The SAN infrastructure is depicted in green. The connections between the active components within the distribution layer are depicted in blue. The backbone links are drawn in red; these provide a high-speed connection to the remote locations. The illustration also includes two separate optical cables, which are crucial to the entire cluster&#8217;s full redundancy.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_13_1.png"><img class="aligncenter  wp-image-1801" title="vcns_13_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_13_1.png" alt="" width="489" height="407" /></a></p>
<h3>3.3 Network configuration</h3>
<p>The previous chapter described important aspects of the proper functioning of a virtual cluster. This primarily requires access to the same VLANs for all hypervisors in all locations, as well as backup of the default gateways in each subnet, in order to keep them accessible even during outages or router upgrades. This functionality is primarily achieved by the backbone network with the STP and VRRP protocols. The configuration of backbone components is examined in detail in [<a href="#lit_5">5</a>] and [<a href="#lit_6">6</a>]. The VLAN configuration is most important when configuring the access devices. Every hypervisor must have access to all networks. The names of network interfaces must be the same on every hypervisor. This is the only way that problem-free migration of the virtual servers can be guaranteed between the individual hypervisors in the various locations.</p>
<p>The crucial configuration of the active components is shown in the following examples. The purpose of the configuration referred to in these examples is to set up three VLANs on the switches. Two of them are set aside to operate the virtual servers (10.0.1.0/24, 10.0.2.0/24) and one is for management (10.255.1.0/24). Besides these three networks, the backbone network must also include two networks reserved for iSCSI operation (10.255.3.0/24, 10.255.4.0/24).</p>
<h4>3.3.1 An example of the configuration of a SAN device</h4>
<p>Switches in a SAN infrastructure do not require a complicated configuration. Basic commands suffice for their installation. To confirm the functioning of the SFP modules and the stability of the links, it is easiest to use the command <code>show interface brief</code> or <code>show interface 24</code>, where the number 24 represents the number of the SFP port of the optical module. To confirm the basic connection, the usual <code>ping</code> command suffices.</p>
<p>Each component&#8217;s configuration requires the setting up of IP addresses in the management network and several other commands.</p>
<pre>hostname "virtual-kou-sw1"
no cdp run
no web-management
vlan 506
	name "mgmt"
	ip address 10.255.1.9 netmask 255.255.255.0
	untagged 1
	exit
snmp-server community "public" operator
snmp-server contact "hostmaster@example.com" location "kou, sal"</pre>
<p>Properly set time values should be applied to the switches. Otherwise, the logs will not make sense. It is a good idea to store the logs remotely, because a reboot of any equipment usually erases the log file. The <code>ip authorized-managers</code> command offers, at least, basic security.</p>
<pre>timesync sntp
time timezone 60
time daylight-time-rule Western-Europe
sntp unicast
sntp server priority 1 10.255.1.1
logging 10.255.1.1
logging facility syslog

ip authorized-managers 10.255.1.0 255.255.255.0 access manager
crypto key generate ssh rsa
ip ssh</pre>
<p>The previous configuration is usually the same on all equipment, while the most important configuration for SAN switches is the configuration of the VLAN for iSCSI operation and their IP addresses.</p>
<pre>vlan 546
	name "vmware-iscsi"
	untagged 2-24
	ip address 10.255.3.109 255.255.255.0
	exit</pre>
<h4>3.3.2 Example of access-device configuration</h4>
<p>The basic configuration has already been described in the previous chapter. Besides these basics, a redundant connection to the backbone network and the VLAN configuration for connected equipment are important for the access devices. Configuration of the management VLAN and the spanning tree protocol (STP) is a basic component of backbone connectivity.</p>
<pre>vlan 506
	name "mgmt"
	tagged 1-4
	ip address 10.255.1.8 netmask 255.255.255.0
	exit
spanning-tree force-version rstp-operation
spanning-tree</pre>
<p>Use the <code>show spanning-tree</code> command to verify STP functionality. This provides an abundance of useful information. Usually it suffices to verify the state of both uplinks. One should be in the &#8220;Forwarding&#8221; state and the other in the &#8220;Blocking&#8221; state. Furthermore, the &#8220;Time Since Last Change&#8221; value should be the same for all other components in the same STP domain. Once the STP functions properly, it is possible to set up the user VLANs.</p>
<pre> vlan 3
	name "ant-servers"
	tagged 1-4
	exit
vlan 654
	name "kou-servers"
	tagged 1-4
	exit</pre>
<h4>3.3.3 Example of backbone-device configuration</h4>
<p>The configuration of backbone components is sufficiently dealt with in the GN3 documentation [<a href="#lit_6">6</a>]. The most important part of the configuration is to set up the STP, GVRP, VRRP, and OSPF protocols.</p>
<pre> vlan 506
	name "mgmt"
	tagged 1-4
	ip address 10.255.1.3 netmask 255.255.255.0
	exit
spanning-tree force-version rstp-operation
spanning-tree
gvrp</pre>
<p>To implement the OSPF protocol, it is first necessary to create a point-to-point connection between neighbouring routers and to activate OSPF on these interfaces. The <code>show ip ospf neighbor</code> and <code>show ip route</code> commands are the most important for determining the protocol states.</p>
<pre> ip routing
router ospf
area 0.0.0.2
	redistribute connected
	exit
vlan 240
	name "ext240"
	ip address 147.229.240.2 255.255.255.252
	ip ospf 147.229.240.2 area 0.0.0.2
	tagged B21
	exit
vlan 241
	name "ext241"
	ip address 147.229.241.2 255.255.255.252
	ip ospf 147.229.241.2 area 0.0.0.2
	tagged B22
	exit</pre>
<p>The last part of the configuration concerns the VRRP protocol. It must be configured on both backbone routers (which back up each other) at the same time. The configuration should be made for all subnets whose default gateways are to be backed up. The configuration of the primary router can be as follows.</p>
<pre> vlan 3
	name "ant-servers"
	ip address 147.229.3.1 255.255.255.128
	tagged 1-4
	vrrp vrid 1
		owner
		virtual-ip-address 147.229.3.1 255.255.255.128
		enable
		exit
	exit
vlan 654
	name "kou-servers"
	ip address 147.229.3.254 255.255.255.128
	tagged 1-4
	vrrp vrid 2
		backup
		virtual-ip-address 147.229.3.130 255.255.255.128
		enable
		exit
	exit</pre>
<p>The configuration of the other router can be as follows.</p>
<pre> vlan 3
	name "ant-servers"
	ip address 147.229.3.126 255.255.255.128
	tagged 1-4
	vrrp vrid 1
		backup
		virtual-ip-address 147.229.3.1 255.255.255.128
		enable
		exit
	exit
vlan 654
	name "kou-servers"
	ip address 147.229.3.130 255.255.255.128
	tagged 1-4
	vrrp vrid 2
		owner
		virtual-ip-address 147.229.3.130 255.255.255.128
		enable
		exit
exit</pre>
<h2>4 Storage Installation</h2>
<p>Preparation of the disk array is the next step in the installation process. When mounting into the rack, it is best to install such equipment close to the ground, mostly for stability reasons, because a disk array full of hard drives is usually rather heavy. The temperature surrounding the drives is also important: in some cases, the temperature between the upper and lower sections of the rack can differ by tens of degrees Celsius, depending on the performance of the server room&#8217;s air-conditioning system. After installation into the rack, the disk array&#8217;s management ports can be connected to the same subnet; for backup purposes, the two ports should be connected to different switches. Within the same subnet, a station running a Windows operating system with the PowerVault Modular Disk Storage Manager installed must be prepared in advance. The most up-to-date version of this software is available from the developer [<a href="#lit_7">7</a>].</p>
<h3>4.1 Connection to the device</h3>
<p>The first step in installing the disk array is to connect it to the management software. For arrays with default settings, it is best to use Automatic Discovery. If the management ports are already configured (if an IP address has already been assigned to them), it is faster to connect to the array using that IP address.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_18_1.png"><img class="aligncenter  wp-image-1804" title="vcns_18_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_18_1.png" alt="" width="495" height="199" /></a></p>
<p>After connecting to the disk array, the basic status is displayed. Modification of the management port IP addresses is accomplished via <code>Setup &gt; Configure Ethernet Management Ports</code>. Configuration of the iSCSI ports is closely related to the SAN infrastructure topology (refer to Chapter 3.1) and is accomplished via <code>Setup &gt; Configure iSCSI Host Ports</code>.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_19_1.png"><img class="aligncenter  wp-image-1805" title="vcns_19_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_19_1.png" alt="" width="499" height="414" /></a></p>
<h3>4.2 Disk group configuration</h3>
<p>The disk array must first be partitioned into disk groups. The size (capacity) of individual disk groups is determined by the number of assigned hard disks, the RAID level, and the configured SSD cache. The total capacity of individual disk groups is then further partitioned into virtual disks.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_20_1.png"><img class="aligncenter  wp-image-1806" title="vcns_20_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_20_1.png" alt="" width="495" height="303" /></a></p>
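<p>The capacity arithmetic behind the disk-group choice can be sketched as follows. This is a minimal sketch, not MDSM output; the 12-disk groups and 900&nbsp;GB drive size are illustrative assumptions, not values taken from this document.</p>

```shell
#!/bin/sh
# Usable disk-group capacity by RAID level -- a sketch, not MDSM output.
# Disk counts and the 900 GB drive size are illustrative assumptions.
raid5_gb()  { echo $(( ($1 - 1) * $2 )); }   # one disk's worth of parity
raid6_gb()  { echo $(( ($1 - 2) * $2 )); }   # two disks' worth of parity
raid10_gb() { echo $(( $1 / 2 * $2 )); }     # mirrored pairs

echo "RAID 5,  12x 900 GB: $(raid5_gb 12 900) GB usable"
echo "RAID 6,  12x 900 GB: $(raid6_gb 12 900) GB usable"
echo "RAID 10, 12x 900 GB: $(raid10_gb 12 900) GB usable"
```

The RAID level therefore trades capacity against resilience and rebuild time, which is why the group layout should be decided before any virtual disks are carved out.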
<h3>4.3 Virtual disk mappings</h3>
<p>These virtual disks are assigned to individual iSCSI clients within the &#8220;Mappings&#8221; tab. A virtual disk can be shared among clients, provided the sharing is supported by the file system. The individual clients are identified not only by their IP addresses, but also by an &#8220;iSCSI Initiator String&#8221;. For iSCSI clients of the VMware vSphere hypervisor, the &#8220;iSCSI Initiator String&#8221; is generated once the iSCSI protocol is activated; this is further explained in Chapter 5.5. The mapping of a new hypervisor to a virtual disk or a group of virtual disks is depicted in the following illustration. To add a new host, you will need to know its identifier, as described above. If the host has already attempted to connect to the iSCSI array, its hostname and iSCSI Initiator String are already stored in the unestablished-connection table, where the correct host can be chosen with a single click, without the need to input the host&#8217;s identifier manually.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_21_1.png"><img class="aligncenter  wp-image-1807" title="vcns_21_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_21_1.png" alt="" width="494" height="315" /></a></p>
<h2>5 Hypervisor Installation</h2>
<p>A hypervisor&#8217;s installation is not significantly different from the installation of other operating systems. When booting from the installation disk, it is sufficient to run the ESXi-5.0.0 Installer from the menu. It is necessary to choose the location where the VMware vSphere system should be installed. This location can be a hard drive, a RAID disk array, an SSD disk, or a flash drive. In Chapter 2.1, the Dell R610 server, configured with SD memory, was chosen as the server on which the hypervisor is installed. The last step before initiating the installation is to configure the root password. After a reboot, the system is ready.</p>
<h3>5.1 Connection to the hypervisor</h3>
<p>In the default state, the hypervisor is assigned a dynamic IP address from the DHCP server. For problem-free work with the hypervisor, it is better to set a fixed IP address. This is possible from the system&#8217;s console: press F2 and enter your login details to show the basic configuration panel.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_22_1.png"><img class="aligncenter  wp-image-1808" title="vcns_22_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_22_1.png" alt="" width="497" height="199" /></a></p>
<p>The purpose of each item should be evident. A fixed IP address can be set under &#8220;Configure Management Network&#8221;. In addition to the IP address, this menu can also be used to set up the network interface, the VLAN ID, or the DNS parameters. The Management Network should then be restarted and tested using the corresponding console menu items. A specialised client should be used for complete configuration.</p>
<h3>5.2 VMware vSphere client</h3>
<p>This is a separate application used to control and configure the VMware vSphere system. The easiest means of obtaining a client is through the hypervisor&#8217;s web interface.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_22_2.png"><img class="aligncenter  wp-image-1809" title="vcns_22_2" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_22_2.png" alt="" width="488" height="232" /></a></p>
<h3>5.3 Hypervisor configuration</h3>
<p>Configuration of the hypervisor using the VMware vSphere Client requires a preset IP address on the network interface, and login details. All these configuration details are entered locally on the server (refer to Chapter 5.1).</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_23_1.png"><img class="aligncenter  wp-image-1810" title="vcns_23_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_23_1.png" alt="" width="480" height="280" /></a></p>
<p>When logging into the hypervisor, it is advisable to configure the NTP server, the DNS parameters, and the SSH server.</p>
<ul>
<li><code>Time Configuration - Properties - Options - NTP Settings</code></li>
<li><code>DNS and Routing - Properties - DNS Configuration - Look for hosts in the following domains</code></li>
<li><code>DNS and Routing - Properties - Routing - Default gateway</code></li>
<li><code>Security Profile - Services Properties - Options - SSH - Startup Policy - Start and Stop with host</code></li>
<li><code>Security Profile - Services Properties - Options - SSH - Services commands - Start</code></li>
</ul>
<h4>5.3.1 Network interface configuration</h4>
<p>In VMware, there are two types of network interface: VMkernel and Virtual Machine. VMkernel is a network interface for connecting the following ESXi services: vMotion, iSCSI, NFS, and Host Management. The Virtual Machine interface establishes a connection between the virtual server and the computer network. These interfaces can be configured either through the VMware vSphere Client or from the command line. The easiest way to configure a network interface is with the VMware vSphere Client. The advantage of this type of configuration is its simplicity; even a beginner can set up the required network interface relatively comfortably. The disadvantage of this method is the time required. Also, with a larger number of network interfaces, it is more difficult to achieve an identical configuration for all hypervisors (which is required for the proper migration of virtual servers between hypervisors). When configuring multiple hypervisors, it is better to configure the network interfaces using the console. This allows all hypervisor network interfaces to be configured with a sequence of commands, which may be repeated for all hypervisors that have the same hardware configuration. This method ensures an identical configuration of hypervisor network interfaces within a cluster. The following command shows the initial state of the network interfaces once the installation of the hypervisor is complete.</p>
<pre> ~ # esxcfg-vswitch -l
Switch Name	Num Ports	Used Ports 	Configured Ports 	MTU 	Uplinks
vSwitch0 	128 		3 		128 			1500 	vmnic0

	PortGroup Name 		VLAN ID 	Used Ports 	Uplinks
	VM Network 		0 		0 		vmnic0
	Management Network 	0 		1 		vmnic0</pre>
<p>To understand the virtual network interface in VMware, it is important to understand the following hierarchical concepts: vSwitch, vmknic, vmnic, and port group. On the lowest level, physical network interfaces (vmnic) are assigned to individual vSwitch objects. These physical network interfaces are used for communication between the hypervisor, with its virtual servers, and the connected systems. Multiple vmnic interfaces in a single vSwitch provide a higher degree of redundancy (standby or link aggregation). The absence of a vmnic interface means that the given vSwitch only handles communication between the virtual servers. Each vSwitch is similar to an L2 switch that supports VLANs. In VMware terminology, &#8220;port group&#8221; is used instead of VLAN. These port groups are used both to connect virtual servers and for the services of the hypervisor system.</p>
<h4>5.3.2 vSwitch configuration</h4>
<p>Before configuring vSwitches, you must decide which physical interfaces should be used to create the vSwitch objects, and which services each vSwitch should carry. In most cases of a VMware cluster with iSCSI storage, a suitable partitioning of four physical network cards (vmnic0&#8211;vmnic3) is as follows:</p>
<ul>
<li>vSwitch0 &#8211; vmnic0, vmnic1 &#8211; backed-up connection of the virtual servers, host management, and vMotion;</li>
<li>vSwitch1 &#8211; vmnic2 &#8211; iSCSI operation;</li>
<li>vSwitch2 &#8211; vmnic3 &#8211; iSCSI operation.</li>
</ul>
<p>The first step is to configure port groups for host management. Initially, a &#8220;Management Network&#8221; port group is created on vSwitch0 with the vmnic0 uplink. The following commands add a second physical network interface to the virtual switch, set it as a standby backup interface, and create port groups for connecting the virtual servers. With the <code>esxcfg-vswitch</code> command, the number following the <code>-v</code> parameter is the VLAN ID with which the given port group&#8217;s frames are tagged when they are forwarded across the vmnic0 and vmnic1 physical interfaces to the backbone network.</p>
<pre>esxcfg-vswitch -L vmnic1 vSwitch0
esxcli network vswitch standard policy failover set -s vmnic1 -v vSwitch0

esxcfg-vswitch -A "vpn-server" vSwitch0
esxcfg-vswitch -A "mgmt-ro" vSwitch0
esxcfg-vswitch -A "vlan3" vSwitch0
esxcfg-vswitch -A "vlan3-kou" vSwitch0

esxcfg-vswitch -v 660 -p "vpn-server" vSwitch0
esxcfg-vswitch -v 506 -p "mgmt-ro" vSwitch0
esxcfg-vswitch -v 3 -p "vlan3" vSwitch0
esxcfg-vswitch -v 654 -p "vlan3-kou" vSwitch0</pre>
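<p>The command sequence above can also be generated from a single table of port-group names and VLAN IDs, which makes it easy to repeat identically on every hypervisor, as recommended in Chapter 5.3.1. A minimal sketch; the helper name is illustrative, and the name/VLAN pairs match the examples in this chapter:</p>

```shell
#!/bin/sh
# Emit the esxcfg-vswitch commands for each "name vlan" pair read from stdin.
# Paste the printed output into the console of each hypervisor in turn.
emit_portgroup_cmds() {
    while read name vlan; do
        echo "esxcfg-vswitch -A \"$name\" vSwitch0"
        echo "esxcfg-vswitch -v $vlan -p \"$name\" vSwitch0"
    done
}

emit_portgroup_cmds <<EOF
vpn-server 660
mgmt-ro 506
vlan3 3
vlan3-kou 654
EOF
```

Keeping the table in one file and regenerating the commands guarantees that all hypervisors in the cluster end up with identical port groups.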
<p>The following steps describe how to create other vSwitch objects and their configuration as iSCSI ports.</p>
<pre> esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "iSCSI1" vSwitch1
esxcfg-vswitch -A "iSCSI2" vSwitch2</pre>
<p>Each hypervisor has two separate IP addresses, one in each iSCSI subnet, and each address establishes connectivity with the disk array independently.</p>
<p>hypervisor1:</p>
<pre> esxcfg-vmknic -a -i 10.255.3.10 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.255.4.10 -n 255.255.255.0 iSCSI2</pre>
<p>hypervisor2:</p>
<pre> esxcfg-vmknic -a -i 10.255.3.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.255.4.11 -n 255.255.255.0 iSCSI2</pre>
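<p>Because only the last octet differs between hypervisors, the per-host commands can be generated rather than typed, which keeps the numbering plan consistent across the cluster. A sketch, assuming hypervisor <em>N</em> takes host address 9+<em>N</em> in both iSCSI subnets (the helper name is illustrative):</p>

```shell
#!/bin/sh
# Generate the vmknic commands for hypervisor number $1 (1, 2, ...),
# following the 10.255.3.x / 10.255.4.x plan used above.
emit_vmknic_cmds() {
    last=$(( 9 + $1 ))    # hypervisor1 -> .10, hypervisor2 -> .11, ...
    echo "esxcfg-vmknic -a -i 10.255.3.$last -n 255.255.255.0 iSCSI1"
    echo "esxcfg-vmknic -a -i 10.255.4.$last -n 255.255.255.0 iSCSI2"
}

emit_vmknic_cmds 1    # run the printed commands on hypervisor1
emit_vmknic_cmds 2    # run the printed commands on hypervisor2
```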
<h4>5.3.3 iSCSI configuration</h4>
<p>The graphical client is suitable for the hypervisor&#8217;s iSCSI configuration.</p>
<ul>
<li><code>Configuration - Storage adapters - Add - Add software iSCSI adapter</code><br />
A new software iSCSI adapter will be added to the Storage Adapter list. After it has been added, select the software iSCSI adapter in the list and click on Properties to complete the configuration.</li>
<li><code>OK</code></li>
<li><code>iSCSI Software Adapter - vmhba&lt;number&gt; - Properties - Dynamic Discovery / Static Discovery</code><br />
Add the IP addresses of the iSCSI targets. These addresses match the topology of the SAN infrastructure (Chapter 3.1).
<pre>10.255.3.1
10.255.4.1
10.255.3.2
10.255.4.2</pre>
</li>
<li><code>Next</code><br />
A rescan of the host bus adapter is recommended for this configuration change. Rescan the adapter?</li>
<li><code>Yes</code><br />
This step secures a hypervisor-connection attempt to storage, using its IP addresses and iSCSI name. The iSCSI session must be permited on storage. Here, it is important to check that the storage knows the iSCSI name of hypervisor. The next steps are realised in Powervault Modular Disk Storage Manager and build on information obtained in Chapter 4.3.</li>
<li><code>Open Powervault Modular Disk Storage Manager</code></li>
<li><code>Mappings - Storage(in left window) - View - Unassociated Host Port Identifiers</code><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_26_1.png"><img class="aligncenter  wp-image-1811" title="vcns_26_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_26_1.png" alt="" width="388" height="182" /></a></li>
<li><code>List of unassociated host port identifiers.</code><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_26_2.png"><img class="aligncenter size-full wp-image-1812" title="vcns_26_2" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_26_2.png" alt="" width="334" height="63" /></a></li>
<li><code>Mappings - Storage - Host Group - Define Host - &lt;Host name&gt; - Add by selecting a known unassociated host port identifier &lt;choose the right one&gt; - User Label &lt;write a descriptive string&gt; - Add &lt;check that the Host Port Identifier and User Label match the hypervisor&#8217;s values&gt; - Next</code><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_26_3.png"><img class="aligncenter  wp-image-1813" title="vcns_26_3" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_26_3.png" alt="" width="388" height="356" /></a></li>
<li><code>Host Type - VMWARE (or linux) - Next - Finish</code></li>
<li><code>Close Powervault Modular Disk Storage Manager</code></li>
<li><code>Go back to VMware vSphere Client</code></li>
<li><code>Configuration - Storage adapters - Rescan all</code><br />
Now it is possible to see the active devices and paths.<br />
<a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_27_1.png"><img class="aligncenter  wp-image-1815" title="vcns_27_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_27_1.png" alt="" width="380" height="332" /></a></li>
</ul>
<h5>5.3.3.1 Partitions</h5>
<p>The following lines can be skipped if partitions have already been created and formatted on the disk array. Otherwise, partitions must be created and formatted; follow these steps for each newly added partition, as needed.</p>
<ul>
<li><code>Configuration - Storage - Add Storage - Disk/LUN</code><br />
<a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_27_2.png"><img class="aligncenter  wp-image-1816" title="vcns_27_2" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_27_2.png" alt="" width="483" height="173" /></a></li>
<li><code>A partition will be created and used - Next</code></li>
<li><code>&lt;Enter the partition name&gt; - Next</code></li>
<li><code>&lt;Choose optimal Block Size&gt; - Maximum available space - Next</code><br />
1024 GB with a 4 MB block size should be suitable for the majority of cases</li>
<li><code>Finish</code></li>
</ul>
<h3>5.4 vMotion</h3>
<p>The base of the VMware cluster is now functional and has access to the iSCSI array. In order to migrate virtual servers between hypervisors, all hypervisors must be administered centrally by the VMware vCenter Server application. It is also necessary to define the hypervisor&#8217;s VMkernel network interface (refer to Chapter 5.3.1), across which the vMotion transmissions will be made.</p>
<ul>
<li><code>Configuration - Networking - vSwitch0 - properties</code></li>
<li><code>&lt;choose right VMKernel or define a new one&gt;</code></li>
<li><code>Port Properties - vMotion (checkbox on)</code></li>
</ul>
<h2>6 vCenter Server</h2>
<p>VMware vCenter Server is a software application for the centralised management of a virtual infrastructure. vCenter Server 5 comes in either a Windows or a Linux version. However, the Linux version is somewhat behind, since it cannot run the Update Manager or some of the plugins. For these reasons, it is better to use the full functionality of the Windows version. This version operates on an MS SQL database; MS SQL 2005 is included in vCenter Server&#8217;s installation. This server is fully functional, but without additional software it is not possible to back up the database or to perform more advanced database management. In most cases, however, it is sufficient. If you require some of the more advanced functions, simply install MS SQL Server Management Studio. Alternatively, you can migrate to MS SQL 2008, which may be used free of charge after fulfilling the licensing conditions.</p>
<h3>6.1 vCenter installation</h3>
<p>Some conditions must be met before installing the vCenter Server. Above all, you will need a 64-bit version of Windows installed on a suitable server. The server may be physical or virtual, but should not be located on any of the hypervisors in the created virtualisation cluster. The disadvantage would become evident during an outage of the hypervisor on which vCenter Server runs: the central management would stop working, as would the arbitrator that normally determines which virtual servers were affected by the outage and on which hypervisor they should run instead. From this perspective, it is truly better to run vCenter Server on a separate server, ideally as a virtual server in a standalone installation of VMware vSphere. The advantage of this approach, as opposed to a hardware server, is the ability to take snapshots of the vCenter Server, its easy re-installation, and better management of the physical server&#8217;s resources.</p>
<p>The actual installation of the vCenter Server is trivial and does not require any special effort. The user name used during the installation is the same one that will be used to run the server application; later, further users and groups can be added and their access to the virtualisation clusters administered. After the server is installed, it is a good idea to also install VMware Update Manager, a software application that manages updates of hypervisors and virtual servers. After installing these core applications, you should restart the server and verify, through the services administrator, whether the corresponding services have started automatically.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_1.png"><img class="aligncenter  wp-image-1817" title="vcns_29_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_1.png" alt="" width="416" height="224" /></a></p>
<p>Sometimes, the order in which the services start may cause conflicts, so if the vCenter does not start up properly, the Startup Type of the following services should be set to &#8220;Automatic (Delayed Start)&#8221;:</p>
<ul>
<li>VMware VirtualCenter Management Webservices;</li>
<li>VMware VirtualCenter Server.</li>
</ul>
<p>The VMware vCenter should now be fully functional. For login, the VMware vSphere Client uses the same username and password as the system.</p>
<h3>6.2 vCenter configuration</h3>
<p>In its default state, vCenter does not yet include any objects that could be administered. Such objects must first be created. The basic object types are as follows.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_2.png"><img class="size-full wp-image-1818 alignnone" title="vcns_29_2" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_2.png" alt="" width="16" height="16" /></a> vCenter</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_3.png"><img class="alignnone size-full wp-image-1819" title="vcns_29_3" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_3.png" alt="" width="18" height="18" /></a> Datacenter</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_4.png"><img class="alignnone size-full wp-image-1820" title="vcns_29_4" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_4.png" alt="" width="16" height="17" /></a> Cluster</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_5.png"><img class="alignnone size-full wp-image-1821" title="vcns_29_5" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_5.png" alt="" width="11" height="16" /></a> Host</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_6.png"><img class="alignnone size-full wp-image-1822" title="vcns_29_6" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_29_6.png" alt="" width="17" height="16" /></a> Virtual Machine</p>
<p>The virtual server (VM) runs on the hypervisor (Host). Hosts are assigned either to a cluster or directly to the Datacenter. The Datacenter is the basic hierarchical block, which groups clusters with individual hosts. The root of the hierarchy tree is an instance of the VMware vCenter.</p>
<p>The first step in configuring the vCenter is to create a Datacenter object. The importance of this block is that it separates the hardware resources provided by individual hosts and the disk array into functional blocks. Another step is to create a cluster. The last configuration step is to assign the hypervisors to a cluster.</p>
<ul>
<li><code>&lt;Focus on vCenter instance and show context menu&gt; - New Datacenter - Name - Finish</code></li>
<li><code>&lt;Focus on Datacenter instance and show context menu&gt; - New Cluster</code>
<ul>
<li><code>Name - Turn ON vSphere HA - Next</code></li>
<li><code>Host Monitoring Status: Enable Host Monitoring</code></li>
<li><code>Admission Control: Enable</code></li>
<li><code>Admission Control Policy: Host failures the cluster tolerates: 1</code></li>
<li><code>Next</code></li>
<li><code>Cluster Default Settings</code></li>
<li><code>VM restart priority: Medium</code></li>
<li><code>Host Isolation response: Leave powered on</code></li>
<li><code>Next</code></li>
<li><code>VM Monitoring: Disabled</code></li>
<li><code>Monitoring sensitivity: High</code></li>
<li><code>Next</code></li>
<li><code>&lt;Choose right type of CPU&gt;</code></li>
<li><code>Enable EVC for Intel Hosts: Intel Sandy Bridge Generation</code></li>
<li><code>Next</code></li>
<li><code>Store the swapfile in the same directory as the Virtual Machine</code></li>
<li><code>Next - Finish</code></li>
</ul>
</li>
<li><code>&lt;Focus on Cluster instance and show context menu&gt; - Add Host</code>
<ul>
<li><code>Connection: &lt;Enter IP or HOSTNAME of hypervisor&gt;</code></li>
<li><code>Authorization: &lt;Enter right credentials&gt;</code></li>
<li><code>Next - Finish</code></li>
</ul>
</li>
</ul>
<p>These steps create a tree structure, such as the one that follows.</p>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_30_1.png"><img class="aligncenter size-full wp-image-1823" title="vcns_30_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_30_1.png" alt="" width="224" height="129" /></a></p>
<h3>6.3 Virtual machines</h3>
<p>When creating virtual servers, an item from the &#8220;Cluster&#8221; or &#8220;Host&#8221; context menu is usually used.</p>
<ul>
<li><code>&lt;Focus on Cluster or Host instance and show context menu&gt; - New Virtual Machine</code>
<ul>
<li><code> Configuration: Typical - Next</code></li>
<li><code> Name - Next</code></li>
<li><code> Choose a specific host within a cluster</code></li>
<li><code> Select a destination storage - &lt;iSCSI shared storage must be selected to provide redundancy&gt;</code></li>
<li><code> Next</code></li>
<li><code> Guest Operating System - Next</code></li>
<li><code> Number of NICs</code></li>
<li><code> Network: &lt;Choose network name - VLAN ID&gt;</code></li>
<li><code> Type of adapter: &lt;Intel E1000 is a widely supported network adapter&gt;</code></li>
<li><code> Next</code></li>
<li><code> Virtual disk size:</code>
<ul>
<li><code> 64 GB is a good minimal value for Windows 2008 R2 / Windows 7 and newer</code></li>
<li><code> 8 GB is a good minimal value for UNIX/Linux without the X system mounted on /. An additional virtual disk is necessary for special server functionality,<br />
e.g., a 64 GB virtual disk mounted on /var/www for a web server,<br />
e.g., a 64 GB virtual disk mounted on /var/db for databases.<br />
This scheme, with every partition on a separate virtual disk, is very useful for resizing: instead of resizing a virtual disk, partition, and file system, it is possible to add a new, bigger virtual disk to the system and copy all data to it. This saves a lot of time and is much safer for your data.</code></li>
</ul>
</li>
<li><code> Thin Provisioning - &lt;a little slower, but saves a lot of disk space; highly recommended&gt;</code></li>
<li><code> Next - Finish</code></li>
</ul>
</li>
</ul>
<p><a href="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_31_1.png"><img class="aligncenter  wp-image-1824" title="vcns_31_1" src="http://6lab.cz/new/wp-content/uploads/2013/08/vcns_31_1.png" alt="" width="357" height="232" /></a></p>
<p>A virtual server may be installed by running an ISO file from a mapped CD/DVD device, or by PXE. This document does not cover the installation of a virtual operating system.</p>
<h3>6.4 Plugins</h3>
<p>The following plugins offer advanced functionality for a vSphere Client connected to a vCenter Server. The server part of a plugin is usually installed on the vCenter Server. The vSphere Client displays the available plugins in the plugin menu.</p>
<h4>6.4.1 Update Manager</h4>
<p>After installing the VMware Update Manager, the Update Manager plugin becomes available.</p>
<ul>
<li><code>Plugins - Manage Plugins - VMware vSphere Update Manager Extension</code><br />
Clicking Download and Install in the Status column begins the installation.</li>
<li><code>Run - Next - Accept - Next - Install - Finish</code></li>
<li><code>Install this certificate - Ignore</code><br />
The VMware vSphere Update Manager Extension is now enabled.</li>
</ul>
<p>The vSphere Client interface now includes an Update Manager menu and an interface for Update Manager Administration. Below is a description of how to upgrade a hypervisor. The first step is to define which patches will be applied to a hypervisor.</p>
<ul>
<li><code>Home - Update Manager - Download patches and upgrades</code></li>
<li><code>Go to Compliance View</code></li>
<li><code>Attach</code></li>
<li><code>Patch Baselines</code></li>
<li><code>Critical Host Patches</code></li>
<li><code>Non-Critical Host Patches</code></li>
<li><code>Attach</code></li>
</ul>
<p>The following steps must be carried out whenever patching a hypervisor.</p>
<ul>
<li><code>Home - Update Manager - Download patches and upgrades</code></li>
<li><code>Go to Compliance View</code></li>
<li><code>Scan</code></li>
</ul>
<p>A compliant host is shown in green; a non-compliant host in red.</p>
<ul>
<li><code>&lt;Focus on Non-Compliant Host&gt;</code></li>
<li><code>&lt;Migrate all VMs to another Host&gt;</code></li>
<li><code>&lt;Enter the Maintenance Mode&gt;</code></li>
<li><code>Remediate - Next - Next - Next - Next - Finish</code><br />
Installation and restart of the host takes about 5 minutes.</li>
<li><code>&lt;Exit the Maintenance Mode&gt;</code></li>
</ul>
<h3>6.5 vMotion and Storage vMotion</h3>
<p>A final step should include verifying the functional migration of virtual servers between hypervisors.</p>
<ul>
<li><code>&lt;Focus on online VM to migrate&gt; - Migrate</code></li>
<li><code>Change Host - Next</code></li>
<li><code>&lt;Choose different Host&gt; - Next</code></li>
<li><code>Finish</code></li>
</ul>
<p>The migration of a virtual server&#8217;s data storage may be verified by the following steps.</p>
<ul>
<li><code>&lt;Focus on online VM to migrate&gt; - Migrate</code></li>
<li><code>Change Datastore - Next</code></li>
<li><code>&lt;Choose different Datastore&gt; - Next</code></li>
<li><code>Finish</code></li>
</ul>
<p>Both of the above tasks should be possible without a VM outage. The time required to change a hypervisor depends on the amount of operating memory of the virtual server. The time required to change data storage depends on the size of the virtual server&#8217;s hard disks; a data-storage change usually takes some time and may take several dozen hours with large data servers.</p>
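<p>The storage-migration time can be estimated from the disk size and the iSCSI link speed. A back-of-the-envelope sketch, assuming a 1&nbsp;Gbit/s uplink and roughly 60% effective throughput (both figures are assumptions, not measurements from this deployment):</p>

```shell
#!/bin/sh
# Rough Storage vMotion duration estimate: disk size / effective throughput.
# Arguments: disk size [GB], link speed [Mbit/s], effective utilisation [%].
svmotion_seconds() {
    disk_gb=$1; link_mbit=$2; eff_percent=$3
    # disk_gb * 8 * 1000 megabits, divided by the effective Mbit/s rate
    echo $(( disk_gb * 8 * 1000 * 100 / (link_mbit * eff_percent) ))
}

s=$(svmotion_seconds 2048 1000 60)    # a 2 TB disk over 1 Gbit/s at 60%
echo "approx $(( s / 3600 )) h $(( s % 3600 / 60 )) min"
```

Under these assumptions, a 2&nbsp;TB virtual disk takes well over seven hours, which is consistent with the observation that large data servers may need several dozen hours.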
<h2>7 Conclusion</h2>
<p>The purpose of this document is to describe all aspects of operating a virtualisation cluster, so that it can be used as a manual for designing other virtualisation clusters. At the moment, VMware vSphere is the best-tuned platform for virtualisation. This best practice document therefore focuses on this most widely-used tool and on the hardware required for its operation. When choosing hardware and software tools, emphasis was placed on an optimal price/performance ratio, so that the chosen software best serves the conditions of the target environment and does not require the purchase of unnecessary virtualisation-software licenses. The final result is a proposal for the optimal hardware for this type of virtualisation cluster, as described in the second chapter.</p>
<p>As the following table shows, the migration of the original thirty physical servers to a newly-created virtualisation cluster lowers the power required to operate these systems by 77%.</p>
<table class="tabulka1px centered">
<tbody>
<tr>
<th>Previous devices</th>
<th>Count</th>
<th>Power [W]</th>
<th>Total Power [W]</th>
</tr>
<tr>
<td>Server PowerEdge 1950 III (2x CPU)</td>
<td>15</td>
<td>295</td>
<td>4425</td>
</tr>
<tr>
<td>Server PowerEdge 2950 III (2x CPU)</td>
<td>15</td>
<td>327</td>
<td>4905</td>
</tr>
<tr>
<td>Summary</td>
<td></td>
<td></td>
<td>9330</td>
</tr>
</tbody>
</table>
<p>After virtualisation cluster installation and migration of all physical systems into a virtual environment, a basic measurement of consumption yielded the following results.</p>
<table class="tabulka1px centered">
<tbody>
<tr>
<th>New devices</th>
<th>Count</th>
<th>Power [W]</th>
<th>Total Power [W]</th>
</tr>
<tr>
<td>Storage array MD3620i (24x 2.5&#8243; HDD)</td>
<td>2</td>
<td>452</td>
<td>904</td>
</tr>
<tr>
<td>Server PowerEdge R610 (1x CPU, no HDD)</td>
<td>4</td>
<td>228</td>
<td>912</td>
</tr>
<tr>
<td>Switch HP ProCurve 2910al-24G</td>
<td>4</td>
<td>82</td>
<td>328</td>
</tr>
<tr>
<td>Summary</td>
<td></td>
<td></td>
<td>2144</td>
</tr>
</tbody>
</table>
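<p>The 77% figure quoted above follows directly from the two tables; a quick arithmetic check (a minimal Python sketch based only on the table values, with illustrative variable names):</p>

```python
# Total power draw before and after the migration, taken from the tables (watts).
before = 15 * 295 + 15 * 327          # thirty PowerEdge 1950 III / 2950 III servers
after = 2 * 452 + 4 * 228 + 4 * 82    # two MD3620i arrays, four R610 hosts, four switches

reduction_pct = round(100 * (before - after) / before)
print(before, after, reduction_pct)   # 9330 2144 77
```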
<p>The performance of the redundant virtualisation cluster significantly surpassed all expectations and clearly demonstrates the advantages of virtualisation, especially for older server systems. In addition to the energy savings, there was a saving of about 60% in the required rack space. However, the greatest advantage is the resilience of virtual systems against hardware failures and the ability to migrate the virtual systems to other locations. Migration to other locations is important during power or temperature-control failures, or during natural disasters affecting the data centre, because the virtual servers, together with their storage, can be transferred to an unaffected location.</p>
<p>The system described is the result of many years of experience with VMware virtualisation at the Brno VUT campus. The maintenance of such a system is easier than maintaining dozens of individual servers, and operating a virtualisation cluster avoids the problems associated with incompatible server hardware.</p>
<h2>8 References</h2>
<p><a name="lit_1"></a><br />
[1] VMware vSphere 5, Licensing, Pricing and Packaging <a href="http://www.vmware.com/files/pdf/vsphere_pricing.pdf">http://www.vmware.com/files/pdf/vsphere_pricing.pdf</a></p>
<p><a name="lit_2"></a><br />
[2] VMware vSphere Documentation <a href="http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html">http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html</a></p>
<p><a name="lit_3"></a><br />
[3] Tom&#8217;s Hardware, 3.5&#8243; Vs. 2.5&#8243; SAS HDDs: In Storage, Size Matters Patrick Schmid, Achim Roos, May 2010 <a href="http://www.tomshardware.com/reviews/enterprise-storage-sas-hdd,2612.html">http://www.tomshardware.com/reviews/enterprise-storage-sas-hdd,2612.html</a></p>
<p><a name="lit_4"></a><br />
[4] Dell Enterprise HDD Specification, August 2011 <a href="http://www.dell.com/downloads/global/products/pvaul/en/enterprise-hdd-sdd-specification.pdf">http://www.dell.com/downloads/global/products/pvaul/en/enterprise-hdd-sdd-specification.pdf</a></p>
<p><a name="lit_5"></a><br />
[5] Configuration of HP Procurve Devices in a Campus Environment, Tomas Podermanski, Vladimir Zahorik, March 2010 (CBPD111, the Czech Republic) <a href="http://www.terena.org/activities/campus-bp/pdf/gn3-na3-t4-cbpd111.pdf">http://www.terena.org/activities/campus-bp/pdf/gn3-na3-t4-cbpd111.pdf</a></p>
<p><a name="lit_6"></a><br />
[6] Recommended Resilient Campus Network Design, Tomas Podermanski, Vladimir Zahorik, March 2010 (CBPD114, the Czech Republic) <a href="http://www.terena.org/activities/campus-bp/pdf/gn3-na3-t4-cbpd114.pdf">http://www.terena.org/activities/campus-bp/pdf/gn3-na3-t4-cbpd114.pdf</a></p>
<p><a name="lit_7"></a><br />
[7] Drivers for PowerVault MD3620i, August 2011 <a href="http://ftp.euro.dell.com/Pages/Drivers/powervault-md3620i.html">http://ftp.euro.dell.com/Pages/Drivers/powervault-md3620i.html</a></p>
<h2>9 List of acronyms</h2>
<dl>
<dt>DHCP</dt>
<dd>Dynamic Host Configuration Protocol</dd>
<dt>DNS</dt>
<dd>Domain Name System</dd>
<dt>DoS</dt>
<dd>Denial-of-Service Attack</dd>
<dt>GVRP</dt>
<dd>GARP VLAN Registration Protocol</dd>
<dt>GARP</dt>
<dd>Generic Attribute Registration Protocol</dd>
<dt>IOPS</dt>
<dd>Input/Output Operations Per Second</dd>
<dt>IP</dt>
<dd>Internet Protocol</dd>
<dt>iSCSI</dt>
<dd>Internet Small Computer System Interface</dd>
<dt>L2</dt>
<dd>Layer 2 &#8211; Data link layer of OSI model</dd>
<dt>L3</dt>
<dd>Layer 3 &#8211; Network layer of OSI model</dd>
<dt>OSPF</dt>
<dd>Open Shortest Path First</dd>
<dt>PSU</dt>
<dd>Power Supply Unit</dd>
<dt>RSTP</dt>
<dd>Rapid Spanning Tree Protocol</dd>
<dt>SFP</dt>
<dd>Small Form-factor Pluggable Transceiver</dd>
<dt>SM fiber</dt>
<dd>Single-mode Optical Fiber</dd>
<dt>STP</dt>
<dd>Spanning Tree Protocol</dd>
<dt>VLAN</dt>
<dd>Virtual Local Area Network</dd>
<dt>VPN</dt>
<dd>Virtual Private Network</dd>
<dt>VRRP</dt>
<dd>Virtual Router Redundancy Protocol</dd>
</dl>
<div  class="x-author-box cf" ><h6 class="h-about-the-author">About the main author</h6><div class="x-author-info"><h4 class="h-author mtn">Vladimír Záhořík</h4><a href="http://www.vutbr.cz/lide/vladimir-zahorik-1849" class="x-author-social" title="Visit the website for Vladimír Záhořík" target="_blank"><i class="x-icon-globe"></i> http://www.vutbr.cz/lide/vladimir-zahorik-1849</a><span class="x-author-social"><i class="x-icon-envelope"></i> zahorik@cis.vutbr.cz</span><p class="p-author mbn"></p></div></div>
]]></content:encoded>
			<wfw:commentRss>http://6lab.cz/virtualisation-of-critical-network-services-best-practice-document/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>IPv6 Configuration on HP ProCurve Switches</title>
		<link>http://6lab.cz/ipv6-configuration-on-hp-procurve-switches/</link>
		<comments>http://6lab.cz/ipv6-configuration-on-hp-procurve-switches/#comments</comments>
		<pubDate>Sat, 13 Nov 2010 10:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Vladimír Záhořík]]></dc:creator>
				<category><![CDATA[IPv6]]></category>
		<category><![CDATA[Networking]]></category>

		<guid isPermaLink="false">http://ipv6.vutbr.cz/?p=692</guid>
		<description><![CDATA[New firmware for HP ProCurve switches was released on 15<sup>th</sup> November 2010. With this step, the manufacturer removed a significant shortcoming of the ProCurve switches – no full support for the IPv6 protocol. Partial IPv6 support was already introduced in earlier versions, but only for device management and filtering (ACL). Version K.15 brings IPv6 routing support in hardware with all features, including support of the OSPFv3 routing protocol. This firmware was released for the L3 switches series 54xx, 81xx – i.e., all switches with the “K” letter in their firmware name. The release number of the new version is 15 (K.15). The current document presents a detailed look at the implementation of IPv6 support. Giving examples, it will be shown that IPv6 configuration is not very complicated. Since for many people practical use of IPv6 is still unknown territory, some differences from IPv4 will be described in more detail below. Management and syntax of IPv6 commands copy the Cisco philosophy to a large degree. Yet, there are some small differences. The procedures below definitely do not represent all IPv6 possibilities in the K.15 firmware or IPv6 configuration possibilities, but are merely a manual to put IPv6 into production on these switches easily and quickly.
]]></description>
				<content:encoded><![CDATA[<h2>1 Setting Addresses on Interfaces</h2>
<p>The first thing that must be done is to set an IPv6 address. The common IPv4 set-up was one address and a relevant subnet mask for each interface. The situation is slightly different for IPv6. First of all, each interface must be equipped with a <em><strong>Link-local address</strong></em>. This address has only local significance and is set automatically on each IPv6 interface as soon as the device is turned on. From the administrator’s point of view, this process is fully automated and therefore requires no special attention in the configuration. The other generally used addresses are <em><strong>Global IPv6 addresses</strong></em>. This type of address more or less resembles the addresses that we know from the IPv4 world. Most likely, the change of address length (to 128 bits) will not surprise anyone, but setting the <em><strong>prefix length</strong></em> to 64 bits in most cases is a new thing. In IPv4 terminology we used to refer to a subnet mask and the mask length; with IPv6, we talk about a prefix and prefix length.</p>
<p>As mentioned above, the Link-Local address is set up automatically. The global address is configured on an interface. Here, we have two options. Either the whole address, i.e., both the network part and the host part (host ID), can be set statically, or you can set only the network part and have the host part derived by the EUI-64 algorithm from the device’s MAC address.</p>
<pre>
hp-test<strong># configure</strong>
hp-test(config)<strong># vlan 224</strong>
hp-test(vlan-224)<strong># ipv6 address 2001:718:802:224::1/64</strong>
hp-test(vlan-224)<strong># exit</strong>
hp-test(config)<strong># vlan 225</strong>
hp-test(vlan-225)<strong># ipv6 address 2001:718:802:225::0/64 eui-64</strong>
</pre>
<p>Just to be sure, we can check the configuration:</p>
<pre>
hp-test(vlan-225)<strong># show ipv6</strong>

  Internet (IPv6) Service

    IPv6 Routing : Enabled
    ND DAD       : Enabled
    DAD Attempts : 3

    VLAN Name    : DEFAULT_VLAN
    IPv6 Status  : Disabled
    VLAN Name    : VLAN224
    IPv6 Status  : Enabled

    Address     |                                             Address
    Origin      |       IPv6 Address/Prefix Length            Status
    ----------- + ------------------------------------------- -----------
    manual      |       2001:718:802:224::1/64                tentative
    autoconfig  |       fe80::21d:b3ff:fe01:a700/64           tentative

    VLAN Name : VLAN225
    IPv6 Status : Enabled

    Address     |                                             Address
    Origin      |         IPv6 Address/Prefix Length          Status
    ----------- + ------------------------------------------- -----------
    manual      |     2001:718:802:225:21d:b3ff:fe01:a700/64  tentative
    autoconfig  |     fe80::21d:b3ff:fe01:a700/64             tentative
</pre>
<p>As shown in the listing, there are three different IPv6 addresses set on two IP interfaces (which are represented by VLANs). The first one is the address we set on the <em>2001:718:802:224</em> network; the first available address in the relevant network is used (with the number 1 in the host ID). The second address was created by the EUI-64 algorithm. In this case, the network address is <em>2001:718:802:225</em> and the host ID is <em>21d:b3ff:fe01:a700</em>. The third address shown in the listing (fe80::21d:b3ff:fe01:a700) is a <em>Link-Local</em> address. Note that the <em>Link-Local</em> address has the same value on all interfaces. When working with this address, we must therefore append, after the % symbol, the interface to which the relevant <em>Link-Local</em> address belongs (e.g., <em>fe80::21d:b3ff:fe01:a700%vlan224</em>).</p>
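<p>The EUI-64 derivation that produced the second address can be illustrated in a few lines of Python (an illustrative sketch; <code>eui64_interface_id</code> is our own helper, not switch functionality): the universal/local bit of the MAC address is flipped and the bytes ff:fe are inserted in the middle.</p>

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit IPv6 interface ID from a MAC address (RFC 4291, Appendix A)."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                # flip the universal/local bit of the first byte
    b[3:3] = b"\xff\xfe"        # insert ff:fe between the OUI and the device part
    # Render as four 16-bit hex groups without leading zeros.
    return ":".join(f"{b[i] << 8 | b[i + 1]:x}" for i in range(0, 8, 2))

# The switch MAC from the listing above yields exactly the host ID shown:
print(eui64_interface_id("00:1d:b3:01:a7:00"))  # 21d:b3ff:fe01:a700
```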
<h2>2 IPv6 Management</h2>
<p>Using IPv6 for switch management will probably remain rather marginal for some time. The main reason for this is the effort to focus on providing native IPv6 (or dual stack) connectivity for servers and client systems. IPv6 support for management was included in the K.14 firmware release, but customers probably never used this feature on a large scale.</p>
<p>If you decide to keep using IPv4 for management, you must not forget that each configured IPv6 address automatically becomes an address that can be used to manage the switch. You can limit access by defining the mgmt-vlan option, but this method is not always practicable. When configuring the first IPv6 address on the switch, you should therefore always restrict access to the device management. Use the following command to limit management to selected networks:</p>
<pre>hp-test(config)<b># ipv6 authorized-managers 2001:718:802:228::0
ffff:ffff:ffff:ffff:: access manager</b></pre>
<p>IPv6 management can be disabled completely using the following command:</p>
<pre>hp-test(config)<b># ipv6 authorized-managers 0::
ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff</b></pre>
<p>Note that the network mask is entered in a somewhat unusual format; unfortunately, at the moment there is no way to enter the mask as a prefix length. If you wish to manage the switch via IPv6 only, you can do so. Most settings will not present major issues &#8211; i.e., access through the SNMP protocol to the switch MIB, sending remote logs via syslog, time servers, etc. The only problem is the definition of RADIUS servers: the current version does not permit entering IPv6 addresses for them. Thus, if you use 802.1X user authentication or authenticate the management access to the switch through a RADIUS server, you will need to keep at least one IPv4 address on the component for management purposes.</p>
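<p>Because the CLI accepts only the fully expanded mask and not a prefix length, the conversion can be done off-box. A small sketch using Python&#8217;s standard <code>ipaddress</code> module (the helper name is our own):</p>

```python
import ipaddress

def prefix_to_mask(plen: int) -> str:
    """Expand an IPv6 prefix length into the full netmask notation the CLI expects."""
    return str(ipaddress.IPv6Network(f"::/{plen}").netmask)

print(prefix_to_mask(64))   # ffff:ffff:ffff:ffff::
print(prefix_to_mask(128))  # ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
```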
<h2>3 Client Configuration</h2>
<p>In the previous sections, we carried out the first step that is required to run IPv6 on the interface. Now, we will find out how to set up an IPv6 address on the endpoint systems. We will skip the possibility of configuring a static address (which can of course be done) and focus on tools that will make the job easier for us. The IPv6 protocol introduces new mechanisms to configure addresses for endpoint systems. The <em><strong>router advertisement</strong></em> (RA) protocol, part of <em>Neighbour Discovery</em> (RFC 4861), is one of these. Each router sends information about the network addresses configured on its interfaces at regular intervals or upon request (<em>Router Solicitation</em>). These data are used by an endpoint system or device to set its own IPv6 address. This is a completely different approach than we were accustomed to in the IPv4 environment, where DHCP was the common way to configure an address. If the routing functionality is enabled on the switch, then <em>Router Advertisement</em> is generated automatically and includes all networks configured on the interface. But in some cases you may want to suppress sending RA. This can be done globally for the whole switch</p>
<pre>hp-test(config)<strong># ipv6 nd suppress-ra</strong></pre>
<p>or within the configuration of a given interface</p>
<pre>hp-test(vlan-224)<strong># ipv6 nd ra suppress</strong></pre>
<p>In practice we will probably use these commands very rarely. Far more important for RA configuration are the <em>MANAGED</em> and <em>OTHER</em> flags. The <em>MANAGED</em> flag says that the device’s IPv6 address and other parameters may be obtained in the given network through DHCPv6. The <em>OTHER</em> flag tells the client that it can use DHCPv6 only to obtain other parameters, such as DNS server addresses, DNS suffixes, etc. The <em>MANAGED</em> flag automatically has higher priority: if the <em>MANAGED</em> flag is set, setting the <em>OTHER</em> flag is meaningless. By default, both flags are turned off. They can be turned on with the following commands.</p>
<pre>hp-test(vlan-224)<strong># ipv6 nd ra managed-config-flag</strong>
hp-test(vlan-224)<strong># ipv6 nd ra other-config-flag</strong></pre>
<p>Most likely, these options, especially the <em>MANAGED</em> flag, will be used very often in practice.</p>
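<p>The two flags travel in the flags octet of the Router Advertisement (RFC 4861, section 4.2): bit 0x80 is <em>MANAGED</em> (M) and bit 0x40 is <em>OTHER</em> (O). A tiny illustrative Python sketch of how a client interprets them, encoding the priority rule described above:</p>

```python
def ra_flags(octet: int) -> dict:
    """Interpret the M and O bits of an RA flags octet (RFC 4861)."""
    managed = bool(octet & 0x80)
    return {
        "managed": managed,
        # When M is set, O is implied: DHCPv6 is used for addresses and all
        # other configuration anyway, so a separate O bit adds nothing.
        "other": managed or bool(octet & 0x40),
    }

print(ra_flags(0x80))  # {'managed': True, 'other': True}
print(ra_flags(0x40))  # {'managed': False, 'other': True}
```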
<h2>4 DHCPv6</h2>
<p>It was already mentioned that in the IPv6 world DHCP support is not a necessary prerequisite to automatically configure a device, in contrast to what we are used to in the IPv4 world. The router advertisement (RA) mechanism mentioned above takes care of transferring the data that are required to create basic network connectivity. But there is no way to provide other necessary data, such as DNS server addresses or search domain suffixes, in RA messages. These data can be received either via DHCP over IPv4 (with dual-stack support) or over DHCPv6. If no DHCPv6 server is connected directly to the given network, you will have to use a remote DHCPv6 server and set up DHCPv6 relay on the switch. The set-up for DHCPv6 relay is very similar to the set-up of DHCPv4 relay. The configuration on the switch is the same for stateful and for stateless configuration.</p>
<pre>hp-test(vlan-224)<strong># ipv6 helper-address unicast 2001:718:802:4::93e5:394</strong>
hp-test(vlan-224)<strong># ipv6 helper-address unicast 2001:718:802:3::93e5:318</strong></pre>
<p>and to turn on the DHCPv6 relay support in the main configuration:</p>
<pre>hp-test(config)<strong># dhcpv6-relay</strong></pre>
<h2>5 Neighbour Cache</h2>
<p>Careful readers will have noticed that address assignment to endpoint systems is not managed centrally, as we are used to with IPv4, where the DHCP server usually provides this service. In the case of IPv6, the endpoint system addresses are often randomly generated (RFC 4941) and not influenced by an external authority. In practical operation, the relation between a communicating IPv6 address and its link-layer address (MAC address) will often need to be known. With IPv4, this information was stored in the ARP table; with IPv6, the corresponding structure is called the <em>neighbour cache</em>. Its meaning and use are in principle the same as those of an ARP table. You can list its contents with the following command:</p>
<pre>
hp-test<b># show ipv6 neighbours</b>

  IPv6 ND Cache Entries

  IPv6 Address                           MAC Address  State  Type   Port
--------------------------------------- ------------- ----- ------- ----
  ...
  2001:718:802:3:223:32ff:fe31:50d4     002332-3150d4 STALE dynamic   2
  2001:718:802:3:81f3:b2e7:f738:3bd8    000423-c915c4 STALE dynamic   2
  2001:718:802:3:915a:50d3:f16e:919a    000423-c915c4 STALE dynamic   2
  fe80::214:22ff:fe7b:8673%vlan223      001422-7b8673 STALE dynamic   23
  2001:718:802:80::1                    001ec1-daab81 STALE dynamic   4
  fe80::21e:c1ff:feda:ab81%vlan224      001ec1-daab81 STALE dynamic   4
  ...
</pre>
<p>As you see, <em>neighbour cache</em> contains records for all types of addresses, i.e. <em>Link-local</em> and global addresses. For the time being, browsing cached records is not very convenient: only VLAN ID is supported as a filtering option. Therefore, we must use some external tool for more advanced filtering or sorting. The neighbour cache records are also available through the MIB tree – as defined in RFC 4293 in the <em>ipNetToPhysicalTable</em>.</p>
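<p>Since only VLAN ID filtering is available on the switch itself, the listing can be post-processed externally. A minimal Python sketch (our own parser; the column layout is assumed to match the listing above) that turns the output into tuples, which can then be filtered, e.g., by MAC address:</p>

```python
import re

# One cache entry: IPv6 address, MAC (xxxxxx-xxxxxx), state, type, port.
ROW = re.compile(r"\s*([0-9A-Fa-f:%\w]+)\s+([0-9A-Fa-f]{6}-[0-9A-Fa-f]{6})"
                 r"\s+(\w+)\s+(\w+)\s+(\d+)")

def parse_neighbours(listing: str):
    """Turn 'show ipv6 neighbours' output into (address, mac, state, port) tuples."""
    return [(m[1], m[2], m[3], m[5])
            for line in listing.splitlines()
            if (m := ROW.match(line))]

sample = """
  2001:718:802:3:223:32ff:fe31:50d4     002332-3150d4 STALE dynamic   2
  fe80::214:22ff:fe7b:8673%vlan223      001422-7b8673 STALE dynamic   23
"""
entries = parse_neighbours(sample)
print([e for e in entries if e[1] == "002332-3150d4"])
```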
<h2>6 Unicast Routing</h2>
<p>Having overcome all the hurdles of end network configuration, you can start the routing configuration. Routing support is activated with a single command:</p>
<pre>hp-test(config)# ipv6 unicast-routing</pre>
<p>It is obvious from the command that only unicast routing is activated this way. You would search in vain for a command to activate multicast routing. We must hope that support for multicast routing on the network layer, including the related protocols (<em>PIM-SM, PIM-DM</em>), will be included in some future version.</p>
<p>The <em><strong>static routing</strong></em> configuration is also simple. In principle, record entry to the routing table is not different from the entry that is commonplace in the IPv4 world. The following command probably does not need further comments.</p>
<pre>hp-test(config)# ipv6 route 2001:718:802:228::/64 2001:718:802:224::10</pre>
<h2>7 Routing Protocol &#8211; OSPFv3</h2>
<p>The situation is slightly different for the configuration of the routing protocol. The components support the <em>OSPF</em> protocol, specifically its equivalent in the IPv6 world, i.e., <em>OSPFv3</em> (RFC 2740, RFC 5340). The way in which the protocol works is largely similar to <em>OSPF</em>. The key change is the fact that communication between routers and the exchange of routing information are performed only over the <em>Link-Local</em> addresses. In practice, this means that global IPv6 addresses do not need to be configured on networks that interconnect OSPFv3 routers. The OSPFv3 interface configuration is reduced to enabling IPv6 on the given interface and assigning the OSPFv3 area. The absence of a global IPv6 address on the interface causes some complications, however. Some diagnostic tools using the <em>ICMPv6</em> protocol, like <em>traceroute6</em> and <em>ping6</em>, cannot produce the proper information, because the routers are not reachable by a global IPv6 address. This problem can be solved by setting up a global IPv6 address on the interconnecting networks; in that case, the address does not even have to be from the interconnecting network itself, as any global address reachable from the rest of the network is sufficient. Another option, very elegant in our opinion, is configuring a single global IPv6 address on the loopback interface of the L3 switch.</p>
<p>Some other parameters must also be set for OSPFv3. Most likely the need to configure an area will not surprise anyone. This is set in the same way as with OSPF: through a 32-bit identifier written in the form of four single-byte numbers separated with dots. The value <em>0.0.0.0</em> is used to mark the backbone area, just like in OSPF. With OSPFv3, you will certainly need to set the router ID parameter manually more often. It is a unique router identifier whose value is normally derived from the highest configured IPv4 address on the router. You did not have to deal with its configuration much in the IPv4 world, because the address was derived automatically. But if you want to set up only IPv6 routing on a router, you need to set this parameter manually. The setting is done with a single command for both the OSPF and the OSPFv3 routing process.</p>
<pre>hp-test(config)<strong># ip router-id 147.229.240.123</strong></pre>
<h2>8 Playing with Multicast</h2>
<p>Multicast support consists of two parts: link-layer support (multicast distribution optimisation) and support on the network layer (multicast routing). The first part includes mechanisms supporting effective distribution of multicast data. This mechanism was known as IGMP snooping in the IPv4 world. The IGMP protocol is replaced with the <em>MLD</em> protocol (RFC 2710 &#8211; Multicast Listener Discovery (MLD) for IPv6). The operation of this protocol is in principle identical to the mechanisms known from <em>IGMPv2</em> and <em>IGMPv3</em> (RFC 2236, RFC 3376). The MLD protocol is automatically activated at the switch layer. When configuring, we will typically need to activate MLD on the IPv6 layer, i.e., per VLAN:</p>
<pre>hp-test(vlan-224)<strong># ipv6 mld</strong></pre>
<p>Subsequently we can look at the connection status in individual groups with the following command:</p>
<pre>hp-test(config)<strong># show ipv6</strong></br>
  mld vlan 224

  MLD Service Protocol Info

  VLAN ID : 310
  VLAN Name : list
  Querier Address : ::
  Querier Up Time : 0h:0m:0s
  Querier Expiry Time : 0h:0m:0s
  Ports with multicast routers :

  Active Group Addresses                   Type ExpiryTime        Ports
  ---------------------------------------- ---- ---------- --------------------
  ff02::c                                  FILT 0h:4m:20s         1
  ff02::1:3                                FILT 0h:4m:20s         1
  ff02::1:ff57:e0b2                        FILT 0h:4m:20s         1
  ff02::1:ffb5:2df1                        FILT 0h:4m:20s         1
  ff02::1:ffda:768d                        FILT 0h:4m:20s         1
</pre>
<p>Activating MLD snooping support is recommended as an automatic option for all VLANs.</p>
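<p>The ff02::1:ffxx:xxxx entries in the listing above are <em>solicited-node</em> multicast groups: each is formed by combining the last 24 bits of a unicast address with the prefix ff02::1:ff00:0/104 (RFC 4291). A short illustrative Python sketch (the helper is our own):</p>

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Solicited-node multicast group for an IPv6 address (RFC 4291, section 2.7.1)."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

# The switch's link-local address from section 1 joins this group:
print(solicited_node("fe80::21d:b3ff:fe01:a700"))  # ff02::1:ff01:a700
```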
<p>The configuration mentioned above will provide an effective distribution of multicast operation within the local network. A logical subsequent step would be to activate the support of IPv6 multicast routing and an appropriate multicast routing protocol. But presently we would search in vain for such support. Multicast support on the network layer is planned for some future version.</p>
<h2>9 Filtering – IPv6 Access Lists</h2>
<p>If you start operating an IPv6 network, you will surely want to secure it in a suitable way. For this purpose, you can use an access-list-based packet filter on HP switches. Support for creating IPv6 access lists was included in the K.14 firmware release. The new version adds filtering support at the VLAN layer and for routed traffic. The management is identical to creating access lists in the IPv4 environment.</p>
<p>First we must create a relevant access list in which we describe the filtering rules themselves:</p>
<pre>hp-test(config)# ipv6 access-list "acl_1"
hp-test(config-ipv6-acl)# permit tcp any host 2001:718:802:4::93e5:394 eq 25
hp-test(config-ipv6-acl)# permit tcp host 2001:718:802:4::93e5:394 eq 25 any
hp-test(config-ipv6-acl)# deny tcp any any eq 25
hp-test(config-ipv6-acl)# permit ipv6 any any</pre>
<p>The example describes a simple access list that blocks all SMTP traffic with the exception of the address 2001:718:802:4::93e5:394, which is the SMTP server. The access list created in this way must then be attached either to an interface (port):</p>
<pre>hp-test(config-ipv6-acl)# interface a1
hp-test(eth-A1)# ipv6 access-group acl_1 in</pre>
<p>or VLAN:</p>
<pre>hp-test(vlan-223)# ipv6 access-group acl_1 in
hp-test(vlan-223)# ipv6 access-group acl_1 out</pre>
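<p>Access lists are evaluated top-down and the first matching rule wins. The logic of <code>acl_1</code> can be modelled in Python to reason about which flows pass (an illustrative model only; it assumes that <code>eq</code> matches the destination port in rules 1 and 3 and the source port in rule 2, and the 2001:db8:: sample addresses are hypothetical):</p>

```python
import ipaddress

SMTP_SERVER = ipaddress.IPv6Address("2001:718:802:4::93e5:394")

def acl_1(src: str, dst: str, sport: int, dport: int) -> str:
    """First-match evaluation of the four rules of acl_1."""
    s, d = ipaddress.IPv6Address(src), ipaddress.IPv6Address(dst)
    if d == SMTP_SERVER and dport == 25:   # permit tcp any host <server> eq 25
        return "permit"
    if s == SMTP_SERVER and sport == 25:   # permit tcp host <server> eq 25 any
        return "permit"
    if dport == 25:                        # deny tcp any any eq 25
        return "deny"
    return "permit"                        # permit ipv6 any any

print(acl_1("2001:db8::5", "2001:718:802:4::93e5:394", 40000, 25))  # permit
print(acl_1("2001:db8::5", "2001:db8::6", 40000, 25))               # deny
```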
<h2>10 Conclusion</h2>
<p>IPv6 support for components in the ProCurve series was released a bit later than with other manufacturers, and you will need to wait a bit longer for support of all features, including multicast routing and various protection mechanisms. Despite small shortcomings, the implementation can be considered functional and it can be put into production on ordinary networks. The big advantage is that IPv6 support is released in the standard software release, which is available on the ProCurve webpage, so you do not have to pay anything extra to enable IPv6 features.</p>
<div  class="x-author-box cf" ><h6 class="h-about-the-author">About the Author</h6><div class="x-author-info"><h4 class="h-author mtn">Vladimír Záhořík</h4><a href="http://www.vutbr.cz/lide/vladimir-zahorik-1849" class="x-author-social" title="Visit the website for Vladimír Záhořík" target="_blank"><i class="x-icon-globe"></i> http://www.vutbr.cz/lide/vladimir-zahorik-1849</a><span class="x-author-social"><i class="x-icon-envelope"></i> zahorik@cis.vutbr.cz</span><p class="p-author mbn"></p></div></div>
]]></content:encoded>
			<wfw:commentRss>http://6lab.cz/ipv6-configuration-on-hp-procurve-switches/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
