Friday, October 21, 2011

Microsoft Hyper-V Dynamic Memory vs. VMware Memory Overcommit

In an attempt to provide higher levels of server consolidation, both Microsoft and VMware have developed their own solutions for higher utilization of Random Access Memory (RAM). VMware’s Memory Overcommit has been available for quite some time, while Dynamic Memory is a new player. How do they compare?

VMware Memory Overcommit
I have always been amazed by VMware’s success with the Memory Overcommit technique, and the ability to provide more RAM to virtual machines than a physical computer actually has. I’m sure many system administrators use this feature in their organizations. It works on a few very basic principles:
1. Give more memory to virtual machines than a physical computer has.
2. Identify the same memory blocks (by hash) in multiple virtual machines.
3. Compress the host memory by storing those blocks only once.
This is accomplished by the Transparent Page Sharing and Second Level Paging techniques, which I won’t describe in detail here. It is important to note that VMware does not trust the information from the guest OS. Memory is assigned to virtual machines based on the information that is available at the host OS level. This makes sense for security and stability reasons.

Microsoft Hyper-V Dynamic Memory

Hyper-V Dynamic Memory works differently. Instead of compressing the host RAM, it allows virtual machines to demand more RAM when required. This works similarly to dynamically expanding virtual disks. System administrators have to configure a startup (minimum) and a maximum amount of memory for each virtual machine. The sum of all maximum values can be larger than the total amount of memory in a physical server. Microsoft calls this Memory Oversubscription. However, the sum of the memory actually given to the virtual machines will never exceed the limits of the physical computer.
To use Dynamic Memory for Hyper-V, you have to install Service Pack 1 for Windows Server 2008 R2, which is still in beta. If you install the Integration Components in the guest OS, a small driver called Dynamic Memory Virtual Service Consumer (DM VSC) will monitor memory usage in the guest OS. This driver collects information about current RAM requirements and reports it to the host, which decides whether to give RAM to, or take RAM from, the virtual machine.

Hyper-V Dynamic Memory vs. VMware Memory Overcommit

It is too early to give a final judgment on which technology is more effective. Microsoft’s virtualization team has a huge advantage in being able to work with the Windows kernel team (Linux distributions are not supported yet, but that might change in the future) and use information from the Windows kernel to calculate the current memory needs of a virtual machine.
However, Hyper-V Dynamic Memory still has to prove its efficiency in real-world scenarios. Windows Server 2008 R2 SP1 is still in beta, while VMware Memory Overcommit has been working reliably in enterprise environments for a long time. What’s your experience? Which technology do you prefer?

 

Friday, September 30, 2011


Error on Linux Host

This error is captured in /var/log/messages:

scsi: host 0 ch 0 id 0 lun 16384 has a LUN larger than allowed by the host adapter
scsi: host 0 ch 0 id 0 lun 16385 has a LUN larger than allowed by the host adapter
scsi: host 0 ch 0 id 0 lun 16386 has a LUN larger than allowed by the host adapter
scsi: host 0 ch 0 id 0 lun 16387 has a LUN larger than allowed by the host adapter

Solution 1:

     1. Rescan the HBA for the assigned LUNs:

        echo "- - -" > /sys/class/scsi_host/hostX/scan

        Here hostX is the HBA port (host0, host1, host2, and so on).
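If you are not sure which host number belongs to the HBA, a simple approach (just a sketch; it rescans every SCSI host, not only the FC ports) is:

        for h in /sys/class/scsi_host/host*; do
            echo "- - -" > "$h/scan"    # same "- - -" trigger as above, once per host
        done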

Solution 2:

     1. Do not assign a host LUN ID greater than 256 on the storage array; the HBA can see only 256 LUNs.





Where to find the LUN host ID number in the Linux OS (exposed from storage)

root@host# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: P410i            Rev: 3.66
  Type:   RAID                             ANSI  SCSI revision: 05
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: LOGICAL VOLUME   Rev: 3.66
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: LUNZ             Rev: 0531
  Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi2 Channel: 00 Id: 00 Lun: 98
  Vendor: DGC      Model: VRAID            Rev: 0531
  Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi2 Channel: 00 Id: 01 Lun: 00
  Vendor: DGC      Model: LUNZ             Rev: 0531
  Type:   Direct-Access                    ANSI  SCSI revision: 04


In the output above, LUN 98 is the LUN host ID assigned from the storage array (CLARiiON).
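As a cross-check (assuming the optional lsscsi package is available in your repositories; it is not installed by default), lsscsi prints the same [host:channel:id:lun] tuple next to each device node, which makes the LUN number easy to spot:

yum install lsscsi     # package name assumed; skip if already installed
lsscsi                 # the VRAID device above would appear as something like [2:0:0:98] ... /dev/sdX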
How to find the WWN number of an HBA in Linux

cat /sys/class/fc_host/host1/port_name

Note: host1 refers to either host1 or host2, depending on the HBA port; the OS here is RHEL 6.0.
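To dump the WWNs of every FC HBA port in one go, a small sketch using the same sysfs path:

for f in /sys/class/fc_host/host*/port_name; do
    echo "$f: $(cat "$f")"
done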

Monday, September 26, 2011

Configuring the ESXi firewall in VMware vSphere 5



VMware vSphere 5 features a new ESXi firewall that you can configure through the vSphere Client or command line. The addition brings a feature to ESXi 5 that was previously found only in the recently discontinued ESX hypervisor. VMware argued that ESXi didn't require a firewall, because the lightweight hypervisor had hardly any services or ports open, leaving it with almost nothing to attack.

I believe VMware added a firewall to ESXi 5 for a few reasons. With a firewall, ESXi 5 isn't missing a notable feature found in the old ESX Server. Also, a firewall signals to customers and partners that VMware is committed to security. And finally, it helps ensure that vSphere 5 is just as secure as before, if not more so.
Five things to know about the ESXi 5 firewall

It’s a stateless firewall based on ESXi services.
It’s enabled by default.
It sits between the ESXi host management interface and the management network on the local area network.
It’s configurable through the vSphere Client: go to Host Configuration > Software > Security Profile.
It’s also configurable from the command line through the new esxcli firewall namespace (see the sketch below).
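As a rough sketch of the command-line side (the ruleset name sshServer and the subnet are only examples), the firewall can be inspected and adjusted from the ESXi Shell or vCLI with esxcli:

# Show whether the firewall is enabled and its default action
esxcli network firewall get

# List the service-based rulesets and whether they are enabled
esxcli network firewall ruleset list

# Enable a ruleset and restrict it to a management subnet
esxcli network firewall ruleset set --ruleset-id sshServer --enabled true
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 192.168.1.0/24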

As I was wading through all of the new materials from yesterday, I thought it would be helpful to create a big list of all of the new features in vSphere 5.0.  There were really only a few named in the presentation (or else the preso would have been 3 hours and put the analysts to sleep).  While we wait for the release notes, I put together this list for you.  This is not every new feature, but rather as many as I could find or remember.  I’ve also added a quick blurb on what each feature does and my comments in parentheses.  If you are aware of something that I missed, please add it in the comments below (with your own comments/opinions of course).  Here we go:
VMware vSphere 5.0
  • ESXi Convergence – No more ESX, only ESXi (they said they would do it, they meant it)
  • New VM Hardware:  Version 8 – New Hardware support (VS5 still supports VM Hardware 4 & 7 as well if you still want to migrate to the old hosts)
    • 3D graphics Support for Windows Aero
    • Support for USB 3.0 devices
  • Platform Enhancements (Blue Requires Hardware v8)
    • 32 vCPUs per VM
    • 1TB of RAM per VM
    • 3D Graphics Support
    • Client-connected USB devices
    • USB 3.0 Devices
    • Smart-card Readers for VM Console Access
    • EFI BIOS
    • UI for Multi-core vCPUs
    • VM BIOS boot order config API and PowerCLI Interface
  • vSphere Auto Deploy – mechanism for having hosts deploy quickly when needed (I’m going to wait and see how customers use this one.)
  • Support for Apple Products – Support for running OSX 10.6 Server (Snow Leopard) on Apple Xserve hardware. (although I’m betting that, technically, you can get it to run on any hardware; you just won’t be compliant with your license)
  • Storage DRS – Just like DRS does for CPU and Memory, now for storage
    • Initial Placement – Places new VMs on the storage with the most space and least latency
    • Load Balancing – migrates VMs if the storage cluster (group of datastores) gets too full or the latency goes too high
    • Datastore Maintenance Mode  - allows you to evacuate VMs from a datastore to work on it (does not support Templates or non-registered VMs yet…)
    • Affinity & Anti-Affinity – Allows you to make sure a group of VMs do not end up on the same datastore (for performance or Business Continuity reasons) or VMs that should always be on the same datastore.  Can be at the VM or down to the individual VMDK level.
    • Support for scheduled disabling of Storage DRS – perhaps during backups for instance.
  • Profile-Driven Storage – Creating pools of storage in Tiers and selecting the correct tier for a given VM.  vSphere will make sure the VM stays on the correct tier(pool) of storage.  (Not a fan of this just yet.  What if just 1GB of the VM needs high-tier storage? This makes you put the whole VM there.)
  • vSphere File System – VMFS5 is now available.  (Yes, This is a non-disruptive upgrade, however I would still create new and SVmotion)
    • Support for a single extent datastore up to 64TB
    • Support for >2TB Physical Raw Disk Mappings
    • Better VAAI (vStorage APIs for Array Integration) Locking with more tasks
    • Space reclamation on thin provisioned LUNs
    • Unified block size (1MB) (no more choosing between 1,2,4 or 8)
    • Sub-blocks for space efficiency (8KB vs. 64KB in VS4)
  • VAAI now a T10 standard – All 3 primitives (Write Same, ATS and Full Copy) are now T10 standard compliant.
    • Also now added support for VAAI NAS Primitives, including Full File Clone (to have the NAS do the copy of the VMDK files for vSphere) and Reserve Space (to have the NAS create thick VMDK files on NAS storage)
  • VAAI Thin Provisioning – Having the storage do the thin provisioning and then vSphere telling the storage which blocks can be reclaimed to shrink the space used on the storage
  • Storage vMotion Enhancements
    • Now supports storage vMotion with VMs that have snapshots
    • Now supports moving linked clones
    • Now supports Storage DRS (mentioned above)
    • Now uses mirroring to migrate vs. the changed block tracking used in VS4.  Results in faster migration times and greater migration success.
  • Storage IO Control for NAS – allows you to throttle the storage performance of “badly-behaving” VMs and also prevents them from stealing storage bandwidth from high-priority VMs.  (Support for iSCSI and FC was added in VS4.)
  • Support for VASA (vStorage APIs for Storage Awareness) – Allows storage to integrate more tightly with vCenter for management.  Provides a mechanism for storage arrays to report their capabilities, topology and current state.  Also helps Storage DRS make more educated decisions when moving VMs.
  • Support for Software FCoE Adapters – Requires a compatible NIC and allows you to run FCoE over that NIC without the need for a CNA Adapter.
  • vMotion Enhancements
    • Support for multiple NICs.  Up to 4 x 10GbE or 16 x 1GbE NICs
    • Single vMotion can span multiple NICs (this is huge for 1GbE shops)
    • Allows for higher number of concurrent vMotions
    • SDPS Support (Stun During Page Send) – throttles busy VMs to reduce timeouts and improve success.
    • Ensures less than 1 second switchover in almost all cases
    • Support for higher latency networks (up to ~10ms)
    • Improved error reporting – better, more detailed logging (thank you vmware!)
    • Improved Resource Pool Integration – now puts VMs in the proper resource pool
  • Distributed Resource Scheduling/Dynamic Power Management Enhancements
    • Support for “Agent VMs” – These are VMs that work per host (currently mostly VMware services – vShield Edge, App, Endpoint, etc.)  DRS will not migrate these VMs
    • “Agents” do not need to be migrated for maintenance mode
  • Resource pool enhancements – now more consistent for clustered vs. non-clustered hosts.  You can no longer modify resource pool settings on the host itself when it is managed by vCenter.  It does allow for making changes if the host gets disconnected from vCenter
  • Support for LLDP Network Protocol – Standards based vendor-neutral discovery protocol
  • Support for NetFlow – Allows collection of IP traffic information to send to collectors (CA, NetScout, etc) to provide bandwidth statistics, irregularities, etc.  Provides complete visibility to traffic between VMs or VM to outside.
  • Network I/O Control (NETIOC) – allows creation of network resource pools, QoS Tagging, Shares and Limits to traffic types, Guaranteed Service Levels for certain traffic types
  • Support for QoS (802.1p) tagging – provides the ability to QoS-tag any traffic flowing out of the vSphere infrastructure.
  • Network Performance Improvements
    • Multiple VMs receiving multicast traffic from the same source will see improved throughput and CPU efficiency
    • VMkernel NICs will see higher throughput with small messages and better IOPs scaling for iSCSI traffic
  • Command Line Enhancements
    • Remote commands and local commands will now be the same (new esxcli commands are not backwards compatible)
    • Output from commands can now be formatted automatically (XML, CSV, etc.) – see the sketch after this list
  • ESXi 5.0 Firewall Enhancements
    • New engine not based on iptables
    • New engine is service-oriented and is a stateless firewall
    • Users can restrict specific services based on IP address and Subnet Mask
    • Firewall has host-profile support
  • Support for Image Builder – can now create customized ESXi CDs with the drivers and OEM add-ins that you need.  (Like slip-streaming for Windows CDs) Can also be used for PXE installs.
  • Host Profiles Enhancements
    • Allows use of an answer file to complete the profile for an automated deployment
    • Greatly expands the config options, including iSCSI, FCoE, Native Multipathing, Device Claiming, Kernel Module Settings & more  (I don’t think Nexus is supported yet)
  • Update Manager Enhancements
    • Can now patch multiple hosts in a cluster at a time.  Will analyze and see how many hosts can be patched at the same time and patch groups in the cluster instead of one at a time.  Can still do one at a time if you prefer.
    • VMTools can now be scheduled at next VM reboot
    • Can now configure multiple download URLs and restrict downloads to only the specific versions of ESX you are running
    • More management capabilities: update certificates, change DB password, proxy authentication, reconfigure setup, etc.
  • High Availability Enhancements
    • No more Primary/Secondary concept, one host is elected master and all others are slaves
    • Can now use storage-level communications – hosts can use “heartbeat datastores” in the event that network communication is lost between the hosts.
    • HA Protected state is now reported on a per-VM basis.  Certain operations, for instance power-on, no longer wait for confirmation of protection before running.  The result is that VMs power on faster.
    • HA Logging has been consolidated into one log file
    • HA now pushes the HA Agent to all hosts in a cluster instead of one at a time.  Result:  reduces config time for HA to ~1 minute instead of ~1 minute per host in the cluster.
    • HA User Interface now shows who the Master is, VMs Protected and Un-protected, any configuration issues, datastore heartbeat configuration and better controls on failover hosts.
  • vCenter Web Interface – Admins can now use a robust web interface to control the infrastructure instead of the GUI client.
    • Includes VM Management functions (Provisioning, Edit VM, Power Controls, Snaps, Migrations)
    • Can view all objects (hosts, clusters, datastores, folders, etc.)
    • Basic Health Monitoring
    • View the VM Console
    • Search Capabilities
    • vApp Management functions (Provisioning, editing, power operations)
  • vCenter Server Appliance – Customers no longer need a Windows license to run vCenter.  vCenter can come as a self-contained appliance (This has been a major request in the community for years)
    • 64-bit appliance running SLES 11
    • Distributed as 3.6GB, Deployment range is 5GB to 80GB of storage
    • Included database for 5 Hosts or 50 VMs (same as SQL Express in VS4)
    • Support for Oracle as the full DB (Twitter said that DB2 was also supported, but I cannot confirm that in my materials)
    • Authentication through AD and NIS
    • Web-based configuration
    • Supports the vSphere Web Client
    • It does not support:  Linked Mode vCenters, IPv6, SQL, or vCenter heartbeat (HA is provided thru vSphere HA)
  • vCenter Heartbeat 6.4 Enhancements
    • Allows the active and standby nodes to be reachable at the same time, so both can be patched and managed
    • Now has a plug-in to the vSphere client to manage and monitor Heartbeat
    • Events will register in the vSphere Recent Tasks and Events
    • Alerts will register in the alarms and display in the client
    • Supports vCenter 5.0 and SQL 2008 R2
That’s what I have on vSphere 5; next up are SRM 5, vShield 5, the Storage Appliance, and vCloud Director 1.5.
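Regarding the command line enhancements noted in the list above, here is a small sketch of the automatic output formatting (the firewall ruleset list is just an example target, and the formatter names are assumptions based on what esxcli 5.0 accepts):

# Same esxcli data rendered as CSV or XML instead of the default table
esxcli --formatter=csv network firewall ruleset list
esxcli --formatter=xml network firewall ruleset list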

Tuesday, September 20, 2011



LVM Configuration on Linux

1 LVM Layout


Basically LVM looks like this:


You have one or more physical volumes (/dev/sdb1 - /dev/sde1 in our example), and on these physical volumes you create one or more volume groups (e.g. fileserver), and in each volume group you can create one or more logical volumes. If you use multiple physical volumes, each logical volume can be bigger than one of the underlying physical volumes (but of course the sum of the logical volumes cannot exceed the total space offered by the physical volumes).

It is good practice not to allocate the full space to logical volumes, but to leave some space unused. That way you can enlarge one or more logical volumes later on if you feel the need for it.
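For example, growing a logical volume later might look roughly like this (names and sizes are placeholders; the filesystem on top, assumed here to be ext3, has to be grown as well):

# add 10 GB to the logical volume, then grow the filesystem to match
lvextend -L +10G /dev/fileserver/share
resize2fs /dev/fileserver/share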

In this example we will create a volume group called fileserver, and we will also create the logical volumes /dev/fileserver/share, /dev/fileserver/backup, and /dev/fileserver/media (which will use only half of the space offered by our physical volumes for now - that way we can switch to RAID1 later on (also described in this tutorial)).
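As a preview of where this is heading (treat the sizes as placeholders; the exact values used later in the tutorial may differ), the volume group and logical volumes will be created with commands along these lines:

vgcreate fileserver /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate --name share --size 40G fileserver
lvcreate --name backup --size 5G fileserver
lvcreate --name media --size 1G fileserver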




2 Our First LVM Setup


Let's find out about our hard disks:


fdisk -l


The output looks like this:

server1:~# fdisk -l



Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          18      144553+  83  Linux

/dev/sda2              19        2450    19535040   83  Linux

/dev/sda4            2451        2610     1285200   82  Linux swap / Solaris



Disk /dev/sdb: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdb doesn't contain a valid partition table



Disk /dev/sdc: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdc doesn't contain a valid partition table



Disk /dev/sdd: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdd doesn't contain a valid partition table



Disk /dev/sde: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sde doesn't contain a valid partition table



Disk /dev/sdf: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdf doesn't contain a valid partition table

There are no partitions yet on /dev/sdb - /dev/sdf. We will create the partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 and leave /dev/sdf untouched for now. We act as if our hard disks had only 25GB of space instead of 80GB for now, therefore we assign 25GB to /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1:


fdisk /dev/sdb


server1:~# fdisk /dev/sdb



The number of cylinders for this disk is set to 10443.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)



Command (m for help):
 <-- m

Command action

   a   toggle a bootable flag

   b   edit bsd disklabel

   c   toggle the dos compatibility flag

   d   delete a partition

   l   list known partition types

   m   print this menu

   n   add a new partition

   o   create a new empty DOS partition table

   p   print the partition table

   q   quit without saving changes

   s   create a new empty Sun disklabel

   t   change a partition's system id

   u   change display/entry units

   v   verify the partition table

   w   write table to disk and exit

   x   extra functionality (experts only)



Command (m for help):
 <-- n

Command action

   e   extended

   p   primary partition (1-4)


<-- p

Partition number (1-4): <-- 1

First cylinder (1-10443, default 1): <-- <ENTER>

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-10443, default 10443):
 <-- +25000M



Command (m for help): <-- t

Selected partition 1

Hex code (type L to list codes):
 <-- L



 
0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot

 1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris

 2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-

 3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-

 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-

 5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx

 6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data

 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .

 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility

 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt

 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access

 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O

 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor

 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs

 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT

10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/

11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b

12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor

14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor

16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary

17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto

18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep

1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT

1c  Hidden W95 FAT3 75  PC/IX

Hex code (type L to list codes):
 <-- 8e

Changed system type of partition 1 to 8e (Linux LVM)



Command (m for help):
 <-- w

The partition table has been altered!



Calling ioctl() to re-read partition table.

Syncing disks.

Now we do the same for the hard disks /dev/sdc - /dev/sde:


fdisk /dev/sdc

fdisk /dev/sdd

fdisk /dev/sde


Then run


fdisk -l


again. The output should look like this:

server1:~# fdisk -l



Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          18      144553+  83  Linux

/dev/sda2              19        2450    19535040   83  Linux

/dev/sda4            2451        2610     1285200   82  Linux swap / Solaris



Disk /dev/sdb: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        3040    24418768+  8e  Linux LVM



Disk /dev/sdc: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1        3040    24418768+  8e  Linux LVM



Disk /dev/sdd: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1        3040    24418768+  8e  Linux LVM



Disk /dev/sde: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sde1               1        3040    24418768+  8e  Linux LVM



Disk /dev/sdf: 85.8 GB, 85899345920 bytes

255 heads, 63 sectors/track, 10443 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdf doesn't contain a valid partition table

Now we prepare our new partitions for LVM:


pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1


server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

  Physical volume "/dev/sdb1" successfully created

  Physical volume "/dev/sdc1" successfully created

  Physical volume "/dev/sdd1" successfully created

  Physical volume "/dev/sde1" successfully created

Let's revert this last action for training purposes:


pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1


server1:~# pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

  Labels on physical volume "/dev/sdb1" successfully wiped

  Labels on physical volume "/dev/sdc1" successfully wiped

  Labels on physical volume "/dev/sdd1" successfully wiped

  Labels on physical volume "/dev/sde1" successfully wiped

Then run


pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1


again:

server1:~# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

  Physical volume "/dev/sdb1" successfully created

  Physical volume "/dev/sdc1" successfully created

  Physical volume "/dev/sdd1" successfully created

  Physical volume "/dev/sde1" successfully created

Now run


pvdisplay


to learn about the current state of your physical volumes:

server1:~# pvdisplay

  --- NEW Physical volume ---

  PV Name               /dev/sdb1

  VG Name

  PV Size               23.29 GB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               G8lu2L-Hij1-NVde-sOKc-OoVI-fadg-Jd1vyU



  --- NEW Physical volume ---

  PV Name               /dev/sdc1

  VG Name

  PV Size               23.29 GB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               40GJyh-IbsI-pzhn-TDRq-PQ3l-3ut0-AVSE4B



  --- NEW Physical volume ---

  PV Name               /dev/sdd1

  VG Name

  PV Size               23.29 GB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               4mU63D-4s26-uL00-r0pO-Q0hP-mvQR-2YJN5B



  --- NEW Physical volume ---

  PV Name               /dev/sde1

  VG Name

  PV Size               23.29 GB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               3upcZc-4eS2-h4r4-iBKK-gZJv-AYt3-EKdRK6