It is recommended to implement any redundancy (RAID 1/5/Z, MPxIO, ...) on the control domain to reduce complexity.

Installation option: Add the ISO image of an installation medium for installation purposes:
# ldm add-vdsdev options=ro <path to image> <disk name>@<vdisk service>
Example:
# ldm add-vdsdev options=ro /ldoms/disk-images/sol-10-u7-ga-sparc-dvd.iso <disk name>@<vdisk service>
NB: Please make sure to specify the whole pathname.

Recommended:
Option A: From a new or existing ZFS pool:
Create a new zpool:
# zpool create <pool> <physical disk>
Create a new volume:
# zfs create -V <size> <pool>/<volname>
# ldm add-vdsdev /dev/zvol/dsk/<pool>/<volname> <disk name>@<vdisk service>
Example:
# zpool create ldoms-pool c0t1d0
# zfs create -V 30G rpool/zvol-ldg1
# ldm add-vdsdev /dev/zvol/dsk/rpool/zvol-ldg1 <disk name>@<vdisk service>

Option B: Use file containers:
Note: The file container should reside on ZFS for optimum performance and flexibility. Storing file-based images on UFS offers reduced performance compared to ZFS.
Create a directory for disk images:
# zfs create <pool>/<zfs>
Create a blank image file:
# mkfile <size> /<pool>/<zfs>/<file name>
# ldm add-vdsdev /<pool>/<zfs>/<file name> <disk name>@<vdisk service>
Example:
# zfs create ldoms-pool/ldom-images
# mkfile 30G /ldoms-pool/ldom-images/ldg2-d0-30G.img
# ldm add-vdsdev /ldoms-pool/ldom-images/ldg2-d0-30G.img <disk name>@<vdisk service>
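Note: Exporting a backend with ldm add-vdsdev only makes it available to the virtual disk service; it still has to be attached to the guest domain with ldm add-vdisk before it becomes visible there. The following is a minimal sketch, assuming a guest domain named ldg1, a virtual disk service named primary-vds0 and a virtual disk name vdisk0 (all placeholders, substitute your own names):
Attach the exported volume to the guest domain:
# ldm add-vdisk vdisk0 <disk name>@primary-vds0 ldg1
Verify the exported backends and the attached virtual disk:
# ldm list-services primary
# ldm list -l ldg1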
Option C: Use file containers and ZFS clones to multiply images:
Create a disk image file as outlined in Option B, install Solaris into it, and optionally run sys-unconfig at the end if the cloned domains shall receive individual configurations. This image will be used as a template for subsequent installations. Rename the disk image to a generic name if preferred.
Create a ZFS snapshot and clone of the template image:
# zfs snapshot <pool name>/<vol name>@initial
# zfs clone <pool name>/<vol name>@initial <pool name>/<vol name>/<clone dir>
Example:
# mv /ldoms-pool/ldom-images/ldg2-d0-30G.img /ldoms-pool/ldom-images/30G-a.img
# zfs snapshot ldoms-pool/ldom-images@initial
# zfs clone ldoms-pool/ldom-images@initial ldoms-pool/ldom-images/clones
# ls -l /ldoms-pool/ldom-images/clones/
total 62922717
-rw------T 1 root root 32212254720 Oct 13 15:46 30G-a.img

When ZFS is used inside the cloned disk image, please make sure that no copies of the same original disk image ever meet in one domain, because they share the same identifier and are thus identical to ZFS!
When UFS is used inside the cloned root disk image, please make sure that the clone is assigned the same "<disk name>@<vdisk service>" as the original; otherwise you may end up with a read-only root file system.

Recommended for higher Disk I/O Workloads:
Option D: Use a physical disk slice directly:
Note: A physical disk will offer higher performance but lower flexibility for future changes to the configuration. It is preferable when adding extra virtual disks to an LDom but is more limiting when used as the root virtual disk.
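No command example is given for Option D above; the following is a minimal sketch, assuming an unused disk seen by the control domain as c0t2d0 (the device name is a placeholder, substitute your own device, disk name and service name):
Export the whole physical disk via slice 2 to the virtual disk service:
# ldm add-vdsdev /dev/dsk/c0t2d0s2 <disk name>@<vdisk service>
A single slice can be exported the same way by specifying that slice (e.g. /dev/dsk/c0t2d0s0) instead of s2; it then appears to the guest as a single-slice disk.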