
clzc:zc1:fs> end
clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit

Check setup with:
# clzc show -v zc1

In the zone cluster do:
# clrs create -g app-rg -t SUNW.HAStoragePlus -p \
FilesystemMountPoints=/data app-hasp-rs

C) Global filesystem as loopback file system

In the global zone, configure the global filesystem and add it to /etc/vfstab on all global nodes, e.g.:
/dev/md/datads/dsk/d0 /dev/md/datads/rdsk/d0 /global/fs ufs 2 yes global,logging

and

# clzc configure zc1
clzc:zc1> add fs
clzc:zc1:fs> set dir=/zone/fs (zc-lofs-mountpoint)
clzc:zc1:fs> set special=/global/fs (globalcluster-mountpoint)
clzc:zc1:fs> set type=lofs
clzc:zc1:fs> end
clzc:zc1> verify
clzc:zc1> commit
clzc:zc1> exit

Check setup with:
# clzc show -v zc1

In the zone cluster do (create the scalable rg if not already done):
# clrg create -S -p desired_primaries=2 -p maximum_primaries=2 \
app-scal-rg
# clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p \
FilesystemMountPoints=/zone/fs hasp-rs

Switch resource group online:
# clrg online -eM app-rg
For global filesystem:
# clrg online -eM app-scal-rg

Test: Switch the resource group in the zone cluster:
# clrg switch -n zonehost2 app-rg
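After the switch test, verify that the resource groups are mastered on the expected node and that the storage resources are online. A minimal check, assuming the resource and group names used above (app-rg, app-scal-rg, app-hasp-rs, hasp-rs):
# clrg status app-rg app-scal-rg
# clrs status app-hasp-rs hasp-rs
Both commands report status per node, so a mount problem on a single node is visible immediately.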
Task: Add supported data service to the zone cluster.
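As a sketch of this step (assuming Oracle Solaris Cluster 4.x command syntax; the resource type SUNW.nfs and zone cluster name zc1 are examples, check the data service documentation for the actual type name), the data service's resource type can be registered into the zone cluster from the global zone:
# clrt register -Z zc1 SUNW.nfs
# clrt list -Z zc1
Alternatively, log in to a zone-cluster node and run clrt register there without the -Z option.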
CONFIGURATION STEPS FOR DATA SERVICE HA-NFS

NFS client – No cluster node can be an NFS client of an HA for NFS exported file system that is being mastered on a node in the same cluster. Such cross-mounting of HA for NFS is prohibited. Use the cluster file system to share files among global-cluster nodes.

IF HA-NFS is configured with an HAStoragePlus FFS (failover file system) AND automountd is running/used, THEN set in /etc/system:
exclude: lofs
If both of these conditions are met, LOFS must be disabled to avoid switchover problems or other failures. If one but not both of these conditions is met, it is safe to enable LOFS.

Because Solaris Zones require both LOFS and the automountd daemon to be enabled, in that case instead exclude from the automounter map all files that are part of the highly available local file system that is exported by HA-NFS.

HA-NFS requires that all NFS client mounts be 'hard' mounts.

Consider if using NFSv3: If you are mounting file systems on the cluster nodes from external NFS servers, such as NAS filers, and you are using the NFSv3 protocol, you cannot run NFS client mounts and the HA for NFS data service on the same cluster node. If you do, certain HA for NFS data-service activities might cause the NFS daemons to stop and restart, interrupting NFS services. However, you can safely run the HA for NFS data service if you use the NFSv4 protocol to mount external NFS file systems on the cluster nodes. NFSv4 is the default in /etc/default/nfs for NFS clients.
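The following is a minimal HA-NFS setup sketch. The names nfs-rg, nfs-lh, nfs-lh-rs, nfs-rs and the path /global/nfs are placeholders, not values taken from this document, and a HAStoragePlus resource for the exported file system would normally be added as in the sections above:
# clrt register SUNW.nfs
# clrg create -p Pathprefix=/global/nfs nfs-rg
# clrslh create -g nfs-rg -h nfs-lh nfs-lh-rs
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/nfs/data" > /global/nfs/SUNW.nfs/dfstab.nfs-rs
# clrs create -g nfs-rg -t SUNW.nfs -p Resource_dependencies=nfs-lh-rs nfs-rs
# clrg online -eM nfs-rg
The dfstab file must be named dfstab.<nfs-resource-name> and live under <Pathprefix>/SUNW.nfs; HA-NFS runs the share commands from whichever node currently masters nfs-rg.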