In this post, I will continue my experiments with Oracle policy-managed databases on Grid Infrastructure, and present one of the new features introduced with Grid Infrastructure 12c last June: cluster configuration policies and policy sets. They allow the administrator to dynamically reconfigure all server pools at once, with a single command, according to the business workload.
To take a simple example, imagine you have two servers allocated to your production instances and two servers allocated to your development instances. This distribution matches your daily workload. But very intensive batches run every night on your production databases: the production servers are heavily loaded, while the two development servers sit up and running but completely idle. What a waste of resources!
In this case you have several solutions:
1) Shut down the development servers during the night because they are not used. It will at least save energy
2) Review your needs and allocate more server(s) to the production system
3) Write a convoluted script that reconfigures each server pool, one by one, with srvctl commands run serially, to adapt the configuration to the real needs
4) Use policies and policy sets
As you may expect, I will develop the last solution!
A policy set is a “document” (Oracle's definition) which contains one or more policies. Each policy contains the configuration of all server pools in the cluster: MIN_SIZE, MAX_SIZE, IMPORTANCE, server list, etc.
It lets the administrator activate one policy or another, according to the workload or business needs, to dynamically and immediately change the whole configuration of the cluster server pools. For instance: use two servers for production and two for development during working days, then three servers for production and one for development during nights and weekends.
First of all, here is my current configuration:
[oracle@node01 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: srvprod
Importance: 0, Min: 2, Max: 2
Category: hub
Candidate server names:
Server pool name: srvtest
Importance: 0, Min: 2, Max: 2
Category: hub
Candidate server names:
I have two nodes running in the srvprod pool and two nodes running in the srvtest pool.
A database DB1 (prod) has instances running on nodes 1 and 4:
[oracle@node01 ~]$ srvctl status database -d DB1
Instance DB1_1 is running on node node01
Instance DB1_2 is running on node node04
And a database DB2 (dev) has instances running on nodes 2 and 3:
[oracle@node01 ~]$ srvctl status database -d DB2
Instance DB2_1 is running on node node02
Instance DB2_2 is running on node node03
The simplest way to benefit from policies and policy sets is to create a text file with the desired configuration. The following steps explain how to proceed.
Step 1: Create a new policy set
The crsctl tool offers an option to generate a new policy set file based on the current active configuration. We must connect as grid:
[grid@node01 ~]$ mkdir -p /u00/app/oracle/admin/policies
[grid@node01 ~]$ crsctl create policyset -file /u00/app/oracle/admin/policies/new_policy
The file has been generated:
[grid@node01 ~]$ ll /u00/app/oracle/admin/policies
total 4
-rw-r--r-- 1 grid oinstall 305 Apr 29 16:21 new_policy
Here is the file content:
[grid@node01 ~]$ cat /u00/app/oracle/admin/policies/new_policy
SERVER_POOL_NAMES=Free ora.srvprod ora.srvtest
POLICY
  NAME=Default
  SERVERPOOL
    NAME=ora.srvprod
    IMPORTANCE=0
    MAX_SIZE=2
    MIN_SIZE=2
    SERVER_CATEGORY=ora.hub.category
  SERVERPOOL
    NAME=ora.srvtest
    IMPORTANCE=0
    MAX_SIZE=2
    MIN_SIZE=2
    SERVER_CATEGORY=ora.hub.category
The line “NAME=Default” corresponds to the name of the policy in the policy set. The policy set itself is represented by the file “new_policy” created above.
Step 2: Update the current policy set
To update the current policy set, we just have to edit our policy set file “new_policy” and add the new policies after the Default one.
As an example, imagine we want to allocate one more server to production during the nights, and to allocate all servers to production during weekends, disabling the development instances. We append the corresponding policies to the file.
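The appended policies might look like the following sketch, modeled on the format of the Default policy generated above and on the pool sizes observed later in this post. The names “Nights” and “WeekEnds” come from the rest of the post; the individual attribute values are my assumptions.

```
POLICY
  NAME=Nights
  SERVERPOOL
    NAME=ora.srvprod
    IMPORTANCE=0
    MAX_SIZE=3
    MIN_SIZE=3
    SERVER_CATEGORY=ora.hub.category
  SERVERPOOL
    NAME=ora.srvtest
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=ora.hub.category
POLICY
  NAME=WeekEnds
  SERVERPOOL
    NAME=ora.srvprod
    IMPORTANCE=0
    MAX_SIZE=4
    MIN_SIZE=4
    SERVER_CATEGORY=ora.hub.category
  SERVERPOOL
    NAME=ora.srvtest
    IMPORTANCE=0
    MAX_SIZE=0
    MIN_SIZE=0
    SERVER_CATEGORY=ora.hub.category
```

Setting MIN_SIZE and MAX_SIZE to the same value pins each pool to an exact server count, which is what makes the later pool reshuffling deterministic.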
Once the file is edited, it must be loaded into the policy set configuration using crsctl:
[grid@node01 ~]$ crsctl modify policyset -file /u00/app/oracle/admin/policies/new_policy
Step 3: Enable a new policy
Our policy set now contains three policies (Default, Nights and WeekEnds). The currently enabled policy is “Default”:
[grid@node01 ~]$ crsctl status policyset | grep -i last
LAST_ACTIVATED_POLICY=Default
We are going to enable the “Nights” policy and see what happens. To enable a policy, use the following crsctl command as grid:
[grid@node01 ~]$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Nights"
CRS-2673: Attempting to stop 'ora.db2.db' on 'node02'
CRS-2677: Stop of 'ora.db2.db' on 'node02' succeeded
CRS-2672: Attempting to start 'ora.db1.db' on 'node02'
CRS-2676: Start of 'ora.db1.db' on 'node02' succeeded
In the previous output, we can see that on node02 Oracle has stopped DB2 and started DB1 instead. We can confirm this with srvctl:
[grid@node01 ~]$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 0
Server pool name: srvprod
Active servers count: 3
Server pool name: srvtest
Active servers count: 1
[grid@node01 ~]$ srvctl status database -d DB1
Instance DB1_1 is running on node node01
Instance DB1_2 is running on node node04
Instance DB1_3 is running on node node02
[grid@node01 ~]$ srvctl status database -d DB2
Instance DB2_2 is running on node node03
Now I enable the “WeekEnds” policy. The remaining development server will be allocated to the production server pool, and four production instances will be running:
[grid@node01 ~]$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=WeekEnds"
CRS-2673: Attempting to stop 'ora.db2.db' on 'node03'
CRS-2677: Stop of 'ora.db2.db' on 'node03' succeeded
CRS-2672: Attempting to start 'ora.DB1.db' on 'node03'
CRS-2676: Start of 'ora.DB1.db' on 'node03' succeeded
All nodes are now part of the production pool:
[grid@node01 ~]$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 0
Server pool name: srvprod
Active servers count: 4
Server pool name: srvtest
Active servers count: 0
Four production instances are open:
[grid@node01 ~]$ srvctl status database -d DB1
Instance DB1_1 is running on node node01
Instance DB1_2 is running on node node04
Instance DB1_3 is running on node node02
Instance DB1_4 is running on node node03
And the development database is completely down:
[grid@node01 ~]$ srvctl status database -d DB2
Database is not running.
To reset to the initial policy, just use the same command with the “Default” policy attribute:

[grid@node01 ~]$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Default"
This feature offers an easy way to reconfigure a whole policy-managed cluster at will. I have demonstrated how to edit a policy set using a text file generated with crsctl, but Oracle also documents how to define a policy set manually on the command line. In my opinion, that approach is more tedious, and the text file is a better choice.
My only regret is that Oracle has not provided a way to enable policies according to the database workload, or to schedule policies over time. We must enable a policy manually, or rely on the crontab…
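For completeness, a crontab-based schedule for the grid user could look like the sketch below, using the policy names from this post. The path to crsctl is an assumption and must be adapted to your Grid Infrastructure home; the schedule itself (nights start at 21:00, working days at 07:00) is purely illustrative.

```
# Illustrative crontab for the grid user (crsctl path is an assumption)
# Mon-Thu 21:00: switch to the "Nights" policy (3 prod / 1 dev)
0 21 * * 1-4 /u00/app/12.1.0/grid/bin/crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Nights"
# Fri 21:00: switch to the "WeekEnds" policy (4 prod / 0 dev) for the weekend
0 21 * * 5   /u00/app/12.1.0/grid/bin/crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=WeekEnds"
# Mon-Fri 07:00: back to "Default" (2 prod / 2 dev) for the working day
0 7  * * 1-5 /u00/app/12.1.0/grid/bin/crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Default"
```

Note that the Monday 07:00 entry also ends the weekend window, so no separate Sunday-night entry is needed.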