[Fwd: Re: onconfig recommendations]
Kennedy, Randy wrote:
> Would like to receive some feedback on our onconfig setup for current
> platform. This is the 3rd physical server that the database has moved
> to and just want to ensure we are using the best settings for the given hardware.
> Current Server: HP rp5405 4-Way (650 MHz) with 8GB RAM. 1 73GB hard
> drive (4 drives installed, but sysadmin took 2 for O/S, system and other
> 2 are mirrored pair so effectively 1 spindle to work with). I didn't
> get to specify the configuration.
> Old Server: HP something 4 way with 4GB RAM. 5 spindles to work with.
> When moved to current server, I did up the buffers and locks.
> We use ontape for Level 0 and logical log backups. They are done via
> files and links to remote server instead of actual tape.
> System is primarily OLTP with some batch jobs run overnight (off
> business hours).
> Any insight will be appreciated. Please let me know if you would like
> any other information.
You don't state the server version you are using. You should always do
so. Also since HPUX runs on two different processor families, you
should specify that as well. See below:
> # Root Dbspace Configuration
> ROOTNAME rootdbs # Root dbspace name
> ROOTPATH /db/links/rootdbs # Path for device containing root
> ROOTOFFSET 0 # Offset of root dbspace into device
> ROOTSIZE 384000 # Size of root dbspace (Kbytes)
> # Disk Mirroring Configuration Parameters
> MIRROR 0 # Mirroring flag (Yes = 1, No = 0)
> MIRRORPATH # Path for device containing mirrored
> MIRROROFFSET 0 # Offset into mirrored device (Kbytes)
> # Physical Log Configuration
> PHYSDBS rootdbs # Location (dbspace) of physical log
> PHYSFILE 60000 # Physical log file size (Kbytes)
> # Logical Log Configuration
> LOGFILES 85 # Number of logical log files
> LOGSIZE 12000 # Logical log size (Kbytes)
> LOG_BACKUP_MODE CONT
> # Diagnostics
> MSGPATH /i9.4/informix/online.log # System message log file path
> CONSOLE /dev/console # System console message path
> ALARMPROGRAM /db/scripts/no_log.sh # Alarm program path
> # System Archive Tape Device
> #TAPEDEV /dev/rmt/2m # Tape device path
> TAPEDEV /remote/recovery/informix/level0 # Tape device path
> TAPEBLK 16 # Tape block size (Kbytes)
> TAPESIZE 30000000 # Maximum amount of data to put on tape
> # Log Archive Tape Device
> #LTAPEDEV /dev/rmt/2m # Log tape device path
> LTAPEDEV /ltape1/informix/log1 # Log tape device path
> LTAPEBLK 16 # Log tape block size (Kbytes)
> LTAPESIZE 4500000 # Max amount of data to put on log tape
> # Optical
> STAGEBLOB # INFORMIX-OnLine/Optical staging area
> # System Configuration
> SERVERNUM 0 # Unique id corresponding to a OnLine
> DBSERVERNAME courtshm # Name of default database server
> DBSERVERALIASES # List of alternate dbservernames
You have no DBSERVERALIASES entry for a TCP connection name, but you
have a soctcp NETTYPE entry. Even if all of your apps are local to the
server machine, you'll still want a TCP connection with minimal poll
entries configured for maintenance.
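For illustration only (the alias and service names here are hypothetical;
pick your own), a minimal TCP listener would need an onconfig alias plus a
matching sqlhosts entry:

```
# onconfig
DBSERVERALIASES courtshm_tcp      # hypothetical TCP alias name
NETTYPE         soctcp,1,50,NET   # one poll thread is plenty for maintenance

# sqlhosts
courtshm_tcp    onsoctcp    <hostname>    courtshm_tcp_svc
```

The service name must also appear in /etc/services (or be a literal port
number) on the server machine.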
> DEADLOCK_TIMEOUT 60 # Max time to wait of lock in distributed env.
> RESIDENT 0 # Forced residency flag (Yes = 1, No =
Don't know HP model numbers, but if this is a PA-RISC machine, you
almost have to set RESIDENT to -1 to prevent poor performance due to the
architecture's restriction of only four active shared memory segments
per process. On HP PA-RISC platforms, besides marking all shared memory
lockable, RESIDENT -1 combines the 'resident' segment with the initial
'virtual' segment to reduce the segment count. VERY important on PA-RISC
if you are using shared memory connections. On Itanium machines this is
not a problem, though I'd still recommend RESIDENT 1 or -1 to keep the
shared segments from being paged out.
> NETTYPE soctcp,4,150,NET
The above line is a no-op with no TCP connection names in DBSERVERNAME or
DBSERVERALIASES.
> NETTYPE ipcshm,1,25,NET
It's very bad for performance and responsiveness to have shared memory
poll threads in NET VPs; it wastes CPU cycles for no reason. Configure
these poll threads in the CPU VP class, with as many poll threads as
there are CPU VPs (see NUMCPUVPS below).
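As a sketch (assuming NUMCPUVPS were lowered to 6, as suggested below for
this hardware), moving the shared memory poll threads into the CPU VP class
would look like:

```
NETTYPE ipcshm,6,25,CPU   # one shared-memory poll thread per CPU VP
```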
> MULTIPROCESSOR 1 # 0 for single-processor, 1 for
> NUMCPUVPS 16 # Number of user (cpu) vps
That's too many CPU VPs for a 4-core 650MHz system. You should be able
to effectively set this to 6 (to reserve one CPU for HPUX) or 8 (to
utilize all 4 CPUs) and may even be able to get away with 9 or 12, but I
wouldn't go to 4 x CPU cores until I had faster processors.
> SINGLE_CPU_VP 0 # If non-zero, limit number of cpu vps
> to one
> NOAGE 1 # Process aging
> AFF_SPROC 0 # Affinity start processor
> AFF_NPROCS 0 # Affinity number of processors
> # Shared Memory Parameters
> LOCKS 80000 # Maximum number of locks
LOCKS looks low. Note: someone said a lock is 4 bytes; no. On 32-bit
IDS servers each lock is 44 bytes and on 64-bit IDS servers each lock is
96 bytes (even OL5 used 32 bytes per lock). Still, even if you are
running a 64-bit server, a million locks takes up only 96MB out of your
8GB (about 1.2% of memory).
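A quick back-of-the-envelope check of the lock table's memory cost, using
the per-lock sizes above (decimal MB):

```python
def lock_memory_mb(locks, bytes_per_lock):
    """Shared memory consumed by the lock table, in decimal MB."""
    return locks * bytes_per_lock / 1_000_000

print(lock_memory_mb(1_000_000, 96))  # 64-bit IDS: 96.0 MB
print(lock_memory_mb(1_000_000, 44))  # 32-bit IDS: 44.0 MB
```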
> BUFFERS 150000 # Maximum number of shared buffers
Evaluate the size of a normal working set of data (remember to allow for
indexes) and resize BUFFERS to just exceed that. You have enough memory
to use more than the current 300MB of buffers.
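For reference, the current setting works out to roughly 300MB, assuming a
2KB page size (confirm the actual page size with onstat -b, as noted
further down):

```python
def buffer_pool_mb(buffers, page_bytes=2048):
    """Buffer pool size in decimal MB; 2KB page size assumed."""
    return buffers * page_bytes / 1_000_000

print(buffer_pool_mb(150_000))  # current BUFFERS setting: ~307 MB
```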
> NUMAIOVPS 26 # Number of IO vps
Unless you have not enabled KAIO (it's off by default on HPUX; set
KAIOON in the server's environment to enable KAIO), you should not need
more than 6 AIO VPs, and many servers can get away with as few as the
default of 2. Watch onstat -g iov to determine whether you need more or
fewer of these (I've posted the criteria many times on CDI).
> PHYSBUFF 64 # Physical log buffer size (Kbytes)
> LOGBUFF 32 # Logical log buffer size (Kbytes)
> CLEANERS 28 # Number of buffer cleaner processes
CLEANERS should be >= LRUs to minimize LRU write times and == number of
chunks to minimize checkpoint write times.
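The rule above reduces to a one-liner (the chunk count here is hypothetical;
get the real number from onstat -d):

```python
def recommended_cleaners(lrus, nchunks):
    """CLEANERS should be >= LRUS and >= the number of chunks."""
    return max(lrus, nchunks)

print(recommended_cleaners(20, 28))  # LRUS 20 and, say, 28 chunks -> 28
```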
> SHMBASE 0x0 # Shared memory base address
> SHMVIRTSIZE 160000 # initial virtual shared memory segment size
That looks low. Check onstat -g seg to see if you have more than one
virtual segment (performance death on a PA-RISC system!). If you do,
fold their sizes into SHMVIRTSIZE so you get only one segment most of
the time, and increase SHMADD to minimize the number of additional
segments when they are necessary.
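The right values depend on what onstat -g seg reports for your instance;
purely as an illustration, if the virtual portion typically grows to around
400MB, folding it into one initial segment might look like:

```
SHMVIRTSIZE 400000   # large enough to hold the typical virtual footprint
SHMADD      200000   # bigger increments mean fewer additional segments
```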
> SHMADD 80000 # Size of new shared memory segments
> SHMTOTAL 0 # Total shared memory (Kbytes).
> CKPTINTVL 300 # Check point interval (in sec)
> LRUS 20 # Number of LRU queues
LRUS settings will depend on the number of users you have concurrently
on the system and how active they are. Check my Bufwaits Ratio (BR)
metric calculation to determine if this should be increased (and
remember you may have to adjust CLEANERS at the same time).
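As a rough proxy (not necessarily the exact BR formula referred to above),
a bufwaits ratio can be computed from the onstat -p counters:

```python
def bufwaits_ratio(bufwaits, pagreads, bufwrits):
    """Percent of buffer activity that waited; counters from onstat -p.
    Illustrative proxy only - not necessarily the exact BR metric above."""
    return 100.0 * bufwaits / (pagreads + bufwrits)

print(bufwaits_ratio(50, 900, 100))  # -> 5.0 (%)
```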
> LRU_MAX_DIRTY 5.000000 # LRU percent dirty begin cleaning limit
> LRU_MIN_DIRTY 2.000000 # LRU percent dirty end cleaning limit
> LTXHWM 50 # Long transaction high water mark
> LTXEHWM 60 # Long transaction high water mark
> TXTIMEOUT 0x12c # Transaction timeout (in sec)
> STACKSIZE 64 # Stack size (Kbytes)
> PC_POOLSIZE 110 # Stored Procedures Cache
> # System Page Size
> # BUFFSIZE - OnLine no longer supports this configuration parameter.
> # To determine the page size used by OnLine on your platform
> # see the last line of output from the command, 'onstat -b'.
> # Recovery Variables
> # OFF_RECVRY_THREADS:
> # Number of parallel worker threads during fast recovery or an offline
> # ON_RECVRY_THREADS:
> # Number of parallel worker threads during an online restore.
> OFF_RECVRY_THREADS 15 # Default number of offline worker
> ON_RECVRY_THREADS 15 # Default number of online worker
> # Data Replication Variables
> # DRAUTO: 0 manual, 1 retain type, 2 reverse type
> DRINTERVAL 30 # DR max time between DR buffer flushes
> (in sec)
> DRTIMEOUT 30 # DR network timeout (in sec)
> DRLOSTFOUND /usr/informix/etc/dr.lostfound # DR lost+found file path
> # Read Ahead Variables
> RA_PAGES 64 # Number of pages to attempt to read ahead
> RA_THRESHOLD 8 # Number of pages left before next group
These look OK to me unless you tend to do lots of sequential scans, in
which case you may want to increase RA_THRESHOLD to 16 or 32. For std
OLTP servers, the current setting is ideal.
Art S. Kagel