Performance Counters for IBM DB2

Introduction               

DB2 is a widely recognized and used database software product developed by IBM. It is specifically designed to handle both relational and non-relational data, making it a versatile option for data management. DB2 has gained significant popularity as a relational database management system (RDBMS) primarily used in large-scale corporate data warehouses. Its robust capabilities allow for efficient storage, management, and analysis of data from various sources, such as databases, big data platforms, and cloud-based resources. DB2’s comprehensive features and scalability make it an ideal choice for organizations seeking reliable and powerful solutions for their data-related needs.

DB2 stands out as a leading DBMS due to its numerous advantages. One of its key strengths is its high-performance engine, which handles large datasets efficiently. Its scalability further enhances its ability to manage big data, enabling organizations to grow their workloads smoothly.

Another notable aspect of DB2 is its robust security model. With built-in data encryption and advanced auditing tools, it provides organizations with a highly secure environment to store and manage their data. These security measures help protect sensitive information and ensure compliance with data privacy regulations.

Furthermore, DB2 offers high availability and disaster recovery capabilities. This means that in the event of failures or disruptions, DB2 enables swift recovery, minimizing downtime and ensuring continuity of operations. Such resilience is crucial for businesses that rely heavily on their databases for critical operations.

In addition to its performance and scalability, DB2 offers advanced data modeling and query optimization features that further enhance its capabilities. DB2 supports a wide range of SQL capabilities, including advanced features, allowing users to write complex queries that the optimizer can execute efficiently. Users can therefore leverage the full power of SQL to retrieve and manipulate data within the database.

Furthermore, DB2 supports stored procedures and user-defined functions, which can be used to customize the database and improve efficiency. Stored procedures are pre-compiled database programs that can be executed directly within the DB2 environment. They enable users to encapsulate frequently performed operations into reusable modules, reducing the need for repetitive code and enhancing performance. User-defined functions, on the other hand, allow users to define their own custom functions that can be incorporated into SQL queries, providing flexibility and extensibility.
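
As a small, hypothetical illustration (the function name and the readings table are invented for the example), a scalar user-defined function can be written in SQL and then used inside ordinary queries:

    -- Hypothetical scalar UDF: convert a Celsius value to Fahrenheit
    CREATE OR REPLACE FUNCTION celsius_to_fahrenheit (c DOUBLE)
        RETURNS DOUBLE
        LANGUAGE SQL
        DETERMINISTIC
        NO EXTERNAL ACTION
        RETURN c * 9.0 / 5.0 + 32;

    -- Once created, it behaves like a built-in scalar function:
    -- SELECT celsius_to_fahrenheit(temperature) FROM readings;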

In general, companies looking for a top-tier database system should consider DB2. It is built to handle massive datasets and provides strong scalability, high availability, and sophisticated data modeling and query optimization features, while remaining secure, dependable, and cost-effective for businesses of all sizes.

Important Features of DB2

  1. Scalability and adaptability: DB2 scales well and remains flexible across different workloads, environments, and data volumes.
  2. Security: DB2 offers advanced security features, including authentication, authorization, encryption, auditing, and logging.
  3. High Availability: To guarantee that data is always accessible, DB2 offers high availability and disaster recovery solutions.
  4. Ease of Use: DB2 is simple to use and offers extensive monitoring and administration options.
  5. Cost-Effective: When compared to other databases, DB2 provides a cost-effective option.

Performance Counters of IBM DB2

Connections

  • Remote connections: The number of active connections made by remote clients to the instance of the database manager being monitored.
  • Connections processing requests: The number of remote applications that are currently connected to a database and processing a unit of work within the monitored database manager instance.
    This monitor is not supported in versions 9.x and later.
  • Idle agents: The number of agents in the agent pool that are currently idle, that is, not assigned to any application.
    You can use this element to help set the num_poolagents configuration parameter. Having idle agents available to service requests for agents can improve performance.
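
As a rough sketch, assuming your DB2 release exposes the SYSIBMADM.SNAPDBM administrative view and you have the authority to query it, these connection counters can be read with a query along the following lines:

    -- Database manager snapshot: connection and agent counters (sketch)
    SELECT rem_cons_in,        -- remote connections to the instance
           rem_cons_in_exec,   -- remote connections currently executing requests
           idle_agents         -- agents sitting unassigned in the agent pool
    FROM   SYSIBMADM.SNAPDBM

The CLP command GET SNAPSHOT FOR DATABASE MANAGER reports the same elements in text form.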

Catalog Cache

  • Catalog cache lookups: The number of times the catalog cache was referenced to obtain table descriptor or authorization information.
    This element includes both successful and unsuccessful accesses to the catalog cache. The catalog cache is referenced whenever:
    1. A table, view, or alias name is processed during the compilation of a SQL statement.
    2. Database authorization information is accessed.
    3. A routine is processed during the compilation of a SQL statement.
  • Catalog cache inserts: The number of times the system attempted to insert table descriptor or authorization information into the catalog cache.
  • Catalog cache overflows: The number of times the catalog cache exceeded the memory allotted to it.
    If this value is large, the catalog cache may be too small for the workload. Enlarging the catalog cache may improve its performance.
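
A minimal sketch for pulling these elements and deriving a catalog cache hit ratio from them, assuming the SYSIBMADM.SNAPDB administrative view is available:

    -- Catalog cache counters plus a derived hit ratio per database
    SELECT db_name,
           cat_cache_lookups,
           cat_cache_inserts,
           cat_cache_overflows,
           DECIMAL(100 * (1 - DOUBLE(cat_cache_inserts)
                              / NULLIF(cat_cache_lookups, 0)), 5, 2) AS cat_cache_hit_ratio_pct
    FROM   SYSIBMADM.SNAPDB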

IO Requests

  • Direct reads: The number of read operations that do not use the buffer pool.
  • Direct writes: The number of write operations that do not use the buffer pool.
  • Direct read time: The elapsed time (in milliseconds) required to perform the direct reads.
    A high average time may indicate I/O contention.
  • Direct write time: The elapsed time (in milliseconds) required to perform the direct writes.
    A high average time may indicate I/O contention.
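
Average times are not reported directly; one way to approximate them, assuming SYSIBMADM.SNAPDB is available, is to divide the accumulated times by the corresponding operation counts:

    -- Direct I/O counters with approximate average times (milliseconds)
    SELECT db_name,
           direct_reads,
           direct_writes,
           DOUBLE(direct_read_time)  / NULLIF(direct_reads, 0)  AS avg_direct_read_ms,
           DOUBLE(direct_write_time) / NULLIF(direct_writes, 0) AS avg_direct_write_ms
    FROM   SYSIBMADM.SNAPDB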

Command Rates

  • Select statements: The number of SQL SELECT statements that were executed. You can use this element to gauge the level of database activity.
  • Commit statements: The total number of SQL COMMIT statements that have been attempted.
    If this counter changes only slowly over the monitoring period, applications may not be committing frequently, which can lead to logging and data concurrency problems.
  • Rollback statements: The total number of SQL ROLLBACK statements that have been attempted.
    The number of rollbacks should be kept to a minimum, because higher rollback activity lowers database throughput.
  • Update/insert/delete statements: The number of SQL UPDATE, INSERT, and DELETE statements that were executed. You can use this element to gauge the level of database activity.
  • Overflow accesses: The number of accesses (reads and writes) to overflowed rows in tables.
    Overflowed rows are a sign of data fragmentation. If this value is high, reorganizing the database may improve performance.
    This monitor is not supported in versions 9.x and later.
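
These counters are cumulative totals; a monitoring tool typically samples them at fixed intervals and reports the difference as a rate. A sketch of the underlying query, assuming SYSIBMADM.SNAPDB is available:

    -- Cumulative statement counters; sample periodically to derive rates
    SELECT db_name,
           select_sql_stmts,     -- SELECT statements executed
           uid_sql_stmts,        -- UPDATE/INSERT/DELETE statements executed
           commit_sql_stmts,     -- COMMIT statements attempted
           rollback_sql_stmts    -- ROLLBACK statements attempted
    FROM   SYSIBMADM.SNAPDB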

Tables

  • Rows read: The number of rows read from tables.
  • Rows inserted: The number of attempted row insertions. This element can be used to gauge the database's current level of activity.
  • Rows deleted: The number of attempted row deletions. This element can be used to gauge the database's current level of activity.
  • Rows updated: The number of attempted row updates. This element can be used to gauge the database's current level of activity.
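
The database-wide row counters can be sampled from the snapshot view, and on DB2 9.7 or later the MON_GET_TABLE table function gives the same activity per table; a sketch assuming those interfaces are available:

    -- Database-wide row activity
    SELECT db_name, rows_read, rows_inserted, rows_updated, rows_deleted
    FROM   SYSIBMADM.SNAPDB;

    -- Per-table row activity, ten busiest tables by rows read
    SELECT tabschema, tabname, rows_read, rows_inserted, rows_updated, rows_deleted
    FROM   TABLE(MON_GET_TABLE(NULL, NULL, -2)) AS t
    ORDER  BY rows_read DESC
    FETCH  FIRST 10 ROWS ONLY;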

Memory

  • Committed private memory: The amount of private memory that the database manager instance has currently committed.
    This element can be used to help set the min_priv_mem configuration parameter so that enough private memory is available.
  • Total log space utilized: The total amount of active log space currently used in the database (in bytes).
    Use this element together with the log space available to determine whether the following configuration parameters need adjusting to avoid running out of log space:
    • logfilsiz
    • logprimary
    • logsecond
  • Log space available: The amount of active log space in the database that is not being consumed by uncommitted transactions.
  • Log pages read: The number of log pages the logger has read from disk.
    This element can be used together with an operating system monitor to estimate how much of the I/O on a device is due to database activity.
  • Log pages written: The number of log pages the logger has written to disk.
    This element can be used together with an operating system monitor to estimate how much of the I/O on a device is due to database activity.
  • Secondary logs allocated: The number of secondary log files currently allocated for the database.
    If this value is consistently high, you may need larger log files, more primary log files, or more frequent COMMIT statements within the application.
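
A sketch of how the log-related elements can be read, assuming SYSIBMADM.SNAPDB is available (committed private memory is reported at the database manager level, not in this per-database view):

    -- Active log usage and logger I/O for each database
    SELECT db_name,
           total_log_used,        -- bytes of active log space in use
           total_log_available,   -- bytes of active log space still free
           sec_logs_allocated,    -- secondary log files currently allocated
           log_reads,             -- log pages read by the logger
           log_writes             -- log pages written by the logger
    FROM   SYSIBMADM.SNAPDB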

Pool

  • Pool data logical reads: The number of data pages for regular and large table spaces that have been requested from the buffer pool (logically).
  • % Pool data logical reads: The percentage of data page requests for regular and large table spaces that were satisfied logically from the buffer pool, as opposed to being physically read from the table space containers.
  • Pool data physical reads: The number of data pages for regular and large table spaces that were physically read from the table space containers.
  • Pool index logical reads: The number of index pages for regular and large table spaces that have been requested from the buffer pool (logically).
  • % Pool index logical reads: The percentage of index page requests for regular and large table spaces that were satisfied logically from the buffer pool, as opposed to being physically read from the table space containers.
  • Pool index physical reads: The number of index pages for regular and large table spaces that were physically read from the table space containers.
  • Pool index writes: The number of times a buffer pool index page was physically written to disk.
  • Pool read time: The total elapsed time spent physically reading data and index pages from the table space containers, for all table space types, reported in milliseconds.
  • Pool write time: The total elapsed time spent physically writing data or index pages from the buffer pool to disk, reported in milliseconds.
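
The two percentage counters above are effectively buffer pool hit ratios. A sketch of how they can be derived from the raw elements, assuming SYSIBMADM.SNAPDB is available:

    -- Buffer pool counters with derived data and index hit ratios
    SELECT db_name,
           pool_data_l_reads,
           pool_data_p_reads,
           pool_index_l_reads,
           pool_index_p_reads,
           DECIMAL(100 * (1 - DOUBLE(pool_data_p_reads)
                              / NULLIF(pool_data_l_reads, 0)), 5, 2)  AS data_hit_ratio_pct,
           DECIMAL(100 * (1 - DOUBLE(pool_index_p_reads)
                              / NULLIF(pool_index_l_reads, 0)), 5, 2) AS index_hit_ratio_pct
    FROM   SYSIBMADM.SNAPDB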

Hash joins

  • Total hash joins: The total number of hash joins executed. Use this value together with hash_join_overflows to determine whether a significant percentage of hash joins would benefit from a modest increase in the sort heap size.
  • Hash join overflows: The number of times that hash join data exceeded the available sort heap space.
  • % Hash join overflows: The percentage of hash joins whose data exceeded the available sort heap space.
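
A minimal sketch for these three values, assuming SYSIBMADM.SNAPDB is available:

    -- Hash join counters and overflow percentage
    SELECT db_name,
           total_hash_joins,
           hash_join_overflows,
           DECIMAL(100 * DOUBLE(hash_join_overflows)
                       / NULLIF(total_hash_joins, 0), 5, 2) AS hash_join_overflow_pct
    FROM   SYSIBMADM.SNAPDB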

Miscellaneous

  • Waiting blocks: The number of agents currently waiting on a lock. If this value is high, identify applications that remain blocked, or that hold exclusive locks, for long periods of time, as they may have concurrency issues.
  • Locks Held: The number of locks currently held. This is the total number of locks currently held by all running applications in the database.
  • Total deadlocks: The number of deadlocks that have occurred. This element can indicate that applications are experiencing contention problems, which can be caused by the following conditions:
    • 1. Lock escalations occur for the database.
    • 2. Applications lock tables explicitly when system-generated row locks would be sufficient.
    • 3. Applications use an inappropriate isolation level when bound.
    • 4. Catalog tables are locked for repeatable read.
    • 5. Applications acquire the same locks in different orders, resulting in deadlocks. Identifying the applications (or application processes) involved in the deadlocks helps resolve the issue; it may be possible to modify those applications so that they run concurrently more smoothly.
  • Active sorts: The number of sorts in the database that currently have a sort heap allocated.
  • Total sorts: The total number of sorts that have been executed. This value includes sorts of the various types of temporary tables created during relational operations.
  • Sort overflows: The total number of sorts that ran out of sort heap and may have required temporary disk space.
    When a sort overflows, additional overhead is incurred because the sort requires a merge phase and may need more I/O if data must be written to disk.
  • Sort overflow %: The percentage of sorts that overflowed the sort heap and may have required temporary disk space, known as the sort overflow rate.
  • Version: The version of the DB2 database server.
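
Most of the lock and sort elements in this group can be sampled in one pass; a sketch assuming SYSIBMADM.SNAPDB is available:

    -- Lock and sort activity for each database
    SELECT db_name,
           lock_waits,
           locks_held,
           deadlocks,
           active_sorts,
           total_sorts,
           sort_overflows,
           DECIMAL(100 * DOUBLE(sort_overflows)
                       / NULLIF(total_sorts, 0), 5, 2) AS sort_overflow_pct
    FROM   SYSIBMADM.SNAPDB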

Top Statements

  • CPU: The requests having consumed the most processor time.
  • Rows read: The requests having read the most rows from the database.
  • Rows written: The requests having written the most rows to the database.
  • Sort time: The requests with the longest sort times.
  • Sort overflows: The requests with the highest number of sort overflows.
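
On DB2 9.7 and later, a ranking like this can be approximated from the package cache. The sketch below orders cached statements by accumulated CPU time; the same query can be reordered by rows_read, rows_modified, total_section_sort_time, or sort_overflows for the other views:

    -- Ten most CPU-expensive statements currently in the package cache
    SELECT SUBSTR(stmt_text, 1, 80)    AS stmt_text,
           num_executions,
           total_cpu_time,             -- accumulated CPU time, in microseconds
           rows_read,
           rows_modified,              -- rows written (inserted/updated/deleted)
           total_section_sort_time,
           sort_overflows
    FROM   TABLE(MON_GET_PKG_CACHE_STMT('D', NULL, NULL, -2)) AS t
    ORDER  BY total_cpu_time DESC
    FETCH  FIRST 10 ROWS ONLY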