June 29, 2009

Peningo Systems has been Selected to Provide Tivoli Access Manager Consultants for a Professional Services Organization

 

Peningo Systems has recently been selected to provide Tivoli Access Manager Consultants to one of the largest IT organizations in Saudi Arabia. Peningo Systems will be providing TAM Consultants to assist the client's Professional Services organization in the implementation, upgrade planning, and technical support of Tivoli Access Manager.

 

We at Peningo Systems always ensure that we provide the end client with the best available resources within their respective areas of expertise. These services are delivered at rates below those of the software vendors' Professional Services organizations, which utilize resources not as experienced and seasoned as the Peningo Systems consultant.

If you are an "End Client" looking for IT consulting service providers to support your applications, Peningo Systems provides consultants with expertise in many areas.

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Access Manager Consultants page.

June 04, 2009

Tivoli Business Service Manager Performance Tuning Recommendations

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the Peningo Tivoli Consultants page.]

 

The IBM DeveloperWorks site is an excellent repository of information and resources regarding various IBM offerings and systems.  

We recommend this article, recently released on developerWorks, to any Tivoli Consultant who is involved in the implementation and performance tuning of Tivoli Business Service Manager.
 
 
 
This paper includes performance and tuning recommendations for IBM Tivoli Business Service Manager (TBSM) version 4.2.

1. Overview
2. TBSM 4.2 and WebSphere Application Server tuning
   2.1 Identifying current JVM settings within TBSM 4.2
   2.2 Enabling Java Virtual Machine (JVM) Garbage Collection (GC) logging
   2.3 Running a representative workload
   2.4 Analyzing the GC Logs for TBSM
3. Additional Dashboard tuning suggestions
4. Client side Java Virtual Machine tuning
5. PostgreSQL database and the Discovery Library/XML toolkit
   5.1 Specific PostgreSQL tuning parameters
   5.2 Vacuuming the TBSM database
6. Final thoughts about TBSM 4.2 performance
7. Hardware for production environments
8. References
9. Trademarks
10. Copyright and Notices

Overview

IBM® Tivoli® Business Service Manager (TBSM) 4.2 delivers technology for IT and business users to visualize and assure the health and performance of critical business services. The product does this by integrating a logical representation of a business service model with status-affecting alerts that are raised against the underlying IT infrastructure. Using browser-based TBSM Web Consoles, operators can view how the enterprise is performing at a particular time, or how it performed over a given period of time. As a result of this, TBSM delivers the real-time information that you need to respond to alerts effectively and in line with business requirements, and optionally to meet Service Level Agreements (SLAs).

Given the size of today's large business enterprises, TBSM must be able to represent and manage the status and related attributes of very large business service models. To enhance scalability, TBSM 4.2 divides the previous TBSM 4.1.x server architecture into two separate servers, referred to in this paper as the "Data server" for back-end processing, and "Dashboard Server" for front-end operations.

For reference, the Data server maintains the canonical TBSM business service model representation, processes events from various sources, and updates service status based on those events. In this role, it interacts with various data stores.

The Dashboard Server, by contrast, is primarily responsible for supporting the user interface. It retrieves service information from the Data server as needed to support the user interactions.

TBSM 4.2 is primarily processor dependent (the number and speed of processors being two of the key factors) as long as sufficient memory is configured for the TBSM Java™ Virtual Machines (JVMs). It is important to be aware of the minimum and recommended hardware specifications (see the Hardware for production environments section) for an optimal user experience.

To that end, the purpose of this paper is to describe some of the performance tuning capabilities available for you to use with the product, how to interpret and analyze the results of performance tuning, and to suggest some recommendations for installing and tuning the product to achieve optimal scale and performance in your own unique TBSM environment.

TBSM 4.2 and WebSphere Application Server tuning

This release of TBSM uses an embedded version of the WebSphere Application Server 6.1 for the Data server and Dashboard Servers. Tuning WebSphere for TBSM 4.2 includes the following actions:

  • Identifying the current TBSM JVM settings
  • Enabling JVM Garbage Collection (GC) logging
  • Running a representative workload
  • Analyzing the GC log results
  • Tuning the JVM appropriately
  • Running the workload again (and again, if needed)
  • Reviewing the new results

The following statements are from the WebSphere 6.1 documentation on Java memory and heap tuning:

"The JVM memory management and garbage collection functions provide the biggest opportunities for improving JVM performance."

"Garbage collection normally consumes from 5% to 20% of total execution time of a properly functioning application. If not managed, garbage collection is one of the biggest bottlenecks for an application."

The TBSM 4.2 Data server and Dashboard Server each run in their own JVM; consequently, each can be tuned independently.

Of primary consideration is the memory allocation to each of the JVMs, bounded by two key values:

  • Initial memory (Xms)
  • Maximum memory (Xmx)

The Data server and Dashboard Server also use the TBSM 4.2 default Garbage Collector (optthruput), which can be used without modification (with the exception of the Solaris Operating Environment, which uses a generational garbage collector instead). The following statement is from the WebSphere 6.1 documentation:

"optthruput, which is the default, provides high throughput but with longer garbage collection pause times. During a garbage collection, all application threads are stopped for mark, sweep and compaction, when compaction is needed. optthruput is sufficient for most applications."

Based on performance analysis of TBSM 4.2, the default Garbage Collector has proven quite capable, and is recommended in most cases, especially in environments where high event processing rates are needed. (For reference on the Sun Garbage collection algorithms, review the Sun JVM link provided in the reference section of this document.)

Most of the remainder of this paper explains how to efficiently size the TBSM 4.2 JVMs to allow the default garbage collection algorithms to operate most efficiently.

To determine the Java version and level that is in use, run the following command:

$TIP_HOME/java/bin/java -version

In response to this command, version information is written to the command line, including the JVM provider and release level. Knowing this up-front directs you to the correct parameters that follow in this document for Java™ Virtual Machine configuration.
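
The exact strings vary by platform and service refresh, but for the IBM JVM the output is similar in shape to the following (the build identifiers here are illustrative only):

java version "1.5.0"
Java(TM) 2 Runtime Environment, Standard Edition (build ...)
IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 ...)

On Solaris, the output identifies the Sun JVM instead, which directs you to the Sun-specific arguments shown later in this document.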

A few considerations about JVM sizing and GC activity

Proper JVM heap memory sizing is critical to TBSM 4.2.

Memory is allocated to objects within the JVM heap, so as the number of objects grows, the amount of free space within the heap decreases. When the JVM cannot allocate additional memory requested for new objects as it nears the upper memory threshold (Xmx value) of the heap, a Garbage Collection (GC) is called by the JVM to reclaim memory from objects no longer accessible to satisfy this request.

Depending on the JVM and type of GC activity, this garbage collection processing can temporarily suspend other threads in the TBSM JVM, granting the garbage collection threads priority to complete the GC work as quickly and efficiently as possible. This prioritization of GC threads and pausing of the JVM is commonly referred to as a "Stop the World" pause. With proper heap analysis and subsequent JVM tuning, this overhead can be minimized, thereby increasing TBSM application throughput. Essentially, the JVM spends less time paused for GC activities, and more time processing core TBSM activities.
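
For orientation, a single allocation-failure entry in an IBM verbose GC log is an XML stanza of roughly the following shape (element names and attributes vary by JVM level; the values here are illustrative only):

<af type="tenured" id="42" timestamp="..." intervalms="1254.3">
  <gc type="global" id="42" intervalms="1254.1">
    <timesms mark="40.2" sweep="5.1" compact="0.0" total="45.5" />
  </gc>
  <time totalms="46.8" />
</af>

Each such stanza records one "Stop the World" pause; the PMAT tool described later aggregates these entries so that you do not need to read them by hand.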

Identifying current JVM settings within TBSM 4.2

There are several ways to gather the WebSphere JVM settings in a TBSM 4.2 environment. One of the easiest (and safest) ways to do this is by leveraging a WebSphere command to create custom startup scripts for both the TBSM Data and Dashboard Servers.

To do this, run the command below from the /profile/bin directory of both the Data server and the Dashboard Server (the servers can be up or down). For the TBSM Data server, run:

./startServer.sh server1 -username [Dataserver_UserID] -password [Dataserver_UserID_password] -script start_dataserver.sh

The output of this command is a file named start_dataserver.sh in the same /profile/bin directory. Utilizing a custom start-up script allows your original WebSphere configuration files to remain intact, and provides a few unique capabilities you might want to leverage for performance tuning.

The following section is part of the start_dataserver.sh file that was created:

# Launch Command
 
exec "/opt/IBM/tivoli/tip/java/bin/java"  $DEBUG "-Declipse.security" "-Dosgi.install.area=/opt/IBM/tivoli/tip"
 "-Dosgi.configuration.area=/opt/IBM/tivoli/tip/profiles/TBSMProfile/configuration" "-Djava.awt.headless=true"
 "-Dosgi.framework.extensions=com.ibm.cds" "-Xshareclasses:name=webspherev61_%g,groupAccess,nonFatal" "-Xscmx50M"
 "-Xbootclasspath/p:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmorb.jar:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmext.jar"
 "-classpath" "/opt/IBM/tivoli/tip/profiles/TBSMProfile/properties:/opt/IBM/tivoli/tip/properties:
/opt/IBM/tivoli/tip/lib/startup.jar:/opt/IBM/tivoli/tip/lib/bootstrap.jar:/opt/IBM/tivoli/tip/lib/j2ee.jar:
/opt/IBM/tivoli/tip/lib/lmproxy.jar:/opt/IBM/tivoli/tip/lib/urlprotocols.jar:/opt/IBM/tivoli/tip/deploytool/itp/batchboot.jar:
/opt/IBM/tivoli/tip/deploytool/itp/batch2.jar:/opt/IBM/tivoli/tip/java/lib/tools.jar" "-Dibm.websphere.internalClassAccessMode=allow"
 "-Xms256m" "-Xmx512m"

Note that the last 2 arguments passed to the JVM are "-Xms256m" and "-Xmx512m". These 2 arguments are responsible for setting the initial JVM size (Xms) to 256 MB of memory, and the maximum JVM size (Xmx) to 512 MB of memory.

Next, issue the startServer.sh command from above; however, this time, run it from the Dashboard Server /profile/bin directory. Also, change the name of the startup script argument to "start_dashboard.sh" as in the following example:

./startServer.sh server1 -username [Dashboard_UserID] -password [Dashboard_UserID_password] -script start_dashboard.sh

The output of this command is a file named start_dashboard.sh in the same /profile/bin directory.

Enabling Java Virtual Machine (JVM) Garbage Collection (GC) logging

To fully understand how the JVM is using memory in your unique TBSM environment, you need to add a few arguments to the start_dataserver.sh script, as shown below, to log garbage collection (GC) data to disk for later analysis:

# Launch Command: Dataserver
exec "/opt/IBM/tivoli/tip/java/bin/java" "-verbose:gc" "-Xverbosegclog:/holdit/dataserver_gc.log" "-XX:+PrintHeapAtGC" 
"-XX:+PrintGCTimeStamps"  $DEBUG "-Declipse.security" "-Dosgi.install.area=/opt/IBM/tivoli/tip" 
"-Dosgi.configuration.area=/opt/IBM/tivoli/tip/profiles/TBSMProfile/configuration" "-Djava.awt.headless=true" 
"-Dosgi.framework.extensions=com.ibm.cds" "-Xshareclasses:name=webspherev61_%g,groupAccess,nonFatal" "-Xscmx50M" 
"-Xbootclasspath/p:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmorb.jar:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmext.jar" 
"-classpath" "/opt/IBM/tivoli/tip/profiles/TBSMProfile/properties:/opt/IBM/tivoli/tip/properties:/opt/IBM/tivoli/tip/lib/startup.jar:
/opt/IBM/tivoli/tip/lib/bootstrap.jar:/opt/IBM/tivoli/tip/lib/j2ee.jar:/opt/IBM/tivoli/tip/lib/lmproxy.jar:
/opt/IBM/tivoli/tip/lib/urlprotocols.jar:/opt/IBM/tivoli/tip/deploytool/itp/batchboot.jar:
/opt/IBM/tivoli/tip/deploytool/itp/batch2.jar:/opt/IBM/tivoli/tip/java/lib/tools.jar" 
"-Dibm.websphere.internalClassAccessMode=allow" "-Xms256m" "-Xmx512m"

Note that the directory for GC log file data must exist prior to launching TBSM with the customized start_dataserver.sh script. For this scenario, a /holdit directory (with read/write access for the TBSM user ID) has already been created.
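
If the directory does not exist yet, create it before the first start. A minimal sketch (the tipadmin owner shown is an assumption; use whatever ID runs the TBSM servers):

mkdir /holdit
chown tipadmin /holdit
chmod 755 /holdit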

Important: For the Sun JVM (TBSM 4.2 on Solaris), the syntax for the GC log file location is different from the IBM "-Xverbosegclog:/holdit/dataserver_gc.log" argument shown above. Use the following argument instead:

-Xloggc:/holdit/dataserver_gc.log

Repeat this procedure to edit the Dashboard Server custom startup script; however, change the log name from "dataserver_gc.log" to "dashboard_gc.log".

The log file names should be different both to distinguish between the two, and to ensure that the GC log data does not combine into one log if both TBSM Servers are installed on the same system. Combining both logs together renders the GC log file useless for matters of performance analysis and subsequent tuning.

For reference, the TBSM Dashboard Server script should resemble this:

# Launch Command: Dashboard Server
exec "/opt/IBM/tivoli/tip/java/bin/java" "-verbose:gc" "-Xverbosegclog:/holdit/dashboard_gc.log" 
"-XX:+PrintHeapAtGC" "-XX:+PrintGCTimeStamps"  $DEBUG "-Declipse.security" "-Dosgi.install.area=/opt/IBM/tivoli/tip" 
"-Dosgi.configuration.area=/opt/IBM/tivoli/tip/profiles/TBSMProfile/configuration" "-Djava.awt.headless=true" 
"-Dosgi.framework.extensions=com.ibm.cds" "-Xshareclasses:name=webspherev61_%g,groupAccess,nonFatal" "-Xscmx50M" 
"-Xbootclasspath/p:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmorb.jar:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmext.jar" 
"-classpath" "/opt/IBM/tivoli/tip/profiles/TBSMProfile/properties:/opt/IBM/tivoli/tip/properties:
/opt/IBM/tivoli/tip/lib/startup.jar:/opt/IBM/tivoli/tip/lib/bootstrap.jar:/opt/IBM/tivoli/tip/lib/j2ee.jar:
/opt/IBM/tivoli/tip/lib/lmproxy.jar:/opt/IBM/tivoli/tip/lib/urlprotocols.jar:
/opt/IBM/tivoli/tip/deploytool/itp/batchboot.jar:/opt/IBM/tivoli/tip/deploytool/itp/batch2.jar:
/opt/IBM/tivoli/tip/java/lib/tools.jar" "-Dibm.websphere.internalClassAccessMode=allow" "-Xms256m" "-Xmx512m"

Running a representative workload

At this point, start the TBSM Servers (Data server first, then Dashboard Server as soon as the processor quiesces on the Data server). Next, proceed with a common scenario or representative workload in your environment to populate the GC logs for subsequent performance analysis. It can be a simple scenario that you would like to optimize, perhaps TBSM Data server startup.

Or, perhaps you want to tune a representative scenario as the following example illustrates for a steady-state workload captured over a 30 minute span of time.

First, record some notes on the TBSM environment configuration. The following scenario was measured for initial performance and subsequent tuning:

Service Model: 50 000 Service Instances, 4 level hierarchy, no leaf node with more than 50 children.

Initial heap size: (-Xms): 256 MB

Maximum heap size: (-Xmx): 512 MB

Data server Started: 9:57:00
Dashboard Server Started: 9:59:00

Workload Start Time: 10:09:00
Workload End Time: 10:39:00

For this reference scenario, the TBSM Data server was started with GC logging at 9:57:00. After the processor quiesced on the server (indicating that the Data server startup and initial Service Model processing had completed), the Dashboard Server was started and 50 unique TBSM Web Consoles were logged in.

After all consoles were started, each was set to a unique Service Tree and Service Viewer desktop session. Finally, a steady-state event workload using thousands of unique events (sent by way of remote EIF probes) was introduced at 10:09:00, and continued until 10:39:00 when the event flow was stopped and the GC log files were immediately collected.

Also, while this workload was being processed, a "vmstat -n 15 120 >> vmstat_out.txt" command was run (on each TBSM Server), which collected CPU statistics every 15 seconds for a 30 minute period to a local file (for later analysis and review). After the workload was complete, these vmstat_out.txt files were also collected for review.
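
For reference, "vmstat -n 15 120" emits one sample every 15 seconds, 120 times (30 minutes in total), printing the header only once. The output has the following general shape (the sample values are illustrative); per-sample total processor utilization is us + sy, or equivalently 100 minus id:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  0      0 512340 104220 903412    0    0    12    40  310  720 38  6 55  1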

Analyzing the GC Logs for TBSM

To analyze the resultant GC log files, download the IBM Pattern Modeling and Analysis Tool (PMAT) from the IBM alphaWorks Web site (see the References section for the link).

Taken from the PMAT Web site:

"The Pattern Modeling and Analysis Tool for IBM® Java Garbage Collector (PMAT) parses verbose GC trace, analyzes Java heap usage, and recommends key configurations based on pattern modeling of Java heap usage... This information can be used to determine whether garbage collections are taking too long to run; whether too many garbage collections are occurring; and whether the JVM crashed during garbage collection."

Although there is an in-depth tutorial on the same Web site (See: Webcast replay - "How to analyze verbosegc trace with IBM Pattern Modeling and Analysis Tool for IBM Java Garbage Collector"), the following information is provided to expedite utilization of the PMAT tool within a Windows environment.

To analyze the GC log file that you collected, start the IBM PMAT Tool:

"C:\Program Files\Java\jdk1.6.0\bin\java" -Xmx128m -jar "C:\TBSM 4.2\Tools\IBMPMAT\ga31.jar"

Use this example and edit it as needed (substitute the location of your Java executable file and location of PMAT files). Note that the Xmx value of 128m limits the PMAT tool to use no more than 128 MB RAM on the system. If you have a number of very large GC log files, you might want to increase the Xmx value.

Review the PMAT Web site for other configuration details or a more in-depth walk-thru as needed. The following examples assume that the tool is correctly installed and ready for you to use.

Loading the GC log file

The following screen capture shows the initial screen of the PMAT tool.

Click the I folder to load an IBM generated GC log; the IBM version is used across all TBSM 4.2 platforms with the exception of the Solaris Operating environment which uses the Sun JVM. To open a Sun-generated log, click the N folder instead. This document assumes an IBM-generated log is used for the 30 minute steady-state scenario.

Navigate to the GC log that you want to analyze and select it. The PMAT tool processes the log, and displays the analysis and recommendations you can review.

Analyzing the initial Data server results

The following screen capture shows the result after a garbage collection log has been opened within the PMAT tool for the TBSM Data server.

Review the Analysis and Recommendations sections. For this scenario, the Analysis section indicates that no Java heap exhaustion was found, typically indicating that there is sufficient space within the JVM to satisfy required memory allocations. However, the Overall Garbage Collection Overhead metric notes that 20% of the application time was spent performing Garbage Collection activities, most likely indicating a need for tuning the JVM memory parameters.

To minimize the GC overhead, review the Recommendations section and assign additional memory to the JVM for more efficient processing of the workload. As the PMAT tool recommendation is to set the JVM Xmx value to approximately 678 MB or greater (and because the system has plenty of memory), a new value of 1024 MB was chosen as the new Xmx value (recall that the as-provided Xmx setting is 512 MB).

To make this change, do the following steps:

  1. Edit the start_dataserver.sh script.
  2. Change the Xmx value from "-Xmx512m" to "-Xmx1024m".
  3. Change the "-Xms256m" to "-Xms512m", which is one half of the new Xmx parameter.
  4. Save the changes to the script.
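
After the edit, the tail of the launch command in start_dataserver.sh should now end with the new values:

"-Dibm.websphere.internalClassAccessMode=allow" "-Xms512m" "-Xmx1024m"
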
Analyzing the initial Dashboard Server results

The following screen capture shows the result after a garbage collection log has been opened within the PMAT tool for the TBSM Dashboard Server.

Next, load the dashboard_gc.log file and review the Analysis and Recommendations sections. For the Dashboard, the Analysis section indicates that no heap exhaustion was found. It also reveals that 10% of the application time was spent performing Garbage Collection activities, certainly not excessive, but some slight tuning might be beneficial.

To reduce the GC overhead for the Dashboard Server, again review the Recommendations section. As the PMAT tool advises a maximum JVM size of approximately 375 MB or greater (and the TBSM 4.2 default is already at 512 MB), a change might not be warranted. However, because the system has plenty of memory, an interesting decision is to choose 768 MB as the new Xmx value, with a new initial size (Xms) of 384 MB.

To make these changes, do the following steps:

  1. Edit the start_dashboard.sh script.
  2. Change the Xmx value from "-Xmx512m" to "-Xmx768m".
  3. Change the "-Xms256m" to "-Xms384m", which is one half of the new Xmx parameter.
  4. Save the changes to the script.
  5. At this point, restart both servers, and rerun the same scenario as before. After it is complete, review the new GC logs in PMAT to determine changes in TBSM performance.

Reviewing the results after tuning: Data server

The following screen capture shows the result after the new garbage collection log has been opened within the PMAT tool for the TBSM Data server.

After the run is complete, load the new Dataserver_gc.log file into the PMAT tool and review the Analysis and Recommendations sections. For this "tuned" scenario, the analysis section again indicates that no Java heap exhaustion was found. However, Overall Garbage Collection Overhead is now calculated at 5% (down from 20% prior to tuning), a reduction of 15 percentage points. Less time spent in garbage collection essentially translates to more processor cycles available for application processing.

To illustrate the CPU savings for the Data server, the vmstat data for total processor utilization was collected and plotted in the following chart for the event processing workload of 30 minutes (Note that these CPU comparison results are not a guarantee of service or performance improvement; the performance of each unique TBSM environment will vary):

For the initial event processing workload, the average processor utilization was 43.4% of total CPU on the Data server system. After tuning, the same workload used an average of 18.3% of total processor utilization, a reduction of 57.9% of processor overhead. Also, an extended period of 100% processor utilization late in the run was almost entirely eliminated in the tuned environment.

Reviewing the results after tuning: Dashboard Server

The following screen capture shows the result after the new garbage collection log has been opened within the PMAT tool for the TBSM Dashboard Server.

Review the Analysis and Recommendations sections. For this "tuned" scenario, the analysis section again indicates no Java heap exhaustion. Garbage Collection overhead is now calculated at 9% (down from 10% prior to tuning), which seems to be a minimal gain.

However, an interesting metric to consider is the Total Garbage Collection pause, which is now 252 seconds, down from 288 seconds in the original (untuned) Dashboard Server scenario. As previously stated, application processing is essentially paused while some garbage collection activities occur. Although each of these pauses can range from several milliseconds to several hundred milliseconds (spread over time and unique to each environment), a reduction in the total overall garbage collection time is another worthwhile metric to consider.

Finally, to illustrate the CPU comparison for the Dashboard Server, the vmstat data for total processor utilization was plotted in the following chart for the same event processing workload. (Again, note that these CPU results are not a guarantee of service or performance improvement; performance for each unique TBSM environment will vary.)

For the initial event processing workload, the average processor utilization was 19.1% of total CPU on the Dashboard Server. After tuning, the same workload now used an average of 20.2% of total processor utilization, a minimal increase in total system processor utilization.

This illustrates an important concept: the larger the Xmx value, the more objects can potentially be loaded into JVM memory because there is additional memory space. Therefore, additional CPU processing is most likely needed by the GC threads to purge the larger JVM heap of unreachable objects.

While a minor increase in overall processor utilization was observed with the larger Xmx setting, the savings of 36 fewer seconds (over the 30 minute period) spent in GC pause time might or might not be worth the trade-off. However, in a production environment, you might want to consider conducting a longer scenario (perhaps over the course of a day) to determine whether the larger or the smaller setting is the better choice.

This is just a basic, but powerful example of how you can use the PMAT tool to become familiar with your own TBSM 4.2 environment. As each TBSM environment is unique for each customer, making educated tuning decisions by using the PMAT tool is a recommended strategy for performance tuning.

Additional Dashboard tuning suggestions

There are some additional considerations for tuning performance of the TBSM 4.2 Dashboard Server to reduce processing overhead. Review the following areas and consider implementing them based on the needs of your unique environment.

Service tree refresh interval: Changing the automatic Service tree refresh interval might help reduce server side workload (on the TBSM Dashboard Server) related to TBSM Web Console activity. The service tree refresh interval is set to 60 seconds by default.

The service tree refresh interval controls how frequently the TBSM Web Console requests an automatic service tree update from the TBSM Dashboard Server. If every client connected to the TBSM Dashboard is updated every 60 seconds, this might affect the Dashboard Server when there are a large number of concurrent consoles. To help mitigate this, you can increase the interval between refreshes.

To do this, edit the RAD_sla.props file in the $TBSM_DATA_SERVER_HOME/etc/rad/ directory:

#Service Tree refresh interval; in seconds - default is 60
impact.sla.servicetree.refreshinterval=120

Canvas update interval multiplier: The update interval multiplier helps to control the frequency of automatic canvas refresh requests initiated by the service viewer for canvases containing business service models of different sizes. The default multiplier is 30.

For example, loading a larger, 100 item canvas takes longer than loading a smaller, 50 item canvas. Because of this, the refresh intervals of the larger canvas should be spaced apart so that the canvas is not constantly in a refresh state. The TBSM Web Console accomplishes this by computing a dynamic refresh interval by taking the amount of time spent performing the previous load request and multiplying it by the update interval multiplier constant. So, if the large service model in this example takes 5 seconds to load, a refresh of the same model is not attempted for another 2.5 minutes (5 x 30 or 150 seconds).

When considering a change to this parameter, keep in mind that there are lower and upper boundaries of 30 seconds and 180 seconds for the refresh interval. As a result, the update interval multiplier is useful only to a certain point.

Nonetheless, you can easily update the interval multiplier parameter by editing the $TIP_HOME/systemApps/isclite.ear/sla.war/av/canvasviewer_simple.html file on the TBSM Data server and changing the value for all occurrences of the UpdateIntervalMultiplier property to the new value. Because this is a client side property, you do not have to reboot the server for this value to take effect. However, you might need to log out and log on to the TBSM console.
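
If you want to confirm where (and how often) the property occurs before editing, a simple search works; this grep invocation is only a convenience, not part of the product:

grep -n UpdateIntervalMultiplier $TIP_HOME/systemApps/isclite.ear/sla.war/av/canvasviewer_simple.html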

Client side Java Virtual Machine tuning

Within the client Web browser that hosts the TBSM Web Console is a JVM plug-in needed for running client side Java code. Just like a JVM running on either of the TBSM servers, the browser plug-in JVM can also be tuned to specify initial and maximum heap sizes. Typically, an untuned JVM plug-in has an initial heap size of no more than 4 MB and a maximum heap size of 64 MB, though these numbers can vary depending on the platform used.

The graphical Service Viewer is the function most affected by changes to JVM plug-in parameters. It might be possible to improve the performance of the Service Viewer by increasing the initial heap size (-Xms) to 64 MB and the maximum heap size (-Xmx) to 128 MB. Whether this configuration change is really needed or not depends on the size of the business service model that is loaded by the service viewer.

The procedure to change the JVM plug-in tuning parameters can be different depending on the provider of the plug-in (for example IBM or Sun) and also depending on the system (Windows or UNIX). As an example, the following procedure illustrates how to access and set the JVM plug-in parameters for the IBM-provided 1.5.0 plug-in on a Windows system:

  1. Open Control Panel -> Java Plug-in.
  2. Click the Advanced tab.
  3. In the text box under Java Runtime Parameters, type the following value:
    -Xms64m -Xmx128m
  4. Click the Apply button and then close the window.

After these changes are made, it might be necessary for you to log out and log back in to the TBSM console. For a complete list of the supported combinations of Java plug-ins, Web browsers, and platforms, see the TBSM Installation Guide.

Important: To change the JVM plug-in parameters on a supported UNIX system, navigate to the bin directory under the file system location to which the plug-in was installed and look for a shell script named either ControlPanel or JavaPluginControlPanel, depending on your Java version. Run this shell script to launch a GUI that looks similar to the equivalent interface on the Windows system.

PostgreSQL database and the Discovery Library/XML toolkit

A PostgreSQL database can be a very fast database, but the as-is configuration tends to be rather conservative. A few configuration changes to the postgresql.conf file can improve PostgreSQL performance dramatically. Note that these settings worked well in the performance test environment, and are provided as a starting point for your own unique environments.

Important: Back up your original postgresql.conf file before making any changes.

Specific PostgreSQL tuning parameters

shared_buffers: Sets the number of shared memory buffers that are used by the database server. The default is typically 1000 buffers of 8 KB each. Settings significantly higher than the minimum are usually needed for good performance; values of a few thousand are recommended for production installations. This option can only be set at server startup.

Suggestion: shared_buffers = 16384

If you edit this setting on UNIX or Linux systems, also change the pg_buffer parameter in the rad_dbconf file to the same value.

work_mem: Non-shared memory that is used for internal sort operations and hash tables. This setting limits the amount of memory any single operation can use before it is forced to spill to disk.

Suggestion: work_mem = 32000

effective_cache_size: Sets the planner's assumption about the effective size of the disk cache that is available to a single index scan. This is factored into estimates of the cost of using an index; a higher value makes it more likely that index scans are used, a lower value makes it more likely that sequential scans are used.

Suggestion: effective_cache_size = 30000

random_page_cost: Sets the planner's estimate of the cost of a nonsequentially fetched disk page. This is measured as a multiple of the cost of a sequential page fetch. A higher value makes it more likely that a sequential scan is used; a lower value makes it more likely that an index scan is used.

Suggestion: random_page_cost = 2

fsync: To speed up bulk loads by way of the XML Toolkit, disable the fsync parameter in the postgresql.conf file as follows:

fsync = false # turns forced synchronization on or off

The fsync parameter sets whether data is written to disk as soon as it is committed, which is done through the Write Ahead Logging (WAL) facility. Disable it only if you want faster load times; the caveat is that the load scenario might need to run again if the server shuts down prior to the completion of processing due to a power failure, disk crash, and so on.
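
Taken together, the suggestions above amount to a postgresql.conf starting point like the following (the values are the ones suggested in this section; tune them for your own environment, and leave fsync enabled except during bulk loads):

# postgresql.conf - suggested starting values from this section
shared_buffers = 16384
work_mem = 32000
effective_cache_size = 30000
random_page_cost = 2
# fsync = false      # only for bulk loads; re-enable afterwards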

Vacuuming the TBSM database

After completing a large bulk load, you should vacuum the TBSM Data server database to improve performance. A vacuumdb utility is provided with the PostgreSQL database that can be used to clean up database storage. Running this utility periodically, or after a significant number of database rows change, helps subsequent queries process more efficiently. The utility resides in the $TBSM_HOME/platform/<arch>/pgsql8/bin directory and can be run as follows:

$TBSM_HOME/platform/<arch>/pgsql8/bin/vacuumdb -f -z -p 5435 -U postgres rad

The parameters for the vacuumdb command:

-f: Performs a full vacuum
-z: Analyzes and updates statistics that are used by the query planner
-p 5435: The port that the database process is listening on
-U postgres: The user ID used to connect to the database
rad: The database name
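
If you prefer to run the vacuum on a regular schedule rather than only after bulk loads, a hypothetical crontab entry such as the following would run it nightly at 02:00 (the installation path shown is an assumption; substitute your expanded $TBSM_HOME and <arch>, and pick a quiet period):

# run a full vacuum with statistics update every night at 02:00
0 2 * * * /opt/IBM/tivoli/tbsm/platform/<arch>/pgsql8/bin/vacuumdb -f -z -p 5435 -U postgres rad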

Important: The TBSM Discovery Library toolkit periodically vacuums the tables that are used by the toolkit. Control of this is handled with the DL_DBVacuum properties in the xmltoolkitsvc.properties file. For more information on these properties, see the Discovery Library toolkit properties. Depending on how often the toolkit imports data, the automatic vacuums might be sufficient.

Final thoughts about TBSM 4.2 performance

To review, TBSM 4.2 is primarily processor dependent (the number and speed of processors are two of the key factors) as long as sufficient JVM memory is configured; use the IBM PMAT tool to assist you in tuning TBSM 4.2 for your own workloads and environments. You must be aware of the minimum and recommended hardware specifications for an optimal user experience. The TBSM 4.2 minimum and recommended hardware tables are supplied in the Hardware for production environments section of this document for easy access and review.

Prior to beginning any in-depth performance tuning for TBSM, review the trace_*.log files that are created by both the Data and Dashboard Servers. These logs are in each profile's /logs/server1 directory. Review any exceptions or error conditions to prevent a functional issue from hampering overall application performance.

After functional processing is observed, two of the primary tuning "knobs" for TBSM 4.2 are the "Xms" and "Xmx" values that control the memory allocation for each of the TBSM JVMs.

For review:

-Xms256m // Sets the initial memory to 256 MB (default)
-Xmx512m // Sets the maximum memory size to 512 MB (default)

After the upper memory setting for Xmx is established (through PMAT analysis), a good rule of Java tuning is typically to set the initial memory allocation to half that of the maximum size. Again, one "size" does not fit all environments, so you might want to try setting the initial value smaller (or larger) and rerun the scenarios. Note that you should not set the Xms value larger than the Xmx value, or the JVM will most likely not start.

After the Data and Dashboard Servers are properly tuned, if Web Consoles using the Service Viewer feel "slow," review the Client side Java Virtual Machine tuning section on tuning the JRE plug-in JVM, and restart the Console.

Every TBSM environment is unique; with regard to tuning, one size does not fit all. Multiple factors come into play, such as the number and speed of processors, available RAM, service model size, number of concurrent Web consoles, SLAs, and KPIs, to name a few. JVM analysis is the correct way to ensure proper performance tuning is in place.

To do this, a regular schedule for Performance data collections, analysis, and subsequent tuning (as needed) is strongly encouraged. Using the PMAT tool at some regular interval (perhaps monthly) can uncover trends in application throughput on the TBSM Data server, as new Business Services are added to the in-memory model. Also, as additional Web Consoles are added to the Dashboard Server, looking at such metrics as overall garbage collection pause times might be helpful in uncovering tuning areas to reduce application response times while serving a higher number of end-users.

In summary, making performance analysis a proactive subject in your own unique TBSM environments can go a long way to minimizing or preventing future performance concerns.

Hardware for production environments

The following tables summarize the minimum and recommended hardware and configuration for production environments (see the readme file provided with the TBSM 4.2 installation image for the latest information and updates regarding supported hardware).


Table 1: Data server - Recommended hardware and configuration for production

Important: The amount of disk space needed is directly related to how many events are processed by the system and the related logging and log levels configured on the system.


Table 2: Dashboard Server - Recommended hardware and configuration for production

References

  1. TBSM 4.2 Beta Web Conference Series: Performance Tuning: Internal IBM presentation delivered in September 2008 to customers participating in the TBSM 4.2 beta program.

  2. PostgreSQL Online Documentation: http://www.postgresql.org/docs/8.0/static/index.html

  3. Tivoli Business Service Manager 4.2 Installation and Administrator's Guides: http://publib.boulder.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=/com.ibm.tivoli.itbsm.doc/

  4. A reference book for everything related to IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0. (In PDF format.): http://download.boulder.ibm.com/ibmdl/pub/software/dw/jdk/diagnosis/diag50.pdf

  5. Tuning Garbage Collection with the 5.0 Java Virtual Machine: http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

  6. IBM Pattern Modeling and Analysis Tool (PMAT) from IBM alphaWorks: http://www.alphaworks.ibm.com/tech/pmat

June 01, 2009

Using Tivoli Access Manager Enterprise Single Sign-on with IBM middleware - Removing the dependency on Microsoft components

[This article is sponsored by Peningo Systems, Inc., a provider of IBM Tivoli Consulting and Implementation Services on a nationwide basis. For more information on Peningo Systems, please go to the Peningo Tivoli Access Manager Consultants page.]

 

Below is an article from the IBM developerWorks Web site that addresses removing dependencies on Microsoft components. This is an excellent article for any Tivoli Access Manager Consultant implementing TAM E-SSO.

IBM® Tivoli® Access Manager Enterprise Single Sign-on (TAM E-SSO) provides cross-application (that is, Web, Java™, mainframe, or terminal services) single sign-on capabilities. The TAM E-SSO AccessAgent and IMS server are supported on Microsoft® Windows® operating system platforms, and typically leverage Active Directory for user management. However, many customers want to leverage their existing investment in IBM middleware products, and also extend the reach of TAM E-SSO beyond their intranet. This article shows how TAM E-SSO can be deployed into an environment consisting of IBM middleware, namely DB2® and IBM Tivoli Directory Server.

 Introduction to TAM E-SSO dependencies

TAM E-SSO mandates the use of a database for storage of product data, including users' wallets and system configuration. The TAM E-SSO IMS server installation media embeds a version of Microsoft SQL Server Express (SQL2K5 Express) for ease of installation. Expect this to change in the future to accommodate IBM DB2 Express. In addition to this embedded database, TAM E-SSO v8 supports the use of IBM DB2 v9.5 and Oracle 9i databases. Many IBM customers' services teams want to leverage existing IBM software deployments to maximise re-use and minimise cost. Therefore, this article focuses on the use of DB2 as the database for TAM E-SSO.

TAM E-SSO also relies on an existing (or new) identity store for management of user data. TAM E-SSO refers to these user repositories as enterprise directories. Since TAM E-SSO is typically deployed within an intranet environment, many customers opt to leverage existing Active Directory deployments for TAM E-SSO. However, this does not suit all customer deployments, so TAM E-SSO provides support for LDAP-based products as enterprise directories. TAM E-SSO v8 supports IBM Tivoli Directory Server (ITDS) 6.1+, SunOne directory 5.1+, Novell eDirectory 8.6+ and Sun Java Directory 5.2+ as LDAP-based enterprise directories. This article outlines how to configure TAM E-SSO to use IBM Tivoli Directory Server (ITDS).

 The operating architecture

In order to simplify the outline, this article assumes the simple deployment illustrated in Figure 1. This deployment best represents a single server TAM E-SSO IMS server installation connecting to an enterprise ITDS server.

 Figure 1: TAM E-SSO v8 conceptual architecture


Note: For all components deployed on the same machine, IBM DB2 v9.5 shipped with Tivoli Directory Server v6.2 technically can be used to host the TAM E-SSO database, but licensing restrictions might apply. This might be the perfect arrangement for a Proof Of Concept, but take care to ensure the DB2 database instances are suitably scaled according to the usage patterns of the products.

 

Although this environment is simplistic, scaling the components for higher availability should be transparent to the product configuration outlined in this article.

If the reader's intention is to follow the configuration steps outlined within this article, a number of prerequisite tasks should be performed.

  • ITDS v6.1 must be installed and configured on the ITDS server machine.
  • TAM E-SSO v8 IMS server installation images must be available on the IMS server.
  • DB2 9.5 must be installed on the IMS server.
  • TAM E-SSO AccessAgent software needs to be copied onto the ITDS server.
  • The servers will need to communicate over TCP/IP. Add the hostname to the %SystemRoot%\System32\drivers\etc\hosts file, so that server names can be used rather than IP addresses; this makes the configuration more portable (see the example after this list).
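
A hypothetical hosts file fragment (the host names and addresses below are placeholders for your own environment):

# %SystemRoot%\System32\drivers\etc\hosts
192.168.0.10    imsserver
192.168.0.11    itdsserver
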
Configuring TAM E-SSO to use DB2

DB2 should be installed and configured in accordance with the installation instructions on page 56 of the TAM E-SSO Deployment Guide. These instructions worked well for the installation used in this article's development.

After DB2 is configured, the IMS server must be installed. Whilst performing the install, select the custom installation option, and point the installation at the DB2 server set up on the IMS server. One point to note is that when performing the IMS server installation, the DB2 database table setup seems to take a long time. This is normal; be patient during this step. If you are concerned about the time it is taking, monitor the IMS installation log at c:\TAM_E-SSO_IMS_installer.log. When installed successfully, the tomcat stdout.log file, located as below, is a good reference for determining system state.

 Figure 2: Tomcat file system error log
 

Note that no ITDS configuration is performed at setup time.

When the setup is complete, the IMS Web-based configuration utility starts. When the configuration utility loads, the domain configuration page is displayed.


The stdout.log file contains the most useful information for witnessing server operation.

You also might want to consider changing the IMSService Windows service to a manual startup option at this point. Making the IMSService a manual startup process provides greater control over the process at the time of reboot.

 

 Configuring TAM E-SSO to use ITDS

The ITDS instance now needs to be set up with the objects required for users registering through the TAM E-SSO AccessAgent. When this is done, IMS can be configured with ITDS as the enterprise directory.

Setting up ITDS with test users

On the ITDS machine, the first step is to create the suffix for storing users and groups, for example, o=ibm,c=au. In the development of this article, the following LDIF was loaded into the ITDS server. This LDIF includes a number of test users.


Example LDIF for creating the LDAP objects

 

dn: o=ibm,c=au
objectclass: organization
o: ibm

dn: cn=chrish,o=ibm,c=au
objectclass: inetorgperson
userpassword: passw0rd
cn: chrish
sn: hockings

dn: cn=root,o=ibm,c=au
objectclass: inetorgperson
userpassword: passw0rd
cn: root
sn: root
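
A minimal example of loading this LDIF with the ITDS command-line client, assuming the file is saved as users.ldif and cn=root is the directory administrator (as used later in this article):

idsldapadd -D cn=root -w passw0rd -p 389 -i users.ldif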

You might want to grant the cn=root,o=ibm,c=au user the ability to search, add, delete and modify entries within the directory, but by default the user can search the repository, which is all that is required for TAM E-SSO. The next step is to configure the ITDS as the enterprise directory within the IMS.

Configuring the IMS to use ITDS

It is now time to set up the IMS to use the ITDS server as the identity and authentication store. Start the IMS configuration utility. Note that the IMS configuration utility starts at the Add a domain option, which is used for Active Directory domain configuration. This is a little confusing, because domain creation is not required for configuring other enterprise directories, such as ITDS.

On the IMS Configuration Utility landing page, select Enterprise Directories in the left column. On the right side, select Add Directory, as shown in Figure 3.

Note that after IMS installation, the AccessAnywhereEnterpriseDirectory is configured, allowing any user to register without validating credentials. Hence, if there is any attempt to create an IMS administrator prior to this point, it will accept any username/password combination. It is never checked against a directory.

 Figure 3: Add enterprise directory

The next step is to add a name and description for the new enterprise directory, as shown in Figure 4.

 Figure 4: IMS add enterprise directory

Make sure that Include this directory in TAM E-SSO user validation is selected. When this is done, this directory becomes the authentication service for registering users through AccessAgent.

Now select the Add button, and select the Generic LDAP connector, as shown in Figure 5.

 

 Figure 5: IMS add LDAP details

The ITDS server information must now be entered in the screen shown in Figure 6.

 Notice the username is simply root. The IMS server automatically (either sensibly or not) adds the cn= to the front and the user container to the end, to result in cn=root,o=ibm,c=au.

 Figure 6: IMS add standard LDAP configuration

Open the Advanced configuration keys twisty and configure ITDS details. SSL should not be configured at this point. The instructions for setting SSL up are provided later in this article. Note that the full class name, that is, com.sun.jndi.ldap.LdapCtxFactory, is not shown in the diagram below.


Figure 7: IMS add advanced LDAP configuration

The configuration can now be tested by selecting the Save and test button. The result is a success message displayed at the top of this configuration page. If an error appears, check the details entered. Consult the IMS stdout.log file if the problem cannot be determined from the configuration information. The Troubleshooting section below shows techniques for resolving issues encountered.

Provisioning your IMS administrator

The next step is to provision an IMS administrator, in this case, the cn=root,o=ibm,c=au account. On the IMS configuration utility Web page, select the Create an IMS Administrator link. Now enter root/passw0rd and complete the task.

Having configured an IMS administrator, that user can now log in to AccessAdmin to configure IMS session behaviour. Access the URL for AccessAdmin through the TAM E-SSO start menu folder. When prompted, authenticate using the IMS administrator, root/passw0rd. If the Search users option is now selected in AccessAdmin, the registered users are displayed. Upon selecting a user, the enterprise directory that user was registered with can be determined. Check that the root user has ims_ldap\ as its prefix. This provides confidence that the ITDS enterprise directory is configured correctly.

Configuring the IMS server session behaviour through AccessAdmin

The next step is to configure the IMS server for connections from the AccessAgent instances. Select the Setup assistant on the left side menu of the AccessAdmin Web application. The following page is displayed. Note that the Begin button appears on the lower right side of the page.


Figure 8: Setup IMS user sessions
The system can now be configured in accordance with the requirements for user session management. This article simply enables self service using a shared desktop. Complete the configuration according to the specific requirements.

The following selections have been made for this article:

  1. Enable self-service options
  2. Support shared workstations
  3. Use a shared desktop
  4. Continue to select the defaults for all other options

 


Successful user registration

The next step is to install the AccessAgent on the ITDS server instance. Perform the installation in accordance with the product documentation. When installed, AccessAgent will ask for the IMS server location. Use the hostname of the IMS server in the environment. The system will then reboot.

When the system reboots, instead of the standard Windows Authentication window, the TAM E-SSO GINA is presented. By selecting the Go to Windows Logon option and authenticating as Administrator, the Windows desktop is displayed. Before proceeding, check that the ITDS server started successfully (through Microsoft services). Now, right-click the AccessAgent icon on the toolbar, as shown below.


Figure 9: AccessAgent Taskbar Options

Select the Sign up link. Enter the text chrish/passw0rd. The AccessAgent then performs first-time registration, asking the user to select two Q&A responses and to reset their wallet password. Note that this does not change the Enterprise Directory password.

When completed, the AccessAgent changes to a bright red color (not flashing). This signals that the user has registered and is now logged into their new wallet.

Unsuccessful user registration

Now, right-click the AccessAgent taskbar icon and select Logoff AccessAgent. Then try to register a user that does not exist within the ITDS. Proceed through the self-registration functions. When the final submit is performed, the AccessAgent displays the following message.


Figure 10: AccessAgent Failed Sign up

This then proves that the ITDS instance is authenticating users during registration.

Setting up SSL for the enterprise directory

As with any SSL configuration exercise, there is a client-side SSL component and a server-side SSL component to configure. The server in this case is the ITDS server, with TAM E-SSO IMS acting as the client. This section outlines the configuration steps required for both the server and the client. Before attempting to configure SSL, make sure the IMS server and ITDS server have the same timezone and time settings.

Setting up SSL for ITDS

The first step is to configure the ITDS server with a self-signed certificate to use for SSL connections. Self-signed certificates are a convenient way to configure a non-production server to perform SSL. Of course, production servers should use trusted certificate authorities to create certificates.

On the ITDS server machine, open a command prompt and issue the following commands.


Commands for setting up SSL with ITDS
C:\Program Files\IBM\GSK7\bin>gsk7cmd -keydb -create -db c:\serverkey -pw passw0rd -type
cms -stash

C:\Program Files\IBM\GSK7\bin>gsk7cmd -cert -create -db c:\serverkey.kdb -pw passw0rd
-label testlabel -dn "CN='tam6',o=ibm,c=au" -default_cert yes -expire 999

C:\Program Files\IBM\GSK7\bin>gsk7cmd -cert -list -db c:\serverkey.kdb -pw passw0rd
Certificates in database: c:\serverkey.kdb
Entrust.net Global Secure Server Certification Authority
Entrust.net Global Client Certification Authority
Entrust.net Client Certification Authority
Entrust.net Certification Authority (2048)
Entrust.net Secure Server Certification Authority
VeriSign Class 3 Public Primary Certification Authority
VeriSign Class 2 Public Primary Certification Authority
VeriSign Class 1 Public Primary Certification Authority
VeriSign Class 4 Public Primary Certification Authority - G2
VeriSign Class 3 Public Primary Certification Authority - G2
VeriSign Class 2 Public Primary Certification Authority - G2
VeriSign Class 1 Public Primary Certification Authority - G2
VeriSign Class 4 Public Primary Certification Authority - G3
VeriSign Class 3 Public Primary Certification Authority - G3
VeriSign Class 2 Public Primary Certification Authority - G3
VeriSign Class 1 Public Primary Certification Authority - G3
Thawte Personal Premium CA
Thawte Personal Freemail CA
Thawte Personal Basic CA
Thawte Premium Server CA
Thawte Server CA
RSA Secure Server Certification Authority
testlabel

C:\Program Files\IBM\GSK7\bin>gsk7cmd -cert -create -db c:\serverkey.kdb -pw passw0rd
-label testcert -dn "cn=tam6,o=ibm,c=au" -default_cert yes -expire 999

The next step is to create an LDIF file for uploading the certificate and configuration information into ITDS. The text file appears as follows:


Creating LDIF configuration for SSL
dn: cn=SSL,cn=Configuration
changetype: modify
replace: ibm-slapdSslAuth
ibm-slapdSslAuth: serverAuth
-
replace: ibm-slapdSecurity
ibm-slapdSecurity: SSL

dn: cn=SSL,cn=Configuration
changetype: modify
replace: ibm-slapdSSLKeyDatabase
ibm-slapdSSLKeyDatabase: c:\serverkey.kdb
-
replace:ibm-slapdSslCertificate
ibm-slapdSslCertificate: testlabel
-
replace: ibm-slapdSSLKeyDatabasePW
ibm-slapdSSLKeyDatabasePW: passw0rd

Upload the file contents with the following command:


Loading SSL configuration into ITDS
C:\Program Files\IBM\GSK7\bin>idsldapmodify -D cn=root -w passw0rd -i file.ldif -p 389
modifying entry cn=SSL,cn=Configuration

modifying entry cn=SSL,cn=Configuration

The final server-side configuration task is to extract the self signed certificate, so that it can be loaded into TAM E-SSO. The gsk7ikm utility can be used to extract the certificate. Simply open the CMS file and export the certificate created above, as shown in Figure 11.


Figure 11: Export certificate in der format

Copy this certificate onto the IMS server, placing it in the c:\ directory.

Restart the ITDS server instance through the Windows service manager. Then confirm that the server is listening on the SSL port (636) by issuing the netstat command.
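
For example, from a command prompt on the ITDS machine (the output line is illustrative):

C:\> netstat -an | findstr :636
  TCP    0.0.0.0:636            0.0.0.0:0              LISTENING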

Setting up SSL for TAM E-SSO

The next step is to setup SSL for TAM E-SSO. There are two steps involved. First, the ITDS exported CA certificate must be added to the trusted CA certificate store used by tomcat. This can be done by issuing the following commands at the command prompt:


Loading the certificate CA into Java trust store
C:\Encentuate\IMSServer8.0.0.12\j2sdk1.5\bin>keytool -import -alias ldapcert -file c:\cert.der -keystore C:\Encentuate\IMSServer8.0.0.12\j2sdk1.5\jre\lib\security\cacerts
Enter keystore password: changeit
Owner: CN=tam6, O=ibm, C=au
Issuer: CN=tam6, O=ibm, C=au
Serial number: 7620914a145654c4
Valid from: 12/22/08 10:34 AM until: 12/23/09 10:34 AM
Certificate fingerprints:
MD5: C1:8B:81:C3:C3:EA:37:EB:68:4D:22:C8:59:39:6F:B9
SHA1: 35:0F:A6:20:C1:EF:43:5F:45:CB:24:F3:C4:E7:C3:D3:0E:5A:8D:07
Trust this certificate? [no]: yes
Certificate was added to keystore

Restart the IMS server.

While the IMS server is restarting, this might be a good time to check that the time zones on the IMS and ITDS servers are in sync.

The next step is to configure the IMS Enterprise Directory settings to enable SSL. Open the IMS Configuration Utility, select Enterprise Directories, and select the ITDS server entry. Now select Update directory. The LDAP server URI must now include the new protocol and port, as follows: ldaps://ldap-hostname:636. Within the advanced configuration keys, the LDAP security protocol must be changed to ssl. Now select Save and test, which should result in a success message being displayed at the top of the configuration page.
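
For reference, the two settings involved end up looking something like the following; the hostname here is illustrative:

LDAP server URI: ldaps://tam6.example.com:636
LDAP security protocol: ssl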

You should now re-test some user registration processes to verify that the SSL changes are correct.


 


Troubleshooting

Most of the problems encountered during the development of this article were not related to functional product issues (other than those where tips have been provided in the text above), but rather to the networking and system configuration of the machines involved. The following section outlines methods that can be used to debug product-specific issues.

ITDS problems

An administrator experienced with LDAP should be able to debug server-side issues, so this section will not focus on that area.

Problems encountered in the development of this article were mainly due to actual connectivity issues as well as LDAP protocol request data anomalies.

Wireshark was used extensively to debug connectivity and LDAP problems. To configure Wireshark to listen on a particular network interface, simply select Capture->Interfaces from the menu and then select the adapter that the protocol traffic will flow through. Next run a test, like registering a new user. This should result in Wireshark output similar to that shown below.


Figure 12: Wireshark trace of LDAP

You can also test connectivity simply by using a telnet client (the telnet command on Windows) to access the port used by the ITDS server. You can do this by issuing telnet itds-server-name 389 from a command prompt. If connectivity is OK but problems persist, you might need to install the ITDS client on the IMS machine and simulate the LDAP requests being performed.
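
For example, the following commands sketch both checks; the hostname is illustrative, and idsldapsearch assumes the ITDS client package is installed:


Testing ITDS connectivity from the IMS machine (sketch)
C:\>telnet itds-server-name 389
C:\>idsldapsearch -h itds-server-name -p 389 -D cn=root -w passw0rd -s base -b "" objectclass=*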

Of course, Wireshark cannot be used to inspect SSL-encrypted payloads, so I recommend ensuring that the IMS-to-ITDS configuration works without SSL before enabling it. Wireshark can, at the very least, reveal problems within the SSL handshake, which can be useful at times. In addition to the handshake errors, inspecting the IMS server stdout log file will give stack traces for any other errors encountered.

IMS problems

The IMS server stdout.log file is the best place to identify issues at any time during the configuration exercise. Monitor it closely to ensure the system remains stable across your configuration changes. A number of other tips include:

  • During installation, make sure you follow the product installation instructions closely, always rebooting whenever required.
  • If an AccessAgent has been configured against a particular IMS server and you attempt to point it at a different one, it fails. If this happens, try re-installing the AccessAgent.
  • Also, make sure the IMS Tomcat process is fully initialised before attempting any operations. When the IMS server has started and CPU usage drops to zero, the DB2 process hovers around 200MB and the Tomcat process around 600MB.
  • If you are setting up the IMS server on VMware, I found the image needed 2GB of memory allocated to it. Any less than this caused issues on IMS server startup.

 


Conclusion

Although TAM E-SSO can be used internally to provide Single Sign-on services to intranet users, there are many use cases where enterprise directories might be considered more relevant in a TAM E-SSO deployment. If this is the case, this article provides detailed instructions on how to configure such a deployment without the need to use Active Directory. Without Active Directory, TAM E-SSO maintains its business benefits and extends the reach beyond the internal Active Directory deployment.


 



May 30, 2009

A Deployment Guide for Tivoli Access Manager for Enterprise Single Sign-On

[This article is sponsored by Peningo Systems, Inc., a provider of IBM Tivoli Consulting and Implementation Services on a nationwide basis. For more information on Peningo Systems, please go to the Peningo Tivoli Access Manager Consultants page. ]

 

IBM has recently released a Redbook publication, “Deployment Guide Series: IBM Tivoli Access Manager for Enterprise Single Sign-On 8.0”.

This book introduces IBM Tivoli Access Manager for Enterprise Single Sign-On 8.0 (TAM E-SSO), which provides single sign-on to many applications without a lengthy and complex implementation effort. Whether you are deploying strong authentication, implementing an enterprise-wide identity management initiative, or simply focusing on the sign-on challenges of a specific group of users, this solution can deliver the efficiencies and security that come with a well-crafted and comprehensive single sign-on solution.

We at Peningo Systems recommend this Redbook as a valuable resource to security officers, administrators, and architects who want to understand and implement an identity management solution in a medium scale environment.

IBM Tivoli Access Manager for Enterprise Single Sign-On Highlights:

 

IBM Tivoli Access Manager for Enterprise Single Sign-On automates sign-on and access to your enterprise applications.

Eliminate the need to remember and manage user names and passwords with this single sign-on software from Tivoli.

  • Simplify, strengthen and track access to Microsoft Windows, Web, Java, mainframe and teletype applications, over all major network access points with this single sign-on solution.
  • Enhance security by reducing poor end-user password behavior and reduce the number of password reset calls to your service desk.
  • Take advantage of comprehensive session management of kiosk machines to improve security.
  • Enhance security with a wide choice of strong authentication factors.
  • Use centralized audit and reporting capabilities to facilitate compliance with privacy and security regulations.
  • Enable end-to-end identity and access management by integrating the centralized identity management functions of IBM Tivoli Identity Manager with enterprise single sign-on, password management software and access automation.
  • Operating systems supported: Windows

 

TAM E-SSO relieves password headaches with a proven single sign-on solution across all network access points. The complexity and number of logons employees must manage on a daily basis are increasing, resulting in frustration and lost productivity. In most organizations, employees must remember between 5 and 30 passwords and are required to change them every 30 days. The time wasted entering, changing, writing down, forgetting and resetting passwords represents a significant loss in productivity and a significant cost of IT help-desk operations. With IBM Tivoli Access Manager for Enterprise Single Sign-On—the market-leading enterprise single sign-on solution—employees authenticate once, and the software then detects and automates all password-related events for the employee, including:

  • Logon.
  • Password selection.
  • Password change.
  • Password reset.
  • Logoff.

 

 

If you wish to download this Redbook, please go to the following link:

 

http://www.redbooks.ibm.com/redpieces/pdfs/sg247350.pdf

 

About Peningo Systems:

Peningo Systems and its founders have been involved in IT Consulting for over 30 years. Peningo Systems provides IT Consultants at the Professional Service level on a nationwide basis supporting many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Access Manager Consultants page.

April 10, 2009

Peningo Systems has been Selected to Provide Tivoli Storage Manager Implementation and Support Services

Peningo Systems has recently been selected to provide Tivoli Storage Manager Implementation, upgrade and Technical Support Services to one of the largest Credit Unions in Ohio.

 

Peningo Systems will be providing TSM implementation, upgrade planning, and technical support Consulting Services. These services will be provided by a team of Senior Consultants with expertise in Tivoli Storage Manager Architecture and Implementation. The TSM expertise of these Peningo Systems consultants is arguably among the best available in the country.

 

While many of the Professional Service organizations look to optimize their profits by reducing the cost of labor, many times these organizations cut the resources with the expertise and seniority needed for successful implementations in favor of less experienced and cheaper resources. Though the cost of labor has decreased for these Professional Services organizations, many times their rates to the client increase. Many end clients are not aware of this, and assume that going to a Software Vendor’s Professional Service organization will yield the best available resources for their project.

 

For the reasons above, the quality of services from these Professional Services Organizations has declined drastically. The founders of Peningo Systems have thrived for years by providing Consulting Services with Senior Level Experts at rates that are below the rates of the Software Vendor’s Professional Services Organizations, with their sub-par to mediocre talent.

 

We at Peningo Systems always ensure that we provide the end client with the best available resources within their respective areas of expertise. These services are delivered at rates below those of the Software Vendor’s Professional Services organizations, which utilize resources that are not as experienced and seasoned as the Peningo Systems Consultant.

 

If you are an "End Client" looking for IT Consulting Service providers to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Storage Manager Consultants page.

March 22, 2009

A Guide to Implementing Maximo Asset Management Essentials

[This article is sponsored by Peningo Systems, Inc., a provider of IBM Maximo Consulting and Implementation Services on a nationwide basis. For more information on Peningo Systems, please go to the Peningo Tivoli Maximo Consultants page. ]

 

IBM has recently released a Redbook publication, “Maximo Asset Management Essentials V7.1 Implementer’s Guide”. This guide to implementing Maximo Asset Management Essentials provides general product information and covers the planning, installation, and initial configuration processes.

 

Within the IBM Maximo Asset Management product family, Essentials is an ideal Asset Management Solution for smaller organizations that require a subset of the extensive range of features in the IBM Maximo Asset Management product. Essentials enables smaller organizations and departments to organize, track, and manage their asset and work management processes, and to implement a maintenance regimen based on industry leading technology and best practices.

 

We at Peningo Systems recommend this Redbook to any Asset Management Consultant who will be involved in the implementation of Maximo Asset Management Essentials.

 

 

About IBM Tivoli Maximo Asset Management Essentials:

 

 

IBM Maximo Asset Management Essentials is an asset management system that provides asset management, maintenance management, inventory management, and purchasing capabilities that help corporations maximize productivity and increase the life of assets.

 

This solution is targeted toward small-to-medium businesses that do not have multiple sites and simply need a subset of the core functionality of Maximo Asset Management. The enterprise edition of Maximo Asset Management has been a leader in enterprise asset management for many years. IBM Maximo is the only solution to have been recognized in the EAM Leader’s Quadrant 11 times since 1998.

 

Smaller businesses can benefit from the core functionality of Maximo Asset Management Essentials and do not need an enterprise-level solution for asset management. Thus, Maximo Asset Management Essentials is a lighter, less complex version of Maximo Asset Management V7.1. The differences in functionality are discussed in subsequent sections of this book.

 

Maximo enables companies to manage assets by providing information and real-time data, enabling the creation of a maintenance management strategy through information-based decision making and by predicting the impact of asset downtime on productivity across all categories of assets.

 

If you wish to download this Redbook, please go to the following link:

 

http://www.redbooks.ibm.com/redbooks/pdfs/sg247645.pdf

 

About Peningo Systems:

Peningo Systems and its founders have been involved in IT Consulting for over 30 years. Peningo Systems provides IT Consultants at the Professional Service level on a nationwide basis supporting many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Maximo Consultants page.

 

December 18, 2008

An introduction to Storage Management with Tivoli Storage Manager

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page.  or to the Peningo Tivoli Storage Manager Consultants page]

                                                      

IBM has released the Redbook, “IBM Tivoli Storage Management Concepts”. The book introduces storage management software by explaining the concepts, architecture, and systems management features of IBM Tivoli Storage Manager and showing available complementary products. It will help you design solutions to protect data holdings from losses ranging from those caused by user error to complete site disasters. This easy-to-follow guide gives a broad understanding of IBM Tivoli Storage Manager software, the key technologies to know, and the solutions available to protect your business. It offers a broad understanding of how IBM Tivoli Storage Manager works in heterogeneous environments including Windows, UNIX/Linux, OS/400, and z/OS platforms, and with mission-critical applications such as DB2, Oracle, Lotus Domino, Exchange, SAP, and many more.

We at Peningo Systems strongly recommend this Redbook for any Storage Consultant or Storage Architect who is involved in the evaluation of a Storage Management solution.

 

The Table of Contents of this Redbook is as follows:

 

Part 1. Storage management concepts

Chapter 1. Introduction to IBM Tivoli Storage Manager
Chapter 2. Business requirements
Chapter 3. Architectural concepts
Chapter 4. Planning concepts

Part 2. Client architecture

Chapter 5. Client data movement methods
Chapter 6. Backup-archive client
Chapter 7. API client
Chapter 8. HSM solutions

Part 3. Server architecture

Chapter 9. Policy management
Chapter 10. Scheduling
Chapter 11. Data storage
Chapter 12. Managing users and security levels
Chapter 13. Licensing
Chapter 14. Enterprise Management
Chapter 15. High availability clustering
Chapter 16. Disaster Recovery Manager
Chapter 17. Reporting

Part 4. Complementary products

Chapter 18. IBM Tivoli Continuous Data Protection for Files
Chapter 19. IBM Tivoli Storage Manager for Databases
Chapter 20. IBM Tivoli Storage Manager for Mail
Chapter 21. IBM Tivoli Storage Manager solutions for mySAP
Chapter 22. IBM Tivoli Storage Manager for Applications
Chapter 23. Complementary products

Part 5. Appendixes

Appendix A. Planning and sizing worksheets

 

If you wish to download this Redbook, please go to the following link:

 

IBM Tivoli Storage Management Concepts

 

If you wish to view the IBM Resource link for this Redbook, please go to the following link:

 

Click here to view the Redbook Resource page

 

If you wish to purchase a hard copy of this Redbook, please go to the following link:

 

Click here to Purchase this RedBook

If you are an "End Client" looking for IT Consulting Service providers to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page.

 

December 15, 2008

Implementing IBM Federated Identity Manager in a Services-Oriented Architecture (SOA) Environment

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page. ]

                                                      

A White Paper titled “Services-Oriented Architecture and Federated Identity Management” has been released by The Enterprise Strategy Group, Inc., which addresses the need that SOA has for Federated Identity Management (FIM). The paper also addresses how these needs can be met with IBM Federated Identity Manager. IBM Tivoli FIM can act as a federated identity middleware bridge between external business partners and SOA security domains. In this role, Tivoli FIM centralizes operations, enables rapid user provisioning and identity propagation, supports customized business rules, and acts as a hub for token mediation, identity mapping, logging, and reporting.

 

 

We at Peningo Systems strongly recommend this White Paper for any Tivoli Security Consultant or Security Architect who is involved in the decision process of selecting and deploying a Federated Identity Management solution in an SOA environment.

 


 

 

If you wish to download this White Paper, please go to the following link:

 

Services-Oriented Architecture and Federated Identity Management

 

If you are an "End Client" looking for IT Consulting Service providers to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page. If you wish to view some sample resumes of Professional Service Level Tivoli Identity Manager Consultants, please click here.

 


 

December 10, 2008

Implementing an Identity Management Solution using IBM Tivoli Identity Manager

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page. ]

                                                      

IBM has recently released the Redbook, “Identity Management Design Guide with IBM Tivoli Identity Manager”. This IBM Redbook provides a methodology for designing an Identity Management solution with IBM Tivoli Identity Manager 4.6. Starting from the high-level organizational viewpoint, we show how to define user registration and maintenance processes using the self-registration and self-care interfaces as well as the delegated administration capabilities.

 

Identity Management is the concept of providing a unifying interface to manage all aspects related to individuals and their interactions with the business. It is the process that enables business initiatives by efficiently managing the user lifecycle (including identity/resource provisioning for people (users)), and by integrating into the required business processes. Identity management encompasses all the data and processes related to the representation of an individual involved in electronic transactions.

 

We at Peningo Systems strongly recommend this Redbook for any Tivoli Security Consultant or Security Architect who is involved in deploying an Identity Management solution using IBM Tivoli Identity Manager.

 

The Table of Contents of this Redbook is as follows:

 

Part 1. Architecture and design


Chapter 1. Business context for Identity and Credential Management
Chapter 2. Architecting Identity and Credential Management Solution
Chapter 3. Identity Manager component structure
Chapter 4. Detailed component design
Chapter 5. Operational solution design
Chapter 6. Tivoli Access Manager integration

Part 2. Customer environment


Chapter 7. Tivoli Austin Airlines, Inc.
Chapter 8. Identity Management design
Chapter 9. Technical implementation: Phase I
Chapter 10. Technical implementation: Phase II
Chapter 11. Technical implementation: Phase III
Chapter 12. Technical implementation: Phase IV

Part 3. Appendixes

Appendix A. Corporate policy and standards
Appendix B. Organization chart design

 

If you wish to download this Redbook, please go to the following link:

 

Identity Management Design Guide using IBM Tivoli Identity Manager

 

If you wish to view the IBM Resource link for this Redbook, please go to the following link:

 

Click here to view the Redbook Resource page

 

If you wish to purchase a hard copy of this Redbook, please go to the following link:

 

Click here to Purchase this RedBook

If you are an "End Client" looking for IT Consulting Service providers to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page. If you wish to view some sample resumes of Professional Service Level Tivoli Identity Manager Consultants, please click here.

December 05, 2008

Deploying IBM Tivoli Identity Manager 5.0

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page. ]

                                                      

IBM has recently released the Redbook, “Deployment Guide Series: IBM Tivoli Identity Manager 5.0”. This IBM Redbook takes a step-by-step approach to implementing an identity management solution based on IBM Tivoli Identity Manager v5.0. Part 1 introduces the general business context for identity management and discusses a typical deployment approach for an identity management project. Part 2 takes you through an example company profile with existing business policies and guidelines and builds an identity management solution design for this particular environment.

 

We at Peningo Systems strongly recommend this Redbook for any Tivoli Security Consultant or Security Architect who is involved in deploying IBM Tivoli Identity Manager 5.0.

 

This book consists of two parts:

 

Part 1. Planning and deploying
Chapter 1. Business context for Identity and Credential Management
Chapter 2. Planning for customer engagement

Part 2. Customer environment
Chapter 3. Company profile
Chapter 4. Solution design
Chapter 5. Installing the components
Chapter 6. Configuring Identity Manager
Chapter 7. Identifying initial tasks

Appendix A. Troubleshooting
Appendix B. Rapid Installer Option
Appendix C. Import/export
Appendix D. Self-service
Appendix E. Statement of work

 

If you wish to download this Redbook, please go to the following link:

 

Deployment Guide Series: IBM Tivoli Identity Manager 5.0

 

If you wish to view the IBM Resource link for this Redbook, please go to the following link:

 

Click here to view the Redbook Resource page

 

If you are an "End Client" looking for IT Consulting Service providers to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page. If you wish to view some sample resumes of Professional Service Level Tivoli Identity Manager Consultants, please click here.

 

November 01, 2008

How to Audit Tivoli Identity Manager with Tivoli Compliance Insight Manager

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page. ]

 

The IBM DeveloperWorks site is an excellent repository of information and resources regarding various IBM offerings and systems. IBM recently published an article on the IBM DeveloperWorks site regarding the importance of auditing Tivoli Identity Manager with Tivoli Compliance Insight Manager, and how to facilitate such an audit.

The following is the article from IBM. We recommend this article to any Tivoli Identity Manager Consultant or IT Security Consultant who is considering performing an audit of the provisioning that is done by IBM Tivoli Identity Manager:

 

IBM Tivoli Identity Manager (TIM), being a provisioning platform, manages provisioning to various target applications. Hence, it becomes all the more important to audit the provisioning done from the TIM system. Tivoli Compliance Insight Manager (TCIM) allows us to monitor the usage of administrative rights, which is important for identifying any breach of security rights or violation of any security policy in an organization.


The auditing and reporting of the TIM logs is done using the W7 model from within TCIM.
TCIM applies the policy and attention rules to load the audit data into the GEM database whenever data is processed, that is, loaded into the reporting database.

W7 model

The audit data, from event sources such as TIM, is normalized into the W7 language and stored in the Generic Event Model (GEM) database.


The following are the W7 attributes into which all the data is normalized (a hypothetical example follows the list).

  • Who: Which user, application, or process initiated the event?
  • What: What type of action does the event represent?
  • When: When did the event happen?
  • onWhat: What object was affected? An object could be any type of file, database, application, permission, etc., that was manipulated by the event.
  • Where: On which machine did the event happen?
  • WhereFrom: Which system is the source of the event?
  • WhereTo: Which system is the target of the event?
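
For illustration only, a hypothetical TIM logon event might be normalized along these lines (all names and values below are invented for this example):

Who: jsmith (a TIM user account)
What: Logon
When: 2008-10-31 09:14:02
onWhat: TIM administrative console
Where: timserver01
WhereFrom: workstation22
WhereTo: timserver01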

GEM database

TCIM databases are based on the Generic Event Model (GEM) and are used to store the audit data collected by the actuator.


TCIM loads the audit data that is collected by the actuator into these databases.
The audit data is normalized into seven parts representing the W7 audit model, and the normalized data is then loaded into the GEM database.

Tivoli Compliance Insight Manager policy

The primary goal of TCIM is to monitor security compliance. To achieve this, the corporate security policy is translated into rules that form the TCIM policy. The security policy defines compliance with the company's IT security rules and can be based on well-known industry standards such as ISO 27001, Sarbanes-Oxley, PCI, Basel II, HIPAA, and GLBA.

The policy consists of:

  • W7 groups are queries used to determine the W7 categorization of an event.
  • Policy rules are combinations of W7 elements that describe allowed behavior.
  • Attention rules are combinations of W7 elements that identify events requiring special attention.

Architecture


Figure 1. Architecture

Figure 1 shows the TCIM architecture.


The primary functions of TCIM are the collection of logs, archiving, and the preparation of data for reporting.

The actuator consists of an agent and numerous actuator scripts.

  • The agent is responsible for maintaining a secure link with the agents running on the TCIM server and other audited systems.
  • The actuator scripts are invoked by the agent to collect the log for a particular event source.

The point of presence is the server where the actuator is installed.

The collection of logs, also called audit trails, can be done in two different ways:

1. Local
2. Remote

Local: The point of presence that collects the logs resides on the local machine. This could be the target machine or the audited machine.

Target machine: The target machine is not always the audited machine. It could be a machine from which Tivoli Compliance Insight Manager has access to the audit data.

Audited machine: The audit trails or logs are collected from this machine. It is on this machine that the events occur. Here, the TIM system is the audited system.

Remote: The purpose of a remote event source is to audit a system that does not have a point of presence installed.

Configurations

The collection of logs can be designed in one of the following configurations:

  • Configuration 1. TIM on one machine, with the TCIM server acting as the point of presence.
  • Configuration 2. The TIM system and the point of presence on the same machine, with the TCIM server on a separate machine.
  • Configuration 3. TIM, the point of presence, and the TCIM server on separate machines.


Figure 2 shows the configurations.


Figure 2. Configurations

The event sources support auditing of TIM systems running on IBM AIX®, Sun Solaris, and Microsoft® Windows®.

The point of presence is always a Windows server in all of the configurations, and the audited system can be either a UNIX- or Windows-based system.

Implementation

Enable Tivoli Identity Manager auditing

The first step for auditing a TIM system is to enable auditing on the server. To enable auditing, set the "itim.auditing" property to true in the 'enroleAuditing.properties' file. The path to the 'enroleAuditing.properties' file is <itim_home>\data. For example, on AIX the file is in the /opt/IBM/itim/data directory. Apart from enabling auditing, the file also allows you to set itim.auditing.retrycount (the number of times auditing will be retried in case of failure) and itim.auditing.retrydelay (the wait time before each retry). It also allows you to set the auditing property for a particular event category, such as ACIManagement or Authentication, to true in case you want to turn on auditing for just that category.
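
A minimal sketch of the relevant entries in the enroleAuditing.properties file follows; the retry values shown are illustrative rather than recommendations:


Sample enroleAuditing.properties entries (sketch)
itim.auditing=true
itim.auditing.retrycount=3
itim.auditing.retrydelay=300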

Create GEM database

The TIM GEM database is used to store the audit data.


To create a GEM database called TIM, which will be used to store the TIM audit data, open the TCIM management console and click "Database -> Add GEM Database".

Figure 3 shows the 'Add GEM Database' window.


Figure 3. Add GEM database

At the next screen, provide the name and size of the database to be created. Here we name it 'TIM', the name with which it will appear on the management console, and set the size to 10 MB. GEM databases can grow as needed, from 10 MB up to a maximum of 32 GB.

Figure 4 shows the GEM database parameters defined.


Figure 4. GEM database parameters

The 'TIM' GEM database is created on clicking "OK" and is shown in the management console.

Figure 5 shows the newly created GEM database in the management console.


Figure 5. TIM GEM database

Create security policy

The security policy is applied every time event data is processed, that is, when collected data is loaded into a reporting database.
While creating a security policy, you can follow the model given in Figure 6.


Figure 6. Security policy creation model

The first step involved in creating the TCIM policy is to transform the company's security rules into W7 classifications, identifying all entities of the W7 model.
Once a rule is translated into W7 groups, check the TIM audit trail to match the W7 grouping. Drop the rules that do not match and incorporate all other rules into the security policy. Once all the rules have been processed and added or dropped, the TCIM security policy is ready for use. For example, suppose the security policy says: "All administrators should use their own ID to log on to the TIM system and should not use the TIM administrator account." Using this statement we can create security rules.

  • Administrators (privileged users) using their own IDs to log on to the TIM system is a requirement, and forms the policy rule.
  • Any login with the TIM Administrator (TIM Manager) account is a restriction, and forms the attention rule.

Before we create the policy rule, we will first create groups to categorize all logon events. To create a new group, go to the policy explorer, right-click on the last committed policy, and select Duplicate. Name the policy 'TIMAudit'.

Figure 7 shows the newly created TCIM policy in the policy explorer window.


Figure 7. TIMAudit policy

In the policy panel, expand the itim group and double-click the "itim_group" grouping file.

Figure 8 shows the 'itim_group' in the policy explorer window.


Figure 8. itim_group

Right-click the 'who' folder, select New Group, and enter 'TIM Administrators' for the group name, with a significance of 90. Click "OK" to add the group.


Now right-click the 'TIM Administrators' group and select New Requirement. Here, enter the requirement "Logon name is TIM MANAGER". Similarly, add a 'TIM Users' group with the requirement 'not in group TIM Administrators'. The 'TIM Administrators' group is used to categorize logon events of the "TIM Manager" account, and the 'TIM Users' group to categorize logon events of all other accounts that do not belong to the 'TIM Administrators' group.

Figure 9 shows the 'Group Definition' window.


Figure 9. Group definition window

Once the group definitions are added, we create a policy rule for logon events. All logon events of users in the 'TIM Users' group are acceptable events.

To add a policy rule, right-click on the Policy tab and select New Rule.

Figure 10 shows the 'New Rule' window.


Figure 10. New rule window

On clicking 'New Rule', the 'Edit Rule' window opens. Enter 'TIM Users' in the 'Who' field and 'Logon' in the 'What' field to signify that all logon events of TIM Users are acceptable.

Figure 11 shows the parameters for the policy rule defined as per the policy.


Figure 11. Policy rule parameters

To create an attention rule, click on the 'Attention' tab and right-click to select New Rule.
This will open the 'Edit Rule' window for attention rules. Enter 'TIM Administrators' in the 'Who' field and 'Logon' in the 'What' field to signify that all logon events of accounts belonging to the 'TIM Administrators' group require special attention. Here, all logons by the "TIM Manager" account will be treated as special attention events.

Figure 12 shows the parameters for the attention rule defined as per the policy.


Figure 12. Attention rule parameters

Save the rule and exit the policy after saving the group definition set and saving the changes to the policy and attention rules for the TIMAudit policy.





Adding target machine

Once auditing is enabled in the TIM system, the next step is to add the target machine to the TCIM server. In the add machine wizard, first select the audited machine type. The audited machine type can be one of the supported operating systems: AIX, Solaris, or Windows.

Figure 13 shows the 'Choose Audit Machine Type' window.


Figure 13. Choose audit machine type

In the next screen, select or add the machines you want to audit. This is essentially the hostname or IP address of the audited machine. You can also select machines through a network browse by clicking on the Network Browse selection box.

Figure 14 shows the 'Choose Audited Machine' window.


Figure 14. Choose audited machine

The next step involves the selection of the point of presence. The point of presence is always a Windows server and can do either remote or local collection of the logs.

The remote point of presence could be any of the following:
1. The TCIM server
2. A point of presence existing on some other machine
3. A newly installed actuator

The local point of presence is the point of presence that is installed on the audited machine itself.

Figure 15 shows the 'Select Point of Presence' window.

Because the TIM audit records are stored in database tables, the Windows server acting as the point of presence must have supported ODBC drivers installed.


Figure 15. Select point of presence

Once the add machine wizard completes, the next step is to choose the event source type. Select the TIM event source from the list of available event sources.

Figure 16 shows the 'Choose Event Source Type' window.


Figure 16. Choose event source type

On successful addition of the audited and target machines, we move forward to define the properties of the event source.
Before defining the event source properties, the database DSN has to be defined.
To define the database DSN, go to Windows "Administrative Tools" and click on 'Data Sources (ODBC)' to create a new data source under System DSN.

Figure 17 shows the 'Create New Data Source' window.


Figure 17. Create new data source

Since IBM DB2® is the database used for TIM, we use the IBM DB2 ODBC driver. Configure the data source by setting the data source name and user ID/password as shown in Figure 18.
The audit logs of the TIM server are stored in the database tables AUDIT_EVENT, AUDIT_MGMT_TARGET, AUDIT_MGMT_DELEGATE, and AUDIT_MGMT_PROVISIONING. The user account must have the create view privilege and read permissions on these tables.


Figure 18. ODBC datasource settings
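
As a rough sketch, granting the required privileges might look like the following DB2 commands; the database name ITIMDB, schema ITIMUSER, and user TCIMUSER are hypothetical placeholders for your environment:


Granting audit table privileges in DB2 (sketch)
db2 connect to ITIMDB
db2 grant createin on schema ITIMUSER to user TCIMUSER
db2 grant select on table ITIMUSER.AUDIT_EVENT to user TCIMUSER
db2 grant select on table ITIMUSER.AUDIT_MGMT_TARGET to user TCIMUSER
db2 grant select on table ITIMUSER.AUDIT_MGMT_DELEGATE to user TCIMUSER
db2 grant select on table ITIMUSER.AUDIT_MGMT_PROVISIONING to user TCIMUSER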

Once the data source is configured, move on to the TCP/IP settings. Define the database name, database alias, hostname/IP address where the database resides, and the port number used to connect to the database.

Figure 19 shows the 'TCP/IP settings' defined.


Figure 19. TCP/IP settings

Once the data source name is successfully configured, move ahead to add the event source. We define the properties of the event source using the add event source wizard. Here, configure the database DSN (which was defined earlier) and the user name/password used to connect to the database.

Figure 20 shows the 'Event Source Properties' defined for the TIM event source.


Figure 20. Event source properties

Next, select the database into which the audit logs will be loaded and the collection schedule for collecting the logs from the event source, that is, the TIM server.

Figure 21 shows the 'Choose a GEM Database' window.


Figure 21. Choose a GEM database

Once the machine and event source are added to the TCIM server, the TIM server is ready to be audited. The data is collected and loaded to the server as per the schedules. For ad hoc collection and loading of data, right-click on the database where the TIM server logs are to be loaded and click Load from the menu.
Choose the period for which the data should be collected and loaded, and move on to the next screen, where you select whether to collect the data before loading or just load the data for analysis from previous collections.
At this screen, select 'Collect the latest data for the associated event sources before loading data from the requested period into the database'. This ensures that any new data that has not already been collected is first collected and then loaded into the database.

Figure 22 shows the 'Load Database' window.


Figure 22. Load database

At the 'Choose a Policy' screen, select 'TIMAudit' as the policy to be applied to the data that is to be loaded into the database, and start the collection and loading of the data from the TIM server. 'TIMAudit' is the policy that we created earlier.

Figure 23 shows the 'Choose a Policy' window.


Figure 23. Choose a policy

Once the collection is completed, the data is loaded into the database according to the policy.
The audited results from the TIM server can be analyzed and viewed in the iView console. Log in to the Web portal and click on the iView console.

Figure 24 shows the iView console.


Figure 24. iView console

Next, click on the TIM database icon at the bottom of the iView main screen. This opens the summary view of the events. Figure 25 shows the summary view of all the events associated with the TIM database.


Figure 25. Summary view

You can see that the screen shows the two groups we created earlier and summarizes all the events from users of both groups. All other events are classified as Unknown. Also, the event detail shows special attention for events that require attention, as defined earlier in the security policy.

Figure 26 shows the event detail for a particular event. The event is a special attention event and is an exception to the policy rules set in the TCIM policy.


Figure 26. Event detail



Reporting

The audited results can be used to generate reports for various purposes. As an example, we will create a custom report called 'Logon by TIM Manager' and configure it to show all logon events.

To create reports on the event sources, go to the 'Reports' tab and click the "Add custom report" button as shown in Figure 27.


Figure 27. Add custom report

This opens the report editor. In the general information section, set the title to TIM Manager Logon Events. Choose the standard report center as the report center and name it custom report examples, as shown in Figure 28.


Figure 28. Report editor general information

In the report layout section, there are four types of reports available:

  • Event List: A detailed report showing events in a simple list format.
  • Summary Report: A summary report showing events, exceptions, attentions, and failures per group.
  • Top N Report: A summary report showing a user-defined number of events in a specified time period. It displays the top N (number of) rows of the report.
  • Threshold Report: A summary report showing events that happened more than a user-defined number of times.

In this section we select the 'Report Type' as shown in Figure 29.


Figure 29. Report type

After defining the report layout, we specify the criteria that will be searched for in the GEM database to generate the report.
The criteria available are events, policy exceptions, special attention events, failures, and successes. Selecting any of these means selecting events that fulfill those criteria. For example, selecting 'Special Attention Events' means we are selecting only GEM events that are labeled 'Special Attention Events' by the policy.
At this point, we select events and move on to specify the conditions. The conditions are defined to find events that satisfy the condition set.
For example, to select all events generated by "TIM Manager", we give the condition: the value of field 'Who group' is equal to "TIM Administrators".

Figure 30 shows the field values defined for the above condition.


Figure 30. Report conditions

Next, save the report and execute it. On execution, we generate a report of all logon events by TIM Manager.

Figure 31 shows a sample report listing all events by "TIM Manager".


Figure 31. Sample report

Again, we could refine the condition to show only TIM Manager logon events that were successful. To accomplish this, edit the report and add one more condition: the value of field 'What detail' is equal to "Authenticate : Priviligeduser/Success", as shown in Figure 32.


Figure 32. Report conditions

On execution, we generate a report of all successful logon events of the "TIM Manager" user, as shown in Figure 33.


Figure 33. Sample report



Conclusion

TCIM provides comprehensive log management and user access monitoring for enterprises. It helps monitor, audit, and report on the logs that have been collected from different systems. The reports generated from TCIM can show the current compliance status of the security controls of any audited system, and they can help risk officers and auditors view anomalies within the IT environment. TCIM helps ensure that the security, regulatory, and operational policies of a company are followed. Using TCIM, this article has demonstrated the auditing of a TIM system.

Should you wish to go to the IBM DeveloperWorks Site to view this article, click here to view the article.

 

If you are an "End Client" looking for Consulting Service providers to support your WebSphere Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page.

 

 

 

 

October 31, 2008

Configuring WebSphere Portal to use Tivoli Access Manager

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page. ]

The "IBM WebSphere Portal Tivoli Access Manager Configuration Wizard" is an application that assists Portal Administrators through the task of configuring WebSphere Portal to use Tivoli Access Manager. With this tool, the WebSphere Portal Administrator can automate the following:

  • Set up the Trust Association Interceptor (TAI) in WebSphere Application Server
  • Configure the WebSEAL junction (TCP or SSL options)
  • Set up the Tivoli Access Manager (TAM) Credential Vault adapter
  • Configure Tivoli Access Manager for Authorization, or the Externalization of Portal roles
  • Configure the JAAS login modules
  • Provide backups of the files that are modified during the configuration
IBM DeveloperWorks is a great web site that offers a wealth of information regarding IBM applications, troubleshooting issues, tutorials, and more. To see more details regarding implementing the IBM WebSphere Portal Tivoli Access Manager Configuration Wizard, please go to the following link at IBM:

http://www.ibm.com/developerworks/websphere/zones/portal/catalog/doc/1wp10004g/

If you are an "End Client" looking for WebSphere Consulting Service providers to support your WebSphere Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page.

 

October 26, 2008

Big Blue Cloud Computing and IBM Tivoli Provisioning Manager (TPM)

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the  Peningo Tivoli Consultants page. ]

 

The newest version of IBM's Tivoli Provisioning Manager (TPM) helps clients reduce manual steps to better automate the changing demands for IT resources. This is key to IBM's version of cloud computing, which is called Blue Cloud. The technologies developed as part of this initiative enable clients to build large-scale, distributed, globally accessible datacenters.

 

As a foundation for datacenter automation, TPM plays an important role in helping IT organizations manage power consumption by switching servers to standby mode when not in use, which saves energy, and automatically restores them to active mode when needed.

 

Blue Cloud is based on an updated version of IBM's Tivoli Provisioning Manager product that includes new automation features that remove much of the manual effort in data center management.

 

TPM is used to deploy, configure and manage data center IT infrastructure including software applications, virtualization, storage, network devices, routers and firewalls.

IBM is positioning TPM at the forefront of its "Blue Cloud" cloud computing initiative. Cloud computing refers to a new type of distributed IT architecture that pools computer resources together rather than managing individual PCs or servers.

 

TPM dynamically provisions and allocates resources to compensate for workload fluctuations in changing business environments. It is these fluctuations that strain IT resources while other systems run less efficiently. In concept, this is a problem cloud computing can resolve.

 

The idea is that users of the Blue Cloud need only be concerned with computing services requested, and not the underlying resources that are being accessed. Cloud Computing increases efficiencies of systems resources because it ensures that underutilized systems aren't sitting idle. It also removes the IT administrator burden of installing and setting up software manually.

 

From the IBM website, I was able to get the following information regarding the newest version of TPM:

 

TPM 5.1.1 includes a variety of enhancements to expand the capabilities and benefits of automating common datacenter tasks while providing interoperability to support diverse IT environments and varying levels of IT maturity. The new capabilities help simplify software installation and improve distribution, monitor IT resources across an enterprise, and create reusable automation packages to perform complex tasks that can be used again later.

 

Specifically, TPM automates the discovery, deployment, configuration and management of operating systems, patches, middleware, and applications on physical and virtual servers. It can manage virtualization technologies, SAN- and NAS-based storage resources, and network devices acting as routers, switches, firewalls and load balancers. TPM also allows a company to automate its own datacenter procedures and processes either by modifying automation packages or creating new packages that match a company's best practices.

 

TPM lowers the cost of managing infrastructure resources by automating tasks to execute change in the enterprise. Provisioning, maintaining and re-purposing IT resources is made easier, while systems are made more secure and stable through software and security configuration compliance and remediation, freeing up resources to focus on business issues and innovation.

 

Specific new enhancements include:

 

  • Web Replay -- Web Replay works by enabling experts to share knowledge with others. With Web Replay, a user can "record" the mouse clicks, data insertion and other processes involved in any complex task. Afterwards, any user with the appropriate access can run the recorded scenario. Operations that may require a series of screen interactions can be condensed down to a single push of a button. Consequently, experts on a subject can develop and record the actions needed to execute very complex tasks. These "recordings" can then be used and altered as needed by others. This ensures that tasks are executed correctly and completely.

 

  • Cross-platform patch support -- TPM helps customers automate many steps involved in compliance efforts. For example, the software gives customers an integrated way to manage patches on Windows, Linux, Solaris and AIX. This helps staff be more efficient and avoid errors.

 

  • TADDM integration -- TPM 5.1.1 delivers improved integration with the Tivoli Application Dependency Discovery Manager (TADDM). TADDM provides complete visibility into application complexity by automatically creating and maintaining application infrastructure maps. The information can be used to improve compliance and remediation, speed up problem solving, and simplify day-to-day resource management.

 

  • Dynamic Content Delivery -- TPM Dynamic Content Delivery enables the efficient delivery of large data payloads such as high-resolution video, computer-aided design data and online learning content across an enterprise. It decreases hardware and administrative costs associated with application software delivery and life cycle management, and provides high-performance, optimized delivery of emergency fixes or complete software suites.

 

  • Streamlined installation -- A user-friendly installation wizard lets customers install the entire package with minimal user interaction, getting them up and running with TPM in just a few hours.

 

 

If you are an “End Client” looking for a Consulting Service provider to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

 

 

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page.

 

 

 

 

Peningo Systems selected to provide Tivoli Provisioning Manager Consulting Services

The Peningo Opinion Blog - Rye, New York - Peningo Systems has recently been selected to provide Tivoli Provisioning Manager Consulting and Training Services to a leading ERP Software Development firm. Peningo Systems has provided Tivoli Consulting services for over fifteen years.

While pricing will always be a factor in the selection process by "End Clients", Peningo Systems was selected to provide Tivoli Provisioning Manager Consulting and Training Services based on the combination of price and the high level of quality and expertise of the Consultants who will be providing the services. The selection of Peningo Systems is an example of a successful bypass of the prestigious names in IT Consulting and Professional Services, which Peningo often refers to as “The Prestigious One”.

Peningo will provide the Tivoli Provisioning Manager Consulting and Training Services with a Senior Consultant with expertise in Tivoli Provisioning Manager (TPM) implementation.

If you are an “End Client” looking for a Consulting Service provider to support your Applications, Peningo Systems provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page.

October 25, 2008

The Peningo Tivoli Consultants Blog

Peningo Systems has identified that the Tivoli technical services industry is moving in a direction that we at Peningo do not want to follow.  Our reasons for rejecting such a direction are:

Suppliers of Tivoli IT services have been organized into networks of organizations, or cartels, with the purpose of controlling and enforcing a reduction in the rates of individual consultants. On one side, the networks hope to control demand and, through the networks, force the rate reductions that will allow them a larger margin.


The "End Client" recipient of the Tivoli Consulting Services continues to pay an ever increasing rate for such services, while the quality of the Consultants being delivered to them, declines; as the more “experienced Tivoli Consultants” refuse to participate in such equalitarian scheme.

Over the years, we at Peningo Systems have seen the rates for Tivoli Consultants reduced or stagnant. As the networks of these “Prestigious” Tivoli Consulting Service providers have growing needs to provide Tivoli Services, they seek offshore resources or lower-paid H1-B / L-1 based resources to maintain their margins. The more experienced Consultants are not part of the equation, since their rate requirements would not fit into the “Prestigious” Tivoli Consulting Service provider’s lofty profit margin. These offshore and H1-B / L-1 resources generally are not in tune with the needs of the American Business Community.

We invite you to participate as a commenter on this Blog in order to bring to the attention of the "End Client" the benefits of contracting direct, which will result in an increased rate for the Consultant and a lower billing rate to the "End Client".

Peningo Systems supports and provides Consultants with expertise in many areas including:

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Consultants page.

 
