
June 29, 2009

Peningo Systems has been Selected to Provide Tivoli Access Manager Consultants for a Professional Services Organization

 

Peningo Systems has recently been selected to provide Tivoli Access Manager Consultants to one of the largest IT organizations in Saudi Arabia. Peningo Systems will be providing TAM Consultants to assist their client's Professional Services group in the implementation, upgrade planning, and technical support of Tivoli Access Manager.

 

We at Peningo Systems always ensure that we provide the end client with the best available resources within their respective areas of expertise. These services are delivered at rates below those of the software vendors' Professional Services organizations, which often utilize resources that are not as experienced and seasoned as Peningo Systems Consultants.

If you are an "End Client" looking for IT Consulting Service providers to support your applications, Peningo Systems provides Consultants with expertise in many areas.

To see Peningo Systems areas of expertise, please go to the Peningo Technical Areas page or go to the Peningo Tivoli Access Manager Consultants page.

June 04, 2009

Tivoli Business Service Manager Performance Tuning Recommendations

[This article is sponsored by Peningo Systems, Inc., a provider of Tivoli Consulting Services on a nationwide basis. For more information on Peningo Systems, please go to the Peningo Tivoli Consultants page.]

 

The IBM developerWorks site is an excellent repository of information and resources regarding various IBM offerings and systems.

We recommend this article, recently released on developerWorks, to any Tivoli Consultant involved in the implementation and performance tuning of Tivoli Business Service Manager.
 
 
 
This paper includes performance and tuning recommendations for IBM Tivoli Business Service Manager (TBSM) version 4.2.

  1 Overview
  2 TBSM 4.2 and WebSphere Application Server tuning
    2.1 Identifying current JVM settings within TBSM 4.2
    2.2 Enabling Java Virtual Machine (JVM) Garbage Collection (GC) logging
    2.3 Running a representative workload
    2.4 Analyzing the GC Logs for TBSM
  3 Additional Dashboard tuning suggestions
  4 Client side Java Virtual Machine tuning
  5 PostgreSQL database and the Discovery Library/XML toolkit
    5.1 Specific PostgreSQL tuning parameters
    5.2 Vacuuming the TBSM database
  6 Final thoughts about TBSM 4.2 performance
  7 Hardware for production environments
  8 References
  9 Trademarks
  10 Copyright and Notices

Overview

IBM® Tivoli® Business Service Manager (TBSM) 4.2 delivers technology for IT and business users to visualize and assure the health and performance of critical business services. The product does this by integrating a logical representation of a business service model with status-affecting alerts that are raised against the underlying IT infrastructure. Using browser-based TBSM Web Consoles, operators can view how the enterprise is performing at a particular time, or how it performed over a given period of time. As a result of this, TBSM delivers the real-time information that you need to respond to alerts effectively and in line with business requirements, and optionally to meet Service Level Agreements (SLAs).

Given the size of today's large business enterprises, TBSM must be able to represent and manage the status and related attributes of very large business service models. To enhance scalability, TBSM 4.2 divides the previous TBSM 4.1.x server architecture into two separate servers, referred to in this paper as the "Data server" for back-end processing, and "Dashboard Server" for front-end operations.

For reference, the Data server maintains the canonical TBSM business service model representation, processing events from various sources, and updating service status based on those events. In this role, it interacts with various data stores.

The Dashboard Server, by contrast, is primarily responsible for supporting the user interface. It retrieves service information from the Data server as needed to support the user interactions.

TBSM 4.2 is primarily processor dependent (the number and speed of processors being two of the key factors) as long as sufficient memory is configured for the TBSM Java™ Virtual Machines (JVMs). It is important to be aware of the minimum and recommended hardware specifications (See Section 6) for an optimal user experience.

To that end, the purpose of this paper is to describe some of the performance tuning capabilities available for you to use with the product, how to interpret and analyze the results of performance tuning, and to suggest some recommendations for installing and tuning the product to achieve optimal scale and performance in your own unique TBSM environment.

TBSM 4.2 and WebSphere Application Server tuning

This release of TBSM uses an embedded version of the WebSphere Application Server 6.1 for the Data server and Dashboard Servers. Tuning WebSphere for TBSM 4.2 includes the following actions:

  • Identifying the current TBSM JVM settings
  • Enabling JVM Garbage Collection (GC) logging
  • Running a representative workload
  • Analyzing the GC log results
  • Tuning the JVM appropriately
  • Running the workload again (and again, if needed)
  • Reviewing the new results

The following statements are from the WebSphere 6.1 documentation on Java memory and heap tuning:

"The JVM memory management and garbage collection functions provide the biggest opportunities for improving JVM performance."

"Garbage collection normally consumes from 5% to 20% of total execution time of a properly functioning application. If not managed, garbage collection is one of the biggest bottlenecks for an application."

The TBSM 4.2 Data server and Dashboard Server each run in their own JVM; consequently, each can be tuned independently.

Of primary consideration is the memory allocation to each of the JVMs, bounded by two key values:

  • Initial memory (Xms)
  • Maximum memory (Xmx)

For TBSM 4.2, the Data server and Dashboard Server use the default Garbage Collector (optthruput), which can typically be used without modification (with the exception of the Solaris Operating Environment, which uses a generational garbage collector instead). The following statement is from the WebSphere 6.1 documentation:

"optthruput, which is the default, provides high throughput but with longer garbage collection pause times. During a garbage collection, all application threads are stopped for mark, sweep and compaction, when compaction is needed. optthruput is sufficient for most applications."

Based on performance analysis of TBSM 4.2, the default Garbage Collector has proven quite capable, and is recommended in most cases, especially in environments where high event processing rates are needed. (For reference on the Sun Garbage collection algorithms, review the Sun JVM link provided in the reference section of this document.)

Most of the remainder of this paper explains how to efficiently size the TBSM 4.2 JVMs so that the default garbage collection algorithms operate most efficiently.

To determine the Java version and level that is in use, run the following command:

$TIP_HOME/java/bin/java -version

In response to this command, the TBSM server writes information to the command line, including the JVM provider information and level of release. Knowing this up-front directs you to the correct parameters that follow in this document for Java™ Virtual Machine configuration.
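The provider string in that output determines which GC logging flags apply later in this document. As an illustrative sketch (the sample version strings and the function name are assumptions, not part of the product), the output can be classified like this:

```shell
#!/bin/sh
# Classify a JVM by its "java -version" output: IBM JVMs mention "IBM",
# while the Sun JVM (used on Solaris) identifies itself as "HotSpot".
classify_jvm() {
    version_output="$1"
    case "$version_output" in
        *IBM*)     echo "ibm" ;;      # IBM flags, e.g. -Xverbosegclog
        *HotSpot*) echo "sun" ;;      # Sun-style GC logging flags
        *)         echo "unknown" ;;  # check the output manually
    esac
}
```

For example, `classify_jvm "$($TIP_HOME/java/bin/java -version 2>&1)"` would print "ibm" on most TBSM 4.2 platforms.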

A few considerations about JVM sizing and GC activity

Proper JVM heap memory sizing is critical to TBSM 4.2.

Memory is allocated to objects within the JVM heap, so as the number of objects grows, the amount of free space within the heap decreases. When the JVM cannot allocate additional memory requested for new objects as it nears the upper memory threshold (Xmx value) of the heap, a Garbage Collection (GC) is called by the JVM to reclaim memory from objects no longer accessible to satisfy this request.

Depending on the JVM and type of GC activity, this garbage collection processing can temporarily suspend other threads in the TBSM JVM, granting the garbage collection threads priority to complete the GC work as quickly and efficiently as possible. This prioritization of GC threads and pausing of the JVM is commonly referred to as a "Stop the World" pause. With proper heap analysis and subsequent JVM tuning, this overhead can be minimized, thereby increasing TBSM application throughput. Essentially, the JVM spends less time paused for GC activities, and more time processing core TBSM activities.

Identifying current JVM settings within TBSM 4.2

There are several ways to gather the WebSphere JVM settings in a TBSM 4.2 environment. One of the easiest (and safest) ways to do this is by leveraging a WebSphere command to create custom startup scripts for both the TBSM Data and Dashboard Servers.

To do this, run the following command from both the Data server and Dashboard Server /profile/bin directory (the servers can be up or down). For the TBSM Data server, run the following command:

./startServer.sh server1 -username [Dataserver_UserID] -password [Dataserver_UserID_password] -script start_dataserver.sh

The output of this command is a file named start_dataserver.sh in the same /profile/bin directory. Utilizing a custom start-up script allows your original WebSphere configuration files to remain intact, and provides a few unique capabilities you might want to leverage for performance tuning.

The following section is part of the start_dataserver.sh file that was created:

# Launch Command
 
exec "/opt/IBM/tivoli/tip/java/bin/java"  $DEBUG "-Declipse.security" "-Dosgi.install.area=/opt/IBM/tivoli/tip"
 "-Dosgi.configuration.area=/opt/IBM/tivoli/tip/profiles/TBSMProfile/configuration" "-Djava.awt.headless=true"
 "-Dosgi.framework.extensions=com.ibm.cds" "-Xshareclasses:name=webspherev61_%g,groupAccess,nonFatal" "-Xscmx50M"
 "-Xbootclasspath/p:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmorb.jar:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmext.jar"
 "-classpath" "/opt/IBM/tivoli/tip/profiles/TBSMProfile/properties:/opt/IBM/tivoli/tip/properties:
/opt/IBM/tivoli/tip/lib/startup.jar:/opt/IBM/tivoli/tip/lib/bootstrap.jar:/opt/IBM/tivoli/tip/lib/j2ee.jar:
/opt/IBM/tivoli/tip/lib/lmproxy.jar:/opt/IBM/tivoli/tip/lib/urlprotocols.jar:/opt/IBM/tivoli/tip/deploytool/itp/batchboot.jar:
/opt/IBM/tivoli/tip/deploytool/itp/batch2.jar:/opt/IBM/tivoli/tip/java/lib/tools.jar" "-Dibm.websphere.internalClassAccessMode=allow"
 "-Xms256m" "-Xmx512m"

Note that the last 2 arguments passed to the JVM are "-Xms256m" and "-Xmx512m". These 2 arguments set the initial JVM heap size (Xms) to 256 MB of memory, and the maximum JVM heap size (Xmx) to 512 MB of memory.

Next, issue the startServer.sh command from above; however, this time, run it from the Dashboard Server /profile/bin directory. Also, change the name of the startup script argument to "start_dashboard.sh" as in the following example:

./startServer.sh server1 -username [Dashboard_UserID] -password [Dashboard_UserID_password] -script start_dashboard.sh

The output of this command is a file named start_dashboard.sh in the same /profile/bin directory.

Enabling Java Virtual Machine (JVM) Garbage Collection (GC) logging

To fully understand how the JVM is using memory in your unique TBSM environment, you need to add a few arguments to the start_dataserver.sh script as indicated to log garbage collection (GC) data to disk for later analysis:

# Launch Command: Dataserver
exec "/opt/IBM/tivoli/tip/java/bin/java" "-verbose:gc" "-Xverbosegclog:/holdit/dataserver_gc.log" "-XX:+PrintHeapAtGC" 
"-XX:+PrintGCTimeStamps"  $DEBUG "-Declipse.security" "-Dosgi.install.area=/opt/IBM/tivoli/tip" 
"-Dosgi.configuration.area=/opt/IBM/tivoli/tip/profiles/TBSMProfile/configuration" "-Djava.awt.headless=true" 
"-Dosgi.framework.extensions=com.ibm.cds" "-Xshareclasses:name=webspherev61_%g,groupAccess,nonFatal" "-Xscmx50M" 
"-Xbootclasspath/p:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmorb.jar:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmext.jar" 
"-classpath" "/opt/IBM/tivoli/tip/profiles/TBSMProfile/properties:/opt/IBM/tivoli/tip/properties:/opt/IBM/tivoli/tip/lib/startup.jar:
/opt/IBM/tivoli/tip/lib/bootstrap.jar:/opt/IBM/tivoli/tip/lib/j2ee.jar:/opt/IBM/tivoli/tip/lib/lmproxy.jar:
/opt/IBM/tivoli/tip/lib/urlprotocols.jar:/opt/IBM/tivoli/tip/deploytool/itp/batchboot.jar:
/opt/IBM/tivoli/tip/deploytool/itp/batch2.jar:/opt/IBM/tivoli/tip/java/lib/tools.jar" 
"-Dibm.websphere.internalClassAccessMode=allow" "-Xms256m" "-Xmx512m"

Note that the directory for GC log file data must exist prior to launching TBSM with the customized start_dataserver.sh script. For this scenario, a /holdit directory (with read/write access for the TBSM user ID) has already been created.
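A small sketch of that preparation step (the function name and the directory argument are illustrative; /holdit matches the path used in this scenario):

```shell
#!/bin/sh
# Prepare the GC log directory before launching TBSM with the customized
# start scripts. The directory defaults to the /holdit path used here.
prepare_gc_log_dir() {
    dir="${1:-/holdit}"
    mkdir -p "$dir"        # create the directory if it does not exist
    chmod u+rw "$dir"      # the TBSM user ID needs read/write access
    [ -w "$dir" ]          # confirm it is writable before starting TBSM
}
```

Run this as (or on behalf of) the user ID that starts the TBSM servers, so that ownership and permissions line up.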

Important: For the Sun JVM (TBSM 4.2 on Solaris), the syntax for the GC log file location is different from the IBM version's "-Xverbosegclog:/holdit/dataserver_gc.log". Use the following argument instead:

-Xloggc:/holdit/dataserver_gc.log

Repeat this procedure to edit the Dashboard Server custom startup script; however, change the log name from "dataserver_gc.log" to "dashboard_gc.log".

The log file names should be different both to distinguish between the two, and to ensure that the GC log data does not combine into one log if both TBSM Servers are installed on the same system. Combining both logs together renders the GC log file useless for matters of performance analysis and subsequent tuning.

For reference, the TBSM Dashboard Server script should resemble this:

# Launch Command: Dashboard Server
exec "/opt/IBM/tivoli/tip/java/bin/java" "-verbose:gc" "-Xverbosegclog:/holdit/dashboard_gc.log" 
"-XX:+PrintHeapAtGC" "-XX:+PrintGCTimeStamps"  $DEBUG "-Declipse.security" "-Dosgi.install.area=/opt/IBM/tivoli/tip" 
"-Dosgi.configuration.area=/opt/IBM/tivoli/tip/profiles/TBSMProfile/configuration" "-Djava.awt.headless=true" 
"-Dosgi.framework.extensions=com.ibm.cds" "-Xshareclasses:name=webspherev61_%g,groupAccess,nonFatal" "-Xscmx50M" 
"-Xbootclasspath/p:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmorb.jar:/opt/IBM/tivoli/tip/java/jre/lib/ext/ibmext.jar" 
"-classpath" "/opt/IBM/tivoli/tip/profiles/TBSMProfile/properties:/opt/IBM/tivoli/tip/properties:
/opt/IBM/tivoli/tip/lib/startup.jar:/opt/IBM/tivoli/tip/lib/bootstrap.jar:/opt/IBM/tivoli/tip/lib/j2ee.jar:
/opt/IBM/tivoli/tip/lib/lmproxy.jar:/opt/IBM/tivoli/tip/lib/urlprotocols.jar:
/opt/IBM/tivoli/tip/deploytool/itp/batchboot.jar:/opt/IBM/tivoli/tip/deploytool/itp/batch2.jar:
/opt/IBM/tivoli/tip/java/lib/tools.jar" "-Dibm.websphere.internalClassAccessMode=allow" "-Xms256m" "-Xmx512m"

Running a representative workload

At this point, start the TBSM Servers (Data server first, then Dashboard Server as soon as the processor quiesces on the Data server). Next, proceed with a common scenario or representative workload in your environment to populate the GC logs for subsequent performance analysis. It can be a simple scenario that you would like to optimize, perhaps TBSM Data server startup.

Or, perhaps you want to tune a representative scenario as the following example illustrates for a steady-state workload captured over a 30 minute span of time.

First, record some notes on the TBSM environment configuration. The following scenario was measured for initial performance and subsequent tuning:

Service Model: 50 000 Service Instances, 4 level hierarchy, no leaf node with more than 50 children.

Initial heap size: (-Xms): 256 MB

Maximum heap size: (-Xmx): 512 MB

Dataserver Started: 9:57:00
Dashboard Server Started: 9:59:00

Workload Start Time: 10:09:00
Workload End Time: 10:39:00

For this reference scenario, the TBSM Data server was started with GC logging at 9:57:00. After the processor quiesced on the server (indicating that the Data server startup and initial Service Model processing had completed), the Dashboard Server was started and 50 unique TBSM Web Consoles were logged in.

After all consoles were started, each was set to a unique Service Tree and Service Viewer desktop session. Finally, a steady-state event workload using thousands of unique events (sent by way of remote EIF probes) was introduced at 10:09:00, and continued until 10:39:00 when the event flow was stopped and GC log files immediately collected.

Also, while this workload was being processed, a "vmstat -n 15 120 >> vmstat_out.txt" command was run (on each TBSM Server), which collected CPU statistics every 15 seconds for a 30 minute period to a local file (for later analysis and review). After the workload was complete, these vmstat_out.txt files were also collected for review.
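As a rough sketch of how such a vmstat capture can be summarized afterward (this helper is an assumption, not part of the article's procedure, and it assumes the common Linux "vmstat -n" layout where the idle percentage is the 15th column; verify the column positions against your own header line first):

```shell
#!/bin/sh
# Average CPU busy percentage (100 - idle) from a vmstat capture file.
# Header lines are skipped by only counting rows whose first field is numeric.
avg_cpu_busy() {
    awk '$1 ~ /^[0-9]+$/ { busy += 100 - $15; n++ }
         END { if (n > 0) printf "%.1f\n", busy / n }' "$1"
}
```

For example, `avg_cpu_busy vmstat_out.txt` prints a single average busy percentage for the whole capture window.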

Analyzing the GC Logs for TBSM

To analyze the resultant GC log files, download the IBM Pattern Modeling and Analysis Tool (PMAT) from the IBM alphaWorks Web site.

Taken from the PMAT Web site:

"The Pattern Modeling and Analysis Tool for IBM® Java Garbage Collector (PMAT) parses verbose GC trace, analyzes Java heap usage, and recommends key configurations based on pattern modeling of Java heap usage... This information can be used to determine whether garbage collections are taking too long to run; whether too many garbage collections are occurring; and whether the JVM crashed during garbage collection."

Although there is an in-depth tutorial on the same Web site (See: Webcast replay - "How to analyze verbosegc trace with IBM Pattern Modeling and Analysis Tool for IBM Java Garbage Collector"), the following information is provided to expedite utilization of the PMAT tool within a Windows environment.

To analyze the GC log file that you collected, start the IBM PMAT Tool:

"C:\Program Files\Java\jdk1.6.0\bin\java" -Xmx128m -jar "C:\TBSM 4.2\Tools\IBMPMAT\ga31.jar"

Use this example and edit it as needed (substitute the location of your Java executable file and location of PMAT files). Note that the Xmx value of 128m limits the PMAT tool to use no more than 128 MB RAM on the system. If you have a number of very large GC log files, you might want to increase the Xmx value.

Review the PMAT Web site for other configuration details or a more in-depth walk-thru as needed. The following examples assume that the tool is correctly installed and ready for you to use.

Loading the GC log file

The following screen capture shows the initial screen of the PMAT tool.

Click the I folder to load an IBM generated GC log; the IBM version is used across all TBSM 4.2 platforms with the exception of the Solaris Operating environment which uses the Sun JVM. To open a Sun-generated log, click the N folder instead. This document assumes an IBM-generated log is used for the 30 minute steady-state scenario.

Navigate to the GC log that you want to analyze and select it. The PMAT tool processes the log, and displays the analysis and recommendations you can review.

Analyzing the initial Data server results

The following screen capture shows the result after a garbage collection log has been opened within the PMAT tool for the TBSM Data server.

Review the Analysis and Recommendations sections. For this scenario, the Analysis section indicates that no Java heap exhaustion was found, typically indicating that there is sufficient space within the JVM to satisfy required memory allocations. However, the Overall Garbage Collection Overhead metric notes that 20% of the application time was spent performing Garbage Collection activities, most likely indicating a need for tuning the JVM memory parameters.

To minimize the GC overhead, review the Recommendations section and assign additional memory to the JVM for more efficient processing of the workload. As the PMAT tool recommendation is to set the JVM Xmx value to approximately 678 MB or greater (and because the system has plenty of memory), a new value of 1024 MB was chosen as the new Xmx value (recall that the as-provided Xmx setting is 512 MB).

To make this change, do the following steps:

  1. Edit the start_dataserver.sh script.
  2. Change the Xmx value from "-Xmx512m" to "-Xmx1024m".
  3. Change the "-Xms256m" to "-Xms512m", which is one half of the new Xmx parameter.
  4. Save the changes to the script.
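If you prefer to script this edit, a hedged sketch using sed follows (the function name is illustrative, and the quoted "-Xms256m"/"-Xmx512m" strings match the generated launch command shown earlier; confirm them in your own script before substituting):

```shell
#!/bin/sh
# Raise the Data server heap in a generated startup script:
# Xms 256m -> 512m, Xmx 512m -> 1024m. A backup copy is kept as .bak.
tune_heap() {
    script="$1"
    sed -i.bak \
        -e 's/-Xms256m/-Xms512m/' \
        -e 's/-Xmx512m/-Xmx1024m/' \
        "$script"
}
```

For example, `tune_heap start_dataserver.sh` rewrites the script in place and leaves the original as start_dataserver.sh.bak.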
Analyzing the initial Dashboard Server results

The following screen capture shows the result after a garbage collection log has been opened within the PMAT tool for the TBSM Dashboard Server.

Next, load the dashboard_gc.log file and review the Analysis and Recommendations sections. For the Dashboard, the Analysis section indicates that no heap exhaustion was found. It also reveals that 10% of the application time was spent performing Garbage Collection activities, certainly not excessive, but some slight tuning might be beneficial.

To reduce the GC overhead for the Dashboard Server, again review the Recommendations section. As the PMAT tool advises a maximum JVM size of approximately 375 MB or greater (and the TBSM 4.2 default is already at 512 MB), a change might not be warranted. However, because the system has plenty of memory, an interesting decision is to choose 768 MB as the new Xmx value, with a new initial size (Xms) of 384 MB.

To make these changes, do the following steps:

  1. Edit the start_dashboard.sh script.
  2. Change the Xmx value from "-Xmx512m" to "-Xmx768m".
  3. Change the "-Xms256m" to "-Xms384m", which is one half of the new Xmx parameter.
  4. Save the changes to the script.
  5. At this point, restart both servers, and rerun the same scenario as before. After it is complete, review the new GC logs in PMAT to determine changes in TBSM performance.
Reviewing the results after tuning: Data server

The following screen capture shows the result after the new garbage collection log has been opened within the PMAT tool for the TBSM Data server.

After the run is complete, load the new dataserver_gc.log file into the PMAT tool and review the Analysis and Recommendations sections. For this "tuned" scenario, the analysis section again indicates that no Java heap exhaustion was found. However, Overall Garbage Collection Overhead is now calculated at 5% (down from 20% prior to tuning), a reduction of 15 percentage points. Less time spent in garbage collection essentially translates to more processor cycles available for application processing.

To illustrate the CPU savings for the Data server, the vmstat data for total processor utilization was collected and plotted in the following chart for the event processing workload of 30 minutes (Note that these CPU comparison results are not a guarantee of service or performance improvement; the performance of each unique TBSM environment will vary):

For the initial event processing workload, the average processor utilization was 43.4% of total CPU on the Data server system. After tuning, the same workload used an average of 18.3% of total processor utilization, a reduction of 57.9% of processor overhead. Also, an extended period of 100% processor utilization late in the run was almost entirely eliminated in the tuned environment.

Reviewing the results after tuning: Dashboard Server

The following screen capture shows the result after the new garbage collection log has been opened within the PMAT tool for the TBSM Dashboard Server.

Review the Analysis and Recommendations sections. For this "tuned" scenario, the analysis section again indicates no Java heap exhaustion. Garbage Collection overhead is now calculated at 9% (down from 10% prior to tuning), which seems to be a minimal gain.

However, an interesting metric to consider is the Total Garbage Collection pause, which is now 252 seconds, down from 288 seconds in the original (untuned) Dashboard Server scenario. As previously stated, application processing is essentially paused while some garbage collection activities occur. Although each of these pauses can range from several milliseconds to several hundred milliseconds (spread over time and unique to each environment), a reduction in total garbage collection pause time is another worthwhile metric to consider.

Finally, to illustrate the CPU comparison for the Dashboard Server, the vmstat data for total processor utilization was plotted in the following chart for the same event processing workload. (Again note that these CPU results are not a guarantee of service or performance improvement; performance for each unique TBSM environment will vary.)

For the initial event processing workload, the average processor utilization was 19.1% of total CPU on the Dashboard Server. After tuning, the same workload now used an average of 20.2% of total processor utilization, a minimal increase in total system processor utilization.

This illustrates an important concept: the larger the Xmx value, the more objects can potentially be loaded into JVM memory because there is additional memory space. Therefore, additional CPU processing is most likely needed by the GC threads to purge the larger JVM heap of unreachable objects.

While a minor increase in overall processor utilization was observed with the larger Xmx setting, the savings of 36 fewer seconds (over the 30 minute period) spent in GC pause time might or might not be worth the trade-off. However, in a production environment, you might want to conduct a longer scenario (perhaps over the course of a day) to determine whether the larger or smaller setting is the better choice.

This is just a basic, but powerful example of how you can use the PMAT tool to become familiar with your own TBSM 4.2 environment. As each TBSM environment is unique for each customer, making educated tuning decisions by using the PMAT tool is a recommended strategy for performance tuning.

Additional Dashboard tuning suggestions

There are some additional considerations for tuning performance of the TBSM 4.2 Dashboard Server to reduce processing overhead. Review the following areas and consider implementing them based on the needs of your unique environment.

Service tree refresh interval: Changing the automatic Service tree refresh interval might help reduce server side workload (on the TBSM Dashboard Server) related to TBSM Web Console activity. The service tree refresh interval is set to 60 seconds by default.

The service tree refresh interval controls how frequently the TBSM Web Console requests an automatic service tree update from the TBSM Dashboard Server. If every client connected to the TBSM Dashboard is updated every 60 seconds, this might affect the Dashboard Server when there are a large number of concurrent consoles. To help mitigate this, you can increase the interval between refreshes.

To do this, edit the RAD_sla.props file in the $TBSM_DATA_SERVER_HOME/etc/rad/ directory:

#Service Tree refresh interval; in seconds - default is 60
impact.sla.servicetree.refreshinterval=120

Canvas update interval multiplier: The update interval multiplier helps to control the frequency of automatic canvas refresh requests initiated by the service viewer for canvases containing business service models of different sizes. The default multiplier is 30.

For example, loading a larger, 100 item canvas takes longer than loading a smaller, 50 item canvas. Because of this, the refresh intervals of the larger canvas should be spaced apart so that the canvas is not constantly in a refresh state. The TBSM Web Console accomplishes this by computing a dynamic refresh interval by taking the amount of time spent performing the previous load request and multiplying it by the update interval multiplier constant. So, if the large service model in this example takes 5 seconds to load, a refresh of the same model is not attempted for another 2.5 minutes (5 x 30 or 150 seconds).

When considering a change to this parameter, keep in mind that there are lower and upper boundaries of 30 seconds and 180 seconds for the refresh interval. As a result, the update interval multiplier is useful only to a certain point.
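The interval computation described above, including the clamping behavior, can be sketched as follows (a simplified model for illustration; the actual console logic may differ in details, and the function name is an assumption):

```shell
#!/bin/sh
# Dynamic canvas refresh interval: previous load time (in seconds) times
# the UpdateIntervalMultiplier, clamped to the 30..180 second bounds.
refresh_interval() {
    load_time="$1"
    multiplier="${2:-30}"                     # default multiplier is 30
    interval=$(( load_time * multiplier ))
    if [ "$interval" -lt 30 ];  then interval=30;  fi
    if [ "$interval" -gt 180 ]; then interval=180; fi
    echo "$interval"
}
```

For the 5-second load in the example above, `refresh_interval 5` yields 150 seconds; a 10-second load would already hit the 180-second ceiling, which is why the multiplier is only useful up to a point.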

Nonetheless, you can easily update the interval multiplier parameter by editing the $TIP_HOME/systemApps/isclite.ear/sla.war/av/canvasviewer_simple.html file on the TBSM Data server and changing the value for all occurrences of the UpdateIntervalMultiplier property to the new value. Because this is a client side property, you do not have to reboot the server for this value to take effect. However, you might need to log out and log on to the TBSM console.

Client side Java Virtual Machine tuning

Within the client Web browser that hosts the TBSM Web Console is a JVM plug-in needed for running client side Java code. Just like a JVM running on either of the TBSM servers, the browser plug-in JVM can also be tuned to specify initial and maximum heap sizes. Typically, an untuned JVM plug-in has an initial heap size of no more than 4 MB and a maximum heap size of 64 MB, though these numbers can vary depending on the platform used.
The graphical Service Viewer is the function most affected by changes to JVM plug-in parameters. It might be possible to improve the performance of the Service Viewer by increasing the initial heap size (-Xms) to 64 MB and the maximum heap size (-Xmx) to 128 MB. Whether this configuration change is really needed or not depends on the size of the business service model that is loaded by the service viewer.
The procedure to change the JVM plug-in tuning parameters can be different depending on the provider of the plug-in (for example IBM or Sun) and also depending on the system (Windows or UNIX). As an example, the following procedure illustrates how to access and set the JVM plug-in parameters for the IBM-provided 1.5.0 plug-in on a Windows system:

  1. Open Control Panel -> Java Plug-in.
  2. Click the Advanced tab.
  3. In the text box under Java Runtime Parameters, type the following value:
    -Xms64m -Xmx128m
  4. Click the Apply button and then close the window.

After these changes are made, it might be necessary for you to log out and log back in to the TBSM console. For a complete list of the supported combinations of Java plug-ins, Web browsers, and platforms, see the TBSM Installation Guide.
Important: To change the JVM plug-in parameters on a supported UNIX system, navigate to the bin directory under the file system location to which the plug-in was installed and look for a shell script named either ControlPanel or JavaPluginControlPanel, depending on your Java version. Run this shell script to launch a GUI that looks similar to the equivalent interface on the Windows system.

PostgreSQL database and the Discovery Library/XML toolkit

A PostgreSQL database can be a very fast database, but the as-is configuration tends to be rather conservative. A few configuration changes to the postgresql.conf file can improve PostgreSQL performance dramatically. Note that these settings worked well in the performance test environment, and are provided as a starting point for your own unique environments.

Important: Back up your original postgresql.conf file before making any changes.

Specific PostgreSQL tuning parameters

Shared_buffers: Sets the number of shared memory buffers that are used by the database server. The default is typically 1000 X 8K pages. Settings significantly higher than the minimum are usually needed for good performance; values of a few thousand are recommended for production installations. This option can only be set at server startup.

Suggestion: shared_buffers = 16384

If you edit this setting on UNIX or Linux systems, also change the pg_buffer parameter in the rad_dbconf file to the same value.
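
Since shared_buffers is counted in 8 KB pages, you can sanity-check the memory footprint of a candidate value with simple shell arithmetic (a throwaway check, not part of the product):

```shell
# 16384 pages x 8 KB per page, expressed in MB
pages=16384
echo "$(( pages * 8 / 1024 )) MB"
```

With the suggested value of 16384, this works out to 128 MB of shared buffers.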

Work_mem: Non-shared memory that is used for internal sort operations and hash tables. This setting limits how much memory any single operation can use before it is forced to spill to disk.

Suggestion: work_mem = 32000

Effective_cache_size: Sets the planner's assumption about the effective size of the disk cache that is available to a single index scan. This is factored into estimates of the cost of using an index; a higher value makes it more likely that index scans are used, a lower value makes it more likely sequential scans are used.

Suggestion: effective_cache_size = 30000

Random_page_cost: Sets the planner's estimate of the cost of a nonsequentially fetched disk page. This is measured as a multiple of the cost of a sequential page fetch. A higher value makes it more likely that a sequential scan is used, a lower value makes it more likely an index scan is used.

Suggestion: random_page_cost = 2
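
Taken together, the four suggestions above correspond to a postgresql.conf fragment along these lines (values are the starting points from the test environment, not universal settings):

```
shared_buffers = 16384          # 16384 x 8 KB pages = 128 MB of shared buffers
work_mem = 32000                # per-operation sort/hash memory, in KB
effective_cache_size = 30000    # planner's estimate of available disk cache, in 8 KB pages
random_page_cost = 2            # relative cost of a nonsequential page fetch
```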

Fsync: To speed up bulk loads by way of the XML Toolkit, disable the fsync parameter in the postgresql.conf file as follows:

fsync = false # turns forced synchronization on or off

The fsync parameter controls whether data is written to disk as soon as it is committed, which is done through the Write Ahead Logging (WAL) facility. Disable it only if you want faster load times; the caveat is that the load scenario might need to run again if the server shuts down prior to the completion of processing due to a power failure, disk crash, and so on.

Vacuuming the TBSM database

After completing a large bulk load, you should vacuum the TBSM Data server database to improve performance. The vacuumdb utility provided with the PostgreSQL database can be used to clean up database storage. Running this utility periodically, or after a significant number of database rows change, helps subsequent queries process more efficiently. The utility resides in $TBSM_HOME/platform/<arch>/pgsql8/bin and can be run as follows:

$TBSM_HOME/platform/<arch>/pgsql8/bin/vacuumdb -f -z -p 5435 -U postgres rad

The parameters for the vacuumdb command:

-f: Performs a full vacuum
-z: Analyzes and updates statistics that are used by the query planner
-p 5435: The port that the database process is listening on
-U postgres: The user ID used to connect to the database
rad: The database name

Important: The TBSM Discovery Library toolkit periodically vacuums the tables that are used by the toolkit. Control of this is handled with the DL_DBVacuum properties in the xmltoolkitsvc.properties file. For more information on these properties, see the Discovery Library toolkit properties. Depending on how often the toolkit imports data, the automatic vacuums might be sufficient.
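
If the automatic vacuums are not sufficient, the full vacuum can be scheduled on UNIX or Linux with a crontab entry such as the following (the expanded path and schedule are illustrative; substitute your own TBSM home and architecture directory):

```
# Run a full vacuum with statistics update nightly at 02:00
0 2 * * * /opt/IBM/tivoli/tbsm/platform/linux/pgsql8/bin/vacuumdb -f -z -p 5435 -U postgres rad
```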

Final thoughts about TBSM 4.2 performance

To review: as long as sufficient JVM memory is configured, TBSM 4.2 is primarily processor dependent (the number and speed of processors are two of the key factors). Use the IBM PMAT tool to assist you in tuning TBSM 4.2 for your own workloads and environments, and be aware of the minimum and recommended hardware specifications for an optimal user experience. The TBSM 4.2 minimum and recommended hardware tables are supplied in the Hardware for production environments section of this document for easy access and review.

Prior to beginning any in-depth performance tuning for TBSM, review the trace_*.log files that are created by both the Data and Dashboard Servers. These logs are in each profile's /logs/server1 directory. Resolve any exceptions or error conditions so that a functional issue does not hamper overall application performance.
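
A quick way to surface such exceptions is a grep across the server1 logs directory (the profile path below is an assumption; substitute your own profile location):

```shell
# List any trace logs containing exceptions or severe errors
logdir="${TBSM_PROFILE:-/opt/IBM/tivoli/tbsm/profiles/Default}/logs/server1"
grep -lE "Exception|SEVERE" "$logdir"/trace_*.log 2>/dev/null || echo "no exceptions found"
```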

After functional processing is observed, two of the primary tuning "knobs" for TBSM 4.2 are the "Xms" and "Xmx" values that control the memory allocation for each of the TBSM JVMs.

For review:

-Xms256m // Sets the initial memory to 256 MB (default)
-Xmx512m // Sets the maximum memory size to 512 MB (default)

After the upper memory setting for Xmx is established (through PMAT analysis), a good rule of Java tuning is typically to set the initial memory allocation to half that of the maximum size. Again, one "size" does not fit all environments, so you might want to try setting the initial value smaller (or larger) and rerun the scenarios. Note that you should not set the Xms value larger than the Xmx value, or the JVM will most likely not start.
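
The half-of-maximum rule can be sketched with a little shell arithmetic (illustrative only; plug the resulting options into your JVM configuration):

```shell
# Derive -Xms as half of a chosen -Xmx, per the rule of thumb above
xmx_mb=512
xms_mb=$(( xmx_mb / 2 ))
echo "-Xms${xms_mb}m -Xmx${xmx_mb}m"
```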

After the Data and Dashboard Servers are properly tuned, if Web Consoles using the Service Viewer feel "slow," review the Client side Java Virtual Machine tuning section on tuning the JRE plug-in JVM, then restart the Console.

Every TBSM environment is unique; with regard to tuning, one size does not fit all. Multiple factors come into play, such as the number and speed of processors, available RAM, service model size, number of concurrent Web consoles, SLAs, and KPIs, to name a few. JVM analysis is the correct way to ensure proper performance tuning is in place.

To do this, a regular schedule for Performance data collections, analysis, and subsequent tuning (as needed) is strongly encouraged. Using the PMAT tool at some regular interval (perhaps monthly) can uncover trends in application throughput on the TBSM Data server, as new Business Services are added to the in-memory model. Also, as additional Web Consoles are added to the Dashboard Server, looking at such metrics as overall garbage collection pause times might be helpful in uncovering tuning areas to reduce application response times while serving a higher number of end-users.

In summary, making performance analysis a proactive subject in your own unique TBSM environments can go a long way to minimizing or preventing future performance concerns.

Hardware for production environments

The following tables summarize the minimum and recommended hardware and configuration for production environments (see the readme file provided with the TBSM 4.2 installation image for the latest information and updates regarding supported hardware).


Table 1: Data server - Recommended hardware and configuration for production

Important: The amount of disk space needed is directly related to how many events are processed by the system and the related logging and log levels configured on the system.


Table 2: Dashboard Server - Recommended hardware and configuration for production

References

  1. TBSM 4.2 Beta Web Conference Series: Performance Tuning: Internal IBM presentation delivered in September 2008 to customers participating in the TBSM 4.2 beta program.

  2. PostgreSQL Online Documentation: http://www.postgresql.org/docs/8.0/static/index.html

  3. Tivoli Business Service Manager 4.2 Installation and Administrator's Guides: http://publib.boulder.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=/com.ibm.tivoli.itbsm.doc/

  4. A reference book for everything related to IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0. (In PDF format.): http://download.boulder.ibm.com/ibmdl/pub/software/dw/jdk/diagnosis/diag50.pdf

  5. Tuning Garbage Collection with the 5.0 Java Virtual Machine: http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

  6. IBM Pattern Matching and Analysis (PMAT) tool from IBM Alphaworks: http://www.alphaworks.ibm.com/tech/pmat

June 01, 2009

Using Tivoli Access Manager Enterprise Single Sign-on with IBM middleware - Removing the dependency on Microsoft components

[This article is sponsored by Peningo Systems, Inc., a provider of IBM Tivoli Consulting and Implementation Services on a nationwide basis. For more information on Peningo Systems, please go to the Peningo Tivoli Access Manager Consultants page. ]

 

Below is an article found on the IBM DeveloperWorks website addressing how to remove dependencies on Microsoft components. This is an excellent article for any Tivoli Access Manager Consultant implementing TAM E-SSO.

IBM® Tivoli® Access Manager Enterprise Single Sign-on (TAM E-SSO) provides cross-application (that is, Web, Java™, mainframe, or terminal services) single sign-on capabilities. The TAM E-SSO AccessAgent and IMS server are supported on Microsoft® Windows® operating system platforms, and typically leverage Active Directory for user management. However, many customers want to leverage their existing investment in IBM middleware products, and also extend the reach of TAM E-SSO beyond their intranet. This article shows how TAM E-SSO can be deployed into an environment consisting of IBM middleware, namely DB2® and IBM Tivoli Directory Server.

 Introduction to TAM E-SSO dependencies

TAM E-SSO mandates the use of a database for storage of product data, including users' wallets and system configuration. The TAM E-SSO IMS server installation media embeds a version of Microsoft SQL Server Express (SQL2K5 Express) for ease of installation. Expect this to change in the future to accommodate IBM DB2 Express. In addition to this embedded database, TAM E-SSO v8 supports the use of IBM DB2 v9.5 and Oracle 9i databases. Many IBM customers' services teams want to leverage existing IBM software deployments to maximise re-use and minimise cost. Therefore, this article focuses on the use of DB2 as the database for TAM E-SSO.

TAM E-SSO also relies on an existing (or new) identity store for management of user data. TAM E-SSO refers to these user repositories as enterprise directories. Since TAM E-SSO is typically deployed within an intranet environment, many customers opt to leverage existing Active Directory deployments for TAM E-SSO. However, this does not suit all customer deployments, so TAM E-SSO provides support for LDAP-based products as enterprise directories. TAM E-SSO v8 supports IBM Tivoli Directory Server (ITDS) 6.1+, SunOne directory 5.1+, Novell eDirectory 8.6+ and Sun Java Directory 5.2+ as LDAP-based enterprise directories. This article outlines how to configure TAM E-SSO to use IBM Tivoli Directory Server (ITDS).

 The operating architecture

In order to simplify the outline, this article assumes the simple deployment illustrated in Figure 1. This deployment best represents a single server TAM E-SSO IMS server installation connecting to an enterprise ITDS server.

 Figure 1: TAM E-SSO v8 conceptual architecture


Note: When all components are deployed on the same machine, the IBM DB2 v9.5 shipped with Tivoli Directory Server v6.2 can technically be used to host the TAM E-SSO database, but licensing restrictions might apply. This might be the perfect arrangement for a proof of concept, but take care to ensure the DB2 database instances are suitably scaled according to the usage patterns of the products.

 

Although this environment is simplistic, scaling the components for higher availability should be transparent to the product configuration outlined in this article.

If you intend to follow the configuration steps outlined within this article, a number of prerequisite tasks should be performed first.

  • ITDS v6.1 must be installed and configured on the ITDS server machine.
  • TAM E-SSO v8 IMS server installation images must be available on the IMS server.
  • DB2 9.5 must be installed on the IMS server.
  • TAM E-SSO AccessAgent software needs to be copied onto the ITDS server.
  • The servers must communicate over TCP/IP. Add each server's hostname to the %SystemRoot%\System32\drivers\etc\hosts file, so that server names can be used rather than IP addresses. This makes the configuration more portable.
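
For example, the hosts file on each machine might gain entries like these (hostnames and addresses are illustrative):

```
192.168.1.10    imsserver
192.168.1.11    itdsserver
```
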
Configuring TAM E-SSO to use DB2

DB2 should be installed and configured in accordance with the installation instructions on page 56 of the TAM E-SSO Deployment Guide. These instructions worked well for the installation used in this article's development.

After DB2 is configured, the IMS server must be installed. Whilst performing the install, select the custom installation option, and point the installation at the DB2 server set up on the IMS server. One point to note is that when performing the IMS server installation, the DB2 database table setup seems to take a long time. This is normal; be patient during this step. If you are concerned about the time it is taking, monitor the IMS installation log at c:\TAM_E-SSO_IMS_installer.log. When installed successfully, the tomcat stdout.log file, located as below, is a good reference for determining system state.

 Figure 2: Tomcat file system error log
 

Note that no ITDS configuration is performed at setup time.

When the setup is complete, the IMS Web-based configuration utility starts. When the configuration utility loads, the domain configuration page is displayed.

 


 

The stdout.log file contains the most useful information for witnessing server operation.

You also might want to consider changing the IMSService Windows service to a manual startup option at this point. Making the IMSService a manual startup process provides greater control over the process at reboot time.

 

 Configuring TAM E-SSO to use ITDS

The ITDS instance now needs to be set up with the objects required for users registering through the TAM E-SSO AccessAgent. When this is done, IMS can be configured with the ITDS as the enterprise directory.

Setting up ITDS with test users

On the ITDS machine, the first step is to create the suffix for storing users and groups, for example, o=ibm,c=au. In the development of this article, the following LDIF was loaded into the ITDS server. This LDIF includes a number of test users.


Example LDIF for creating the LDAP objects

 

dn: o=ibm,c=au
objectclass: organization
o: ibm

dn: cn=chrish,o=ibm,c=au
objectclass: inetorgperson
userpassword: passw0rd
cn: chrish
sn: hockings

dn: cn=root,o=ibm,c=au
objectclass: inetorgperson
userpassword: passw0rd
cn: root
sn: root

 

 

You might want to grant the cn=root,o=ibm,c=au user the ability to search, add, delete and modify entries within the directory, but by default the user can search the repository, which is all that is required for TAM E-SSO. The next step is to configure the ITDS as the enterprise directory within the IMS.

Configuring the IMS to use ITDS

It is now time to set up the IMS to use the ITDS server as the identity and authentication store. Start the IMS configuration utility. Note that the IMS configuration utility starts at the Add a domain option, which is used for Active Directory domain configuration. This is a little confusing, because domain creation is not required for configuring other enterprise directories, such as ITDS.

On the IMS Configuration Utility landing page, select Enterprise Directories in the left column. On the right side, select Add Directory, as shown in Figure 3.

Note that after IMS installation, the AccessAnywhereEnterpriseDirectory is configured, allowing any user to register without validating credentials. Hence, if there is any attempt to create an IMS administrator prior to this point, it will accept any username/password combination. It is never checked against a directory.

 Figure 3: Add enterprise directory

The next step is to add a name and description for the new enterprise directory, as shown in Figure 4.

 Figure 4: IMS add enterprise directory

Make sure that Include this directory in TAM E-SSO user validation is selected. When this is done, this directory becomes the authentication service for registering users through AccessAgent.

Now select the Add button, and select the Generic LDAP connector, as shown in Figure 5.

 

 Figure 5: IMS add LDAP details

The ITDS server information must now be entered in the screen shown in Figure 6.

 Notice the username is simply root. The IMS server automatically (either sensibly or not) adds the cn= to the front and the user container to the end, to result in cn=root,o=ibm,c=au.

 Figure 6: IMS add standard LDAP configuration

 


Open the Advanced configuration keys twisty and configure ITDS details. SSL should not be configured at this point. The instructions for setting SSL up are provided later in this article. Note that the full class name, that is, com.sun.jndi.ldap.LdapCtxFactory, is not shown in the diagram below.


Figure 7: IMS add advanced LDAP configuration

The configuration can now be tested by selecting the Save and test button. The result is a success message displayed at the top of this configuration page. If an error appears, check the details entered. Consult the IMS stdout.log file if the problem cannot be determined from the configuration information. The Troubleshooting section below shows techniques for resolving issues encountered.

 

 

 

 



Provisioning your IMS administrator

The next step is to provision an IMS administrator, in this case, the cn=root,o=ibm,c=au account. On the IMS configuration utility Web page, select the Create an IMS Administrator link. Now enter root/passw0rd and complete the task.

Having configured an IMS administrator, that user can now log in to AccessAdmin to configure IMS session behaviour. Access the URL for AccessAdmin through the TAM E-SSO start menu folder. When prompted, authenticate using the IMS administrator, root/passw0rd. If the Search users option is now selected in AccessAdmin, the registered users are displayed. Upon selecting a user, the enterprise directory that user was registered with can be determined. Check that the root user has ims_ldap\ as its prefix. This provides confidence that the ITDS enterprise directory is configured correctly.


 


Configuring the IMS server session behaviour through AccessAdmin

The next step is to configure the IMS server for connections from the AccessAgent instances. Select the Setup assistant on the left side menu of the AccessAdmin Web application. The following page is displayed. Note that the Begin button appears on the lower right side of the page.


Figure 8: Setup IMS user sessions
The system can now be configured in accordance with the requirements for user session management. This article simply enables self-service options on a shared desktop. Complete the configuration according to your specific requirements.

The following selections have been made for this article:

  1. Enable self-service options
  2. Support shared workstations
  3. Use a shared desktop
  4. Continue to select the defaults for all other options

 


Successful user registration

The next step is to install the AccessAgent on the ITDS server instance. Perform the installation in accordance with the product documentation. When installed, AccessAgent will ask for the IMS server location. Use the hostname of the IMS server in the environment. The system will then reboot.

When the system reboots, instead of the standard Windows Authentication window, the TAM E-SSO GINA is presented. By selecting the Go to Windows Logon option and authenticating as Administrator, the Windows desktop is displayed. Before proceeding, check that the ITDS server started successfully (through Microsoft services). Now, right-click the AccessAgent icon on the toolbar, as shown below.


Figure 9: AccessAgent Taskbar Options

Select the Sign up link. Enter the text chrish/passw0rd. The AccessAgent then performs first-time registration, asking the user to select two Q&A responses and to reset their wallet password. Note that this does not change the Enterprise Directory password.

When completed, the AccessAgent changes to a bright red color (not flashing). This signals that the user has registered and is now logged into their new wallet.


 


Unsuccessful user registration

Now, right-click the AccessAgent taskbar icon and select Logoff AccessAgent. Then try to register a user that does not exist within the ITDS. Proceed through the self-registration functions. When the final submit is performed, the AccessAgent displays the following message.


Figure 10: AccessAgent Failed Sign up

This then proves that the ITDS instance is authenticating users during registration.


 


Setting up SSL for the enterprise directory

As with any SSL configuration exercise, there is a client-side SSL component and a server-side SSL component to configure. The server in this case is the ITDS server, with TAM E-SSO IMS acting as the client. This section outlines the configuration steps required for both the server and the client. Before attempting to configure SSL, make sure the IMS server and ITDS server have the same timezone and time settings.

Setting up SSL for ITDS

The first step is to configure the ITDS server with a self-signed certificate to use for SSL connections. Self-signed certificates are a convenient way to configure a non-production server to perform SSL. Of course, production servers should use trusted certificate authorities to create certificates.

On the ITDS server machine, open a command prompt and issue the following commands.


Commands for setting up SSL with ITDS
C:\Program Files\IBM\GSK7\bin>gsk7cmd -keydb -create -db c:\serverkey -pw passw0rd -type
cms -stash

C:\Program Files\IBM\GSK7\bin>gsk7cmd -cert -create -db c:\serverkey.kdb -pw passw0rd
-label testlabel

-dn "CN='tam6',o=ibm,c=au -default_cert yes -expire 999

C:\Program Files\IBM\GSK7\bin>gsk7cmd -cert -list -db c:\serverkey.kdb -pw passw0rd
Certificates in database: c:\serverkey.kdb
Entrust.net Global Secure Server Certification Authority
Entrust.net Global Client Certification Authority
Entrust.net Client Certification Authority
Entrust.net Certification Authority (2048)
Entrust.net Secure Server Certification Authority
VeriSign Class 3 Public Primary Certification Authority
VeriSign Class 2 Public Primary Certification Authority
VeriSign Class 1 Public Primary Certification Authority
VeriSign Class 4 Public Primary Certification Authority - G2
VeriSign Class 3 Public Primary Certification Authority - G2
VeriSign Class 2 Public Primary Certification Authority - G2
VeriSign Class 1 Public Primary Certification Authority - G2
VeriSign Class 4 Public Primary Certification Authority - G3
VeriSign Class 3 Public Primary Certification Authority - G3
VeriSign Class 2 Public Primary Certification Authority - G3
VeriSign Class 1 Public Primary Certification Authority - G3
Thawte Personal Premium CA
Thawte Personal Freemail CA
Thawte Personal Basic CA
Thawte Premium Server CA
Thawte Server CA
RSA Secure Server Certification Authority
testlabel

C:\Program Files\IBM\GSK7\bin>gsk7cmd -cert -create -db c:\serverkey.kdb -pw passw0rd
-label testcert

-dn "cn=tam6,o=ibm,c=au" -default_cert yes -expire 999

The next step is to create an LDIF file for uploading the certificate and configuration information into ITDS. The text file appears as follows:


Creating LDIF configuration for SSL
dn: cn=SSL,cn=Configuration
changetype: modify
replace: ibm-slapdSslAuth
ibm-slapdSslAuth: serverAuth
-
replace: ibm-slapdSecurity
ibm-slapdSecurity: SSL

dn: cn=SSL,cn=Configuration
changetype: modify
replace: ibm-slapdSSLKeyDatabase
ibm-slapdSSLKeyDatabase: c:\serverkey.kdb
-
replace: ibm-slapdSslCertificate
ibm-slapdSslCertificate: testlabel
-
replace: ibm-slapdSSLKeyDatabasePW
ibm-slapdSSLKeyDatabasePW: passw0rd

Upload the file contents with the following command:


Loading SSL configuration into ITDS
C:\Program Files\IBM\GSK7\bin>idsldapmodify -D cn=root -w passw0rd -i file.ldif -p 389
modifying entry cn=SSL,cn=Configuration

modifying entry cn=SSL,cn=Configuration

The final server-side configuration task is to extract the self-signed certificate so that it can be loaded into TAM E-SSO. The gsk7ikm utility can be used to extract the certificate. Simply open the CMS file and export the certificate created above, as shown in Figure 11.


Figure 11: Export certificate in der format

Copy this certificate onto the IMS server, placing it in the c:\ directory.

Restart the ITDS server instance through the Windows service manager. Check that the server is listening on port 636 by issuing netstat -an and looking for a LISTENING entry on that port.

Setting up SSL for TAM E-SSO

The next step is to setup SSL for TAM E-SSO. There are two steps involved. First, the ITDS exported CA certificate must be added to the trusted CA certificate store used by tomcat. This can be done by issuing the following commands at the command prompt:


Loading the certificate CA into Java trust store
   C:\Encentuate\IMSServer8.0.0.12\j2sdk1.5\bin>keytool -import -alias ldapcert -file
c:\cert.der

-keystore C:\Encentuate\IMSServer8.0.0.12\j2sdk1.5\jre\lib\security\cacerts
Enter keystore password: changeit
Owner: CN='tam6', O=ibm, C=au -default_cert yes -expire 999
Issuer: CN='tam6', O=ibm, C=au -default_cert yes -expire 999
Serial number: 7620914a145654c4
Valid from: 12/22/08 10:34 AM until: 12/23/09 10:34 AM
Certificate fingerprints:
MD5: C1:8B:81:C3:C3:EA:37:EB:68:4D:22:C8:59:39:6F:B9
SHA1: 35:0F:A6:20:C1:EF:43:5F:45:CB:24:F3:C4:E7:C3:D3:0E:5A:8D:07
Trust this certificate? [no]: yes
Certificate was added to keystore

Restart the IMS server.

Are the time zones in sync? This might be a good time to check while the IMS server is restarting.

The next step is to update the IMS enterprise directory configuration to enable SSL. Open the IMS Configuration Utility, select Enterprise Directories, and select the ITDS server entry. Now select Update directory. The LDAP server URI must now include the new protocol and port, as follows: ldaps://ldap-hostname:636. Within the advanced configuration keys, the LDAP security protocol must be changed to ssl. Now select Save and test, which results in a success message being displayed at the top of the configuration page.

You should now be able to re-test some user registration processes to confirm the SSL changes are correct.


 


Troubleshooting

Most of the problems encountered during the development of this article were not related to functional product issues (other than those where tips have been provided in the text above), but rather to the networking and system configuration of the machines involved. The following section outlines methods that can be used to debug product-specific issues.

ITDS problems

A person skilled in the art should be able to debug server-side issues with LDAP, so this section does not focus on that area.

Problems encountered in the development of this article were mainly due to actual connectivity issues as well as LDAP protocol request data anomalies.

Wireshark was used extensively to debug the connectivity and LDAP problems encountered. To configure Wireshark to listen on a particular network interface, simply select Capture->Interfaces from the menu and then select the adapter that the protocol information will flow through. Next run a test, like registering a new user. This should result in a Wireshark output similar to that shown below.


Figure 12: Wireshark trace of LDAP

You can also test connectivity simply by using a telnet client (the telnet command on Windows) to access the port used by the ITDS server. You can do this by issuing telnet itds-server-name 389 from a command prompt. If connectivity is OK, you might need to install the ITDS client on the IMS machine and simulate the LDAP requests being performed.

Of course, Wireshark cannot be used for inspecting SSL-encrypted payloads, so I recommend ensuring that the IMS-to-ITDS configuration works without SSL before enabling it. Wireshark can, at the very least, reveal problems within the SSL handshake, which can be useful at times. In addition to the handshake errors, inspecting the IMS server stdout log file gives stack traces for any other errors encountered.

IMS problems

The IMS server stdout.log file is the best place to identify issues at any time during the configuration exercise. Monitor it closely to ensure the system remains stable across your configuration changes. A number of other tips include:

  • During installation, make sure you follow the product installation instructions closely, always rebooting whenever required.
  • If an AccessAgent has been configured against a particular IMS server and you attempt to point it at a different one, it fails. If this happens, try re-installing the AccessAgent.
  • Also, make sure the IMS tomcat process is fully initialised before attempting any operations. When the IMS server has started and CPU drops to zero, the DB2 process hovers around 200MB, and the tomcat process is about 600MB.
  • If you are setting up the IMS server on VMware, I found it needed to be allocated 2 GB of memory for that image. Any less than this amount caused issues on IMS server startup.

 


Conclusion

Although TAM E-SSO is typically used internally to provide single sign-on services to intranet users, there are many use cases where LDAP-based enterprise directories are more relevant in a TAM E-SSO deployment. For those cases, this article has provided detailed instructions on how to configure such a deployment without the need for Active Directory. Without Active Directory, TAM E-SSO maintains its business benefits and extends its reach beyond an internal Active Directory deployment.