Benchmark (hwm_benchmark)
The benchmark utility can be used to check the network accessibility and the performance of the communications used by ProcMan and its modules. The currently implemented benchmark tests allow checking, for example, database access, LDAP and z/OS authentication (the latter using the horcclnt utility), Control-M access, etc.
The configuration for the tool itself and for the benchmark tests is stored in the file etc/hwm_benchmark_config.php and in the files for module-specific benchmark tests, e.g. etc/hos_ctm_benchmark_config.php for the Control-M module. The individual options are described in in-line comments in the etc/*.sample versions of these files (e.g. etc/hwm_benchmark_config.php.sample).
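As a rough illustration of the file format, a hypothetical excerpt is shown below. The option name is invented for this sketch; the options that actually exist must be looked up in the .sample files.

<?php
// Hypothetical excerpt in the style of etc/hwm_benchmark_config.php.
// The option name 'iterations' is illustrative only; consult the
// matching etc/*.sample file for the options that actually exist.
$config['iterations'] = 10;  // default number of runs per benchmark test
?>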
To start the utility, change to the directory where ProcMan is installed (with the cd command in a command window, or by setting the working directory in a scheduler) and execute the command:
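(The command lines below are a sketch; the launcher names hwm_benchmark.bat and hwm_benchmark.sh are assumptions derived from the utility name.)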
Windows:
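hwm_benchmark.bat <file> [-i <iterations>]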
Unix:
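./hwm_benchmark.sh <file> [-i <iterations>]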
In the command, replace the parameter <file> with the path/name of the file in which the benchmark result is to be stored in CSV format. If the specified file name does not end with the .csv extension, the utility appends it automatically.
In the optional parameter -i, replace <iterations> with the number of iterations, i.e. how many times each benchmark test is to be run. If -i is omitted from the call, the value specified for it in the configuration is used.
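For example, assuming the Unix launcher name sketched above, the following call runs every benchmark test five times and stores the result in result.csv:

./hwm_benchmark.sh result -i 5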
For every test type, the table exported in CSV format contains the time of the fastest and the slowest test run, the mean run time over all iterations, the maximum expected time per run, and the result of the benchmark, together with the start/end times of the whole benchmark. The result for each test type is derived from the mean time and the expected time: it is declared fast if the mean time is lower than the expected time, slow if the mean time is greater than the expected time, and very slow if the mean time is more than twice the expected time. If an error occurs during any run of a test type, the result of that test type is set to error; in that case the error messages are displayed at the end of the utility's output.
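The classification rule can be summarized as in the following minimal sketch; the function and variable names are illustrative and not taken from the utility's source.

<?php
// Minimal sketch of the result classification described above;
// names are illustrative, not taken from the utility's source.
function classifyResult(float $meanTime, float $expectedTime, bool $hadError): string
{
    if ($hadError) {
        return 'error';      // any failed run marks the whole test type
    }
    if ($meanTime > 2 * $expectedTime) {
        return 'very slow';  // mean is more than twice the expected time
    }
    if ($meanTime > $expectedTime) {
        return 'slow';       // mean exceeds the expected time
    }
    return 'fast';           // mean is below the expected time
}
?>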
Alternatively, benchmarks can also be started from the administration dialog in ProcMan. For more information, see the .