When this constant is FALSE then no debug information is printed to the log file. To print debug information set it to TRUE:
This constant sets how detailed the logs written to the HOS log file are when DEBUG_FILES_ENABLED is true. The higher the number, the more detailed the information that is printed.
This constant is false by default, which deactivates debug information on the screen. It is intended only for HORIZONT support and you should keep it false.
This PHP variable is an array that contains the names of files with dictionaries. By default only the hos.xml file is in this list. This file contains translations of all text appearing in Procman/HOS in English and German. You may need to define your own dictionary with text used in your processes. Because you should never put your own text into hos.xml (it is always part of an official new release and your changes would be lost), you can create your own dictionary and register it via this variable.
Example (default setting) :
To add another dictionary (in this case loaded from my_own_dictionary.xml), add another line similar to this:
The order of the files is important. If the same text id is used in several files then the one read last wins. Therefore, always keep hos.xml at the top and your own dictionaries below it.
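A sketch of what this registration might look like; the variable name $hos_dictionary_files is a hypothetical placeholder for illustration, so check your hos_config.php for the actual name:

```php
// Default setting: only the official dictionary shipped by HORIZONT.
// NOTE: $hos_dictionary_files is a hypothetical name for illustration.
$hos_dictionary_files = array('hos.xml');

// Register your own dictionary below hos.xml so that your texts
// override the official ones (the file read last wins).
$hos_dictionary_files[] = 'my_own_dictionary.xml';
```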
The basic configuration (options not related to processes) is divided into several files:
By setting this constant to TRUE a script_access_log file is created. Usually there is no reason to activate this kind of log and you should keep the constant false. If it equals true then the log is created in a file set by option.
- This file contains default definitions provided by HORIZONT which you usually should not edit. There are only a few options that you can change. Never touch options that are not described in this document, otherwise the application may work incorrectly.
- This file contains options that you can change. Here you find the definition of systems, dataset names, users, z/OS connectors (HORILST), settings of XINFO parameters for XINFO analysis, and many other options that control how Procman/HOS behaves.
- This file configures the database connection.
- This file sets a few constants that control the behavior of JCL processes and is reserved for further JCL options.
- This file sets a few constants that control the behavior of SYSIN processes and is reserved for further SYSIN options.
- This file configures the connection to z/OS via the HORILST tool. You can have several INI files, one per system. The names of these INI files must be specified in hos_config2.php.
This option controls the compatibility of the PHP code with the z/OS modules. If you use PRC scripts (the older version) on the host, set the version to "3.2". If you use the latest PRX scripts (the extended version of PRC), version "5.0" is required.
Procman/HOS allocates temporary datasets on the system where it submits jobs (for JCL analysis, generation, checking or copying). The dataset name is built by appending the last qualifier to the prefix that is configured by base_tmp_dsn option.
In the sample below, all datasets created on DEMO_ID system start with P391D.HOS.DEMO.TEMP and datasets created on SYSH_ID system start with P391D.HOS.SYSH.TEMP.
This setting is valid for all clients (as there is an asterisk). You can replace it with a client name (and also add more clients) to have the prefix different for each client.
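A sketch of the setting described above; the exact array nesting is an assumption based on the other environment options shown in this document:

```php
// Temporary dataset prefix per system; '*' makes the setting valid
// for all clients. Replace '*' with a client name (or add more client
// entries) for per-client prefixes.
$hos_config['base_tmp_dsn'] = array(
    '*' => array(                           // client name
        'DEMO_ID' => 'P391D.HOS.DEMO.TEMP', // datasets on DEMO_ID
        'SYSH_ID' => 'P391D.HOS.SYSH.TEMP', // datasets on SYSH_ID
    ),
);
```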
The value entered here is used when skeletons are substituted (when ##CODEPAGE## variable is replaced). PRX code must know the codepage used on z/OS so that it is able to correctly convert special characters. The default value (if it is not explicitly set by codepage option) is 273.
Example:
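A minimal sketch of the setting; the exact key and placement in the configuration array are assumptions:

```php
// Codepage used on z/OS; substituted for ##CODEPAGE## in skeletons.
// 273 (German EBCDIC) is the default when the option is not set.
$hos_config['codepage'] = '273';
```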
This file is included from hos_config.php and extends $hos_config PHP variable (array allocated in hos_config.php) with many options that configure your installation.
For the initial setup, e.g. after an installation, please look into the following article:
This is a very important option.
The process configuration defines a list of operations (tasks) executed in sequence. This sequence configures what the process does. But you must also set where and with what data the process works: what JCL skeletons are used, what datasets and systems are used, what JCL statements are supported, and so on. For this kind of options you must use the environment. You can have a general part of the environment that is valid for all processes, and a specific part that is valid only for a particular process. The general part and the specific part are usually merged together, which produces the final set of environment options.
The environment is sometimes very large, especially when you have many processes or several clients. In this case it is good practice to split the environment into several files, for example one per client, and include them like in the following example:
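A sketch of such a split; the per-client file names are hypothetical, only hos_env_general.php is named in the text:

```php
// Include the general part first (most wildcards), then one file
// per client; options in later files may overwrite earlier ones.
include 'hos_env_general.php';
include 'hos_env_client_a.php';   // hypothetical client file name
include 'hos_env_client_b.php';   // hypothetical client file name
```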
A content of each included file looks like in the sample below (only a part is displayed here, as the setting is too large).
This example is used for the generic part of the environment that is valid for all clients, processes, activities and environment names. This is because of the asterisks used for client, process, activity and name. This way you can predefine default values (typically templates - sometimes called skeletons - supported statements, ...). This general part should be placed at the top (therefore hos_env_general.php is the first included file in the sample). Some options can later be overwritten by other included files. The rule is that the last setting wins. Procman processes the complete environment definition from top to bottom; if an option is defined multiple times then the last definition is used.
For this reason always follow this simple rule: put the most general settings (with the most wildcards) at the top and the more restrictive ones (with named clients, processes, activities, ...) below.
Comments typed in green in the sample explain the meaning of the value on the line where they are coded.
You can type * (it means all clients) or a real client name on the line with '// client name' comment.
You can type * (it means all processes) or a real process name on the line with '// process name' comment.
You can type * (it means all activities) or a real activity name on the line with '// activity name' comment.
You can type * (it means all environment names) or a real environment name on the line with '// environment name' comment.
Of course, you can easily put the environment directly into hos_config2.php without splitting it into several files. Then the option looks like this (also not complete):
A sample with explicitly specified values (no asterisks are used):
It is clear what client, process and activity are. But what is the environment? Every process contains the m_environment module at the beginning. This module scans your environment setting and finds all distinct values used as environment names. In the case of our samples only 'Test environment 1' is found, so there is only one environment name. When the module finds only one environment, it uses it automatically. If more environment names are found, the module offers a selection. Once the environment is selected, the whole process works with its settings. You can therefore define more than one environment (each containing system definitions, templates, ...) and let the user choose one at the beginning of the process. Although this idea can be useful in some cases, it is not usually configured in real installations. Most often you have only one environment name that is used throughout all your configuration files. Just make sure that once you run a process, the environment name in your configuration is not changed any more.
In the following subsections all environment options are described in detail.
format - specifies format of the generated members.
automatically_filled - specifies a number of jobs per source member generated when Duplicate button is pressed.
silent - when it equals true, copies are created without any page being displayed.
This option specifies the number of generated job names per each source member when the Duplicate button is pressed.
This option specifies how new member names are generated when the Duplicate button is pressed. By default (when the option is empty) the names are constructed by concatenating 'DUP' and a serial number coded on 5 positions, padded with zeroes from the left. For example, if you want to duplicate 3 members, then DUP00001, DUP00002 and DUP00003 are generated by default. Of course these names can be edited later.
Alternatively, if you are not satisfied with the default format, you can set different rules via the duplicate_member_format option.
Tokens listed below are searched for in the duplicate_member_format option and, if found, are replaced with the values they represent. All remaining letters are kept at their positions.
&userid. name of the current user.
&userid[1-8]. user name padded on the right with the letter 'X' up to the length specified by the number, or a substring of the user id if the number is lower than the name's length. [1-8] means a number in the 1-8 range.
&procid. process id.
&procid[1-8]. process id padded on the left with 0 up to the length specified by the number, or a substring of the process id if the number is lower than its length. [1-8] means a number in the 1-8 range.
&suffix. one automatically generated letter.
&inc. sequence number (1, 2, ...).
&inc[1-8]. sequence number coded on the specified number of positions, padded with zeroes from the left.
&member. source member name.
&member[1-8]. substring of the source member name (the number specifies the new length).
&var(?). value of an HWM variable (code the variable name in place of the question mark).
&var(?)[1-8]. substring of the HWM variable value.
The dot coded at the very end of duplicate_member_format can be omitted.
A few samples (assuming that the user name is P391D, PID=166, automatically generated suffix is 'W' and the sequence number is 12):
$&userid.&suffix - result: $P391DW
TEST&suffix.&procid3 - result: TESTW166
&suffix.&procid6.&suffix - result: W000166W
PO&inc6 - result: PO000012
This option combines the old and options. You should use this instead of the options coded separately.
For more details about the format see option.
For more details about automatically_filled option see option.
Note: This option is deprecated and you should use instead.
Note: This option is deprecated and you should use instead.
Typically you have several user forms defined for DD statements in your Procman/HOS installation, and typically the PRX code recognizes which form to assign to each DD statement while the JCL analysis is running. After the analysis you can see all DD statements found in the analyzed job in a table and you can start to enter user input:
The content of the select boxes in the DD Type column is built from the jcl_dd_forms option. This option maps form names (names of the form XML without the ".XML" extension) to text ids used in a dictionary. As a result you see the correct texts in the select boxes and you can change the form type with the mouse.
This option sets which log files are downloaded after the JCL analysis. You can deactivate some of the files if you don't need them; nevertheless this is not recommended, as every file contains useful information. Setting a value to true, as in the example, enables the log file and Procman/HOS downloads it.
JCKDTLO detailed listing of errors, warnings and notifications found by SmartJCL
JCKSUMO summary listing.
SYSPRINT SYSPRINT of SmartJCL.
This option configures IWS variable substitution. In order to run the substitution, a setting for the selected target system and dataset must exist. You can type the target system explicitly or use an asterisk if the setting is valid for all systems. The same holds for libraries: enter an asterisk or specify a DSN mask. If the current target system and dataset match TargetSysIdMask and TargetDsnMask then the setting in the internal array is used. The meaning of the individual options is as follows:
template template name used as a base for the job that is generated and submitted (calls PGM=OSJIBAT).
tws_sys id of the system where the substitution runs.
app_id_pad letter used for padding the application id to 16 letters if app_id_prefix is not specified.
app_id_prefix prefix used for padding the application id to 16 letters. When it exists, app_id_pad is ignored.
wsid id of the workstation that appears in the substituted job.
iat input arrival time in HHMM format that appears in the substituted job.
This option controls which JCL statements are supported by Procman/HOS for modification via user forms. You can set forms in PRX scripts (if you have their XML definition ready) for the JCL statements that are enabled in this option. If you don't need forms for some of the statements (sometimes SET or INCLUDE is not needed), you should disable them via this option.
Note that JOB, EXEC and DD are always enabled. You can disable only OUTPUT, SET and INCLUDE.
Via this option you can specify datasets that can never be deleted, or from which members can never be deleted. If a user selects such a dataset in a DELETE process, he is informed that the deletion is not allowed.
You can use wildcards instead of the system id (SYSH_ID in the example). In the inner array you can specify datasets or masks with wildcards. If the dataset that a user selects in the process matches an item in exclude_delete, the deletion is not allowed. If you code ( and ), as in the first two lines of the example, then the dataset can't be deleted and neither can any of its members. When the brackets are missing, members can be deleted from the dataset, but not the dataset itself. The result of the example is as follows:
members can't be deleted from P391D.HOS.DATA.OUT
members can't be deleted from P391D.HOS.DATA.MIRROR1
P391D.HOS.DATA.OUT can't be deleted
P391D.HOS.DATA.MIRROR1 can't be deleted
P391D.HOS.DATA.MIRROR2 can't be deleted
P391D.HOS.DATA.MIRROR3 can't be deleted
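A setting producing the result above might look like the fragment below; the exact nesting inside the environment array is an assumption:

```php
// Wildcards are allowed in place of the system id and in dataset masks.
'exclude_delete' => array(
    'SYSH_ID' => array(
        '(P391D.HOS.DATA.OUT)',     // brackets: neither the dataset
        '(P391D.HOS.DATA.MIRROR1)', //   nor its members may be deleted
        'P391D.HOS.DATA.MIRROR2',   // no brackets: members may be deleted,
        'P391D.HOS.DATA.MIRROR3',   //   the dataset itself may not
    ),
),
```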
Use this option if you want the copy module to copy members also to additional libraries. Members written to mirrored datasets are not stored in the database. This option only ensures that members that are copied to the target libraries are also copied, as a backup, to the mirrored libraries. If the real target system and dataset match the ones specified by the target_mirror option then copying to the locations specified in the inner array is done. You can copy members to more than one library or system if you want. You can use wildcards both in and and also in and , as shown in the sample below.
In the case of the above example this mirroring occurs:
If the target dataset is P391D.HOS.DATA.SYS on SYSH_ID system then members are mirrored to P391D.HOS.DATA2.SYS on the same SYSH_ID system.
If the target dataset is P391D.HOS.DATA.OUT on SYSH_ID system then members are mirrored to:
P391D.HOS.DATA.MIRROR1, ...MIRROR2, ...MIRROR3 on SYSH_ID system
P391D.HOS.DATA.PRE.MIRROR1, ...MIRROR2 on SYSH2_ID system.
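A sketch of a target_mirror setting matching the mirroring described above; the shape of the inner array (system/dataset pairs) is an assumption for illustration:

```php
// Maps target system + dataset to one or more mirror locations.
'target_mirror' => array(
    'SYSH_ID' => array(
        'P391D.HOS.DATA.SYS' => array(
            array('SYSH_ID', 'P391D.HOS.DATA2.SYS'),
        ),
        'P391D.HOS.DATA.OUT' => array(
            array('SYSH_ID',  'P391D.HOS.DATA.MIRROR1'),
            array('SYSH_ID',  'P391D.HOS.DATA.MIRROR2'),
            array('SYSH_ID',  'P391D.HOS.DATA.MIRROR3'),
            array('SYSH2_ID', 'P391D.HOS.DATA.PRE.MIRROR1'),
            array('SYSH2_ID', 'P391D.HOS.DATA.PRE.MIRROR2'),
        ),
    ),
),
```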
In the following examples the mapping takes effect when any qualifier(s) appear in place of the asterisk:
Or with their variables:
It will map:
DDD.PROCMAN.TARGET.AAA.STEUKA to DDD.PROCMAN.TARGET.AAA.STEUKA.MIRROR
DDD.PROCMAN.TARGET.BBB.STEUKA to DDD.PROCMAN.TARGET.BBB.STEUKA.MIRROR
DDD.PROCMAN.TARGET.AAA.BBB.STEUKA to DDD.PROCMAN.TARGET.AAA.BBB.STEUKA.MIRROR
The mirroring also works when files are deleted. In this case approved members are deleted from the target libraries as well as from all their mirrors.
When Procman/HOS needs to submit a job on the host, it first reads an appropriate template (skeleton of the JOB) and substitutes special variables (surrounded with ##). The result of this substitution is a real valid job that is subsequently copied to a temporarily allocated dataset and submitted. There are many templates in Procman/HOS, used in many scripts: templates for JCL analysis and generation, for checking JCL after it has been edited, for copying to the host, and much more. In the PHP scripts, template names are used instead of member names so that member names can be set freely. The mapping of template names to member names must be correctly defined by the template option. The example below is not complete; it only shows the syntax. A list of standard templates is given below the sample.
Standard templates:
jck_job_new JCL analysis in JCL NEW processes (m_jcl_analyse_1 module).
jck_job_change JCL analysis in JCL CHANGE processes (m_jcl_analyse_1 module).
jck_job_editchk1 JCL check of jobs in the request activity (m_jcl_change, m_jcl_change_objects, m_jcl_generate modules and in the preview started from m_jcl_fillforms module).
jck_job_editchk1_tws JCL check of jobs with IWS variable substitution in the request activity (m_jcl_change, m_jcl_change_objects, m_jcl_generate modules and in the preview started from m_jcl_fillforms module).
jck_job_ref JCL generator in JCL processes (m_jcl_analyse_2 module).
jck_job_approve JCL check of jobs started from m_jcl_approve module.
jck_job_tws_approve JCL check of jobs with TWS variable substitution started from m_jcl_approve module.
jck_job_final final JCL check of jobs executed after members have been copied to the target libraries (m_jcl_final_check module).
jck_job_fast fast JCL check of jobs (from target libraries, members are not copied to temporary dataset), available in m_hos_jcl_approve module.
approve_regenerate_job JCL generator started from m_jcl_approve module (uses selected jobs as the input).
jck_proc_new JCL analysis in PROC NEW processes (m_jcl_analyse_1 module).
jck_proc_change JCL analysis in PROC CHANGE processes (m_jcl_analyse_1 module).
jck_proc_editchk1 JCL check of procedures in the request activity (m_jcl_change, m_jcl_change_objects, m_jcl_generate modules and in the preview started from m_jcl_fillforms module).
jck_proc_editchk1_tws JCL check of procedures with IWS variable substitution in the request activity (m_jcl_change, m_jcl_change_objects, m_jcl_generate modules and in the preview started from m_jcl_fillforms module).
jck_proc_ref JCL generator in PROC processes (m_jcl_analyse_2 module).
jck_proc_approve JCL check of procedures started from m_jcl_approve module.
jck_proc_tws_approve JCL check of procedures with TWS variable substitution started from m_jcl_approve module.
jck_proc_final final JCL check of procedures executed after members have been copied to the target libraries (m_jcl_final_check module).
jck_proc_fast fast JCL check of procedures (from target libraries, members are not copied to temporary dataset), available in m_hos_jcl_approve module.
approve_regenerate_proc JCL generator started from m_jcl_approve module (uses selected procedures as the input).
change_by_parm_jobname1 analysis of selected JCL/PROC members in 'Change by parameters' function in m_jcl_fillforms module.
change_by_parm_jobname2 generation of new JCL/PROC members that performs required changes in 'Change by parameters' function in m_jcl_fillforms module.
change_cc when control cards are processed in JCL processes then a special job with variables that are substituted is analyzed instead of the selected member (because valid JCL is required by the analyzer). The member that is analyzed (by subsequent m_jcl_analyse_1 module) is specified by this template.
copy copying of members to target libraries by m_jcl_copy module (members are first copied to temporary library and then the copy job is submitted).
fast_copy copying of members to target libraries by m_jcl_fast_copy module (assumes the members already exist on the host, which is typically a case of INIT processes).
delete deletion of members from target libraries by m_jcl_delete module.
gdg_change_limit changing GDG limit in DSN processes by m_dsn_gdg_submit module.
dsn_append_tape appending (merging) datasets into the target dataset (newly created) in DSN_APPEND processes (m_dsn_append_submit module) in case of TAPE device type.
dsn_append_volume appending (merging) datasets into the target dataset (newly created) in DSN_APPEND processes (m_dsn_append_submit module) in case of VOLUME device type.
dsn_rename renaming datasets in DSN_RENAME processes by m_dsn_rename_submit module.
dsn_copy copying datasets in DSN_COPY processes by m_dsn_copy_submit module.
idcams an alternative template used instead of gdg_change_limit. When this template is defined then it has a higher priority (by default it calls IDCAMS utility).
dsn_gtyp finds current GDG limits of selected datasets in DSN processes by m_dsn_gdg_enter module.
dsn_parm finds parameters of selected datasets in DSN processes by m_dsn_append_enter module.
split members that are checked by JCL checker or members where IWS variables should be substituted have to be copied to the host into a temporary dataset. This copying takes a while when members are copied one by one. If more members are copied then Procman/HOS can use a fast method of saving members to the host. All files are concatenated in one single PS dataset and split to members by IEBUPDTE utility. To enable this fast method the split template must be defined. Even when the template is defined Procman/HOS can decide to use the standard one by one method if the number of analyzed files is small.
universal job that is submitted in UNIVERSAL processes by m_universal_submit module.
There can be other user-defined templates as well. They are used in interfaces of process configuration in cases when the template name is configurable by the user. Names of such templates are fully under control of the administrator who prepares your process definition. A typical example of such a template is (extracted from process configuration):
members_download job submitted when members are downloaded by the fast method when the Import button is pressed in m_jcl_edit and m_jcl_approve modules. The fast method is used when it is enabled by option.
members_upload job submitted when members are uploaded by the fast method when the Export button is pressed in m_jcl_edit and m_jcl_approve modules. The fast method is used when it is enabled by option.
By this option you can specify parameters used for allocation of temporary PS datasets. This is used when HOSXIN dataset is created. You can specify space and units (tr, cy).
For PO datasets use instead.
By this option you can specify parameters used for allocation of temporary PO datasets. These datasets are allocated for jobs that Procman/HOS submits and for members that are analyzed. You can specify units (tr, cy), primary and secondary quantities, directory blocks and type.
Example:
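A sketch of what such an allocation setting might look like; the option name and all parameter keys below are hypothetical placeholders, only the parameter meanings (units tr/cy, primary and secondary quantities, directory blocks, type) come from the text:

```php
// Hypothetical option and key names for illustration only.
'temp_po_allocation' => array(
    'unit'       => 'tr',  // 'tr' = tracks, 'cy' = cylinders
    'primary'    => 30,    // primary space quantity
    'secondary'  => 10,    // secondary space quantity
    'dir_blocks' => 20,    // directory blocks
    'type'       => 'PDS', // dataset type
),
```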
For PS datasets use instead.
This option is needed for specifying parameters of tasks used in the process configuration. The process configuration consists of several tasks grouped into activities. Many of the tasks used there need some input data, for example system names, dataset names, and much more. The required options are described in detail in the online help of Procman/HOS, therefore we show only a sample demonstrating how to configure them. The fact that the process definition and the task options are separated helps to separate the process logic from data like systems, datasets, content of select boxes, ...
The example is just a small part of the environment definition. All preceding options of the environment are usually inserted with the request and activity specified with wildcards. This is usually not the case for the task option, because task options must be valid for a concrete process definition and the task names in the environment and in the process definition must match. In the example we use the t0400 task and define several options in the inner array. These options will therefore be available in the t0400 task of the JCL_CHANGE_PROD_APPROVE activity of the JCL_CHANGE_PROD process in the HORIZONT client. In this case t0400 in the corresponding process configuration calls the m_jcl_approve module, which shows the approve web page. The systems and datasets coded in the environment are required by this module so that it can offer the correct systems and libraries in its select boxes.
There are a few methods of defining items in the task array. Each one produces a different result on the web page. All available methods are listed below.
In this case there is no visible field on the web page where the user could select or see the value. The value is always SYSH_ID; it can't be edited and it is hidden.
In this case an empty text field is displayed on the web page:
In this case a text field is rendered on the web page and its content is initialized with the value specified in the array.
In this case the value can't be changed (like when you define a constant), but it is visible on the screen. You can make the field read-only by adding ! in front of the value. The result is:
In this case a select box is rendered on the web page. When you specify only values, as in the case of the target library, you see exactly the values from the array. You can also specify a value and a label separated by a colon; then you see the label, but the value is used when the item is selected. This is common when you specify systems, as their id is important for Procman/HOS while the label is important for users.
Enter ! in front of the value you wish to select by default:
In this case P391D.PETRH.TEMP2 is selected by default when the page is first displayed:
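A sketch of such a select-box definition; the field name and the other two values are assumptions, only P391D.PETRH.TEMP2 and the '!' convention come from the text:

```php
// '!' in front of a value marks it as the default selection.
'target_library' => array(        // hypothetical field name
    'P391D.PETRH.TEMP1',          // hypothetical value
    '!P391D.PETRH.TEMP2',         // selected by default
    'P391D.PETRH.TEMP3',          // hypothetical value
),
```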
Use it when the value is the same as a value of another field:
It is possible to render a field by any of the preceding methods conditionally. That means the rendering method depends on a value of another field:
In this case, when the SYSH_ID export system is selected, the export library shows an edit field initialized with your SYSUID followed by '.DATA.TEST'. If the SYSH2_ID system is selected, the export library changes to a select box with two items.
and
You can also use remote dependencies in conditional fields. They are available only in some cases (when the referenced task stores its data to the JCKIN database table). This shows a reference within the same activity:
The target library is rendered as a read-only edit field initialized with P391D.PETRH.TEMP1 if the target library selected at task t0500 in the same activity is P391D.NEW.TEMP1. The second row should be clear; it is analogous.
This sample shows a reference to another activity:
In this case the condition is based on target_library option of t0500 task in JCL_CHG_REQUEST activity.
You can use variables in any value used in task options. Please note the mandatory dot at the end. Available variables are:
%SYSUID. it is replaced with the user that is logged in to Procman, in uppercase.
%(VAR). it is replaced with VAR HWM variable.
%(VAR,n). it is replaced with a substring of VAR HWM variable that starts at 'n' position. The position is 1-based.
%(VAR,n,p). it is replaced with a substring of the VAR HWM variable that starts at position 'n' and has a maximal length of 'p'. The position is 1-based.
Let's now assume that the current user is P391D and that HWM variable DEPARTMENT equals 'DEP14':
%SYSUID..JOBLIB.* is substituted as P391D.JOBLIB.*
%SYSUID..JOBLIB.%(DEPARTMENT)..* is substituted as P391D.JOBLIB.DEP14.*
%SYSUID..JOBLIB.%(DEPARTMENT,3)..* is substituted as P391D.JOBLIB.P14.*
%SYSUID..JOBLIB.%(DEPARTMENT,1,4)..* is substituted as P391D.JOBLIB.DEP1.*
You can use variables also in keys of conditions. A few examples:
If the selected source_system equals the value of TESTVAR1 HWM variable then use the first value. If the selected source_system equals the value of TESTVAR2 HWM variable then use the second value. Else use the last default value:
If the value of client HWM variable equals TEST then use the first value. If the value of client HWM variable equals HORIZONT then use the second value.
Else use the last default value:
You can use HWM variables and %SYSUID. (which is replaced with the current user name).
Setting at activity level for SYSH_ID system:
Setting at task level for SYSH_ID system:
This option allows you to overwrite the default set outside of the environment. If you want to set the template dataset name differently for various activities, you can put this option into the environment. It is also possible to set it at the task level, as the samples below show.
You can use HWM variables and %SYSUID. (which is replaced with the current user name).
Setting at activity level for SYSH_ID system:
Setting at task level for SYSH_ID system:
This option allows you to overwrite the default set outside of the environment. If you want to set this prefix of temporary dataset names differently for various activities, you can put this option into the environment. It is also possible to set it at the task level, as the samples below show.
This option allows you to overwrite the default set outside of the environment. If you want to set the technical user differently for various activities, you can put zos_tech_user into the environment. This example sets technical users for the DEMO_ID and SYSH_ID systems only in the JCL_CHANGE_PROC_APPROVE activity of the JCL_CHANGE_PROD process in the HORIZONT client.
This option configures how the page listing files from an archive reacts to mouse clicks and what columns the list contains.
grid_member_action configures the reaction to a mouse click on the member name in the main table (the top one). Possible values are:
empty string do nothing - in this case the member is not rendered as a hyperlink.
browse member is browsed when clicked.
select clicked member is selected and the page is confirmed, which triggers continuation in the process with further tasks.
add member is added to the table below (table with selected members).
sel_member_action configures mouse clicks in the table below the main table. Possible values are:
empty string do nothing, no hyperlink is rendered.
browse the file that you click on is browsed.
order configures which columns are displayed and their order in the table. Type tokens from the following list, separated by commas.
version file version.
group group of the process that last updated the file.
process_id id of the process that last updated the file.
status status of the process that last updated the file.
member member name.
library target library.
system target system.
cre_user user who created the file.
cre_date date of creation.
cha_user user who last changed the file.
cha_date date of the last change.
In the case of the sample above, the table has only 6 columns. When a member is clicked in the top table it is automatically selected; when it is clicked in the bottom table it is browsed.
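The behavior just described might be configured like this; the option name wrapping the three parameters and the exact six column tokens are assumptions, the parameter names come from the text:

```php
// grid_member_action: a click in the top table selects the member;
// sel_member_action:  a click in the bottom table browses it;
// order: the six columns displayed, left to right.
'files_from_archive' => array(    // hypothetical option name
    'grid_member_action' => 'select',
    'sel_member_action'  => 'browse',
    'order' => 'member,library,system,status,cre_user,cre_date',
),
```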
This option specifies how the name of the JOB statement of jobs created by template substitution is built. It is empty by default, which generates the job name as the user id followed by one automatically computed character. If you do not like this format, you can set a new one. Tokens listed below are searched for in the jobname format and, if found, are replaced with the values they represent. All remaining letters are kept at their positions.
&userid. name of the user who submits the job.
&userid[1-8]. user name padded on the right with the letter 'X' up to the length specified by the number, or a substring of the user id if the number is lower than the name's length. [1-8] means a number in the 1-8 range.
&procid. process id.
&procid[1-8]. process id padded on the left with 0 up to the length specified by the number, or a substring of the process id if the number is lower than its length. [1-8] means a number in the 1-8 range.
&suffix. one automatically generated letter.
A dot coded at the very end of jobname_format can be omitted.
The default value used when the option is empty is: '&userid.&suffix'
A few samples (assuming that the user name is P391D, PID=166 and the automatically generated suffix is 'W'):
$&userid.&suffix - result: $P391DW
TEST&suffix.&procid3 - result: TESTW166
&suffix.&procid6.&suffix - result: W000166W
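The first sample above could be configured like this (storing the format in the $hos_config array under the key jobname_format is an assumption based on the option name used in the text):

```php
// '$' and any other literal letters keep their positions; tokens are replaced.
// With user P391D and suffix W this format yields the job name $P391DW.
$hos_config['jobname_format'] = '$&userid.&suffix';
```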
This option sets the maximum number of rows listed in a view of files read from an archive. It is used only when paging is disabled.
Several modules support loading files from the host in parallel, which can improve performance. This feature is activated by the member_processing_method option. Available values are:
'serial': standard serial processing.
'parallel': advanced parallel downloading.
The default value is 'serial'.
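A minimal sketch of activating parallel processing (the array placement is an assumption):

```php
// Switch from the default 'serial' to parallel member downloading.
$hos_config['member_processing_method'] = 'parallel';
```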
This option specifies the separator used to split members in the JUPJCLO output file of the JCKIRFMT program.
By default members are separated with a ./ADD MEMBER=... line. You can change the ADD token if you have a reason to do so; keeping the default value is recommended.
This option specifies whether the view listing files from an archive uses a pager at the bottom. If the pager is enabled (the 'enable' parameter is true), you can also adjust the number of rows listed per page ('page_size') and the number of page numbers displayed in the navigator ('pager_size').
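A hedged sketch of this option; only the parameter names 'enable', 'page_size' and 'pager_size' come from the text, while the option key 'pager' and the numeric values are assumptions:

```php
$hos_config['pager'] = array(   // option key is an assumption
    'enable'     => true,       // show the pager below the archive view
    'page_size'  => 50,         // rows listed per page (assumed value)
    'pager_size' => 10,         // page numbers in the navigator (assumed value)
);
```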
It is possible to define a technical user in the , which has higher priority than the one defined in hos_config2.php. However, the environment is stored in a cache when a process is interrupted (the credentials are safely encoded). If the password changes while the process is interrupted, the one in the cache is no longer valid, which causes failures when the process is opened again. This can be easily solved by the passwords option: a password coded here has even higher priority than the one stored in the cache. For this reason, if you change the password of your , you should also set it here.
Some modules support copying of files via PS dataset. This method is much faster than copying members one by one. Files are stored in one big file and split to members on the other side. Because this method of copying requires a job submission then it is faster than the standard method only when a number of files is higher than a threshold (currently set to 5). Set actions for which you want to activate this method to true. Note that PS method is used only when the number of files equals at least the defined threshold.
In the example PS method is used for reading, exporting and importing files when at least 5 members are selected.
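A hedged sketch of the per-action activation described above; the option key 'ps_copy' is an assumption (only the action names and the true/false switches come from the text):

```php
// Activate the fast PS-dataset copy method for these actions; it takes
// effect only when at least the threshold number of members is selected.
$hos_config['ps_copy'] = array( // option key is an assumption
    'read'   => true,
    'export' => true,
    'import' => true,
);
```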
When a process is interrupted, it usually keeps its environment in the cache, and reopening the process uses the cached environment settings. In some cases (mainly when you are testing new processes) it can be useful to reload the values stored in the environment (such as skeletons, libraries, ...) from the config file.
To activate this, set this option to true. The environment is then always reloaded when an interrupted process is restarted. Please note that not all environment settings can be reloaded; for example, task names can't change.
When a completed process is deleted, it is possible to also delete the files it has stored in the JCL, PROC or SYSIN archives. This behavior is controlled by the remove_from_archive_on_delete option. Available values are:
true - files are deleted without asking.
false - files are kept in the archive.
'ASK' - a prompt opens and the user can choose what to do.
Please note that this works only for completed processes. Deleting a process in any other state always deletes the files it has created as well.
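A minimal sketch of this option (the array placement is an assumption):

```php
// Ask the user on deletion of a completed process; true and false are
// also valid values, as described above.
$hos_config['remove_from_archive_on_delete'] = 'ASK';
```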
This option sets the code page of data received by the HORILST tool. You should keep the default value 'iso-8859-1'.
Use this option to configure the names of the INI files that contain parameters for HORILST communication with z/OS. System names must be specified exactly, because an asterisk in their place is not supported here.
This option specifies the separator used by the fast member download method. The default value used when the option is missing is a pipe character (|).
This option sets the name of the file to which the names of interpreted scripts are logged when the DEBUG_ACCES constant is true. It is recommended to keep the default value.
In this case templates are downloaded from P391D.HOS.DEMO.CNTL on the DEMO_ID system and from P391D.HOS.SYSH.CNTL on the SYSH_ID system. This is client independent.
Procman/HOS submits jobs that are the result of JCL template substitution. Templates are skeletons of jobs containing variables (surrounded with ##) that are replaced with real values. This option configures the location from which these templates are downloaded. The definition in the sample below is valid for all clients (hence the asterisk). You can define a set of templates per client by replacing the asterisk with a client name. Below the asterisk or client name there is an array that maps each system to the dataset where the templates are located. Define this mapping for all your systems.
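A hedged sketch of such a definition, using the systems and datasets mentioned above (the option key is an assumption):

```php
// '*' makes the definition valid for all clients; replace it with a
// client name to define a client-specific set of templates.
$hos_config['jcl_templates'] = array( // option key is an assumption
    '*' => array(
        'DEMO_ID' => 'P391D.HOS.DEMO.CNTL',
        'SYSH_ID' => 'P391D.HOS.SYSH.CNTL',
    ),
);
```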
This option maps domain names or IP addresses to names displayed on web pages and stored in the database. It can be best illustrated by the following example:
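The original example is not reproduced here; a hedged sketch matching its description follows. The option key, the addresses and the PROD/TEST entries are assumptions (only the name/id parameter structure and the DEMO_ID/SYSH_ID ids appear in the text):

```php
// First two systems defined by IP address, last two by domain name.
$hos_config['systems'] = array(   // option key is an assumption
    '192.168.1.10'     => array('name' => 'DEMO', 'id' => 'DEMO_ID'),
    '192.168.1.11'     => array('name' => 'SYSH', 'id' => 'SYSH_ID'),
    'prod.example.com' => array('name' => 'PROD', 'id' => 'PROD_ID'),
    'test.example.com' => array('name' => 'TEST', 'id' => 'TEST_ID'),
);
```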
There are 4 systems defined in this example: the first two by IP addresses and the last two by domain names.
The name parameter contains the text displayed on the web page (in select boxes, for instance). The id parameter is stored in the database. You should never edit an id once it has been used, because every member stored in the database is uniquely identified by the system id, the dataset name and the member name. If for some reason you want to rename a system that has already been used in a process, you can edit the name parameter, but never its id.
Sometimes it can be useful (mainly for testing) to have two different system names but the same IP address. It can be achieved by the following syntax, there is one more array level (SYSH and SYSH_COPY use the same IP address):
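A hedged sketch of that syntax, based on the SYSH/SYSH_COPY case described above (the option key and the id of the copy are assumptions):

```php
// One more array level lets two system names share the same IP address.
$hos_config['systems'] = array(   // option key is an assumption
    '192.168.1.11' => array(
        array('name' => 'SYSH',      'id' => 'SYSH_ID'),
        array('name' => 'SYSH_COPY', 'id' => 'SYSH_COPY_ID'),
    ),
);
```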
You can define as many systems as you need. If you need to reference a system in your , use the value of the id parameter.
This option defines how JCL or SYSIN members look in view and edit modes (when they are browsed or edited).
There are four possible values for the 'view' and 'edit' options for JCL:
'DISABLED': no syntax highlighting, but it can be enabled on the web page when the syntax_highlighter_combo option allows it.
'WEB': the content is rendered with a white background convenient for the web.
'ISPF': the content is rendered in the ISPF black style.
false: a standard text area is displayed instead of the editor with highlight support.
The max_line_length option sets the maximal number of letters per line.
When this option is true and the option is not false, a select box with WEB and ISPF styles is displayed above an editor with JCL data. This allows you to switch between the WEB and ISPF styles with the mouse.
This option sets the temporary folder on the server where some temporary files are written. It is recommended to keep the default setting.
This option sets the code page in which the web page is displayed. It must be 'utf-8'. Never change this default value.
This option allows you to set the user name and password of technical users. You can set it for any system and process; an asterisk can be used instead of a system or process name, in which case it is valid for all systems or processes. The password must be encoded. The actions array supports three options: read, write, check. Set them to true if you want to use the technical user for the corresponding action. For example, if you set write=true, check=true and read=false, the technical user is used only for writing to z/OS and running JCL analysis, but not for reading (from personal libraries). The current logon user is used for all systems, processes and actions for which no technical user is defined.
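A hedged sketch matching the P391D example described later in the text; the option key and the exact nesting are assumptions (only the user, systems, '*' wildcard and the actions array come from the text):

```php
// Technical user P391D for all processes ('*') on DEMO_ID and SYSH_ID;
// the password must be stored in encoded form.
$hos_config['technical_users'] = array( // option key is an assumption
    'DEMO_ID' => array(
        '*' => array(
            'user'    => 'P391D',
            'pwd'     => '<encoded password>',
            'actions' => array('read' => true, 'write' => true, 'check' => true),
        ),
    ),
    'SYSH_ID' => array( /* same structure as for DEMO_ID */ ),
);
```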
In this file you must set connection parameters to Procman/HOS database. A typical example follows:
In case of DB2 fill items of the array this way:
keep_alive: use 10.
type: use 'db2'.
alias: database alias.
user: database user name.
pwd: an encrypted password.
platform: use 'zos'.
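A hedged sketch of such a connection array; the variable name, the '*' placement and the alias/user values are assumptions (only the parameter names and fixed values come from the list above):

```php
// DB2 connection parameters; 'pwd' must be encrypted. The '*' makes the
// setting valid for all supported actions ('hos' and 'xinfo').
$hos_db_config = array(           // variable name is an assumption
    '*' => array(
        'keep_alive' => 10,
        'type'       => 'db2',
        'alias'      => 'HOSDB',  // hypothetical database alias
        'user'       => 'DBUSER', // hypothetical database user
        'pwd'        => '<encrypted password>',
        'platform'   => 'zos',
    ),
);
```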
The example above sets the technical user P391D for DEMO_ID and SYSH_ID. It is used for all actions and for all processes.
You can see that in this case there is an asterisk just below the actions. This makes the database setting valid for both of the supported actions ('hos' and 'xinfo'). You can also specify the setting for each action separately by entering the action name instead of the asterisk. Setting a connection for xinfo this way has lower priority than setting it in the option of the file.
This option specifies parameters for XINFO analysis. It is used for DELETE processes, where Procman/HOS checks, for example, whether deleted procedures or SYSIN files are used by other existing jobs. Because real XINFO table and column names can differ from the default ones, this setting ensures that the correct tables and columns are always known. This option also allows you to set a user that is used for reading data from the XINFO database, as well as values for the XXRDATCLIENT, XXRDATENV and XXRDATINFO columns, which are important if your XINFO installation has multi-client support activated.
You can use an asterisk in the place of 'target_system' or 'client'. Then the setting becomes valid for all systems or clients.
The 'name' parameter maps table names (without a prefix) to the real database table names used in SQL queries. In the example, any SQL query that reads data from the XXRTDDF table uses the XINFO43.XXRTDDF table name.
Column names are in the vast majority of cases (if not all) unchanged in real XINFO installations. If this is your case, then the key and value (which represent the Procman/HOS name and the DB table column) are equal.
In case of a multi-client XINFO installation you can edit the parameters in the 'system' array. In the example, XXRDATCLIENT='CLIENT1' AND XXRDATENV='ENV1' AND XXRDATINFO='INFO1' is added to every SQL query listing data from the XINFO database. The 'system' array is optional; do not specify it if you don't need it.
Parameters in the 'conn' array configure the connection used for querying data from the XINFO database. Do not specify this block if the user used for all other Procman/HOS database queries should also be used for the XINFO database.
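A hedged sketch of the structure described above; the option key and nesting order are assumptions (the 'name', 'system' and 'conn' blocks and their values come from the text):

```php
$hos_config['xinfo'] = array(   // option key is an assumption
    '*' => array(               // target_system ('*' = valid for all systems)
        '*' => array(           // client ('*' = valid for all clients)
            // Map Procman/HOS table names to real DB table names.
            'name'   => array('XXRTDDF' => 'XINFO43.XXRTDDF'),
            // Optional, multi-client installations only: these predicates
            // are added to every SQL query against the XINFO database.
            'system' => array(
                'XXRDATCLIENT' => 'CLIENT1',
                'XXRDATENV'    => 'ENV1',
                'XXRDATINFO'   => 'INFO1',
            ),
            // 'conn' => array(...) // optional XINFO-only credentials
        ),
    ),
);
```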
This file is reserved for the configuration of JCL processes only. Currently it doesn't add any options to the $hos_config array, but it defines 3 constants:
HOS_JCL_BLOCK_SIZE: when you process a lot of files in JCL analysis, the files are analyzed in blocks. This constant defines how many files are analyzed in one block.
USR_FORM_MAX_FREETEXT_SIZE: specifies the maximal length of a text field used in user forms in case of the 'freetext' item type.
FORCE_SLOW_DOWNLOAD_METHOD: when this constant exists and equals true, the slow method of downloading members in the JCL analyzer is used. By default this constant does not exist in the config file or is set to false.
USR_FORM_MAX_FIELD_SIZE: specifies the maximal length of a text field used in user forms in case of the 'multiline' item type.
You can edit the block size, but please do not edit the USR_FORM_MAX_FIELD_SIZE and USR_FORM_MAX_FREETEXT_SIZE values.
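A hedged sketch of what this file's constant definitions might look like; all numeric values are assumptions, not the shipped defaults:

```php
// Only HOS_JCL_BLOCK_SIZE should be edited; the form-size constants must
// keep their shipped values (the numbers below are assumed, not official).
define('HOS_JCL_BLOCK_SIZE', 50);           // files analyzed per block
define('USR_FORM_MAX_FIELD_SIZE', 256);     // do not edit
define('USR_FORM_MAX_FREETEXT_SIZE', 1024); // do not edit
// define('FORCE_SLOW_DOWNLOAD_METHOD', true); // only to force slow download
```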
This file is reserved for the configuration of SYSIN processes only. Currently it doesn't add any options to the $hos_config array, but it defines one constant:
ALLOW_EMPTY_SYSIN: set it to true if you want to allow processing of SYSIN members without any content. If you set it to false, empty SYSIN members are not accepted by Procman/HOS.
[GLOBAL] section
MSGLEVEL: enter a value in the 1-5 range. The higher the value, the more detailed output is written to the log file. The default value is 1.
HORCCLNT_PATH: absolute path to the directory where the horcclnt.exe tool is stored.
TEMP_PATH: path to the directory where temporary data are created.
WORK_DIR: keep it empty.
LOG_PATH: path to the directory where log files are created.
CODEPAGE: code page used on the host. Use the value 17 or "Austria, Germany (IBM-1141)".
[TCP] section
TCPDEBUG: set it to NO unless you need to enable TCP/IP log output to the tcptool.log file. In that case set it to YES and ensure that the tcplogger.xml file exists in HORCCLNT_PATH. This XML file can be created by running: horcclnt.exe -x-
IP_ADDR: default IP address to connect to when it is not passed as a program parameter (Procman/HOS PHP scripts pass the address as a program parameter when they call horcclnt.exe, so the value entered for IP_ADDR is not used in that case).
HOST_PORT: port where the service is listening on the host.
MEMBER: STC member name on z/OS.
USERISSTEPNAME: set it to YES if the current user name should be used as the step name in the task started on the host; use NO otherwise. The default value is YES.
TCPCRYPT: this option controls how data are encrypted. Possible values are:
0: weak password encryption. Passwords on the host can have only up to 8 characters. No TLS.
1: default, strong password encryption. Passwords on the host can have up to 8 characters or 14-100 characters. No TLS.
2: reserved, not used.
3: strong password and data encryption with TLS. Passwords (1-8 characters) and passphrases (14-100 characters) are supported. It requires the HORILST module on the host.
TCPCOMMONNAME: specifies the Common Name of the server certificate. It is required only when TCPCRYPT=3. The default value is * and should be changed as soon as you set your own certificate.
TCPPEMCA: specifies a file containing the trusted certificates used during server authentication. The certificates can be concatenated and must be in PEM format. If the value is empty, the Windows Certificate Store (Trusted Root Certification Authorities) is used instead. Please make sure the root certificate of the server certificate can be found either in the specified file or in the Windows Certificate Store. It is required only when TCPCRYPT=3. Use hor-ca-store.pem as the default value.
PASSCHK: specifies whether the password is checked with RACF (when set to YES) or only the user name is checked (when set to NO). Use YES as the default value.
[FTP] section
T0: communication timeout in seconds. If it expires, HORCCLNT is closed automatically. The default value is 30.
T1: timeout for establishing a connection. If it expires, HORCCLNT is closed automatically. The default value is 60.
LISTLIMIT: the number of dataset or member names downloaded when they are listed. Use zero if you do not want to set any limit. This option prevents long delays when too many names match the search criteria. When it takes effect (i.e. when there are more matching items on the host), you are informed about it on the web page and can set a new limit by pressing a button. Please note that you may see even fewer items on the web page due to additional filtering (there can be another filter for only PO or PS datasets in the process configuration). The default value is 0.
[SQLITE] section
T0: timeout applied if the SQLITE table is locked (in seconds). The default value is 30.
DBNAME: name of the SQLITE database file. The default value is rotbintf.db.
Procman/HOS connects to the host via the HORILST tool. You must define connection parameters in an INI file for each system you connect to. These INI file names must be assigned to systems via the option in . The parameters required in the INI file are as follows:
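A hedged sketch of a complete connection INI file built from the parameters above; paths, the IP address, the port and the MEMBER value are placeholder assumptions:

```ini
; Example HORILST connection INI file; all values are illustrative only.
[GLOBAL]
MSGLEVEL=1
HORCCLNT_PATH=C:\hos\horcclnt
TEMP_PATH=C:\hos\temp
WORK_DIR=
LOG_PATH=C:\hos\log
CODEPAGE=17

[TCP]
TCPDEBUG=NO
IP_ADDR=192.168.1.10
HOST_PORT=4711
MEMBER=HORILST
USERISSTEPNAME=YES
TCPCRYPT=1
PASSCHK=YES

[FTP]
T0=30
T1=60
LISTLIMIT=0

[SQLITE]
T0=30
DBNAME=rotbintf.db
```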