
Changes between Version 17 and Version 18 of Documentation/Howto/UseHiveForOptimization


Timestamp:
01/05/12 11:48:35
Author:
ascheibe
Comment:

--

  • Documentation/Howto/UseHiveForOptimization

    v17 v18  
    1010  * You need a user account and the proper ''Hive User'' privileges on the HEAL Hive server (which is services.heuristiclab.com). These privileges can be requested from S. Wagner if you are a research partner. If you are not a member of the HEAL research group or a research partner, you can [[wiki:UsersHowtosSetupHiveServer| set up your own Hive server]].
    1111 1. Go to ''Services > Hive > Job Manager''. The list of your Hive Jobs is then updated. The jobs are grouped by the owners of the experiments.
    12  1. Click the plus symbol on top of the list to create a new ''Hive Job'' \\ [[Image(HiveExperiment.png, width=700)]] \\
     12 1. Click the plus symbol on top of the list to create a new ''Hive Job'' \\ [[Image(HiveExperiment.png, width=800)]] \\
    1313 1. Now you need to specify which job should be executed on the Hive. You can either
    1414  * ... create a new task by clicking the ''"plus symbol"''.
    1515  * ... move an existing experiment onto the view via drag and drop.
    1616 1. After having created or loaded an experiment, you can click ''"Modify Optimizer"'' in the details view to modify it.
    17  1. Each algorithm (as well as each experiment and batch-run) represents one job. \\ [[Image(JobConfiguration.png, width=800)]] \\
     17 1. Each algorithm (as well as each experiment and batch-run) represents one task. \\ [[Image(JobConfiguration.png, width=800)]] \\
    1818 1. Important configurations are:
    1919  * Nr. of needed cores: (default: 1) Specifies how many cores will be reserved for this job on the executing machine. If you know the algorithm corresponding to this job will use multiple threads for computation, increase this number!
    20   * Memory needed: (default: 0) Specifies how much memory will be reserved for this job on the executing machine. It will only be deployed on machines where the specified amount of memory is available.
    21   * Priority: (default: 0) This number affects the scheduling of the jobs. Higher numbers will be executed earlier. It is recommended to rank jobs with long execution-times to be executed earlier to avoid waiting for them in the end.
     20  * Memory needed: (default: 128) Specifies how much memory will be reserved for this job on the executing machine. The job will only be deployed on machines where the specified amount of memory is available.
     21  * Priority: (default: Normal) The priority affects the scheduling of the tasks. There are three options available: tasks with priority ''Critical'' are scheduled before tasks with priority ''Urgent'', which in turn are scheduled before tasks with priority ''Normal''. It is recommended to give tasks with long execution times a higher priority so that you do not end up waiting for them at the end. Please use the ''Critical'' priority only in real emergencies, because it can block the tasks of other users. (A small scheduling sketch after this list illustrates how priority and resource requirements interact.)
    2222  * Distribute child tasks: This flag is only available for experiments and batch-runs.
    23    * Experiment: (default: true) If true, a job will be created for each child-optimizer. If false the whole experiment will be executed as one job.
    24    * !BatchRun: (default: false) If true, a number of jobs will be created corresponding to the number of repetitions specified. If false the batch-run will be executed as one job.
     23   * Experiment: (default: true) If true, a task will be created for each child-optimizer. If false, the whole experiment will be executed as one task.
     24   * !BatchRun: (default: false) If true, a number of tasks will be created corresponding to the number of repetitions specified. If false, the batch-run will be executed as one task. (See the task-splitting sketch after this list.)
    2525 1. There are also some configurations available for the whole job:
    2626   * !ResourceIds: You can specify on which machines your jobs should be executed. Those machines/resources are grouped into resource-groups. The top group is ''"HEAL"''. You can either enter the name of a group or the name of a specific machine.
    27    * !IsPrivileged: Jobs on Hive are executed in a secure sandboxed appdomain. However, some plugins might require elevated privileges. If !IsPrivileged is checked, the jobs will be executed in an unrestricted appdomain. This option is only enabled if the user is allowed to use it (there is a role called ''"Hive !IsAllowedPrivileged"''). If you need this permission, please contact S. Wagner.
    28  1. After configuring the job you can hit ''"Start Job"'' and the tasks will be uploaded to the Hive. Hive will then take care of the distribution and execution of each task.
    29  1. While you keep the Hive Job Manager open, it will periodically fetch the status-updates of all jobs. When jobs are finished it will download the results automatically.
    30  1. You can also close HeuristicLab after uploading an ''Hive Job''. When you open the ''Hive Job Manager'' again, it will download you list of ''Hive Jobs'' and you can choose to download a specific one.
     27   * Privileged: Tasks on Hive are executed in a secure sandboxed appdomain. However, some plugins might require elevated privileges. If Privileged is checked, the tasks will be executed in an unrestricted appdomain. This option is only enabled if the user is allowed to use it (there is a role called ''"Hive !IsAllowedPrivileged"''). If you need this permission, please contact S. Wagner.
     28 1. After configuring the job, you can hit ''"Start Job"'' and the job will be uploaded to the Hive. Hive will then take care of the distribution and execution of each task.
     29 1. While you keep the Hive Job Manager open, it will periodically fetch the status updates of all tasks. When tasks are finished, it will download their results automatically.
     30 1. You can also close HeuristicLab after uploading a ''Hive Job''. When you open the ''Hive Job Manager'' again, it will download the list of your ''Hive Jobs'' and you can choose to download a specific one.
    3131 1. Notice that when results are downloaded, they are reassembled into the original job. So after the job has finished, you can open the original experiment again (''"Open Experiment"'') and see all the results in the !RunCollection as if it had been executed locally.
    3232  * Unfortunately, it is not possible to preserve the ''!ResultsCollection'' of each algorithm when reassembling the experiment. However, each ''!ResultsCollection'' is stored in the ''!RunsCollection'' anyway.
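The ''Nr. of needed cores'', ''Memory needed'', and ''Priority'' settings all influence where and when a task is executed. The following Python sketch only illustrates the general idea of priority-based scheduling with resource requirements; it is not HeuristicLab's actual scheduler, and the class names, the greedy assignment, and the assumption that memory is given in megabytes are illustrative only.

{{{#!python
# Illustrative sketch, NOT HeuristicLab code: how task priority and resource
# requirements could interact when tasks are assigned to slave machines.
from dataclasses import dataclass

PRIORITY_ORDER = {"Critical": 0, "Urgent": 1, "Normal": 2}  # lower value = scheduled first

@dataclass
class Task:
    name: str
    priority: str = "Normal"   # corresponds to the "Priority" setting
    cores_needed: int = 1      # corresponds to "Nr. of needed cores"
    memory_needed: int = 128   # corresponds to "Memory needed" (assumed to be MB)

@dataclass
class Slave:
    name: str
    free_cores: int
    free_memory: int

def schedule(tasks, slaves):
    """Assign tasks to slaves: higher-priority tasks first, and only to slaves
    that still have enough free cores and memory."""
    assignments = []
    for task in sorted(tasks, key=lambda t: PRIORITY_ORDER[t.priority]):
        for slave in slaves:
            if (slave.free_cores >= task.cores_needed
                    and slave.free_memory >= task.memory_needed):
                slave.free_cores -= task.cores_needed
                slave.free_memory -= task.memory_needed
                assignments.append((task.name, slave.name))
                break
        else:
            assignments.append((task.name, None))  # has to wait for free resources
    return assignments

if __name__ == "__main__":
    tasks = [
        Task("GA run"),                                       # Normal, 1 core, 128 MB
        Task("Long SA run", priority="Urgent"),
        Task("Parallel GP", cores_needed=4, memory_needed=1024),
    ]
    slaves = [Slave("slave-01", free_cores=4, free_memory=2048),
              Slave("slave-02", free_cores=2, free_memory=512)]
    for task_name, slave_name in schedule(tasks, slaves):
        print(f"{task_name} -> {slave_name or 'queued (no free resources)'}")
}}}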
    3333
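The ''Distribute child tasks'' flag determines how many tasks are created from an experiment or batch-run. The sketch below is only a restatement of the rules described above in code form; the function and the "optimizer kind" labels are illustrative assumptions, not HeuristicLab API.

{{{#!python
# Illustrative sketch, NOT HeuristicLab code: how many Hive tasks result from
# one optimizer depending on the "Distribute child tasks" flag.

def count_tasks(optimizer_kind, distribute, child_count=0, repetitions=0):
    """Return how many tasks would be created for one optimizer.

    optimizer_kind: "algorithm", "experiment", or "batchrun" (illustrative labels)
    distribute:     the "Distribute child tasks" flag (ignored for plain algorithms)
    """
    if optimizer_kind == "experiment" and distribute:
        return child_count   # one task per child optimizer
    if optimizer_kind == "batchrun" and distribute:
        return repetitions   # one task per repetition
    return 1                 # executed as a single task

if __name__ == "__main__":
    print(count_tasks("algorithm", distribute=False))                  # 1
    print(count_tasks("experiment", distribute=True, child_count=5))   # 5
    print(count_tasks("experiment", distribute=False, child_count=5))  # 1
    print(count_tasks("batchrun", distribute=True, repetitions=10))    # 10
    print(count_tasks("batchrun", distribute=False, repetitions=10))   # 1
}}}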
    3434== Plugins ==
    35 Hive automatically uses your local plugins. If they are not yet available on the server (because you are the first one to use this plugin), they are uploaded. When the jobs are executed, exactly those plugins are used.
     35Hive automatically uses your local plugins. If they are not yet available on the server (because you are the first one to use them), they are uploaded. When the tasks are executed, exactly those plugins are used.
    3636
    3737Hive now also checks whether you have modified a plugin. This means that you can modify plugins locally and use them in Hive, even if they have the same name and version as plugins that already exist on the Hive server.
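One way to picture this plugin synchronization is sketched below. It is purely illustrative and not HeuristicLab's implementation; in particular, the hash-based comparison and all class and function names are assumptions about how a locally modified plugin with an unchanged name and version could be detected, and the example plugin data is made up.

{{{#!python
# Illustrative sketch, NOT HeuristicLab code: deciding which local plugins to
# upload before submitting a job. The content hash is an assumed mechanism for
# spotting locally modified plugins that keep the same name and version.
from dataclasses import dataclass

@dataclass(frozen=True)
class PluginInfo:
    name: str
    version: str
    content_hash: str  # e.g. a SHA-1 digest of the plugin assembly (assumption)

def plugins_to_upload(local_plugins, server_plugins):
    """Return local plugins that are missing on the server, or that differ from
    the server copy even though name and version are identical."""
    known = {(p.name, p.version): p.content_hash for p in server_plugins}
    uploads = []
    for plugin in local_plugins:
        server_hash = known.get((plugin.name, plugin.version))
        if server_hash is None or server_hash != plugin.content_hash:
            uploads.append(plugin)  # new plugin, or a locally modified build
    return uploads

if __name__ == "__main__":
    server = [PluginInfo("HeuristicLab.Problems.TSP", "3.3", "aaa111")]
    local = [PluginInfo("HeuristicLab.Problems.TSP", "3.3", "bbb222"),  # modified locally
             PluginInfo("MyCustomOperator", "1.0", "ccc333")]           # not on the server yet
    for plugin in plugins_to_upload(local, server):
        print(f"upload {plugin.name} {plugin.version}")
}}}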