<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.define-technology.com/mediawiki-1.35.0/index.php?action=history&amp;feed=atom&amp;title=LSF_Multicluster</id>
	<title>LSF Multicluster - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.define-technology.com/mediawiki-1.35.0/index.php?action=history&amp;feed=atom&amp;title=LSF_Multicluster"/>
	<link rel="alternate" type="text/html" href="http://wiki.define-technology.com/mediawiki-1.35.0/index.php?title=LSF_Multicluster&amp;action=history"/>
	<updated>2026-05-04T20:11:10Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.0</generator>
	<entry>
		<id>http://wiki.define-technology.com/mediawiki-1.35.0/index.php?title=LSF_Multicluster&amp;diff=2304&amp;oldid=prev</id>
		<title>Michael: Created page with &quot;* Note, see the lsf.shared bug at bottom. Changes not maintained in PCM 3.0 (and previous)  ===== Multicluster License ===== * Ensure your license includes a line with lsf_mul...&quot;</title>
		<link rel="alternate" type="text/html" href="http://wiki.define-technology.com/mediawiki-1.35.0/index.php?title=LSF_Multicluster&amp;diff=2304&amp;oldid=prev"/>
		<updated>2013-05-01T09:31:14Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;* Note, see the lsf.shared bug at bottom. Changes not maintained in PCM 3.0 (and previous)  ===== Multicluster License ===== * Ensure your license includes a line with lsf_mul...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;* Note: see the lsf.shared bug at the bottom of this page. These changes are not maintained by PCM 3.0 (and previous versions)&lt;br /&gt;
&lt;br /&gt;
===== Multicluster License =====&lt;br /&gt;
* Ensure your license includes a line with lsf_multicluster; otherwise request one from Platform&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
FEATURE lsf_multicluster lsf_ld 7.000 31-JUL-2011 0 AD3E3C81D267A3C1C0B6 &amp;quot;Platform&amp;quot; DEMO&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Configuration Files =====&lt;br /&gt;
The configuration below:&lt;br /&gt;
* uses two PCM 3.0 clusters (pcm30, pcm-mctest)&lt;br /&gt;
* pcm30 will forward jobs on to pcm-mctest&lt;br /&gt;
* pcm-mctest will receive jobs from pcm30&lt;br /&gt;
* All changes were made in /etc/cfm/templates/lsf/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;lsf.cluster&amp;#039;&amp;#039;&amp;#039; (or default.lsf.cluster), two additions: RemoteClusters and PRODUCTS&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Update the PRODUCTS line to include multicluster&lt;br /&gt;
PRODUCTS=LSF_Base LSF_Manager LSF_MultiCluster&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# drop in at the end of the file&lt;br /&gt;
# multicluster; note: names are the cluster names as defined by LSF (typically hostname_cluster1)&lt;br /&gt;
Begin RemoteClusters&lt;br /&gt;
CLUSTERNAME&lt;br /&gt;
pcm30_cluster1&lt;br /&gt;
pcm-mctest_cluster1&lt;br /&gt;
End RemoteClusters&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;lsf.shared&amp;#039;&amp;#039;&amp;#039; (or default.lsf.shared)&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Note: replaced XXX_clustername_XXX with the cluster name. This should be OK provided the cluster name doesn&amp;#039;t change&lt;br /&gt;
Begin Cluster&lt;br /&gt;
ClusterName             Servers&lt;br /&gt;
pcm30_cluster1          pcm30             &lt;br /&gt;
pcm-mctest_cluster1     pcm-mctest&lt;br /&gt;
End Cluster&lt;br /&gt;
###### NOTE PROBLEM, INFO NOT SYNCd AFTER ADDHOST -U ###########&lt;br /&gt;
###### SEE RESOLUTION at the bottom of this page     ###########&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;lsf.conf&amp;#039;&amp;#039;&amp;#039; (or default.lsf.conf)&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Multicluster enable, append to end of file&lt;br /&gt;
MC_PLUGIN_REMOTE_RESOURCE=y&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Multicluster Model =====&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Job forwarding model&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
In this model, the cluster that is starving for resources sends jobs over to the cluster that has resources to spare. To work&lt;br /&gt;
together, two clusters must set up compatible send-jobs and receive-jobs queues.&lt;br /&gt;
With this model, scheduling of MultiCluster jobs is a process with two scheduling phases: the submission cluster selects&lt;br /&gt;
a suitable remote receive-jobs queue, and forwards the job to it; then the execution cluster selects a suitable host and&lt;br /&gt;
dispatches the job to it. This method automatically favors local hosts; a MultiCluster send-jobs queue always attempts&lt;br /&gt;
to find a suitable local host before considering a receive-jobs queue in another cluster.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Resource leasing model&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
In this model, the cluster that is starving for resources takes resources away from the cluster that has resources to spare.&lt;br /&gt;
To work together, the provider cluster must export resources to the consumer, and the consumer cluster must&lt;br /&gt;
configure a queue to use those resources.&lt;br /&gt;
In this model, each cluster schedules work on a single system image, which includes both borrowed hosts and local&lt;br /&gt;
hosts.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Choosing a model&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
** Consider your own goals and priorities when choosing the best resource-sharing model for your site.&lt;br /&gt;
** The job forwarding model can make resources available to jobs from multiple clusters; this flexibility allows maximum throughput when each cluster&amp;#039;s resource usage fluctuates.&lt;br /&gt;
** The resource leasing model can allow one cluster exclusive control of a dedicated resource; this can be more efficient when there is a steady amount of work.&lt;br /&gt;
** The lease model is the most transparent to users and supports the same scheduling features as a single cluster.&lt;br /&gt;
** The job forwarding model has a single point of administration, while the lease model shares administration between provider and consumer clusters.&lt;br /&gt;
&lt;br /&gt;
===== Job Forwarding Model =====&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;lsb.queues&amp;#039;&amp;#039;&amp;#039; (or lsbatch/default/configdir/lsb.queues)&lt;br /&gt;
* On the cluster sending jobs (pcm30), create a queue with SNDJOBS_TO&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Begin Queue&lt;br /&gt;
QUEUE_NAME   = sendq&lt;br /&gt;
PRIORITY     = 40&lt;br /&gt;
HOSTS        = none&lt;br /&gt;
SNDJOBS_TO   = receiveq@pcm-mctest_cluster1&lt;br /&gt;
End Queue&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster receiving jobs (pcm-mctest), create a queue with RCVJOBS_FROM&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Begin Queue&lt;br /&gt;
QUEUE_NAME   = receiveq&lt;br /&gt;
RCVJOBS_FROM = sendq@pcm30_cluster1&lt;br /&gt;
HOSTS        = all&lt;br /&gt;
End Queue&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
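&lt;br /&gt;
* To smoke-test forwarding once both queues are loaded (a sketch; queue and cluster names are the ones configured above):&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# On pcm30, submit to the send queue; with HOSTS=none the job can only run remotely&lt;br /&gt;
bsub -q sendq sleep 60&lt;br /&gt;
# Watch the job get forwarded and dispatched&lt;br /&gt;
bjobs -w&lt;br /&gt;
bclusters&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;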
&lt;br /&gt;
===== Resource Sharing Model =====&lt;br /&gt;
* In this example, cluster pcm30_cluster1 (head node pcmtest) is exporting a single node to vhpchead&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;lsb.resources&amp;#039;&amp;#039;&amp;#039; file on &amp;#039;&amp;#039;&amp;#039;pcmtest&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Begin HostExport&lt;br /&gt;
PER_HOST     = pcmcomp000               # export host list&lt;br /&gt;
SLOTS        = 12                       # for each host, export 12 job slots&lt;br /&gt;
DISTRIBUTION = [vhpchead_cluster1, 6]   # share distribution for remote clusters:&lt;br /&gt;
                                        # cluster &amp;lt;vhpchead_cluster1&amp;gt; has 6 shares, &lt;br /&gt;
End HostExport&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;lsb.queues&amp;#039;&amp;#039;&amp;#039; file on &amp;#039;&amp;#039;&amp;#039;vhpchead&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# resource borrow queue&lt;br /&gt;
Begin Queue&lt;br /&gt;
QUEUE_NAME   = resourceborrowq&lt;br /&gt;
PRIORITY     = 40&lt;br /&gt;
HOSTS        = compute005 pcmcomp000@pcm30_cluster1   # 2 hosts on this queue, one remote host pcmcomp000&lt;br /&gt;
DESCRIPTION  = Resource Borrow Queue&lt;br /&gt;
End Queue&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Verify jobs are being run correctly&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[root@pcmcomp000 ~]# bclusters &lt;br /&gt;
[Job Forwarding Information ]&lt;br /&gt;
LOCAL_QUEUE     JOB_FLOW   REMOTE     CLUSTER    STATUS    &lt;br /&gt;
receiveq        recv       -          vhpchead_c ok        &lt;br /&gt;
&lt;br /&gt;
[Resource Lease Information ]&lt;br /&gt;
REMOTE_CLUSTER  RESOURCE_FLOW   STATUS     &lt;br /&gt;
vhpchead_cluste EXPORT          ok        &lt;br /&gt;
# Check the hosts that are being exported: &lt;br /&gt;
[root@pcmcomp000 ~]# bhosts -e&lt;br /&gt;
HOST_NAME             MAX  NJOBS    RUN  SSUSP  USUSP    RSV &lt;br /&gt;
pcmcomp000             12      3      3      0      0      0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Also check output from vhpchead&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[david@vhpchead multicluster]$ bjobs &lt;br /&gt;
JOBID   USER    STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME&lt;br /&gt;
21472   david   RUN   resourcele vhpchead    compute005  sleep 60   Aug  3 17:20&lt;br /&gt;
21474   david   RUN   resourcele vhpchead    compute005  sleep 60   Aug  3 17:20&lt;br /&gt;
21476   david   RUN   resourcele vhpchead    compute005  sleep 60   Aug  3 17:20&lt;br /&gt;
21473   david   RUN   resourcele vhpchead    pcmcomp000@ sleep 60   Aug  3 17:20&lt;br /&gt;
21475   david   RUN   resourcele vhpchead    pcmcomp000@ sleep 60   Aug  3 17:20&lt;br /&gt;
21477   david   PEND  resourcele vhpchead                sleep 60   Aug  3 17:20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[david@vhpchead multicluster]$ bclusters &lt;br /&gt;
[Job Forwarding Information ]&lt;br /&gt;
LOCAL_QUEUE     JOB_FLOW   REMOTE     CLUSTER    STATUS    &lt;br /&gt;
sendq           send       receiveq   pcm30_clus ok        &lt;br /&gt;
&lt;br /&gt;
[Resource Lease Information ]&lt;br /&gt;
REMOTE_CLUSTER  RESOURCE_FLOW   STATUS     &lt;br /&gt;
pcm30_cluster1  IMPORT          ok&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[david@vhpchead ~]$ bhosts -w&lt;br /&gt;
HOST_NAME          STATUS          JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV &lt;br /&gt;
compute000         ok              -     12      0      0      0      0      0&lt;br /&gt;
compute001         ok              -     12      0      0      0      0      0&lt;br /&gt;
compute002         ok              -     24      0      0      0      0      0&lt;br /&gt;
compute003         ok              -      8      0      0      0      0      0&lt;br /&gt;
compute004         ok              -      8      0      0      0      0      0&lt;br /&gt;
compute005         ok              -      8      0      0      0      0      0&lt;br /&gt;
compute007         ok              -      8      0      0      0      0      0&lt;br /&gt;
compute008         ok              -      8      0      0      0      0      0&lt;br /&gt;
compute009         ok              -      8      0      0      0      0      0&lt;br /&gt;
pcmcomp000@pcm30_cluster1 ok              -     12      0      0      0      0      0&lt;br /&gt;
vhpchead           ok              -      8      0      0      0      0      0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Update System  =====&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
addhost -u&lt;br /&gt;
# Or if you edited files in /opt/lsf/conf&lt;br /&gt;
lsadmin reconfig&lt;br /&gt;
badmin mbdrestart&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Note: if the configuration doesn&amp;#039;t apply correctly, run the &amp;#039;&amp;#039;&amp;#039;lsadmin&amp;#039;&amp;#039;&amp;#039; and &amp;#039;&amp;#039;&amp;#039;badmin&amp;#039;&amp;#039;&amp;#039; commands listed above to verify the configuration files (&amp;#039;&amp;#039;&amp;#039;addhost -u&amp;#039;&amp;#039;&amp;#039; does not report configuration errors correctly!)&lt;br /&gt;
&lt;br /&gt;
===== Setup IPtables =====&lt;br /&gt;
* The clusters&amp;#039; LSF daemons need to communicate with each other; by default only SSH traffic is allowed on eth1, so update iptables on both head nodes&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Generated by iptables-save v1.3.5 on Fri Jul  1 17:40:27 2011&lt;br /&gt;
*nat&lt;br /&gt;
:PREROUTING ACCEPT [1339:165189]&lt;br /&gt;
:POSTROUTING ACCEPT [205:14830]&lt;br /&gt;
:OUTPUT ACCEPT [516:36221]&lt;br /&gt;
-A POSTROUTING -o eth1 -j MASQUERADE &lt;br /&gt;
COMMIT&lt;br /&gt;
# Completed on Fri Jul  1 17:40:27 2011&lt;br /&gt;
# Generated by iptables-save v1.3.5 on Fri Jul  1 17:40:27 2011&lt;br /&gt;
*filter&lt;br /&gt;
:INPUT ACCEPT [0:0]&lt;br /&gt;
:FORWARD ACCEPT [0:0]&lt;br /&gt;
:OUTPUT ACCEPT [43133:352914090]&lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 873 -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT &lt;br /&gt;
# multicluster&lt;br /&gt;
-A INPUT -i eth1 --source 172.28.10.0/24 -p tcp -m state --state NEW -m tcp --dport 7869 -j ACCEPT&lt;br /&gt;
-A INPUT -i eth1 --source 172.28.10.0/24 -p tcp -m state --state NEW -m tcp --dport 6878 -j ACCEPT&lt;br /&gt;
-A INPUT -i eth1 --source 172.28.10.0/24 -p tcp -m state --state NEW -m tcp --dport 6881 -j ACCEPT&lt;br /&gt;
-A INPUT -i eth1 --source 172.28.10.0/24 -p tcp -m state --state NEW -m tcp --dport 6882 -j ACCEPT&lt;br /&gt;
# end multicluster&lt;br /&gt;
-A INPUT -i eth0 -j ACCEPT &lt;br /&gt;
-A INPUT -i lo -j ACCEPT &lt;br /&gt;
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT &lt;br /&gt;
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT &lt;br /&gt;
-A INPUT -i eth1 -j REJECT --reject-with icmp-port-unreachable &lt;br /&gt;
-A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT &lt;br /&gt;
-A FORWARD -i eth0 -j ACCEPT &lt;br /&gt;
COMMIT&lt;br /&gt;
# Completed on Fri Jul  1 17:40:27 2011&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== /etc/hosts - add clusters =====&lt;br /&gt;
* On each node, add the other cluster&amp;#039;s head node to the hosts file (if not using external DNS that can resolve both hostnames)&lt;br /&gt;
* As these are cluster &amp;#039;&amp;#039;&amp;#039;external&amp;#039;&amp;#039;&amp;#039; hosts, add them to &amp;#039;&amp;#039;&amp;#039;/etc/hosts.append&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# /etc/hosts.append on pcm-mctest&lt;br /&gt;
172.28.10.69   	pcm30.viglen.co.uk	pcm30&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Then update the hosts file and sync across cluster&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
kusu-genconfig hosts &amp;gt; /etc/hosts&lt;br /&gt;
cfmsync -f&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Check MultiCluster Status =====&lt;br /&gt;
* Use &amp;#039;&amp;#039;&amp;#039;bclusters&amp;#039;&amp;#039;&amp;#039; and &amp;#039;&amp;#039;&amp;#039;lsclusters&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Status should be &amp;#039;&amp;#039;&amp;#039;ok&amp;#039;&amp;#039;&amp;#039;; if you see &amp;#039;&amp;#039;&amp;#039;disc&amp;#039;&amp;#039;&amp;#039;, there may be a communication problem&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[root@pcm-mctest lsf]# bclusters &lt;br /&gt;
[Job Forwarding Information ]&lt;br /&gt;
LOCAL_QUEUE     JOB_FLOW   REMOTE     CLUSTER    STATUS    &lt;br /&gt;
receiveq        recv       -          pcm30_clus ok        &lt;br /&gt;
&lt;br /&gt;
[Resource Lease Information ]&lt;br /&gt;
No resources have been exported or borrowed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[root@pcm-mctest lsf]# lsclusters &lt;br /&gt;
CLUSTER_NAME   STATUS   MASTER_HOST               ADMIN    HOSTS  SERVERS&lt;br /&gt;
pcm-mctest_clu ok       pcm-mctest             hpcadmin        2        2&lt;br /&gt;
pcm30_cluster1 ok       pcmtest.viglen.co.     hpcadmin        2        2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Move Files between Clusters =====&lt;br /&gt;
* LSF uses lsrcp to copy files between clusters&lt;br /&gt;
* Set up both clusters to use SSH for [lsrcp|rsh|rcp] by replacing/creating links for the binaries on the head node. Create these files in /etc/cfm/[compute-group]&lt;br /&gt;
* Also ensure SSH keys are set up between the clusters (passwordless access)&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Either change lsrcp:&lt;br /&gt;
/opt/lsf/7.0/linux2.6-glibc2.3-x86_64/bin/lsrcp -&amp;gt; [scp] #mkdir /etc/cfm/compute-centos-5.6-x86_64/$LSF_BINDIR&lt;br /&gt;
&lt;br /&gt;
# Or change rcp (the default lsrcp will fall back on rcp)&lt;br /&gt;
/usr/kerberos/bin/rsh -&amp;gt; [ssh]&lt;br /&gt;
/usr/kerberos/bin/rcp -&amp;gt; [scp]&lt;br /&gt;
&lt;br /&gt;
# ssh keys&lt;br /&gt;
cat ~/.ssh/id_rsa.pub | ssh user@remote.machine.com &amp;#039;cat &amp;gt;&amp;gt; .ssh/authorized_keys&amp;#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
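&lt;br /&gt;
* On systems where OpenSSH&amp;#039;s ssh-copy-id is available, the key exchange above can also be done with it (hostname is illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# generate a key if one doesn&amp;#039;t exist yet, then push it to the remote head node&lt;br /&gt;
ssh-keygen -t rsa&lt;br /&gt;
ssh-copy-id user@remote.machine.com&lt;br /&gt;
# verify passwordless access&lt;br /&gt;
ssh user@remote.machine.com hostname&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;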
&lt;br /&gt;
* NOTE: Output files created on the remote cluster are not automatically copied back&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Sample script that copies an input file across, then copies all output files back&lt;br /&gt;
#BSUB -q sendq&lt;br /&gt;
#BSUB -o sendq_output.%J.txt&lt;br /&gt;
#BSUB -e sendq_error.%J.txt&lt;br /&gt;
#BSUB -f &amp;quot;/home/david/test_input.inp &amp;gt; /home/david/copied_across.inp&amp;quot;&lt;br /&gt;
#BSUB -f &amp;quot;/home/david/result_copied.out &amp;lt; /home/david/result.out&amp;quot;&lt;br /&gt;
#BSUB -f &amp;quot;/home/david/sendq_output_copied.%J.txt &amp;lt; /home/david/sendq_output.%J.txt&amp;quot;&lt;br /&gt;
#BSUB -f &amp;quot;/home/david/sendq_error_copied.%J.txt &amp;lt; /home/david/sendq_error.%J.txt&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;hi&amp;quot;&lt;br /&gt;
hostname &lt;br /&gt;
id&lt;br /&gt;
cat /home/david/copied_across.inp&lt;br /&gt;
&lt;br /&gt;
hostname &amp;gt;&amp;gt; result.out&lt;br /&gt;
id &amp;gt;&amp;gt; result.out&lt;br /&gt;
&lt;br /&gt;
sleep 30&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== lsf.shared bug =====&lt;br /&gt;
/etc/cfm/templates/default.lsf.shared &amp;#039;&amp;#039;&amp;#039;Cluster&amp;#039;&amp;#039;&amp;#039; section gets overwritten on sync. The following changes need to be made:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
vi /opt/kusu/lib/plugins/genconfig/lsfshared_7_0_6.py&lt;br /&gt;
&lt;br /&gt;
# Change from: &lt;br /&gt;
 84             if re.compile(&amp;quot;^End.*Cluster&amp;quot;).search(instr):&lt;br /&gt;
 85                 inClusterSection = False&lt;br /&gt;
 86             else:&lt;br /&gt;
 87                 if re.compile(&amp;quot;^ClusterName&amp;quot;).search(instr):&lt;br /&gt;
 88                     pass&lt;br /&gt;
 89                 else:&lt;br /&gt;
 90                     if inClusterSection:&lt;br /&gt;
 91                         print clusterName&lt;br /&gt;
 92                         continue&lt;br /&gt;
&lt;br /&gt;
# Change to:&lt;br /&gt;
 84             if re.compile(&amp;quot;^End.*Cluster&amp;quot;).search(instr):&lt;br /&gt;
 85                 inClusterSection = False&lt;br /&gt;
 86             else:&lt;br /&gt;
 87                 if re.compile(&amp;quot;^ClusterName&amp;quot;).search(instr):&lt;br /&gt;
 88                     pass&lt;br /&gt;
 89                 else:&lt;br /&gt;
 90                     if inClusterSection and re.compile(&amp;quot;^XXX_clustername_XXX&amp;quot;).search(instr): # &amp;lt;---- This line!&lt;br /&gt;
 91                         print clusterName&lt;br /&gt;
 92                         continue&lt;br /&gt;
 93 &lt;br /&gt;
 94             print instr,&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
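* To see what the patched loop does, here is a minimal standalone sketch (hypothetical sample input; the Begin Cluster handling is assumed, as that part of the plugin is not shown in the diff above):&lt;br /&gt;

```python
import re

def render_cluster_section(lines, cluster_name):
    # Mimics the patched lsfshared_7_0_6.py loop: inside the Cluster
    # section, only the XXX_clustername_XXX placeholder is replaced with
    # the generated cluster name; hand-added rows (e.g. the remote
    # cluster) now pass through untouched instead of being overwritten.
    out = []
    in_cluster = False  # assumed set by plugin code not shown in the diff
    for instr in lines:
        if re.search(r"^End.*Cluster", instr):
            in_cluster = False
        elif re.search(r"^Begin.*Cluster", instr):
            in_cluster = True
        elif re.search(r"^ClusterName", instr):
            pass
        elif in_cluster and re.search(r"^XXX_clustername_XXX", instr):
            out.append(cluster_name)
            continue
        out.append(instr)
    return out

sample = [
    "Begin Cluster",
    "ClusterName             Servers",
    "XXX_clustername_XXX",
    "pcm-mctest_cluster1     pcm-mctest",
    "End Cluster",
]
print(render_cluster_section(sample, "pcm30_cluster1"))
```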
&lt;br /&gt;
* Verify the update has been applied correctly: &lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
kusu-genconfig lsfshared_7_0_6 &amp;#039;insert-cluster-name&amp;#039;&lt;br /&gt;
e.g: kusu-genconfig lsfshared_7_0_6 pcm30_cluster1&lt;br /&gt;
&lt;br /&gt;
# once confirmed as updating correctly:&lt;br /&gt;
addhost -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Michael</name></author>
	</entry>
</feed>