7.6 CMS Computing

7.6.1 Local computing resource and usage

CMS has 446 CPU cores in the local cluster, which is managed by HTCondor. Log in to lxlogin.ihep.ac.cn to use it.

To submit a job, pass the submit description file (here named `submit`) to `condor_submit` under the CMS accounting group:

 condor_submit submit -group cms
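The submit description file (`submit` above) might look like the following minimal sketch; the executable name and file names are illustrative placeholders, not site-mandated values:

```
# minimal HTCondor submit description file (sketch)
universe   = vanilla
executable = run_analysis.sh                  # placeholder: your job script
output     = job.$(Cluster).$(Process).out    # stdout of each job
error      = job.$(Cluster).$(Process).err    # stderr of each job
log        = job.$(Cluster).$(Process).log    # HTCondor event log
queue 1
```

`$(Cluster)` and `$(Process)` are standard HTCondor macros, so each job in a cluster writes to its own files.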

7.6.2 Local storage resource and usage

There are three kinds of storage areas:

(1) public storage area for data

/publicfs/cms

(2) public storage area for software

/afs/ihep.ac.cn/soft/CMS/  
/cvmfs/cms.cern.ch/ (maintained by CERN)

(3) private storage area

/workfs, /scratchfs, /afs/ihep.ac.cn/users/

Notes:

(1) /workfs: for important personal files; each user is limited to 5 GB and 50000 files. The computing center provides a backup service for this directory. It can be accessed from lxlogin, but jobs on the work nodes cannot write to it.

(2) /scratchfs: for temporary files only; each user is limited to 500 GB, and files are kept for two weeks. It can be accessed from both lxlogin and the work nodes.

(3) /afs/ihep.ac.cn/users: for private user files only; each user is limited to 500 MB. The computing center provides a backup service for this directory.
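A quick way to check your own usage against the /workfs limits is with standard tools; the sketch below is illustrative (`workfs_usage` is just a convenience name, and you should substitute your own directory):

```shell
# Print disk usage and file count for a directory, for comparison
# against the /workfs limits (5 GB and 50000 files per user).
workfs_usage() {
    dir=${1:?usage: workfs_usage <directory>}
    du -sh "$dir"                   # total space used
    find "$dir" -type f | wc -l     # number of regular files
}
```

For example, `workfs_usage /workfs/<your-directory>` prints the space used and the file count.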

7.6.3 Grid computing resources

There are 544 CPU cores, managed by CREAM CE.

7.6.4 Grid storage resources

There is 540 TB of storage, managed by dCache.

7.6.5 Grid job submission

(1) log in to lxlogin

ssh lxlogin.ihep.ac.cn

(2) initialize the grid proxy

voms-proxy-init --voms cms

(3) source CMSSW env

export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch/
source $VO_CMS_SW_DIR/cmsset_default.sh
#list the available CMSSW versions
scramv1 list -c CMSSW
#the first time, create a CMSSW project area (replace X_Y_Z with an actual version)
scramv1 project CMSSW CMSSW_X_Y_Z
#initialize the runtime environment
cd CMSSW_X_Y_Z/src/
eval `scramv1 runtime -sh`

(4) initialize CRAB env

`source /cvmfs/cms.cern.ch/crab/crab.sh` (use `crab.csh` for csh-type shells)

(5) create and submit jobs

`crab submit [-c <CRAB-configuration-file>]`

(6) check whether the jobs have finished

`crab status [-d <CRAB-project-directory>]`

Retrieve the logs after the jobs finish:

`crab getlog [-d <CRAB-project-directory>]`

(7) For more details, see:

https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookCRAB3Tutorial
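The `<CRAB-configuration-file>` passed to `crab submit` is a Python file; a minimal sketch is shown below. All values are placeholders (request name, pset file, input dataset), and the full set of attributes is documented in the tutorial linked above. `T2_CN_Beijing` is the local storage site.

```python
# crabConfig.py -- minimal sketch; all values below are placeholders
from CRABClient.UserUtilities import config

config = config()
config.General.requestName = 'MyAnalysis_v1'            # placeholder task name
config.JobType.pluginName  = 'Analysis'
config.JobType.psetName    = 'pset_cfg.py'              # your CMSSW config file
config.Data.inputDataset   = '/Primary/Processed/TIER'  # placeholder dataset
config.Data.splitting      = 'FileBased'
config.Data.unitsPerJob    = 10                         # files per job
config.Site.storageSite    = 'T2_CN_Beijing'            # where the output is stored
```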

7.6.6 Transfer via PhEDEx

PhEDEx is the CMS dataset transfer system. Through PhEDEx, CMS users submit dataset transfer requests and deletion requests, and monitor transfers.

Note:

(1) Transfer requests are made at the dataset level only, not for single files.

(2) Before submitting a transfer request, users must load their grid certificate into the browser so that PhEDEx can identify them; only members of the CMS VO are allowed to submit requests.

The procedure of submitting transfers:

(1) transfer requests

Open the PhEDEx submission page below and fill in the information needed to make a transfer:

https://cmsweb.cern.ch/phedex/prod/Request::Create?type=xfer
Data Item: the dataset name
Destination: the site to which the dataset should be transferred (e.g. T2_CN_Beijing)

(2) check transfer status

https://cmsweb.cern.ch/phedex/prod/Data::Subscriptions#state=create_since%3D1367031333

(3) deletion requests (these remove both the data registered in DBS and the files on the SE)

Open the PhEDEx deletion page below and fill in the information needed to delete a submission:

https://cmsweb.cern.ch/phedex/prod/Request::Create?type=delete
Data Item: the dataset name
Destination: the site from which the dataset should be deleted (e.g. T2_CN_Beijing)

7.6.7 Access SE data

(1) SE (Storage Element)

SRM server: srm.ihep.ac.cn

Root directory: /pnfs/ihep.ac.cn/data/cms/

(2) Access SE data via SRM

Create a directory:

srmmkdir srm://srm.ihep.ac.cn:8443/pnfs/ihep.ac.cn/data/cms/zhangxm/test1

List SE files:

srmls srm://srm.ihep.ac.cn:8443/pnfs/ihep.ac.cn/data/cms/zhangxm/test1

Delete a directory:

srmrmdir srm://srm.ihep.ac.cn:8443/pnfs/ihep.ac.cn/data/cms/zhangxm/test1

Delete a file:

srmrm srm://srm.ihep.ac.cn:8443/pnfs/ihep.ac.cn/data/cms/largefile2

Copy files (first local to SE, then SE to local):

srmcp -debug=true file:///home/zhangxm/test/srmv2test/test1 srm://srm.ihep.ac.cn:8443/pnfs/ihep.ac.cn/data/cms/zhangxm/test1
srmcp -debug=true srm://srm.ihep.ac.cn:8443/pnfs/ihep.ac.cn/data/cms/zhangxm/test1 file:///home/zhangxm/test/srmv2test/test2
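Since every command above repeats the same endpoint and root directory, a small helper can build the full SRM URL from a path relative to the CMS root. This is a convenience sketch (`srmurl` is an illustrative name, not a site-provided tool):

```shell
# Build a full SRM URL from a path relative to the CMS root directory,
# so the endpoint only has to be typed once.
SRM_ENDPOINT="srm://srm.ihep.ac.cn:8443"
CMS_ROOT="/pnfs/ihep.ac.cn/data/cms"
srmurl() {
    echo "${SRM_ENDPOINT}${CMS_ROOT}/$1"
}
```

For example, `srmls "$(srmurl zhangxm/test1)"` expands to the full `srmls` command shown above.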
