You will be running a virtualized system ("the container") for this package on your own server or workstation (the "host machine"). The container provides a complete CMAQ working environment on that virtualized system (for CMAQ versions 5.3.1 and 5.3.2), without needing to build anything and without needing to worry about installing prerequisite software (compilers, libraries, etc.), except for Singularity itself.
This Singularity container acts as a virtual machine with its own operating system (CentOS-7, in this case) and with compilers, libraries, and applications installed on it. Because of that virtualized set-up, all the necessary dependencies are managed within that environment and you do not have to worry about installing prerequisites, building the models, etc.; you can just use Singularity commands to run the models on that virtual machine, (almost) no matter what machine and operating system you're using as the host for it.
Throughout, CMAQ_${VRSN} means your choice of CMAQ_531 (for CMAQ version 5.3.1) or CMAQ_532 (for CMAQ version 5.3.2).
This package has two components:
- A Singularity container cmaq.simg that contains a virtualized Linux OS, the CMAQ model, its pre-processors and post-processors, the SMOKE emissions model, as well as various "tool", utility and analysis-&-visualization programs (with all PATHs and aliases already set up for you on the container); and
- A "local" directory singularity-cmaq/ for your host-machine, that contains various sample scripts for interacting with CMAQ and SMOKE submodels, tools, and other programs on that container, as well as this documentation.
All modeling components are compiled for the "64-bit medium memory model" (see https://cjcoats.github.io/ioapi/AVAIL.html#medium) so that runs even on very-large grids are supported. Only the tools VERDI and Panoply should be problematic in this regard.
Installed in this container are:
Note that two-way WRF-CMAQ is not supported on this container.
- CMAQ-git of June 10, 2020 and Nov 22, 2020 (versions 5.3.1 and 5.3.2), including CCTM (plus the CCTM_DDM3D and CCTM_ISAM executables for 5.3.2); preprocessors bcon, create_omi, icon, and mcip; postprocessors appendwrf, block_extract, combine, sitecmp_dailyo3, bldoverlay, calc_tmetric, hr2day, sitecmp, and writesite; and utility programs chemmech, create_ebi, inline_phot_preproc, and jproc
- SMOKE-git of June 10, 2020 (version 4.7), including run-scripts and programs aggwndw, beld3to2, bluesky2inv, cemscan, cntlmat, elevpoint, extractida, gcntl4carb, gentpro, geofac, grdmat, grwinven, inlineto2d, invsplit, layalloc, laypoint, met4moves, metcombine, metscan, movesmrg, mrgelev, mrggrid, mrgpt, pktreduc, saregroup, smk2emis, smkinven, smkmerge, smkreport, spcmat, surgtool, temporal, tmpbeis3, and uam2ncf
- AMET version 1.4 - model evaluation tool (scripting currently under development...)
- verdi version 2.0_beta - visualization tool
- pave version 3.0-beta - I/O API / UAM / CAMX data visualization tool, from MCNC and Carlie J. Coats, Jr., Ph.D. Supports data sets larger than 2 GB.
- ncview version 2.1.2 - netCDF-file visualization tool, from UCSD
- panoply - netCDF, HDF and GRIB data viewer tool, from NASA
- GrADS version 2.0.2 - Grid Analysis and Display System, from GMU
- NCAR Graphics and NCO-4.7.5 - from the University Corporation for Atmospheric Research (which runs NCAR for NSF)
- gnuplot-4.6.2 - command-line driven graphing utility
- I/O API-3.2 version 2020-04-11 17:51:44Z
- M3Tools version 2020-04-18 16:10:51Z
- NetCDF-C 4.3.3.1, NetCDF-Fortran 4.2-16, and NetCDF-C++ 4.2-8
- gcc-4.8.5 and gfortran-4.8.5 - compilers
- MPICH-3, MVAPICH-2, and OpenMPI-3 - MPI libraries, compilers, and utility programs, for gcc/gfortran
- ddd and gdb - GUI and command-line debuggers
- nedit-5.7 - GUI programming editor, aliased to xx
- xxdiff - GUI difference tool, aliased to xd
- okular-4.10 - document (PDF/PostScript/MarkDown) viewer, e.g., to view CMAQ docs in /opt/CMAQ_531/DOCS
- findent - Fortran indentation/code-transformation tool
Because the Singularity container itself is an "immutable image", any new data files (etc.) that you create cannot "live" in the container; instead, they must be in directories that you mount from your host-machine onto the container as part of using singularity to run commands on the container. The supplied scripts give examples of how this works; more information is given in a section below.
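For orientation, here is a minimal sketch of that pattern; the host data-path and the location of the container image cmaq.simg are placeholders for your own setup, and the supplied scripts do the equivalent with their own shell variables:

    # hypothetical host data directory and container-image location
    set HOSTDATA  = /work/MYDATA
    set CONTAINER = /work/cmaq.simg
    # run one command on the container; anything written under /opt/CMAQ_532/data
    # by that command actually lands in ${HOSTDATA} on the host
    singularity exec \
        --bind ${HOSTDATA}:/opt/CMAQ_532/data \
        ${CONTAINER} ls /opt/CMAQ_532/data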
On this container are directories
- /opt/ioapi-3.2/ioapi and /opt/ioapi-3.2/m3tools
- I/O API source and INCLUDE-files, and M3Tools source.
- /opt/ioapi-3.2/Linux2_x86_64gfort_medium/ and /opt/ioapi-3.2/Linux2_x86_64gfort_mediumdbg/
- Optimized and debug/check-everything I/O API libraries and module-files, and M3Tools executables.
- /opt/CMAQ_${VRSN}/scripts/
- worker-scripts designed to run non-CCTM CMAQ pre-processor, post-processor, and utility-program modeling components. These are invoked by host-machine scripts such as cmaq_cctm.csh or cmaq_icon.csh (below)
- /opt/CMAQ_${VRSN}/bin/
- optimized executables for the non-CCTM CMAQ modeling components
- /opt/CMAQ_${VRSN}/CCTM/scripts/BLD_CCTM_${VRSN}_gcc[dbg]-*/, /opt/CMAQ_532/CCTM/scripts/BLD_CCTM_532_ISAM_gcc[dbg]-*/, and /opt/CMAQ_532/CCTM/scripts/BLD_CCTM_532_DDM3D_gcc[dbg]-*/
- optimized and debug CMAQ CCTM executables for the various MPI versions (*). ISAM and DDM3D executables are available for CMAQ-5.3.2 only.
- /opt/SMOKE/scripts/run/
- worker-scripts to run SMOKE. These are invoked by host-machine scripts such as smk_point_nctox.csh (below)
- /opt/SMOKE/Linux2_x86_64gfort_medium/ and /opt/SMOKE/Linux2_x86_64gfort_mediumdbg/
- optimized and debug SMOKE executables
For more about Singularity, see the Singularity User Guide at https://sylabs.io/guides/3.5/user-guide/index.html and "Singularity: A Container System for HPC Applications" at https://cloud4scieng.org/singularity-a-container-system-for-hpc-applications/
Accompanying this container and installed on your host-machine will be a directory singularity_cmaq/ with five subdirectories:
- Docs/
- with this document singularity_cmaq.html, and with configuration-files indicating how this singularity container was configured;
- Logs
- for log-files;
- Scripts/
- sample host-scripts to run CMAQ modeling components, SMOKE, vis programs, or interactive shell tcsh on the container. The paradigm is that these scripts set up environment variables (etc.) on the container, then do singularity exec of either vis-program executables or "worker scripts" that actually run the modeling programs.
Note that the cmaq_*, smk_*, and singularity-term.csh scripts also contain batch-queue directives, e.g., for queue/batch usage on the UNC servers longleaf or dogwood, where singularity is only available on the compute-nodes.
Reference copies of these scripts are available in the list below, for you to view or download (use browser-command "Save link as..."):
- singularity-shell.csh
- Log on to the container from the host command-line (non-batch! ...in your current terminal-window).
- singularity-term.csh
- Launch an interactive rxvt terminal from the container (e.g., from a debug batch-queue)
- cmaq_ncview.csh
- Run visualization-tool ncview
- cmaq_panoply.csh
- etc...
- cmaq_verdi.csh
- copy_cmaq_bld.csh
- Copy a CMAQ CCTM build-directory to the host machine.
- copy_cmaq_nml.csh
- to copy the CMAQ CCTM namelist-files to a specified directory on the host machine.
- cmaq_cctm.csh
cmaq_cctm.mpich.csh
cmaq_cctm.mvapich.csh
cmaq_cctm.openmpi.csh
cmaq_ddm.mpich.csh
cmaq_ddm.mvapich.csh
cmaq_ddm.openmpi.csh
cmaq_isam.mpich.csh
cmaq_isam.mvapich.csh
cmaq_isam.openmpi.csh
cmaq_isam.csh
- Set up the environment on the container for a (multi-day) CMAQ CCTM run (for the "vanilla", DDM3D-enabled, or ISAM-enabled model, respectively) and then use the container's run_cctm.csh (etc.) worker-scripts to execute that run.
Note that there are versions of these scripts for each of the supported MPI versions.
- cmaq_appendwrf.csh
- Run CMAQ post-processor appendwrf
- cmaq_bcon.csh
- etc...
- cmaq_bldoverlay.csh
- cmaq_block_extract.csh
- cmaq_calc_tmetric.csh
- cmaq_combine.csh
- cmaq_icon.csh
- cmaq_mcip.csh
- cmaq_writesite.csh
- smk_area_nctox.csh
- Set up the environment and run a (multi-day) SMOKE area source run on the container.
- smk_bg_nctox.csh
- etc...
- smk_edgar_HEMI108k.csh
- smk_met4moves.nctox.csh
- smk_mrgall_nctox.csh
- smk_nonroad_nctox.csh
- smk_point_nctox.csh
- smk_rateperdistance_nctox.csh
- smk_rateperhour_nctox.csh
- smk_rateperprofile_nctox.csh
- smk_ratepervehicle_nctox.csh
- [AMET scripts]
Your host machine needs to have Singularity installed on it. Frequently, Linux vendors will have native Singularity packages available for you to use, so that Singularity installation is easy and painless (su root; yum install singularity or su root; apt-get install singularity). If not, the Singularity User Guide gives instructions on how to install it on your own system.
Note: on the compute clusters at UNC (and possibly other sites), Singularity is configured to run on the compute nodes only, not on the login nodes. The cmaq_cmaq/Scripts-BATCH/ versions of the scripts are intended for this usage, e.g., on the UNC cluster dogwood. For other such situations, consult your cluster's systems administrator for instructions on how to run Singularity applications and (for the CCTM) how to select the appropriate MPI implementation.
CMAQ CCTM NOTE: the MPI implementation is the sticky point. Because the different MPI implementations are not compatible with each other (mpirun from MPICH-3 will not work with a program built with OpenMPI, for example), your host machine needs to be running the same MPI implementation as the CCTM executable on this Singularity container. There are CCTM builds for three different MPI implementations: MPICH-3, MVAPICH-2, and OpenMPI-3; script-variable MPIVERSION in the cmaq_cctm*.csh script selects which of these will be used.
In this container, the only MPI application affected by this is the CMAQ CCTM; all of the other applications in this container are either "serial" or (shared-memory) OpenMP-parallel (some M3Tools and SMOKE programs) and don't need to use mpirun at all.
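One way to match things up is sketched below; it assumes mpirun is on your host PATH (the version-report format differs among MPI distributions), and the openmpi value is just an example:

    # on the host:  see which MPI implementation mpirun belongs to
    mpirun --version
    # then, in your cmaq_cctm*.csh host-script, select the matching CCTM build:
    setenv SINGULARITYENV_MPIVERSION openmpi    # or mpich, or mvapich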
One way of using this container is to think of it as a new machine that already contains pre-built executables, together with a number of additional modeling tools. You can either use the pre-written modeling scripts (below), or you can just "log in" to this machine and work, writing your own scripts etc. as you may wish, using these provided scripts:
- singularity-shell.csh
- Log in to a tcsh-shell session on the container; or
- singularity-term.csh
- Open a new terminal from the container, with a tcsh-shell session in it.
Singularity will automatically mount your home directory and the /tmp directory from your host-machine onto the container's virtual machine. You can also mount additional directories from your host machine to the container's virtual machine. In these scripts, you can set CMAQDATA and SMOKEDATA to your host-machine directories for CMAQ and SMOKE data, respectively (these will be mounted onto the standard container-directories /opt/CMAQ_532/data/ and /opt/SMOKE/data/), or you can set extradirs to mount a list of directories onto the container under the same names as on the host machine. For example,

    set extradirs = "-B /proj/ie/proj,/work,/ms/${USER}"

will make these scripts mount directories /proj/ie/proj, /work, and /ms/<your-login-name> from the host-machine onto the container, under the same names as on the host-machine.
From such a container-session, you can run various programs or models that are already installed on the container interactively, or you can write scripts that run them (or that do other tasks), just as you would in any other login-session on any other machine. You will need to know the directories for the modeling-components you want to run (see above); paths for the other utilities listed above, such as GrADS, NCAR Graphics, and the nedit editor, are already set up.
Note that the container itself is read-only; your session can only write to files in directories that you have mounted from your host-machine; you cannot write directly into the container's own directories.
Note also that executables from your host machine will almost certainly have shared-library problems that keep them from running in a container-session. To build new executables that will run in a container-session, you need to build them in a container-session (therefore using the container's own compilers and libraries), in a directory that you have mounted from the host machine: see APPENDIX 2 below.
More details are given in the sections below.
There are three (and a half) parts to this issue:
- Where is modeling software installed?
- What directories are mounted from the container's host (and how do you mount additional data directories)?
- How do you establish environment variables on the container (from a script on your host-machine)?
On the container, modeling software is installed under directory /opt/ (following the UNIX tradition for software that has its own directory-hierarchy), in a fashion generally similar to the usual CMAQ, SMOKE, and I/O API directory hierarchies but adapted to the specifics of this container. A selection of the relevant top few levels of that installation hierarchy is given in the APPENDIX below. Note that all the CMAQ-related optimized executables are sym-linked into directory /opt/CMAQ_${VRSN}/bin/; all the extra analysis tools, etc., are in /opt/bin/ or /opt/ioapi-3.2/Linux2_x86_64gfort_medium/, which are already in your PATH on the container; the container's run-CMAQ-component scripts are in /opt/CMAQ_${VRSN}/scripts/, and data in your host-machine data-directory ${HOSTDATA} is generally mounted on the container's /opt/CMAQ_${VRSN}/data/; the container's SMOKE scripts are in /opt/SMOKE/scripts/run/, and ${HOSTDATA} is mounted on /opt/SMOKE/data/, as indicated in the APPENDIX.
Selected Host-machine CMAQ Directories and Files:
Singularity mounts various directories from the host-machine; it is in these directories that you will wish to have the container "do its work". Because the container itself is "immutable" (i.e., read-only), any outputs you create must be in those directories mounted from the host-machine.
The assumption in the current "execute a CMAQ model component on the container" scripts is that a single master data-directory ${HOSTDATA} on the host should be mounted onto the container's /opt/CMAQ_${VRSN}/data/: that master data-directory will have sub-directories for all of the input data and for the CCTM output data and logs. The expected sub-directory structure for the master directory is given below.

Note that this is a unified-and-simplified directory structure used by all of the CMAQ modeling components. The top-level subdirectories of ${HOSTDATA} are grid- or case-specific subdirectories named for environment variable ${APPL} (or possibly more than one of these, e.g., for programs ICON and BCON that are used with nested-grid applications). For consistency's sake among all the scripts, and to avoid "brittleness" (failure to work correctly from version to version without detailed script-changes), component names do not have program-version numbers in them: met/mcip, for example, instead of met/mcipv5.0.

    ${APPL}
    ${APPL}/GRIDDESC
    ${APPL}/WRF-CMAQ/
    ${APPL}/WRF-CMAQ/wrf_inputs/
    ${APPL}/cctm/
    ${APPL}/emis/
    ${APPL}/emis/inln_point/
    ${APPL}/emis/inln_point/othpt/
    ${APPL}/emis/inln_point/pt_oilgas/
    ${APPL}/emis/inln_point/ptegu/
    ${APPL}/emis/inln_point/ptagfire/
    ${APPL}/emis/inln_point/ptnonipm/
    ${APPL}/emis/inln_point/ptfire/
    ${APPL}/emis/inln_point/ptfire_othna/
    ${APPL}/emis/inln_point/cmv_c3/
    ${APPL}/emis/inln_point/stack_groups/
    ${APPL}/emis/gridded_area/
    ${APPL}/emis/gridded_area/rwc/
    ${APPL}/emis/gridded_area/gridded/
    ${APPL}/icbc/
    ${APPL}/land/
    ${APPL}/logs/
    ${APPL}/met/
    ${APPL}/met/wrf/
    ${APPL}/met/mcip/
    ${APPL}/POST/

where in fact for multi-part or multi-grid studies (and particularly for program ICON) there may be several sets of these sub-directories, each having its own distinguishing ${APPL}.

A number of additional directories are automatically mounted by a singularity ... command:
- ${HOME}, your home directory
- ${PWD}, the directory from which singularity was invoked
- /tmp, and various system directories

You can also use the

    --bind <host-machine-directory>:<container-directory>

(or -B instead of --bind) command-line option for the singularity commands to specify what additional host-machine directories are mounted on the container, and at what locations. If the container-directory is not given, then the directory is available on the container under the same name as on the host. For example,

    --bind /proj

would mount the /proj directory from the host onto the container, also as /proj.

This command-line option is how we will normally deal with input and output directories for model-data. For example, if the container is ${CONTAINER}=/work/cmaq.simg, and the host-directory is ${HOSTDATA}=/work/SCRATCH/CMAQv5.3.1_Benchmark_2Day, the following command mounts that directory on container-directory /opt/CMAQ_${VRSN}/data before invoking container-script /opt/CMAQ_${VRSN}/scripts/run_cctm.csh:
    singularity exec \
        --bind ${HOSTDATA}:/opt/CMAQ_${VRSN}/data \
        ${CONTAINER} /opt/CMAQ_${VRSN}/scripts/run_cctm.csh

Subdirectories of host data-directory ${HOSTDATA} will be seen on the container as matching subdirectories of the container data-directory /opt/CMAQ_${VRSN}/data. In this example, /work/SCRATCH/CMAQv5.3.1_Benchmark_2Day/2016_12SE1/met/ on the host corresponds to /opt/CMAQ_${VRSN}/data/2016_12SE1/met/ on the container, etc. The full subdirectory structure of the data directory is given above. Note that each --bind command-line option does only one mount-operation; if you wish to mount multiple directories from the host-machine, you need multiple --binds.
Note also that these mounts do not follow symbolic links, so you can't use ln -s ... to add sub-directories to them.

To set environment variables in the container, there is a special setenv form that is used in the host environment before invoking a singularity command: you prefix the desired environment-variable name with SINGULARITYENV_. For example, the following sequence in host-script Scripts-CMAQ/cmaq_cctm.csh

    setenv SINGULARITYENV_START_DATE   "2016-07-01"
    setenv SINGULARITYENV_START_TIME    0000000
    setenv SINGULARITYENV_RUN_LENGTH    2400000
    setenv SINGULARITYENV_TIME_STEP     100000
    setenv SINGULARITYENV_END_DATE     "2016-07-02"
    setenv SINGULARITYENV_APPL          2016_12SE1
    setenv SINGULARITYENV_EMIS          2016ff
    setenv SINGULARITYENV_PROC          mpi
    setenv SINGULARITYENV_NPCOL         1
    setenv SINGULARITYENV_NPROW         3
    setenv SINGULARITYENV_CTM_DIAG_LVL  1

will set the following environment variables on the container, where they are used to control the container script run_cctm.csh (in the above example):
START_DATE
START_TIME
RUN_LENGTH
TIME_STEP
END_DATE
APPL
EMIS
PROC
NPCOL
NPROW
CTM_DIAG_LVL
All of the scripts have been modified not only to fit the environment of the container, but also for consistency among themselves, for full control via environment variables, to support correct return of execution status, to support a common set of "verbose" options, and to support debugging.

Unfortunately, a number of CMAQ pre-processing, post-processing, and utility programs do not follow the modeling standard of terminating execution with I/O API routine M3EXIT(), which returns the program's exit status; this makes proper process management difficult for them.

The sample scripts from directory cmaq_cmaq/Scripts/ are of three types:
- Scripts that use singularity exec to run on-container executables (e.g., vis programs) or modeling scripts (found in /opt/CMAQ_${VRSN}/scripts/*csh for CMAQ components, or /opt/SMOKE/scripts/run/*csh), after setting up data directories mounted from your host machine and the environment variables used to control those scripts.
  For SMOKE scripts using singularity exec to run SMOKE applications, see the section below. Note that the standard SMOKE script-structure runs a (potentially large) set of time-independent SMOKE programs, followed by a sequence of per-day runs of a set of time-stepped SMOKE programs, and can be quite complex :-)
- Scripts singularity-shell.csh (for interactive use within your own terminal-window) and singularity-term.csh (for batch use), which (after setting up the environment and mounted directories) use the singularity shell command to give you a tcsh session on the container from your host-machine command-line, so that you can run interactive programs such as ncdump, ncview, m3stat (etc.), VERDI, or pave that are installed in the container: see Interactive Tool Use.
  singularity-term.csh launches a terminal from the container with a tcsh session in it, so that it can be used from batch queues.
  NOTE that for the UNC servers, singularity is not available from login-node command-lines; singularity-term.csh can be launched into a debug-queue, where it will launch an X-based terminal from the container, to give you that sort of command-line access there.
- Scripts copy_cmaq_bld.csh and copy_cmaq_nml.csh, which copy respectively a CMAQ CCTM build-directory or a CMAQ CCTM namelist-file from the container to your host machine.
CMAQ-component scripts using singularity exec to run a CMAQ modeling component, say foo, need to mount a data-directory ${HOSTDATA} on your host machine onto the expected data-directory /opt/CMAQ_${VRSN}/data on the container (using --bind), and to establish environment variables (of the form SINGULARITYENV_<name>) on the host that singularity maps into environment variables on the container, as shown below, to run the on-container modeling script run_foo.csh for that modeling component:

    ...
    set HOSTDATA  = <path for data directory on your host machine>
    set CONTAINER = <path for CMAQ container on your host machine>
    ...
    setenv SINGULARITYENV_<name> <value>
    ...
    singularity exec \
        --bind ${HOSTDATA}:/opt/CMAQ_531/data \
        ${CONTAINER} /opt/CMAQ_531/scripts/run_foo.csh
    set err_status = ${status}
    if ( ${err_status} != 0 ) then
        echo ""
        echo "********************************************************"
        echo "**  Error for /opt/CMAQ_531/scripts/run_foo.csh       **"
        echo "**  STATUS=${err_status}                              **"
        echo "********************************************************"
    endif
    exit( ${err_status} )

Note that the on-container modeling scripts always return the exit status (whether from M3EXIT() or SEGFAULT, or ...) of the program being executed, with an error-message to the log if the status indicates failure. This status is further passed back to the singularity exec scripts, which also write appropriate error-messages and return the status to their callers.

Generally, the singularity exec scripts will echo all output to the screen; to capture it in a log, you will need to re-direct it. For a modeling-component foo, if the package is installed under your home directory, that might look like

    [ cd ${HOME}/cmaq_cmaq/Scripts-CMAQ ]
    cmaq_foo.csh >& ../Logs/cmaq_foo.log &

For every such singularity exec script on your host machine, you will need to customize the following shell variables:
For batch-queue use of the scripts you may also need to customize the batch-queue parameters.
- ${HOSTDATA} - path for the data-directory on your host-machine
- ${CONTAINER} - path for the CMAQ container on your host-machine
For the CCTM scripts, you will also need to customize the MPI-version parameter MPIVERSION (mpich, mvapich, or openmpi) to match the MPI version on your host system:

    setenv SINGULARITYENV_MPIVERSION <value>

If you want verbose script operation, you can control it with environment variable
CTM_DIAG_LVL on the container:

- CTM_DIAG_LVL = 0 : no extra diagnostics [default]
- CTM_DIAG_LVL = 1 : log the sorted environment, size of executable, and process limits
- CTM_DIAG_LVL = 2 : full script echo

To change the value of this environment variable on the container, edit the value in the following line in your singularity exec script:

    setenv SINGULARITYENV_CTM_DIAG_LVL <value>

If you want to mount additional directories on the container, you may use shell-variable extradirs to hold a -B <directories> option with a comma-delimited list of directories; this will cause the container to mount the specified directories under the same names as the host-machine uses. For example, if you want the container to mount host-directories /proj and /work (as /proj and /work on the container), modify the script like this:

    set extradirs = '-B /proj,/work'
The scripts are also set up to support a debug-run for a modeling component, if requested. You will need to do the following. First, build a debug-executable for that modeling component (except for the CCTM, for which a debug-executable already exists on the container), and make sure it is in a directory mounted on the container. Then customize environment variables ${DEBUG} and ${EXEC}, as follows: in the singularity exec script, uncomment the two following statements, and fill in the container-side path to that executable:

    setenv SINGULARITYENV_DEBUG 1
    setenv SINGULARITYENV_EXEC  <path to debug-executable>

Note that environment variable SINGULARITYENV_EXEC can also be used to override the executable for the modeling component that you are running; see APPENDIX 2 below. The value of SINGULARITYENV_EXEC should be the path on the container to the executable (after any host-directory mount-operations). Be aware that executables built on the host-machine will almost certainly fail to run, because of shared-library incompatibilities between your host machine and the container's CentOS-7 virtual OS. Instead, you should use the singularity-shell.csh or singularity-term.csh script to build the executable with the container's compilers and libraries, in a directory you mount from your host machine. You may want to look at that component's Makefile to help you determine which compile flags, etc., to use.
There are optimized and debug CMAQ executables for each of three MPI implementations (MPICH-3, OPENMPI-3, and MVAPICH-2) for each of the "normal", DDM3D, and ISAM versions of the CMAQ CCTM, for the cb6r3_ae7_aq chemical mechanism. The executables can be found as CCTM_*.exe in the following CMAQ-container directories:

    /opt/CMAQ_${VRSN}/CCTM/scripts/
        BLD_CCTM_v${VRSN}_gcc-mpich3/
        BLD_CCTM_v${VRSN}_gcc-openmpi/
        BLD_CCTM_v${VRSN}_gcc-mvapich2/
        BLD_CCTM_v${VRSN}_gccdbg-mpich3/
        BLD_CCTM_v${VRSN}_gccdbg-openmpi/
        BLD_CCTM_v${VRSN}_gccdbg-mvapich2/
        BLD_CCTM_v532_DDM3D_gcc-mpich3/
        BLD_CCTM_v532_DDM3D_gcc-openmpi/
        BLD_CCTM_v532_DDM3D_gcc-mvapich2/
        BLD_CCTM_v532_DDM3D_gccdbg-mpich3/
        BLD_CCTM_v532_DDM3D_gccdbg-openmpi/
        BLD_CCTM_v532_DDM3D_gccdbg-mvapich2/
        BLD_CCTM_v532_ISAM_gcc-mpich3/
        BLD_CCTM_v532_ISAM_gcc-openmpi/
        BLD_CCTM_v532_ISAM_gcc-mvapich2/
        BLD_CCTM_v532_ISAM_gccdbg-mpich3/
        BLD_CCTM_v532_ISAM_gccdbg-openmpi/
        BLD_CCTM_v532_ISAM_gccdbg-mvapich2/

respectively. In all cases, they are compiled for the "64-bit medium memory model" (see https://cjcoats.github.io/ioapi/AVAIL.html#medium) so that even runs on very large grids are supported.

Note that since these are the only CCTM executables (matching exactly the compilers and MPI implementations on the container), other MPI versions and compiler choices (Intel, PGI, ...) are not supported. The choice of which executable to use (and whether to invoke the debugger on that executable) is controlled by container-environment variables MPIVERSION and DEBUG.

The CMAQ run-scripts and the CMAQ directories have been re-structured for use with the container. The reasons for this are two-fold: first, for consistency among the CMAQ CCTM, its pre-processors, post-processors, and utility programs; secondly, so that there is a single "generic" CCTM run-script on the container for each CMAQ CCTM version:
- cmaq_cctm.csh - for the "vanilla" CMAQ CCTM;
- cmaq_ddm.csh - for the DDM3D-enabled CMAQ CCTM;
- cmaq_isam.csh - for the ISAM-enabled CMAQ CCTM.

These scripts are controlled by the following list of environment variables (each of which has a default, indicated in square brackets [LIKE THIS]):
- MPIVERSION - mpich, openmpi, or mvapich, to select the MPI version compatible with that of the host-server [mpich]
- PROC - processing-mode: mpi or serial [mpi]
- DEBUG - if this environment variable is defined, run the model under debug using ddd, in which case the run is confined to the first day of the modeling-period.
  Note that PROC=mpi debugging has not been tested; frequently the interaction between mpirun and debugging is flaky. But one may hope :-)
- NMLDIR (optional) - if this environment variable is defined, use this directory for CCTM namelist files.
- BLDDIR (optional) - if this environment variable is defined, use this directory as the CCTM build-directory, to find the executable.
  NOTE that the BLDDIR must be consistent with the MPIVERSION, since the MVAPICH mpirun cannot necessarily run an OPENMPI executable, etc.
- START_DATE - Run starting date, formatted YYYY-MM-DD [2016-07-01]
- END_DATE - Run ending date, formatted YYYY-MM-DD [2016-07-02]
- START_TIME - Run starting time, formatted HHMMSS [0000000]
- RUN_LENGTH - Run duration, formatted H*MMSS [240000]
- TIME_STEP - Output time step, formatted HHMMSS [10000]
- APPL - Application name (e.g., grid-name) [2016_12SE1]
- EMIS - emissions case [2016ff]
- NPCOL - number of processor-columns in the horizontal domain decomposition [8]
- NPROW - number of processor-rows in the horizontal domain decomposition [4]
- CTM_DIAG_LVL - script-diagnostics/logging level:
  - 0: no extra diagnostics
  - 1: environment, file, and directory based diagnostics
  - 2: full scripting-echo
- RUNID - any no-whitespace combination of parameters to identify the run [${VRSN}_gcc_${APPL}]
- Optionally, GRIDDESC - path for the GRIDDESC file on the container [${HOSTDATA}/${APPL}/GRIDDESC on your host machine; this binds to container-file /opt/CMAQ_${VRSN}/data/${APPL}/GRIDDESC]
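Putting these together, here is a minimal sketch of a host-side driver; the supplied cmaq_cctm*.csh scripts are more complete, and the HOSTDATA/CONTAINER paths and the process-grid values below are placeholders:

    #!/bin/csh -f
    #  Minimal sketch of a host-side CCTM driver script
    set HOSTDATA  = /work/SCRATCH/CMAQv5.3.2_Benchmark_2Day   # placeholder
    set CONTAINER = /work/cmaq.simg                           # placeholder

    setenv SINGULARITYENV_MPIVERSION  openmpi
    setenv SINGULARITYENV_PROC        mpi
    setenv SINGULARITYENV_APPL        2016_12SE1
    setenv SINGULARITYENV_EMIS        2016ff
    setenv SINGULARITYENV_START_DATE  "2016-07-01"
    setenv SINGULARITYENV_END_DATE    "2016-07-02"
    setenv SINGULARITYENV_NPCOL       4
    setenv SINGULARITYENV_NPROW       2

    singularity exec \
        --bind ${HOSTDATA}:/opt/CMAQ_532/data \
        ${CONTAINER} /opt/CMAQ_532/scripts/run_cctm.csh
    exit( ${status} )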
Advanced Topics
- To customize NAMELIST files, you can use script copy_cmaq_nml.csh to copy the "vanilla" namelists to a directory on your host machine given by the script's environment-variable SINGULARITYENV_NMLDIR, customize the file(s) there, and then use SINGULARITYENV_NMLDIR in the cmaq_cctm.csh, cmaq_ddm.csh, and cmaq_isam.csh scripts to tell the CCTM to use those namelists.
- To build and use a custom executable, you can use script copy_cmaq_bld.csh to copy a build-directory on the container to a directory on your host machine given by the script's environment-variable SINGULARITYENV_BLDDIR, do a custom re-build of the CMAQ CCTM executable there (using the singularity-shell.csh script to give you access to the container's compilers and libraries), and then use the SINGULARITYENV_EXEC environment variable to give the path for the executable you want to use (provided it is in a directory, like ${HOME}, mounted onto the container).
  See APPENDIX 2 below.
In the run_cctm.csh, run_ddm.csh, and run_isam.csh scripts on the container, additional CCTM-control environment variables (e.g., GRID_NAME, CONC_SPCS, CTM_MAXSYNC, CTM_OCEAN_CHEM, etc.) are not hard-coded (changeable only by editing the script), but are established, with their default values, after the pattern

    if ( ! $?FOO ) setenv FOO BAR

which conditionally sets the default value of container-environment variable FOO to BAR; i.e., if FOO already exists in the container environment, then its existing value is used; otherwise the default BAR is used. Consequently, one can change all the other CCTM control variables in the cmaq_cctm.csh script. To put a different value QUX for environment variable FOO and override these defaults, you need to do a setenv of the following form in the cmaq_cctm.csh script, prefixing the environment-variable name FOO by SINGULARITYENV_:

    setenv SINGULARITYENV_FOO QUX

The run_cctm.csh script (etc.) makes potentially multiple single-day CCTM runs, one for each day from START_DATE through END_DATE, inclusive.

Note that both the container-based scripts like run_cctm.csh and the host-based scripts like cmaq_cctm.csh have been re-structured to capture exit-status (from M3EXIT() or from other causes of failure, e.g., SEGFAULT) correctly; in case of such a failure, run_cctm.csh immediately terminates the current run with a descriptive message, rather than blindly going ahead with more runs after a failure.
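For example, to override one of the defaulted CCTM controls named above from the host-script (the value 300 for CTM_MAXSYNC is purely illustrative):

    setenv SINGULARITYENV_CTM_MAXSYNC 300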
bcon
Host-script cmaq_bcon.csh sets up the control variables listed below, mounts a data-directory (which should contain subdirectories for both the input and output grids), and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_bcon.csh, which runs program BCON on the container:
- FIN_APPL - BCON case, usually the (fine-grid) output-grid name.
- CRS_APPL - input CCTM case, usually the (coarse-grid) CONC-file input-grid name.
- BCTYPE - regrid for regridding CMAQ CTM concentration files, or profile for using default profile inputs
- GRID_NAME - GRIDDESC-name for the output grid
- START_DATE - Gregorian-style starting date, formatted YYYY-MM-DD
- START_TIME - Starting time, formatted HHMMSS
- RUN_LENGTH - Run duration, formatted HHMMSS
- Optionally, GRIDDESC - path for the GRIDDESC file on the container [/opt/CMAQ_${VRSN}/data/${CRS_APPL}/GRIDDESC]
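As an illustration, the corresponding settings in cmaq_bcon.csh might look like the following; the grid names, date, and times are placeholders for your own case:

    setenv SINGULARITYENV_FIN_APPL    2016_12SE1     # output (fine-grid) case
    setenv SINGULARITYENV_CRS_APPL    2016_36US3     # placeholder coarse-grid CONC case
    setenv SINGULARITYENV_BCTYPE      regrid
    setenv SINGULARITYENV_GRID_NAME   2016_12SE1
    setenv SINGULARITYENV_START_DATE  "2016-07-01"
    setenv SINGULARITYENV_START_TIME  000000
    setenv SINGULARITYENV_RUN_LENGTH  240000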
create_omi
deferred to a later date...

If you want to do it yourself, look at the script /opt/CMAQ_${VRSN}/PREP/create_omi/scripts/cmaq_omi_run.csh on the container, copy it out to a host-machine directory that will be mounted on the container (${HOME}?), edit it there (using setenv SINGULARITYENV_... for the environment variables), and then use

    singularity exec ${CONTAINER} /opt/CMAQ_${VRSN}/bin/create_omi

to execute the program.

icon
Host-script cmaq_icon.csh sets up the control variables listed below, mounts a data-directory (which should contain subdirectories for both the input and output grids), and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_icon.csh, which runs program ICON on the container:
- FIN_APPL - ICON case, usually the (fine-grid) output-grid name.
- CRS_APPL - input CCTM case, usually the (coarse-grid) CONC-file input-grid name.
- BCTYPE - regrid for regridding CMAQ CTM concentration files, or profile for using default profile inputs
- GRID_NAME - GRIDDESC-name for the output grid
- START_DATE - Gregorian-style starting date, formatted YYYY-MM-DD
- START_TIME - Starting time, formatted HHMMSS
- RUN_LENGTH - Run duration, formatted HHMMSS
- Optionally, GRIDDESC - path for the GRIDDESC file on the container [/opt/CMAQ_${VRSN}/data/${CRS_APPL}/GRIDDESC]
mcip
Host-script cmaq_mcip.csh sets up the following control variables for the container (using different conventions than the other CMAQ modeling components), mounts the data-directory (which should contain subdirectories for both the WRF input data and the MCIP output data) on the container, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_mcip.csh, which runs program MCIP on the container:
- APPL - run identifier [160702]
- CoordName - 16-character-max coordinate system name, for GRIDDESC [LamCon_40N_97W]
- GridName - 16-character-max grid name, for GRIDDESC [2016_12SE1]
- EXECUTION_ID - 80-character-max run-identification string ["mcip.exe $APPL $GridName"]
- IfGeo - Use InGeoFile input? [F]
- LPV - 0: Do not compute and output potential vorticity; 1: Compute and output potential vorticity
- LWOUT - 0: Do not output vertical velocity; 1: Output vertical velocity
- LUVBOUT - 0: Do not output u- and v-component winds on B-grid; 1: Output u- and v-component winds on both B-grid and C-grid
- MCIP_START - UTC starting date&time, formatted YYYY-MM-DD-HH:MM:SS.SSSS [2016-07-02-00:00:00.0000]
- MCIP_END - UTC final date&time, formatted YYYY-MM-DD-HH:MM:SS.SSSS [2016-07-02-00:00:00.0000]
- INTVL - Output time step (minutes) [60]
- IOFORM - 1: Models-3 I/O API; 2: WRF-format "raw" netCDF
- BTRIM - number of meteorology "boundary" points to remove on each of the four horizontal sides of the MCIP domain, or -1 to use the explicit window information X0, Y0, NCOLS, NROWS, as below.
- X0 - output-grid starting column, if BTRIM=-1 [13]
- Y0 - output-grid starting row, if BTRIM=-1 [94]
- NCOLS - output-grid column-dimension, if BTRIM=-1 [89]
- NROWS - output-grid row-dimension, if BTRIM=-1 [104]
- LPRT_COL - column for diagnostic prints on the output domain; if LPRT_COL=0, use the domain-center column
- LPRT_ROW - row for diagnostic prints on the output domain; if LPRT_ROW=0, use the domain-center row
- WRF_LC_REF_LAT - Lambert conformal reference latitude [40]; if -999.0, MCIP will use the average of the two true latitudes.
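For illustration, the corresponding settings in cmaq_mcip.csh might look like this; the values mostly echo the defaults listed above, and the end date is a placeholder:

    setenv SINGULARITYENV_APPL        160702
    setenv SINGULARITYENV_CoordName   LamCon_40N_97W
    setenv SINGULARITYENV_GridName    2016_12SE1
    setenv SINGULARITYENV_MCIP_START  2016-07-02-00:00:00.0000
    setenv SINGULARITYENV_MCIP_END    2016-07-03-00:00:00.0000
    setenv SINGULARITYENV_INTVL       60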
appendwrf
Host-script cmaq_appendwrf.csh sets up the data directory ${HOSTDIR}, optionally the container-subdirectories INDIR and OUTDIR, and the basenames INFILE1, INFILE2, INFILE3 for the three input files and the one output file for appendwrf, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_appendwrf.csh, which runs program appendwrf on the container.
bldoverlay
Host-script cmaq_bldoverlay.csh sets up environment variables START_DATE, END_DATE, APPL, HOURS_8HRMAX and optionally MISS_CHECK, SPECIES, UNITS, mounts the indicated data-directory ${HOSTDIR}, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_bldoverlay.csh, which runs program bldoverlay on the container.
block_extract
Host-script cmaq_block_extract.csh sets up the data directory ${HOSTDIR} and the following environment variables for the container, mounts the data-directory on the container, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_block_extract.csh, which runs program block_extract on the container:
- APPL - run identifier name (e.g., grid-name) [2016_12SE1]
- SPECLIST - Array of species to extract; ALL is supported also. ["( O3 NO2 )"]
- TIME_ZONE - Time zone (GMT or EST) [GMT]
- OUTFORMAT - Format of input files (SAS or IOAPI) [IOAPI]
- LOCOL - starting column for the extraction region [44]
- HICOL - ending column for the extraction region [46]
- LOROW - starting row for the extraction region [55]
- HIROW - ending row for the extraction region [57]
- LOLEV - starting level for the extraction region [1]
- HILEV - ending level for the extraction region [1]
- RUNID - Run identifier for the input files [gcc_${VRSN}_${APPL}]
- INFILES - array of basenames for the input files ["( COMBINE_ACONC_${RUNID}_201607.nc )"]

Note that all these files should be in directory ${HOSTDIR}/${APPL}/POST
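For illustration, cmaq_block_extract.csh settings following the list above might look like this; the RUNID and input-file basename are placeholders:

    setenv SINGULARITYENV_APPL       2016_12SE1
    setenv SINGULARITYENV_SPECLIST   "( O3 NO2 )"
    setenv SINGULARITYENV_LOCOL      44
    setenv SINGULARITYENV_HICOL      46
    setenv SINGULARITYENV_LOROW      55
    setenv SINGULARITYENV_HIROW      57
    setenv SINGULARITYENV_RUNID      v532_gcc_2016_12SE1
    setenv SINGULARITYENV_INFILES    "( COMBINE_ACONC_v532_gcc_2016_12SE1_201607.nc )"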
calc_tmetric
Host-script cmaq_calc_tmetric.csh sets up the data directory ${HOSTDIR} and the following environment variables for the container, mounts the data-directory on the container, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_calc_tmetric.csh, which runs program calc_tmetric on the container:
- APPL - run identifier name (e.g., grid-name) [2016_12SE1]
- RUNID - Run identifier for the input files [${VRSN}_gcc_${APPL}]
- OPERATION - operation to perform: SUM or AVG [AVG]
- SPECIES - Array of species to extract; ALL is supported also. ["( O3 CO PM25_TOT )"]
- INFILES - array of basenames for the input files ["( COMBINE_ACONC_${RUNID}_201607.nc )"]

Note that all these files should be in directory ${HOSTDIR}/${APPL}/POST
combine
Host-script cmaq_combine.csh sets up the data directory ${HOSTDIR} and the following environment variables for the container, mounts the data-directory on the container, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_combine.csh, which runs program combine on the container, with one execution for the (3-D) concentration files and one for the (2-D) deposition files for each day from START_DATE through END_DATE, inclusive:
- MECH - Chemical mechanism name [cb6r3_ae6_aq]
- APPL - run identifier name (e.g., grid-name) [2016_12SE1]
- RUNID - Run identifier for the input files [gcc_${VRSN}_${APPL}]
- START_DATE - Gregorian-style starting date, formatted YYYY-MM-DD
- END_DATE - Gregorian-style final date, formatted YYYY-MM-DD
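For illustration, cmaq_combine.csh settings following the list above might look like this; the RUNID is a placeholder:

    setenv SINGULARITYENV_MECH        cb6r3_ae6_aq
    setenv SINGULARITYENV_APPL        2016_12SE1
    setenv SINGULARITYENV_RUNID       v532_gcc_2016_12SE1
    setenv SINGULARITYENV_START_DATE  "2016-07-01"
    setenv SINGULARITYENV_END_DATE    "2016-07-02"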
hr2day
Host-script cmaq_hr2day.csh sets up the data directory ${HOSTDIR} and the following environment variables for the container, mounts the data-directory on the container, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_hr2day.csh, which runs program hr2day on the container:
- APPL - run identifier name (e.g., grid-name) [2016_12SE1]
- RUNID - Run identifier for the input files [gcc_${VRSN}_${APPL}]
- USELOCAL - Use local time? [N]
- USEDST - Use daylight savings time? [N]
- PARTIAL_DAY - Partial day calculation (computes value for last day)? [Y]
- HROFFSET - constant hour offset between desired time zone and GMT [0]
- START_HOUR - starting hour for daily metrics [0]
- END_HOUR - ending hour for daily metrics [23]
- HOURS_8HRMAX - Number of 8hr values to use when computing daily maximum 8hr ozone (17 or 24) [24]
- START_DATE - Gregorian-style starting date, formatted YYYY-MM-DD [2016-07-01]
- END_DATE - Gregorian-style final date, formatted YYYY-MM-DD [2016-07-02]
- SPECIES_1 - define species & operations; format: comma-list "Name,Units,From_species,Operation"; operations: {SUM, AVG, MIN, MAX, @MAXT, MAXDIF, 8HRMAX, SUM06} ["O3,ppbV,O3,8HRMAX"]
- INFILES - array of basenames for the input files ["( COMBINE_ACONC_${RUNID}_201607.nc )"]

Note that all these files should be in directory ${HOSTDIR}/${APPL}/POST
sitecmp
tbd...
Look at the following scripts on the container and the suggestions for scripting create_omi, above (or use the singularity-shell.csh script to run /opt/CMAQ_${VRSN}/bin/sitecmp interactively):

    /opt/CMAQ_${VRSN}/POST/sitecmp/scripts/run_sitecmp_AQS_Daily.csh
    /opt/CMAQ_${VRSN}/POST/sitecmp/scripts/run_sitecmp_AQS_Hourly.csh
    /opt/CMAQ_${VRSN}/POST/sitecmp/scripts/run_sitecmp_CSN.csh
    /opt/CMAQ_${VRSN}/POST/sitecmp/scripts/run_sitecmp_IMPROVE.csh
    /opt/CMAQ_${VRSN}/POST/sitecmp/scripts/run_sitecmp_NADP.csh
    /opt/CMAQ_${VRSN}/POST/sitecmp/scripts/run_sitecmp_SEARCH_Hourly.csh

sitecmp_dailyo3
tbd... look at the following scripts on the container:

    /opt/CMAQ_${VRSN}/POST/sitecmp_dailyo3/scripts/run_sitecmp_dailyo3_AQS.csh
    /opt/CMAQ_${VRSN}/POST/sitecmp_dailyo3/scripts/run_sitecmp_dailyo3_CASTNET.csh

writesite
Host-script cmaq_writesite.csh sets up the data directory ${HOSTDIR} and the following environment variables for the container, mounts the data-directory on the container, and then executes the container-script /opt/CMAQ_${VRSN}/scripts/run_writesite.csh, which runs program writesite on the container:
- APPL - run identifier name (e.g., grid-name) [2016_12SE1]
- RUNID - Run identifier for the input files [gcc_${VRSN}_${APPL}]
- START_DATE - Gregorian-style starting date, formatted YYYY-MM-DD
- END_DATE - Gregorian-style ending date, formatted YYYY-MM-DD
- SITE_FILE - Name of the input file containing the sites to process, or ALL (i.e., process all cells) [ALL]
- USELOCAL - Use local time? [N]
- TIME_SHIFT - Shifts time of data from GMT: constant hour offset between the desired time zone and GMT [0]
- USECOLROW - Site file contains column/row values? (else Lat-Lon values) [N]
- LAYER - grid layer to output [1]
- PRTHEAD - Output header records? [Y]
- PRT_XY - Output map projection coordinates X and Y? [Y]
- SPECIES_1 - Name of species to process [O3]
- IN_FILE - Base-name for the input file [COMBINE_ACONC_${RUNID}_201607.nc]
chemmech

pending... Or use the singularity-shell.csh script to run it interactively.

create_ebi

pending...

inline_phot_preproc

pending...

jproc

pending...
The SMOKE programs have all been built both optimized and debug on the container, using the gfortran/gcc compiler set for the "medium" memory model (so that even very large data sets are supported); the executables can be found in directories /opt/SMOKE/Linux2_x86_64gfort_medium/ and /opt/SMOKE/Linux2_x86_64gfort_mediumdbg/ (for a more complete listing of directories on the container, see the APPENDIX).

The SMOKE scripts have all been re-structured to make correct use of program exit-status (stopping the sequence of execution when there is a failure) and to pass that status back through to the caller. They have also been re-structured so that if debugging is requested by means of environment variable DEBUGMODE, it will "just work" (using the ddd GUI debugger on the container) without requiring extensive and deep hacking of multiple scripts. In that case, they will only run for the first day of the episode, rather than running the debugger repeatedly for each separate day of a multi-day run-sequence.

There are three relevant sets of SMOKE scripts for use with SMOKE on this container:
- On-container ASSIGNS-scripts in container directory /opt/SMOKE/assigns/ have been modified to set environment variable SMK_HOME correctly for this container, and to look at environment variable DEBUGMODE and set environment variable BIN appropriately for this container: either Linux2_x86_64gfort_medium for optimized, or Linux2_x86_64gfort_mediumdbg for debug.
- On-container runscripts smk_run.csh, qa_run.csh, and cntl_run.csh in container directory /opt/SMOKE/scripts/run/ have been re-structured so that if an error occurs (whether reported by M3EXIT(), or because of SEGFAULT, or ...), they terminate the current set of runs immediately and return the exit-status to the invoking script, rather than blindly going ahead and trying to execute everything that follows, irrespective of the failure. They also properly support running SMOKE component programs under the ddd debugger without needing the detailed "script-hacking" needed by their predecessors. These scripts source the relevant ASSIGNS-script (passed in from the on-host runscripts as environment variable ASSIGNS_FILE) as needed for their execution.
- On-host runscripts such as smk_ratepervehicle_nctox.csh in host-machine directory cmaq_cmaq/Scripts-SMOKE/ pass the basename of the appropriate ASSIGNS-script in environment variable ASSIGNS_FILE to the container, and invoke the appropriate sequence of smk_run.csh and qa_run.csh there, making use of the returned exit-status from these scripts to further control the run-sequence: it will stop and log an error message for the first program-run that exits with a failing (non-zero) exit status (or else it will run to completion, if everything succeeds).

For debugging, in the appropriate on-host run-script, replace the statement

    unsetenv SINGULARITYENV_DEBUGMODE

by

    setenv SINGULARITYENV_DEBUGMODE Y

and set the other environment variables to ensure that only the one requested modeling-component is run, and only for the date of interest.
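By analogy with the CMAQ host-scripts, a SMOKE run (debug or not) is launched and logged from the host in the usual way; for example, assuming the package is installed under your home directory:

    cd ${HOME}/cmaq_cmaq/Scripts-SMOKE
    smk_point_nctox.csh >& ../Logs/smk_point_nctox.log &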
AMET Version 1.4 is installed under container-directory /opt/AMET_v14/.

Note that AMET support tools bldoverlay_${VRSN}.exe, combine_${VRSN}.exe, sitecmp_${VRSN}.exe, and sitecmp_dailyo3_${VRSN}.exe are installed with CMAQ in container-directories /opt/CMAQ_${VRSN}/bin.
[TBD: the host-scripts for this are not yet developed; one probably should treat AMET usage as an interactive-session problem as described above.]
See the annotated copy of Scripts-CMAQ/singularity-shell.csh at the bottom of this section, below, which sets up an interactive shell-session on the container for you.

Many of the modeling tasks you wish to do are best done interactively, not from "batch". The singularity shell ... command allows you to run an interactive shell (e.g., tcsh) in the container, frequently acting on data in a directory mounted from the host-machine and generating outputs in a(nother) directory mounted from the host-machine (recall that attempts to write data into the container's file-system itself will fail, with a "permission denied" nasty-gram); you may recall that your ${HOSTDATA}, your ${HOME}, and /tmp/ are examples of such directories mounted on the container from your host-machine.

Note that PATHs and aliases, etc., have already been set up for you on the container; that set-up can be found in the container's /etc/profile.d/local.csh.
Examples of commands you might want to run interactively include the following applications installed in the container. For the most part, they are installed under /opt/bin/; they are all on the default path for singularity shell. A few of these tools also have singularity exec scripts to run them directly on your host machine; these last scripts need to be customized in the same way that the CMAQ host-machine scripts are.
- M3Tools programs version
3.2 2020-04-18 16:10:51Z
- such as m3cple, m3diff, m3probe, m3stat, and a variety of others.
These are probably best run interactively after you invoke singularity-shell.csh (or script them in a directory mounted from your host machine, using the principles described above, and invoke the script on the container after doing singularity-shell.csh or launching singularity-term.csh to a debug-queue).
- verdi.sh version
2.0_beta
- a gridded Java based netCDF data visualization tool from EPA: see https://www.cmascenter.org/verdi/
Host script: cmaq_cmaq/Scripts-CMAQ/cmaq_verdi.csh will directly invoke verdi on the container. Edit this script as indicated above, to suit your host machine and data directory situation.
verdi may also be run interactively on the container, after you invoke singularity-shell.csh or launch singularity-term.csh to a debug-queue.
Note that any output from verdi (e.g., any image-files you create, or output from save project) must be in a directory mounted from your host-machine; you may recall that your ${HOME} is one such directory...
- AMET version
1.4
- software for the analysis and evaluation of predictions from meteorological and air quality models. See https://www.cmascenter.org/amet/
AMET matches the model output for particular locations to the corresponding observed values from one or more networks of monitors.
- pave version
3.0 beta
- a visualization tool for I/O API / UAM / CAMX data, from MCNC and Carlie J. Coats, Jr., Ph.D.; see https://cjcoats.github.io/pave/PaveManual.html: this version has been re-structured to offer vastly improved performance for large data sets. (It is so much faster that for animations you will probably need to use environment variable
TENTHS_SECS_BETWEEN_FRAMES
to slow down the animations enough that you can interpret them.)
Built for 64-bit-medium memory model, so that usable data set sizes are limited only by available memory (unlike the other vis tools, which tend to have 2GB limits)
Note also that the file-selection GUI fails, due to software versioning problems ("library rot"); however,

    pave [<config>] -f <path to file> ...

does work, where ${config} = 2, 3, 3a, 3b, 3d, 3g, 5, 6, 51, frac, lu, o3, soil, strm, tk identifies one of the on-container PAVE configuration-files pave.${config}.config found in container directory /opt/pave-3.0/Config/
A number of these use "zebra" color palettes: pave.3.config, for example, uses a 5-hue/50-color palette, where the first ten colors are blues with varying saturation ranging from near-white to fully-saturated.
${config} = frac, lu, o3, soil, strm, tk are for the relevant specific variable, e.g., tk for TK, Temperature (Kelvin).
pave is probably best run interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue
- ncview version 2.1.2
- a netcdf-file visualization tool from UCSD; see http://meteora.ucsd.edu/~pierce/ncview_home_page.html
Host script: cmaq_cmaq/Scripts-CMAQ/cmaq_ncview.csh
Edit this script as indicated above, to suit your host machine and data directory situation, or run ncview interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue
- panoply
- a netCDF, HDF and GRIB data viewer tool from NASA: see https://www.giss.nasa.gov/tools/panoply/
Host script: cmaq_cmaq/Scripts-CMAQ/cmaq_panoply.csh
Edit this script as indicated above, to suit your host machine and data directory situation, or run panoply interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue.
- GrADS
- the Grid Analysis and Display System from GMU: see http://cola.gmu.edu/grads/
GrADS is probably best run interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue
- NCAR Graphics
- see http://ngwww.ucar.edu/
NCAR Graphics is probably best run interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue
- gnuplot
- graphics/plotting tool: see http://www.gnuplot.info/
gnuplot is probably best run interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue
- ddd and gdb
- debuggers: ddd is a GUI "wrapper" for gdb
These are invoked automatically when requested by the modeling-component scripts; or you can run them interactively after you invoke singularity-shell.csh or launching singularity-term.csh to a debug-queue
- nedit
- GUI text editor for interactive use, after you invoke singularity-shell.csh
There is an alias xx that runs it in the background: e.g., to bring up edit-windows on files foo, bar, and qux, issue the command

    xx foo bar qux
- okular
- PDF/MarkDown viewer, after you invoke singularity-shell.csh, e.g., for reading CMAQ documents in /opt/CMAQ_${VRSN}/DOCS.
- xxdiff
- GUI file-differencing tool for interactive use, after you invoke singularity-shell.csh
There is an alias xd that runs it in the background with "ignore-whitespace" command-line options; to see the differences in files foo and bar, issue the command

    xd foo bar
- findent
- see https://github.com/wvermin/findent
Fortran source indentation and beautification program for both fixed ("f77-style") and free ("f90-style") format; also converts Fortran fixed format to Fortran free format (and vice-versa). It will accept CMAQ and SMOKE's non-Standard "fixed-132" source format.
There is an alias tof90 that converts fixed-format Fortran source to free format, using the I/O API's indentation conventions, as in the following:

    tof90 < prog.f > prog.f90
cmaq_cmaq/Scripts-CMAQ/singularity-shell.csh is an example of a host-system script that
- sets up some environment variables;
- mounts host-machine directories on the container as described above; and
- then runs tcsh on the container, giving you an interactive prompt,
for you to use tools (such as those listed above) on the container. Its essential content is the following, which establishes various container-environment variables (APPL, EMIS, etc.), mounts the host directory ${HOSTDATA} on container-directory /opt/CMAQ_${VRSN}/data, and then invokes an interactive tcsh session on the container ${CONTAINER}, starting from directory /opt/CMAQ_${VRSN}/data on the container:

    #!/bin/csh -f
    #
    #  Script to invoke "singularity shell" for the CMAQ container
    #
    #  Data directory on host:  mounts onto container-directory "/opt/CMAQ_${VRSN}/data"

    set HOSTDATA  = <path for data directory on your host machine>
    set CONTAINER = <path for CMAQ container on your host machine>

    #  Examples of setting up environment variables such as APPL and EMIS
    #  for the container:

    setenv SINGULARITYENV_APPL  2016_12SE1
    setenv SINGULARITYENV_EMIS  2016ff

    #  invoke "singularity shell" using bindings of host-directories to
    #  container-directories, starting tcsh at the mount-point of ${HOSTDATA}

    cd ${HOSTDATA}
    singularity shell -s /usr/bin/tcsh \
        --bind ${HOSTDATA}:/opt/CMAQ_${VRSN}/data \
        ${CONTAINER}

You will then probably want to do something like the following (at the tcsh prompt within the container):

    verdi.sh

or

    pave -f /opt/CMAQ_${VRSN}/data/${APPL}/met/mcip/METCRO2D_160701.nc \
         -f /opt/CMAQ_${VRSN}/data/${APPL}/cctm/CCTM_ACONC_v531_gcc_2016_12SE1_20160701.nc

or something like the following m3stat run (noting that the report-file created by m3stat below must be in a host-machine-mounted directory such as $HOME; if it's not a directory mounted from the host-system, the system will give you a nasty-gram indicating "permission denied"):

    cd /opt/CMAQ_${VRSN}/data/${APPL}/met/mcip
    ls
    setenv AFILE  $cwd/METCRO2D_160701.nc
    setenv REPORT $HOME/METCRO2D_160701.stats
    m3stat AFILE REPORT DEFAULT
    /data                                  # extra mount-point, if needed
    /opt/CMAQ_${VRSN}/
    /opt/CMAQ_${VRSN}/bin/                 # optimized Linux2_x86_64gfort_medium executables
        appendwrf_v531.exe  BCON_v531.exe  bldmake_gcc.exe  bldoverlay_v531.exe
        block_extract_v531.exe  calc_tmetric_v531.exe  CCTM_v531.exe  combine_v531.exe
        hr2day_v531.exe  ICON_v531.exe  mcip.exe  sitecmp_dailyo3_v531.exe
        sitecmp_v531.exe  writesite_v531.exe
    /opt/CMAQ_${VRSN}/data/
    /opt/CMAQ_${VRSN}/scripts/             # run_<something>.csh model-component scripts
        run_appendwrf.csh  run_bcon.csh  run_bldoverlay.csh  run_block_extract.csh
        run_calc_tmetric.csh  run_cctm.csh  run_combine.csh  run_hr2day.csh
        run_icon.csh  run_mcip.csh  run_writesite.csh
    /opt/CMAQ_${VRSN}/tables/              # time independent ASCII files and tables
    /opt/CMAQ_${VRSN}/CCTM/
    /opt/CMAQ_${VRSN}/CCTM/scripts/        # various bldit, run-cctm, etc. scripts, and CCTM build-directories
        BLD_CCTM_${VRSN}_gcc-mpich3/          BLD_CCTM_${VRSN}_gcc-mvapich2/          BLD_CCTM_${VRSN}_gcc-openmpi/
        BLD_CCTM_${VRSN}_gccdbg-mpich3/       BLD_CCTM_${VRSN}_gccdbg-mvapich2/       BLD_CCTM_${VRSN}_gccdbg-openmpi/
        BLD_CCTM_${VRSN}_DDM3D_gcc-mpich3/    BLD_CCTM_${VRSN}_DDM3D_gcc-mvapich2/    BLD_CCTM_${VRSN}_DDM3D_gcc-openmpi/
        BLD_CCTM_${VRSN}_DDM3D_gccdbg-mpich3/ BLD_CCTM_${VRSN}_DDM3D_gccdbg-mvapich2/ BLD_CCTM_${VRSN}_DDM3D_gccdbg-openmpi/
        BLD_CCTM_${VRSN}_ISAM_gcc-mpich3/     BLD_CCTM_${VRSN}_ISAM_gcc-mvapich2/     BLD_CCTM_${VRSN}_ISAM_gcc-openmpi/
        BLD_CCTM_${VRSN}_ISAM_gccdbg-mpich3/  BLD_CCTM_${VRSN}_ISAM_gccdbg-mvapich2/  BLD_CCTM_${VRSN}_ISAM_gccdbg-openmpi/
    /opt/CMAQ_${VRSN}/CCTM/src/
    /opt/CMAQ_${VRSN}/CCTM/src/MECHS/      # namelists and chemical-mechanism files
    /opt/CMAQ_${VRSN}/DOCS/
    /opt/CMAQ_${VRSN}/POST/
    /opt/CMAQ_${VRSN}/PREP/
    /opt/CMAQ_${VRSN}/UTIL/
    /opt/SMOKE/
    /opt/SMOKE/assigns/
        ASSIGNS.EDGAR.cmaq.cb05_soa.HEMI_108k  ASSIGNS.nctox.cmaq.cb05_soa.us12-nc
    /opt/SMOKE/data/
    /opt/SMOKE/scripts/
    /opt/SMOKE/scripts/run/
        cntl_run.csh  qa_run.csh  smk_run.csh
    /opt/SMOKE/src/
    /opt/SMOKE/Linux2_x86_64gfort_medium/
    /opt/SMOKE/Linux2_x86_64gfort_mediumdbg/
    /opt/AMET_v14/
    /opt/AMET_v14/R_analysis_code
    /opt/AMET_v14/R_analysis_code/batch_scripts
    /opt/AMET_v14/R_db_code
    /opt/AMET_v14/bin
    /opt/AMET_v14/configure
    /opt/AMET_v14/docs
    /opt/AMET_v14/model_data
    /opt/AMET_v14/model_data/AQ
    /opt/AMET_v14/model_data/MET
    /opt/AMET_v14/model_data/MET/metExample_wrf
    /opt/AMET_v14/obs
    /opt/AMET_v14/obs/AQ
    /opt/AMET_v14/obs/MET
    /opt/AMET_v14/output
    /opt/AMET_v14/scripts_analysis
    /opt/AMET_v14/scripts_db
    /opt/ioapi-3.2/
    /opt/ioapi-3.2/ioapi/
    /opt/ioapi-3.2/m3tools/
    /opt/ioapi-3.2/Linux2_x86_64gfort_medium/
    /opt/ioapi-3.2/Linux2_x86_64gfort_mediumdbg/
    /opt/bin/
        findent  panoply  pave  verdi.sh  wfindent
As noted above, environment variable SINGULARITYENV_EXEC can also be used to override the executable for the modeling component that you are running.
NOTE that you will need to build such executables using the compilers and libraries on the container; otherwise, you will almost certainly encounter shared-library problems. (Gee, thanks, Ulrich Drepper!)

First, you will probably need to copy a source-directory from the container to an area on the host machine, using the singularity-shell.csh command. It is recommended that this area be under either /tmp or your home directory, so that host-machine and singularity-container paths to the executable will be the same. Here is a sample of how you might do this, assuming you want to build a CMAQ-5.3.2-mpich3 CCTM executable under directory $HOME/mystuff/.
You can use either the copy_cmaq_bld.csh script to copy a selected CMAQ CCTM BLD-directory to your host machine, or do the following steps:
Then build the new executable and use it (a command-line sketch of the whole sequence follows the list below):
- On the host, do mkdir -p $HOME/mystuff/ (if you don't have that directory already)
- On the host, do singularity-shell.csh or singularity-term.csh to go to the container.
- On the container, do cd /opt/CMAQ_532/CCTM/scripts/ to get to the container-directory holding the container's appropriate build-directory
- On the container, copy that build-directory to your chosen build-directory: cp -r BLD_CCTM_532_gcc-mpich3/ $HOME/mystuff/
- On the host, modify the code in your chosen build-directory (e.g., $HOME/mystuff/BLD_CCTM_532_gcc-mpich3/) as you desire.
- On the container, use singularity-shell.csh to do cd $HOME/mystuff/BLD_CCTM_532_gcc-mpich3/ and then make on the container. This will build the CCTM executable.
- In your host's cmaq_*.csh script, add the command for that executable:
setenv SINGULARITYENV_EXEC $HOME/mystuff/BLD_CCTM_532_gcc-mpich3/CCTM_v532.exe
- Run your new script.
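Here is the same sequence sketched as shell commands; the build-directory name follows the steps above, and the executable name CCTM_v532.exe is an assumption, so check what make actually produces in your build-directory:

    # on the host:
    mkdir -p ${HOME}/mystuff
    singularity-shell.csh                      # from cmaq_cmaq/Scripts-CMAQ

    # now inside the container session:
    cp -r /opt/CMAQ_532/CCTM/scripts/BLD_CCTM_532_gcc-mpich3 ${HOME}/mystuff/
    cd ${HOME}/mystuff/BLD_CCTM_532_gcc-mpich3
    #   ... edit the source files as desired, then:
    make
    exit

    # back on the host, point the run-script at the new executable
    # (executable name assumed; check your build-directory):
    setenv SINGULARITYENV_EXEC ${HOME}/mystuff/BLD_CCTM_532_gcc-mpich3/CCTM_v532.exe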
To re-build your own version of one of the CMAQ pre- or post-processors, go through the analogous process of copying its build-directory to your host machine, modifying the code in it, then using the container's compilers and libraries to build it, and modifying the host-script's
setenv SINGULARITYENV_EXEC ...
for that processor to use your new executable, much as above.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Send comments to Carlie J. Coats, Jr., cjcoats@email.unc.edu