About bump to generate stddev parameters

Thanks a lot in advance for your help on my problems in the following.

Some context:
I am working on the JEDI interface for FV3-LAM. A current focus is to use BUMP to generate regional background error covariance files for regional DA use. It runs and produces output (though we still need a close look at how it behaves in the regional 3DVar). However, I noticed that currently only ch and cv (the correlation parameters) are specified. The “parameter: stddev” block in the yaml was commented out. Does this mean that, in those tests, only correlations are specified, while the standard deviation of the background error will be specified in the analysis step (like some constant parameters)?
I uncommented the “parameter: stddev” block to run the same test, and the executable (fv3jedi_parameters.x) aborted with the error message: stddev is not allocated in bump%copy_to_field.

End of context. :)
So, could you please help clarify the status of using BUMP to generate the background error covariance? Is it only supposed to generate correlation parameters for the time being?
Your clarification is appreciated.

Dear Ting, thank you for posting your question on the JCSDA forum. Since you are the first one, you will get a long and detailed answer!

While the correlation length-scales rh and rv can be specified by the user in the yaml file when generating the BUMP/NICAS correlation operator, the background error standard deviation cannot. I see three possible options for your case:

  1. If you have an ensemble available, you can use it to compute and filter the variance field in BUMP. The required keys in the BUMP section of the yaml file are:

    new_var: 1
    ne: ${your_ensemble_size}

    and optionally for the filtering:

    var_filter: 1
    var_niter: ${number_of_filtering_iterations}
    var_rhflt: ${initial_filtering_length-scale}

    See the file saber/test/testinput/qg_parameters_bump_cov.yaml for an example. Then, you have to read the standard deviation in the 3DVar, with this kind of variable change:

     variable changes:
     - variable change: StdDev
       input variables: [${variables}]
       output variables: [${variables}]
       datadir: ${your_BUMP_data_directory}
       load_var: 1
       prefix: ${your_BUMP_prefix}

    See the file saber/test/testinput/qg_3dvar_bump.yaml for an example.
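For intuition, the variance field that `new_var: 1` asks BUMP to compute is simply the sample variance of the ensemble at each level and grid point (the optional filtering then smooths it horizontally). A minimal Python sketch, where the array shapes are illustrative rather than BUMP's internal layout:

```python
import numpy as np

def ensemble_variance(members):
    """Sample variance over the ensemble dimension.

    members: array of shape (ne, nl0, nc0a) -- ne ensemble members,
    nl0 levels, nc0a grid points (dimension names borrowed from the
    BUMP files). Returns an (nl0, nc0a) variance field; ddof=1 gives
    the unbiased estimate around the ensemble mean.
    """
    return np.var(members, axis=0, ddof=1)
```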

  2. You can hack the BUMP NetCDF files between the first step where BUMP computes the covariance operator parameters and the second step where this covariance operator is applied (the 3DVar in your case). It is not very clean, but it might be the easiest way.

    In the first step, BUMP produces parameter files for the NICAS smoother, named ${your_BUMP_data_directory}/${your_BUMP_prefix}_nicas_${number_of_MPI_tasks}_${MPI_task_index}.nc. In these files, look for the fields called coef_ens (one per pair of variables) and modify them with any NetCDF-enabled software (e.g. Python). The first dimension nl0 is the number of levels, and the second nc0a is the number of grid points handled by this MPI task. It would be easy, for instance, to set a uniform value at each level. Be careful: coef_ens should contain the variance, not the standard deviation!

    Then, add the key nonunit_diag: 1 in the BUMP section of the 3DVar yaml file. I didn’t test this hack, but it should work.

  3. You can also define your own model-specific variable change to apply a standard deviation field. However, this might require significantly more coding… let me know if you want to know more about this option.

I am currently working on a new feature for BUMP, where rh and rv profiles can be specified in the yaml file with one value for each level. Your question makes me think that I should add another key to specify a standard deviation profile as well. Thanks for your feedback!


Thank you so much for your detailed answer. It is also very helpful for understanding how to use BUMP, and it will be a good reference for me in the future.
For the time being, your first two proposed solutions are enough for our current test/evaluation purposes.
Your help is really appreciated.