PILPS 2e Frequently Asked Questions

The list is starting to get a little long for a casual glance over, so I have tried to group the questions into categories. Additions since the previous update are highlighted in green.

Date of update: 10/11/2000.

Categories

  1. General
  2. Vegetation specifications
  3. Soil specifications
  4. Snow, lakes and frozen soils
  5. Netcdf and processing programs
  6. Calibration experiment and discharge files
  7. Output requirements


General:

  1. Cloud data only from 1989
  2. Reference heights of wind and humidity
  3. UTC vs. local time
  4. Backward-averaged variables
  5. Initial state
  6. The vertical level of observations
  7. Time of the forcing files

Vegetation specifications:

  1. Stem Area Index (SAI)
  2. Average LAI
  3. Greenness fraction
  4. Vegetation parameters for lumped vegetation schemes
  5. Vegetation rooting depths

Soil specifications:

  1. Definition of Qg
  2. Definition of BareSoilT
  3. DelSoilHeat again
  4. Choice of soil texture parameters
  5. Definition of DelSoilHeat
  6. Soil solar index
  7. Wilting point
  8. Soil hydraulic properties

Snow, lakes and frozen soils:

  1. Tdepth and Fdepth
  2. IceT and IceFrac
  3. Representation of lakes
  4. Transmittance of the snow

Netcdf and processing programs:

  1. Preferred netcdf output format
  2. Writing model output in netcdf
  3. Viewing netcdf in Grads
  4. Writing netcdf output
  5. Netcdf for PC and F77

Calibration experiment and discharge files:

  1. Routing for calibration catchments
  2. Error in sub-basin areas
  3. Grid cell fraction larger than 1.0
  4. Grid cell area for streamflow computation
  5. Runoff scaling factors for calibration basins
  6. Coordinates of Ovre Abiskojokk gauge
  7. Time period of calibration runs
  8. Discharge timestp axis

Output requirements:

  1. Dimensions of subsurface variables
  2. Time period of the output files
  3. Range of values for the output variables
  4. Definition of evaporation components
  5. Time axis for output files
  6. Average vs. instantaneous prognostic and diagnostic variables


Questions and Answers

General:

  1. Cloud data only from 1989

    I have found cloud data only for the time from 1989 to 1998 on the ftp-server. Where can I look for the data from 1979 to 1988?

    Response:

    The cloud cover data for 1979-1988 was never produced. Since this is an optional spin-up period, I thought it would be sufficient to re-use the 1989-1998 data for this purpose. If this is impossible, due to the time axis etc., please let me know.

  2. Reference heights of wind and humidity

    Regarding table 4 and section 3.2.4 (other parameters), the reference heights of temperature and humidity and of wind are set below several of the landcover heights. For the model, the measurement heights need to be above the landcover, or else the LAI will be meaningless. If the measurement height is within the landcover, it is necessary to know the LAI both above and below that height. How should we deal with this problem?

    Response:

    According to our best interpretation of the NCEP/NCAR reanalysis model, the model does not explicitly represent vegetation, rather it just sees a surface roughness. Therefore, it is erroneous to think of the 10 m reanalysis winds as 10 meters above the surface. Rather, we feel the best interpretation is 10 meters above the displacement height. This is the point where the wind calculated assuming a logarithmic profile with displacement height will equal the '10 meter' winds of a logarithmic profile calculated without displacement height, which we presume is the case with the NCEP/NCAR model. If your model is not capable of calculating a distinct measurement height for each vegetation type, the height (10 m + displacement) for the tallest vegetation class should be used. We also believe that it can be assumed that temperature and humidity are measured at this same altitude with no adjustment of the quantity.
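    A minimal sketch of this adjustment follows (the 2/3-of-canopy-height rule of thumb for displacement height is our assumption here, not part of the experiment design; use your scheme's own displacement height if it computes one):

```python
def reference_height(veg_height_m):
    """Effective measurement height: 10 m above the displacement height.
    Assumes d ~ (2/3) * vegetation height, a common rule of thumb."""
    d = (2.0 / 3.0) * veg_height_m
    return 10.0 + d

# For a lumped scheme, use the tallest vegetation class, e.g. a 15 m forest
z_ref = reference_height(15.0)   # about 20 m
```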

  3. UTC vs. local time

    Page 14, note 2: The use of AM there suggests that time is given as LT, rather than UTC. I suggest that you change the text there to: "The value at 12 UTC represents the 11-12 UTC interval, 13 UTC, the 12-13 UTC interval, etc."

    Response:

    Yes, this change will be made in the next revision. This comment made me realize that there was an error in the input meteorological forcings: the incoming shortwave radiation had been calculated with respect to local time, although all other met variables were with respect to UTC. The incoming shortwave radiation has been recalculated, and the forcing files re-posted to the PILPS 2e ftp site as of August 19, 2000.

  4. Backward-averaged variables

    On page 16 of the instructions, 2nd paragraph in Section 6.1, shouldn't 'forward-averages' read 'backward-averages'?

    Response:

    Yes, the change will be made in the next version of the instructions.

  5. Initial state

    In PILPS2d and GSWP experiments, initial surface temperature, initial available moisture in root zone, initial snow depth, and soil color index were given to modellers, but in PILPS2e these data aren't given. Does this mean that these data can be determined by modellers?

    Response:

    Yes, these data can be determined by the modellers. Since we are providing an extra ten years of data for model spin-up purposes, initial conditions should not be an issue. Each group should run their model until they feel they have reached equilibrium, prior to starting the final 1989-1998 runs.

  6. The vertical level of observations

    The vertical levels for the observations of the atmospheric temperature/humidity and wind are 2 m and 10 m, respectively. Most models will not be able to work with different levels for T/q and u, because in many places in the stability-dependent part of the exchange coefficient it is assumed that the level is the same. Normally, in the case of models that have been coupled to an atmospheric model, it is the height of the lowest atmospheric level, which can be prescribed to any number, but ONE SINGLE number, not two separate numbers. So, for practical reasons, we need to bring the two levels together. There are several possibilities, in approximate order of complexity:

    a) Duck the problem altogether, and nominally assign 6 m (=(2+10)/2) for T and u.

    b1) Bring the u to 2 m, by interpolation, using a logarithmic (neutral) profile;

    b2) As in (b1), but using a stability dependent profile, using the stability information of the last timestep, at run-time.

    c1) Bring the T up to 10 m, by extrapolation, using a logarithmic profile

    c2) As in (c1), but with stability dependent profile.

    Response:

    We understand that for many models it will be impossible to specify two different heights for the temperature/humidity and wind speed observations. If this is the case, wind speed should be interpolated to 2 m height, using a logarithmic profile, as follows:

    U(2m) = U(10m)*ln(2/Zo)/ln(10/Zo)

    U(10m) is the provided 10 m wind speed and Zo is the roughness height proposed in Table 6 of the experiment design.
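    As a minimal sketch (the wind speed and roughness height below are illustrative, not Table 6 values):

```python
import math

def wind_2m(u10, z0):
    """Interpolate the 10 m wind to 2 m assuming a neutral logarithmic
    profile: U(2m) = U(10m) * ln(2/Zo) / ln(10/Zo)."""
    return u10 * math.log(2.0 / z0) / math.log(10.0 / z0)

# 5 m/s at 10 m over a surface with Zo = 0.1 m gives about 3.25 m/s at 2 m
u2 = wind_2m(5.0, 0.1)
```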

  7. Time of the forcing files

    The forcing comes, every year, with the first timestamp corresponding to the first hour, i.e., the first timestamp is 3600 s. Does that mean:

    a) The data is representative (an average?) of the time 0-3600 s, i.e., the previous interval. In this case the labeling is inconsistent with the proposed labeling of the output, where a forward average is recommended (footnote to your Table 9).

    b) The data is an average of the forward interval, i.e., 3600-7200 s, in which case it would be consistent with the label on the output. However, that means that we are missing timestep 0 in the forcing.

    c) The data is a centered average, or some other combination.

    Response:

    (a). Good catch; this was simply an oversight in the final writing of the netcdf files, which makes the time axis inconsistent with what we had proposed. The first data values are representative of the time period from 0 - 3600 seconds, so the data is in fact backward averaged, rather than forward averaged as originally proposed. Rather than reproducing the forcing files for this minor change, we will change all input and output variables to backward-averaged quantities.

Vegetation specifications:

  1. Stem Area Index (SAI)

    Does the LAI in Table 5 include SAI (stem area index)?

    Response:

    No, I don't believe it does. These values represent the canopy LAI derived from satellite NDVI, defined as "one-sided green leaf area per unit ground area in broadleaf canopies, and variously (projected or total) in needle canopies."

  2. Average LAI

    In a grid cell, should the LAI value used be considered the LAI of the actual vegetated area, or should it be considered the average over the whole cell?

    Response:

    For a lumped vegetation scheme, the LAI value used should be the weighted average over the whole grid cell. (For example, LAI = (LAI0*F0 + LAI1*F1 + ... + LAI10*F10 + LAI12*F12)/(F0 + F1 + ... + F10 + F12).) However, if bare soil and/or open water are modeled independently by your scheme, you should subtract these fractions from the denominator. (For example, LAI = (LAI1*F1 + ... + LAI10*F10)/(F1 + ... + F10).)
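    A sketch of this weighting (the class LAI values and fractions below are hypothetical):

```python
def lumped_lai(lai, frac, exclude=()):
    """Grid-cell LAI as a weighted average over vegetation classes.
    Classes listed in `exclude` (e.g. bare soil, open water) are dropped
    from both numerator and denominator when modeled independently."""
    keep = [i for i in range(len(lai)) if i not in exclude]
    num = sum(lai[i] * frac[i] for i in keep)
    den = sum(frac[i] for i in keep)
    return num / den

# Hypothetical three-class cell: forest, grass, open water
lai = [4.0, 1.5, 0.0]
frac = [0.5, 0.3, 0.2]
whole_cell = lumped_lai(lai, frac)              # average over the whole cell
veg_only = lumped_lai(lai, frac, exclude={2})   # water handled separately
```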

  3. Greenness fraction

    I have grabbed the PILPS 2e monthly greenness fraction file (tor_gvi_mon.nc) and after looking at the data, I have a concern. The October, November, and December values seem high, and all three months are identical to each other.

    Response:

    The greenness fraction is based on AVHRR data, and at the latitude of the Torne, the visual bands are not available during the winter months (November, December and January). My simple fix was to use the October values for November and December and the February values for January. I realize now that this is questionable since there is a very large jump between the "December" and "January" values. The files have been re-created by interpolating between the October and February values and re-posted to the ftp site on August 8, 2000.

  4. Vegetation parameters for lumped vegetation schemes

    In a grid cell there are several vegetation types; how should we calculate albedo, LAI, and other parameters that depend on vegetation type? E.g., if a1 = albedo and s1 = fraction for vegetation 1, a2 and s2 for vegetation 2, etc., can we use an area average for this grid cell? That is, A = (a1*s1 + a2*s2 + ...)/(s1 + s2 + ...).

    Response:

    Certainly. The more detailed vegetation information is provided for the models which can (and prefer to) make use of such information. For models which require one value of albedo, LAI, etc. per grid cell, a weighted average of all the input values for each grid cell may be used.

  5. Vegetation rooting depths

    Our model needs the root-zone depth to calculate the root zone's available moisture capacity, but the depth in Table 6 (new guide) for all vegetation types is 0.7 m (100% of roots). This depth is wrong, at least for forests.

    Response:

    Actually, that is a mistake; the last column in Table 6 should read "% of roots (0.3 - 1.0 m)". In any case, if this value still seems wrong to you, please derive your own value. Table 6 represents optional values which may be changed.

Soil specifications:

  1. Definition of Qg

    The definition of Qg in Table 9 includes the total change in heat storage in snow and soil layers. However, the variable DelSoilHeat has been added since the original plan. Should the change in soil heat really be included in both places?

    Response:

    No, it should not. Qg should now include only the ground heat flux. DelSoilHeat should contain the change in heat storage for each time step, for the layer over which the energy balance is calculated. That is, the point of including this term is to be able to close the energy balance.

  2. Definition of BareSoilT

    What do you mean by BareSoilT? If it is the soil skin temperature of the bare soil, some grids do not have bare soil.

    Response:

    It is the soil skin temperature of the bare soil tile. Some grid cells do not have bare soil; these should be filled in with the no data value. The no data value should be specified in the header information of the netcdf output file. For example, for the forcing data, the no data value is 1 x 10^20.

  3. DelSoilHeat again

    DelSoilHeat - is this calculated over the surface layer, all layers or integrated?

    Response:

    DelSoilHeat should be calculated for the surface soil layer over which the energy balance is resolved. I think that for some models this may be very small or zero. In our case it is usually a 5 - 15 cm layer.

  4. Choice of soil texture parameters

    In Table 3, there are three rows of data for each soil texture class. Is there some requirement/recommendation for which row to use?

    Response:

    No; the three commonly referenced sources of soil texture parameters (Clapp and Hornberger 1978, Cosby et al. 1984 and Rawls et al. 1982) are provided, and which set to choose is at the discretion of the user. The same source should be used for all parameters. It seems that modellers are often used to one set of parameters over another, and we wanted to allow each user that flexibility. In addition, since basin-wide soil parameters may be adjusted following calibration of the sub-basin, we are not overly concerned with everyone starting from the same value.

  5. Definition of DelSoilHeat

    DelSoilHeat: The definition in ALMA is "Change in heat storage over the soil layer for which the energy balance is calculated, accumulated over the sampling time interval". This means the change in the residual calculated by the energy balance, e.g., R1 at timestep 1, R2 at timestep 2, and so on; should my output be R1, R2-R1, R3-R2, ..., or something else?

    Response:

    Yes. Over the course of the one-hour model timestep, the prognostic variable x will progress from x1 (the initial condition) to x2 (the value at the end of the time step); the variable delx (whether soilheat, coldcontent, soilmoist, SWE, surfstor or Intercept) = x(i) - x(i-1) for all time steps. So your output will be x1-x0, x2-x1, x3-x2, etc. I realize that this variable will cause some difficulty for models which are running on a sub-hourly timestep, but we would like to receive the difference over the entire hour, not the sub-hourly interval.
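    For a model on a sub-hourly timestep, the bookkeeping can be sketched as follows (the SWE trace is hypothetical):

```python
def hourly_deltas(states, steps_per_hour):
    """From prognostic states saved at every model step, with states[0]
    the initial condition x0, return x(i) - x(i-1) over each full hour,
    regardless of the sub-hourly timestep length."""
    at_hours = states[::steps_per_hour]          # states at hour boundaries
    return [b - a for a, b in zip(at_hours, at_hours[1:])]

# Hypothetical SWE (kg/m^2) on a 15-minute step, spanning two hours
swe = [10.0, 10.5, 11.0, 11.5, 12.0, 11.0, 10.0, 9.0, 8.0]
delswe = hourly_deltas(swe, 4)   # [2.0, -4.0]
```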

  6. Soil solar index

    Our model needs a soil solar index, but this parameter is not provided in PILPS 2e (although it is not very important). I think other models (e.g., BATS) need this parameter, too. If you have no suggestions, I will use an intermediate index.

    Response:

    I think it is best if you come up with your own index.

  7. Wilting point

    Wilting point is listed in the table of soil hydraulic properties, but it would seem to be a vegetation property. What is the meaning of listing wilting point as a soil property?

    Response:

    In this context the wilting point is in fact a soil property. The conventional definition of wilting point is the moisture retained at a suction pressure of 1500 kPa, which is highly dependent on soil characteristics. I suppose this has the implicit assumption that average "vegetation" is able to exert this maximum suction and to be truly precise every vegetation type probably has a unique wilting point in each soil. But as the Handbook of Hydrology puts it "The behavior seems to be similar for both crops and soils." At this point, hydraulic conductivity approaches zero and the evaporation rate is no longer controlled by meteorological conditions, but by soil characteristics.

  8. Soil hydraulic properties

    I have a question concerning the 'derived soil hydraulic properties' (Table 3 of the PILPS 2e manual). Is it really correct that for sandy clay and clay the values of the field capacity are larger than the porosity?

    Response:

    The problem arises because for the Rawls et al. (1982) soil hydraulic properties we originally reported effective porosity in Table 3. This is inconsistent because total porosity was reported for the Cosby et al. (1984) and Clapp and Hornberger (1978) values. Therefore, Table 3 should be updated with the total porosity from Rawls et al. (1982), as follows:

    • Sand - 43.7 %
    • Loamy sand - 43.7 %
    • Sandy loam - 45.3 %
    • Silt loam - 50.1 %
    • Loam - 46.3 %
    • Sandy clay loam - 39.8 %
    • Silty clay loam - 47.1 %
    • Clay loam - 46.4 %
    • Sandy clay - 43.0 %
    • Silty clay - 47.9 %
    • Clay - 47.5 %

Snow, lakes and frozen soils:

  1. Tdepth and Fdepth

    What should we write to the output file for the soil freezing depth and the soil thawing depth in the winter season in the case of permafrost? In this case the soil temperature is negative in the entire soil layer and the isotherm T=0 is absent.

    Response:

    To distinguish it from unfrozen ground, as well as the frozen layer which may develop on top of the active layer, set the freezing depth to your no data value during the winter and summer. It will take on values if a frozen layer develops on top of the thawed active layer during freeze-up in the fall. The thaw depth should be equal to the depth of the active layer development in the summer and 0 (the soil surface) in the winter.
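    The reporting rule can be sketched as follows (the function and the 1e20 no-data value are illustrative; use the no-data convention declared in your own output files):

```python
NODATA = 1.0e20   # example no-data value, as in the forcing files

def permafrost_depths(winter, active_layer_depth, refreeze_depth=None):
    """Fdepth/Tdepth for a permafrost cell. Fdepth is no-data except when
    a frozen layer forms on top of the thawed active layer at fall
    freeze-up; Tdepth is the active layer depth in summer and 0 (the
    soil surface) in winter."""
    fdepth = refreeze_depth if refreeze_depth is not None else NODATA
    tdepth = 0.0 if winter else active_layer_depth
    return fdepth, tdepth

summer = permafrost_depths(winter=False, active_layer_depth=1.2)
freezeup = permafrost_depths(winter=False, active_layer_depth=1.2,
                             refreeze_depth=0.3)
```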

  2. IceT and IceFrac

    Do IceFrac and IceT deal with sea or lake ice? If there is no representation of lakes in our scheme, are these variables undefined?

    Response:

    That is correct, these variables do not need to be returned at all in your output file.

  3. Representation of lakes

    Our model has no lake submodel; how large a fraction of water can be ignored in my model (e.g., 10%)? In other words, if the fraction of water is larger than a limit value, we will treat this grid cell as missing values. Do you have a good suggestion for this limit value?

    Response:

    I would suggest that if your model doesn't explicitly represent lakes, that you simply ignore all open water fractions, and rescale the vegetation and bare soil fractions accordingly. After all, this is what is usually done when surface water is neglected in LSMs. If you treat the area as missing values, I'm concerned that the water budget would not match up when we route all of the results.

  4. Transmittance of the snow

    Could you provide transmittance of the snow?

    Response:

    Your question actually made me aware of a discrepancy in the instructions which no one has questioned: the thermal emissivity of all surfaces is said to be 1, but the thermal albedo of snow is said to be 0.65, which cannot both be true. (Since thermal emittance = absorptance, and absorptance + transmittance + reflectance = 1.) So the transmittance and reflectance of snow in the thermal range are both zero.

    For snow transmittance in the solar range, I would suggest a Beer's law relationship: I = Io * exp(-lambda*h)

    Where Io is (1-albedo) * incident radiation, h is the snow depth in meters, lambda(visible) = 6 m^-1 and lambda(infrared) = 20 m^-1.

    These values are from Patterson and Hamblin, Limnol. Oceanogr., 33(3), 1988. This falls into the category of optional parameters, so I make no guarantees of the appropriateness of these values.
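    A sketch of the suggested relationship (the incident radiation, albedo and depth below are illustrative inputs):

```python
import math

# Suggested extinction coefficients (Patterson and Hamblin, 1988), in m^-1
LAM = {"visible": 6.0, "infrared": 20.0}

def snow_transmitted(incident, albedo, depth_m, band="visible"):
    """Beer's law: I = Io * exp(-lambda*h), with Io = (1-albedo)*incident
    and h the snow depth in meters."""
    io = (1.0 - albedo) * incident
    return io * math.exp(-LAM[band] * depth_m)

# 300 W/m^2 incident on snow with albedo 0.65, 10 cm deep
i_vis = snow_transmitted(300.0, 0.65, 0.10)
```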

Netcdf and processing programs:

  1. Preferred netcdf output format

    Are there specific requirements for the netcdf output files?

    Response:

    Please follow these conventions.

    Dimensions: The dimensions are simply the numbers which give sizes to the arrays. They should be defined as lat, lon, lev and tstep (the input file format) or x, y, z, t. The latter is the better, less restrictive choice. The dimensions will take the values:

    • x = 29;
    • y = 14;
    • z="number of soil levels";
    • t = UNLIMITED;

    Variables: The position variables lon, lat, lev, time and timestp, should be defined next using the dimensions to specify their size. Finally, the standard variables are defined. It is required for quality control purposes that each variable have (at minimum) the following attributes:

    • units (e.g. Qair:units = "kg/kg");
    • associate (e.g. Qair:associate = "time lat lon"); and
    • missing value (e.g. Qair:missing_value = 1.e+20f).

    The associate attribute is needed to point from the variable to the appropriate geographical coordinates.

    In addition, a global attribute must define the sign convention used in each file. (e.g. nc_global:SurfSgn_convention = "traditional").
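    Put together, a header following these conventions might look like the following CDL sketch (the z size and the single Qair variable are illustrative only; your output files will carry the required output variables):

```
netcdf scheme_output {
dimensions:
    x = 29 ;
    y = 14 ;
    z = 3 ;            // "number of soil levels" (model dependent)
    t = UNLIMITED ;
variables:
    float lon(x) ;
    float lat(y) ;
    float lev(z) ;
    float time(t) ;
    int timestp(t) ;
    float Qair(t, y, x) ;
        Qair:units = "kg/kg" ;
        Qair:associate = "time lat lon" ;
        Qair:missing_value = 1.e+20f ;

// global attributes:
        :SurfSgn_convention = "traditional" ;
}
```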

  2. Writing model output in netcdf

    Could you please give me a reference where I can find a program to read in the netcdf forcing files and write out model output for the phase 1 experiment of PILPS 2e?

    Response:

    At the following URL: http://www.lmd.jussieu.fr/~polcher/PILPS4c/interface_implementation_ipsl.html you will find how it is done by IPSL. The code is provided and, if you wish, you may use it (you can probably use the routine readdim2.f90 as-is for reading in the forcing files).

    It is all based on a library which was developed at IPSL (Institut Pierre Simon Laplace). This library is freely accessible and might help you : http://www.ipsl.jussieu.fr/~ioipsl.

    Documentation is relatively poor, but if you choose to follow this method, you may contact Jan Polcher for help if needed.

  3. Viewing netcdf in Grads

    I have tried to open the forcing data files (*.nc) using Grads, but I cannot. Could you tell me how to open those files?

    Response:

    You must use Grads version 1.6 or later and the udunits package, version 1.10 or later, in order to read netcdf files. See http://www.cdc.noaa.gov/~hoop/grads2.html for details. The netcdf forcing files I provided are consistent with the COARDS conventions, and therefore the sdfopen command should work.

    I have been using the freely-distributed program ferret in order to view netcdf files. Information on ferret can be found at http://ferret.wrc.noaa.gov/Ferret/. I was not very familiar with netcdf prior to this project, but I have found ferret to be relatively easy to install and use.

  4. Writing netcdf output

    I am trying to write out my output files in netcdf format. Because our input files are written in netcdf format using a program, could you please tell me where I can get this program (in FORTRAN 77) as an example or whom I can contact?

    Response:

    The code that I am using to generate the pilps files can be downloaded from www.ipsl.jussieu.fr/~ssipsl/data/data_pilps2e.html. The code is written in Fortran 90, by Jan Polcher of LMD in Paris. Please keep in mind that this program was written to deal with VIC-format data, which includes the entire timeseries in a separate file for each gridcell. Questions on adapting your model to write in netcdf should be addressed to Jan Polcher.

  5. Netcdf for PC and F77

    First of all, we tried to install netCDF package (version 3) on our computer, but failed. Could you please advise us how to install netCDF package on PC with Windows (or DOS) operating system and without C-compiler (we only have Fortran-77)? Is it possible?

    Response:

    The easiest thing is to use the pre-compiled binary files available from ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/netcdf-3.5.win32bin.ZIP; you will also need WIN32_README.TXT from the same directory. Follow the instructions in section 1 of the README file. This requires putting the files in the appropriate directories on your system; this is system dependent and not something I can answer for you. But this should be all you need to do.

Calibration experiment and discharge files:

  1. Routing for calibration catchments

    Do we need to route the runoff to streamflow before calibration? Although the two catchments are small, they still represent a fair amount of area. If we match our runoff to observed streamflow both in phase and magnitude, the routed runoff will be different from the observations.

    Response:

    The runoff from the calibration and validation catchments will not be routed. The simulated grid cell runoff should be adjusted by the contributing area of each grid cell (see question 4 for the most recent values) and summed. We will only analyze simulated and observed discharge from these basins on a monthly basis to account for the lack of routing. Each user may pick whichever time scale seems most appropriate for calibration.

  2. Error in sub-basin areas

    The accumulated area in km^2 of the gridcells contributing to the basins is a little different from the observed area of the basins given in Table 7.

    Response:

    The fractional areas given for the Kaalasjarvi basin on 9/21/00 were in error. The correct values were posted as of 10/10/2000, see the response to question 4. The sum of the fractional area for each basin can be rounded to the values printed in Table 7.

  3. Grid cell fraction larger than 1.0

    Is it correct that the fraction of a gridcell that contributes to streamflow is higher than 1?

    Response:

    Yes, this is the intended value. There are two reasons why this might happen. If less than 5% of a grid cell was draining to the calibration basin outlet, I added this area to the adjacent grid cell at the same latitude to avoid including many extra cells with very small areas. In addition, I found it necessary to adjust all of the fractional areas slightly so that the digital basin area matched the previously reported basin area. I chose to do this uniformly and accept fractions greater than 1.

  4. Grid cell area for streamflow computation

    According to your previous answer, Discharge = (Qs1+Qsb1)*A1+(Qs2+Qsb2)*A2+...

    Where Qs and Qsb are surface runoff and subsurface runoff, and A is the fractional area. If that is true, the units of discharge are kg/m^2s, but the units of the observed discharge are m^3/s, so the discharge should be multiplied by the grid cell area. Could you tell me which units should be used for discharge?

    Response:

    If you have calculated grid cell runoff as a unit depth per timestep (i.e. kg/m^2s), you will also need the actual area of each 1/4 degree grid cell in order to calculate the sub-basin streamflow. For example:

    summation("grid cell fractional area" * cell area * unit depth runoff) = sub-basin streamflow

    The fractional area is the fraction of each 1/4 degree grid cell which lies within each calibration basin.

    If you have already calculated grid cell streamflow (volume per unit time, or the unit depth per timestep * total grid cell area), then you just need:

    summation("grid cell fractional area" * grid cell streamflow) = sub-basin streamflow

    A revised version of Table 8 from the instructions document follows. The values in parentheses represent the actual area of the calibration basin, in square kilometers, which lies within each grid cell (i.e. "grid cell fractional area" * cell area). Please note that the fractional areas have changed slightly so that the digital area matches the stated sub-basin area. Please be sure to use these updated fractions.

    Gauge Names (No. of Cells), then for each contributing cell: grid cell center (lon/lat), fractional area, and in parentheses the basin area within the cell in km^2.

    Ovre Lansjarv (10 cells):
      21.125/67.125  0.071 (21.3)
      21.125/66.875  0.478 (145.2)
      21.375/66.875  1.000 (303.8)
      21.625/66.875  0.756 (229.6)
      21.875/66.875  0.497 (150.9)
      22.125/66.875  0.158 (48.0)
      21.375/66.625  0.283 (86.8)
      21.625/66.625  0.268 (82.2)
      21.875/66.625  0.384 (117.8)
      22.125/66.625  0.506 (155.2)

    Ovre Abiskojokk (7 cells):
      18.125/68.125  0.126 (36.3)
      18.375/68.125  0.303 (87.3)
      18.625/68.125  0.230 (66.3)
      18.125/68.375  0.151 (43.0)
      18.375/68.375  0.504 (143.6)
      18.625/68.375  0.457 (130.2)
      18.875/68.375  0.208 (59.3)

    Pello (12 cells):
      24.125/67.375  0.539 (160.3)
      24.375/67.375  0.428 (127.3)
      23.375/67.125  0.753 (226.3)
      23.625/67.125  0.975 (293.1)
      23.875/67.125  0.859 (258.2)
      24.125/67.125  1.05 (315.6)
      24.375/67.125  0.762 (229.0)
      23.375/66.875  0.697 (211.7)
      23.625/66.875  0.708 (215.0)
      23.875/66.875  0.878 (266.6)
      24.125/66.875  0.727 (220.8)
      24.125/66.625  0.319 (97.9)

    Kalaasjarvi (11 cells):
      18.125/68.125  0.221 (63.7)
      18.375/68.125  0.541 (155.9)
      18.375/67.875  0.188 (54.8)
      18.625/67.875  0.743 (216.4)
      18.875/67.875  0.743 (216.4)
      19.125/67.875  0.623 (181.4)
      19.375/67.875  0.625 (182.0)
      19.625/67.875  0.634 (184.6)
      19.875/67.875  0.378 (110.1)
      19.625/67.625  0.120 (35.3)
      19.875/67.625  0.242 (71.2)
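    Combining these contributing areas with unit-depth runoff, the sub-basin discharge computation can be sketched as follows (the runoff rates are hypothetical; the areas are the parenthesized Ovre Abiskojokk values from the table):

```python
def sub_basin_discharge(runoff_kg_m2_s, contrib_area_km2):
    """Sub-basin streamflow (m^3/s) from unit-depth runoff (Qs + Qsb, in
    kg/m^2s) and the contributing area of each grid cell (km^2, i.e. the
    parenthesized values: grid cell fractional area * cell area)."""
    total_kg_s = sum(q * a * 1.0e6                 # kg/s over each cell
                     for q, a in zip(runoff_kg_m2_s, contrib_area_km2))
    return total_kg_s / 1000.0                     # 1000 kg of water = 1 m^3

# Hypothetical uniform runoff over the 7 Ovre Abiskojokk cells
runoff = [2.0e-5] * 7
areas = [36.3, 87.3, 66.3, 43.0, 143.6, 130.2, 59.3]
q_sim = sub_basin_discharge(runoff, areas)   # about 11.3 m^3/s
```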

  5. Runoff scaling factors for calibration basins

    Please confirm that only the surface and subsurface flow for each grid cell in a sub-basin should be scaled according to the fractional area given in Table 8; not, for example, fluxes or other types of output.

    Response:

    None of the variables returned for each grid cell, including Qs and Qsb, should be scaled when returning this information (e.g. in the file [scheme].base.wb.torne.hrly). The scaling is only used for you to calculate a simulated sub-basin discharge for comparison with observations for calibration purposes. For example, the simulated discharge for Ovre Abiskojokk in each time step will be the summation of (Qs + Qsb) * grid cell fraction * grid cell area over all 7 sub-basin grid cells. The result of this summation will be returned as the simulated discharge in [scheme].cal.abisko.torne.hrly, but the scaled Qs and Qsb will not be returned.

  6. Coordinates of Ovre Abiskojokk gauge

    Also, the tor_dis_abisk.nc file, under the global attributes latitude and longitude, lists the wrong values for the location of the gauge (at least compared to the instructions file).

    Response:

    Yes, the correct coordinates are 18.7917, 68.3664. In the file tor_dis_abisk.nc I give the coordinates for the gauge at Ovre Lansjarv. Oddly enough, the latitude for Ovre Abiskojokk given in the instructions is also wrong. This has been corrected in the discharge files which were re-posted to the ftp site on 8/4/00.

  7. Time period of calibration runs

    The instructions file doesn't indicate which decade the calibration/validation runs are for, although I assume they would be for 1989 to 1998, same as the base runs.

    Response:

    That is correct, the calibration/validation runs should be run for the period 1/1/1989 - 12/31/1998.

  8. Discharge timestp axis

    I have a question about the tor_dis_abisk.nc and tor_dis_lansj.nc netCDF files of observed discharge. The "time" variable lists seconds since 1979Jan01, the first value of this variable is 3.156624e+08, or about 10 years, making the first data record valid on 1989Jan01. However, the "timestp" variable's first value is 1, while the units for this variable are timesteps since 1979Jan01. I can't tell if this data is valid from 1979 to 1988, or from 1989 to 1998. Which is it?

    Response:

    The data is from 1989 - 1998, the "timestp" variable is in error, it should increase from 3654 to 7305. For consistency I decided to keep the same time origin (1/1/1979 0:00) for all files, but I overlooked the timestp variable when I made this change. The discharge files have been corrected and re-posted to the ftp site as of 8/4/00.
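    The corrected range can be checked with ordinary date arithmetic (assuming daily timesteps numbered from 1 at the 1/1/1979 origin):

```python
from datetime import date

origin = date(1979, 1, 1)

# 1-based daily timestp index relative to the time origin of the files
first_step = (date(1989, 1, 1) - origin).days + 1     # 3654
last_step = (date(1998, 12, 31) - origin).days + 1    # 7305
```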

Output requirements:

  1. Dimensions of subsurface variables

    Subsurface state variables - Table 10 gives dimensions as (lat,lon,time). Do you want a separate array for each variable and each model level?

    Response:

    There should technically be another dimension for model level. I would prefer that a fourth dimension, 'level', be defined. The values taken by the 'level' axis will be the distance in meters from the soil surface to the bottom of each soil layer. (For example, level will take on values of 0.1, 0.8 and 1.0 if SoilMoist, LSoilMoist and SoilTemp are returned for soil layers which go from 0 - 10 cm, 10 - 80 cm and 80 - 100 cm.) This approach requires that each layer depth is constant across the model domain. For models with different layer depths in each grid cell, please report values for effective layer depths, i.e. the equivalent values as if the layer boundaries were fixed across the domain. If you feel that this approach would result in a significant loss of information, please contact me and we will work out a more detailed reporting system.

  2. Time period of the output files

    For the netCDF output files, I wonder if you expected one output file for each year (as for the forcing data), or just one file for the 10 years?

    Response:

    Please provide one output file for each simulated year, as for the forcing data. These can then be tarred together for data transfer.

  3. Range of values for the output variables

    I began to check the range of the output variables, and the extreme values are often outside the limits while still being physically acceptable. For two of these variables, the problem can be linked to the definition:

    • SnowT: What should I write when the snow pack has completely disappeared? So far I have set SnowT=AvgSurT when the soil is snow free, but then the maximum limit (280 K) is exceeded. If I simply write 0, the minimum limit (213 K) is violated too.
    • SubSnow: I understood SubSnow as the total evaporation from the snowpack, including sublimation and condensation. However, the minimum value set for SubSnow is 0, which allows no condensation. But condensation onto the snowpack is not negligible!

    The other variables for which the limits are exceeded are:

    • Qle: the maximum value of 300 W/m2 seems pretty low! We easily reach 400 W/m2. The same problem of course applies to Evap and the evaporation components.

    • Qg, Qv, Qf: both the min and max limits are exceeded.

    So I wonder whether some of the limits will be reconsidered, and what will happen if the ranges are not met?

    Response:

    The screening program is meant to protect us from unnecessary delays in the processing of model output by throwing out values that are clearly in the wrong units or assigned to the wrong variable name, etc. Results which do not pass the screening will not be accepted until they do. That being said, certainly we do not mean to exclude reasonable values which are not contained within our limits. The following changes are proposed for this experiment:

    • Qle: +/- 700 W/m2;
    • Qh: +/- 600 W/m2;
    • Qg: +/- 500 W/m2;
    • Qf: +/- 1200 W/m2;
    • Qv: +/- 600 W/m2;
    • Qa: +/- 50 W/m2;
    • DelSoilHeat: +/- 500 W/m2;
    • DelColdCont: +/- 200 W/m2;
    • Evap: +/- 0.0003 kg/m2/s;
    • TVeg: +/- 0.0003 kg/m2/s;
    • ECanop: +/- 0.0003 kg/m2/s;
    • ESoil: +/- 0.0003 kg/m2/s;
    • EWater: +/- 0.0003 kg/m2/s;
    • SubSnow: +/- 0.0003 kg/m2/s;

    SnowT (and other cold-season parameters such as IceT and SAlbedo) should be set to the no-data value when snow or ice no longer exists. The no-data value is defined in the attributes of each netcdf variable. For the forcing files a value of 1.0x10^20 was used.
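    A minimal sketch of how such a screening check might look, using a subset of the proposed limits (the function and table names are hypothetical; the actual screening program may differ):

    ```python
    # Hypothetical screening sketch; variable names follow the ALMA
    # convention, and NODATA matches the forcing-file no-data value.
    NODATA = 1.0e20

    LIMITS = {                        # (min, max) accepted by the screening
        "Qle":     (-700.0, 700.0),   # W/m2
        "Qh":      (-600.0, 600.0),   # W/m2
        "Qg":      (-500.0, 500.0),   # W/m2
        "Qf":      (-1200.0, 1200.0), # W/m2
        "Evap":    (-0.0003, 0.0003), # kg/m2/s
        "SubSnow": (-0.0003, 0.0003), # kg/m2/s
    }

    def screen(name, value):
        """True if the value is the no-data flag or within the limits."""
        if value == NODATA:
            return True
        lo, hi = LIMITS[name]
        return lo <= value <= hi

    print(screen("Qle", 400.0))  # True: within the widened +/-700 W/m2
    print(screen("Qle", 900.0))  # False: still out of range
    ```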

  4. Definition of evaporation components

    For the evaporation components, I have set ESoil as the sum of bare soil evaporation (Eg) and sublimation from soil ice (Egi), and SubSnow as the sum of snow sublimation (Es) and evaporation of the liquid contained in the snow pack (Els). The problem is that the latent heat of vaporisation (Lv) should be used for Eg and Els, while the latent heat of sublimation (Ls) should be used for Egi and Es. So, writing ESoil=Eg+Egi and SubSnow=Es+Els, it is not possible to compute, from the evaporation components, a latent heat flux equal to Qle. I wonder if this is a problem? Of course, the equation Evap=ECanop+TVeg+ESoil+EWater+SubSnow is still satisfied.

    Response:

    In order to avoid the proliferation of output variables, we will keep the definitions you have adopted. We are requesting the energy associated with fusion and sublimation separately, so although we will not be able to back-calculate every variable, we will be able to distinguish the energy associated with each of these processes.
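    The questioner's point can be illustrated numerically (a toy sketch with made-up flux magnitudes, not any particular model): because vaporisation and sublimation use different latent heats, applying a single Lv to the lumped components ESoil and SubSnow does not recover Qle.

    ```python
    LV = 2.501e6   # latent heat of vaporisation, J/kg
    LS = 2.834e6   # latent heat of sublimation, J/kg

    # Made-up component magnitudes, kg/m2/s:
    Eg, Egi = 2.0e-5, 0.5e-5   # bare-soil evaporation / soil-ice sublimation
    Es, Els = 1.0e-5, 0.2e-5   # snow sublimation / liquid-in-snow evaporation

    true_qle = LV * (Eg + Els) + LS * (Egi + Es)
    lumped_qle = LV * ((Eg + Egi) + (Es + Els))  # single Lv on ESoil + SubSnow

    # The two differ by exactly (LS - LV) * (Egi + Es):
    print(true_qle - lumped_qle)
    ```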

  5. Time axis for output files

    On the time label for the netcdf file. No format is recommended either in your instructions or on the ALMA site. Several possibilities arise:

    (a) Start date/time + elapsed time

    (b) Current date/time + elapsed time

    (c) Start of the interval date/time + end of the interval date/time.

    Response:

    It is not entirely clear what is meant by option (c); however, we think it would be best if the time axis of the output file mirrored that of the input file, that is, a time unit of seconds since 1/1/1979 0:00.

  6. Average vs. instantaneous prognostic and diagnostic variables

    Any of the variables in Table 9 can be binned into one of 3 categories: (a) prognostic, which I will label X; (b) flux, which I will label F; (c) diagnostic, which I will label D. Our problem concerns the footnote specifying that variables should be forward averages, i.e., for 1 UTC of a particular day you report the average over the 1-2 UTC interval of that day.

    Let us start with something simple: soil water in the top soil layer, the first variable of your 0.4 category. I would bin it in my category (a), i.e., a prognostic variable, one that belongs to the l.h.s. of your partial differential equations (PDEs) (i.e., for which you have ∂X/∂t = ...). We would rather output there, violating your instructions, the instantaneous quantity, valid at 1 UTC.

    One of the forcings changing soil moisture is the rainfall rate, the second variable in your category 0.2. This is a flux quantity, i.e., it appears as an additive term on the r.h.s. of your PDE (i.e., ∂X/∂t = ... + F + ...). We would comply with your instructions and output the rainfall rate time-averaged.

    If we follow our convention (different from your Table 9 for prognostic variables), we will be able to do an exact budget of water in the top soil layer between times t1 and t2:

    X_t2 = X_t1 + (t2-t1)*F + ...

    If you average the X quantities you cannot verify your water budget, something that has proved an essential component of PILPS time and again.
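    This closure property can be illustrated with a toy integration (made-up numbers, not any particular model): with instantaneous states X and a time-averaged flux F, the budget X_t2 = X_t1 + (t2-t1)*F closes exactly.

    ```python
    dt = 3600.0                      # one-hour reporting interval, s
    sub = 60.0                       # model sub-step, s
    # A flux that varies between sub-steps, kg/m2/s:
    flux = [1.0e-4 * (1 + 0.5 * (i % 2)) for i in range(60)]

    x = 10.0                         # instantaneous state at t1, kg/m2
    states = [x]
    for f in flux:
        x += f * sub                 # step the prognostic equation
        states.append(x)

    f_avg = sum(flux) / len(flux)    # time-averaged flux over t1..t2
    budget_rhs = states[0] + dt * f_avg

    # Exact closure with the instantaneous end-point states:
    print(abs(states[-1] - budget_rhs) < 1e-9)  # True
    ```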

    On diagnostic variables (any variable that can be obtained as a function of ancillary information and prognostic variables, and possibly forcing quantities, at a given time) we would apply discretion in using time-averaged or instantaneous quantities. E.g., we would report liquid soil water instantaneously for 1 UTC, to be consistent with the main quantity from which it stems, soil water. For quantities that are a more complex function of instantaneous variables, such as skin temperature, we have not made up our minds, but both time-averaged and instantaneous seem possible.

    Response:

    When first putting together the list of variables which should be returned for the base runs, we did consider this issue of averaged versus instantaneous values. At that time it was our opinion that for an experiment in which models are run in simulation mode (rather than assimilation), with a preferred simulation time step of one hour, the difference between average and instantaneous quantities over that hour would be small. However, your point is well-taken, and if the sub-hourly variation in some models is large enough to cause concern, then we should certainly track this information. Therefore, IN ADDITION to the output variables summarized in Table 9 of the experiment design, we request that all participants include the following variables, defined on the ALMA web site:

    • DelSoilHeat;
    • DelColdContent;
    • DelSoilMoist;
    • DelSWE;
    • DelSurfStor; and
    • DelIntercept.

    All of these variables represent the accumulated change in the designated quantity over the course of the model time step, and should allow calculation of the complete water and energy budget over each hourly time interval.
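    How such an accumulated-change variable might be tracked can be sketched as follows (a hypothetical integration loop with illustrative names and magnitudes; any model's actual bookkeeping will differ):

    ```python
    def run_hour(store, fluxes, sub_dt):
        """Advance a water store over one reporting interval.

        Returns the final store and the accumulated change (the Del*
        quantity), so that store_t1 + del_store == store_t2 exactly.
        """
        start = store
        for f in fluxes:            # net flux into the store, each sub-step
            store += f * sub_dt
        return store, store - start

    store = 25.0                    # kg/m2 at the start of the hour
    store, del_soil_moist = run_hour(store, [2.0e-4] * 60, 60.0)
    print(round(del_soil_moist, 6))  # ~0.72 kg/m2 over the hour
    ```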