Manual Testing

Test Setup

  1. Create the tests in the test project (https://dynamicsolutions-intl.atlassian.net/jira/software/projects/TEST)

  2. Prepare your test environment by ensuring the following:

    1. You are using the correct version of EFDC+ & EE.

    2. You have a folder for each model, and a subfolder for each test of that model.

  3. Load and save the test project in EE.

  4. (Optional) Create a .bat file in each project folder with the related run command (a sketch follows this list).

  5. The comparison step uses the Python comparison tool, which is available here: https://github.com/dsi-llc/modelcomparer. Please follow the instructions in the Readme to set up the tool.
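For step 4 above, a minimal PowerShell sketch that drops a run.bat into each test subfolder of a model; the folder path and the run command are placeholders, and the real run command (executable name and arguments) should be the one EFDC+ Explorer generates for the EFDC+ version under test:

# Hypothetical sketch: write a run.bat into every test subfolder of one model.
# $runCommand is a placeholder; substitute the actual EFDC+ run command for
# the build you are testing.
$modelRoot  = 'C:\models\Chesapeake_Bay'      # one folder per model
$runCommand = '"C:\path\to\EFDCPlus.exe"'     # placeholder run command
Get-ChildItem -Path $modelRoot -Directory | ForEach-Object {
    Set-Content -Path (Join-Path $_.FullName 'run.bat') -Value $runCommand
}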

Running Models

Run each test for the model and follow the necessary steps depending on the outcome.
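If you created the optional run.bat files, a minimal PowerShell sketch for running every test of a model in sequence might look like this (the model path is a placeholder, and each test subfolder is assumed to contain a run.bat; skip the REFERENCE folder if it sits alongside the tests):

# Hypothetical sketch: run each test subfolder's run.bat one after another.
# EFDC+ writes its results to the #output folder inside each test folder.
$modelDir = 'C:\models\Chesapeake_Bay'        # placeholder model folder
Get-ChildItem -Path $modelDir -Directory | ForEach-Object {
    Push-Location $_.FullName
    & .\run.bat
    Pop-Location
}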

If Model Crashes

  1. Take a screenshot of the command prompt that shows the EFDC output.

  2. Create a .zip file of the failed model test run (excluding #output if the output is large; see the PowerShell sketch after this list).

  3. Create a Bug issue in the EFDC Jira project that includes:

    1. the screenshot taken of the command prompt

    2. the model .zip file

    3. an issue link to the related Jira Test item.

    4. example: https://dynamicsolutions-intl.atlassian.net/browse/EFDC-292

  4. Move the Jira Test issue status to “Failed”. (example below)
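For step 2 in the list above, a minimal PowerShell sketch for creating the .zip of a failed test run while leaving out the #output folder; all paths are placeholders:

# Hypothetical sketch: zip a failed test run, excluding the #output folder.
$testDir = 'C:\models\Chesapeake_Bay\Single_Domain'      # placeholder test folder
$items   = Get-ChildItem -Path "$testDir\*" -Exclude '#output'
Compress-Archive -Path $items.FullName -DestinationPath 'C:\models\Single_Domain_failed.zip'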

 

Once all the tests for a model complete without crashing, proceed to the model comparison.

Model Comparison

This section assumes you have already followed the necessary steps to use the comparison tool.

  • Create your config.json file that specifies the parameters for the model comparisons you want to make. This file should be in the root comparison tool folder.

  • In the following example config.json, I am comparing each test to a reference model. I specify that the comparison starts at the 3rd time step (startIndex 2; the indices are zero-based), compares 1 time step at a time (count 1), and stops at the 6th time step (endIndex 5).

[ { "comparisonDirectory": "C:\\models\\Chesapeake_Bay\\REFERENCE", "sourceDirectory": "C:\\models\\Chesapeake_Bay\\Single_Domain", "startIndex": 2, "count": 1, "endIndex": 5 }, { "comparisonDirectory": "C:\\models\\Chesapeake_Bay\\REFERENCE", "sourceDirectory": "C:\\models\\Chesapeake_Bay\\IC_Decomp", "startIndex": 2, "count": 1, "endIndex": 5 }, { "comparisonDirectory": "C:\\models\\Chesapeake_Bay\\REFERENCE", "sourceDirectory": "C:\\models\\Chesapeake_Bay\\JC_Decomp", "startIndex": 2, "count": 1, "endIndex": 5 } ]
  • Next, run the comparison tool.

We recommend running the tool in PowerShell, because the Command Prompt does not recognize the color characters in the tool's output.
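As a minimal sketch, assuming the tool was cloned to C:\tools\modelcomparer, that Python is on your PATH, and that the entry-point script is the one named in the Readme (compare.py below is only a placeholder):

# Hypothetical sketch: run the comparison tool from its root folder, which is
# where the config.json created above should live. Replace 'compare.py' with
# the actual entry-point script named in the tool's Readme.
Set-Location -Path 'C:\tools\modelcomparer'
python .\compare.py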

  • The tool will report if it finds significant differences in the vel.out or ws.out files. It should produce an output like this…

If the differences are considered significant, follow the same procedure as outlined for a model crash. Include a screenshot of the comparison output (as shown above).


If the models run to completion and there are no significant differences from the reference model, then move the Jira Test status to “Passed”. Finally!
