For monthly reconstructions, it is possible to focus on particular months. Selections on the left will populate all other tabs.

There may be fitting issues with this reconstruction; please check the Goodness of Fit tab for details.

Plots are dynamic. Click and drag within the time series to zoom or use the scroll bar at the bottom. Double-click on the graph to zoom out.

**Recommended Citation:**

Data loading.

Flows are sorted by most extreme. You may re-sort by clicking column headers. You may also change the period of interest using the Date Subset drop-down.

Data loading.

While this indicates a relatively poor calibration, it may be acceptable depending on your use case. Carefully consider how you will use the reconstruction and double-check the fit before proceeding.

This means the reconstruction performs worse than if the model had assumed the historical average for all years. While this indicates a relatively poor calibration, it may be acceptable depending on your use case. Carefully consider how you will use the reconstruction and double-check the fit before proceeding.

There was a large decrease in predictive skill in the validation step, indicating the possibility of an overfit model that could perform poorly outside the calibration period. This may be acceptable depending on your use case. Carefully consider how you will use the reconstruction and double-check the fit before proceeding.

Reconstructed flows plotted against observed flows for the instrumental (calibration) period. A perfect calibration fit would be along the 1:1 line, shown in red.

Hover over a point to see its date and values. Click and drag to zoom in on an area. Right click to zoom out.

Calibration statistics indicate the best expected model performance and should be complemented with validation.

If validation performance is much worse than calibration performance, it indicates the model is too sensitive to the data it was trained on, or 'overfit'.

R-squared describes how much of the variance in observed flow is accounted for by the predictors (tree rings).

It represents the proportion of variance explained and ranges from zero to one, with one being a perfect fit.

Reconstruction-Statistical Background
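As a minimal sketch of this calculation (the flow values below are invented for illustration, not app data):

```python
# Hypothetical observed and reconstructed flows for five years.
obs = [10.0, 14.0, 8.0, 12.0, 16.0]
pred = [11.0, 13.0, 9.0, 12.0, 15.0]

mean_obs = sum(obs) / len(obs)
ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))  # residual sum of squares
ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # total variance around the mean
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))  # → 0.9
```

Here 90% of the year-to-year variance in observed flow is captured by the reconstruction.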

Nash-Sutcliffe Efficiency measures a model's predictive ability against the long-term mean. NSE ranges from 1 (perfect fit) to -∞. Values greater than zero indicate that the model has some skill (it performs better than simply assuming the mean).

Within the reconstruction community, this metric is typically referred to as the Coefficient of Efficiency (CE) when applied to calibration and Reduction of Error (RE) when applied to validation.

Efficiency Criteria
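A minimal sketch of the NSE calculation, using made-up flow values to show a skillful fit, a perfect fit, and a worse-than-the-mean fit:

```python
def nse(obs, pred):
    """Nash-Sutcliffe Efficiency: 1 minus (model error / variance around the mean)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

obs = [10.0, 14.0, 8.0, 12.0, 16.0]
print(nse(obs, [11.0, 13.0, 9.0, 12.0, 15.0]))  # > 0: some skill
print(nse(obs, obs))                             # 1.0: perfect fit
print(nse(obs, [20.0, 4.0, 18.0, 2.0, 25.0]))   # < 0: worse than assuming the mean
```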

RMSE provides the 'typical' error and is an absolute measure of fit, meaning it is displayed in flow units.

RMSE is calculated as the square root of the mean of the squared errors (predicted minus observed flow). A perfect model would have no errors, and therefore RMSE = 0.

Reconstruction-Statistical Background
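As a sketch, using the same kind of invented flow values as above:

```python
import math

# Hypothetical observed and reconstructed flows.
obs = [10.0, 14.0, 8.0, 12.0, 16.0]
pred = [11.0, 13.0, 9.0, 12.0, 15.0]

# Square root of the mean squared error; result stays in the original flow units.
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
print(round(rmse, 3))
```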

Mean Absolute Error (MAE) calculates the average absolute (no sign) difference between predicted and observed flows. It is displayed in original flow units, and a perfect fit has MAE = 0.

MAE is similar in interpretation to RMSE and is the more intuitive of the two. Where RMSE handles negative values by squaring them, MAE simply applies the absolute value |·|.
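A sketch comparing the two on the same invented errors; because squaring weights large errors more heavily, MAE is never larger than RMSE:

```python
import math

# Hypothetical observed and reconstructed flows.
obs = [10.0, 14.0, 8.0, 12.0, 16.0]
pred = [11.0, 13.0, 9.0, 12.0, 15.0]

errors = [p - o for p, o in zip(pred, obs)]
mae = sum(abs(e) for e in errors) / len(errors)              # average |error|
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))  # penalizes large errors more
print(round(mae, 3), round(rmse, 3))  # MAE <= RMSE always holds
```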

Mean Error (ME) calculates the average error (predicted - observed). It is a measure of bias: whether the model consistently over-predicts or under-predicts. Unbiased models have an ME near zero.

Mean Error should always be considered alongside other metrics, because a poor predictive model can have an ME near zero when large positive and negative errors cancel each other out.
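A sketch of that cancellation effect, with errors deliberately invented to alternate in sign:

```python
# Hypothetical flows where the model misses by +5 and -5 in alternating years.
obs = [10.0, 14.0, 8.0, 12.0]
pred = [15.0, 9.0, 13.0, 7.0]

me = sum(p - o for p, o in zip(pred, obs)) / len(obs)       # signed errors cancel
mae = sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)  # absolute errors do not
print(me, mae)  # → 0.0 5.0 (no bias, yet large typical error)
```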

Validation is typically performed by splitting the observed record into a 'training' set and a 'validation' set. A model fit using the training set is then used to predict values in the withheld validation set, simulating prediction errors outside the observed record.

'Split Sample' validation defines these training and validation sets explicitly when fitting the model.

'K-fold' cross-validation splits the sample into k groups (folds), then predicts each fold (e.g., 10% of the data when k = 10) using a model fit to the remaining data. This is repeated k times until each point is predicted without being used in training.

Leave-One-Out is a commonly used method and is an extreme version of K-fold cross-validation, where each point is predicted using a model fit with all other data.
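The leave-one-out procedure can be sketched as follows. The tree-ring widths, flows, and the simple one-predictor regression are all invented for illustration; the app's actual models may differ.

```python
def fit_line(x, y):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Made-up tree-ring widths (predictor) and observed flows.
rings = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2]
flows = [9.0, 13.0, 7.0, 16.0, 10.0, 14.0]

# Leave-one-out: predict each year using a model fit to all other years.
loo_preds = []
for i in range(len(rings)):
    x = rings[:i] + rings[i + 1:]
    y = flows[:i] + flows[i + 1:]
    a, b = fit_line(x, y)
    loo_preds.append(a + b * rings[i])

print([round(p, 2) for p in loo_preds])
```

Validation statistics (RE, RMSE, etc.) are then computed by comparing `loo_preds` against the observed flows.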

To submit a new reconstruction, first download the template (Step 1). Then upload the completed template (Step 2). Please include as much metadata in the CSV file as possible, along with a name for the reconstruction and a valid email address so that we may contact you in case of questions (Step 3). Finally, push Submit (Step 4).

We strongly recommend that the original files submitted to PaleoFlow be housed in a standard repository, such as the International Tree-Ring Data Bank (ITRDB).

Download a blank template and insert data. All submissions must use the template provided. A completed file is provided as an example.

Submitting...

Push submit when finished. You will receive confirmation on the next screen.

The Reconstructed Streamflow Explorer was developed by James Stagge in conjunction with the Utah State University Water Research Lab and the Wasatch Dendroclimatology Research Group. It was funded in part by Utah Mineral Lease funds.

When using the Reconstructed Streamflow Explorer for research or reference, please cite as follows:

All code for this application is available as a GitHub repository, made available under the MIT license.

Please direct any questions to James Stagge.