# Classification trees of the interpolation methods offered in Geostatistical Analyst

One of the most important decisions you will make is defining your objective in developing an interpolation model. In other words, what information do you need the model to provide so that you can make a decision? For example, in the public health arena, interpolation models are used to predict levels of contaminants that can be statistically associated with disease rates. Based on that information, further sampling studies can be designed, public health policies can be developed, and so on.

Geostatistical Analyst offers many different interpolation methods. Each has unique qualities and provides different information (in some cases, methods provide similar information; in other cases, the information may be quite different). The following diagrams show these methods classified according to different criteria. Choose a criterion that is important for your situation, then follow the branch of the corresponding tree that represents the option you are interested in. This will lead you to one or more interpolation methods that may be appropriate for your situation. Most likely, you will have several important criteria to meet and will use several of the classification trees. Compare the interpolation methods suggested by each tree branch you follow and pick a few methods to contrast before deciding on a final model.

The first tree suggests methods based on whether they generate predictions only or predictions along with their associated errors.

Some methods require a model of spatial autocorrelation to generate predicted values, but others do not. Modeling spatial autocorrelation requires defining extra parameter values and interactively fitting a model to the data.
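The first step in modeling spatial autocorrelation is estimating an empirical semivariogram from the data, which is then fitted with a model. The following is a minimal sketch of that estimation step using NumPy only; the sample data, lag width, and number of lags are illustrative assumptions, not values produced by Geostatistical Analyst.

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_width, n_lags):
    """Average 0.5*(z_i - z_j)^2 over point pairs, binned by separation distance."""
    # Pairwise distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # count each pair once
    dist, gamma = d[iu], sq[iu]
    bins = np.floor(dist / lag_width).astype(int)
    lags, semis = [], []
    for b in range(n_lags):
        mask = bins == b
        if mask.any():
            lags.append(dist[mask].mean())   # average separation in this lag
            semis.append(gamma[mask].mean()) # empirical semivariance
    return np.array(lags), np.array(semis)

# Illustrative data: 50 random locations with a spatially varying signal
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))
values = np.sin(coords[:, 0] / 20) + rng.normal(0, 0.1, 50)
lags, semis = empirical_semivariogram(coords, values, lag_width=10.0, n_lags=10)
```

Fitting a model (spherical, exponential, and so on) to the resulting `(lags, semis)` points is the interactive step referred to above.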

Different methods generate different types of output, which is why you must decide what type of information you need to generate prior to building the interpolation model.

Interpolation methods vary in their levels of complexity, which can be measured by the number of assumptions that must be met for the model to be valid.

Some interpolators are exact (at each input data location, the surface will have exactly the same value as the input data value), while others are not. Exact replication of the input data may be important in some situations.
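Inverse distance weighting is one example of an exact interpolator. The sketch below is a simplified IDW predictor (no search neighborhood), written to show why the method is exact: when the prediction location coincides with an input point, that point's weight dominates and the measured value is returned unchanged.

```python
import numpy as np

def idw_predict(coords, values, target, power=2.0):
    """Inverse distance weighted prediction at a single target location."""
    d = np.linalg.norm(coords - target, axis=1)
    hit = d < 1e-12
    if hit.any():              # target coincides with an input point:
        return values[hit][0]  # return the measured value exactly
    w = 1.0 / d ** power       # closer points receive larger weights
    return np.sum(w * values) / np.sum(w)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([10.0, 20.0, 30.0, 40.0])
print(idw_predict(coords, values, np.array([0.0, 0.0])))  # exactly 10.0
print(idw_predict(coords, values, np.array([0.5, 0.5])))  # a weighted average
```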

Some methods produce surfaces that are smoother than others. Radial basis functions are smooth by construction, for example. The use of a smooth search neighborhood will produce smoother surfaces than a standard search neighborhood.

For some decisions, it is important to consider not only the predicted value at a location but also the uncertainty (variability) associated with that prediction. Some methods provide measures of uncertainty, while others do not.
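Kriging is the main family of methods that quantifies this uncertainty. As a hedged illustration, the sketch below implements simple kriging with a known mean and an assumed exponential covariance model (the sill, range, and data are illustrative): it returns both a prediction and the kriging variance, which is zero at data locations and grows with distance from them.

```python
import numpy as np

def simple_kriging(coords, values, target, mean, sill=1.0, range_param=10.0):
    """Simple kriging prediction and kriging variance at one target location."""
    def cov(h):
        return sill * np.exp(-h / range_param)  # exponential covariance model

    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = cov(d)                                        # data-to-data covariances
    c0 = cov(np.linalg.norm(coords - target, axis=1))  # data-to-target covariances
    w = np.linalg.solve(C, c0)                        # simple kriging weights
    pred = mean + w @ (values - mean)
    var = sill - w @ c0                               # kriging (prediction) variance
    return pred, var

coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
values = np.array([1.0, 2.0, 3.0])
pred_at_data, var_at_data = simple_kriging(coords, values, np.array([0.0, 0.0]), mean=2.0)
pred_away, var_away = simple_kriging(coords, values, np.array([5.0, 5.0]), mean=2.0)
```

At the data location the variance is zero (the prediction reproduces the measurement); away from the data the variance is positive, giving a map of where the model is more or less certain.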

Finally, processing speed may be a factor in your analysis. In general, most of the interpolation methods are relatively fast, except when barriers are used to control the interpolation process.

The classification trees use the following abbreviations for the interpolation methods:

| Abbreviation | Method name |
|---|---|
| GPI | Global polynomial interpolation |
| LPI | Local polynomial interpolation |
| IDW | Inverse distance weighted interpolation |
| RBF | Radial basis functions |
| KSB | Kernel smoothing with barriers |
| DKB | Diffusion kernel with barriers |
| Kriging | Ordinary, simple, universal, indicator, probability, disjunctive, and empirical Bayesian kriging |
| Simulation | Gaussian geostatistical simulation, based on a simple kriging model |