Author Contributions
Conceptualization, C.O. and A.J.K.; methodology, C.O.; software, C.O.; formal analysis, C.O. and N.M.M.; writing—original draft preparation, C.O. and N.M.M.; writing—review and editing, C.O., N.M.M., A.J.K., J.N.M. and B.S.C.; supervision, A.J.K., J.N.M. and B.S.C.; funding acquisition, A.J.K., J.N.M. and B.S.C. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Location map of training (blue triangles), validation (red circles) and testing (green circles) sites. Training and validation sites were monitored with water level loggers; testing sites were measured manually.
Figure 2.
Flowchart of the methodology adopted in this study for bathymetry estimation using Landsat 8 imagery from Google Earth Engine.
Figure 3.
Correlation between band reflectance, band ratios, and depth: (a) log-transformed Blue band vs. depth, (b) log-transformed SWIR1 band vs. depth, (c) Green/Blue band ratio vs. depth, and (d) Green/SWIR1 band ratio vs. depth.
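The band-ratio predictors in panels (c,d) follow the standard Stumpf log-ratio form, ln(nR₁)/ln(nR₂). A minimal sketch of computing that predictor is below; the reflectance values are hypothetical and for illustration only, not the study's data.

```python
import numpy as np

def stumpf_ratio(band_num, band_den, n=1000.0):
    """Stumpf log-ratio predictor: ln(n * R_num) / ln(n * R_den).

    band_num, band_den: surface reflectance arrays (e.g., Green, SWIR1).
    n: fixed scaling constant that keeps both logarithms positive.
    """
    return np.log(n * np.asarray(band_num)) / np.log(n * np.asarray(band_den))

# Hypothetical reflectance samples for illustration only.
green = np.array([0.08, 0.10, 0.12])
swir1 = np.array([0.02, 0.03, 0.05])
ratio = stumpf_ratio(green, swir1)  # the ratio is then regressed against depth
```

The ratio itself is unitless; in ratio-based SDB it is related to depth through a fitted linear (or machine learning) model.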
Figure 4.
General architecture of the RF Model (image adapted from Choi et al. [20]).
Figure 5.
General architecture of the SVR Model (image adapted from Kabir et al. [43]).
Figure 6.
Hydrograph of estimated water depth (m) using the Bramante (light blue), Stumpf (orange), random forest (RF; green), support vector regression (SVR; red) and k-nearest neighbor (KNN; purple) compared to the observed data (blue) at (a) validation site 1, (b) validation site 2, and (c) validation site 3.
Figure 7.
Box plots of model residual errors (observed minus predicted) (a,c,e) and Taylor diagrams (b,d,f) comparing model statistics to the observed data at validation sites 1, 2, and 3.
Figure 8.
Performance of SDB at manually measured test locations for (a) Bramante, (b) Stumpf, (c) RF, (d) SVR, and (e) KNN models. Red line is the 1:1 line (limited to the maximum of the observed data). Blue line is the estimated best-fit regression line relating the observed and estimated water depths.
Figure 9.
Hydrograph of estimated water depth (m) using the Bramante (light blue), Stumpf (orange), random forest (RF; green), support vector regression (SVR; red) and k-nearest neighbor (KNN; purple) compared to the observed data (blue) at HOBO testing sites 1 and 2 ((a) and (b), respectively).
Figure 10.
Hydrograph of estimated water depth (m) using the Bramante (light blue), Stumpf (orange), random forest (RF; green), support vector regression (SVR; red) and k-nearest neighbor (KNN; purple) compared to the observed data (blue) at transfer sites: (a) P36, (b) G-3272 and (c) NP201.
Figure 11.
Residual error (observed minus modeled) at transfer sites: (a) P36, (b) G-3272 and (c) NP201.
Table 1.
List of training, validation, and testing sites with the number of satellite images per site and geographic locations.
Site | Latitude | Longitude | No. of Images |
---|---|---|---|
Training site 1 | 36.6264 | −89.1430 | 43 |
Training site 2 | 36.6278 | −88.9615 | 40 |
Training site 3 | 36.6298 | −88.9492 | 43 |
Training site 4 | 36.6151 | −89.0290 | 43 |
Training site 5 | 36.7714 | −88.9160 | 37 |
Training site 6 | 36.7001 | −88.8016 | 19 |
Training site 7 | 36.9348 | −88.9368 | 41 |
Validation site 1 | 36.6069 | −89.1179 | 43 |
Validation site 2 | 36.6117 | −89.0331 | 43 |
Validation site 3 | 36.7751 | −88.9108 | 40 |
Testing site 1 | 36.2369 | −89.2059 | 41 |
Testing site 2 | 36.1664 | −89.4009 | 23 |
P36 | 25.5272 | −80.7956 | 35 |
G-3272 | 25.6649 | −80.5389 | 35 |
NP201 | 25.7166 | −80.7194 | 35 |
Table 2.
Results of the band combination assessment used to select the optimum band combination based on mean absolute error (MAE). Bold text indicates the best-performing model for each band combination.
Band Combination | Stumpf MAE (m) | Bramante MAE (m) | RF MAE (m) | SVR MAE (m) | KNN MAE (m) |
---|---|---|---|---|---|
Blue/Red | 0.948 | 0.934 | 1.042 | 0.893 | 0.949 |
Blue/Green | 0.897 | 0.926 | 0.971 | 0.893 | 0.898 |
Blue/NIR | 0.650 | 0.631 | 0.662 | 0.890 | 0.597 |
Blue/SWIR1 | 0.548 | 0.515 | 0.673 | 0.887 | 0.602 |
Blue/SWIR2 | 0.534 | 0.526 | 0.648 | 0.886 | 0.582 |
Red/Green | 0.906 | 0.909 | 1.018 | 0.893 | 0.888 |
Red/NIR | 0.679 | 0.655 | 0.662 | 0.889 | 0.624 |
Red/SWIR1 | 0.566 | 0.525 | 0.699 | 0.885 | 0.623 |
Red/SWIR2 | 0.547 | 0.523 | 0.680 | 0.884 | 0.597 |
Green/NIR | 0.605 | 0.611 | 0.680 | 0.890 | 0.601 |
Green/SWIR1 | 0.529 | 0.510 | 0.679 | 0.886 | 0.594 |
Green/SWIR2 | 0.546 | 0.544 | 0.666 | 0.885 | 0.588 |
NIR/SWIR1 | 0.829 | 0.552 | 0.896 | 0.892 | 0.795 |
NIR/SWIR2 | 0.893 | 0.579 | 0.934 | 0.893 | 0.863 |
SWIR1/SWIR2 | 0.885 | 0.545 | 1.016 | 0.893 | 0.896 |
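The selection procedure behind Table 2 amounts to looping over all band pairs, fitting each candidate predictor, and scoring it on MAE. A sketch of that loop is below; the synthetic depths and reflectances, the decay constants, and the simple linear fit are all illustrative assumptions, not the study's data or models.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
depth = rng.uniform(0, 5, 200)            # synthetic depths (m), illustration only
# Synthetic reflectances: each band attenuates with depth at a different rate.
bands = {name: np.exp(-k * depth) + rng.normal(0, 0.01, 200)
         for name, k in [("Blue", 0.2), ("Green", 0.3), ("Red", 0.5),
                         ("NIR", 1.0), ("SWIR1", 2.0), ("SWIR2", 2.5)]}

def log_ratio(num, den, n=10000.0):
    # Clipping keeps both logarithms positive so the ratio stays finite.
    return (np.log(n * np.clip(num, 1e-3, None)) /
            np.log(n * np.clip(den, 1e-3, None)))

scores = {}
for num, den in combinations(bands, 2):
    x = log_ratio(bands[num], bands[den]).reshape(-1, 1)
    pred = LinearRegression().fit(x, depth).predict(x)
    scores[f"{num}/{den}"] = mean_absolute_error(depth, pred)

best = min(scores, key=scores.get)        # band pair with the lowest MAE
```

In the study the same comparison is made per model (Stumpf, Bramante, RF, SVR, KNN); the loop above shows the scoring logic for a single linear fit.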
Table 3.
Results of machine learning model hyperparameter tuning: the parameter values that yielded optimal model performance.
Model | Hyperparameter | Value |
---|---|---|
RF | No. of estimators | 40 |
SVR | Epsilon (ε) | 0.01 |
SVR | Regularization constant (C) | 1 |
KNN | No. of neighbors (k) | 21 |
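Assuming the models were built with scikit-learn (the implementation library is not restated in this section), the tuned values in Table 3 map onto the estimators as follows; all other arguments are library defaults.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Hyperparameter values from Table 3; remaining settings are scikit-learn defaults.
rf = RandomForestRegressor(n_estimators=40, random_state=0)
svr = SVR(epsilon=0.01, C=1.0)
knn = KNeighborsRegressor(n_neighbors=21)
```

Each estimator is then fit on the predictor (e.g., the Green/SWIR1 log ratio) and observed depth with the usual `fit(X, y)` / `predict(X)` interface.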
Table 4.
Performance of SDB models at validation site 1 and the effect of turbidity on the models. The first row for each model gives performance without the turbidity index; the second row gives performance when the model result was combined with the normalized difference turbidity index (NDTI) in a linear regression model for final depth estimation.
Model | NSE | RMSE (m) | MAE (m) | R² |
---|---|---|---|---|
Stumpf | 0.242 | 1.776 | 1.154 | 0.641 |
Stumpf_NDTI | 0.237 | 1.782 | 1.155 | 0.639 |
Bramante | 0.259 | 1.756 | 1.107 | 0.649 |
Bramante_NDTI | 0.237 | 1.782 | 1.155 | 0.639 |
RF | 0.251 | 1.766 | 1.158 | 0.476 |
RF_NDTI | 0.414 | 1.562 | 1.086 | 0.474 |
SVR | 0.056 | 1.982 | 1.336 | 0.567 |
SVR_NDTI | 0.491 | 1.456 | 0.969 | 0.565 |
KNN | 0.105 | 1.930 | 1.255 | 0.574 |
KNN_NDTI | 0.508 | 1.431 | 0.969 | 0.574 |
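The "_NDTI" rows combine each model's first-stage depth estimate with the normalized difference turbidity index in a second-stage linear regression. A sketch of that combination is below; the data are synthetic, and the NDTI formula (Red − Green)/(Red + Green) applied to hypothetical reflectances is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 100
depth_obs = rng.uniform(0, 3, n)                 # observed depth (m), synthetic
depth_model = depth_obs + rng.normal(0, 0.3, n)  # first-stage SDB estimate, synthetic
red = rng.uniform(0.02, 0.10, n)                 # hypothetical reflectances
green = rng.uniform(0.02, 0.10, n)
ndti = (red - green) / (red + green)             # normalized difference turbidity index

# Second stage: regress observed depth on [SDB estimate, NDTI].
X = np.column_stack([depth_model, ndti])
final = LinearRegression().fit(X, depth_obs)
depth_ndti = final.predict(X)                    # turbidity-adjusted depth estimate
```

Tables 4 through 6 show that this adjustment helps the machine learning models at the turbid site 1 but degrades them at sites 2 and 3.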
Table 5.
Performance of SDB models at validation site 2 and the effect of turbidity on the models. The first row for each model gives performance without the turbidity index; the second row gives performance when the model result was combined with the normalized difference turbidity index (NDTI) in a linear regression model for final depth estimation.
Model | NSE | RMSE (m) | MAE (m) | R² |
---|---|---|---|---|
Stumpf | 0.750 | 0.177 | 0.131 | 0.803 |
Stumpf_NDTI | 0.747 | 0.178 | 0.133 | 0.803 |
Bramante | 0.697 | 0.195 | 0.154 | 0.834 |
Bramante_NDTI | 0.747 | 0.178 | 0.133 | 0.803 |
RF | 0.516 | 0.247 | 0.190 | 0.741 |
RF_NDTI | 0.119 | 0.333 | 0.216 | 0.731 |
SVR | 0.619 | 0.219 | 0.170 | 0.785 |
SVR_NDTI | 0.271 | 0.303 | 0.192 | 0.769 |
KNN | 0.683 | 0.200 | 0.146 | 0.761 |
KNN_NDTI | −0.403 | 0.420 | 0.238 | 0.759 |
Table 6.
Performance of SDB models at validation site 3 and the effect of turbidity on the models. The first row for each model gives performance without the turbidity index; the second row gives performance when the model result was combined with the normalized difference turbidity index (NDTI) in a linear regression model for final depth estimation.
Model | NSE | RMSE (m) | MAE (m) | R² |
---|---|---|---|---|
Stumpf | 0.732 | 0.408 | 0.277 | 0.765 |
Stumpf_NDTI | 0.728 | 0.412 | 0.281 | 0.763 |
Bramante | 0.761 | 0.386 | 0.246 | 0.787 |
Bramante_NDTI | 0.728 | 0.412 | 0.281 | 0.763 |
RF | 0.653 | 0.465 | 0.321 | 0.714 |
RF_NDTI | 0.313 | 0.654 | 0.469 | 0.708 |
SVR | 0.721 | 0.417 | 0.242 | 0.725 |
SVR_NDTI | 0.072 | 0.760 | 0.457 | 0.721 |
KNN | 0.769 | 0.379 | 0.254 | 0.772 |
KNN_NDTI | 0.237 | 0.690 | 0.427 | 0.769 |
Table 7.
Descriptive statistics of the training, testing, and transfer data, showing how the testing and transfer sites compare to the training sites (values in m).
Statistic | Training Data | Manual Data | HOBO Test 1 | HOBO Test 2 | P36 | G-3272 | NP201 |
---|---|---|---|---|---|---|---|
Mean | 0.73 | 0.12 | 0.21 | 0.50 | 0.34 | 0.18 | 0.30 |
Median | 0.55 | 0.00 | 0.01 | 0.22 | 0.28 | 0.13 | 0.23 |
Std. Dev | 0.81 | 0.17 | 0.35 | 0.81 | 0.21 | 0.18 | 0.28 |
Min | 0.00 | 0.00 | 0.00 | 0.01 | 0.04 | 0.00 | 0.00 |
Max | 5.26 | 0.61 | 1.42 | 3.23 | 0.76 | 0.56 | 0.88 |
25th percentile | 0.25 | 0.00 | 0.01 | 0.02 | 0.18 | 0.00 | 0.08 |
75th percentile | 0.93 | 0.24 | 0.36 | 0.49 | 0.48 | 0.38 | 0.45 |
Table 8.
Performance of SDB models at testing locations where measurements were taken manually and at two HOBO logger locations in West Tennessee.
Site | Model | RMSE (m) | MAE (m) | R² |
---|---|---|---|---|
Manually-measured sites | Stumpf | 0.79 | 0.63 | 0.55 |
Bramante | 0.83 | 0.62 | 0.58 |
RF | 0.92 | 0.64 | 0.43 |
SVR | 0.63 | 0.51 | 0.55 |
KNN | 0.74 | 0.59 | 0.55 |
HOBO Test Site 1 | Stumpf | 0.27 | 0.24 | 0.65 |
Bramante | 0.22 | 0.20 | 0.67 |
RF | 0.28 | 0.26 | 0.49 |
SVR | 0.25 | 0.24 | 0.65 |
KNN | 0.26 | 0.25 | 0.70 |
HOBO Test Site 2 | Stumpf | 0.42 | 0.29 | 0.88 |
Bramante | 0.35 | 0.27 | 0.87 |
RF | 0.47 | 0.32 | 0.79 |
SVR | 0.51 | 0.31 | 0.83 |
KNN | 0.40 | 0.28 | 0.91 |
Table 9.
Performance of SDB models at transfer locations: P36, G-3272, and NP201 in the Everglades Depth Estimation Network (EDEN), Florida.
Site | Model | RMSE (m) | MAE (m) | R² |
---|---|---|---|---|
P36 | Stumpf | 0.19 | 0.18 | 0.16 |
Bramante | 0.15 | 0.58 | 0.11 |
RF | 0.29 | 0.12 | 0.23 |
SVR | 0.16 | 0.43 | 0.12 |
KNN | 0.19 | 0.46 | 0.15 |
G-3272 | Stumpf | 0.50 | 0.41 | 0.46 |
Bramante | 0.46 | 0.85 | 0.42 |
RF | 0.48 | 0.30 | 0.41 |
SVR | 0.35 | 0.62 | 0.31 |
KNN | 0.43 | 0.70 | 0.40 |
NP201 | Stumpf | 0.91 | 0.73 | 0.86 |
Bramante | 0.94 | 0.67 | 0.83 |
RF | 0.88 | 0.45 | 0.73 |
SVR | 0.61 | 0.61 | 0.54 |
KNN | 0.82 | 0.66 | 0.69 |
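The metrics reported in Tables 4 through 9 can be computed as below. This is a minimal NumPy sketch with synthetic observed/predicted series for illustration; the R² shown here is the squared Pearson correlation, an assumption about how the fourth column was computed.

```python
import numpy as np

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, pred):
    """Root mean square error, in the units of the data (m)."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def mae(obs, pred):
    """Mean absolute error, in the units of the data (m)."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))

def r2(obs, pred):
    """Squared Pearson correlation between observed and predicted."""
    return float(np.corrcoef(obs, pred)[0, 1] ** 2)

# Synthetic series for illustration only.
obs = np.array([0.1, 0.4, 0.9, 1.5, 2.2])
pred = np.array([0.2, 0.5, 0.8, 1.4, 2.0])
```

NSE penalizes bias as well as scatter, which is why a model can show a moderate R² alongside a low (or negative) NSE, as at validation site 1.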