8 TSS harmonization process
Following the completion of the {dataRetrieval} download process described previously, the pipeline contains raw WQP data for each parameter of interest. Before we harmonize each parameter we run through a series of universal “pre-harmonization” steps, which ensure that the datasets are appropriately formatted when entering their harmonization routines.
The text below first walks through the pre-harmonization steps for the total suspended solids (TSS) dataset and then delves into the specifics of the harmonization process.
8.1 Pre-harmonization of the raw TSS WQP dataset
At the start of the pre-harmonization process the raw TSS WQP dataset contains 6 million rows.
8.1.1 Missing results
Next, records that have missing data are dropped from the dataset. Several criteria are used when checking for missing data. If any of the criteria below are met, the row is flagged as missing:

- Both the result column and the detection limit column contain `NA` data
- The result, result unit, activity comment, laboratory comment, and result comment columns are all `NA`
- The result comment column contains any user-provided text indicating a missing value, currently including: `analysis lost`, `not analyzed`, `not recorded`, `not collected`, or `no measurement taken`
205.7 thousand rows are dropped, resulting in a final count of 5.8 million.
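As an illustration, a minimal dplyr/stringr sketch of this kind of missing-data check might look like the following. The object name `tss_raw` is hypothetical, and the pipeline's actual implementation may differ in structure and in the exact columns checked:

```r
library(dplyr)
library(stringr)

# Comment phrases that indicate a missing value
missing_phrases <- c("analysis lost", "not analyzed", "not recorded",
                     "not collected", "no measurement taken")
missing_regex <- regex(paste(missing_phrases, collapse = "|"),
                       ignore_case = TRUE)

tss_raw <- tss_raw %>%
  mutate(
    missing_result =
      # Result and detection limit are both NA
      (is.na(ResultMeasureValue) &
         is.na(DetectionQuantitationLimitMeasure.MeasureValue)) |
      # Result, unit, and all comment fields are NA
      (is.na(ResultMeasureValue) & is.na(ResultMeasure.MeasureUnitCode) &
         is.na(ActivityCommentText) & is.na(ResultLaboratoryCommentText) &
         is.na(ResultCommentText)) |
      # Result comment says the value is missing (NA comments treated as FALSE)
      str_detect(ResultCommentText, missing_regex) %in% TRUE
  ) %>%
  filter(!missing_result)
```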
8.1.2 Filter status
The final step in pre-harmonization is to filter the `ResultStatusIdentifier` column to include only the following statuses:

- `"Accepted"`
- `"Final"`
- `"Historical"`
- `"Validated"`
- `"Preliminary"`
- `NA`
These statuses generally indicate that a reliable result was reached; however, we also include `NA` in an effort to be conservative. More specifically, when making decisions for this and other columns, we occasionally retain `NA` values if removing those records would otherwise drop 10% or more of the available data.
This step removes 14.6 thousand rows of data, leaving 5.8 million rows remaining.
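A minimal sketch of this status filter, including the kind of check behind the 10% rule of thumb described above (the object name `tss_data` is illustrative, not an actual pipeline target):

```r
library(dplyr)

accepted_statuses <- c("Accepted", "Final", "Historical",
                       "Validated", "Preliminary")

# Share of records whose status is NA; when this reaches roughly 10% or more,
# we retain the NA records rather than dropping them
mean(is.na(tss_data$ResultStatusIdentifier))

# Keep accepted statuses and, per the rule above, NA values
tss_data <- tss_data %>%
  filter(ResultStatusIdentifier %in% accepted_statuses |
           is.na(ResultStatusIdentifier))
```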
8.2 Harmonization-ready TSS dataset
Once ready for harmonization, the TSS WQP dataset contains the following user-defined `characteristicNames`: Total suspended solids, Total Particulate Matter, and Suspended Sediment Concentration (SSC). These names are chosen to select only those measurements that pertain to TSS.
8.2.1 Filter media and fractions
We next ensure that the media type for all TSS data is `"Surface Water"`, `"Water"`, `"Estuary"`, or `NA`. 4.2 thousand rows are removed. The final row count after this is 5.8 million.
8.2.2 Document and remove fails
In this step we filter out records based on indications from the data provider that they failed data quality assurance or quality control for some reason (these instances are referred to here as “failures”).
After reviewing the contents of the `ActivityCommentText`, `ResultLaboratoryCommentText`, `ResultCommentText`, `ResultDetectionConditionText`, and `ResultMeasureValue_original` columns, we developed a list of terms that captured the majority of instances where records had failures or unacceptable measurements. (Note: `ResultMeasureValue_original` is a duplicate, character version of the `ResultMeasureValue` column that we use as a reference for the column’s contents before it was converted to a numeric type.) We found the phrasing to be consistent across columns, so we searched for the same (case-agnostic) terms in all of these locations. The terms are: “beyond accept”, “cancelled”, “contaminat”, “error”, “fail”, “improper”, “interference”, “invalid”, “no result”, “no test”, “not accept”, “outside of accept”, “problem”, “questionable”, “suspect”, “unable”, “violation”, “reject”, “no data”, “time exceed”, “value extrapolated”, “exceeds”, “biased”, “parameter not required”, “not visited”.
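As a sketch, the phrase matching could be done with a single case-insensitive regular expression applied across the relevant columns (illustrative object names; the pipeline's own code may structure this differently):

```r
library(dplyr)
library(stringr)

fail_phrases <- c("beyond accept", "cancelled", "contaminat", "error", "fail",
                  "improper", "interference", "invalid", "no result", "no test",
                  "not accept", "outside of accept", "problem", "questionable",
                  "suspect", "unable", "violation", "reject", "no data",
                  "time exceed", "value extrapolated", "exceeds", "biased",
                  "parameter not required", "not visited")
fail_regex <- regex(paste(fail_phrases, collapse = "|"), ignore_case = TRUE)

tss_data <- tss_data %>%
  mutate(
    # TRUE if any of the comment/result text columns matches a failure phrase
    fail_detected = if_any(
      c(ActivityCommentText, ResultLaboratoryCommentText, ResultCommentText,
        ResultDetectionConditionText, ResultMeasureValue_original),
      ~ str_detect(.x, fail_regex) %in% TRUE
    )
  ) %>%
  filter(!fail_detected)
```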
Below are pie charts that break down the number of failure detections by column. Note that the plotting below is automated, so if one or more of the columns listed above are not plotted, this indicates that those column(s) did not return any matches for the failure phrases. Also note that a single record can contain multiple failure phrases; therefore, failure phrases are not mutually exclusive.
8.2.3 Clean MDLs
In this step method detection limits (MDLs) are used to clean up the reported values. When a numeric value is missing for the data record (i.e., `NA` or text that became `NA` during an `as.numeric` call) we check for non-detect language in the `ResultLaboratoryCommentText`, `ResultCommentText`, `ResultDetectionConditionText`, and `ResultMeasureValue` columns. This language can be `"non-detect"`, `"not detect"`, `"non detect"`, `"undetect"`, or `"below"`.
If non-detect language exists then we use the `DetectionQuantitationLimitMeasure.MeasureValue` column for the MDL; otherwise, if there is a `<` and a number in the `ResultMeasureValue` column, we use that number instead.

We then use a random number between 0 and 0.5 * MDL as the record’s value moving forward.
We produce a new column, `mdl_flag`, from the MDL cleaning process. Records where no MDL-based adjustment was made and which are at or above the MDL are assigned a 0. Records with corrected values based on the MDL method are assigned a 1. Finally, records where no MDL-based adjustment was made and which contain a numeric value below the provided MDL are assigned a 2.
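A condensed sketch of this MDL logic, using the same illustrative object name and `readr::parse_number()` as a stand-in for extracting a number from the `<` text (the pipeline's actual parsing and column handling may differ):

```r
library(dplyr)
library(stringr)
library(readr)

nondetect_regex <- regex("non-detect|not detect|non detect|undetect|below",
                         ignore_case = TRUE)

tss_data <- tss_data %>%
  mutate(
    # Non-detect language in any of the relevant text columns
    nondetect = if_any(
      c(ResultLaboratoryCommentText, ResultCommentText,
        ResultDetectionConditionText, ResultMeasureValue_original),
      ~ str_detect(.x, nondetect_regex) %in% TRUE
    ),
    # Prefer the reported detection limit; fall back to a "<" value in the result text
    mdl = case_when(
      nondetect ~ DetectionQuantitationLimitMeasure.MeasureValue,
      str_detect(ResultMeasureValue_original, "<") &
        str_detect(ResultMeasureValue_original, "[0-9]") ~
        parse_number(ResultMeasureValue_original),
      TRUE ~ NA_real_
    ),
    # Substitute a random value between 0 and 0.5 * MDL where the result is missing
    harmonized_value = if_else(
      is.na(ResultMeasureValue) & !is.na(mdl),
      runif(n(), min = 0, max = 0.5 * mdl),
      ResultMeasureValue
    ),
    mdl_flag = case_when(
      is.na(ResultMeasureValue) & !is.na(mdl) ~ 1,  # value corrected using the MDL
      ResultMeasureValue < mdl ~ 2,                 # reported value below the MDL
      TRUE ~ 0                                      # at/above the MDL (or no MDL info)
    )
  )
```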
This should not result in a change in rows but we still check: 0 rows are removed. The final row count after this is 5.7 million.
8.2.4 Clean approximate values
Cleaning approximate values involves a similar process to MDL cleaning. We flag “approximated” values in the dataset. The `ResultMeasureValue` column is checked for all three of the following conditions:

- The numeric-only version of the column is still `NA` after MDL cleaning
- The original column text contained a number
- Any of `ResultLaboratoryCommentText`, `ResultCommentText`, or `ResultDetectionConditionText` match this regular expression, ignoring case: `"result approx|RESULT IS APPROX|value approx"`
We then use the approximate value as the record’s value moving forward. Records with corrected values based on the above method are noted with a 1 in the `approx_flag` column. Note, however, that occasionally approximate language will appear in a record that is neither changed nor flagged. This occurs when the language is used in a comment-related column rather than the result column itself, meaning that a usable numeric value is provided (and thus doesn’t need correction).
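A sketch of the approximate-value check under the same illustrative naming, again using `readr::parse_number()` as a stand-in for extracting the numeric portion of the original text:

```r
library(dplyr)
library(stringr)
library(readr)

approx_regex <- regex("result approx|RESULT IS APPROX|value approx",
                      ignore_case = TRUE)

tss_data <- tss_data %>%
  mutate(
    approx_hit =
      is.na(harmonized_value) &                                      # still NA after MDL cleaning
      str_detect(ResultMeasureValue_original, "[0-9]") %in% TRUE &   # original text contains a number
      if_any(c(ResultLaboratoryCommentText, ResultCommentText,
               ResultDetectionConditionText),
             ~ str_detect(.x, approx_regex) %in% TRUE),              # approximate language present
    harmonized_value = if_else(approx_hit,
                               parse_number(ResultMeasureValue_original),
                               harmonized_value),
    approx_flag = if_else(approx_hit, 1, 0)
  )
```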
This should not result in a change in rows but we still check: 0 rows are removed. The final row count after this is 5.7 million.
8.2.5 Clean values with “greater than” data
The next step is similar to the MDL and approximate value cleaning processes, and follows the approximate cleaning process most closely. The goal is to clean up values that were entered as “greater than” some value. The `ResultMeasureValue` column is checked for all three of the following conditions:

- The numeric-only version of the column is still `NA` after MDL and approximate cleaning
- The original column text contained a number
- The original column text contained a `>`

We then use the “greater than” value (without the `>`) as the record’s value moving forward.
Records with corrected values based on the above method are noted with a 1 in the `greater_flag` column.
This should not result in a change in rows but we still check: 0 rows are removed. The final row count after this is 5.7 million.
8.2.6 Drop unresolved NA measurements
The goal of the preceding three steps was to prevent records with seemingly missing measurement data from being dropped if there was still a chance of recovering a usable value. At this point we’ve finished that process, and we proceed to check for remaining records with `NA` values in their `harmonized_value` column. If they exist, they are dropped. We also filter out any negative values in the dataset at this point.
22.1 thousand rows are removed. The final row count after this is 5.7 million.
8.2.7 Harmonize record units
The next step in TSS harmonization is converting the units of WQP records. We create the following conversion table, which is used to translate units provided in WQP into milligrams per liter (mg/L):
ResultMeasure.MeasureUnitCode | conversion |
---|---|
mg/l | 1e+00 |
mg/L | 1e+00 |
ppm | 1e+00 |
ug/l | 1e-03 |
ug/L | 1e-03 |
mg/m3 | 1e-03 |
ppb | 1e-03 |
mg/cm3 | 1e+03 |
ug/ml | 1e+00 |
mg/ml | 1e+03 |
ppt | 1e+03 |
ug/mL | 1e+00 |
mg/mL | 1e+03 |
g/L | 1e+03 |
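Applying a table like this amounts to a join followed by a multiplication; a minimal sketch, assuming the table above lives in a tibble called `unit_conversion_table` (an illustrative name):

```r
library(dplyr)

tss_data <- tss_data %>%
  # inner_join drops records whose unit code has no entry in the table,
  # which is where the removed rows reported below come from
  inner_join(unit_conversion_table, by = "ResultMeasure.MeasureUnitCode") %>%
  mutate(harmonized_value = harmonized_value * conversion,
         harmonized_units = "mg/L")
```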
Below is a pie chart that breaks down the different unit codes that were dropped in the unit harmonization process, and how many records were lost with each code.
829.8 thousand rows are removed. The final row count after this is 4.9 million.
8.2.8 Clean depth data
The next harmonization step cleans the four depth-related columns obtained from the WQP. The details behind this step are covered in the Depth flags section of the Tiering, flagging, and quality control philosophy chapter.
This should not result in a change in rows but we still check: 0 rows are removed. The final row count after this is 4.9 million.
8.2.9 Filter and tier methods
We next review the analytical methods used in measuring TSS, primarily by classifying the text provided with each record in `ResultAnalyticalMethod.MethodName`. Once these methods are classified, we arrange them into hierarchical tiers as described in the Method tiering section of the Tiering, flagging, and quality control philosophy chapter.

However, prior to classification we check the `ResultAnalyticalMethod.MethodName` column for names that indicate non-TSS measurements. Phrases used to flag and remove unrelated methods from TSS data are: “10200”, “150.1”, “2340”, “2550”, “4500”, “9222”, “9223”, “Alkalinity”, “Chlorophyll”, “DO NOT USE”, “Mercury”, “Nitrate”, “Nitrogen”, “Oxygen”, “Phosphorus”, “Temperature”.

This process drops 5 thousand rows, leaving 4.9 million remaining.
The next step towards creating tiers is to identify the methods in `ResultAnalyticalMethod.MethodName` that:

- Indicate non-surface sampling: This includes detection of “Activity Relative Depth: Bottom”, “Activity Relative Depth: Near Bottom”, “Activity Relative Depth: Midwater”, “Relative Depth = Below Thermocline”, or “hypolimnion” in `ActivityCommentText`, or “pump”, “peristaltic”, “niskin”, “van dorn”, or “Kemmerer” in `SampleCollectionEquipmentName`.
- Have `NA` values in `ResultAnalyticalMethod.MethodName` or `SampleCollectionEquipmentName`
- Indicate low-flow conditions: This includes detection of “low flow”, “no flow”, “no visible flow”, “low stream flow”, “Flow: Low”, “Not flowing”, “low visible flow”, “slow flow”, “BELOW NORMAL”, “Gentle flow”, “NO DISCERNIBLE FLOW”, or “Low base flow” in `ActivityCommentText` with no detection of “too deep” or “high flow”.
- Have `USGSPCode` or `ResultAnalyticalMethod.MethodName` values typical of TSS or SSC (Suspended Sediment Concentration): Includes “2540”, “160”, “ASTM”, “14B”, “8006”, or “108” in `ResultAnalyticalMethod.MethodName`, or “00530”, “80154”, “70299”, “70293”, “69613”, “69586”, “69587”, “69588”, “69584”, “69581”, “69582”, “69585”, “69583”, “69580”, “69579”, or “70292” in `USGSPCode`.
- Have 15 or fewer records associated with them
These classifications are not the final tiers, but they inform the tiering in the final step of this process.
Finally, we group the data into three tiers as described in Tiering, flagging, and quality control philosophy. These tiers are:
Tier | Name | Description | TSS Details |
---|---|---|---|
0 | Restrictive | Data that are verifiably self-similar across organizations and time-periods and can be considered highly reliable and interoperable | Records that don’t fall into tiers 1 or 2 |
1 | Narrowed | Data that we have good reason to believe are self-similar, but for which we can’t verify full interoperability across data providers | Records that do not meet the definition of tier 2, but have a non-NA ResultCommentText value |
2 | Inclusive | Data that are assumed to be reliable and are harmonized to our best ability given the information available from the data provider. This tier includes NA or non-resolvable descriptions for the analytical method, which often make up the majority of methods descriptions for any given parameter | Records that meet one of the following criteria: Have NA values in ResultAnalyticalMethod.MethodName or SampleCollectionEquipmentName; have ActivityCommentText values indicating low-flow conditions; are TSS samples that were taken at depth, with a pump, etc.; are non-USGS, SSC samples that were not taken at depth or with a pump, etc.; or have ResultAnalyticalMethod.MethodName values with 15 or fewer total records and which are not clearly standard SSC or TSS methods. |
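In code, an assignment like this is typically a `case_when()` over flags produced by the classification step above; a simplified sketch with hypothetical flag names (not the pipeline's actual columns):

```r
library(dplyr)

tss_data <- tss_data %>%
  mutate(
    tier = case_when(
      # Tier 2 ("Inclusive"): missing/ambiguous method info, low-flow comments,
      # TSS sampled at depth/with a pump, non-USGS SSC sampled at the surface,
      # or rare (<= 15 records) methods that are not clearly standard TSS/SSC
      method_or_equipment_na | low_flow_comment | tss_at_depth |
        ssc_nonusgs_surface | rare_nonstandard_method ~ 2,
      # Tier 1 ("Narrowed"): not tier 2, but has a non-NA result comment
      !is.na(ResultCommentText) ~ 1,
      # Tier 0 ("Restrictive"): everything else
      TRUE ~ 0
    )
  )
```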
At this point we export a file (`3_harmonize/out/tss_tiering_record.csv`) that contains a record of how specific method text was tiered and how many rows corresponded to each method.
8.2.10 Flag based on field methods
Next, we use the `field_flag` column to flag records that have a qualifier related to sampling depth:

- `field_flag` = `1` when:
  - TSS was sampled at depth
  - non-USGS SSC was not sampled at depth
- In all other cases `field_flag` is `0`
Sampling at depth is determined by whether the record was highlighted as indicating non-surface sampling during the previous harmonization step.
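A sketch of this flag, reusing hypothetical classification flags like those above (the `parameter` values and flag names are assumptions, not the pipeline's actual coding):

```r
library(dplyr)

tss_data <- tss_data %>%
  mutate(
    field_flag = case_when(
      parameter == "TSS" & non_surface_sample ~ 1,                   # TSS sampled at depth
      parameter == "SSC" & !usgs_record & !non_surface_sample ~ 1,   # non-USGS SSC not sampled at depth
      TRUE ~ 0
    )
  )
```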
No records should be removed by this process and so there are 0 rows dropped leaving 4.9 million remaining in the harmonized TSS dataset.
8.2.11 Miscellaneous flag
Next we add a placeholder for the miscellaneous flag column, `misc_flag`. This will be filled with `NA`. Some parameters will have additional flagging requirements that TSS does not, so we include this placeholder to maintain the same columns across all parameter data products.
No records should be removed by this process and so there are 0 rows dropped leaving 4.9 million remaining in the harmonized TSS dataset.
8.2.12 Remove unrealistic values
Before finalizing the dataset we remove TSS values that are beyond a realistic threshold. We use 10,000 mg/L as our cutoff for removal.
We also remove any depths > 592 m, the deepest point in a lake in the U.S.
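A minimal sketch of these cutoffs, assuming values are already in mg/L and depths in meters (the depth column used here is illustrative; the pipeline checks its harmonized depth fields):

```r
library(dplyr)

tss_data <- tss_data %>%
  filter(harmonized_value <= 10000,                 # realistic TSS ceiling (mg/L)
         is.na(harmonized_discrete_depth_value) |
           harmonized_discrete_depth_value <= 592)  # deepest point in a US lake (m)
```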
25.2 thousand rows are removed. The final row count after this is 4.8 million.
8.2.13 Aggregate simultaneous records
The final step of TSS harmonization is to aggregate simultaneous observations. Prior to aggregation, the dataset has the following distribution of values when organized by `CharacteristicName` and `ProviderName`:
The aggregation process: Any group of samples determined to be simultaneous is simplified into a single record containing the mean and coefficient of variation (CV) of the group. These can be either true duplicate entries in the WQP or records with non-identical values recorded at the same time and place and by the same organization (field and/or lab replicates/duplicates). The CV can be used to filter the dataset based on the amount of variability that is tolerable for specific use cases. Note, however, that many entries will have a CV that is `NA` (because there are no simultaneous records) or 0 (because the records are duplicates and all entries have the same `harmonized_value`).
We identify simultaneous records to aggregate by creating identical subgroups (`subgroup_id`) from the following columns: `parameter`, `OrganizationIdentifier`, `MonitoringLocationIdentifier`, `MonitoringLocationTypeName`, `ResolvedMonitoringLocationTypeName`, `ActivityStartDate`, `ActivityStartDateTime`, `ActivityStartTime.TimeZoneCode`, `harmonized_tz`, `harmonized_utc`, `harmonized_top_depth_value`, `harmonized_top_depth_unit`, `harmonized_bottom_depth_value`, `harmonized_bottom_depth_unit`, `harmonized_discrete_depth_value`, `harmonized_discrete_depth_unit`, `depth_flag`, `mdl_flag`, `approx_flag`, `greater_flag`, `tier`, `field_flag`, `misc_flag`, and `harmonized_units`. This selection limits the columns included in the final dataset, but we also provide a copy of the AquaMatch dataset prior to its aggregation (pipeline target `p3_tss_preagg_grouped`), and include the `subgroup_id` column, so that users can work with the disaggregated data as well and make joins between dataset versions.
The final, aggregated values are presented in the `harmonized_value` and `harmonized_value_cv` columns. The number of rows used per group is recorded in the `harmonized_row_count` column.
0.3 million rows are dropped, leaving 4.5 million remaining in the final harmonized and aggregated TSS dataset.
8.2.14 Harmonized TSS
At this point the harmonization of the TSS data from the WQP is complete and we export the final dataset for use later in the workflow.
Below are several sets of figures illustrating different qualities of the dataset:
- Histograms showing the distribution of harmonized measurements (top) and CVs (bottom) broken down by `tier` after aggregating simultaneous records.
- Maps showing the geographic distribution of records by tier within the US:
- Lastly, bar charts showing the distribution of values by tier across years, months, and days of the week: