The CRITFC Tribal Data Network

The purpose of the CRITFC Tribal Data Network (TDN) is to assist CRITFC and its member tribes in the timely and accurate capture, storage, processing, and dissemination of data for management of anadromous fish and their habitats.  There are four main objectives for the project.

1.    Assist CRITFC and member tribes in developing cost-effective computer architectures and data management strategies for anadromous fish and habitat data.  Develop pilot systems and tools demonstrating cost-effective ways to collect, store, summarize, and disseminate fish and habitat data. Convert successful pilot projects into production computer systems as resources allow.

2.    Provide data management services to the tribes. These services take a variety of forms, including:
a. Partial support for tribal data coordinators (in this expanded proposal),
b. Development of shared data capture, management, and reporting tools for CRITFC and member tribes,
c. Expert advice and coordination of efforts, especially through the annual Tribal Data Workshops, and
d. Limited support for tribal infrastructure through one-time purchases of hardware and software.

3.    Assist member tribes to build internal capacity for improved data management and to achieve the tribal gravel-to-gravel management vision of the Commission.  Combine local data from tribal sources with regional and international data on mainstem, estuary, and ocean impacts on salmon to inform salmon management decisions; and

4.    Enable tribal participation in regional data management coordination processes, provide tribal input on regional coordination of data management and data-sharing best practices, and facilitate inter-tribal coordination at the level of monitoring data.

At the end of the first two years of the project, a suite of pilot projects is underway, and some pilots have already demonstrated success.  The data management approach and technology used in the current pilots have proven successful and can be applied elsewhere.  The expectation is that the technologies used in the pilots will be applied to priorities identified in the Coordinated Assessment process.  TDN system architectures build on lessons learned during the past 15 years of developing data management systems for the mainstem Columbia River.

Lessons Learned:

1.    The most cost-effective data management systems are those that enter field data once and only once.  It is always more expensive to re-enter the same data into multiple formats.
2.    For raw data, QA and QC should be pushed as close to the field staff as possible.  The person who entered the data knows it best and is best positioned to perform QA/QC on it.  QA/QC of raw data is far more accurate and cost-effective when performed by the field staff. QA/QC of derived estimates is best done by people who understand how to derive those estimates, which often occurs where data is summarized and aggregated.
3.    A paper trail and an audit trail should be created whenever possible.  Data in the Columbia Basin is often subject to scientific, judicial, and policy review; data management systems must be designed to provide defensible accounting from top to bottom.  Estimates and other summarized quantities must be traceable back to field data collection, and responsibility for data collection must be traceable to identifiable field staff.
4.    To get buy-in from field staff, provide tools that actually make their job easier.  To get management buy-in, provide aggregated data that is more timely and accurate.
5.    Minimize fish handling and minimize stress on fish.  When monitoring ESA-listed fish in the Columbia Basin, so many concurrent studies are underway that over-handling of fish is a recurrent problem.
6.    Focus effort on getting field data into SQL servers under tribal control that drive dynamic web sites.  Once the monitoring data is stored electronically under policy-maker control, it is just a matter of deciding what to output, where, and to whom.  Once data is in an SQL server, it is a simple matter to aggregate and summarize it into any format.  SQL servers driving dynamic web sites and web services give the most flexible method of outputting monitoring data into whatever formats management needs now and in the future.
7.    Build on existing field data collection methods and practices whenever possible to minimize training expense and disruption of current field data collection efforts.
8.    CRITFC-generated data is controlled by the Columbia River Inter-Tribal Fish Commission; individual tribal monitoring data is controlled by the individual tribe that owns it.  Only each individual tribe can provide access to its own data.
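Lesson 6 above — get raw records into an SQL server once, then summarize on demand — can be sketched in a few lines. The example below uses an in-memory SQLite database; the table, column names, and counts are hypothetical and for illustration only, not the actual TDN schema.

```python
import sqlite3

# Hypothetical schema: one row per species count from a field survey.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE harvest (
        survey_date TEXT,
        stream      TEXT,
        species     TEXT,
        count       INTEGER
    )
""")
conn.executemany(
    "INSERT INTO harvest VALUES (?, ?, ?, ?)",
    [
        ("2013-05-01", "Klickitat", "chinook", 12),
        ("2013-05-01", "Klickitat", "coho",     3),
        ("2013-05-02", "Klickitat", "chinook",  8),
    ],
)

# Once the raw records are in SQL, any summary is one query away,
# ready to feed a dynamic web page or a web service.
rows = conn.execute("""
    SELECT species, SUM(count) AS total
    FROM harvest
    GROUP BY species
    ORDER BY species
""").fetchall()
print(rows)  # [('chinook', 20), ('coho', 3)]
```

The same raw table can feed any other report format simply by changing the query, which is the flexibility the lesson describes.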

Current Pilot and Production Projects using digital pen technology and web services architecture:

  1. Bonneville Adult Fish Facility Data Management
  2. Snake River Harvest Estimate
  3. Zone 6 Harvest Estimate (Nez Perce Tribe component)
  4. Below Bonneville Harvest Estimate
  5. Willamette Falls Lamprey Data Management
  6. Klickitat Basin Surveys
    • Spawner Patch Survey
    • Habitat Unit Survey
    • Large woody debris survey
    • Stream Bedrock survey
    • Large woody debris jam survey
    • Discharge data
    • Electrofishing survey
    • Sampling events tracking table
    • Stream Survey Tracking table showing reaches for each stream

The projects listed above all use paper forms and digital pens to collect field data.  There are 17 pens deployed to date. The data is transmitted to a web site hosted in a private tribal cloud, where an electronic copy of each form can be viewed by field staff from any web browser.  A slider bar along the bottom of the form lets the field staff compare their own handwriting with the computer's interpretation of it.  Validation code in each cell allows for range checking, lookup tables, and other constraints that reduce transcription and other errors.  When the field staff have validated the data, pressing an “approve” button sends the data via web services to an SQL server hosted by the appropriate entity, be it CRITFC or a member tribe.  Summarization, aggregation, and other data processing take place on the SQL servers, and the data is made available almost instantly on web sites under tribal management control, in whatever form is required.  Once the data is hosted on an SQL server, it is a simple programming task to output it into whatever form is needed, on whatever schedule is needed (for example, DETs).  A digital copy of each form in PDF format is also generated by this process, so any estimate can be traced back to the original paper forms, or to an electronic scan of the original paper forms.
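The per-cell validation described above — range checks and lookup-table constraints applied before field staff press "approve" — can be sketched as follows. The field names, plausible ranges, and species codes here are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical lookup table of valid species codes.
VALID_SPECIES = {"CHK", "COH", "STL", "LMP"}

def validate_cell(field, value):
    """Return None if the cell passes, or an error message for the field staff."""
    if field == "fork_length_mm":
        if not 30 <= value <= 1500:        # range check on fish length
            return f"fork_length_mm {value} outside plausible range 30-1500"
    elif field == "species_code":
        if value not in VALID_SPECIES:     # lookup-table constraint
            return f"unknown species code {value!r}"
    return None

def validate_form(form):
    """Collect all cell-level errors for one digital-pen form."""
    return [msg for field, value in form.items()
            if (msg := validate_cell(field, value)) is not None]

good = {"species_code": "CHK", "fork_length_mm": 640}
print(validate_form(good))   # [] -- no errors, form can be approved
bad = {"species_code": "XYZ", "fork_length_mm": 9999}
print(validate_form(bad))    # one message per failing cell
```

Catching errors at this stage, while the person who wrote the form is still looking at it, is exactly what Lesson 2 recommends: QA/QC performed as close to the field staff as possible.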

Other current pilot and production projects:

  1. Web based tag loss application for estimating tag loss.
  2. CRITFC data center and private tribal cloud for the CRITFC Tribal Data Network.
  3. Kelt data entry application.
  4. Accords database and interactive map
  5. Limiting factors interactive map
  6. PCSRF projects database and interactive map
  7. Snorkel Survey database and web interface
  8. Lamprey population estimate tool
  9. Hanford Reach entrapment model

The CRITFC Tribal Data Network has many other applications, databases, and tools under development, and can quickly adjust to any new management needs identified through adaptive management.
