In-Person Meeting

September 28, 2018 Slides

Participants:

Ben Geske (DSP), Jim Peterson (OSU), Erin McCreless (OSU), Dave Mooney (BOR), Steve Zeug (CFS), Scott Hamilton (CSD), Griffin Hill (SLDMWA), Corey Phillis (MWD), Mike Urkov (FlowWest), Brett Harvey (DWR), Josh Israel (BOR), Shawn Acuna (MWD), Rod Wittler (BOR), Emanuel Rodriguez (FlowWest), Ken Kundargi (CDFW), Evan Carson (USFWS), Matt Reeve (DWR), Michael Beaks (BOR), Matt Nobriga (USFWS), Anka Mueller-Solger (USGS), Darcy Austin (SWC), JD Wikert (USFWS)

Phone: Shaara Ainsley (FISHBIO), Rene Henery (TU), Russ Freeman (Westlands), Mike Hendrick (BOR), Shelley ?, You Chen Chao (DWR), Duane Linander (CDFW), Chris Kwan (DSC), Sheila Greene (Westlands), Anna Allison (CDFW), Bruce DeGennaro (Essex)

Action items:

  • Send Mike Urkov (mike.urkov@gmail.com) ideas (sketches, diagrams, etc.) for how you'd like to see results portrayed
  • Review and provide feedback on Chinook and Delta smelt modeling approaches
  • Join or recommend experts for the ag revenue and flood risk groups
  • Let Erin and Ben know if you want to join the communications group

All project documents are currently held on a temporary web portal. Soon we will have a permanent one and will send out the information when it's ready.

https://sites.google.com/site/orcoopunitarmsdmprojects/home/bd_sdm

Dave Mooney – Review and updates on SDM process

  • There is a lot of interest in this SDM project and we're looking forward to increased participation. We need to figure out how to integrate new participants into the current group.
  • Several NGOs have raised concerns about the decisions being made and who is involved. We're working on addressing these concerns in a way that is respectful of both the concerned organizations and the current SDM group.
  • We're coming to the end of the first-year pilot effort. The next step will be to decide whether to move on to a second phase, which would: 1) identify priorities for investments to reduce uncertainty; 2) develop updated conceptual models, consider a subset of actions, identify areas of confidence, and identify areas that need improvement; 3) identify experts who can commit to involvement; 4) coordinate with the CAMT SDM effort. Dave is preparing to discuss the potential for phase 2 with DPIIC.
  • We may also add a phase 3, which would focus on refining the SDM to support the implementation of projects.
  • Timeline if we advance to phase 2: if we stick with current objectives and models, probably 12-16 months; if we add new objectives, we would build new conceptual models and it would be a longer process
  • Adding new members to the group would change the process and dynamics of the project
  • Comment from Rene: in the CVPIA, it was useful to have widespread buy-in and energy up front on the conceptual models and understanding how they were translated into quantitative models; this created a collective sense of the strengths and weaknesses of the models. More participation is good.

Reports from subgroups

Chinook:

  • The group has parameterized movement and survival. North Delta data came from Pope, Perry et al. 2018; South Delta parameters came from a meta-analysis of 2008-2015 data, which includes operations and entrainment/salvage at the water facilities.
  • A temporary web portal holds the data, R code, parameters, and maps. Mike Urkov/FlowWest will make this a permanent portal within the next two weeks, and we'll send a link when it's ready.
  • Movement and survival functions will be coming soon.

Delta smelt

  • The group has discussed how to model the outcomes of each action for Delta smelt using Scott Hamilton's limiting-factors model and the Rose et al. 2013 bioenergetics model (as modified by Will Smith).

Water quality

  • Metrics are residence time, temperature, salinity, dissolved organic carbon (DOC), proportion of San Joaquin River water reaching Chipps Island (a surrogate for selenium), and contaminants
  • For contaminants, the proposed approach is to weight each tributary's contaminant concentration by its flow contribution to the Delta, sum the flow-weighted contributions, and calculate the change relative to baseline (see the sketch after this list)
  • Will probably use data from the California Environmental Data Exchange Network (CEDEN)
  • Assessing contaminant effects on fish survival: there have been studies looking at the effects of contaminants on silversides, and we could use proportional relationships from these studies. These publications will be added to the Delta smelt folder in the web portal.
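
A minimal sketch of the proposed flow-weighting calculation, assuming hypothetical tributary names, concentrations, flows, and units; real inputs would come from CEDEN monitoring data and the flow scenarios being modeled.

```python
# Sketch only: flow-weighted contaminant contribution relative to baseline.
# Tributary names, concentrations, and flows are hypothetical placeholders.

def flow_weighted_load(concentrations, flows):
    """Weight each tributary's concentration by its share of total Delta
    inflow, then sum across tributaries."""
    total_flow = sum(flows.values())
    return sum(conc * flows[trib] / total_flow
               for trib, conc in concentrations.items())

# Hypothetical example inputs (concentration in ug/L, flow in cfs)
baseline_conc = {"Sacramento": 0.8, "San Joaquin": 1.5, "Mokelumne": 0.6}
baseline_flow = {"Sacramento": 18000, "San Joaquin": 3000, "Mokelumne": 1200}
scenario_conc = {"Sacramento": 0.8, "San Joaquin": 1.5, "Mokelumne": 0.6}
scenario_flow = {"Sacramento": 16000, "San Joaquin": 5000, "Mokelumne": 1200}

baseline = flow_weighted_load(baseline_conc, baseline_flow)
scenario = flow_weighted_load(scenario_conc, scenario_flow)
print(f"Change relative to baseline: {(scenario - baseline) / baseline:+.1%}")
```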

Water availability

  • Metrics are south-of-Delta exports and their variability (month to month and year to year), and north-of-Delta deliveries and their variability
  • Will break down by water year type (see the sketch after this list)
  • Expect more effects during dry and critically dry years
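
An illustrative sketch of how these variability metrics could be computed by water year type; the data frame below is synthetic and the column names are assumptions, with real inputs coming from CalSim output.

```python
# Sketch only: month-to-month and year-to-year export variability by water
# year type. The data frame is synthetic; real inputs would be CalSim output.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
records = []
for year, wyt in [(2010, "Wet"), (2011, "Wet"), (2014, "Critical"), (2015, "Critical")]:
    for month in range(1, 13):
        records.append({
            "water_year": year,
            "wyt": wyt,  # water year type
            "sod_exports_taf": rng.normal(250 if wyt == "Wet" else 120, 40),
        })
df = pd.DataFrame(records)

# Month-to-month variability (coefficient of variation) within each year type
monthly_cv = df.groupby("wyt")["sod_exports_taf"].agg(lambda x: x.std() / x.mean())

# Year-to-year variability of annual totals, by water year type
annual = df.groupby(["wyt", "water_year"])["sod_exports_taf"].sum()
annual_cv = annual.groupby("wyt").agg(lambda x: x.std() / x.mean())

print(monthly_cv, annual_cv, sep="\n")
```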

Flood risk

  • We don't have a group working on this – looking for people to join
  • Matt: CalSim can look at violations of flood risk standards because ACOE has flood rules.
  • We may be able to use CalSim output as an input to another flood risk-specific tool. Ben says there are tools available for this, which could, for example, evaluate failing levees or show which islands are most likely to flood. We could also look at how many days exports would be interrupted.
  • We should think about who would use this information, what metrics they would be interested in, and how they would want the results presented
  • Actions that may affect flood risk include notching the Yolo Bypass, releasing pulse flows following wet years, and altering levees for smaller scale habitat restoration projects

Ag revenue

  • Russ Freeman from Westlands Water District has joined this group. He is thinking about approaching ag revenues by looking at the economic impacts of changes in water supply (e.g., the cost of replacing water supplies), as well as land fallowing and crop impacts
  • The Yolo Bypass project EIS/EIR has information on ag impacts
  • Josue Medellin has looked at alternative scenarios for impacts on ag revenue in the Delta; we should contact him

Preliminary CalSim results

  • Dave O'Connor and Derek Hilts have run CalSim models for the X2 actions
  • Results show that setting X2 at 74 km from May through August would empty the reservoirs; this would violate the NMFS and USFWS BiOps, so achieving this objective is not feasible unless current regulations are overridden

Yolo pulse flow action

  • The EIR/EIS lists 6 alternative scenarios, which involve modifications to locations (different configurations of notches) and flow values. There are only 3 proposed flow scenarios – 3,000, 6,000, and 12,000 cfs – and these are what we should focus on. Evaluating notch locations and sizes adds too many options, and we'd also have to consider ag and flooding impacts in the analysis.
  • Estimates exist for how many fish would pass through using each alternative.
  • The "preferred scenario" is the east side notch with 6000 cfs. This value provides a balance between ag impacts and fish entrainment. We should start by analyzing this scenario.
  • If CalSim modelers have the time, we could evaluate the extremes – the baseline (no notch) against 3,000, 6,000, and 12,000 cfs – to capture the full range of impacts. If there isn't enough time for this, just analyze the preferred 6,000 cfs scenario.

Visualizing and presenting data and results – Mike Urkov

  • If we consider all candidate actions and expected changes in attributes, there will be 260 combinations. We'll also want to express uncertainty around the changes in attributes. This is probably more than we can handle.
  • This is an important phase of the project and people will be interested in working on it, but we need to keep in mind that working through many combinations and results will be time consuming
  • Mike U. explained that in the CVPIA project, there were a huge number of combinations, which overwhelmed participants' ability to see differences between models. They simplified by building visualizations of the submodels in ShinyApps. For example, they diagrammed a conceptual model to show how fall Chinook move through the system, then sent it to the group for feedback.
  • Members of this group need to think about how they'd like to see the results. Please give Mike feedback on this – simple sketches and pictures are great.
  • Mike will package the data into a summary page on GitHub, which will keep everything transparent by providing direct reference to all assumptions and model components

Process for evaluating and identifying or ranking priorities and using the DSMs

  • This gets more difficult with more attributes and actions
  • The SIT dealt with this by breaking down the process into phases. Phase I was to summarize the results with no ranking or scoring. Phase IIa was to place actions into two tiers. Phase IIb was to create more detailed summaries and visualization tools. Based on this, participants chose a set of "top" actions but still didn't rank them.
  • We should keep things simple. Things we need to consider are:
  • How to interpret and synthesize the 13 actions. Grade them? Summarize them with no prioritizing or deeper interpretations?
  • How do we want to express uncertainty in the model results, e.g., the degree of confidence?
  • How should we grade the group's confidence in the usefulness of the DSMs? We should do this early to avoid conflicts, and stick to what we decided when reviewing model outputs.

Sensitivity analysis

  • Part of the project's output will be the results of sensitivity analyses. These will help assess the behavior of the models, determine the effects of different actions, and clarify what information decisions will be based on.
  • There are two kinds of sensitivity analysis:
  • One-way sensitivity analysis: perturb individual parameters one at a time and create a tornado diagram showing the magnitude of each parameter's effect. The variables at the top of the diagram are the most important (see the sketch after this list).
  • Response profile sensitivity analysis: vary a parameter by some amount, e.g., +/- 50% of its current value, and evaluate the range of results. Here, there are multiple lines on a graph representing different scenarios. The top line is the best scenario, and places where the lines cross are places where scenario rankings change. This can be interpreted for different decision points.
  • Rod: we need to start thinking about sensitivity analyses now; otherwise, we'll get to the end results without understanding how we got there
  • Sensitivity analyses will identify variables that are not influential, including some that we currently think are important
  • Comment: it might be good to add to these graphs something representing people's level of comfort with model outputs
  • Comment: at this early stage, people shouldn't focus too much on model outputs and their beliefs about variables; right now it's better to focus on creating the best models
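
A minimal sketch of the one-way approach described above, using a hypothetical toy model in place of a DSM; the parameter names, values, and +/- 50% perturbation are placeholders.

```python
# Sketch only: one-way sensitivity analysis feeding a tornado diagram.
# toy_model and its parameters are hypothetical stand-ins for a DSM run.

def toy_model(params):
    """Stand-in for a decision support model; returns a single outcome value."""
    return params["survival"] * params["habitat"] * 1000 / (1 + params["entrainment"])

baseline = {"survival": 0.4, "habitat": 0.6, "entrainment": 0.2}

swings = {}
for name, value in baseline.items():
    low, high = dict(baseline), dict(baseline)
    low[name] = value * 0.5   # perturb one parameter at a time by +/- 50%
    high[name] = value * 1.5
    swings[name] = abs(toy_model(high) - toy_model(low))

# Sort by swing size; the largest swings sit at the top of the tornado diagram
for name, swing in sorted(swings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:12s} swing = {swing:.1f}")
```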

General discussion about models

  • Model outputs are learning opportunities. It's important to keep an open mind. Does a model change your perception about how things work?
  • Need to consider structural uncertainty. For example, we're using two different models for Delta smelt. What will we do if they produce different results? Scott: this is an evolving process. We need to look at the key initial assumptions in each model, and then we can decide whether we think we're addressing things correctly in the models.
  • Rod: We should use models to compare output vs. output, not output vs. reality. We still have to do a reality check, but this should be done ahead of time. In the CVPIA, this was done by reaching agreement on the underlying conceptual models before running the quantitative models.
  • Keep in mind that people may have different preferred conceptual models, or they may agree on the conceptual models but disagree on how to quantitatively model them.

Ranking models

  • If we decide to go with tiers, how will we decide? By consensus? Need to be careful to avoid "science by popular vote." If there's disagreement, we need to understand why and see if we can resolve it. Disagreements may result from people's level of comfort with initial assumptions, the model, and results.
  • In the CVPIA, there were natural breaks and the group went with these
  • Need to consider conflicting outcomes and tradeoffs. Example: if a model says an action is good for smelt but bad for salmon, what would we do? This depends on values.
  • One ranking option might be to first determine which actions achieve the biological objectives (smelt, salmon) and then look at which of those actions have the best (or least bad) impacts on flood risk, ag revenue, etc. However, this may not be a good approach, because we need to interact directly with the ag community and generally expand the range of participants in this process.
  • Alternatively, we could look at all objectives simultaneously.
  • We may want to set minimum thresholds, for example, deciding that an increase of x fish won't actually make much difference

Communicating model results

  • Need to build a solid foundation before starting to communicate the results to others.
  • We can provide examples of what we've done and the results, then ask recipients for input for the next iterations. This can help with buy-in and trust, even among people who weren't involved in building the models. It is also the best way to improve models.
  • Reclamation needs to understand which actions will benefit one, both, or neither species (smelt and salmon)
  • DPIIC needs this information to consider the actions going on in the delta
  • Have to be careful with interpretation by end users – they may just pick out the numbers they like. We may be able to minimize this by expressing results just in grades of red/green/etc., or by including numbers as a relative change plus a level of uncertainty.
  • Decision makers are at a high level where they're likely to take the numbers and run with them but not incorporate the subtleties of the model, its pros and cons, etc. We need to avoid portraying a false sense of certainty. Maybe start with a page of bullet points about what a model can and cannot be used for.
  • We should also communicate lists of the pros and cons of each action, then go into more detail about how to implement actions and evaluate if they will be successful

Model uncertainty – "known unknowns"

  • What metrics do we use to grade uncertainty? This will be subjective.
  • Rene: there are two types of uncertainty here: 1) quantitative (errors around model calculations) and 2) model "weakness," i.e., differences among empirical estimates, expert opinion, and data sources. In the SIT, they found it useful to look at the conceptual models and grade them from red to green based on trust in the model
  • In the SIT, they used several metrics:
  • Was a relationship empirically based?
  • Was it based in Central Valley streams?
  • Was it published?
  • Does the model represent the conceptual model, and if so, how well?
  • Explanatory power (for the Chinook model, this would be the calibration results)
  • Opportunity to learn
  • Based on these, participants generally agreed on levels of certainty.
  • For now, we should probably keep grades of uncertainty simple and categorical (see the sketch after this list)
  • There are two categories of assumptions: 1) things that go into the conceptual model, and 2) how the conceptual model is represented quantitatively. If you have buy-in for both of these, most people will probably believe the model.
  • Example from CVPIA: predator effects – people thought this was important but it was hard to quantify. This was an opportunity to learn. They went back to the decision makers with the goal of investing more in learning about predator effects.
  • For communications, need to be careful about giving too many caveats about model uncertainty, because this conveys to decision makers that we can't rely on model results.
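
A purely illustrative sketch of what a simple categorical grade could look like when scored against SIT-style criteria like those listed above; the criteria set and thresholds are hypothetical, not something the group has agreed on.

```python
# Sketch only: mapping yes/no answers on SIT-style criteria to a categorical
# red/yellow/green grade. Criteria and thresholds here are hypothetical.

CRITERIA = ["empirical", "central_valley", "published", "matches_conceptual_model"]

def grade_relationship(flags):
    """Count how many criteria a model relationship satisfies and bin the
    count into a simple categorical grade."""
    score = sum(flags.get(c, False) for c in CRITERIA)
    if score >= 3:
        return "green"
    if score == 2:
        return "yellow"
    return "red"

example = {"empirical": True, "central_valley": False,
           "published": True, "matches_conceptual_model": True}
print(grade_relationship(example))  # -> green
```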

Developing instruments/reports/memos for communicating results

  • What we develop here will be influential! The SIT has become the primary group for guiding CVPIA decisions. The SIT tech memo summarizes discussions and process, presents conclusions and justifications, identifies key uncertainties, grades model and inputs, and identifies high priorities for monitoring and adaptive management. Now, it is providing members with the opportunity to add dissenting views.
  • Writing the tech memo is the job of the science coordinator (Ben) – but do others want to be involved?
  • Perhaps each subgroup could write its own section and these can be compiled into a larger general document
  • Going beyond the memo – think about graphics, videos, etc. Reports often don't get read, and there is confusion if there are multiple versions. We should think about creating websites with the most current results, interactive ways to visualize results, and links to earlier versions of reports so people can see what has changed.
  • To facilitate communication, we need to decide what products we want to deliver. What are the key objectives and messages? Then we can discuss communication mechanisms.
  • It's important to communicate the process, because the process itself is what rapid prototyping is about
  • Focus on active communication: shop this around, visit people individually, and develop relationships and trust rather than just giving them a report. Create clear, concise, digestible delivery mechanisms for our message, including benefits to people and tradeoffs. Focus on buy-in and transparency. This will be different for different groups. Make sure management is aware of and trusts what we're doing, since their decisions will depend on recommendations that come from this group.
  • Create a subgroup that will focus on outreach strategies. Let Ben and Erin know if you're interested in joining.
  • The big picture for communications is that Reclamation and Delta Science Program will communicate to DPIIC and see if they agree that this is a reasonable approach to decision making. Originally we went to DPIIC with a proposal, using the CVPIA as an example for what successful SDM processes can do, and got approval to start this project. In communicating with DPIIC, we should ask for their input and find out if we're missing things that are important to them. We'll propose a more inclusive process over the next 12-16 months.
  • We should be clear that we've covered our bases by reaching out to many stakeholders, but some haven't gotten involved because they didn't have the time or capacity (e.g. Delta Levee Investment Strategy)