HAZUS Emergency Management Protocol Workshop
June 15-16, 2000

HAZUS Automap Capabilities Breakout Group
Chair: Lind Gee
Co-Chair: Ron Eguchi
Moderator: John Evans
Recorder: Patrick McLellan
Participants: Stu Nishenko, Bruce Worden, David Oppenheimer, Jim Davis, Rich Eisner, Kris Caceres, Jawhar Bouabid

Goal: To further the capabilities of HAZUS for the automated production of earthquake loss-estimation maps in the first one to six hours after a major earthquake in the San Francisco Bay area.

HAZUS Automap Capabilities Breakout Group Summary

Question 1: What seismological data should drive the HAZUS computation?

Both the generation of ideas and the discussion ranged over options such as parametric data (location, magnitude, mechanism, fault parameters, duration of strong shaking, directivity) and maps of ground shaking such as peak ground acceleration (PGA), peak ground velocity (PGV), and spectral acceleration (SA) at various periods of interest. Several participants also suggested information about the location of liquefaction and landslides (both observed and predicted), and there was some discussion of the value of actual seismograms (time histories) at sites of interest.

At present, HAZUS accepts different types of input, but ultimately all computations are driven by ground motions and magnitude: if the user enters only a location and magnitude, HAZUS predicts the necessary ground motions from attenuation relationships. HAZUS currently requires PGA, PGV, and SA at 0.3 and 1.0 s. Given this, the best solution appears to be for the seismologists to provide the ground motions directly - using whatever method is appropriate to predict the motions when observed values are unavailable. The time frame over which ground motion maps become available was also discussed, as well as the uncertainties associated with the maps.
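The fallback path described above - predicting ground motions from location and magnitude via an attenuation relationship - can be sketched in code. This is a rough illustration only, using a generic functional form ln(PGA) = a + b*M - c*ln(R); the function name and coefficient values are placeholders, not those of any published relationship or of the actual HAZUS implementation:

```python
import math

def predicted_pga(magnitude, distance_km, a=-1.0, b=0.55, c=1.0):
    """Generic attenuation-relationship sketch: ln(PGA) = a + b*M - c*ln(R).

    The coefficients a, b, c are illustrative placeholders only; a real
    relationship would also account for site conditions, mechanism, etc.
    """
    return math.exp(a + b * magnitude - c * math.log(distance_km))

# Shaking should increase with magnitude and decay with distance.
near = predicted_pga(magnitude=7.0, distance_km=10.0)
far = predicted_pga(magnitude=7.0, distance_km=100.0)
```

The point of the sketch is simply that, absent observed values, every required ground-motion parameter (PGA, PGV, SA at 0.3 and 1.0 s) must be synthesized from a model of this kind, which is why direct provision of ground motions by seismologists was preferred.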
For example, there may be three versions of a "Shake Map" in the first hour following an event, and these maps may evolve over a time scale of 5 minutes to 12 hours. The group discussed the importance of version numbers on Shake Maps in order to clarify this problem. The issue of uncertainties is more difficult - particularly their use in HAZUS. As Jawhar explained it, one model would be to produce three maps of each type: the "regular" one, one at +1 standard deviation, and one at -1 standard deviation. An interested user could then make three runs and evaluate the effect of uncertainties on the computations.

During the open forum discussion of the Workshop, the question of multiple events was raised. Specifically, if there are several large events within a 30-minute period (perhaps spatially related, perhaps not), what are the issues for Shake Maps - and for HAZUS maps? One approach would be to compute a summary map of the peak motions over some fixed time interval. Another would be to issue a map for each event. The drawback to the first approach is that it obscures the potential impact of multiple cycles of shaking on buildings.

Action items:
- Produce the necessary ground motion maps (PGA, PGV, and SA at 0.3 and 1.0 s).
- Review the protocol for the scale of the maps. Presently, the maps do not necessarily cover the whole area of strong shaking (i.e., the whole area of interest).
- Establish a protocol for version numbers on the ground motion maps that includes a date-time stamp and information about the status of the map.
- Uncertainties: station locations need to be provided. Is there a better way to parameterize uncertainty in HAZUS?

Longer-term issues:
- Provide information about observed ground failure - landslides and liquefaction.
- Review, in conjunction with the engineering community, what seismological information is most appropriate for predicting damage.
- Review the question of multiple events and the best way to present ground shaking in this scenario.

Question 2: How will the "data" be distributed? To whom? How many? And by what means?

This question required some clarification prior to the idea-generation stage, and a model of distribution was proposed based on the USGS' experience with pushing earthquake information in the Quake Data Distribution System (QDDS). In this approach, data providers "push" information to two geographically separated hubs (ideally implemented with multiple paths and secure connections for transmitting the data). The hubs then handle the redistribution of these data to interested recipients. The group spent some time discussing who the recipients might be. The primary recipients are OES and FEMA, and they were viewed at the same level as the hubs. Other recipients could get the information either by push or by pull from the hubs. The idea of a subscription - potentially controlled - was suggested. There was a general sense that only a few users would need the automap capability, and that most users would prefer an "official" or blessed map. The group agreed that the maps (and in general all HAZUS products) should identify (a) the version number or time stamp of the Shake Map used as their basis and (b) the maker of the map. In terms of the means of distribution, all agreed that satellite should be one component of multiple methods. Other alternatives include the public internet, phone lines, radio, and microwave.

Action items:
- Identify the number of potential recipients for the automap capability (EMP group?), as this will be a factor in the design of the distribution mechanism.
- Begin implementation of pushing Shake Maps to OES (work underway in southern California) and FEMA, with redundant links.

Long term:
- The hub mechanism looks like a good way to implement this, but will depend on other factors (such as the number of recipients).

Question 3: How should HAZUS be modified in order to be driven in automap mode?

This topic also engendered considerable discussion - even to the point of asking why automatic capability was of value. It was noted that FEMA needs to be convinced that this is important, and that the HAZUS users group can play an important role here. Again the issue of time scales arose, along with the need to establish a protocol for handling them; the version number and data source will be important. HAZUS needs to be able to prioritize a queue of events based on magnitude, version number, and perhaps geographical area. The issue of formats came up here (as it did under Question 1). First, the debate of grids versus contours: contours are faster for computation, while grids preserve the detail in the spatial distribution of motion. Second, the issue of Shake Map output: should the conversion to GIS be done by the data providers or by HAZUS at the user's level?
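The event-queue prioritization just described can be sketched with a simple priority queue. The field names and the ordering rule (largest magnitude first, then newest version) are assumptions for illustration, not part of any HAZUS or Shake Map specification:

```python
import heapq

# heapq is a min-heap, so magnitude and version are negated to pop the
# largest event, newest version, first.  The event id breaks ties so the
# heap never has to compare the dict payloads.
def make_entry(event_id, magnitude, version):
    return ((-magnitude, -version, event_id),
            {"event": event_id, "magnitude": magnitude, "version": version})

queue = []
heapq.heappush(queue, make_entry("evt-A", 6.1, 1))
heapq.heappush(queue, make_entry("evt-B", 7.2, 1))
heapq.heappush(queue, make_entry("evt-B", 7.2, 2))  # newer view of evt-B

_, first = heapq.heappop(queue)  # evt-B version 2: largest magnitude, newest view
```

A geographic-area criterion, or the user-site configurability called for in the action items below, would amount to letting the user supply the priority tuple.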
Action items:
- Modify HAZUS to accept Shake Maps for automatic operation.
- Resolve the issue of formats (where the conversion happens, and the tradeoff between contours and grids).
- HAZUS needs a protocol to prioritize multiple events and multiple views of the same event (based on magnitude, location, version number, and perhaps additional criteria). Prioritization should be configurable at the user site.
- The Shake Map version number and source should be passed through to HAZUS products.
- Need to interface with the User Maps working group (vis-a-vis appropriate products).
- HAZUS should have greater flexibility in handling parametric data - particularly defining fault orientation, segments, and directivity (parametric data are low-bandwidth).
- The HAZUS Users Group should make the case to FEMA for the generation of automaps.
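As an illustration of the pass-through and version-numbering action items above, a stamped record of this kind could travel with each Shake Map and be echoed on every derived HAZUS product. All field names here are hypothetical, not a defined Shake Map or HAZUS schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShakeMapStamp:
    """Illustrative provenance record for one version of one Shake Map."""
    event_id: str
    version: int
    source: str   # maker of the map
    issued: str   # date-time stamp (ISO 8601)
    status: str   # e.g. "preliminary" or "reviewed"

def label_product(product_name, stamp):
    """Build the provenance line a derived HAZUS product would carry."""
    return (f"{product_name} | ShakeMap {stamp.event_id} "
            f"v{stamp.version} ({stamp.status}) | source: {stamp.source} "
            f"| issued: {stamp.issued}")

stamp = ShakeMapStamp("evt-19991016", 3, "ExampleNet",
                      "2000-06-15T14:30:00Z", "preliminary")
line = label_product("Loss estimate map", stamp)
```

Carrying the stamp on every product addresses both the versioning problem from Question 1 (three maps in the first hour) and the Question 2 requirement that each map identify its basis and its maker.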