Nonlinear dynamic analyses are required to properly assess the structural performance of mid- to high-rise buildings and complex structures. Generally, time history analyses are carried out considering several ground motions for a given seismic action. These analyses are often very time-consuming, mainly because of the high resolution of the ground motion signal. Therefore, performing these calculations with lower-resolution accelerograms can be very useful, especially when dealing with large sets of buildings (e.g., seismic vulnerability studies at the urban scale). In this paper, two methods for signal reduction are tested against each other: i) an open-source Fourier-based resampling implementation; and ii) a simple reduction algorithm that preserves both the highest and lowest peaks of the signal. The experiments compare the two methods at several levels of resolution reduction and for three different accelerograms. The influence of amplitude scaling on key engineering demand parameters (EDPs), namely the peak floor displacements and accelerations, has been studied for three reinforced concrete case study buildings modelled in OpenSees: low-rise (5-storey), mid-rise (8-storey) and high-rise (11-storey). The results allow a set of criteria to be established for choosing the appropriate reduction method and level, depending on the desired balance between computation time and calculation accuracy. Real accelerograms without baseline corrections have been used for the tests. The simple reduction algorithm appears to capture the accelerograms better by avoiding excessive interpolation, yielding peaks and areas closer to those of the original signal. However, it shows greater variability in energy preservation, introducing large, abrupt changes in acceleration. These large fluctuations induce significantly larger displacements in OpenSees, causing greater structural damage. The Fourier method led to better and more consistent results than the proposed reduction algorithm. A resolution level of 50 provided a reduction in computation time of up to 30% and an error margin in the engineering demand parameters of around ±15%.
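The abstract does not detail either implementation, so the following is only a minimal sketch of the two reduction strategies as commonly realized: it assumes the open-source Fourier-based resampling is equivalent to scipy.signal.resample, and that the peak-preserving algorithm keeps the minimum and maximum sample within each reduction window. The actual algorithms used in the study may differ.

```python
# Hedged sketch of the two signal-reduction strategies (assumed implementations).
import numpy as np
from scipy.signal import resample

def fourier_reduce(acc, factor):
    """Reduce the accelerogram length by `factor` using FFT-based resampling."""
    return resample(acc, len(acc) // factor)

def minmax_reduce(acc, factor):
    """Keep the lowest and highest sample inside each window of `factor` points,
    in their order of occurrence, so local peaks of the signal are preserved."""
    n = (len(acc) // factor) * factor
    windows = acc[:n].reshape(-1, factor)
    reduced = []
    for w in windows:
        i_lo, i_hi = int(w.argmin()), int(w.argmax())
        first, second = sorted((i_lo, i_hi))
        reduced.extend([w[first], w[second]])
    return np.asarray(reduced)

# Example: a synthetic 100 Hz accelerogram reduced by a factor of 4
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)
acc_fft = fourier_reduce(acc, 4)    # new time step: 4 * dt
acc_peaks = minmax_reduce(acc, 4)   # keeps 2 samples per 4-sample window
```

Note that the min-max variant retains two samples per window, so its effective reduction factor is half that of the Fourier resampling for the same window size; how the study matched the two resolution levels is not stated here.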
This study addresses the important issue of the variability associated with modeling decisions in dam seismic analysis. Traditionally, structural modeling and simulation follow a progressive approach, in which increasingly complex models are gradually incorporated: for example, if previous levels indicate insufficient seismic safety margins, a more advanced analysis is then undertaken. Recognizing the constraints and evaluating the influence of the various methods is essential for improving the comprehension and effectiveness of dam safety assessments. To this end, an extensive parametric study is carried out to evaluate the seismic response variability of the Koyna and Pine Flat dams using various solution approaches and model complexities. Numerical simulations are conducted in a 2D framework across three software programs, encompassing different dam system configurations. Additional complexity is introduced by simulating reservoir dynamics with Westergaard added masses or acoustic elements. Linear and nonlinear analyses are performed, incorporating pertinent material properties and employing the concrete damage plasticity model in the latter. Modal parameters and crest displacement time histories are used to highlight the variability among the selected solution procedures and model complexities. Finally, recommendations are made regarding the adequacy and robustness of each method, specifying the scenarios in which they are most effectively applied.
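For reference, the Westergaard added-mass idealization mentioned above lumps hydrodynamic masses on the upstream face nodes according to the classical formula m = (7/8) ρ_w √(H y) per unit tributary area, with H the reservoir depth and y the depth of the node below the free surface. The sketch below illustrates this; the node elevations, tributary areas and water level are hypothetical inputs, and the dam models in the study may discretize the face differently.

```python
# Illustrative sketch of Westergaard (1933) added masses on upstream face nodes.
import numpy as np

RHO_W = 1000.0  # water density [kg/m^3]

def westergaard_added_mass(z_nodes, trib_area, water_level):
    """m_i = (7/8) * rho_w * sqrt(H * y_i) * A_i, where H is the reservoir depth
    and y_i the depth of node i below the free surface (0 for nodes above water)."""
    H = water_level - np.min(z_nodes)              # total reservoir depth
    y = np.clip(water_level - z_nodes, 0.0, None)  # depth of each node
    return 0.875 * RHO_W * np.sqrt(H * y) * trib_area

# Example: face nodes every 5 m on a 100 m high dam section, full reservoir
z = np.arange(0.0, 105.0, 5.0)   # node elevations [m]
A = np.full_like(z, 5.0)         # tributary area per node [m^2 per unit width]
masses = westergaard_added_mass(z, A, water_level=100.0)
```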