ASME Conference Presenter Attendance Policy and Archival Proceedings

2018;():V001T00A001. doi:10.1115/JRC2018-NS.

This online compilation of papers from the 2018 Joint Rail Conference (JRC2018) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference by an author of the paper, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

Commentary by Dr. Valentin Fuster

Railroad Infrastructure Engineering

2018;():V001T01A001. doi:10.1115/JRC2018-6110.

This paper presents the development and demonstration of an efficient reliability-based tool for reinforced earth retaining wall design in heavy haul railways. Two major internal failure modes, rupture and pullout, are the focus of the demonstration. The First-Order Reliability Method (FORM) is adopted to estimate the probability of failure for each failure mode and is implemented in a spreadsheet. The reliability analysis accounts for uncertainty in the reinforcement length, the horizontal and vertical spacing between steel strips, the tensile strength of the steel strips, and the material properties of the backfill. Using this design tool, several candidate designs that meet an acceptable probability of failure can be obtained easily, and the final design is selected on the basis of cost. The FORM results are also verified by comparison with Monte Carlo simulation. The design tool is shown to be simple to use in railway retaining wall design.
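The Monte Carlo verification step described above can be sketched in a few lines. The limit state, distributions, and all parameter values below are illustrative assumptions for a generic rupture-type failure mode, not the paper's actual design variables:

```python
import numpy as np

def monte_carlo_pf(n_samples=200_000, seed=0):
    """Estimate probability of failure for a generic limit state g = R - S.

    R (resistance, e.g. strip rupture capacity) is a lognormal variable and
    S (load effect, e.g. tensile force demand) a normal variable; the means
    and coefficients of variation below are assumed for illustration only.
    """
    rng = np.random.default_rng(seed)
    # Resistance: lognormal with mean ~100 kN, COV ~15% (assumed values)
    mu_R, cov_R = 100.0, 0.15
    sigma_ln = np.sqrt(np.log(1.0 + cov_R**2))
    mu_ln = np.log(mu_R) - 0.5 * sigma_ln**2
    R = rng.lognormal(mu_ln, sigma_ln, n_samples)
    # Load effect: normal with mean 60 kN, COV 20% (assumed values)
    S = rng.normal(60.0, 60.0 * 0.20, n_samples)
    g = R - S                 # limit state function: failure when g < 0
    return np.mean(g < 0.0)   # Monte Carlo estimate of Pf
```

A FORM calculation on the same limit state would return a reliability index beta, with Pf approximated by the standard normal tail probability; the Monte Carlo estimate above is the brute-force check the paper uses for verification.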

2018;():V001T01A002. doi:10.1115/JRC2018-6115.

Operational efficiency is one of the key performance indicators for all railroad systems. Infrastructure inspection and maintenance engineers are tasked with the responsibility of ensuring the reliability, availability, maintainability and safety of the railroad network. However, as rolling stock traffic density increases throughout the network, inspection and maintenance opportunities become less readily available. Inspection and maintenance activities normally take place at night, when there is little or no train movement, to avoid disruption of normal railroad network operation. In addition, conventional inspection methodologies fail to deliver the efficiency required for the optimization of maintenance decisions, particularly with respect to track renewals, due to limitations in their defect detection sensitivity and resolution. The fact that critical structural components such as rails and crossings (frogs) are randomly loaded increases the degree of uncertainty when trying to estimate their remaining service lifetime. Maintenance decisions are predominantly based on the feedback received from inspection engineers, coupled with empirical knowledge that has been gained over the years. The use of structural degradation models is too risky due to the uncertainty arising from the variable dynamic loads sustained by the rail track. The use of structural health monitoring techniques offers significant advantages over conventional approaches. First of all, it is non-intrusive and does not interrupt normal rail traffic operations. Secondly, defects can be detected and evaluated in real-time, whilst their evolution can be monitored continuously, enabling maintenance to be scheduled in advance and at times when the need for rail network availability at the section concerned is at its lowest. This paper analyzes the potential risks and benefits of a gradual shift from traditional inspection approaches to advanced structural health monitoring techniques.

2018;():V001T01A003. doi:10.1115/JRC2018-6117.

The ballast layer in a railroad track helps distribute loads from the superstructure to the formation; a well-designed ballast layer is also meant to prevent excessive vertical, lateral and longitudinal movement of the track under loading. When subjected to repeated loading, the granular ballast particles often undergo breakage leading to significant changes in the shear strength as well as drainage characteristics of the ballast layer. Excessive ballast degradation leads to increased vertical settlements, and is often associated with speed restrictions and increased passenger discomfort. Several researchers in the past have studied the phenomenon of ballast breakage in a laboratory setting. However, due to complexities associated with these large-scale laboratory tests, detailed parametric studies are often not feasible. In such cases, numerical modeling tools such as the Discrete Element Method (DEM) become particularly useful. This paper presents findings from an ongoing research effort at Boise State University aimed at studying the phenomenon of ballast breakage under repeated loading using a commercially available Discrete Element Package (PFC3D®). Ballast particles were simulated as clusters of balls bonded together, and were allowed to undergo breakage when either the maximum tensile stress or the maximum shear stress exceeded the corresponding bond strength value. Different factors studied during the parametric analysis were: (1) load amplitude; (2) loading frequency; (3) number of cycles of loading; (4) bond strength; and (5) particle size distribution. The objective was to identify the relative importance of different factors that govern the permanent deformation behavior of railroad tracks under loading.

2018;():V001T01A004. doi:10.1115/JRC2018-6125.

The Unified Soil Classification System (USCS) uses the 4.75 mm sieve opening size (#4 sieve) as the boundary between ‘coarse’ and ‘fine’ particles. Particles larger than 4.75 mm are classified as ‘coarse’, whereas particles smaller than 4.75 mm are classified as ‘fine’. However, applying these definitions to railroad ballast can be erroneous, as most particles in a ballast material are larger than 4.75 mm (often as large as 63 mm), therefore indicating the absence of any ‘fine’ particles. However, depending on the relative distribution of particle sizes within a granular matrix, certain particles serve to create voids (the coarse fraction), and certain particles serve to fill the voids (the fine fraction). Accordingly, rather than using the standard definitions of ‘coarse’ and ‘fine’ particles, as has been done in the literature, the analysis of packing conditions in a ballast matrix may be better served by studying the relative packing between different size fractions. This paper focuses on the development of a new gradation parameter, termed the “Coarse-to-Fine (C/F) Ratio”, which can shed some light on the importance of different size fractions in a ballast matrix. By changing the ‘coarse’ and ‘fine’ fractions within a particular gradation specification, the resulting effect on ballast shear strength was studied through simulated direct shear strength tests. A commercially available three-dimensional Discrete Element Modeling (DEM) package (PFC3D®) was used for this purpose. Details of the numerical modeling effort are presented, and inferences are drawn concerning the implications of the simulation results on the design and construction of railroad ballast layers.
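The coarse-to-fine idea can be illustrated with a small gradation-curve calculation. The helper below is hypothetical (the paper does not publish its exact formulation): it splits a gradation at a chosen boundary size and returns the mass ratio of the two fractions, using linear interpolation on the gradation curve as a simplifying assumption:

```python
import numpy as np

def coarse_to_fine_ratio(sieve_sizes_mm, percent_passing, split_size_mm):
    """Compute a Coarse-to-Fine (C/F) mass ratio for a ballast gradation.

    percent_passing[i] is the percent of material passing sieve_sizes_mm[i].
    The split size separating 'coarse' (void-creating) from 'fine'
    (void-filling) fractions is a user choice, not the 4.75 mm USCS boundary.
    """
    sizes = np.asarray(sieve_sizes_mm, dtype=float)
    passing = np.asarray(percent_passing, dtype=float)
    order = np.argsort(sizes)
    # percent passing the split size, interpolated on the gradation curve
    p_split = np.interp(split_size_mm, sizes[order], passing[order])
    fine = p_split               # mass % finer than the split size
    coarse = 100.0 - p_split     # mass % coarser than the split size
    return coarse / fine
```

For example, a gradation with 20% passing a 25 mm split size gives a C/F ratio of 80/20 = 4.0; sweeping the split size over a ballast gradation shows how the balance of void-creating and void-filling fractions changes.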

2018;():V001T01A005. doi:10.1115/JRC2018-6126.

Geogrid reinforcement of railroad ballast improves its structural response under loading, limits lateral movement of ballast particles, and reduces vertical settlement through effective geogrid-ballast interlocking. This improved performance can be linked to improved shear strength and resilient modulus properties. An ongoing research study at Boise State University is investigating the effects of different specimen and test parameters on the mechanism of geogrid-ballast interaction. A commercially available Discrete Element Modeling (DEM) program (PFC3D®) is being used for this purpose, and the effect of geogrid inclusion is being quantified through calculation of the “Geogrid Gain Factor”, defined as the ratio between the resilient modulus of a geogrid-reinforced ballast specimen and that of an unreinforced specimen. Typical load-unload cycles in triaxial shear strength tests are being simulated, and parametric studies are being conducted to determine the effects of particle-size distribution, geogrid aperture size, and geogrid location on railroad-ballast modulus. This paper presents findings from the research study, and presents inferences concerning implications of the study findings on design and construction of better-performing ballast layers.
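The Geogrid Gain Factor as defined above reduces to a ratio of resilient moduli, where each resilient modulus is the cyclic deviator stress divided by the recoverable (resilient) strain measured in a load-unload cycle. A minimal sketch with illustrative numbers, not the study's data:

```python
def resilient_modulus(cyclic_deviator_stress_kpa, resilient_strain):
    """Resilient modulus Mr = cyclic deviator stress / recoverable strain.

    Returns Mr in kPa given stress in kPa and strain as a dimensionless
    ratio; the input values in any real test come from the unloading
    branch of a triaxial load-unload cycle.
    """
    return cyclic_deviator_stress_kpa / resilient_strain

def geogrid_gain_factor(mr_reinforced_kpa, mr_unreinforced_kpa):
    """Gain Factor as defined in the study: ratio of resilient moduli."""
    return mr_reinforced_kpa / mr_unreinforced_kpa
```

With an assumed 300 kPa cyclic deviator stress and a resilient strain of 0.0015, the unreinforced modulus is 200 MPa; a reinforced specimen measuring 240 MPa under the same cycle would give a Gain Factor of 1.2.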

2018;():V001T01A006. doi:10.1115/JRC2018-6141.

The Federal Railroad Administration (FRA) routinely conducts investigations of railroad accidents to determine causation and any contributing factors to help the railroad industry implement corrective measures that may prevent similar incidents in the future. Over the past decade, FRA has investigated multiple broken rail accidents in which fractures in the rail web were identified. The common features observed in the recovered rail fragments from these accidents included welds and spots or burn marks on the web, indicating that the rails were joined together by pressure electric welding.

Pressure electric welding uses a welding head that clamps around two opposing rail ends, pressing an electrode on each rail, then hydraulically pulling the rail ends together while arcing current through the electrodes into the rails, causing them to essentially melt together to form a continuous rail.

Based on the similarities observed in the web fractures, FRA rail integrity specialists hypothesized that stray (i.e. inadvertent and unwanted) arcing during pressure electric welding can result in the formation of burns or pits on the rail where it makes contact with the electrodes. Moreover, these electrode-induced pits behave as stress raisers (also referred to as stress concentrations). Fatigue cracks often develop at locations of stress concentration. Once a fatigue crack initiates, the localized stress encourages the growth of the crack, which may potentially lead to rail failure.

This paper describes the forensic evaluations of three railroad rails containing electrode-induced pitting. These evaluations include: magnetic particle inspection to nondestructively detect cracks emanating from the pitting; fractography to study the fracture surfaces of the cracks; metallography to study the microstructure; analysis of chemical composition; and measurements of tensile mechanical properties and fracture toughness of rail steel. Moreover, the results of these evaluations confirm the hypothesis postulated by FRA that stray arcing during pressure electric welding can cause electrode-induced pitting.

2018;():V001T01A007. doi:10.1115/JRC2018-6153.

The bond between wire and concrete is crucial for transferring stresses between the two materials in a prestressed concrete member. Bond can be affected by variables such as the thickness of the concrete cover, the type of prestressing (typically indented) wire used, the compressive (release) strength of the concrete, and the concrete mix. This work presents current progress toward the development of a testing procedure that clarifies how these parameters can degrade the bond and result in splitting. The objective is to develop a qualification test procedure to proof-test new or existing combinations of prestressing wire and concrete mix to ensure reliable performance. This is particularly crucial in the concrete railroad crosstie industry, where incompatible combinations can result in cracking and even tie failure. The goal is to develop the capability to readily identify compatible wire/concrete designs “in-plant” before the ties are manufactured, thereby reducing the likelihood that defectively manufactured ties will lead to in-track tie failures due to splitting.

The tests presented here were conducted on pre-tensioned concrete prisms cast in metal frames. Three beams (prismatic members) with different cross sections were cast simultaneously in series. Four prestressing wires were symmetrically embedded in each concrete prism, resulting in a common wire spacing of 2.0 inches. The prisms were 59.5 in. long with square cross sections. The first prism was 3.5 × 3.5 in. with 0.75 in. cover, the second was 3.25 × 3.25 in. with 0.625 in. cover, and the third prism in the series was 3.0 × 3.0 in. with 0.50 in. cover. All prestressing wires used in these initial tests were 5.32 mm in diameter and of the same wire type (indent pattern), denoted “WE”, which had a spiral-shaped geometry. This is one of several wire types that are the subject of the current splitting propensity investigation. Other wire types include variations of the classical chevron shape and the extreme case of smooth wire with no indentations. The wires were initially tensioned to 7,000 pounds (31.14 kN) and then gradually de-tensioned after the concrete reached the desired compressive strength. The compressive (release) strength levels tested were 3,500 psi (24.13 MPa), 4,500 psi (31.03 MPa), 6,000 psi (41.37 MPa), and 12,000 psi (82.74 MPa). A consistent concrete mix with a water-cement ratio of 0.38 was used for all castings. The geometrical and mechanical properties of the test prisms were representative of actual prestressed concrete crossties used in the railroad industry.

Each prism provided a sample of eight different and approximately independent splitting tests of concrete cover (four wire cover tests on each end) for a given release strength. After de-tensioning, all cracks that appeared on the prisms were marked, and photographs of all prism end surfaces were taken to identify the cracking field. During the test procedure, longitudinal surface strain profiles, along with live-end and dead-end transfer lengths, were also measured using an automated Laser-Speckle Imaging (LSI) system developed by the authors. Both quantitative and qualitative assessments of cracking behavior are presented as a function of cover and release strength. In addition to the identification of whether cracking took place at each wire end location, measurements of crack length and crack area are also presented for the given WE wire type. The influences of concrete cover and release strength are clearly indicated by these initial tests. The influence of indented wire type (indent geometry) will also be discussed in this paper, along with a presentation of some preliminary test results. This work represents a successful first step in the development of a qualification test for validating a given combination of wire type, concrete cover, and release strength to improve the reliability of concrete railroad crosstie manufacturing.

2018;():V001T01A008. doi:10.1115/JRC2018-6154.

Under-ballast mats (UBMs) have become more popular recently in railroad track engineering. Benefits of introducing an under-ballast mat layer(s) into the track include, but are not limited to: increasing track resilience, reducing ballast breakage, decreasing noise and vibration, and protecting bridge decks. One of the most essential parameters used to evaluate the performance of UBMs is the bedding modulus. Currently, the German Deutsches Institut für Normung (DIN) 45673 standard is the only test standard specifying procedures to quantify UBM bedding modulus, by placing the UBM between two steel plates. However, steel plates might not be an ideal representation of the actual track loading environment. Thus, other types of support conditions have been used to test UBM bedding modulus, including a concrete block and the geometric ballast plate (GBP) specified by European Standard (EN) 16730. How these different support conditions affect the measured performance of a UBM has not been fully investigated. To better quantify the effects of varying support conditions on UBM bedding modulus, testing was performed in the Research and Innovation Laboratory (RAIL) at the University of Illinois at Urbana-Champaign (UIUC). It was found that for a specific type of UBM, the tested bedding modulus values were similar when supported by the steel plate or the concrete block, while the value was considerably lower when the mat was supported by the GBP. Finite element simulations were performed to further study the stress distributions under these various support conditions. The results from this study can help practitioners better represent the application environment during UBM bedding modulus tests by suggesting the appropriate support condition.
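The bedding modulus itself is a secant stiffness of the measured load-deflection curve. A minimal sketch, assuming a simple two-point secant evaluation (the actual evaluation stress range and loading procedure are prescribed by the test standard; the values used below are placeholders):

```python
def bedding_modulus(sigma1_kpa, sigma2_kpa, s1_mm, s2_mm):
    """Secant bedding modulus C = delta_sigma / delta_s between two points
    of a UBM load-deflection curve.

    Inputs: bearing stresses in kPa and corresponding mat deflections in mm.
    Returns C in N/mm^3, the unit conventionally used for bedding moduli.
    """
    dsigma_n_mm2 = (sigma2_kpa - sigma1_kpa) * 1e-3  # kPa -> N/mm^2
    return dsigma_n_mm2 / (s2_mm - s1_mm)
```

For instance, stresses of 10 kPa and 100 kPa producing deflections of 0.2 mm and 1.1 mm give C = 0.09 N/mm² over 0.9 mm, i.e. 0.1 N/mm³; a softer apparent response under the GBP (larger deflection for the same stress range) yields a lower modulus, consistent with the trend reported above.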

2018;():V001T01A009. doi:10.1115/JRC2018-6166.

Sites with known occurrences of mud pumping or other track concerns were investigated to determine the prevalence of concrete bottom tie abrasion and the environmental and track conditions that could contribute to its occurrence. Field investigations showed that it occurs in diverse geographic locations around the U.S. and is a source of continued maintenance concern for railroads. Water appeared to be a significant factor involved in concrete bottom tie abrasion. Ballast fouling, center-binding cracking, rail surface profile variations, and large track movements during loading were seen in locations with concrete bottom tie abrasion. Bumps or track stiffness changes were often found at locations of abrasion damage. Specifically, some locations with known stiff track conditions exhibited significant abrasion damage.

Topics: Abrasion , Damage
2018;():V001T01A010. doi:10.1115/JRC2018-6167.

Concrete railroad ties can experience deterioration from freezing and thawing in cold climates typical of many railroads. Materials and manufacturing processes used to make concrete railroad ties can be controlled to give ties a long period of frost immunity. A performance-based criterion for selection of concrete materials and durability requirements would allow plants more flexibility in material selection and improve overall performance in the field. A new performance-based approach to concrete freeze-thaw quality control is described. To support implementation, work was performed at a precast concrete railroad tie manufacturing plant to compare currently used concrete freeze-thaw quality control methods to the proposed performance-based method. This comparison is described to illustrate the benefits of the new performance-based approach.

Topics: Concretes , Durability
2018;():V001T01A011. doi:10.1115/JRC2018-6168.

Extensive research has been conducted by the research team in recent years to determine the prestressing steel and concrete properties that must be provided to ensure that the transfer length of a prestressed concrete railroad tie is shorter than the distance from the edge of the tie to the rail seat. In addition, a significant amount of data has been collected indicating that high bonding stresses can produce longitudinal splitting cracks along the reinforcement. In a study of how prestressing steel and concrete properties relate to a tie's propensity for longitudinal cracking, existing ties that have performed well in track for over 25 years without issues are being evaluated. One parameter of interest that affects the bonding stress is the amount of prestress force in a railroad tie, which is unknown for the existing ties being evaluated. The current paper focuses on a new method that was developed for determining the remaining prestress force in a tie.

In a previous method for determining the prestress force, ties were first loaded in four-point bending to initiate flexural cracking. The crack opening displacement was measured in order to determine the applied load required to reopen the crack. Using this load and the cross-sectional parameters at the location of the crack, the prestress force in the tie can be calculated using static equilibrium. The issue with this method is that as a tie is being loaded and the crack propagates, there is a continuous change in the stiffness of the cross-section. This results in the load versus crack opening displacement curve being overly rounded. This increases the error when determining the load required to reopen the crack, and increases the uncertainty of the calculated prestress force. The new test method eliminates the problems associated with flexural testing by loading the ties longitudinally in tension.

In the new proposed experimental method, ties that have been pre-cracked in the center are pulled in tension. Similar to the previous method, the crack opening displacement is measured while the tie is loaded. For the crack to fully open, the applied load must exceed the prestress force holding the crack closed. Prior to the crack opening, the applied load is resisted by the composite section of concrete and prestressing tendons. Once the crack has fully opened, the applied load is resisted by the prestressing tendons only. This creates two distinctly linear portions of the load versus crack opening displacement curve, one prior to the crack opening and one after. The beginning of the linear portion after crack opening marks a very clear upper bound for the amount of prestressing force in a tie. This method can estimate the remaining prestress force in a tie with much greater accuracy than the previous method, and eliminates the need for the cross-sectional parameters at the crack location. To verify this method, tests were first conducted on a smaller scale with prismatic beams with a known initial prestressing force. Then the method was applied to a full-scale existing tie to determine the remaining prestress force. Results are presented for testing of both the prismatic beams and the full-scale tie.
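The two-branch load versus crack opening displacement curve described above lends itself to a simple line-fitting estimate. The sketch below is an illustration rather than the authors' analysis code: it fits each linear branch separately (here the branch boundary is supplied by hand, a simplification of the method in the paper) and takes the intersection of the two fitted lines as the crack-opening load:

```python
import numpy as np

def crack_opening_load(load_kn, cod_mm, split_index):
    """Estimate the load at which the pre-crack fully opens from a
    load vs crack-opening-displacement (COD) curve with two linear branches.

    split_index separates the pre-opening points (composite section, stiff)
    from the post-opening points (tendons only, soft). Returns the load at
    the intersection of the two fitted lines, which serves as an upper
    bound on the remaining prestress force.
    """
    cod = np.asarray(cod_mm, dtype=float)
    p = np.asarray(load_kn, dtype=float)
    # fit load as a linear function of COD on each branch
    m1, b1 = np.polyfit(cod[:split_index], p[:split_index], 1)
    m2, b2 = np.polyfit(cod[split_index:], p[split_index:], 1)
    cod_star = (b2 - b1) / (m1 - m2)   # COD where the two lines intersect
    return m1 * cod_star + b1
```

Because both branches are fit over many points, this estimate is far less sensitive to curve rounding than picking a single reopening load off a flexural test record.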

2018;():V001T01A012. doi:10.1115/JRC2018-6177.

As an important element in track, prestressed concrete railroad ties in the high-speed rail industry must meet the safety and performance specifications of high-speed trains. Systematic destructive and non-destructive evaluation of existing concrete ties can lead to a better understanding of the effect of prestressed concrete tie material design on performance and failure within their service life. It is evident that environmental and climate conditions also have a significant impact on concrete railroad ties, causing various forms of deterioration such as abrasion and freeze-thaw damage. Understanding the material characteristics that cause failure in different types of existing concrete railroad ties taken from different places is the main focus of this paper. Observing the current condition and damage of railroad ties taken from track may reveal correlations between material characteristics and the types of distress and cracking observed. Although previous work has shown that factors such as the air-void system and material composition directly affect the performance of concrete ties (e.g., freeze-thaw resistance), material evaluation of existing ties after their service life has not been addressed in previous publications. In this research, the authors have investigated material characteristics, such as the aggregate and air-void system, of existing prestressed concrete railroad ties taken from track. In addition, the compressive strength, splitting tensile strength, and fracture surfaces of samples cored from the ties were examined. To obtain the strength of the concrete materials of the existing ties, six samples were cored from six different types of ties taken from tracks across the U.S., according to ASTM C42-16, and tested using the ASTM C39 and ASTM C496 methods. Furthermore, the concrete air-void system (ASTM C457) was measured on saw-cut samples extracted from the ties to evaluate the influence of air content and distribution on the mechanical properties of the ties.
Considering the history and service life conditions of the ties, it appears that the material properties of the ties strongly influence their performance. Aggregate sources used at each location may have different properties such as texture, angularity, and mineralogy, contributing to either the propagation of, or resistance to, splitting cracks in the concrete. Furthermore, the polished surfaces of samples extracted from the ties show the uniformity of the air-void system in some ties, which demonstrates their superior resistance to freeze-thaw damage. Considering the results of this research, comprehensive evaluation of material characteristics may give a better view of the condition of existing concrete railroad ties, providing a worthwhile background for future tie design considerations.

2018;():V001T01A013. doi:10.1115/JRC2018-6183.

A micromechanical-based 2D framework is presented to study rolling contact fatigue (RCF) in rail steels using the finite element method. In this framework, the contact patch of rail and wheel is studied by explicitly modeling the grains and grain boundaries, to investigate the potential origin of RCF at the microstructural level. The framework incorporates a Voronoi tessellation algorithm to create the microstructure geometry of the rail material, and uses a cohesive zone approach to simulate the behavior of grain boundaries. To study the fatigue damage caused by the cyclic movement of wheels on the rail, Abaqus subroutines are employed to degrade the material with increasing numbers of cycles, and the Jiang-Sehitoglu fatigue damage law is employed as the evolution law. By applying a Hertzian moving cyclic load, instead of a wheel load, the effects of traction ratio and temperature change on RCF initiation and growth are studied. By considering different traction ratios (0.0 to 0.5), it is shown that increasing the traction ratio significantly increases the fatigue damage. Also, as the traction ratio increases, crack initiation migrates from the rail subsurface to the surface. The results also show that there are no significant changes in the growth of RCF at higher temperatures, but at lower temperatures there is a measurable increase in RCF growth. This finding correlates with anecdotal information available in the rail industry about the seasonality of RCF, in which some railroads report noticing more RCF damage during the colder months.
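The Hertzian moving load mentioned above follows classical 2D (line) contact theory. As a rough illustration of the pressure magnitudes involved (all parameter values below are assumptions for a steel wheel on steel rail, not the paper's inputs), the contact half-width and peak pressure can be computed as:

```python
import math

def hertz_line_contact(p_line_n_mm, R_mm, E1=210e3, E2=210e3, nu1=0.3, nu2=0.3):
    """Hertzian 2D (line) contact between two steel bodies.

    p_line_n_mm : normal load per unit contact length [N/mm]
    R_mm        : effective contact radius [mm] (the wheel radius for a
                  flat rail head in this simplified view)
    E1, E2      : Young's moduli [MPa]; nu1, nu2: Poisson's ratios.
    Returns (contact half-width b [mm], peak pressure p0 [MPa]).
    """
    # effective contact modulus E* from the two bodies' elastic constants
    E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)
    b = math.sqrt(4.0 * p_line_n_mm * R_mm / (math.pi * E_star))
    p0 = 2.0 * p_line_n_mm / (math.pi * b)   # semi-elliptical distribution
    return b, p0
```

With an assumed wheel radius of 460 mm and a line load on the order of 8,000 N/mm, the peak pressure comes out in the high hundreds of MPa, which is the regime in which grain-boundary-level damage accumulation becomes relevant; the paper then sweeps a tangential (traction) component of 0.0 to 0.5 times this normal distribution across the mesh.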

2018;():V001T01A014. doi:10.1115/JRC2018-6184.

A series of specially designed granular material pressure cells were precisely positioned directly below the rail at the tie/ballast interface to measure typical interfacial pressures exerted by revenue freight trains. These vertical pressures were compared to the recorded wheel/rail nominal and peak forces for the same trains traversing nearby mainline wheel impact load detectors (WILDs). The cells were embedded within the bottom of new wood ties so that the surfaces of the pressure cells were even with the bottoms of the ties and the underlying ballast. The cells were inserted below consecutive rail seats of one rail to record pressures for a complete wheel rotation.

The stability and tightness of the ballast support influenced the magnitudes and consistencies of the recorded ballast pressures. Considerable effort was required to provide consistent ballast conditions for the instrumented ties and adjacent undisturbed transition ties. Norfolk Southern (NS) crews surfaced and tamped through the test section and adjacent approach ties. This effort along with normal accruing train traffic subsequently resulted in reasonably consistent pressure measurements throughout the test section.

The impact ratio (impact factor) and peak force values recorded by the WILDs compared favorably with the resulting magnitudes of the transferred pressures at the tie/ballast interface. High peak force and high impact ratio WILD readings indicate the presence of wheel imperfections that increase nominal forces at the rail/wheel interface. The resulting increased dynamic impact forces can contribute to higher degradation rates for the track component materials and more rapid degradation of the track geometry.

The paper contains comparative WILD force measurements and tie/ballast interfacial pressure measurements for loaded and empty trains. Typical tie/ballast pressures for locomotives and loaded freight cars range from 20 to 30 psi (140 to 210 kPa) for smooth wheels producing negligible impacts. The effect of increased wheel/rail impacts and peak force values on the correspondingly transmitted pressures at the tie/ballast interface is significant, with pressures several orders of magnitude higher than those transmitted by wheels producing only nominal forces.
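The quantities compared above reduce to simple ratios and unit conversions. The following sketch (illustrative, not the instrumentation code) shows the impact ratio definition and the psi-to-kPa conversion behind the reported pressure range:

```python
PSI_TO_KPA = 6.894757  # 1 psi in kPa

def impact_ratio(peak_force, nominal_force):
    """WILD impact ratio: peak dynamic wheel force divided by the nominal
    (static) wheel force, both in the same units (e.g. kips or kN)."""
    return peak_force / nominal_force

def psi_to_kpa(psi):
    """Convert a tie/ballast interfacial pressure from psi to kPa."""
    return psi * PSI_TO_KPA
```

For example, 20 to 30 psi converts to roughly 138 to 207 kPa, matching the 140 to 210 kPa range quoted in the abstract, and a wheel whose peak WILD reading is three times its nominal force has an impact ratio of 3.0.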

Topics: Rails , Trains , Wheels
2018;():V001T01A015. doi:10.1115/JRC2018-6185.

Cut spike fasteners, used with conventional AREMA rolled tie plates and solid sawn timber ties, are the most common tie and fastener system used on North American freight railroads. Cut spikes are also used to restrain tie plates that incorporate an elastic rail fastener — that is, an elastic clip that fastens the rail to the tie plate. Elastic fasteners have been shown to reduce gage widening and decrease the potential for rail roll compared to cut-spike-only systems. For this reason, elastic fastener systems have been installed in high-degree curves on many railroads. Recent observations on one Class I railroad have noted broken cut spikes used with these types of tie plates in mountainous, high-degree curve territory. Broken screw spikes and drive spikes on similar style plates have also been observed.

In this paper, a simulation method that integrates a vehicle-track system dynamics model, NUCARS®, with a finite element analysis model is used to investigate the root causes of the broken spikes. The NUCARS model consists of a detailed multibody train, wheel-rail contact parameters, and track model that can estimate the dynamic loading environment of the fastening system. For operating conditions in tangent and curve track, this loading environment is then replicated in a finite element model of the track structure — ties, tie plates, and cut spikes. The stress contours of the cut spikes generated in these simulations are compared to how cut spikes have failed in revenue service. The tuning and characterization of both the vehicle dynamics multibody model and the finite element models are presented. Additionally, the application of this approach to other types of fastening systems and spike types is discussed.

Preliminary results have identified a mechanism involving the dynamic unloading of the tie plate-to-tie interface due to rail uplift ahead of the wheel and the resulting transfer of net longitudinal and lateral forces into the cut spikes. Continued analysis will attempt to confirm this mechanism and will focus on the severity of these stresses, the effect of increased grade, longitudinal train dynamics, braking forces, and curvature.

2018;():V001T01A016. doi:10.1115/JRC2018-6187.

A ballast layer is used to facilitate drainage and load transfer in the railroad track structure. With tonnage accumulation, fine materials, such as coal dust, clay, locomotive sand, degraded ballast aggregate, and other small particles, penetrate into the clean and uniformly graded ballast layer causing contamination, usually referred to as fouling. Fouling is unfavorable to railroad track performance due to the reduced drainage and consequent engineering challenges including, but not limited to, mud pumping, excessive settlement, and reduced bearing capacity. Previous research has investigated the mechanical behavior of fouled ballast in both laboratory and field environments. However, the fundamental mechanism that governs the manner in which the fouling materials are transported and accumulated in the ballast layer is not thoroughly understood. Researchers at the University of South Carolina have initiated an effort to investigate the fouling process in the ballast layer. High-fidelity computational fluid dynamics (CFD) simulation is used to study the fluid flow patterns in order to quantify the transport behavior of the fine particles within the ballast layer and the potential impacts on track performance and drainage. Specifically, the ballast layer is treated as a porous material, and the fouling materials are modeled as distinct individual particles to assess the probability of their trajectory location. This paper presents the preliminary results of the simulated paths of the fouling materials in the ballast layer under seepage, and demonstrates the capability of the developed algorithm to quantify the effects of the ballast layer characteristics on fouling material transport. The findings from this study will be beneficial for optimizing shoulder ballast cleaning or undercutting practices.
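One simple physical check relevant to fine-particle transport under seepage is the Stokes settling velocity of a fouling particle, which can be compared against local seepage velocities from a CFD model to judge whether a particle is carried along or deposited. A minimal sketch with typical (assumed) material properties, not values from the paper:

```python
def stokes_settling_velocity(d_m, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes terminal settling velocity [m/s] of a small spherical
    particle in water.

    d_m   : particle diameter [m]
    rho_p : particle density [kg/m^3] (quartz-like mineral assumed)
    rho_f : fluid density [kg/m^3]; mu: dynamic viscosity [Pa*s]
    Valid only for particle Reynolds number << 1, i.e. the finest
    (silt/clay-size) fouling fractions.
    """
    return (rho_p - rho_f) * g * d_m**2 / (18.0 * mu)
```

For a 50-micron particle this gives a settling velocity on the order of 2 mm/s; wherever the simulated pore-scale seepage velocity exceeds such a value, the model would predict the particle stays entrained rather than accumulating.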

Commentary by Dr. Valentin Fuster
2018;():V001T01A017. doi:10.1115/JRC2018-6204.

Since the advent of high-speed (HS) railways, and with increasing traffic-induced loads transmitted to the superstructure, maintenance costs due to track geometry degradation have become a crucial problem for researchers and railway administrations. Moreover, the operations of ballast renewal, track tamping, and track re-alignment, which are indispensable to guarantee good track geometry, have dramatic effects on the tie-ballast lateral resistance. This in turn reduces the flexural strength of the track in the lateral plane and increases the proneness of tracks made of continuous welded rail (CWR) to experience either progressive lateral shift of the track panel or thermal track buckling. To restore proper values of tie-ballast lateral resistance, railway technicians either impose a speed reduction or compact the ballast bed mechanically by means of dynamic track stabilizing machines.

Recently, elastic elements in railway tracks have received increasing attention for their ability to reduce track geometry degradation and to attenuate noise and vibration. Under Tie Pads, or Under Sleeper Pads (USP), guarantee better homogenization of the track's vertical stiffness and thereby help to reduce maintenance costs. Most published studies have focused on the ability of USPs to improve track performance in terms of dynamic impact force mitigation and track quality improvement; however, with few exceptions, no literature is available on the lateral resistance of ballasted track with USP, and the question remains whether lateral resistance is improved by USP.

In this study, the experimental results of about 40 lateral resistance tests carried out in situ are reported and discussed. The tests were performed with the Discrete Cut Panel Pull Test (DCPPT) technique on three types of concrete ties, with and without USP; each type of tie and the related track conditions (ballast thickness, subgrade thickness and composition, shoulder width, ballast wall, etc.) were representative of specific track types, namely traditional tracks, high-speed lines, and tunnels (galleries). The tests were carried out in loaded and unloaded track conditions, and in compacted and just-laid ballast conditions.

In compacted ballast conditions the peak lateral resistance with USPs can increase by up to 20%, depending on the material used, and this variation is almost constant over the bedding modulus range considered in this study, which is well representative of typical static bedding modulus values of actual USPs. Even greater benefits appear achievable with softer USPs in weak or just-tamped ballast conditions.

Topics: Railroads
Commentary by Dr. Valentin Fuster
2018;():V001T01A018. doi:10.1115/JRC2018-6205.

The lateral stability of continuous welded rail (CWR) track depends on a number of parameters which contribute to the progressive loss of the initial alignment of the track and its consequent predisposition to deform sideways, gradually or abruptly, with serious safety risks for both passengers and operators. Different types of initial lateral defect, in terms of shape and size, have been introduced by many authors in their numerical and analytical models, but essentially all of them can be traced back, apart from minor variations, to the model proposed by Andrew Kish, who hypothesized a misalignment defect shaped as a sine curve extended over one half-wavelength, with amplitude and wavelength values typical of US railroads. Moreover, all previous studies focused on introducing, into a geometrically perfect railway track, a single defect confined to a zone of finite dimensions and having a rather simple geometry that qualitatively approximates the real defect, with the aim of simplifying the calculation of the buckling temperatures associated with that geometry.

This paper first analyzes how the defect introduced into the track affects the critical temperature values. The analysis started with an artificially created defect: a lateral displacement was applied to the central transverse section of a geometrically perfect track in the absence of thermal loads, and the resulting deformed shape was calculated under the hypothesis of linear elastic behavior. After zeroing the corresponding stress field, this shape was assumed as the input geometry for the subsequent buckling calculation. The deformed shape so obtained, being of the Zimmermann type, has no geometrical discontinuities near the defect and naturally represents the defected geometry of the track, since its configuration depends on the flexural stiffness of the entire track in the lateral plane.

Afterwards, modeling was carried out taking into account the real behavior of the track after the loss of its rectilinear configuration: the defect was created by simulating the response of the track to a momentary lateral load, resulting, e.g., from train passages, large enough to cause a permanent displacement through the elastic-plastic response of the track. The deformed shape obtained in this way was used as the input geometry for the calculation of the buckling temperatures, first without resetting the stress field induced in the structure by the loading-unloading hysteresis cycle, and then considering the track free from internal stresses. The results show that both the numerical model in which the defect is introduced “plastically” and the one in which the track is free from internal stresses lead to more conservative results with respect to the risk of thermal buckling in CWR tracks.

A more realistic representation of a generic defected railway track was then pursued by considering an arbitrary number of defects distributed along the track, each characterized by different amplitude and wavelength values. The results show that the presence of multiple defects further reduces the safety factor against thermal track buckling. The paper ends with the proposal of an evaluation criterion that takes into account the effects of multiple alignment defects on the critical buckling temperatures of continuous welded rail tracks.
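
To make the multiple-defect idea concrete, here is a small sketch (illustrative only, not the authors' model) that superposes Kish-type half-wavelength sine misalignments, with assumed amplitudes and wavelengths, into a single lateral alignment profile:

```python
import math

def half_sine_defect(x, x0, amplitude, wavelength):
    """Kish-type misalignment: a sine curve of the given amplitude extended
    over one half-wavelength starting at x0; zero elsewhere (units: m)."""
    if x0 <= x <= x0 + wavelength / 2.0:
        return amplitude * math.sin(2.0 * math.pi * (x - x0) / wavelength)
    return 0.0

def track_profile(x, defects):
    """Superpose an arbitrary set of (x0, amplitude, wavelength) defects."""
    return sum(half_sine_defect(x, *d) for d in defects)

# Hypothetical defect population along 100 m of track:
defects = [(10.0, 0.015, 8.0), (40.0, 0.020, 10.0), (75.0, 0.010, 6.0)]
xs = [i * 0.5 for i in range(201)]  # stations every 0.5 m
peak = max(abs(track_profile(x, defects)) for x in xs)
print(f"peak lateral misalignment: {peak * 1000:.1f} mm")
```

A buckling analysis would then use such a profile as the initial geometry instead of a single idealized defect.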

Topics: Buckling, Railroads
Commentary by Dr. Valentin Fuster
2018;():V001T01A019. doi:10.1115/JRC2018-6206.

Laboratory tests are commonly used to investigate the performance and behavior of ballast in various conditions. For the most part, these investigations use freshly quarried rock prepared to meet the gradation requirements of AREMA or similar organizations as the ballast material. When worn ballast is desired, some studies run the fresh ballast through an apparatus, such as the one used in the LA Abrasion test, to create artificially worn ballast. Past investigations of the effects of angularity on the strength properties of granular materials have produced mixed results, so there is a clear need for further understanding of the behavior of abraded ballast. In this investigation, naturally abraded ballast, sourced from an active rail line, is used in laboratory testing to assess its mechanical properties and behavior. Based on gradation testing, this material was most likely an AREMA #4 graded material when first placed into the track bed. However, particle breakage during its time in track led to a broader gradation that included smaller ballast pieces. Triaxial testing was performed to determine the strength properties, stress-strain behavior, and volumetric strain behavior of clean ballast in varying moisture conditions. Box testing was used to investigate the settlement of the ballast under dynamic loading. The ballast in the box test was prepared to the same density as in the triaxial tests so that the results are comparable. The triaxial tests exhibited typical behavior, with the samples undergoing initial contraction followed by dilation. However, the deformations were larger than might be expected from a comparable angular ballast, while strengths do not appear to be significantly reduced. The box tests also showed typical results, though less total settlement occurred than might be expected.
The unexpected results from these tests may be explained by the broader gradation of the abraded ballast. It has been shown that a wider range of particle sizes in a granular material increases its strength, and this likely holds true for this particular ballast despite the increased level of abrasion. However, the more rounded particles are still less likely to interlock, resulting in the large deformations exhibited in the triaxial tests. The introduction of water does not severely reduce the strength, likely because the lack of fouling prevents large amounts of water from being held by the ballast. Abraded ballast appears to achieve the strengths needed to support railroad loads; however, the increased deformability of the material makes it less than ideal for use in track.

Commentary by Dr. Valentin Fuster
2018;():V001T01A020. doi:10.1115/JRC2018-6219.

Concrete ties have become a promising alternative to timber ties for freight lines with increased curvature, high annual traffic, and large axle loads, and they are also widely adopted on passenger lines. High strength (HS) concrete is the material of choice for fabricating prestressed concrete railroad ties. The higher strength of the concrete is directly related to a higher elastic modulus and thus increased rigidity. The combination of increased strength, rigidity, and material brittleness may lead to the development of high-amplitude stresses with steep gradients, which appears to be a common underlying cause of the premature cracking and deterioration observed in some concrete ties. Recognizing the current issues associated with the performance of concrete ties, and recalling the findings of research conducted almost fifteen years ago at the University of South Carolina (USC), a hypothesis was formulated that there is a potential benefit in introducing weathered granite aggregates into mix designs for railroad concrete ties. A high strength, yet lower rigidity, concrete will reduce the amplitude of the stress field and, equally important, will regularize the stress field, providing a smoother load distribution that diffuses stress concentrations. Consequently, the High Strength Reduced Modulus (HSRM) concrete improves cracking resistance and fatigue performance, thus extending the life of the tie. A comprehensive research program has been conducted at USC to identify the benefits of using HSRM concrete in ties. The research is based on experimental investigations and computer simulations at the material, component, and structural member levels. This work presents the details of the computer simulation studies pertaining to center binding conditions. Three-dimensional nonlinear Finite Element (FE) models have been developed for the HSRM and the “Standard” concrete ties.
Nonlinear material models based on damaged plasticity are implemented, and the concrete-steel bond interface is also modeled and discussed. These models were validated through comparisons with laboratory testing of prestressed concrete prisms and have shown excellent accuracy. Subsequently, a study of center binding conditions in tangent track was conducted. These studies showed that the HSRM concrete tie outperformed the Standard concrete tie in the benchmark tests by (i) showing a smoother stress distribution, (ii) delaying the initiation of cracks, and (iii) failing at higher ultimate loads. The analysis results are discussed and future recommendations are presented.

Topics: Concretes
Commentary by Dr. Valentin Fuster
2018;():V001T01A021. doi:10.1115/JRC2018-6256.

Railway ballast is a major structural component of railroad track that also facilitates the drainage of water. Particle breakage and abrasion due to dynamic loading and environmental impacts cause ballast to age and degrade. The finer materials generated by ballast degradation can adversely affect track stability, especially under wet conditions. This paper investigates, through laboratory testing, the effect of moisture on the behavior and performance of in-service ballast. The tested ballast samples were initially subjected to an artificial rain system as well as train loadings at the Facility for Accelerated Service Testing (FAST) at the Transportation Technology Center, Inc. (TTCI). The rainy test section experiment applied realistic dynamic freight train loads and continuously monitored the test sections to determine the effects of moisture and saturation conditions on the field performance trends of ballasted track. Accordingly, ballast samples at varying levels of degradation were collected from the test locations to investigate ballast gradations as well as strength and permeability characteristics under dry and wet conditions. Shear strength tests were performed using a large-scale triaxial test machine, known as the TX-24, to study the effects of ballast degradation on the strength of dry ballast. Materials finer than 3/8 in. (9.5 mm) were then collected and studied for moisture-density behavior using a modified Proctor type compactive effort. Shear strength samples with the same gradations and degradation levels were prepared and tested at moisture contents of the 3/8 in. (9.5 mm) fraction ranging from 3% to 9%, the latter being the optimum moisture content of these finer materials. The wet ballast triaxial test samples had strength values of only 38% to 65% of the dry strengths.
In addition to the strength tests, constant head permeability tests were conducted on the ballast samples; these demonstrated quite low, and in some cases negligible, horizontal flow through the ballast under static pressure heads at various hydraulic gradients.
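
As background on the test principle (with hypothetical numbers, not the paper's data), a constant head permeability test reduces to Darcy's law:

```python
def constant_head_permeability(Q, L, A, h, t):
    """Constant-head test (Darcy's law): k = Q*L / (A*h*t), where Q is the
    volume collected over time t under a head difference h across a sample
    of length L and cross-sectional area A (consistent units, e.g. cm, s)."""
    return Q * L / (A * h * t)

# Hypothetical test: 250 cm^3 collected in 60 s through a 30 cm sample of
# 700 cm^2 cross-section under a 15 cm head difference.
k = constant_head_permeability(Q=250.0, L=30.0, A=700.0, h=15.0, t=60.0)
print(f"hydraulic conductivity k = {k:.4f} cm/s")
```

Fouled ballast yields far smaller k values than clean ballast, which is what the negligible measured flows reflect.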

Commentary by Dr. Valentin Fuster
2018;():V001T01A022. doi:10.1115/JRC2018-6258.

This paper reports on ballast layer mesoscale behavior, tie-ballast interaction, and ballast-subgrade interaction under five crosstie support conditions, namely full support, lack of rail seat support, lack of center support, high center binding, and severe center binding. The Discrete Element Method (DEM), an effective technique for studying particulate unbound aggregate materials such as ballast, was adopted in this study. The DEM simulations comprised a one-tie-spacing geometry with approximately 11,000 polyhedral particles. The ballast gradation used in the DEM models followed the AREMA No. 3 and No. 4A specifications, and the shape properties of the ballast particles were consistent with field-collected samples. The pressure distributions along the tie-ballast interface under a rail seat load of 10 kips predicted by the DEM simulations were in good agreement with results backcalculated from laboratory tests, which validated the DEM models. Next, the DEM simulations considered rail seat loads of 20 kips and 25 kips. The predicted results indicated that support condition is a key factor in the normal stress distribution and force transmission within the ballast layer. Ballast particles in the shoulders and in areas with poor support exhibited low or negligible contact stresses. Extremely high normal stresses observed in some support conditions often exceeded the single-particle crushing load limit and thus would cause ballast particle breakage and layer degradation under repeated loading. Further, the tie-ballast pressure captured in some scenarios could exceed the maximum allowable pressure of 85 psi under a concrete tie in the AREMA standard. Finally, the pressure at the bottom of the ballast layer obtained from the DEM simulations was compared with the top-of-subgrade pressure calculated from analytical/empirical approaches such as the Talbot equation and the AREMA manual.
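
For reference, one common form of the Talbot equation estimates the vertical pressure at depth h (in inches) below the tie from a uniform tie-ballast pressure. The sketch below evaluates it for an assumed 85 psi tie-ballast pressure; the values are illustrative, not the paper's results:

```python
def talbot_subgrade_pressure(tie_pressure_psi, depth_in):
    """Talbot's empirical equation: pressure (psi) at depth h inches below
    the bottom of the tie, given a uniform tie-ballast pressure p_a:
    p_h = 16.8 * p_a / h**1.25."""
    return 16.8 * tie_pressure_psi / depth_in ** 1.25

# Assumed 85 psi tie-ballast pressure, swept over typical ballast depths:
for h in (6, 12, 18, 24):
    p = talbot_subgrade_pressure(85.0, h)
    print(f"h = {h:2d} in -> approx. {p:6.1f} psi at top of subgrade")
```

Comparing such empirical estimates with the DEM-predicted pressures at the bottom of the ballast layer is exactly the check the abstract describes.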

Commentary by Dr. Valentin Fuster
2018;():V001T01A023. doi:10.1115/JRC2018-6264.

Railway ballast deforms and deteriorates under repeated traffic loading. An increased rate of settlement decreases the useful life of ballast and contributes to geometry roughness and poor ride quality. Laboratory and field studies, as well as mechanics-based models, have shown that the fouling condition of the ballast has a profound effect on settlement and overall ballast life. Quantifying ballast settlement and the effect of maintenance at different stages of ballast life can define the State of Good Repair (SGR) for ballast. This paper presents an approach for predicting ballast life based on geotechnical principles and maintenance management philosophy. Over 200 revenue-service locations were studied to develop this model for different tracks, including mixed passenger and freight as well as freight-only traffic. The approach presented in this paper can be used to optimize ballast life-cycle costs through a maintenance management approach.

Commentary by Dr. Valentin Fuster

Rail Equipment Engineering

2018;():V001T02A001. doi:10.1115/JRC2018-6106.

“Next Generation Passenger Brake Equipment Development” (RTDF2013-4732) presented the background and development process to be followed in applying the Association of American Railroads (AAR) S-4200 Electronically Controlled Pneumatic (ECP) brake standards to Federal Railroad Administration (FRA) Tier I passenger equipment. During the ensuing five-year period, significant work has been performed to allow an ECP-equipped passenger train to enter an FRA revenue service demonstration on the Amtrak Keystone Service between Harrisburg, PA, and New York, NY. This paper summarizes the information contained in the 2013 paper and addresses the actual implementation path that was followed. The paper discusses the 26C pneumatic brake characterization testing required for emulation operation, performance standard differences for passenger equipment, and the safety analysis performed. It covers the interoperability testing, equipment modifications, static train testing, and dynamic train testing required to enter the revenue service demonstration phase. Finally, emulation service status, revenue service demonstration status, the issuance of the American Public Transportation Association (APTA) Passenger Rail Equipment Safety Standards (PRESS), and proposed changes to 49 CFR 238 are addressed.

Topics: Tunnels, Brakes
Commentary by Dr. Valentin Fuster
2018;():V001T02A002. doi:10.1115/JRC2018-6127.

In this work, the non-destructive ball indentation technique is applied to estimate the fracture toughness of three types of high-strength rail steel based on continuum damage mechanics. The damage parameter, defined in terms of the deterioration of the elastic modulus, is measured for the three rail steels using a loading-unloading smooth tensile test, from which a ductile damage model is calibrated to determine the critical damage parameter at the onset of fracture. Meanwhile, an instrumented ball indentation test is conducted on the three rail steels to generate damage as a function of contact depth under indentation compression. The critical damage parameter from the smooth specimen is then applied to the indentation test to determine the critical contact depth for calculating the indentation fracture toughness, based on the concept of indentation energy to fracture. Results show that although the magnitude of the so-determined indentation fracture toughness is greater than that of the corresponding mode I critical stress intensity factor (KIc) measured using pre-cracked single-edge-notched bend (SENB) specimens, the former correctly predicts the ranking of the KIc values among the three rail steels.
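
A minimal sketch of the modulus-degradation damage measure described above (the moduli and critical value below are hypothetical; the paper's calibrated damage model is more involved):

```python
def damage_parameter(E0, E_damaged):
    """Continuum damage parameter D = 1 - E_D / E_0, computed from the
    degraded unloading modulus in a loading-unloading tensile test."""
    return 1.0 - E_damaged / E0

# Hypothetical unloading moduli (GPa) measured at increasing plastic strain:
E0 = 210.0
unloading_moduli = [210.0, 198.0, 185.0, 171.0, 158.0]
damage = [damage_parameter(E0, E) for E in unloading_moduli]

D_critical = 0.20  # assumed critical damage at the onset of fracture
onset = next(i for i, D in enumerate(damage) if D >= D_critical)
print([f"{D:.3f}" for D in damage], "-> fracture onset at cycle", onset)
```

In the paper's approach, the same critical D is then mapped onto the indentation damage-versus-contact-depth curve to find the critical contact depth.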

Commentary by Dr. Valentin Fuster
2018;():V001T02A003. doi:10.1115/JRC2018-6132.

Cold-formed stainless steel is often used in the design and fabrication of passenger rail vehicle car body structures. It is attractive because of its corrosion resistance and its high yield strength in the quarter- and half-hard conditions. One complication in designing with this material is its anisotropic properties: the yield strength and stress-strain curve depend on whether the material is loaded in tension or compression and on the loading direction relative to the rolling direction. This phenomenon has been known for some time in the rail industry and is also well known in the general structural community. This paper reviews the mechanical and metallurgical properties of cold-formed stainless steel and current design approaches related to buckling of members made from this product form.

Commentary by Dr. Valentin Fuster
2018;():V001T02A004. doi:10.1115/JRC2018-6135.

In 2006 an automatic lube oil filtration system with an automatic backflushing filter and a centrifuge for diesel locomotives was presented at the ASME Spring Technical Conference [1]. The filter cleans itself continuously and the system can be used instead of conventional disposable paper filters to reduce servicing requirements, improve oil cleanliness and reduce the oil system’s exposure to contaminants.

In 2015 at the ASME Fall Technical Conference, a development of the system was presented that introduced an electric pump to boost both centrifuge and automatic filter performance at lower engine speeds, as seen during locomotive idling or coasting.

The next development addresses the automatic filter mesh, which has not improved substantially over the last 20 years.

The main challenge in improving the mesh for a backflushing filter has been balancing the filtration grade with self-cleaning performance. With a finer mesh that catches ever smaller particles, the filter element tends to become more difficult to backflush. For a given wire diameter, the free flow area also decreases as the openings become smaller, reducing the maximum mesh loading. Reducing the wire diameter increases the free flow area, but makes the mesh more fragile and difficult to weld.
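
The free-flow-area tradeoff described above can be quantified for an idealized square-weave mesh (a geometric illustration with assumed dimensions, not the manufacturer's actual mesh data):

```python
def open_area_fraction(opening, wire_d):
    """Fraction of free flow area for an idealized square-weave wire mesh:
    (w / (w + d))**2 for opening w and wire diameter d (same units)."""
    return (opening / (opening + wire_d)) ** 2

# Keeping the wire diameter fixed while refining the mesh shrinks the free area:
for w_um in (100.0, 50.0, 25.0):
    frac = open_area_fraction(w_um, 50.0)
    print(f"{w_um:5.0f} um opening -> {frac:.1%} open area")
```

This is why simply weaving a finer mesh from the same wire reduces the maximum mesh loading, as the paragraph above notes.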

A recent advancement in the mesh design now allows the automatic filter to filter the oil to a much finer degree than was previously possible while maintaining high self-cleaning performance. The filtration performance was evaluated by using the multi-pass method according to ISO 16889, while the backflushing performance was evaluated on our in-house test stand. Currently these elements are being field tested.

Being able to filter and separate much smaller particles is expected to reduce long term engine wear and, in certain cases, improve oil life.

Topics: Filtration, Filters
Commentary by Dr. Valentin Fuster
2018;():V001T02A005. doi:10.1115/JRC2018-6147.

An important consideration when operating a fleet of passenger rail consists is the remaining service life of both the car structures and the trucks. Agencies may undertake such studies when considering a fleet-wide capital improvement program, ranging from minor aesthetic upgrades to major system replacements and interior reconfigurations. With this in mind, the owner needs to determine the remaining fatigue life of the individual cars and trucks within the fleet. This paper describes the fitness-for-service (FFS) assessment performed on railcars and trucks and gives an example of the method applied in practice.

To establish the current condition of the fleet, an initial structural and service history review was undertaken. In addition, nondestructive examinations (NDE) of a sample of cars and trucks were performed to investigate any regions that had experienced damage over the years of service. After the baseline condition of the cars and trucks was determined, finite element analyses (FEA) were performed on both to help locate the potential fatigue-critical regions. Strain gages and accelerometers were then installed on both cars and trucks for field testing. Multiple runs of in-service testing were performed, and a typical revenue-service fatigue life of both the cars and trucks was calculated based on an S-N approach. Based on the calculated fatigue life and the current accumulated consist mileage, the remaining car and truck service lives were determined. For regions with known flaws, more detailed fracture-mechanics-based crack growth analyses were developed to determine critical flaw sizes and the propagation times from the known initial flaw sizes to critical size.
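
As a sketch of the S-N accumulation logic behind such a remaining-life estimate (the Basquin constants, stress histogram, and consumed-life fraction below are all hypothetical, not the study's values):

```python
def sn_cycles_to_failure(stress_amplitude, A=1.0e12, m=3.0):
    """Basquin-type S-N curve N = A * S**(-m); A and m are assumed
    material constants, S in MPa."""
    return A * stress_amplitude ** (-m)

def miner_damage_per_mile(stress_histogram):
    """Miner's rule: sum n_i / N_i over a rainflow-counted stress-range
    histogram (cycles per revenue mile from strain-gage data)."""
    return sum(n / sn_cycles_to_failure(s) for s, n in stress_histogram)

# Hypothetical rainflow histogram: (stress amplitude MPa, cycles per mile)
histogram = [(40.0, 25.0), (60.0, 6.0), (90.0, 0.8)]
d_per_mile = miner_damage_per_mile(histogram)
remaining = (1.0 - 0.35) / d_per_mile  # assume 35% of life already consumed
print(f"estimated remaining life: {remaining:,.0f} miles")
```

The accumulated consist mileage fixes the consumed-life fraction; dividing the remaining damage budget by the per-mile damage rate gives the remaining service life.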

Results of the FFS assessment provide information on the susceptible areas within the car structure and trucks. Depending on those results, decisions can be made on the required inspections, repairs, or decommissioning necessary to operate the fleet in the short term, while also providing valuable insight into long term fleet planning.

Commentary by Dr. Valentin Fuster
2018;():V001T02A006. doi:10.1115/JRC2018-6192.

Freight railroad classification yards have been compared to large-scale manufacturing plants, with inbound trains as the inputs and outbound trains as the outputs. Railcars often take up to 24 hours to be processed through a railyard due to the need for manual inbound inspection, car classification, manual outbound inspection, and other intermediate processes. Much of the inspection and repair process has historically been completed manually with handwritten documents, and until recently car inspections were rarely documented unless repairs were required. Currently, when a defect is detected in the yard, the railcar inspector must complete a “bad order” form that is adhered to each side of the car; this process may take up to ten minutes per bad order. To reduce labor costs and improve efficiency, asset management technology and Internet-of-Things (IoT) frameworks can now be developed to reduce the labor time needed to record bad orders, increase inspection visibility, and provide the opportunity to implement analytics and cognitive insights to optimize worker productivity and facilitate condition-based maintenance. The goal of this project is to develop a low-cost prototype electronic freight car inspection tracking system for small-scale (short line and regional) railroad companies. This system allows car inspectors to record mechanical inspection data using a ruggedized mobile platform (e.g., a tablet or smartphone). These data may then be used to improve inspection quality and efficiency as well as reduce inspection redundancy. Data collection involves two approaches. The first is an Android-based mobile application that electronically records and stores inspection data using a smartphone or rugged tablet. This automates the entire bad order form process by connecting to IBM’s Bluemix Cloudant NoSQL database, allowing the information to be accessed by railroad mechanical managers or car owners anywhere, at any time.
The second approach is a web-based Machine-to-Machine (M2M) system using Bluetooth Low Energy (BLE) beacon technology to store car inspection data on a secure website and/or a Cloudant database. This approach introduces the freight car inspection process to the “physical web,” and it offers numerous capabilities that are not possible with the current radio frequency identification (RFID) system used for freight car tracking. By connecting railcars to the physical web, railcar specifications and inspection data can be updated in real time and made universally available. The paper concludes with an evaluation of the benefits and drawbacks of each approach. The evaluation suggests that although some railroads may benefit immediately from these technological solutions, others may be better off with the current manual method until IoT and M2M become more widely accepted within the railroad industry. The primary value of this analysis is to provide a decision framework for railroads seeking to implement IoT systems in their freight car inspection practices. In addition, the mobile app and IoT source code developed for this project will be open source to promote future collaboration within the industry.
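
To illustrate the kind of document such a system might store (the field names below are invented for illustration; the project's actual Cloudant schema is not specified here):

```python
import datetime
import json
import uuid

def make_bad_order_record(car_id, inspector, defect_code, location, notes=""):
    """Build a hypothetical JSON document for an electronic bad-order record
    in a Cloudant-style NoSQL store (all field names are illustrative)."""
    return {
        "_id": uuid.uuid4().hex,
        "type": "bad_order",
        "car_id": car_id,
        "inspector": inspector,
        "defect_code": defect_code,  # e.g. an AAR why-made code
        "location": location,        # yard / track position
        "status": "open",
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "notes": notes,
    }

record = make_bad_order_record("ABCX 12345", "J. Smith", "65",
                               "Yard 3, Track 7",
                               notes="thin flange, left wheel, B-end")
print(json.dumps(record, indent=2))
```

Because each record is a self-describing JSON document, it can be queried by car, inspector, defect code, or status from anywhere, which is the visibility gain over the paper form.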

Commentary by Dr. Valentin Fuster
2018;():V001T02A007. doi:10.1115/JRC2018-6200.

The Federal Railroad Administration (FRA) has partnered with Metro-North Railroad (MNR), Long Island Rail Road (LIRR) and New York & Atlantic Railway (NYA) to promote operations safety through implementation of wayside detection systems and technologies. Under this partnership, opportunities were identified to enhance safety operations through the analysis of existing and planned wheel impact load detector (WILD) installations and operational procedures, including recommendations for future wayside detection systems implementations on these networks.

MNR has had a four-detector WILD system operating near Grand Central Terminal since 2010. This paper includes an analysis of this system and its impact on rolling stock maintenance. The analysis shows that the WILD system has gradually reduced the annual average number of high impact load wheels from 0.32 per car in 2010 to 0.27 per car in 2015.

Review of data from the detectors on the four tracks at the WILD site shows that train operation below a certain speed has a significant effect on the detection rate and should be a major consideration when selecting a location for a WILD installation.

The data show that the highest number of high impact wheels is detected in November, potentially due to leaves on the rails during the fall season.

Our analysis shows that the trigger threshold currently used at MNR, a dynamic ratio (DR) ≥ 3, provides a well-reasoned balance between the available fleet, maintenance demand, and maintenance shop capacity. At this threshold setting, the number of wheels detected per million wheel passages is quite small, indicating a well-maintained fleet.

Topics: Sensors, Stress, Railroads, Wheels
Commentary by Dr. Valentin Fuster
2018;():V001T02A008. doi:10.1115/JRC2018-6201.

Modern freight locomotives are built to crashworthiness standards defined in Subpart D of Code of Federal Regulations (CFR) Part 229 and in Association of American Railroads (AAR) Standard S-580 [1, 2]. The first freight locomotive crashworthiness standard, AAR S-580, was released in 1989 and became effective in August 1990 [3]. Locomotives manufactured before 1990, so-called “legacy” locomotives, lack the crew protection offered by modern “crashworthy” locomotives [4]. While legacy locomotives, such as narrow-nose designs manufactured before the 1990s, are generally relegated to non-lead service on Class I railroads, these units are sometimes used in mainline service. Even though the number of narrow-nose locomotives has declined, a risk of crew injuries and fatalities remains for the next several years.

The scope of this study is the following:

• The assessment of injuries and fatalities from collisions involving legacy locomotives,

• The crashworthiness evaluation of the collision posts of a sample legacy (narrow-nose) locomotive assembled in the 1960s and rebuilt in 2001,

• Analysis of alternative collision post designs to meet and exceed the collision post requirements in the 2001 version of AAR S-580.

Commentary by Dr. Valentin Fuster
2018;():V001T02A009. doi:10.1115/JRC2018-6202.

The Federal Railroad Administration (FRA) has partnered with Metro-North Railroad (MNR), Long Island Rail Road (LIRR) and New York & Atlantic Railway (NYA) to enhance operational safety through the implementation of wayside detection systems.

Currently, MNR has a four-track Wheel Impact Load Detector (WILD) system that has been operating since 2010 near the Grand Central Terminal. This paper discusses a Receiver Operating Characteristic (ROC) analysis of this existing WILD system in conjunction with the wheel maintenance practices since 2010.

Currently, MNR’s operating procedures require a car with wheel(s) exhibiting a vertical peak load/mean load ratio, called the dynamic ratio (DR), of ≥3.0 to be shopped for repair. The analysis, using a 30-day repair window after detection, shows that 84% of the cars shopped for wheel(s) with DR≥3.0 required valid maintenance repairs.

The minimum number of total false records (false positive and false negative records combined) was observed within the DR range of 2.7–2.8 when considering wheel flat defects only. An analysis of the false negative records inclusive of both flat and shell spots showed that the minimum number of false records dropped slightly, to a DR range of 2.6–2.7. The reported ROC analysis shows that MNR’s current DR≥3.0 trigger for inspection and maintenance actions is reasonable.
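The threshold trade-off described above can be illustrated with a short sketch that sweeps candidate DR cut-offs and counts total false records against repair outcomes. This is an illustrative reconstruction only; the records and candidate thresholds below are hypothetical, not MNR data.

```python
# Illustrative sweep over dynamic-ratio (DR) trigger thresholds.
# Each record is (measured DR, whether a valid repair was needed).
# All values are hypothetical, not MNR data.

def count_false_records(records, threshold):
    """Total false positives (flagged, no valid repair) plus
    false negatives (not flagged, repair was needed)."""
    fp = sum(1 for dr, needed in records if dr >= threshold and not needed)
    fn = sum(1 for dr, needed in records if dr < threshold and needed)
    return fp + fn

def best_threshold(records, candidates):
    """Candidate DR cut-off minimizing total false records."""
    return min(candidates, key=lambda t: count_false_records(records, t))

records = [(3.4, True), (3.1, True), (2.9, True), (2.75, True),
           (2.65, False), (2.5, False), (2.3, False), (3.2, False)]
candidates = [2.6, 2.7, 2.8, 3.0]
print(best_threshold(records, candidates))  # 2.7 on this sample
```

With real maintenance records, the same sweep would locate the kind of minimum-false-record band that the analysis above reports.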

Topics: Sensors , Stress , Railroads , Wheels
Commentary by Dr. Valentin Fuster
2018;():V001T02A010. doi:10.1115/JRC2018-6209.

Polymers, like all other materials, develop hysteresis heating due to their viscoelastic response, or internal friction. The hysteresis, or phase lag, occurs when cyclic loading is applied, leading to the dissipation of mechanical energy. Hysteresis heating is induced by the internal heat generation of the material, which occurs at the molecular level as the material is disturbed cyclically. Understanding the hysteresis heating of the railroad bearing elastomer suspension element during operation is essential to predict its dynamic response and structural integrity, as well as the thermal behavior of the railroad bearing assembly. The main purpose of this ongoing study is to investigate the effect of internal heat generation in the thermoplastic elastomer suspension element on the thermal behavior of the railroad bearing assembly. This paper presents an experimentally validated finite element thermal model that can be used to obtain temperature distribution maps of complete bearing assemblies in service conditions. The commercial software package ALGOR 20.3™ is used to conduct the thermal finite element analysis. Different internal heating scenarios are simulated with the purpose of determining the bearing suspension element and bearing assembly temperature distributions during normal and abnormal operating conditions. Preliminary results show that a combination of the ambient temperature, bearing temperature, and frequency of loading can produce elastomer pad temperature increases above ambient of up to 125°C when no thermal runaway is present. The highest temperature increases occur at higher loading frequencies, such as 50 Hz, allowing the internal heat generation to significantly impact the temperature distribution of the suspension pad. This paper provides several thermal maps depicting normal and abnormal operating conditions and discusses the overall thermal management of the railroad bearing assembly.

Commentary by Dr. Valentin Fuster
2018;():V001T02A011. doi:10.1115/JRC2018-6214.

Prevention of railroad bearing failures, which may lead to catastrophic derailments, is a central safety concern. Early detection of railway component defects, specifically bearing spalls, will improve overall system reliability by allowing proactive maintenance cycles rather than costly reactive replacement of failing components. A bearing health monitoring system will provide timely detection of flaws. However, absent a well-verified model for defect propagation, detection can only be used to trigger an immediate component replacement. The development of such a model requires that the spall growth process be mapped out by accumulating associated signals generated by various-sized spalls. The addition of this information to an integrated health monitoring system will minimize operational disruption and maintain maximum accident prevention standards, enabling timely and economical replacement of failing components. An earlier study by the authors focused on bearing outer ring (cup) raceway defects. The developed model predicts that any cup raceway surface defect (i.e. spall), once reaching a critical size (spall area), will grow according to a linear correlation with mileage. The work presented here investigates spall growth within the inner rings (cones) of railroad bearings as a function of mileage. The data for this study were acquired from defective bearings that were run under various load and speed conditions utilizing specialized railroad bearing dynamic test rigs owned by the University Transportation Center for Railway Safety (UTCRS) at the University of Texas Rio Grande Valley (UTRGV). The experimental process is based on a testing cycle that allows continuous growth of railroad bearing defects until one of two conditions is met: either the defect has grown to the largest size that does not jeopardize the safe operation of the test rig, or the change in area of the spall is less than 10% of its size prior to the start of the test cycle.
The initial spall size is randomly distributed, as it depends on the originating defect’s depth, size, and location on the rolling raceway. Periodic removal and disassembly of the railroad bearings was carried out for inspection and defect size measurement, along with detailed documentation. Spalls were measured using optical techniques coupled with digital image analysis, as well as with a manual coordinate measuring instrument, with the resulting field of points manipulated in MATLAB™. Castings of the spalls were made using low-melting, zero-shrinkage bismuth-based alloys so that a permanent record of the spall geometry and its growth history could be retained. The main result of this study is a preliminary model for spall growth, which can be coupled with bearing condition monitoring tools to allow economical and effective scheduling of proactive maintenance cycles that aim to mitigate derailments and reduce unnecessary train stoppages and associated costly delays on busy railways.
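The linear growth-versus-mileage correlation described above can be sketched as an ordinary least-squares fit; the mileage and spall-area samples below are hypothetical, not UTCRS measurements.

```python
# Least-squares line for spall area vs. mileage past the critical size.
# Sample data are hypothetical, for illustration only.

def fit_linear(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

mileage = [0, 10, 20, 30, 40]       # thousand miles after critical size
area = [1.0, 1.5, 2.0, 2.5, 3.0]    # spall area, cm^2
slope, intercept = fit_linear(mileage, area)
print(slope)  # growth rate on this sample: 0.05 cm^2 per thousand miles
```

A fit of this form, applied to the measured spall areas, is the kind of preliminary growth model the study describes.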

Topics: Bearings , Railroads
Commentary by Dr. Valentin Fuster
2018;():V001T02A012. doi:10.1115/JRC2018-6218.

Wayside hot-box detectors (HBDs) are devices that are currently used to monitor bearing, axle, and brake temperatures as a way of assessing railcar component health and indicating any possible overheating or abnormal operating conditions. Conventional hot-box detectors are set to alarm whenever a bearing is operating at a temperature that is 94.4°C (170°F) above ambient, or when there is a 52.8°C (95°F) temperature difference between two bearings that share an axle. These detectors are placed adjacent to the railway and utilize an infrared sensor to obtain temperature measurements. Bearings that trigger HBDs or display temperature trending behavior are removed from service for disassembly and inspection. Upon teardown, bearings that do not exhibit any discernible defects are labeled as “non-verified”. These non-verified removals may be due to the many factors that can affect HBD measurements, such as the location of the infrared sensor and the class of the bearing, among other environmental factors.
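The alarm criteria quoted above reduce to a simple rule; a minimal sketch follows (the function name and interface are ours, not an actual HBD implementation):

```python
# Conventional HBD alarm logic using the thresholds quoted above:
# absolute: bearing >= 94.4 C above ambient
# differential: >= 52.8 C between the two bearings sharing an axle
# (function and interface are illustrative, not a real HBD implementation)

def hbd_alarm(ambient_c, left_bearing_c, right_bearing_c,
              abs_limit=94.4, diff_limit=52.8):
    """Return True if either the absolute-above-ambient or the
    axle-differential criterion is met."""
    hottest = max(left_bearing_c, right_bearing_c)
    if hottest - ambient_c >= abs_limit:
        return True
    if abs(left_bearing_c - right_bearing_c) >= diff_limit:
        return True
    return False

print(hbd_alarm(25.0, 70.0, 60.0))   # neither criterion met
print(hbd_alarm(25.0, 125.0, 60.0))  # absolute and differential both met
```

The IR measurement errors discussed below matter precisely because these inputs feed such threshold rules directly.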

A field test was performed along a route comprising more than 483 km (300 mi) of track containing 21 wayside hot-box detectors. Two freight cars, one fully loaded and one empty, and one instrumentation car pulled by a locomotive were used in this field test. A total of 16 bearings (14 Class F and 2 Class K) were instrumented with K-type bayonet thermocouples to provide continuous temperature measurement. The data collected from this field test were used to perform a systematic study in which the HBD IR sensor data were compared directly to the onboard thermocouple data. The analyses determined that, in general, HBDs tend to overestimate Class K bearing temperatures more frequently than Class F bearing temperatures. Additionally, the temperatures of some bearings were underestimated by as much as 47°C (85°F). Furthermore, the HBD data exhibited some false trending events that were not seen in the temperature histories recorded by the bayonet thermocouples. The findings from the field test suggest that HBDs may inaccurately report bearing temperatures, which may contribute to the increased percentage of non-verified bearing removals.

To further investigate the accuracy of the wayside detection systems, a dynamic test rig was designed and fabricated by the University Transportation Center for Railway Safety (UTCRS) research team at the University of Texas Rio Grande Valley (UTRGV). A mobile infrared sensor was developed and installed on the dynamic tester in order to mimic the measurement behavior of an HBD. The infrared temperature measurements were compared to contact thermocouple and bayonet temperature measurements taken on the bearing cup surface. The laboratory-acquired data were compared to actual field test data, and the analysis reveals that the trends are in close agreement. The large majority of temperature measurements taken using the IR sensor underestimated the bearing temperature, with a distribution similar to that of the data collected by the HBDs in field service.

Topics: Sensors
Commentary by Dr. Valentin Fuster
2018;():V001T02A013. doi:10.1115/JRC2018-6231.

This paper examines alternative, improved materials for truck castings. The first part looks at steels that would not require post-weld heat treatment after repair welding. The second part investigates specific applications for temperatures below −50 °F.

The rapid, dramatic temperature changes that occur during welding can form brittle phases or cause cracking in some steels. The weld region has three areas of differing structure: the weld metal, the heat-affected zone (HAZ), and the parent metal. The maximum hardness occurs in the HAZ and is the limiting factor in determining weldability. An ultraweldable steel is a steel that does not require post-weld heat treatment. Many steels were evaluated for their chemical composition and susceptibility to cracking; those that were likely to form brittle phases were eliminated from consideration. Four low alloy steels and one carbon steel were selected as being potentially ultraweldable.
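Screening by chemical composition is commonly done with a carbon-equivalent formula such as the IIW expression CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15; the abstract does not state which criterion was used, so the sketch below is an assumption for illustration, with made-up compositions and a commonly quoted screening limit.

```python
# Illustrative weldability screen using the IIW carbon-equivalent
# formula. The abstract does not specify the actual criterion used;
# the compositions and the 0.40 wt% limit here are example values.

def carbon_equivalent(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW carbon equivalent from weight-percent composition."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

def likely_weldable(composition, limit=0.40):
    """Flag a steel as a candidate if CE is below the chosen limit."""
    return carbon_equivalent(**composition) < limit

low_alloy = {"c": 0.12, "mn": 1.2, "cr": 0.2, "mo": 0.1}
print(round(carbon_equivalent(**low_alloy), 3))  # 0.38
print(likely_weldable(low_alloy))                # True
```

A screen of this kind only shortlists candidates; as the abstract describes, actual weld trials and microhardness surveys are still required.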

To evaluate the ultraweldability of these steels, groove welds and two types of spot welds were made on the five candidate alloys. The welds were then sectioned and prepared for microstructural and microhardness evaluation. Microhardness readings were taken across the weld, spanning the weld, HAZ, and base material.

Three of the steels formed hard, brittle phases during most of the tests. This indicates these materials are not ultraweldable. Two of the low alloy steels did meet the requirements for ultraweldability. Future work in this area would include producing truck castings from these materials.

At low temperatures, plain carbon steels, such as the types used in truck castings, can fracture in a brittle manner with no visible deformation. The material property of deforming without fracture is toughness, or ductility. Using materials that retain their toughness at low temperatures could prevent brittle failures of truck castings. Six grades of steel currently used in low temperature applications were selected for this research.

Specimens from each of the six materials were evaluated for tensile properties at multiple temperatures. Charpy impact specimens were tested at temperatures ranging from −20 °F to −120 °F. The measured room temperature tensile properties of each of the six steels met or exceeded the requirements for Grade B+, the steel currently used for truck castings. Four of the steels showed impact energies far above that of the current Grade B+, and two of them gave consistently higher impact energies than all the others. These would be the best candidates for future work in this area. Future work would involve producing full-size truck castings from one or more of these alloys, then testing them for fatigue performance, preferably at low temperatures.

Topics: Trucks
Commentary by Dr. Valentin Fuster
2018;():V001T02A014. doi:10.1115/JRC2018-6243.

Research to develop new technologies for increasing the safety of passengers and crew in rail equipment is being directed by the Federal Railroad Administration’s (FRA’s) Office of Research, Development, and Technology. Crash energy management (CEM) components which can be integrated into the end structure of a locomotive have been developed: a push-back coupler and a deformable anti-climber. These components are designed to inhibit override in the event of a collision. The results of vehicle-to-vehicle override, where the strong underframe of one vehicle, typically a locomotive, impacts the weaker superstructure of the other vehicle, can be devastating. These components are designed to improve crashworthiness for equipped locomotives in a wide range of potential collisions, including collisions with conventional locomotives, conventional cab cars, and freight equipment.

Concerns have been raised in discussions with industry that push-back couplers may trigger prematurely, and may require replacement due to unintentional activation as a result of service loads. Push-back couplers (PBCs) are designed with trigger loads meant to exceed the expected maximum service loads experienced by conventional couplers. Analytical models are typically used to determine these required trigger loads. Two sets of coupling tests have been conducted to demonstrate this, one with a conventional locomotive equipped with conventional draft gear and coupler, and another with a conventional locomotive retrofit with a push-back coupler. These tests will allow a performance comparison of a conventional locomotive with a CEM-equipped locomotive during coupling. In addition to the two sets of coupling tests, car-to-car compatibility tests of CEM-equipped locomotives, as well as a train-to-train test are also planned. This arrangement of tests allows for evaluation of the CEM-equipped locomotive performance, as well as comparison of measured with simulated locomotive performance in the car-to-car and train-to-train tests.

The coupling tests of a conventional locomotive have been conducted, the results of which compared favorably with pre-test predictions. This paper describes the results of the CEM-equipped locomotive coupling tests. In this set of tests, a moving CEM locomotive was coupled to a standing cab car. The primary objective was to demonstrate the robustness of the PBC design and determine the impact speed at which PBC triggering occurs. The coupling speed was increased for each subsequent test until the PBC triggered. The coupling speeds targeted for the test were 2 mph, 4 mph, 6 mph, 7 mph, 8 mph, and 9 mph. The coupling speed at which the PBC triggered was 9 mph. The damage observed resulting from the coupling tests is described. Prior to the tests, a lumped-mass model was developed for predicting the longitudinal forces acting on the equipment and couplers. The test results are compared to the model predictions. Next steps in the research program, including future full-scale dynamic tests, are discussed.
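A lumped-mass model of the kind used for the pre-test predictions can be sketched, in its simplest form, as two masses joined by a linear spring representing the draft gear; the stiffness, masses, speed, and integration scheme below are our assumptions, not the actual model or test parameters.

```python
# Two-mass coupling sketch: a moving locomotive couples to a standing
# car through a linear spring (draft-gear stiffness). Semi-implicit
# Euler integration; all parameter values are hypothetical.

def peak_coupler_force(m1, m2, v0, k, dt=1e-4, t_end=1.0):
    """Integrate the two-mass/spring system and return the peak
    spring (coupler) force in newtons."""
    x1 = x2 = 0.0
    v1, v2 = v0, 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        force = k * (x1 - x2)      # compression positive
        if force < 0.0:
            force = 0.0            # no tension before coupler lock
        a1 = -force / m1
        a2 = force / m2
        v1 += a1 * dt
        v2 += a2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
        peak = max(peak, force)
    return peak

# e.g. a 120 t locomotive coupling to a 50 t car at ~4 mph (1.8 m/s)
print(peak_coupler_force(m1=120e3, m2=50e3, v0=1.8, k=5e6))
```

Sweeping `v0` over the test speeds is the lumped-mass analogue of the coupling-speed ladder described above; peak force scales with closing speed for a linear spring.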

Commentary by Dr. Valentin Fuster
2018;():V001T02A015. doi:10.1115/JRC2018-6265.

Factual evidence from locomotive event data recorders (EDR), locomotive image data recorders, accident site surveys, witness marks, rail equipment, track structure, photographs, video cameras, AEI readers, hot wheel or hot bearing detectors, wayside signal bungalows, train consist documents, and radio communication is integrated, validated, and visualized in a three-dimensional model environment. The goal is to build a physics-based, data-driven model of train position as a function of time to enhance the documentation, investigation, understanding, and analysis of in-service train derailments. Methods to construct, validate, and interrogate time-accurate, interactive visualizations of train movements for partial and complete train consists are discussed and demonstrated. In-service freight train derailments that occurred in Hoxie, Arkansas (offset frontal collision between opposing freight trains), Casselton, North Dakota (unit grain train derailment with car fouling opposing mainline track and subsequent crude oil unit train head-on collision), and Graettinger, Iowa (unit ethanol train derailment) are used to illustrate the accident reconstruction method. Similar vehicle path reconstructions for recent highway, aviation, and marine investigations are also presented.

Commentary by Dr. Valentin Fuster

Signal and Train Control Engineering

2018;():V001T03A001. doi:10.1115/JRC2018-6114.

Communications Based Train Control (CBTC) has matured over the last 30 years to be the premiere choice for signaling of heavy rail metros throughout the world. In the United States, CBTC has lagged behind in use for a number of reasons. This paper researches these reasons and offers a solution that can benefit both the supply community and the end user, such that CBTC can be successfully deployed at lower cost. Interoperability as a driver for this innovation will be investigated at all possible levels of operations, management, and interfaces, providing a fully integrated system.

Commentary by Dr. Valentin Fuster
2018;():V001T03A002. doi:10.1115/JRC2018-6116.

As the existing communication technologies, which for about a decade have supported railway operations and the major transition from conventional to modern communication-based signaling, approach the limits of their performance capabilities, the railway industry strives to migrate to a proven solution that supports new and diverse broadband services while reducing cost. Long Term Evolution (LTE) radio access technology has been globally accepted because of its unparalleled performance, off-the-shelf convenience, and well-developed standardization. An LTE solution, however, brings both opportunities and challenges to a Data Communication System (DCS) underlying a Communication-Based Train Control (CBTC) system. The presented research targets one of the main LTE deployment challenges: spectrum availability. To cope with the increasing scarcity of spectrum resources, LTE/LTE-A has envisaged an extension to the unlicensed band, which is already heavily populated with incompatible legacy systems such as the immensely popular Wi-Fi networks. In this paper, a design framework is established to dimension the LTE system according to the CBTC DCS sub-system level requirements. Furthermore, the LTE/Wi-Fi coexistence performance is evaluated and studied in the context of a train control application by using a Markov chain analysis approach.
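The Markov chain analysis can be illustrated with a minimal sketch: given a transition matrix over channel-occupancy states, the stationary distribution gives each system's long-run share of airtime. The three states and the transition probabilities below are hypothetical, not the paper's actual chain.

```python
# Stationary distribution of a small channel-occupancy Markov chain.
# States and transition probabilities are hypothetical.

def stationary(P, iters=10_000):
    """Power-iterate a row-stochastic matrix to its stationary vector."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# States: 0 = channel idle, 1 = Wi-Fi transmitting, 2 = LTE-U transmitting
P = [[0.2, 0.5, 0.3],
     [0.6, 0.3, 0.1],
     [0.7, 0.1, 0.2]]
pi = stationary(P)
print([round(p, 3) for p in pi])
```

The stationary probabilities of the "transmitting" states estimate each system's airtime share, which can then be checked against the DCS latency and throughput requirements.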

Commentary by Dr. Valentin Fuster
2018;():V001T03A003. doi:10.1115/JRC2018-6123.

This paper discusses two real-world challenges faced by Communications-Based Train Control (CBTC) testing programs.

a) Why is the performance of a CBTC system prone to frequent failures during the first few months of field tests, even after a successful complete-system Factory Acceptance Test (FAT)? On some projects, it may be months between a successful FAT and the first operation in CBTC mode.

b) How accurately and efficiently can the root cause of failures during the field tests be identified, and how could a test program be improved to ensure a smooth transition from field testing to revenue service?

Unlike commissioning a conventional signaling system, where the system works well in revenue service once circuit breakdown and operational testing are completed, CBTC projects experience an additional round of ‘surprises’ when the system is put in service after months or years of testing [1]. This observation holds for both new lines and signaling upgrade projects; it should be noted that upgrade projects are more prone to ‘surprises’ due to limited track access, which reduces testing time. Even though the final test results prior to revenue service indicate no ‘showstoppers’, once the system is placed in service it is common to unearth major issues that impact sustainable revenue operation. Though this often comes as a surprise to transit agencies installing CBTC for the first time, it is almost accepted as fate by most experienced CBTC engineers. This paper describes the tests performed prior to placing a system in revenue service and analyzes some of the issues experienced. Detailed information regarding the field tests can be found in [2]. Possible mitigations used by CBTC suppliers and transit agencies are described, along with likely reasons for such a predictable pattern on CBTC projects. Finally, ideas for further improving these mitigations to minimize the risk of major system issues are presented.

Commentary by Dr. Valentin Fuster
2018;():V001T03A004. doi:10.1115/JRC2018-6142.

American railroads are planning to complete implementation of their Positive Train Control (PTC) systems by 2020, with the primary safety objectives of avoiding inter-train collisions and train derailments and ensuring railroad worker safety. Under published I-ETMS specifications, the onboard unit (OBU) communicates with two networks: (1) the signaling network, which conveys track warrants to occupy blocks, etc., and (2) the Wayside Interface Unit (WIU) network, a sensor network situated on tracks to gather navigational information. This includes the status of rail infrastructure (such as switches) and any operational hazards that may affect the intended train path. In order to facilitate timely delivery of messages, PTC systems will have a reliable radio network operating in the reserved 220 MHz spectrum, although the PTC system itself is designed as a real-time, fail-safe distributed control system. Both the signaling points and the WIUs communicate their information (track warrants, speed restrictions, and beacon status) using software-defined radio networks. Because PTC systems are controlled by radio networks, they are subject to cyber-attacks.

We show a design and a prototype implementation of a PTC cyber situational awareness system that gathers information from WIU devices and locomotives for the use of rail operators. To do so, we designed secure IDS components to reside on the On Board Units (OBU), signaling points (SP), and WIUs; these gather real-time status information and share it with the back office system to provide the cyber-security health of the communication fabric. Our system is able to detect and share information about command replay, hash-guessing, and message corruption attacks.
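One common way an IDS of this kind detects command replay and message corruption is via per-message sequence numbers and authentication tags; the sketch below illustrates the idea under assumed message formats and key handling, not the I-ETMS specification or the authors' actual design.

```python
# Replay/corruption detection sketch: each message carries a
# monotonically increasing sequence number and an HMAC tag.
# Message format and key handling are hypothetical.

import hmac
import hashlib

KEY = b"shared-demo-key"  # placeholder; not a real key-management scheme

def tag(seq, payload):
    """HMAC-SHA256 tag over sequence number and payload."""
    msg = seq.to_bytes(8, "big") + payload
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def check(last_seq, seq, payload, received_tag):
    """Classify an incoming message as 'ok', 'replay', or 'corrupt'."""
    if not hmac.compare_digest(tag(seq, payload), received_tag):
        return "corrupt"
    if seq <= last_seq:
        return "replay"
    return "ok"

t = tag(5, b"switch normal")
print(check(4, 5, b"switch normal", t))    # fresh, valid message
print(check(5, 5, b"switch normal", t))    # sequence already seen
print(check(4, 5, b"switch reversed", t))  # payload altered in transit
```

Flagged classifications like these are the kind of event an OBU- or WIU-resident IDS component would report to the back office.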

Commentary by Dr. Valentin Fuster
2018;():V001T03A005. doi:10.1115/JRC2018-6143.

This paper provides a unique perspective on successful brownfield railroad applications. It presents realistic challenges and solutions when applying a turnkey solution with a replacement or an overlay system. Brownfield commissioning takes place when an existing infrastructure is upgraded to a new system based on a different technology than the incumbent one. As signaling systems grow more and more complex, it is extremely important to maintain robustness in the system design as well as in project execution, including logistics, documentation, and issue reporting. Many transportation authorities are moving from their current train control signaling system to a new system to combat obsolescence issues, gain better system capacity, and lower operation and maintenance costs. This paper discusses brownfield commissioning in general, and also presents specific cases of migration from a track circuit interlocking system to a Communications Based Train Control (CBTC) system. These two systems have distinct characteristics that provide opportunities for coexistence, but also introduce difficulties in mixed-mode operations.

Commentary by Dr. Valentin Fuster
2018;():V001T03A006. doi:10.1115/JRC2018-6196.

Radio propagation prediction for passenger rail operations using sub-UHF frequencies can be a complex matter. While propagation models and data containing terrain and obstruction information can be used to predict radio coverage, unique methodologies are required to accurately plan and implement rail communication systems. Federally mandated Positive Train Control (PTC) system requirements rely on consistently available wireless communications; hence the imperative to accurately design and construct radio networks that fulfill these critical requirements.
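As a baseline for the propagation models mentioned above, the free-space path loss (FSPL) formula is a useful reference point; real rail coverage planning layers terrain and obstruction data on top of such models. The 220 MHz / 10 km example values below are ours, chosen only because that band is associated with PTC radio networks.

```python
# Free-space path loss (FSPL) in dB -- the simplest propagation
# baseline. Terrain/obstruction models refine this in practice.

import math

def fspl_db(distance_km, freq_mhz):
    """FSPL in dB for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Example: a 220 MHz link at 10 km
print(round(fspl_db(10.0, 220.0), 1))  # ~99.3 dB
```

Comparing FSPL against a measured link budget is a quick sanity check before running a full terrain-aware coverage prediction.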

Topics: Rails
Commentary by Dr. Valentin Fuster
2018;():V001T03A007. doi:10.1115/JRC2018-6222.

The dynamic behavior of a train in operation is caused by variables in rail condition, track smoothness, wheel defects, truck hunting and skewing, and coupler effects. These conditions can each cause perturbations that may affect the accuracy of the weight measured by a Weigh In Motion (WIM) system. China is the only country with a mandatory requirement to calibrate a WIM system dynamically after it is initially installed and every year thereafter while it is in revenue service.

This paper discusses the importance of both static and dynamic calibrations for achieving the highest accuracy of a WIM system. It also describes the process of conducting a dynamic calibration and the lessons learned over the past several years in the effort to improve the weight accuracy of a WIM system.
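In its simplest form, a calibration of this kind reduces to a scale factor derived from runs of a reference car of known weight; the sketch below is illustrative, with hypothetical weights, and omits the per-speed corrections a real dynamic calibration would add.

```python
# Minimal WIM calibration sketch: derive a scale factor from repeated
# runs of a reference car of known weight, then apply it to raw
# readings. All weights are hypothetical.

def calibration_factor(known_weight_kg, raw_readings_kg):
    """Ratio of true weight to the mean raw reading."""
    mean_raw = sum(raw_readings_kg) / len(raw_readings_kg)
    return known_weight_kg / mean_raw

def calibrated(raw_kg, factor):
    """Apply the calibration factor to a raw reading."""
    return raw_kg * factor

# A 90,000 kg reference car run over the site three times
factor = calibration_factor(90_000, [88_200, 89_100, 88_500])
print(round(factor, 4))
print(round(calibrated(70_000, factor), 1))
```

A dynamic calibration repeats this at multiple speeds, since the perturbations listed above make the raw-reading bias speed-dependent.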

Topics: Calibration
Commentary by Dr. Valentin Fuster
2018;():V001T03A008. doi:10.1115/JRC2018-6223.

An Overload and Imbalance Load Detection (OID) system monitors rolling stock on railroad tracks and identifies and alarms on improperly loaded vehicles. In China, OID systems have been installed near train stations, close to railroad signals and curves. The location and condition of the track in the area of the OID system can affect weighing performance. In recent years, dedicated high-speed passenger lines have freed up a great deal of train capacity on the freight tracks, enabling freight train speeds to increase from 60 km/h to 80–120 km/h. It has become necessary to install OID systems on freight mainlines between stations.

The authors of this paper discuss the prospects and hurdles of two OID technologies on Chinese railways.

Topics: Stress , China
Commentary by Dr. Valentin Fuster
2018;():V001T03A009. doi:10.1115/JRC2018-6266.

The use of cleaner-energy (zero-emission) transportation has become a key focus in North America, including within rail transportation. In North America, the migration from diesel to electric locomotives, utilizing overhead catenary systems with voltages in the 25 kV range, has become the standard for passenger trains. In addition, as “shared-use rail corridors” become more prevalent in North America (USA and Canada), Constant Warning Time Devices (CWTD) based on a change of inductance in the rail are less reliable within electrified railroads.

With shared-use track, it is understood that a difference exists between freight and passenger train speeds, so other methods to detect approaching trains and determine the correct approach times become a priority. Implementing Communications-Based Train Control (CBTC) or Positive Train Control (PTC) technology can mitigate the problem if these systems communicate with, or request activation of, highway crossings. But in locations where PTC is not being installed, or in Canada where it is not required, other methods need to be explored. This paper will review and analyze the following:

1. Review the existing systems being deployed;

2. Evaluate the deployed systems effectiveness;

3. Test and record data using various innovative technologies, including axle counters that determine the speed and acceleration (+/−) of an approaching train;

4. Conclusions on integrating new axle counter technologies with existing track circuits.
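Item 3's use of axle counters can be sketched simply: two counter heads a known distance apart yield an approach speed, from which the time remaining to the crossing follows. The layout and timings below are hypothetical, and a real system would also use acceleration to refine the constant warning time.

```python
# Sketch: estimate approach speed from two axle counters a known
# distance apart, then the time remaining to the crossing.
# Layout and timings are hypothetical.

def approach_speed(distance_m, t_first_s, t_second_s):
    """Mean speed (m/s) between two axle-counter heads."""
    return distance_m / (t_second_s - t_first_s)

def time_to_crossing(remaining_m, speed_mps):
    """Seconds until the crossing, assuming constant speed."""
    return remaining_m / speed_mps

speed = approach_speed(100.0, 0.0, 4.0)   # 100 m covered in 4 s -> 25 m/s
warning = time_to_crossing(750.0, speed)  # 750 m left to the crossing
print(speed, warning)  # 25.0 m/s, 30.0 s of warning
```

Successive speed estimates give acceleration, which is what lets such a scheme approximate the constant warning time that inductance-based CWTDs provide on non-electrified track.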

Topics: Highways , Trains
Commentary by Dr. Valentin Fuster

Service Quality and Operations Research

2018;():V001T04A001. doi:10.1115/JRC2018-6134.

To illustrate metro braking performance more appropriately and comprehensively, and to explore the dynamic braking process more accurately, consistency analysis was used in this paper, for the first time in the metro brake field. Based on the stream data, several characteristic values (e.g., braking pressure setting time, maximum braking pressure, and the stabilized value of braking pressure) were extracted to represent the dynamic process. Under similar braking conditions, such as similar applied pressure, speed, and brake level, the performance of the brake system is similar and stable to a certain extent. A dynamic data model was developed in this paper. Statistics based on large-scale data mining were used to analyze braking performance, and a dynamic prediction over a braking cycle was provided. The investigation presented in this paper contributes to efforts toward metro braking system performance evaluation and assessment of service condition. Dynamic evaluation based on stream data has been shown to provide a new way to explore the behavior of the metro braking system, making it reasonable to analyze the service condition of a metro brake system in this manner. A system for analyzing stream data was built on the LabVIEW platform. Using characteristic values and statistical methods on stream data, this paper provides a way to describe the service condition. The result enriches the evaluation criteria of metro service condition.
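The characteristic values named above (pressure setting time, maximum pressure, stabilized pressure) can be extracted from a sampled pressure stream in a few lines; the sample stream and the 90%-of-maximum definition of setting time below are our assumptions for illustration, not the paper's definitions.

```python
# Extract brake-cycle characteristic values from a sampled pressure
# stream of (time_s, pressure_kpa) pairs. The 90%-of-maximum
# definition of "setting time" is an illustrative assumption.

def brake_characteristics(samples):
    """Return (setting_time_s, max_pressure, stable_pressure)."""
    max_p = max(p for _, p in samples)
    target = 0.9 * max_p
    setting_time = next(t for t, p in samples if p >= target)
    tail = [p for _, p in samples[-3:]]   # average the last few samples
    stable = sum(tail) / len(tail)
    return setting_time, max_p, stable

stream = [(0.0, 0), (0.5, 120), (1.0, 240), (1.5, 300),
          (2.0, 310), (2.5, 305), (3.0, 304), (3.5, 305)]
print(brake_characteristics(stream))
```

Collected over many braking cycles under similar conditions, such characteristic values are exactly the inputs the consistency analysis above compares statistically.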

Topics: Brakes
Commentary by Dr. Valentin Fuster
2018;():V001T04A002. doi:10.1115/JRC2018-6137.

The increasing movement of people and products driven by modern economic dynamics has burdened transportation systems. Both industrialized and developing countries have faced transportation problems in urbanized regions and in their major intercity corridors. Regional and highway congestion has become a chronic problem, causing longer travel times, economic inefficiencies, and deterioration of the environment and quality of life. Congestion problems are also occurring at airports and in air corridors, with similar negative effects. In the medium-distance travel market (from 160 up to 800 km), too far to drive and too short to fly, High Speed Rail (HSR) technology has emerged as a modern transportation system, as it is the most efficient means of transporting large passenger volumes with high speed, reliability, safety, passenger comfort, and environmental performance. An HSR system’s feasibility depends on its capacity to generate social benefits (i.e. increased mobility rates, reduced congestion, increased capacity, and reduced environmental costs), to be balanced against its high construction, maintenance, and operational costs. It is therefore essential to select HSR corridors with strong passenger demand to maximize these benefits. The first HSR line was Japan’s Shinkansen service, a dedicated HSR system between Tokyo and Osaka, launched in 1964, which is currently the most heavily loaded HSR corridor in the world. France took the next step, launching the Train à Grande Vitesse (TGV) in 1981 on a dedicated line with shared-use segments in urban areas, running between Paris and Lyon. Germany joined the venture in the early 1990s with the Inter City Express (ICE), a coordinated program of improvements to existing rail infrastructure, and Spain followed in 1992 with the Alta Velocidad Española (AVE), using dedicated greenfield lines. Since then, these systems have continuously expanded their networks.
Currently, many countries are evaluating the construction of new HSR lines, with the European Commission deeming the expansion of the Trans-European Network a priority. The United Kingdom, for example, has just awarded construction contracts for the so-called HS2, an HSR line linking London to the north of the country. China, with its dynamic economic development, launched its HSR network in 2007, has sped up work on its expansion, and currently holds the largest HSR network. The United States, which currently operates high speed trainsets in an operationally restricted corridor (the so-called Northeast Corridor (NEC), linking Washington, New York, and Boston), has also embarked into the high speed rail world with the launch of the Californian HSR project, currently under construction and aimed at linking the Los Angeles and San Francisco mega regions; the ongoing studies for the Texas HSR project, to connect Dallas to Houston under a wholly private funding model; and studies for a medium- to long-term NEC upgrade for HSR. Australia and Brazil are also seeking to design and launch their first HSR services, in a time-consuming process in which a deep discussion about social feasibility and affordability is under way. This work presents an overview of HSR technology worldwide, with an assessment of the main technical, operational, and economic features of Asian and European HSR systems, followed by a snapshot of the general guidelines applied to some planned HSR projects, highlighting their demand attraction potential, estimated costs, and projected economic and environmental benefits.

Commentary by Dr. Valentin Fuster
2018;():V001T04A003. doi:10.1115/JRC2018-6148.

Rail public transit provides passengers with low-cost, wide-coverage, environmentally friendly travel service. However, many travelers do not choose rail public transit because they are hindered by the first-mile bottleneck — how to get to the train station. Ridesharing has emerged as a viable transportation mode in connection with rail public transit. This paper designs a detour-based discounting mechanism for the first-mile ridesharing service to encourage more passengers to use rail public transit. Specifically, the mechanism incentivizes passengers to participate in first-mile ridesharing service connecting to train stations, and accounts for passengers' personalized detour requirements in determining the optimal vehicle-passenger matching, the vehicle routing plan, and the pricing scheme. The New Brunswick train station (a New Jersey Transit station) is selected as the testbed for implementing the proposed first-mile ridesharing mechanism. The results verify the practical viability of the mechanism and demonstrate that it is effective in incentivizing passengers to use rail public transit.
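The core pricing idea can be sketched in a few lines. The function below is a hypothetical illustration only (the `rate` and `floor` parameters are our assumptions, not values from the paper): the larger the detour a rider accepts relative to a direct trip, the larger the fare discount, down to a fixed floor.

```python
def discounted_fare(base_fare, direct_km, shared_km, rate=0.5, floor=0.5):
    """Illustrative detour-based discount: riders who accept longer
    detours pay less, but never less than floor * base_fare."""
    detour = max(shared_km / direct_km - 1.0, 0.0)   # extra travel as a fraction
    discount = min(rate * detour, 1.0 - floor)       # cap the discount at the floor
    return base_fare * (1.0 - discount)

# A rider whose 5 km trip becomes 6 km in a shared vehicle (20% detour)
fare = discounted_fare(10.0, 5.0, 6.0)   # -> 9.0 under these assumed parameters
```

The actual mechanism in the paper couples such a discount with the matching and routing optimization; this snippet only shows how a detour-dependent price signal can be constructed.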

2018;():V001T04A004. doi:10.1115/JRC2018-6160.

Railway transportation is a means of conveying ore, goods or passengers in wheeled vehicles running on rails. The operation is costly and the transported goods often have low aggregated value, so profitability depends on long distances and large quantities. Railways all around the world face rising demand for the transportation of goods and people. While upgrading railroad infrastructure is guaranteed to increase capacity, it usually requires large capital expenditures and is time consuming.

In this paper we show how a circulation planning tool can help railroads fully utilize their infrastructure in order to improve capacity utilization. Parameters such as train departure frequency, behavior during train delays, priorities in the negotiation of crossings and overtakes, performance allowance, and headway can be varied, and the resulting scenarios can be compared to help find a reasonable ceiling for the railroad's capacity. The same tool can be used in an operational setting to respond quickly to incidents and disruptions while keeping services at an acceptable level.

Many publications propose methods to calculate railway capacity. While traffic simulation methods offer accurate results, user interaction and graphical presentation, they are often time-consuming, so analytical methods are usually preferred. Our simulation approach takes just seconds to run, making it a much more practical tool for assessing capacity limits.

We run our simulations with data from a real heavy haul railroad and use a number of metrics to show how its current operation could be improved without building more tracks. Preliminary results indicate that, by changing train departure frequency alone, throughput could increase by more than 20%.
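As a back-of-the-envelope illustration of the departure-frequency lever (the numbers below are assumptions for illustration, not the railroad's actual figures), a line whose departures are limited only by headway gains throughput roughly in proportion to the headway reduction:

```python
def daily_throughput(headway_min, operating_hours=24.0):
    """Trains per day when departures are limited only by headway."""
    return int(operating_hours * 60 // headway_min)

base = daily_throughput(40)        # one departure every 40 min -> 36 trains/day
tighter = daily_throughput(32)     # one every 32 min -> 45 trains/day
gain = (tighter - base) / base     # 0.25, i.e. a 25% throughput increase
```

A real capacity study must also account for meets, passes and delay propagation, which is precisely what the paper's simulation tool does; this snippet only shows why departure frequency is such a strong lever.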

Topics: Railroads
2018;():V001T04A005. doi:10.1115/JRC2018-6175.

Rail switches are critical infrastructure components of a railroad network that must maintain high levels of reliable operation. Given the vast number and variety of switches that can exist across a rail network, there is an immediate need for robust automated methods of detecting switch degradations and failures without expensive add-on equipment. In this work, we explore two recent machine learning frameworks for classifying various switch degradation indicators: (1) a featureless recurrent neural network, the Long Short-Term Memory (LSTM) architecture, and (2) the Deep Wavelet Scattering Transform (DWST), which produces features that are locally time-invariant and stable to time-warping deformations. We describe both methods as they apply to rail switch monitoring and demonstrate their feasibility on a dataset captured under service conditions by Alstom Corporation. For multiple categories of degradation types, the baseline models consistently achieve near-perfect accuracies and are competitive with the manual analysis conducted by human switch-maintenance experts.

Topics: Machinery, Rails, Switches
2018;():V001T04A006. doi:10.1115/JRC2018-6271.

The highest level of automation may be achieved with traditional fixed block systems, with numerous benefits. Diagnostic data can be collected from wayside infrastructure and stored in a central repository. Diagnostics from the vehicle may also be collected, though not typically in a dynamic fashion, and stored in a centralized repository. These data are not easily integrated or sequenced between the onboard and wayside systems, so external systems are added to collect all data. This centralized system constitutes a Maintenance and Diagnostic Center (MDC).

In addition, with CBTC systems, communication between the wayside and the vehicle includes ATP information, Movement Authority and Speed Restrictions from the wayside to the vehicle, and reports of train location, speed, travel direction and vehicle status from the vehicle. With ATO, data is also transferred between the train and the wayside; much of the vehicle-reported ATO data consists of vehicle and on-board controller alarms and events. This train-wayside communicated data is collected and stored in the MDC, with the added benefit of being integrated between the on-board and wayside systems. This integrated data allows data mining to be performed to evaluate many operating aspects of the system.

This presentation identifies some of the types of data collected and the analysis that may be performed on them: identifying improvements to be integrated into system operation, detecting system components that are degrading toward failure so that maintenance can be scheduled before failure occurs, and reviewing events post mortem to identify the root cause of failures that occur in the system.


Planning and Development

2018;():V001T05A001. doi:10.1115/JRC2018-6138.

Competition is the driving force of any economic system, as it creates a challenging environment for service suppliers to provide affordable and reliable services to customers. Rail systems are an important element of the logistics chain, as they provide a unique service category (generally transporting large volumes at low unit costs) to shippers that otherwise would not be served by other modes — the so-called captive shippers. In this scenario, competition is essential to guarantee the required service levels (availability and reliability) and competitive rates, which ultimately influence shippers' business competitiveness, both regionally and globally. Brazil and the North American countries (Canada, Mexico and the United States) share a common feature: continental territories allied with the economic exploitation of bulk activities (industrial, mineral and agricultural), and hence a strong dependence on heavy haul rail systems. These countries have been making a continuous effort to improve competition practices in their rail systems, translated into important, and sometimes controversial, regulatory measures. These initiatives require a delicate equilibrium: they are supposed to provide the required competitive service at affordable rates for shippers, as well as a sustainable (financial and operational) environment for rail carriers, guaranteeing the required return on long-term investments without compromising medium- and long-term rail network efficiency. This challenging task for rail market stakeholders (rail carriers, shippers and regulators) is far from consensus. Rail companies claim that, in a capital-intensive sector, governmental regulatory intervention may inhibit their ability to invest the funds required to provide and expand rail capacity and to maintain the required safety levels.
Shippers, on the other hand, state that rail systems operate under strong market concentration (original to their formation or the result of subsequent mergers and acquisitions) that gives some rail carriers disproportionate market power resembling a monopoly. This ultimately leaves a significant contingent of so-called captive shippers with just one freight rail carrier option, sometimes subject to excessive rates, and in some instances (in supply-restricted rail markets, for example) facing the unavailability of rail services in the required volumes. In this context, there is currently a controversial debate regarding the effectiveness of competitive regulatory remedies in freight rail systems. This debate includes both market-oriented rail systems (Canadian and U.S.) and contractually granted ones (Brazilian and Mexican). In the former, the systems are mostly owned and operated by the private sector, and inter- and intra-modal options may theoretically provide the required level of competition; in the latter, rail systems have been broken into separate pieces and granted to the private sector under concession arrangements, with an exclusive right to serve their territories and trackage rights provisions to be exercised by third parties under previously defined circumstances and contractual agreements among rail operators. In both systems, competitive regulatory actions may be desirable and effective, insofar as they address the technical, operational and economic boundary conditions of each particular rail system.
This work presents, in review format and sourced from extensive research of the available international technical literature, an overview of the Brazilian and North American freight rail competition scenario, followed by a technical and unbiased assessment of the effectiveness of current and proposed competitive regulatory freight rail initiatives in Brazil, Canada, Mexico and the United States, highlighting their strengths and, where applicable, their weaknesses.

Topics: Rails
2018;():V001T05A002. doi:10.1115/JRC2018-6172.

The first and last mile of a trip describes the passenger travel involved in getting to and from transit stops and stations. Solving the first and last mile (FMLM) problem extends access to transportation systems and enlarges the pool of passengers from remote communities, such as rural areas. The FMLM problem has been addressed in different public transit contexts, mainly within urban areas. It is also an important part of an intercity journey; yet limited research has examined the FMLM problem that intercity passenger train riders face. This paper fills this gap and, further, aims to identify the best strategies that could serve as FMLM solutions for short-distance intercity passenger rail service (i.e., corridors less than 750 miles long, per the Passenger Rail Investment and Improvement Act of 2008). The Hoosier State Train (HST) service, a short-distance intercity passenger rail service that connects Chicago and Indianapolis four days a week, was chosen as the case study. The HST has four intermediate stops located in Indiana; for some of them, the HST is the only intercity public transit service available to reach either Chicago or Indianapolis. To explore opportunities to enhance HST ridership, an on-board survey was conducted in November and December 2016. The findings of this survey suggest that some riders travel from counties farther away to reach a county with a station and complete their journey on the train. Moreover, most respondents drove, rented a car, or were dropped off to reach a train station in Indiana. Unlike riders boarding at the Chicago station, the majority of riders boarding at one of the Indiana stations did not use ridesharing services or public transportation.
These findings suggest a possible gap in the FMLM travel options for intercity rail riders, and alternative options to fill this gap should be considered. This paper discusses the case study results of an accessibility analysis aimed at identifying the areas in need of first/last mile service, where there are no public transportation services and/or it is costly to reach a station from a desired origin. To that end, a cost surface for the different modes available in the study area was created to determine the average travel cost to the nearest station. The analysis was carried out in ArcGIS using origin-destination data from the on-board survey, transportation network information from the U.S. Bureau of Transportation Statistics, and general transit feed specification (GTFS) data. Subsequently, some of the best strategies identified were modeled around the station (e.g., shuttle buses to/from the station) to examine how accessibility would increase after implementation. The results of this study may have far-reaching implications for planning strategies that enhance access to train stations. Finally, the FMLM strategies could assist intercity passenger rail service providers in attracting a larger number of passengers.

Topics: Rails
2018;():V001T05A003. doi:10.1115/JRC2018-6181.

Configuration Management (CM) traces back to the early 1950s, when its principles were first applied as change management for computer hardware. CM was later expanded to early software development and has continued into the modern age. While the mantra has not changed in 60+ years — to define and manage change — updated techniques have streamlined the process, making it easier to implement and more responsive in use, keeping pace with changes for both micro (software) and macro (large-scale infrastructure project) applications. To advance the state of good repair (SGR) for rail and transit assets in the United States, CM has become an essential tool and must coordinate with Project Integration and Asset Management so that an organized process for change control, with accurate record keeping, can be established and maintained long term. The importance of CM has also been mandated at the Federal level for rail assets under 49 CFR 236. To bring wider understanding of CM to the industry, this paper presents Configuration Management and Project Integration techniques as applied to rail transportation infrastructure, including assets that must meet Federal regulations, and discusses the coordination between CM, Integration and Asset Management under the SGR initiative. This is the final paper in a series that began in 2012, presented at the Joint Rail Conferences, featuring Denver Transit Partners (DTP) and the Denver Eagle P3.


Safety and Security

2018;():V001T06A001. doi:10.1115/JRC2018-6129.

Railroads contribute to the national economy by carrying over 40% of intercity freight ton-miles in the United States. Train accidents damage infrastructure and rolling stock, disrupt operations, and have the potential to cause casualties and harm the environment. A clear understanding and analysis of accident risk based on historical accidents can support the development and prioritization of effective accident prevention strategies. While extensive previous studies have focused on the safety risks associated with a variety of train operating conditions, much less work has evaluated train risk and safety under restricted speed. As defined in 49 CFR 236 Subpart G, restricted speed is a speed that permits stopping within one-half the range of vision, not exceeding 20 miles per hour. Nevertheless, several severe accidents at restricted speed occurred in the last few years and are highlighted in both NTSB and FRA reports. In this paper, we develop a quantitative analysis of restricted-speed accidents occurring between 2000 and 2016, based on data from the U.S. Federal Railroad Administration. While overall accident rates have been shown to decline in prior studies, our preliminary results show that the rate of train accidents under restricted speed fluctuated over the study period, without a significant increasing or decreasing trend. Furthermore, the distributions of restricted-speed accident severity, accident risk, and other pertinent characteristics are covered in this study.
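The restricted-speed rule combines two conditions: a hard 20 mph cap and a stopping requirement that depends on sight distance. A minimal sketch of that check (the function name and units are ours, for illustration only):

```python
def restricted_speed_compliant(speed_mph, stopping_distance_ft, sight_distance_ft):
    """True if the train both respects the 20 mph cap and can stop
    within one-half the range of vision (per 49 CFR 236 Subpart G)."""
    return speed_mph <= 20.0 and stopping_distance_ft <= sight_distance_ft / 2.0
```

Note that compliance is dynamic: as sight distance shrinks (curves, fog, obstructions), the permissible stopping distance shrinks with it even though the 20 mph cap is unchanged.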

Topics: Accidents, Trains
2018;():V001T06A002. doi:10.1115/JRC2018-6130.

Restricted speed is a speed that permits stopping within one-half the range of vision, normally not exceeding 20 miles per hour. Several severe accidents at restricted speed have highlighted the importance of safety improvements in restricted-speed operations, and the Federal Railroad Administration (FRA) has identified restricted-speed violations as a common rule compliance problem. Nevertheless, little prior research has analyzed train operations and safety risk under restricted speed. This paper uses Fault Tree Analysis to explore scenarios in restricted-speed operations and identify failure paths that lead to train accidents. Understanding restricted-speed accident causal chains, and the corresponding structural representations of relevant contributory precursors, can contribute to more accurate estimation of restricted-speed risk. Four recent restricted-speed accidents were studied based on information from the National Transportation Safety Board (NTSB) and the FRA. This study may serve as a reference for the further development of quantitative risk assessment and the evaluation of risk mitigation strategies for restricted-speed operations.
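In a fault tree, the probability of the top event is propagated up through AND/OR gates from basic-event probabilities. A minimal sketch, with hypothetical probabilities chosen only for illustration (independence of basic events is assumed, as in elementary fault tree quantification):

```python
def and_gate(probs):
    """All inputs must occur: probabilities multiply under independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """At least one input occurs: 1 - P(none occur)."""
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Hypothetical two-level tree: an accident requires a restricted-speed
# violation AND the failure of at least one of two independent safeguards.
p_violation = 0.02
p_safeguards_fail = or_gate([0.01, 0.005])
p_accident = and_gate([p_violation, p_safeguards_fail])
```

Real fault trees for accident causal chains are far larger, but the quantification reduces to exactly these two gate rules applied recursively.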

Topics: Accidents
2018;():V001T06A003. doi:10.1115/JRC2018-6136.

Bearing degradation and defects can result in premature failure. Water ingress into a bearing is one factor in premature degradation, as water may corrode internal parts and degrade the bearing grease. This paper investigates the properties of grease degradation in bearings with water-related damage, providing insight into the internal state of bearings that have been replaced due to grease degradation resulting from water ingress.

Separately, the railroad industry has observed bearing roller “bluing” or “lube staining.” This discoloration may be a harmless surface effect, or it may be similar to heat bluing. Determining the true metallurgical effects may lead to an understanding of the difference between these two types of “bluing.”

To study bearings with water-related lubrication degradation, grease samples were collected from two populations of bearings at bearing service locations. One population contains bearings identified with water-related damage; the second is a control set. Primary grease analysis was done per ASTM D7918, providing metrics of wear, contamination, consistency, and oxidative properties. Additional testing was performed where results indicated utility, including measurements of the anti-oxidant remaining in the grease and microscopic analysis of wear particles in the grease.

“Bluing” or “lube stain” bearing components were examined through analysis of lubrication and metallurgical metrics. Samples collected from bearing shops included representative small amounts of grease and “blued” steel parts from bearings exhibiting surface discoloration. A second sample set included steel parts and grease samples from a control set of bearings, and a third set of rollers was heat-blued in the lab. Lube-stained rollers and control-set rollers were tested for metallurgical changes. Analysis of the bearing steel consisted of hardness and micro-hardness testing of polished samples, examination to compare microstructural features, and residual stress tests.

The tests conducted in the investigation of water-related bearing grease degradation indicate a difference between bearings with “Water-Etch” and “Non-Verified” degradation modes based on ferrous debris levels in the grease; this difference is due to wear of the bearing material deposited in the grease. The tests conducted in the investigation of lube stain show that lube stain does not affect any of the tested metallurgical material properties other than surface discoloration.

Topics: Bearings, Rollers, Water
2018;():V001T06A004. doi:10.1115/JRC2018-6157.

Railways make a substantial contribution to the economy of the United States. However, a train accident can result in casualties and extensive damage to infrastructure and the environment. Most prior research has focused on derailments or grade-crossing accidents rather than train collisions. The Federal Railroad Administration (FRA) identifies over 300 causes for all types of accidents, among which we aim to recognize the major factors that cause train collisions. Evaluating how collision frequency and severity vary with accident cause is the key part of this research, in order to identify, evaluate and mitigate transportation risk. This paper presents a statistical analysis of passenger and freight train collisions in the United States from 2001 to 2015, covering collision frequency, severity, accident cause, and safety risk. The analysis finds that human errors and signal failures were among the most common causes of train collisions in the U.S. over the 15-year study period, and that overall train collision frequency declined significantly by year. From these trends, possible accident prevention strategies could be developed and implemented accordingly.
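A year-over-year decline like the one reported can be quantified with an ordinary least-squares slope of annual collision counts against year. A self-contained sketch (the counts below are made up for illustration, not FRA data):

```python
def lsq_slope(years, counts):
    """Ordinary least-squares slope of annual counts vs. year;
    a negative slope indicates declining collision frequency."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Illustrative only: a steadily declining series has a negative slope.
slope = lsq_slope([2001, 2002, 2003, 2004], [40, 36, 32, 28])   # -> -4.0
```

A full analysis would also normalize by traffic exposure (e.g. train-miles) and test the slope for statistical significance rather than read it directly.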

2018;():V001T06A005. doi:10.1115/JRC2018-6162.

Rail defects are a significant safety concern for the world's railway systems. Broken rails may lead to catastrophic derailments with severe consequences, including death, injury and economic losses. Precise estimation of the impact of descriptive factors on the occurrence of rail fatigue defects is of great significance for the development of statistical rail defect prediction models, and improved prediction models will in turn assist railroads in allocating inspection and maintenance resources efficiently. This paper therefore reviews the principal risk factors affecting the occurrence of rail defects, since a better understanding of the influence of these explanatory factors can aid in model improvement. Previous data collection and analysis on rail defects are highlighted in this review in order to improve understanding of the impact of potential risk factors. The overview aims to help researchers understand which factors affect the occurrence of rail fatigue defects and how to treat these factors when building statistical models.

Topics: Rails
2018;():V001T06A006. doi:10.1115/JRC2018-6169.

Tunnel fires are low-probability, high-consequence events that can lead to loss of life, property damage, and long service disruptions. The rapid rise of gas temperature, in excess of 1000 °C within the confined tunnel space, can affect the structural integrity of the tunnel. Although tunnel fires may not necessarily cause collapse, significant structural damage and disruption to rail services can lead to major economic losses. The objective of this paper is to investigate the expected structural damage in a cut-and-cover tunnel exposed to a fire event. The analyses are completed through numerical simulation.

In the first part of this work, we investigate the effects of the fire scenario and of variability in material thermal properties on the potential volume of damaged tunnel lining that would require repair. In tunnels, historical events show that limited accessibility for fire suppression can cause a fire to burn for days. The fire scenarios considered in this work are defined based on (a) worst-case envelope hydrocarbon fires such as the Rijkswaterstaat (RWS) fire curve and the RABT-ZTV fire curve, and (b) the potential heat release rates in railway tunnels.

In the second part of this study, the effect of the fire scenario on the structural performance of a cut-and-cover tunnel is studied. The geometry and cross section of the Howard Street Tunnel in Baltimore, which experienced a major fire in 2001, is used as the case study. The results show that the fire scenario and its duration, while subject to significant uncertainty, are among the most influential factors in evaluating the response of tunnel structures.
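Design fire scenarios of this kind are usually expressed as time-temperature curves. As one example, the Eurocode 1 hydrocarbon curve has a closed form (the RWS curve, by contrast, is tabulated and peaks near 1350 °C). A sketch, with temperature in °C and time in minutes:

```python
import math

def hydrocarbon_curve(t_min):
    """Eurocode 1 hydrocarbon fire curve: gas temperature in deg C
    after t_min minutes, starting from 20 deg C ambient."""
    return 20.0 + 1080.0 * (1.0 - 0.325 * math.exp(-0.167 * t_min)
                                 - 0.675 * math.exp(-2.5 * t_min))

# The curve rises steeply and levels off near 1100 deg C.
t_ambient = hydrocarbon_curve(0.0)    # 20.0 deg C at ignition
t_one_hour = hydrocarbon_curve(60.0)  # close to the 1100 deg C plateau
```

The structural analysis then uses such a curve as the thermal boundary condition on the lining, together with the temperature-dependent material properties whose variability the paper studies.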

Topics: Fire, Railroads, Tunnels
2018;():V001T06A007. doi:10.1115/JRC2018-6178.

Cameras are essential elements for providing security to passengers in public spaces, e.g. in urban subway environments. Automated image processing algorithms are increasingly used to analyze the cameras' video streams. However, misaligned cameras may cause serious problems, either generating false alarms or even going blind due to a shifted field of view. This paper presents two simple, real-time, straightforward software methods to detect camera movement while allowing an adjustable tolerance, i.e. small changes are acceptable. The first approach shifts a reference edge (from a master image) along a convenient path; the second method uses distance measuring to detect critical camera movement. Both methods are tested on real-world video from a subway environment. Since the algorithms are proposed for application in a railway environment, with typically high requirements on operational robustness and reliability, special emphasis is put on the constraints and limitations of the presented algorithms.
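The underlying idea of comparing the live view against a master image, with an adjustable tolerance, can be illustrated with a much simpler stand-in than the paper's edge-shifting and distance-measuring methods: a mean-absolute-difference score between a reference frame and the current frame. Frames here are flat grayscale pixel lists, and the threshold is an assumed tuning parameter:

```python
def shift_score(reference, current):
    """Mean absolute pixel difference between two equally sized
    grayscale frames, given as flat lists of intensity values."""
    assert len(reference) == len(current)
    return sum(abs(a - b) for a, b in zip(reference, current)) / len(reference)

def camera_moved(reference, current, tolerance=10.0):
    """Flag the camera as moved only when the score exceeds the
    tolerance, so that small scene changes remain acceptable."""
    return shift_score(reference, current) > tolerance
```

A raw frame difference also reacts to passing trains and lighting changes, which is exactly why the paper's methods restrict the comparison to stable reference edges rather than whole frames.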

Topics: Cities, Rails, Security
2018;():V001T06A008. doi:10.1115/JRC2018-6189.

This work intends to lay the groundwork for Computer Aided Engineering (CAE)-based occupant safety assessment of a typical tier-III Indian Railways (IR) passenger coach in a collision accident. Our previous work, presented at the International Crashworthiness Conference 2010 under the title “Simulation of Crash Behaviour of a Common Indian Railway Passenger Coach,” provided a crashworthiness assessment of a typical tier-III passenger coach structure for representative head-on collision scenarios, namely against an identical passenger coach and against a stationary locomotive. These scenarios were envisioned as parts of a bigger accident scenario, e.g. a head-on collision between two trains moving towards each other. Analysis of the chain of events involved for the entire rolling stock, and the resulting internal collisions between individual passenger cars, was out of the scope of that work, and the necessary inputs were obtained from the available literature. The work used a full-scale Finite Element (FE) simulation model and the commercial explicit solver LS-Dyna. The FE model was validated using the proof loads for design specified in International Union of Railways (UIC) code OR566. The simulation methodology used for dynamic impact was validated by component-level crushing experiments using a drop tower facility. Material modelling incorporated the strain-rate effect on yield strength, which is essential for obtaining accurate structural deformations under dynamic impact loading. Contacts were modelled using the penalty method option provided by the solver. This model was simulated for collisions at 30, 40 and 56 km/h against a stationary rigid barrier, speeds chosen to reproduce the impact energies involved in the collision scenarios mentioned above. The structure was found to exhibit global bending deformation and jackknifing, with the pivot position at the door section. In this paper, we present an extension of this work — coupled occupant safety simulation and injury assessment.
This was accomplished by recording the head, neck, chest and knee responses of a Hybrid-III 50th percentile male Anthropomorphic Test Device (ATD) FE model, seated in passenger position on the lower berth of the first cabin of a passenger car. The interiors were modelled to represent the actual structure. The dummy model was adapted to the passenger cabin's excessive-mobility conditions, and its responses were revalidated against Federal Motor Vehicle Safety Standards (FMVSS) limits. Injury interpretation was based on the Abbreviated Injury Scale (AIS), automotive injury criteria and injury risk curves for the Head Injury Criterion (HIC), thoracic spine acceleration, neck bending moment in flexion and extension, and knee force. This study provides estimates of injury and fatality based on computer simulation of accident scenarios; correlating these results with available injury and fatality statistics was out of the scope of this study.
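Of the criteria listed, the Head Injury Criterion is the most computation-heavy: it maximizes (t2 − t1) times the 2.5th power of the average acceleration over all time windows up to 15 ms (the HIC15 variant). A sketch for a uniformly sampled acceleration trace in g; the O(n²) window scan and rectangle-rule integral are simplifications for illustration, not the paper's post-processing pipeline:

```python
def hic15(accel_g, dt):
    """Head Injury Criterion (HIC15) from an acceleration trace in g,
    sampled every dt seconds; candidate windows are capped at 15 ms."""
    n = len(accel_g)
    # Running integral of a(t) dt (rectangle rule keeps the sketch short).
    cum = [0.0]
    for a in accel_g:
        cum.append(cum[-1] + a * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            window = (j - i) * dt
            if window > 0.015:
                break
            avg = (cum[j] - cum[i]) / window
            best = max(best, (avg ** 2.5) * window)
    return best

# A constant 50 g pulse lasting 10 ms gives HIC = 50**2.5 * 0.010,
# roughly 177, well under the FMVSS 208 limit of 700.
value = hic15([50.0] * 10, 0.001)
```

The same windowed-maximum structure underlies the other time-history criteria (e.g. thoracic spine acceleration clips), which is why crash post-processors compute them together from the ATD channel outputs.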

2018;():V001T06A009. doi:10.1115/JRC2018-6191.

A full-scale rollover crash test was performed on a CPC-1232 specification crude oil tank car to document the base-case safety performance of the car's top fittings in a rollover scenario. The specimen tank car was placed in a test fixture designed to roll the entire loaded, un-trucked carbody about a fixed pivot, with a concrete surface used as the impact target for the top-fittings manway bonnet. At an impact speed of 7.8 mph, the manway bonnet deformed inward and sheared off completely; in addition, the vacuum relief valve sheared off, causing leakage of water through the connection. Reasonable correlation was observed between the pre-test simulations and the test results, and the simulations were further improved post-test.

The effort highlighted an alternate failure mode, bolt shear, which appears to be initiated when the top-fittings bonnet structures are strengthened through thicker or higher-strength materials: strengthening the bonnet structure moves the weakest link to the bolted connections, resulting in failure and lading release. The railroad industry can use the test-generated structural performance and behavior data to improve top-fittings protection.

2018;():V001T06A010. doi:10.1115/JRC2018-6199.

In this study, a Secondary Impact Protection System (SIPS), consisting of an airbag and a deformable knee bolster for use on a modern freight locomotive, was developed and tested. During rail vehicle collisions, a modern locomotive designed to current crashworthiness requirements should provide sufficient survival space to the engineer in the cab. However, without additional protection against secondary impacts, a locomotive engineer could still sustain head, neck, and femur injuries that exceed the limits specified in the Federal Motor Vehicle Safety Standards (FMVSS 208). The SIPS study aimed to design a system that would keep these injuries within the limiting criteria.

Simulation results for the design concept showed that it would meet the FMVSS 208 criteria for head, neck, chest, and femur injuries while continuing to meet all existing functional requirements of the locomotive cab.

Sled testing of the prototype showed that further airbag design modification, characterization, and testing are required to optimize the SIPS.

Topics: Locomotives
2018;():V001T06A011. doi:10.1115/JRC2018-6210.

The railroad industry currently utilizes two wayside detection systems to monitor the health of freight railcar bearings in service: the Trackside Acoustic Detection System (TADS™) and the wayside Hot-Box Detector (HBD). TADS™ uses wayside microphones to detect high-risk defects and alert the conductor. Many defective bearings may never be detected by TADS™, both because a high-risk defect is defined as a spall spanning more than 90% of a bearing's raceway and because fewer than 20 systems are in operation throughout the United States and Canada. Much like TADS™, the HBD sits at the side of the rail track and uses a non-contact infrared sensor to determine the temperature of train bearings as they roll over the detector. Several laboratory and field studies have concluded that the accuracy and reliability of the temperature readings from this wayside detection system are inconsistent: the measured temperatures can differ significantly from the actual operating temperature of the bearings due to factors such as the class of railroad bearing and its position on the axle relative to the wayside detector. Over the last two decades, a number of severely defective bearings were not identified by wayside detectors, some of which led to costly catastrophic derailments. In response, certain railroads have attempted to optimize the use of the temperature data acquired by HBDs, but this has led to a significant increase in the number of non-verified bearings removed from service. In fact, about 40% of the bearings removed from service in the period from 2001 to 2007 were found to have no discernible defects. The removal of these non-verified (defect-free) bearings has resulted in costly delays and inefficiencies.

Driven by the need for more dependable and efficient condition monitoring systems, the University Transportation Center for Railway Safety (UTCRS) research team at the University of Texas Rio Grande Valley (UTRGV) has been developing an advanced onboard condition monitoring system that can accurately and reliably detect the onset of bearing failure. The developed system currently utilizes temperature and vibration signatures to monitor the true condition of a bearing. This system has been validated through rigorous laboratory testing at UTRGV and field testing at the Transportation Technology Center, Inc. (TTCI) in Pueblo, CO. The work presented here provides concrete evidence that using the vibration signature of a bearing is a more effective method of assessing bearing condition than monitoring temperature alone. The prototype bearing condition monitoring system can identify a defective bearing with a defect size of less than 6.45 cm2 (1 in2) using the vibration signature, whereas the temperature profile of that same bearing indicates a healthy bearing operating normally.
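
The advantage of vibration monitoring over temperature monitoring can be illustrated with a generic condition-indicator sketch: a localized spall injects impact energy that raises the broadband vibration level well before it changes the bearing's thermal profile. The signals, threshold factor, and RMS indicator below are assumptions for illustration only, not the UTCRS team's actual detection algorithm.

```python
import math

def rms(signal):
    """Root-mean-square amplitude of an accelerometer trace."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def is_defective(signal, healthy_rms, factor=2.5):
    """Flag a bearing whose vibration RMS exceeds `factor` times a
    healthy baseline; the factor is an assumed illustrative threshold."""
    return rms(signal) > factor * healthy_rms

# Synthetic traces: a healthy bearing produces low-level periodic vibration,
# while a spalled raceway adds repeated high-amplitude impacts whose energy
# raises the RMS long before frictional heating would raise the temperature.
healthy = [0.1 * math.sin(2 * math.pi * 30 * t / 1000) for t in range(1000)]
defective = [h + (2.0 if t % 50 == 0 else 0.0) for t, h in enumerate(healthy)]

baseline = rms(healthy)
print(is_defective(healthy, baseline))    # → False
print(is_defective(defective, baseline))  # → True
```

In practice, defect detection would rely on spectral features tied to characteristic bearing defect frequencies rather than a single RMS threshold.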

2018;():V001T06A012. doi:10.1115/JRC2018-6221.

Models have the ability to represent a specific viewpoint of situations or circumstances in the real world. This paper facilitates the understanding of the relationship between railway safety and operational performance by providing an overview of performance models and presenting a model framework that incorporates elements of both safety and operational performance in the rail industry. This framework, if further developed into a working model with detailed equations, would facilitate decision making by governments and stakeholders in the daily operation of, and investment in, railways.

Topics: Safety, Modeling
2018;():V001T06A013. doi:10.1115/JRC2018-6224.

A review of past accident data shows that several fatalities have been attributed to passenger ejection through window openings during passenger train accidents. To study and address this issue, a literature review and accident analyses were performed to investigate the safety aspects of passenger rail window glazing. A common failure mode occurs when the external gaskets that hold the glazing pane in place shear off and the windows are pushed inside the carbody during rollover derailments. This leads to passengers being ejected, often fatally, from the train. Passenger containment was identified as the main improvement to be made to glazing systems. New or updated retention methods are thought to be necessary in the pursuit of safety.

Considering feasibility, implementation time, likelihood of success, and the potential for retrofit, several concepts, including various methods of zip-strip protection, a revised zip-strip location, and recessed window glazing, have been ideated, and the top-rated concepts are being developed further.

In the next phase of work, field tests and additional analyses will help determine the efficacy of the proposed solutions and the necessity for additional engineering design requirements.

Topics: Safety, Rails
2018;():V001T06A014. doi:10.1115/JRC2018-6239.

In a typical railyard environment, a myriad of large and dynamic objects pose significant risks to railyard workers. Unintentional falls, trips, and collisions with dynamic rolling stock due to distractions or lack of situational awareness are an unfortunate reality in modern railyards. The challenges current technologies face in detecting and tracking multiple differently-sized mobile objects in situations such as i) one-on-one, ii) many-to-one, iii) one-to-many, iv) blind spot, and v) interfering/non-interfering separation create the possibility of reduced or lost situational awareness in this fast-paced environment. The simultaneous tracking of assets of different size, velocity, and material composition in different working and environmental conditions can only be accomplished through joint infrastructure-based asset discovery and localization sensors that cause no interference or impediment to railyard workers and that are capable of detecting near-misses as well. Our team is investigating the design and performance of such a solution, currently focusing on the innovative use of lightweight, low-cost RADAR under the different conditions expected in railyards across North America. We are employing Ancortek's 580-AD Software Defined RADAR (SDRadar) system, which operates at the license-free frequency of 5.8 GHz and offers a variety of configuration options that make it well suited for generalized object tracking. The challenges, however, stem from the unique interplay between tracking large metallic objects such as railcars, locomotives, and trucks and smaller objects such as railyard workers, in particular their robust discernment from each other. Our design's higher-level system can interact with the lower-level SDRadar design to change parameters in real time to detect and track large objects over significant distances.
The algorithm optimally adjusts waveform, sweep time, and sample rate based on one or more detected object cross-sections and subsequently alters these parameters to discern other objects in close proximity from them. We also use an ensemble method to determine the velocity and distance of target objects, allowing us to accurately track both the subject and larger objects at a distance. The methodology has been field-tested in several test cases across a multitude of weather and lighting conditions. We have also tested the proper height, azimuth, and elevation angles for positioning our SDRadar to alleviate the risk of blind spots and enhance the detection and tracking capabilities of our algorithm. The approach has outperformed our previous tests using visual and acoustic sensors for detecting and tracking railroad workers in terms of accuracy and operating flexibility. In this paper, we discuss the details of our proposed approach and present our results from the field tests.
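
As context for the ranging described above, the sketch below shows the standard triangular-sweep FMCW relationships that link up- and down-ramp beat frequencies to range and radial velocity at a 5.8 GHz carrier. The bandwidth and sweep time are illustrative values, not the authors' actual SDRadar configuration.

```python
C = 299_792_458.0          # speed of light, m/s
F_CARRIER = 5.8e9          # license-free carrier used by the SDRadar, Hz
BANDWIDTH = 200e6          # assumed sweep bandwidth, Hz
SWEEP_TIME = 1e-3          # assumed sweep duration, s

def range_velocity(f_beat_up, f_beat_down):
    """Recover target range and radial velocity from the beat frequencies
    of the up- and down-ramps of a triangular FMCW sweep."""
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler-induced component
    rng = C * SWEEP_TIME * f_range / (2.0 * BANDWIDTH)
    vel = C * f_doppler / (2.0 * F_CARRIER)      # positive = approaching
    return rng, vel

# Forward-simulate a target at 100 m closing at 5 m/s, then invert the
# beat frequencies to recover its range and velocity.
true_rng, true_vel = 100.0, 5.0
f_r = 2.0 * BANDWIDTH * true_rng / (C * SWEEP_TIME)
f_d = 2.0 * F_CARRIER * true_vel / C
rng, vel = range_velocity(f_r - f_d, f_r + f_d)
print(round(rng, 3), round(vel, 3))  # → 100.0 5.0
```

Ensemble schemes of the kind the authors describe would combine many such estimates across sweeps and parameter settings to stabilize the track.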

2018;():V001T06A015. doi:10.1115/JRC2018-6240.

Current standards such as NFPA 130 [1] require railcar floor assemblies to achieve a fire resistance rating according to ASTM E119 [2] by exposing the assemblies to a prescribed 30-minute time-temperature curve in a furnace. Though ASTM E119 is a standard test procedure, it does not represent a real fire scenario, which can involve temporally and spatially varying exposure. This work developed a computational framework to evaluate and compare standard fire exposures such as ASTM E119 against real fire exposures to determine the difference in the temperature rise of a railcar floor assembly. The assembly used in this work spanned the entire width of the railcar, ∼3.0 m (10 ft), with a length of 3.7 m (12 ft), as described in NFPA 130. The real fire exposures simulated in this work were identified in a review [3] of incidents involving fire exposures to railcars in the US and internationally over the past 50 years. The fire exposures consisted of a continuously fed diesel fuel spill, a localized trash fire, and a gasoline spill resulting from a simulated collision of the railcar with an automobile. These realistic fire exposures were applied to a floor assembly model in the Fire Dynamics Simulator (FDS) [4], which also included the undercarriage equipment to better capture the fire dynamics. The thermal exposure at the underside of the railcar assembly was extracted using the heat transfer coefficient and the adiabatic surface temperature provided by FDS. These spatial-temporal exposures were coupled with a detailed railcar floor assembly finite element (FE) model in ABAQUS [5] to analyze the thermal behavior of the assembly. The thermal model in ABAQUS provided the evolution of temperature in the different components of the floor assembly, consisting of a structural frame, insulation, and a composite floor.
The standard scenarios were simulated for two hours instead of the typical 30 minutes to identify the appropriate exposure duration which can better represent a real fire scenario.

2018;():V001T06A016. doi:10.1115/JRC2018-6241.

Performing fire endurance tests of railcar floor assemblies in accordance with NFPA 130 is expensive given the minimum size requirements of 3.7 m (12 ft) in length and the full vehicle width. Often it is not financially viable to conduct such tests on several design iterations for the purpose of design optimization. Simulations of the fire endurance tests can be performed in place of experiments to predict the floor assembly response of multiple designs at much lower cost. However, capturing the thermo-structural response of the floor assembly requires modeling the relevant physical phenomena, including softening and weakening of the steel frame, degradation of the fire insulation, and failure of the composite floor. A methodology for performing such simulations was developed in this research to address each of these phenomena. Temperature-dependent thermal and mechanical properties of all modeled materials capture material softening and weakening. Degradation of the insulation was handled through a novel temperature-dependent shrinkage approach. Failure models for the sandwich composite floor panels were obtained from the literature to predict shear fracture of the core based on a maximum principal shear stress approach and delamination of the core/facesheet interface based on a maximum strain energy approach.

The developed methodology was applied to the simulation of a fire endurance test of an exemplar railcar floor assembly using the commercial finite element solver Abaqus. The assembly was known to hold a passing rating for a 30-minute fire endurance test according to NFPA 130. The floor assembly consisted of a stainless-steel frame, fiberglass insulation, and a ply-metal composite floor. Sequentially coupled thermal and structural models were developed to predict the thermo-structural response of the floor assembly for a 30-minute exposure to the ASTM E119 prescriptive fire curve. User subroutines were utilized to implement the sandwich composite failure models for predicting core shear fracture and core/facesheet delamination. The predicted temperature rise on the unexposed surface of the floor assembly after the 30-minute exposure ranged from 50°C to 90°C. The floor assembly was also predicted to maintain structural integrity under the applied crush load, with a center-point vertical deflection of 161 mm after the 30-minute exposure. This resulted in a predicted pass rating for a 30-minute exposure, which agrees with the floor assembly’s actual fire rating.

2018;():V001T06A017. doi:10.1115/JRC2018-6242.

Train accidents can be attributed to human factors, equipment factors, track factors, signaling factors, and miscellaneous factors. Not only have these accidents caused damage to railroad infrastructure and train equipment, leading to excessive maintenance and repair costs, but some have also resulted in injuries and loss of life. Big Data analytics techniques can be utilized to provide insights into possible accident causes, improving railroad safety, reducing overall maintenance expenses, and spotting trends and areas of operational improvement. We propose a comprehensive Big Data approach that provides novel insights into the causes of train accidents and finds the patterns that led to their occurrence. The approach utilizes a combination of Big Data algorithms to analyze a wide variety of data sources available to the railroads, and is demonstrated using the FRA train accident/incident database to identify the factors that have contributed most strongly to accidents over the past years. The most important contributing factors are then analyzed by means of association mining to find relationships between the cause of accidents and other input variables. Applying our analysis approach to FRA accident report datasets, we found that railroad accidents correlate strongly with track type, train type, and the train's area of operation. We utilize the proposed approach to identify patterns that would lead to the occurrence of train accidents. The results obtained using the proposed algorithm are compatible with those obtained from manual descriptive analysis techniques.
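
At its core, the association-mining step reduces to computing support and confidence for candidate rules over accident records. The sketch below does this for a handful of hypothetical records; the field names, values, and records are invented for illustration and do not reflect the actual FRA database schema.

```python
def support(records, items):
    """Fraction of records containing every (field, value) pair in `items`."""
    hits = sum(1 for r in records if all(r.get(f) == v for f, v in items))
    return hits / len(records)

def confidence(records, antecedent, consequent):
    """Estimated P(consequent | antecedent): how often the consequent
    holds among records matching the antecedent."""
    return support(records, antecedent + consequent) / support(records, antecedent)

# Hypothetical accident records (fields and values invented for illustration).
records = [
    {"track": "yard", "train": "freight",   "cause": "human"},
    {"track": "yard", "train": "freight",   "cause": "human"},
    {"track": "main", "train": "freight",   "cause": "track"},
    {"track": "main", "train": "passenger", "cause": "signal"},
    {"track": "yard", "train": "freight",   "cause": "equipment"},
]

# Candidate rule: track=yard  =>  cause=human
rule_ant = [("track", "yard")]
rule_con = [("cause", "human")]
print(support(records, rule_ant + rule_con))              # → 0.4
print(round(confidence(records, rule_ant, rule_con), 3))  # → 0.667
```

Algorithms such as Apriori or FP-Growth scale this idea to the full database by pruning candidate rules below minimum support and confidence thresholds.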

2018;():V001T06A018. doi:10.1115/JRC2018-6246.

There is interest in providing heat release rate based flammability requirements for interior finish materials on railcars. As a result, a research study was performed to develop a simple empirical model that can predict the real-scale fire performance of an interior finish material from ASTM E1354 cone calorimeter data. A simple-to-use model has been developed to predict whether a material will contribute significantly to the growth of a fire inside a railcar. The model consists of a flammability parameter, defined as the difference between the average heat release rate and the ratio of the ignition time to the burn duration. As indicated by the model, the relative flammability of materials is based on the balance between the heat release rate and the ease of ignition relative to burning duration. This work focuses on the use of the model for predicting material fire growth performance in full-scale NFPA 286 room/corner tests, which has not previously been attempted. The empirical flammability model was developed to use parameters obtained from cone calorimeter tests at 50 kW/m2 and provides a single value for a material. Generally, materials with a flammability parameter greater than 0.7 were determined to cause flashover in the NFPA 286 test, while those below 0.6 did not. Materials in the region between these values are borderline, with some causing flashover and some not. An initial assessment of a database of passenger railcar materials using the flammability parameter model revealed that about 50% of materials that meet the NFPA 130 flammability requirements using ASTM E162 have the potential to cause flashover in the NFPA 286 room-corner test.
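
The flammability parameter and its flashover thresholds lend themselves to a small screening calculation. The abstract defines the parameter as the average heat release rate minus the ratio of ignition time to burn duration; since the reported thresholds (0.6, 0.7) are dimensionless, the sketch below assumes the heat release rate is first normalized by a reference value (`hrr_ref`, an assumption made here, not the paper's actual scaling), and the input values are invented for illustration.

```python
def flammability_parameter(avg_hrr, t_ignition, t_burn, hrr_ref=500.0):
    """Flammability parameter per the paper's definition: average heat
    release rate minus the ratio of ignition time to burn duration.
    Normalizing by `hrr_ref` (kW/m^2) is an assumption made here so the
    result is dimensionless; the paper's actual scaling may differ."""
    return avg_hrr / hrr_ref - t_ignition / t_burn

def flashover_risk(fp):
    """Classify using the thresholds reported for NFPA 286 room-corner tests."""
    if fp > 0.7:
        return "likely to cause flashover"
    if fp < 0.6:
        return "unlikely to cause flashover"
    return "borderline"

# Hypothetical cone calorimeter results at 50 kW/m2 exposure.
fp = flammability_parameter(avg_hrr=450.0, t_ignition=20.0, t_burn=200.0)
print(round(fp, 2), flashover_risk(fp))  # → 0.8 likely to cause flashover
```
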

Topics: Heat, Finishes
2018;():V001T06A019. doi:10.1115/JRC2018-6248.

This paper provides an overview of the design of natural ventilation systems to control smoke movement in rail tunnels. The paper discusses the current industry standards and design requirements for tunnel emergency ventilation systems, and then addresses the various technical elements that are used to design such systems. These technical elements include parameters in the direct control of the designer, as well as those that are beyond the control of the designer. The paper also presents a case study where various physical design elements are utilized to create a working natural ventilation smoke control system for a short rail tunnel.

2018;():V001T06A020. doi:10.1115/JRC2018-6251.

The objective of this research is to use testing and modeling to quantify realistic exposure fires under a railcar and compare these exposures to those in a fire resistance test. A series of tests was conducted on a railcar floor mockup, scaled to 40% of a full railcar width, exposed to fire from below. Two fires were considered: one representing a 1.9 MW diesel fire (e.g., resulting from a ruptured fuel tank) and another representing a 0.3 MW trash fire (e.g., resulting from a collection of trash and debris under the railcar). Two geometric configurations were tested: a floor with equipment-box obstructions and a flat floor without undercar obstructions. For the diesel fire, the heat flux directly above the fire reached 75 kW/m2 for the flat configuration and 95 kW/m2 for the obstructed configuration, while gas temperatures directly above the fire reached 750°C and 950°C, respectively. Temperatures and heat flux varied greatly over the floor geometry for the realistic fires, resulting in thermal gradients that are not characteristic of a fire resistance test.

Computational fluid dynamics simulations were used to model these different fire exposures under the railcar floor mockup as tested. The fire dynamics predicted were consistent with those measured. In the region of the mockup where the fire plume impinges, heat flux was predicted to within 11–22% of that measured. In the surrounding regions of the mockup, heat flux was predicted to within 22–40% of measured values. This level of agreement is appropriate for large-scale fire experiments, and the results demonstrate that the model is validated for use in the configurations considered in this study.

Topics: Fire
2018;():V001T06A021. doi:10.1115/JRC2018-6252.

This paper focuses on the safety of passenger trains approaching a terminal station with a bumper block/post. As evidenced by the collision of a commuter train at Hoboken Terminal on September 29, 2016, the consequences of a collision with a bumper post can be catastrophic; however, railroads can take preventive measures to reduce the risk. Case studies obtained from the National Transportation Safety Board (NTSB) involving similar bumper block accidents are analyzed to identify any potential common denominators.

The objective of this paper is to comprehensively present the various mitigation techniques that railroads can adopt to safeguard their systems against these types of accidents. Although some of the mitigation techniques presented here may already be known generically in the industry, their application specifically to mitigating the hazard of bumper collisions is a novel attempt to treat this topic systematically. Examples of mitigation techniques discussed herein include speed-restricting devices, driver alerting features, bumper blocks with greater impact tolerance, and organizational safety culture. The effect of newer technologies such as PTC (in the USA only) and CBTC on mitigating this hazard, as well as the unique constraints presented at terminal stations, is also assessed.

Topics: Trains
2018;():V001T06A022. doi:10.1115/JRC2018-6257.

Large-scale flammability performance of interior finishes used on railcars has been evaluated in previous studies using the NFPA 286 room-corner fire test, which has a cross-section similar to that of a railcar. In some studies, the wall containing the door was removed to account for the shorter length of the room compared to the railcar. The focus of this study is to assess whether the NFPA 286 standard room-corner test with a door represents the conditions that develop inside a railcar during a fire. The Fire Dynamics Simulator (FDS) was used to model fire growth in the NFPA 286 standard room-corner test with a door, the NFPA 286 room without the wall containing the door, and a railcar geometry with a single door open. All three cases had the same corner exposure fire and the same lining material. Predictions for the NFPA 286 room-corner test with a door agreed well with available NFPA 286 standard test data for gas temperature, heat release rate, and time to flashover. The simulated fire growth inside a railcar with one side door open produced conditions similar to those in the standard NFPA 286 room with a door. For the room with the door-containing wall removed, it was found that removing the wall resulted in non-conservative fire growth conditions, with gas temperature and heat release rate under-estimated compared to the standard NFPA 286 room with a door. These simulations indicate that the standard NFPA 286 room-corner test with a door is representative of the conditions that would develop inside a railcar.

Topics: Fire
2018;():V001T06A023. doi:10.1115/JRC2018-6261.

Although accidents at Highway-Rail Grade Crossings (HRGCs) have been greatly reduced over the past decades, they continue to be a major problem for the rail industry, causing injuries, loss of life, and loss of revenue. Recently, the Strategic Highway Research Program sponsored a Naturalistic Driving Study, the SHRP2 NDS, which produced a unique opportunity to look at how drivers behave while traversing HRGCs. This research deviates from previous studies by concentrating on the day-to-day actions of drivers who traverse HRGCs without incident, instead of focusing on the accident events that have formed the foundation of most earlier studies. This paper will focus on the effects of the external environment, weather and day/night conditions, on driver behavior at HRGCs. We will present the methodology and data used for the study and provide some early results from the analysis, such as differences in compliance during poor versus clear weather. We will use both a compliance score based on scanning and speed reduction and an analysis of brake and gas pedal usage during the approach to an HRGC. The paper will conclude with a brief discussion of future research concepts.

Topics: Highways, Rails

Energy Efficiency and Sustainability

2018;():V001T07A001. doi:10.1115/JRC2018-6120.

A 2 MW Battery Power System (BPS) was installed and tested in a traction power substation on the Orange Line of the Washington Metropolitan Area Transit Authority (WMATA) as a demonstration project. Measurement data were obtained under normal revenue service conditions. In addition, the same installation was tested as an emergency power source for moving trains to desired destinations while the traction power system was under a simulated blackout. The results were analyzed to assess the ability of the BPS installation to achieve energy savings, peak power reduction, train voltage support, and emergency power provision. This paper describes the findings from this demonstration project and the follow-on efforts at WMATA.

2018;():V001T07A002. doi:10.1115/JRC2018-6156.

For railway wireless monitoring systems, energy efficiency is important for prolonging system lifetime and ensuring successful transmission of the inspection data. In general, decreasing the size of a data packet is conducive to reducing transmission energy consumption; hence, inspection data packets should be processed before being transmitted. However, the energy consumption of data processing may itself be considerable, especially for vision-based monitoring systems. We therefore propose an optimization methodology to address the trade-off between data processing and transmission energy in railway wireless monitoring systems. In addition, the varying data types and transmission distances of the sensors may cause unbalanced energy consumption, which shortens the system lifetime as individual sensors fail. To address this challenge, our proposed optimization framework adopts a customized compression ratio for each sensor to balance its energy consumption. On this basis, the system lifetime can be extended by simultaneously minimizing and balancing energy consumption. Finally, we use several generalized numerical examples to demonstrate the superiority and practicality of the proposed strategy. Compared to previous methods in the literature, our approach can increase the service lifetime of wireless monitoring systems while using equal or less energy.
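
The processing-versus-transmission trade-off can be made concrete with a toy per-sensor energy model: deeper compression costs CPU energy but shrinks the radio payload, and the payoff grows with the square of transmission distance. The coefficients, the logarithmic processing-cost model, and the candidate ratio set below are assumptions for illustration, not the paper's actual formulation.

```python
import math

E_PROC = 5e-8   # assumed processing energy, J per bit per unit of log-compression
E_TX = 1e-10    # assumed radio amplifier energy, J per bit per m^2 (d^2 path loss)

def round_energy(bits, distance_m, ratio):
    """Per-round energy at compression ratio `ratio` (output/input size):
    processing cost grows as compression deepens, transmission cost
    shrinks with the compressed payload."""
    processing = E_PROC * bits * math.log(1.0 / ratio)
    transmission = E_TX * distance_m ** 2 * bits * ratio
    return processing + transmission

def best_ratio(bits, distance_m, candidates=(0.05, 0.1, 0.2, 0.4, 0.7, 1.0)):
    """Pick the candidate compression ratio minimizing per-round energy."""
    return min(candidates, key=lambda r: round_energy(bits, distance_m, r))

# A far-off, data-heavy vision sensor should compress harder than a
# nearby low-rate sensor, balancing energy drain across the network.
near = best_ratio(bits=8_000, distance_m=50.0)
far = best_ratio(bits=800_000, distance_m=200.0)
print(near, far)  # → 0.2 0.05
```

Per-sensor ratios chosen this way are one simple route to the balanced energy consumption the paper targets, since distant or data-heavy nodes would otherwise drain first.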

2018;():V001T07A003. doi:10.1115/JRC2018-6163.

Remediation of environmental sites is of concern across the rail industry. Impacted sites may result from releases of chemicals to the environment along active rail lines or in rail yards; historical activities; or through acquisition of impacted property. Management of these liabilities may require investigation, planning, design, and remediation to reduce risks to human health and the environment and meet regulatory requirements. However, these investigation and remediation activities may generate unintended environmental, community, or economic impacts. To address these impacts, many organizations are focusing on the incorporation of sustainability concepts into the remediation paradigm.

Sustainable remediation is defined as the use of sustainable practices during the investigation, construction, redevelopment, and monitoring of remediation sites, with the objective of balancing economic viability, conservation of natural resources and biodiversity, and the enhancement of the quality of life in surrounding communities (Sustainable Remediation Forum [SURF]). Benefits of considering and implementing measures to balance the three pillars of sustainability (i.e., society, economics, and environment) may include lower project implementation costs, reduced cleanup timeframes, and greater benefits, with fewer detrimental impacts, to surrounding communities. Sustainable remediation has evolved from discussions of the environmental impacts of cleanups (with considerable greenwashing), to quantifying and minimizing the environmental footprint and subsequent long-term global impacts of a remedy, and, currently, to incorporating strategies that address all three components of sustainability: environmental, social, and economic.

As organizations expand their use of more sustainable approaches to site cleanup, it is beneficial to establish consistent objectives and metrics to guide implementation across a portfolio of sites. Sustainable remediation objectives should be consistent with corporate sustainability goals for environmental performance (e.g., greenhouse gas emissions, resource consumption, or waste generation), economic improvements (i.e., reduction of long-term liability), and community engagement. In the last decade, several Executive Orders (13423, 13514, and 13693) have provided incrementally advanced protocols for achieving sustainability in government agency and corporate programs.

Resources for remediation practitioners are available to assist in developing sustainable approaches, including SURF’s 2009 White Paper and subsequent issue papers, ITRC’s Green and Sustainable Remediation: State of the Science and Practice (GSR-1) and A Practical Framework (GSR-2), and ASTM’s Standard Guide for Greener Cleanups (E2893-16) and Standard Guide for Integrating Sustainable Objectives into Cleanup (E2876-13). These documents discuss frameworks that may be applied to projects of any size and during any phase of the remediation life cycle, and many provide best management practices (BMPs) that may be implemented to improve the environmental, social, or economic aspects of a project. Many of these frameworks encourage a tiered approach that matches the complexity of a sustainability assessment to the cost and scope of the remediation. For small remediation sites, a sustainability program may include the selection, implementation, or tracking of BMPs. A medium sized remediation site may warrant the quantification of environmental impacts (e.g., air emissions, waste generation, etc.) during the evaluation and selection of remedial alternatives. Often, only large and costly remediation sites demand detailed quantitative assessment of environmental impacts (e.g., life cycle assessment), economic modeling, or extensive community or stakeholder outreach. However, if a tiered approach is adopted by an organization, components of each of these assessments can be incorporated into projects where it makes sense to meet the needs of the stakeholders.

Topics: Sustainability, Rails
2018;():V001T07A004. doi:10.1115/JRC2018-6260.

This paper provides an update to the 2015 paper titled “A New Energy Storage Substation for the Portland to Milwaukie Light Rail (PMLR) Extension” [4] presented at the 2015 JRC in San Jose.

The energy storage substation (ESS), with super-capacitor technology manufactured by Siemens, was installed in place of a utility-connected substation at the Tacoma substation location to capture the energy generated by braking light rail vehicles, store it in the ESS in energy savings mode, and feed it back to the traction power supply during vehicle acceleration. In voltage stabilization mode, the ESS enables the rail system to maintain voltage stability by ensuring that the system voltage remains within the required ranges, preventing disruptions due to low-voltage conditions.

In the fall of 2015, the Tacoma ESS went into service as part of the PMLR Orange Line light rail extension. This paper presents the design concepts for the unit, briefly discusses installation and testing, and focuses on the optimization process, operating experience, energy savings, and reliability. TriMet operates a fleet of 145 light rail vehicles on its 60-mile network. Approximately 75% of the energy regenerated during braking is captured and re-used, saving an estimated $1.8M annually in energy costs. The Tacoma ESS capacity is approximately 2.5 kWh. The unit normally operates in energy savings mode, maximizing recovery and re-use of braking energy, while the secondary voltage stabilization mode is available to maintain system operation during outage conditions. After more than two years of revenue service operation, detailed operating data are presented and analyzed, including reliability information and actual energy and cost savings.


Urban Passenger Rail Transport

2018;():V001T08A001. doi:10.1115/JRC2018-6104.

Safety IDEA Project 31 extends the design process for accessible sleeper compartments to include 3-D digital modeling and anthropometric analyses and uses a full-scale soft mock-up of the sleeper compartment. The use of computer-aided design tools permits human factors constraints, including minimum spatial requirements and reach limitations, to be determined within the conceptual design phase without physical prototyping and data collection. In this ongoing study, physical prototyping in the form of a full-scale soft mock-up is used to validate the digital results. Successful validation would indicate that anthropometric digital human models based on standard anthropometric databases may be used to design for populations with reduced mobility, provided that the wheeled mobility devices are modeled appropriately, and would advance the design process for accessible spaces. The soft mock-up permits spatial evaluation by the general public, including people with disabilities. An online survey is also available to gather feedback on the needs and values of the target population. Representatives of the passenger rail industry are involved throughout the project and have been invited to participate in the evaluation of the soft mock-up. The results of the project are validated designs for new accessible sleeper compartments for bi-level and single-level rail cars, including seating, sleeping, and restroom spaces. These designs will be disseminated for use by the passenger rail industry.

2018;():V001T08A002. doi:10.1115/JRC2018-6119.

The paper addresses the need to examine the trade-offs between passenger safety and independence in travel for people who use wheeled mobility devices on passenger trains. It has been the practice in Asia, North America, and Europe not to require passengers in wheeled mobility devices (WhMDs), such as wheelchairs, to secure their devices when traveling by rail. There are several motivations for examining the need for containment of WhMDs on passenger trains. In general, the population is aging and getting larger, and this is reflected in the types of WhMDs that passengers are trying to bring on board trains. The US Federal Railroad Administration (FRA) and members of the Rail Vehicle Access Advisory Committee (RVAAC) requested a feasibility study on the economic impacts of accommodating two or more wheeled mobility devices in the accessible seating area [1]. The feasibility study indicated that there is space to accommodate two WhMDs without significant loss of revenue seats; however, safety issues have emerged, and these are the basis of this paper. The three research questions addressed include:

I. What is the appropriate interior space that accounts for WhMD maneuvering?

II. What are the appropriate levels of deceleration and jerk to be considered in the vehicle interior for passenger rail vehicles under severe braking?

III. What is the appropriate level of containment for occupied wheeled mobility devices on passenger rail vehicles?

The paper examines research literature and other findings from both North America and Europe that address in part the research questions.

Commentary by Dr. Valentin Fuster


2018;():V001T09A001. doi:10.1115/JRC2018-6105.

Overhead Contact Systems for electric transit vehicles utilize catenary or single contact wire suspended from cantilevers, bracket arms or span wires. For single contact wire, inclined pendulum suspension provides optimal performance for pantograph or trolley pole current collectors, though it is under-utilized in the United States. Typical suspension for single contact wire consists of direct suspension hangers or stitch suspension with steady arms where stagger is achieved by pulling off the contact wire with the hanger (direct suspension) or steady arms (stitch suspension). This results in the full weight of the contact wire in the span length being supported by the stitch or line insulator. This rigid point of attachment results in a heavy, stiff suspension leading to current collector bouncing, arcing and premature contact wire wear as the upward movement of the wire is restricted and a hard spot is created. It also results in excessive sag at elevated temperatures and contributes to an increased angle at the support span approach.

Inclined pendulums can be utilized in constant tension systems or variable tensioned systems where they impart a semi-constant tensioning into the line and keep the wire tension relatively stable over a particular temperature range. The expansion/contraction of the contact wire is taken up in the inclination of the pendulums where they rise or fall so that the tension and sag in the contact wire remains relatively consistent. In addition, they provide less resistance to uplift of the current collectors at the suspension point so that rising of the contact wire occurs as the collector approaches and passes under it. The vertical angle of the contact wire approaching the span support is kept to minimum levels and collector performance during hot weather conditions tends to remain trouble free. Further, the energy wave set up in the wire from the moving collector is not grossly reflected at the suspension point as with direct suspension thus allowing the collector to pass through smoothly without bounce or loss of continuous contact.

This paper describes the benefits of inclined pendulums in constant and variable tensioned systems such as creating a semi-constant tensioning effect, preventing current collector bounce and premature contact wire wear at the supports by reducing the uplift resistance on current collectors. It also provides the least visual obtrusiveness of all the suspension systems. In addition, this paper will present the associated costs of the inclined pendulum suspensions.

Topics: Pendulums
Commentary by Dr. Valentin Fuster
2018;():V001T09A002. doi:10.1115/JRC2018-6122.

At the core of electrified railroad design is the Pantograph and Overhead Contact System (OCS). The design of the OCS has been essentially unchanged for nearly 100 years. The pantograph has undergone structural changes in the last 50 years, but it still functions the same way it did over 100 years ago. The technology has proven to be incredibly successful and reliable. However, the relationship between the pantograph and the OCS has practical limitations that have become evident in recent times, as trains achieve ever faster operating speeds. Currently, the world’s high-speed trains all tend to have cruising speeds of about 220 MPH. This circumstance has been referred to in one paper as the Pantograph Barrier (see Ref #1). The evident limitations on the cruising speeds of electric locomotives come from several causes. Among these are the maintenance condition of the rails; the radius of curves; the use of roadbed super-elevation; and the presence of freight or commuter trains. But arguably the most pervasive limitation is the behavior of the OCS at higher speeds. History demonstrates the inevitable progress to ever faster rail speeds and the need for ever increasing traction power capacity. This paper will explore the evolution of the pantograph and OCS configuration as a means to identify the limitations that are endemic in the design. It proposes a list of design criteria that must guide the further evolution of the relationship between the OCS and the Pantograph. Further, it proposes a new configuration for the OCS and the Pantograph that may form the basis for the further evolution of electric locomotives.

Commentary by Dr. Valentin Fuster
2018;():V001T09A003. doi:10.1115/JRC2018-6133.

Thyristor Controlled Rectifiers offer numerous advantages for traction applications: capital cost savings, increased system throughput, reduced maintenance and, for reversible controlled rectifiers, additional energy and cost savings. Yet controlled rectifier usage has been limited, partially because of testing difficulties. The multi-megawatt power level makes full-power testing in the laboratory impractical. Further exacerbating the issue is the presence of control systems that cannot be tested completely while running with a shorted output. The paper proposes a way out of this conundrum through testing at reduced voltage and current (scaling). Scaling reduces the power requirements by a factor of 50 to 400, making it practical to test both the regulating system and the power circuit performance with a simulated train load current. The scaled voltage/current test verifies the dynamic response under realistic train behavior, the voltage regulation curve, and AC and DC harmonics. The paper proposes scaling tests to verify both forward and reverse operation of the controlled rectifier.
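The power reduction quoted in the abstract follows directly from scaling voltage and current independently. The sketch below illustrates the arithmetic only; the rectifier ratings and scale factors are hypothetical examples, not values from the paper.

```python
# Hedged sketch of the scaling arithmetic described above: if a rectifier
# is tested at fractions kv and ki of its rated voltage and current, the
# test power falls by the factor 1 / (kv * ki). Ratings are illustrative.

def power_reduction_factor(kv, ki):
    """Factor by which test power is reduced relative to rated power
    when voltage is scaled by kv and current by ki (0 < k <= 1)."""
    return 1.0 / (kv * ki)

def scaled_test_point(v_rated, i_rated, kv, ki):
    """Test voltage (V), current (A) and power (W) at the scaled point."""
    v, i = v_rated * kv, i_rated * ki
    return v, i, v * i

# Example: a hypothetical 750 V / 4000 A (3 MW) traction rectifier tested
# at 1/10 voltage and 1/10 current needs only about 30 kW of lab power, a
# 100x reduction, which sits inside the paper's quoted 50-400x range.
v, i, p = scaled_test_point(750.0, 4000.0, 0.1, 0.1)
print(v, i, p)
print(power_reduction_factor(0.1, 0.1))
```

A laboratory supply sized for the scaled point can then exercise the full regulation loop against a simulated train load, which a shorted-output full-current test cannot do.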

Topics: Testing
Commentary by Dr. Valentin Fuster
2018;():V001T09A004. doi:10.1115/JRC2018-6170.

Variations in the operation timetable or schedule of an electrified transit rail system can lead to substantial fluctuations in the power demands of its traction power network. This paper studies the correlation between maximum power demands and timetable perturbations for electrified transit rail systems. Specifically, the operation schedule uncertainties are quantified as two parameters: headway shift and headway perturbation. A computation algorithm is introduced to illustrate how to use these two parameters to construct the worst-case scenario for the maximum power demand of traction power substations. A special type of catenary-free light rail system is then used as an example to illustrate the algorithm and present numerical results.
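The worst-case search over headway shift and perturbation can be sketched in a few lines. This is an illustrative toy, not the authors' algorithm: the rectangular single-train power profile, the train count and all numeric values are assumptions made only to show how perturbed departures can make power bursts coincide at a substation.

```python
# Toy sketch (not the paper's algorithm): enumerate extreme headway
# perturbations to find the worst-case coincident demand at a substation.
import itertools

def train_power(t, depart):
    """Hypothetical single-train draw (MW) seen at the substation:
    a 60 s acceleration burst starting at the departure time."""
    dt = t - depart
    return 3.0 if 0.0 <= dt < 60.0 else 0.0

def peak_demand(headway, shift, perturbation, n_trains=4, t_step=1.0):
    """Max total demand when departures are headway-spaced, offset by
    `shift`, and each perturbed by up to +/- `perturbation` seconds.
    For this piecewise-constant model the worst case lies at a vertex
    of the perturbation box, so only the extremes are enumerated."""
    worst = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=n_trains):
        departs = [shift + k * headway + s * perturbation
                   for k, s in zip(range(n_trains), signs)]
        horizon = max(departs) + 60.0
        t = 0.0
        while t <= horizon:
            worst = max(worst, sum(train_power(t, d) for d in departs))
            t += t_step
    return worst

# With no perturbation a 120 s headway keeps the bursts disjoint; a 60 s
# perturbation lets two bursts coincide and doubles the peak demand.
print(peak_demand(120.0, 0.0, 0.0))   # nominal timetable
print(peak_demand(120.0, 0.0, 60.0))  # perturbed worst case
```

Even this crude model shows why substation sizing based on the nominal timetable alone can understate the true peak demand.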

Commentary by Dr. Valentin Fuster
2018;():V001T09A005. doi:10.1115/JRC2018-6237.

DC High Speed Circuit-Breakers (DC-HSCBs) and Protective Relays (PRs) are present in all sorts of rail infrastructure. National or urban railways, tramways or metros, they all rely on this equipment to protect them from disastrous situations. The circuit-breakers can be installed either on board the trains or on the trackside, in traction power substations. Protective Relays are, in most cases, part of the fixed installation. Needless to say, DC-HSCBs and PRs are critical parts of the rail infrastructure. In the best case, a faulty unit can cause trains to stand still; in the worst case, it can blow up parts of the infrastructure and cause casualties.

The life-cycle of circuit-breakers is long; some of the units still in operation are more than 50 years old. The first digital protective relays were introduced in the eighties. Proper testing of DC-HSCBs and PRs is a hot topic these days as a result of a number of incidents on various LRT and tramway systems (STEVO Electric, 2015). This paper describes some issues with testing and how to resolve them.

Commentary by Dr. Valentin Fuster
2018;():V001T09A006. doi:10.1115/JRC2018-6247.

One method of strengthening low frequency AC railway grids is to upgrade Booster Transformer (BT) catenary systems to Auto Transformer (AT) catenary systems. An AT catenary system has lower equivalent impedance compared to a BT system. Thus, an upgrade makes the existing converter stations electrically closer.

Converter stations may have different types of Rotary Frequency Converters (RFCs) installed in them, and it is not well explored how different RFCs behave and interact during and after a large disturbance.

Using the Anderson-Fouad model of synchronous machines to describe the dynamics of RFCs, several case studies have been performed through numerical simulations. The studies investigate the interactions within and between converter stations equipped with different RFC types, for BT as well as AT catenary systems.

The numerical studies reveal that replacing BT with AT catenary systems results in a more oscillatory system behaviour. This is seen, for example, in the power oscillations between and inside converter stations after fault clearance.
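The qualitative mechanism can be illustrated with a model far simpler than the Anderson-Fouad model used in the paper: two classical swing equations coupled through a line reactance. Every parameter value below is an illustrative per-unit assumption, not data from the study; the sketch only shows how a lower line reactance (the AT-like case) stiffens the electrical coupling and yields more oscillatory post-disturbance behaviour.

```python
# Simplified sketch (classical swing equations, NOT the Anderson-Fouad
# model of the paper): two machines coupled through a line reactance.
import math

def oscillation_crossings(x_line, t_end=5.0, dt=1e-3):
    """Count how often the relative rotor angle crosses its equilibrium
    after a disturbance; more crossings = more oscillatory behaviour.
    All parameter values are illustrative per-unit numbers."""
    H = 3.0                            # inertia constant of each machine (s)
    D = 1.0                            # damping coefficient (p.u.)
    E1 = E2 = 1.0                      # internal voltages (p.u.)
    pm1, pm2 = 0.1, -0.1               # mechanical power in/out (p.u.)
    w_s = 2.0 * math.pi * 50.0 / 3.0   # 16.7 Hz rail grid frequency (rad/s)
    a = w_s / (2.0 * H)
    d1, d2 = 0.3, 0.0                  # initial rotor angles: d1 disturbed
    w1 = w2 = 0.0                      # rotor speed deviations
    d_eq = math.asin(pm1 * x_line)     # steady-state angle difference
    crossings = 0
    prev = d1 - d2 - d_eq
    t = 0.0
    while t < t_end:
        pe = E1 * E2 * math.sin(d1 - d2) / x_line  # transferred power
        w1 += a * (pm1 - pe - D * w1) * dt         # forward-Euler step
        w2 += a * (pm2 + pe - D * w2) * dt
        d1 += w1 * dt
        d2 += w2 * dt
        cur = d1 - d2 - d_eq
        if cur * prev < 0.0:
            crossings += 1
        prev = cur
        t += dt
    return crossings

# Lower line reactance (AT-like) vs higher line reactance (BT-like):
print(oscillation_crossings(x_line=0.2))  # AT-like: oscillatory
print(oscillation_crossings(x_line=0.8))  # BT-like: well damped
```

With the same damping, the stiffer AT-like coupling pushes the electromechanical mode above the damped/overdamped boundary, which is the flavour of the result the numerical studies report.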

Commentary by Dr. Valentin Fuster

Vehicle-Track Interaction

2018;():V001T10A001. doi:10.1115/JRC2018-6102.

The development of test rigs is a crucial activity for understanding the behavior of physical systems. A sloshing cargo in a railway tank car involves the dynamic interaction of the cargo-frame system, characterized by lateral load transfers derived from the motion of the cargo. Such dynamic loads could cause concentrated damage in the rail. While the influence of such dynamic sloshing forces on the wheel load exerted on the rails, affecting the lateral stability of the car and, consequently, the level of stress in the rail, has been recognized in the literature, no experimental validation has been made for either of these situations, namely, the effect of sloshing cargoes on the lateral stability of the cars and on the corresponding stress level in the rails. In this paper, an experimental test rig is proposed to study the liquid cargo–tank car interaction when negotiating a turn. The equipment will provide the means to validate a simplified mathematical model, which will allow extensive parametric analyses.

Topics: Turning , Testing , Vehicles
Commentary by Dr. Valentin Fuster
2018;():V001T10A002. doi:10.1115/JRC2018-6109.

Early detection of rail defects can avoid derailments and costly damage to the train and railway infrastructure. Small breaks, cracks or corrugations on the rail can quickly propagate after only a few train cars have passed over it, creating a potential derailment. The current technology makes use of a dedicated instrumented car or a separate railway monitoring vehicle to detect large breaks. These cars are usually equipped with accelerometers mounted on the axle or side frame. The simple detection algorithms use acceleration thresholds which are set at high values to eliminate false positives. As a result, rail surface defects that produce low amplitude acceleration signatures may not be detected, and special track components that produce high amplitude acceleration signatures may be flagged as defects.

This paper presents the results of a feasibility study conducted to develop new and more advanced sensory systems as well as signal processing algorithms capable of detecting various rail surface irregularities. A dynamic wheel-rail interaction model was used to simulate train dynamics as a result of rail defects and to assess the potential of this new technology on rail defect detection. In a future paper, we will present experimental data in support of the proposed model and simulations.
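The trade-off described above, where a threshold high enough to avoid false positives also misses low-amplitude defect signatures, can be shown in a few lines. This is a minimal sketch, not the paper's algorithm or sensory system; the acceleration values and threshold levels are invented for illustration.

```python
# Minimal sketch (not the paper's method) of threshold-based rail defect
# detection, showing why a high threshold that suppresses false positives
# also misses low-amplitude surface defects.

def detect(accel_trace, threshold):
    """Return sample indices where |axle acceleration| exceeds threshold."""
    return [i for i, a in enumerate(accel_trace) if abs(a) > threshold]

# Synthetic axle-acceleration samples (g): a small surface defect at
# index 2 (0.8 g) and a special trackwork component at index 5 (3.5 g).
trace = [0.1, 0.2, 0.8, 0.1, 0.2, 3.5, 0.1]

print(detect(trace, threshold=3.0))  # high threshold: misses the defect
print(detect(trace, threshold=0.5))  # low threshold: flags both events
```

Note that the low threshold flags the trackwork component as well, which is exactly the false-positive problem that motivates the model-based signal processing the paper investigates.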

Commentary by Dr. Valentin Fuster
2018;():V001T10A003. doi:10.1115/JRC2018-6190.

The Train Energy and Dynamics Simulator (TEDS) is state-of-the-art simulation software, developed by the Federal Railroad Administration (FRA), to study train operation safety and performance as affected by a wide variety of rolling stock, track, train handling and operating configurations. As part of developing TEDS, existing and published data on braking, draft systems and train performance were used for initial validation of TEDS.

This paper describes two revenue service tests conducted to further validate TEDS. The first test was on a loaded unit train, while the second test was on a mixed train with empty and loaded cars and included distributed power in which the remote brake valve was cut in. Collected test data included throttle position, train speed, locomotive power, brake system pressures and coupler forces. Several events from these tests, representing typical train operating scenarios, were selected for comparison with TEDS predicted results. The TEDS predictions matched the measured test data for all of the scenarios simulated, further validating the performance of the software and offering additional assurance on the use of TEDS for simulating performance and safety critical train dynamic behavior.

Commentary by Dr. Valentin Fuster
2018;():V001T10A004. doi:10.1115/JRC2018-6194.

As part of the vehicle qualification process, the Federal Railroad Administration (FRA) currently requires in its track and passenger equipment safety standards that a validated vehicle model be used to demonstrate safe dynamic vehicle response to allowable track geometry variations. Transportation Technology Center, Inc. (TTCI) was contracted by FRA to characterize, model, and analyze a high speed passenger vehicle in order to provide guidance related to the vehicle qualification process. The overall objective of the project was to evaluate the methods required to demonstrate the validity of a vehicle dynamics model and to investigate how different input parameters affect the accuracy of the model. The project consisted of four main tasks: (1) characterize a high speed passenger vehicle; (2) develop a mathematical model of the vehicle using measured parameters; (3) validate the mathematical model using on-track tests; and (4) conduct a sensitivity analysis of the vehicle model to determine the critical parameters.

FRA tasked TTCI with applying the testing and modeling methodology to FRA’s DOTX 216 geometry car. Specific parameters were identified that needed to be measured in order to develop a dynamic vehicle model of the car. A characterization test regime was outlined and performed to determine the necessary mass, stiffness, and damping characteristics, and the measured parameter values were used to create a mathematical model of the vehicle using TTCI’s NUCARS®* dynamic modeling software. A series of on-track validation tests were performed on different tracks at the Transportation Technology Center (TTC) using the DOTX 216 car to facilitate model validation efforts. The model was then used to simulate the on-track testing regime conducted at TTC. Model validation was evaluated using displacement, acceleration, and wheel/rail force measurements. Results from the simulations and test data were compared using multiple methods to demonstrate the validation of the DOTX 216 model. TTCI also conducted a parameter sensitivity analysis using the validated model to assess its sensitivity to changes in different parameter values and to identify the most critical parameters for simulating passenger vehicles. The testing, modeling, and model validation methodology described in this work provide a practical example of developing a validated vehicle model for use in the vehicle qualification process.

Commentary by Dr. Valentin Fuster
2018;():V001T10A005. doi:10.1115/JRC2018-6245.

Analytical work was conducted to study whether movement of liquid in a tank car (or sloshing) could contribute in any way to derailments of trains carrying dangerous goods liquids. A liquid sloshing model was developed for a railway tank car, with formulas generated based on available finite element analysis data. An empty tank car dynamics simulation model validated with measured data was used as the base model to implement the liquid sloshing model. Hundreds of thousands of dynamics simulations were conducted for the tank car with liquid cargo at various fill ratios and with equivalent solid (i.e., rigid) cargo on more than 1000 measured curves. The results show that under some conditions tank car sloshing could increase the risk of derailment. The detrimental effect of tank car sloshing on rail safety increases with increasing outage, trailing tonnage, grade, car length difference, curvature, train speed and track geometry irregularities. Quantitative risk analysis could be improved by considering the effects of tank car sloshing on derailment risk. The findings can be used by regulators and the railroads to improve train marshalling practice and risk mapping of railway networks.

Commentary by Dr. Valentin Fuster

Railroad History

2018;():V001T11A001. doi:10.1115/JRC2018-6180.

This paper provides a review of some of the main themes in North American railroad R&D and technology innovation from the 1970s through 2017. A chronological description identifies some of the principal developments in safety and performance over the years, including the introduction of new technologies and the changes in government and industry priorities and funding for R&D. This includes investments in tank car/hazmat research, maglev, and high speed rail. Key technology introductions, such as automated track and rolling stock inspection systems, are discussed. The evolving and changing roles of the Federal government, the AAR, individual railroads, and the supply industry are described. The paper offers a timeline of key events in railroad R&D and technology introductions, with brief discussions of how each came to pass, the conditions in the industry that drove or enabled them, and the impacts each introduction has had. The paper closes with some thoughts about current trends in technology and railroad R&D and their likely trajectory into the future.

Topics: Railroads
Commentary by Dr. Valentin Fuster
2018;():V001T11A002. doi:10.1115/JRC2018-6259.

Travel by rail in the United States is one of the safest modes of transportation available. On the rare occasion that major accidents do occur, they represent an opportunity for railroads to learn what has happened and what needs to be done to prevent reoccurrence. This paper provides several detailed case studies of noteworthy passenger and rail transit accidents that have occurred in the United States, from the 1960s to the present. It discusses the outcome of these accidents, including changes that were implemented due to lessons learned. It also discusses unique and/or noteworthy aspects of each accident.

Topics: Accidents , Rails
Commentary by Dr. Valentin Fuster
2018;():V001T11A003. doi:10.1115/JRC2018-6267.

This paper begins by examining the fundamental nature of wayside signals and considers the first known signaling practices used to communicate the condition of the track ahead to the train engineer. The principle of wayside signals is to keep trains separated and to provide knowledge of the conditions ahead, including speed and routing information. Most railways have gone through many different evolutions of signals and practices, some driven by railway mergers that in turn reshaped the operating rules. This consistently required changes in the training of locomotive engineers assigned to operate trains within their territory.

This paper will focus on a few transitions between signal types, and on the specific makeups and effectiveness of wayside signals since the beginning of railway signaling in the early 1830s. It starts with the term “High Ball”, which does not refer to the popular drink known today but to the raising of a large ball into the air that could be seen from afar, informing a train of its status relative to the operating schedule. Later, signals were developed to provide the train engineer with the status of the track ahead by dividing the track into short sections. This allowed a track section to be labeled as “occupied” (a train is present) or “unoccupied” (no train is present within the section). Wayside signals have continued to advance, such that today signal aspects (mimicking the wayside signals) are displayed within the operating cab, providing the indication directly to the engineer. As we continue forward, wayside signals have been reduced in number and, in the future, they may exist only in a museum next to the cassette player.

Topics: Signals
Commentary by Dr. Valentin Fuster
