Fe Safe User
3DEXPERIENCE
fe-safe 2024
fe-safe USER GUIDE
©2023 Dassault Systèmes. All rights reserved. 3DEXPERIENCE®, the Compass icon, the 3DS logo, CATIA, BIOVIA, GEOVIA, SOLIDWORKS, 3DVIA, ENOVIA, EXALEAD, NETVIBES, CENTRIC PLM, 3DEXCITE, SIMULIA, DELMIA,
IFWE and MEDIDATA are commercial trademarks or registered trademarks of Dassault Systèmes, a French “société européenne” (Versailles Commercial Register # B 322 306 440), or its subsidiaries in the United States
and/or other countries. All other trademarks are owned by their respective owners. Use of any Dassault Systèmes or its subsidiaries trademarks is subject to their express written approval.
Trademarks
fe-safe, Abaqus, Isight, Tosca, the 3DS logo, and SIMULIA are commercial trademarks or registered trademarks of
Dassault Systèmes or its subsidiaries in the United States and/or other countries. Use of any Dassault Systèmes or
its subsidiaries trademarks is subject to their express written approval. Other company, product, and service
names may be trademarks or service marks of their respective owners.
Legal Notices
fe-safe and this documentation may be used or reproduced only in accordance with the terms of the software
license agreement signed by the customer, or, absent such an agreement, the then current software license
agreement to which the documentation relates.
This documentation and the software described in this documentation are subject to change without prior notice.
Dassault Systèmes and its subsidiaries shall not be responsible for the consequences of any errors or omissions
that may appear in this documentation.
© Dassault Systèmes Simulia Corp, 2023.
Third-Party Copyright Notices
Certain portions of fe-safe contain elements subject to copyright owned by the entities listed below.
© Battelle
© Endurica LLC
© Amec Foster Wheeler Nuclear UK Limited
fe-safe Licensed Programs may include open source software components. Source code for these components is
available if required by the license.
The open source software components are grouped under the applicable licensing terms. Where required, links to
common license terms are included below.
1 Introduction
1.1 Background
SIMULIA, the Dassault Systèmes brand for realistic simulations, offers fe-safe® – the most accurate and
advanced fatigue analysis technology for real-world applications.
fe-safe empowers you to better tailor and predict the life of your products. It has been developed
continuously since the early 1990s in collaboration with industry, ensuring that fe-safe provides the
capabilities required for real industrial applications. It continues to set the benchmark for fatigue
analysis software and is testimony to the fact that not only is accurate fatigue analysis possible, but that
it is possible regardless of the complexity of the model and the fatigue expertise of its users.
fe-safe was the first commercially available fatigue analysis software to focus on modern multiaxial
strain-based fatigue methods. It analyses metals, rubber, thermo-mechanical and creep-fatigue and
welded joints, and is renowned for its accuracy, speed and ease of use.
Consistent and accurate correlation with test results ensures that fe-safe maintains its position as the
technology leader for durability assessment and failure prevention.
fe-safe and the add-on modules fe-safe/Rubber and Verity™ in fe-safe, are available worldwide via
SIMULIA and our network of partners.
For further information, please visit the fe-safe pages of the Dassault Systèmes website.
1.1.1 fe-safe
fe-safe is a powerful, comprehensive and easy-to-use suite of fatigue analysis software for finite
element models. It is used alongside commercial FEA software, to calculate:
where fatigue cracks will occur
when fatigue cracks will initiate
the factors of safety on working stresses (for rapid optimisation)
the probability of survival at different service lives (the 'warranty claim' curve)
whether cracks will propagate
Results are presented as contour plots which can be plotted using standard FE viewers. fe-safe has
direct interfaces to the leading FEA suites.
For critical elements, fe-safe can provide comprehensive graphical output, including fatigue cycle and
damage distributions, calculated stress histories and crack orientation. To simplify component testing
and to aid re-design, fe-safe can evaluate which loads and loading directions contribute most to the
fatigue damage at critical locations.
Sophisticated techniques for identifying and eliminating non-damaged nodes make fe-safe extremely
efficient for large and complex analyses, without compromising on accuracy.
Typical application areas include the analysis of machined, forged and cast components in steel,
aluminium and cast iron, high temperature components, welded fabrications and press-formed parts.
Complex assemblies containing different materials and surface finishes can be analysed in a single run.
For engineers who are not specialists in fatigue, fe-safe will automatically select the most appropriate
analysis method, and will estimate materials’ properties if test data is not available.
Specialist engineers can take advantage of user-configurable features. Powerful macro recording and
batch-processing functions make repetitive tasks and routine analyses straightforward to configure and
easy to run.
fe-safe includes the fe-safe Material Database (see below), to which users can add their own data, and
comprehensive materials data handling functions.
fe-safe also incorporates powerful durability analysis and signal processing software, safe4fatigue (see
below) at no additional cost, on all platforms.
Summary of capabilities
Fatigue of Welded Joints
fe-safe includes the BS7608 analysis as standard. Other S-N curves can be added. fe-safe also has an
exclusive license to the Verity Structural Stress Method developed by Battelle. Developed under a Joint
Industry Panel and validated against more than 3500 fatigue tests, Verity brings new levels of
accuracy to the analysis of structural welds, seam welds and spot welds.
Vibration Fatigue
fe-safe includes powerful features for the analysis of flexible components and structures that have
dynamic responses to applied loading. Random transient analysis and PSDs are amongst the analysis
methods included.
Test Program Validation
fe-safe allows the user to create accelerated test fatigue programs. These can be validated in fe-safe to
ensure that the fatigue-critical areas are the same as those obtained from the full service loading.
Fatigue lives and fatigue damage distributions can also be correlated.
Critical Distance – will cracks propagate?
Critical distance methods use subsurface stresses from the FEA to allow for the effects of stress
gradient. The data is read from the FE model by fe-safe, and the methods can be applied to single
nodes, fatigue hot-spots or any other chosen areas, including the whole model.
Property Mapping
Results from casting or forging simulations can be used to vary the fatigue properties at each FE node.
Each node will then be analysed with different materials data. Temperature variations in service,
multiaxial stress states and other effects such as residual stresses can also be included.
Vector Plots
Vector plots show the direction of the critical plane at each node in a hotspot, or for the whole model.
The length and colour of each vector indicate the fatigue damage.
Warranty curve
fe-safe combines variations in material fatigue strengths and variability in loading to calculate the
probability of survival over a range of service lives.
Damage per block
Complex loading histories can be created from multiple blocks of measured or simulated load-time
histories, dynamic response analyses, block loading programs and design load spectra. Repeat counts
for each block can be specified. fe-safe also exports the fatigue damage for each ‘block’ of loading (for
example, from each road surface on a vehicle proving ground, or for each wind state on a wind turbine).
This shows clearly which parts of the duty cycle are contributing the most fatigue damage. Re-design
can focus on this duty cycle, and accelerated fatigue test programs can be generated and validated.
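The block-damage bookkeeping described above can be sketched as follows. This is a minimal illustration, not fe-safe internals; the block names, damage values and repeat counts are all hypothetical:

```python
# Minimal sketch of damage-per-block accounting: Miner's-rule summation
# of fatigue damage over loading blocks with user-specified repeat counts.
# All numbers below are hypothetical.

def life_from_block_damage(blocks):
    """blocks: list of (name, damage_per_pass, repeats_per_duty_cycle)."""
    per_block = {name: d * n for name, d, n in blocks}
    total = sum(per_block.values())
    # Share of the duty-cycle damage contributed by each block:
    shares = {name: dmg / total for name, dmg in per_block.items()}
    life_repeats = 1.0 / total          # duty-cycle repeats to failure
    return per_block, shares, life_repeats

blocks = [
    # (block name,     damage per pass, passes per duty cycle)
    ("pave",           2.0e-6,          10),
    ("belgian_block",  5.0e-7,          25),
    ("highway",        1.0e-8,          200),
]
per_block, shares, life = life_from_block_damage(blocks)
```

In this made-up example the 'pave' block dominates the duty-cycle damage, so re-design and an accelerated test program would concentrate on it.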
Material database
A material database is supplied with fe-safe. Users can add their own material data and create new
databases. Material data can be plotted and tabulated. Effects of temperature, stress ratio, etc. can be
seen graphically. Equivalent specifications allow searching on US, European, Japanese and Chinese
standards.
Automatic hot-spot formation
fe-safe automatically identifies fatigue hot-spots based on user-defined or default criteria. Hot-spots
can be used for rapid design change studies and design sensitivity analysis.
Manufacturing effects
Results from an elastic-plastic FEA of a forming or assembly process or from surface treatments such as
cold rolling or shot peening can be read into fe-safe and the effects included in the fatigue analysis.
Estimated residual stresses can also be defined for areas of a model for a rapid ‘sensitivity’ analysis.
Surface detection
fe-safe automatically detects the surfaces of components. The user can select to analyse only the
surface, or the whole model. Subsurface crack initiation can be detected and the effects of surface
treatments taken into account.
Surface contact
Surface contact is automatically detected. Special algorithms analyse the effects of contact stresses. This
capability has been used for bearing design and for the analysis of railway wheel/rail contact.
Virtual strain gauges
Virtual strain gauges (single gauges and rosettes) can be specified in fe-safe to correlate with
measured data. fe-safe exports the calculated time history of strains for the applied loading. FE models
can be validated by comparison with measured data.
Parallel processing
Parallel processing functionality is included as standard – no extra licences are required.
Signal processing
Signal processing, load history manipulation, fatigue from strain gauges, and generation of accelerated
testing signals are among the many features included as standard.
Structural optimisation
fe-safe can be run inside an optimisation loop with optimisation codes to allow designs to be optimised
for fatigue performance. fe-safe interfaces to Isight and Tosca from SIMULIA, and to ANSYS® Workbench.
fe-safe/Rotate
fe-safe/Rotate speeds up the fatigue analysis of rotating components by taking advantage of their axial
symmetry. It is used to provide a definition of the loading of a rotating component, through one full
revolution, from a single static FE analysis. From a single load step, fe-safe/Rotate produces a sequence
of additional stress results as if the model had been rotated through a sequence of different
orientations.
fe-safe/Rotate is particularly suitable where the complete model exhibits axial symmetry, for example:
wheels, bearings, etc. However, the capability can also be used where only a part of the model exhibits
axial symmetry, for example to analyse the hub of a cam. The remainder of the model (the non-axially
symmetric parts) can be analysed in the conventional way.
fe-safe/Rotate is included as a capability in the standard fe-safe. Since it is for use with finite element
model data, it is not available as an extension to safe4fatigue.
fe-safe/Rotate is an integrated part of the interface to the FE model, and is currently available for ANSYS
results (*.rst), Abaqus .fil and ASCII model files only.
Use of fe-safe/Rotate is discussed in section 21.
fe-safe Custom Module Framework (CMF)
fe-safe Custom Module Framework allows users to create and modify fatigue analysis methods.
Confidential algorithms are created in plug-in libraries using a C++ API. Using the Custom Module
Framework, algorithms can be added to those supplied with fe-safe to operate seamlessly in the fe-safe
environment.
fe-safe uses its own powerful fatigue loading capabilities to assemble the tensor time histories, which
are passed to the custom fatigue algorithm. Stress, strain and temperature variation and node-by-node
material property variations are supported, as well as custom FE variables. User-defined material
properties may be retrieved from material databases. After analysis, standard and user-defined
contours, logs and histories are returned to fe-safe to make use of its reporting capabilities.
Batch and distributed processing are also supported.
For further information and assistance with the usage of the API please contact your local SIMULIA
support representative.
1.1.2 safe4fatigue
safe4fatigue is an integrated system for managing advanced fatigue and durability analyses from
measured or simulated strain signals, peak/valley files and cycle histograms. Results may be in the form
of cycle and damage histograms, cycle and damage density diagrams, stress-strain hysteresis loops or
plots of fatigue damage.
safe4fatigue has been optimised for use on Windows and Linux platforms. Interfaces to many common
data acquisition systems and data structures are included. Alternatively, data can be acquired using fe-
safe data acquisition tools.
safe4fatigue incorporates powerful signal processing functionality, including modules for amplitude
analysis, frequency analysis and digital filtering. The signal processing modules can also be purchased
separately, for installations where fatigue analysis is not required.
safe4fatigue includes the fe-safe Material Database (see above), and comprehensive material data
handling functions.
Typical applications of safe4fatigue include automotive and aerospace component validation, ‘road load’
data analysis, on-line fatigue damage analysis, accelerated prototype testing and civil engineering
structure monitoring.
Powerful macro recording and batch processing functions make repetitive tasks and routine analyses
straightforward to configure and easy to run.
safe4fatigue is included in fe-safe at no additional cost.
Signal Processing Reference Manual: based on the course notes for the “Signal Processing” training
course by John Draper.
1.5 A complete copy of the user guide is included in the fe-safe software, via the online help, and
in the fe-safe installation directory in Adobe® PDF format.
1.6.1 Training
Dassault Systèmes SIMULIA provides training courses in:
Theory and Application of Modern Durability Analysis
Practical hands-on fe-safe training
Courses are available in-house and can be tailored to customers’ requirements.
2 Getting started
The licence key determines whether the software runs as fe-safe or safe4fatigue.
If more than one channel of data is selected, stacked plots, overlaid plots or cross-plots can be
produced.
Data can be presented in a tabular numerical format by clicking on the Numerical Display icon.
2.4.2 A simple fatigue analysis from a measured signal using a local strain-life algorithm
This example demonstrates using safe4fatigue to perform a simple fatigue analysis from a measured
signal using a local strain-life algorithm.
Using the default Analysis Range settings ensures that the full time history is included in the analysis.
Determine which output file types should be produced using options in the Output Options area of the
Local Strain Analysis from Time History dialogue.
the message window (e). A summary of the file is shown in the Current FE Models window (d).
Information on named element groups is shown in the Fatigue from FEA dialogue box (a). Elements in
un-named groups are shown as ‘Default’.
See Appendix G, for details regarding interfacing to all supported FE file formats.
(e) For a strain-life analysis (for example, a Brown-Miller analysis), a multi-axial cyclic plasticity
model is used to convert the elastic stress-strain histories into elastic plastic stress-strain
histories. For an S-N curve analysis this step is omitted.
(f) For a shear strain or Brown-Miller analysis, the time histories of the shear and normal strain and
the associated normal stress are calculated on three possible planes. For an S-N curve analysis a
plane perpendicular to the surface is defined, and the time history of the stress normal to this
plane is calculated.
(g) On each plane the fatigue damage is calculated. For each plane the individual fatigue cycles are
identified using a ‘Rainflow’ cycle algorithm, the fatigue damage for each cycle is calculated and
the total damage is summed. The plane with the shortest life defines the plane of crack
initiation, and this life is written to the output file.
(h) During this calculation, fe-safe may modify the endurance limit amplitude. If all cycles (on a
plane) are below the endurance limit amplitude, there is no calculated fatigue damage on this
plane. If any cycle is damaging, the endurance limit amplitude is reduced to 25% of the constant
amplitude value, and the damage curve extended to this new endurance limit.
(i) Steps (a) to (h) are repeated for each node.
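Steps (g) and (h) can be illustrated with a short sketch. This is not fe-safe's source code; a simple power-law damage curve S = S0·N^b is assumed for the extended curve, and all property values in the example are hypothetical:

```python
# Illustrative sketch (not fe-safe's algorithm) of per-plane damage
# summation with the endurance-limit modification described in step (h).
# A basic power-law damage curve S = S0 * N**b is assumed.

def plane_damage(cycle_amplitudes, S0, b, endurance_amp):
    """Sum Miner damage for one plane's Rainflow-counted cycle amplitudes.

    If every cycle is below the constant-amplitude endurance limit, the
    plane is undamaged.  Otherwise the endurance limit is reduced to 25%
    of its constant-amplitude value and the damage curve is extended down
    to that new limit.
    """
    if all(a < endurance_amp for a in cycle_amplitudes):
        return 0.0                        # no damaging cycles on this plane
    reduced_limit = 0.25 * endurance_amp
    damage = 0.0
    for a in cycle_amplitudes:
        if a <= reduced_limit:
            continue                      # below the extended curve: no damage
        n_f = (a / S0) ** (1.0 / b)       # cycles to failure from S = S0 * N**b
        damage += 1.0 / n_f               # Miner's rule: one cycle's share
    return damage
```

Over all candidate planes, the plane with the largest summed damage (shortest life) would then define the plane of crack initiation.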
The analysis follows the same sequence as before, with the following exceptions.
Two loading history files will be opened (or one file containing at least two channels of loading
data).
The FEA model will contain at least two stress datasets.
Step 3 is performed twice, i.e.:
i. The first loading file is highlighted, as is the stress dataset to which it is applied. In the
Fatigue from FEA dialogue box (a), the Loading Settings tab is selected, and the Add...
>> A Load * dataset option is used.
ii. The second loading file is highlighted, as is the stress dataset to which it is applied. In
the Fatigue from FEA dialogue box (d), the Loading Settings tab is selected, and the
Add... >> A Load * dataset option is used.
The Analyse button initiates the analysis, as before.
fe-safe will prohibit the use of uniaxial fatigue methods when multiple load histories are applied. This is
because the principal stresses may change their orientation during the loading history.
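A small numerical check makes this concrete. With two unit-load plane-stress states (hypothetical values, not from any fe-safe model) scaled by independently varying load factors, the principal axes rotate through the history:

```python
# Hypothetical demonstration of principal-axis rotation under two
# independent load channels (plane stress, unit-load datasets).
import math

def principal_angle(sxx, syy, sxy):
    """Angle (degrees) of the first principal axis for a 2-D stress state."""
    return 0.5 * math.degrees(math.atan2(2.0 * sxy, sxx - syy))

ds1 = (100.0, 0.0, 0.0)    # load case 1 per unit load: uniaxial in x
ds2 = (0.0, 0.0, 50.0)     # load case 2 per unit load: pure shear

def combined_angle(p1, p2):
    # Superimpose the two unit-load stress states, scaled by the loads.
    s = tuple(p1 * a + p2 * b for a, b in zip(ds1, ds2))
    return principal_angle(*s)

# Three points in the loading history with different load combinations:
angles = [combined_angle(p1, p2) for p1, p2 in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]]
# The principal direction swings from 0° through 22.5° to 45°, so a fixed
# uniaxial stress axis cannot describe the whole history.
```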
valley in the loading. This is the most rigorous assumption. However, the user may request that fe-safe
performs a multi-channel peak/valley extraction on the signals as a default setting. Alternatively, the
user may produce peak/valley signals as a separate operation (see section 10). This will reduce the
analysis time, but may lead to inaccuracies in the calculated lives (see section 4 of the Fatigue Theory
Reference Manual for further discussion of multi-channel peak/valley operations). If the user has
selected the peak/valley option, it is strongly recommended that the analysis is repeated for a selection
of the most critical elements with the peak/valley option turned off, to compare the fatigue lives.
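The idea of a multi-channel peak/valley extraction can be sketched as follows. This is an illustration of the concept, not fe-safe's own algorithm: a sample is kept if any channel has a turning point there, so phase relationships between channels are preserved while intermediate samples are discarded:

```python
# Conceptual sketch of multi-channel peak/valley extraction: keep a
# sample index if ANY channel has a peak or valley there.

def multichannel_peak_valley(channels):
    """channels: list of equal-length sample lists. Returns kept indices."""
    n = len(channels[0])
    keep = {0, n - 1}                       # always keep the end points
    for ch in channels:
        for i in range(1, n - 1):
            # A sign change of slope marks a turning point:
            if (ch[i] - ch[i - 1]) * (ch[i + 1] - ch[i]) < 0:
                keep.add(i)
    return sorted(keep)

ch_a = [0, 1, 2, 3, 2, 1, 0, 1, 2]
ch_b = [0, 2, 1, 0, 1, 2, 3, 2, 1]
kept = multichannel_peak_valley([ch_a, ch_b])   # indices kept across channels
```

Only the indices where either channel turns survive, which is why the reduced signal can still mis-state some cycles: samples that are not turning points in any channel are discarded even though they influence the combined stress state.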
Failure rates
For one or more specified target lives, fe-safe will combine statistical variability of material data, and
variability in loading, to estimate the failure rate. Data from a series of target lives can be used to derive
a ‘warranty claim’ curve. See section 17 for more details.
Haigh diagram
A Haigh diagram, showing the most damaging cycle at each node, can be created and plotted. The
results for all nodes on the model, or on selected element groups, are superimposed on a single
diagram. This provides a visual indication of the stress-based FRFs for the complete model. See section
14 for more details.
Batch analysis
The standard analyses can be re-run interactively or in batch mode. See section 23 for more details.
Elastic-plastic FEA
Elastic-plastic FEA results can be analysed for certain loading sequences. See section 15 for more
details.
Additional effects
Additional scale factors can be included to allow for additional effects (for example size effects,
environmental effects, etc.). See section 5 for more details.
Export diagnostics
Detailed diagnostics can be written to a log file. See section 22 for more details.
5 Using fe-safe
5.1 Introduction
fe-safe is a suite of software for fatigue analysis from finite element models. It calculates:
fatigue lives at each node on the model – and thereby identifies fatigue crack sites;
stress-based factors of strength for a specified target life – these show how much the stresses
must be changed at each node to achieve the design life;
probability of failure at the design life, at each node;
probability of failure at a specified series of lives, to produce a ‘warranty curve’.
The results of these calculations can be plotted as 3-D contour plots, using the FEA graphics or third
party plotting suites. The fatigue results can be calculated from nodal stresses or elemental stresses.
In addition, fe-safe can output:
the effect of each load on the fatigue life at critical locations – to show if fatigue testing can be
simplified, and for load sensitivity analysis;
detailed results for critical elements, in the form of time histories of stresses and strains,
orientation of critical planes, etc.
fe-safe also includes a powerful suite of signal processing software, safe4fatigue (see section 7). This
allows the analysis of measured load histories and fe-safe results output. The facilities include:
plotting and digital listing;
manipulation, for example editing, scaling, filtering, integrating/differentiating;
amplitude analysis, for example Rainflow cycle counting, level crossing analysis;
frequency domain analysis, for example PSD, transfer function;
fatigue analysis for strain gauge data and other time history and Rainflow matrix data.
These methods assume that fe-safe has been installed and configured using the default locations and
may differ when using customised installation configurations.
Figure 5-1
The layout of the user interface can be adjusted to suit user preference and the screen size.
On Windows platforms, the Current FE Models and Loaded Data Files windows support “drag-and-drop”
methods. This means that selecting files in another Windows application (for example Windows
Explorer), and then dragging them into the appropriate fe-safe window can automatically load the files.
When a file is “dragged-and-dropped” to the Loaded Data Files window, the file is added to the list of
available data files.
When a file is “dragged-and-dropped” to the Current FE Models window, fe-safe starts the process of
importing the model.
Tip: If the fe-safe application is not visible, or is partly obscured by another application, then drag the
files to the fe-safe icon on the Windows taskbar, and hover over it for a couple of seconds (without
releasing the mouse button) until fe-safe becomes visible.
necessary, fe-safe will perform a plasticity correction in order to use elastic FE stresses with strain-
based fatigue algorithms.
A description of the loading: load histories can be imported from industry-standard file formats or
entered at the keyboard. Complex loading conditions can also be defined, including combinations of
superimposed load histories, sequences of FEA stresses and block loading. Loading histories and
other time-series data are contained in files referred to as data files.
Materials data: fatigue properties of the component material(s) are required; a comprehensive
material database is provided with fe-safe.
fe-safe endeavours to maintain interface support for the latest versions of Abaqus and third-party FE
packages. Detailed information on interfacing to the various FE data formats, including supported
versions, is given in Appendix G.
If Read forces from FE Models is selected then all force datasets will be selected.
When pre-scanning files, all datasets will be located and the basic information extracted. A maximum of
256000 datasets can be pre-scanned; an error message is shown if an attempt is made to load more than
256000 datasets. The Select Datasets to Read dialogue will then be displayed showing all datasets
for the selected position, see Figure 5-3. For each load step a separate line appears in the pre-scan list,
which acts as a header for all increments and datasets identified in this step. For general details on the
pre-scan file, see Appendix E.
The Positions combo box lists all nodal and elemental locations that contain datasets. Changing the
Positions combo box will change the datasets displayed in the Datasets list, see Figure 5-3.
The checkboxes in the Quick select section, together with the Apply to Dataset List button, can be used
to select ranges of datasets. Otherwise, datasets can be selected manually.
Figure 5-3
Each time a model is opened, the user is prompted to define the units.
Figure 5-4
For stresses the units can be MPa, kPa, Pa, psi or ksi. For strain the units can be strain (m/m) or
microstrain (µε). For temperatures the units can be °C, °F or Kelvin. For forces the units can be N, kN,
MN, lbf or klbf. For distance the units can be mm, m or in. For all the above unit types a user-defined
unit can be set, which requires configuring a conversion scale factor to SI units (MPa, strain, °C, N and
mm). The units are then displayed in the Current FE Models window.
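The user-defined unit option amounts to a single conversion scale factor to the SI unit, which can be sketched for stresses as follows. The "bar" entry stands in for a hypothetical user-defined unit; the other factor values are standard conversions:

```python
# Sketch of unit handling via scale factors to the SI stress unit (MPa).
# "bar" illustrates a user-defined unit with its conversion factor.

SCALE_TO_MPA = {
    "MPa": 1.0,
    "kPa": 1.0e-3,
    "Pa":  1.0e-6,
    "psi": 6.894757e-3,
    "ksi": 6.894757,
    "bar": 0.1,          # example user-defined unit: 1 bar = 0.1 MPa
}

def to_mpa(value, unit):
    """Convert a stress value in the given unit to MPa."""
    return value * SCALE_TO_MPA[unit]

stress_mpa = to_mpa(30.0, "ksi")   # a 30 ksi stress expressed in MPa
```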
When the model is imported, pertinent data extracted from the model is written to the “Loaded FE
Model” FED directory (see Appendix E) in the project folder. The FED directory stores stress, strain, force
and temperature data extracted from the imported FE model.
As data is being extracted from the FE model, the message log reports:
the names of element or node groups (for nodal datasets node groups are imported, for elemental
datasets element groups are imported);
maximum and minimum direct and shear stresses in each dataset;
a summary of the temperature datasets found.
Referencing datasets
In all cases, the index used to reference stress and strain datasets is the one displayed in the Current FE
Models window, which may not be the same as the step number in the source FE model file. Note also
that the numbering of stress datasets in the open FE model may change, for example if the model is re-
imported after the status of the Read strains from FE models option (in the General FE Options dialogue)
is changed.
fe-safe extracts group information for both element and node groups in the source FE model.
A summary of the element or node groups is displayed in the Current FE Models window by expanding
the Groups list.
Tip: When pre-scanning is enabled, read just the group information from the first file by deselecting all
the datasets in the file.
Figure 5-5
A checkbox in the top left-hand corner of the dialogue toggles between viewing all the groups and only
those compatible with the loaded model. While incompatible groups may be used in an analysis, they
require greater overhead to process and can only be used if the FE mesh is available. When a model contains a
large number of groups it may become difficult to locate those of interest. To simplify the navigation a
filter can be applied to the list of groups. This filter is case insensitive and does not support the use of
wildcards.
User-defined ASCII element/node groups can be imported and exported, using the Load and Save
buttons respectively, or they can be created directly through the Basic Group Creation and the Advanced
Group Creation options at the bottom of the dialogue. These are described in the next section.
Individual or multiple groups can be moved between the list of Unused Groups on the left and the list of
Analysis Groups on the right, by first selecting the groups to move and then clicking on the appropriate
transfer button.
Groups in both lists can be renamed as required, within the naming conventions described in section
5.5.2 above, by selecting a group in either list box and clicking the Properties button. This opens the
Group Properties dialogue shown in Figure 5-6 below, where the new name can be set in the User Name
field.
Figure 5-6
The Group Properties dialog also contains read-only fields with the original group name and the source
file of the model.
Groups to be analysed can be re-ordered (promoted / demoted); the importance of the group ordering is
discussed further in section 5.6.9 below.
Any changes made can be applied by clicking either the Apply or OK buttons, which will result in the
groups from the Analysis Groups list being added to the Group Parameters table within Fatigue from FEA
dialogue.
Figure 5-7
Alternatively, user-defined groups can be created directly through the Basic Group Creation and the
Advanced Group Creation options in the Select Groups to Analyse dialogue, see Figure 5-5.
New groups can be added to the Unused Groups list box on the left by using the Basic Group Creation
options Merge and Surface or by using the more complex but flexible Advanced Group Creation
equation editor.
The two basic options allow one to create a union of two or more groups selected from the list of groups
or an intersection of the selected groups with the SURFACE group of the loaded model. This second
option will only succeed if the Detect surface option was selected when loading the model.
The equation editor allows boolean operators to be used in creating new groups from the existing ones.
Double clicking the group name in either of the Unused Groups or Analysis Groups lists will insert it in
the equation editor. The following boolean operators can be typed in or inserted using the relevant
buttons:
AND – intersection of two or more groups
OR - union of two or more groups
XOR – the exclusive or of two or more groups
NOT – excludes IDs from the selected group
Additionally parentheses can be used to further refine the equation.
Individual item IDs can be manually entered in the equation, delimited by a comma or the OR operator.
Adding a continuous list of IDs can be simplified by using a hyphen, e.g. 1-5 will create a group
comprising IDs 1, 2, 3, 4 and 5. A list of IDs incrementing or decrementing by a fixed amount can be
specified by adding the increment within parentheses, e.g. 1-5(2) will create a group comprising IDs
1, 3 and 5.
The wildcard operator * can be used to create unions between multiple groups. For example inserting a
* character alone in the equation field will create a new group comprising the union of all items within
groups in the current model.
Radio buttons at the bottom of the dialogue are used to indicate if the new group is to be nodal or
elemental. This choice will determine the item type when no type is specified. To mix element and node
IDs in the same group, prefix ‘e’ to element IDs and ‘n’ to node IDs, e.g. e1-10, n100 will create a group
with elements 1 to 10 and node 100. Note that element and node IDs will only be checked against the
mesh at analysis time; thus the group operators (AND, XOR, etc.) will not consider whether a node is on
an element.
The source field shown in Properties, see Figure 5-6 above, for a user-defined group will contain the
equation string rather than the path to the parent model.
Figure 5-8
Group properties for nodes and elements in multiple groups are handled as described in section 5.6.9
below.
Figure 5-9
When surface detection is successfully completed, new element and nodal groups will be created, named
ELEMSURFACE and NODALSURFACE respectively, and a new entry named Surface will be added to the
Assembly section in the Current FE Models window, see Figure 5-10 below.
Figure 5-10
The subgroup option (i.e. analysis of the surface elements or the whole group) for an element group is
defined by double-clicking on Subgroup in the Group Parameters region of the Fatigue from FEA
dialogue. A dialogue box will appear where one of the two options must be selected.
algorithm to be used in the Group Algorithm Selection dialogue. Clicking the button displays a
drop-down menu of available fatigue algorithms.
The algorithms available in fe-safe are discussed in more detail in the following sections:
Figure 5-12
These files are stored in the \surface_finish subdirectory of the fe-safe installation directory, and
their format is described in Appendix E.
The surface finishes defined in the UNI 7670 definition file are shown in Figure 5-13 below. When a
surface finish type is selected from a list (see Figure 5-12, left) the material’s UTS is used to derive the
value of Kt from the selected curve.
1 UNI 7670, Meccanismi per apparecchi di sollevamento, Ente Nazionale Italiano Di Unificazione, Milano, Italy.
2 Data extracted from “Fundamentals of Metal Fatigue Analysis”, Bannantine, Comer and Handrock – page 13.
3 Data extracted from “Fundamentals of Metal Fatigue Analysis”, Bannantine, Comer and Handrock – page 14.
4 Data extracted from ”Maschinenelemente Band 1”, Niemann, Winter & Höhn – chapter 3.
5 Data based on calculations in FKM Guideline 6th Edition, 2012 – Section 4.3.1.4
Sample surface finishes defined in the Rz range definition file are shown in Figure 5-14 below. A surface
finish definition file is firstly selected from a list, and then the specific surface finish value is entered in
the Rz range field (see Figure 5-12, right). A new surface definition curve is generated by interpolating
the existing data for the defined Rz value and the material’s UTS is used to derive the value of Kf from
the generated curve. The surface finish factor can then be obtained by Kt=1/Kf.
Surface finish factors are applied using a multiaxial Neuber’s rule: the elastic stress is multiplied by the
surface finish Kt and this stress is used with the biaxial Neuber’s rule to calculate elastic-plastic stress-
strain. This means that surface finish effects are more significant at high endurance where the stresses
are essentially elastic.
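As a minimal uniaxial sketch of this idea (fe-safe's actual implementation is multiaxial and is not reproduced here), the hypothetical function below scales the elastic stress by the surface finish Kt and solves Neuber's rule against a Ramberg-Osgood cyclic stress-strain curve by bisection. All parameter values are illustrative.

```python
def ramberg_osgood_strain(stress, E, K_prime, n_prime):
    """Cyclic stress-strain curve: total strain for a given stress."""
    return stress / E + (stress / K_prime) ** (1.0 / n_prime)

def neuber_surface_stress(elastic_stress, kt, E, K_prime, n_prime):
    """Solve sigma * eps = (Kt * sigma_e)^2 / E by bisection.

    A uniaxial sketch only; fe-safe applies a multiaxial/biaxial form of
    Neuber's rule which is not reproduced here.
    """
    target = (kt * elastic_stress) ** 2 / E
    # The elastic-plastic stress cannot exceed the scaled elastic stress.
    lo, hi = 0.0, kt * elastic_stress
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * ramberg_osgood_strain(mid, E, K_prime, n_prime) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with E = 200000 MPa, K′ = 1000 MPa, n′ = 0.2, an elastic stress of 300 MPa and Kt = 1.2, the corrected stress falls below the scaled elastic stress of 360 MPa, reflecting plastic relaxation.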
Since the surface finish is a stress-dependent property, the surface finish factor can be used to
incorporate other stress-dependent phenomena, e.g. a size factor. To incorporate multiple stress-
dependent properties, simply multiply the scale factors for each property together and enter the
product as a user-defined surface finish factor.
Figure 5-15
The residual stress can be defined in units of MPa or ksi, and is assumed to be constant in all directions
in the plane of the surface of the component.
No elastic plastic correction is applied to this stress value. The value is applied by adding it to the mean
stress of each cycle when calculating the life. For Factor of Strength (FOS) analyses (see section 17) the
residual stress is not scaled by the FOS scale factor.
Residual stresses can also be included as an initial stress condition in a fatigue loading.
Figure 5-16
where the ** indicates that the group has parameters that are different to the Default group.
Example 1:
If node 888 belongs to groups grp_3 and grp_4, then node 888 will take the properties of grp_3,
the highest group in the list containing that node.
Example 2:
If node 999 belongs to groups grp_1, grp_2, and grp_4, then node 999 will take the properties of
grp_1 (which are the same as the Default group).
Example 3:
If all groups, with the exception of grp_1, are set to 'Do not analyse', then node 888 will not be
analysed, as grp_3 was set not to be analysed, and node 999 will take the properties of grp_1.
Example 4:
If all groups, including the Default group, with the exception of grp_4 are set to 'Do not analyse',
then neither node will be analysed as grp_1 and grp_3 were both set not to be analysed and they are
higher on the list than grp_4.
Promoting grp_4 to the top of the list will ensure all the nodes in that group take the properties of
grp_4. This can be accomplished using the Manage Groups dialogue described in section 0. Once
grp_4 is promoted, the table will appear as:
Figure 5-17
Example 5:
If all groups including the Default group, with the exception of grp_4 are set to 'Do not analyse',
then both nodes will take the properties of grp_4 as it is higher on the list than grp_1, grp_2, or
grp_3.
Determining which group the properties for a node or element came from can be done using a request
to export nodal information described in section 22.
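The precedence behaviour in the examples above amounts to "the highest group in the table containing the item wins". A minimal sketch, using an illustrative data structure that is not fe-safe's internal representation:

```python
def group_for_item(item_id, ordered_groups):
    """Return (name, properties) of the first group in table order that
    contains the item; fall back to the Default group."""
    for name, members, props in ordered_groups:
        if item_id in members:
            return name, props
    return "Default", {}

# Groups as in the examples above: node 888 is in grp_3 and grp_4,
# node 999 is in grp_1, grp_2 and grp_4.  grp_3 is set to 'Do not analyse'.
groups = [
    ("grp_1", {999}, {"analyse": True}),
    ("grp_2", {999}, {"analyse": True}),
    ("grp_3", {888}, {"analyse": False}),
    ("grp_4", {888, 999}, {"analyse": True}),
]
```

With these groups, node 888 resolves to grp_3 and node 999 to grp_1, matching Examples 1 and 2; because node 888 resolves to grp_3, its 'Do not analyse' setting wins even though grp_4 would analyse it, matching Example 4.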
The fe-safe Project Definition file saves references to locations of the files used in the analysis (e.g.
source FE model file, the fatigue loading definition file, etc.) as follows:
if the files used were placed outside the Project Directory, absolute paths are used, e.g.:
D:\Data\Files_Repository\FEA_Files\Project99\my_file.op2
if the files used were placed inside the Project Directory, relative paths are used, e.g.:
jobs\job_01\fe-results\fesafe.fer
A loading definition file (extension .ldf) will also be created at the same time as the project
definition file, if a current.ldf file (for the current job) is used. This file will have the same root name
as defined for the project definition file above, but with extension .ldf.
Configuration settings can be retrieved using the Open FEA Fatigue Definition File... option. A dialogue
appears giving the user the option to reload the finite element model (or models) if required, for
example:
Figure 5-18
When the file is opened, the loaded settings will overwrite the current project and job settings. As the
file is opened, any paths defined in the file are interpreted assuming the following path hierarchy:
Absolute path (as defined in the .stlx file)
Location of the .stlx file
Current project path
Any paths defined in the referenced .ldf file will also be interpreted in a similar way and the loading
definition will then be saved as the new current.ldf (for the current job).
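The path hierarchy above can be sketched as a small search function. The `exists` hook is an illustrative device for testing, not part of any fe-safe API:

```python
import os

def resolve_path(path, stlx_dir, project_dir, exists=os.path.exists):
    """Resolve a file reference using the documented hierarchy: the path as
    given (absolute), then relative to the .stlx file, then relative to the
    current project path. Returns the first candidate that exists."""
    for candidate in (path,
                      os.path.join(stlx_dir, path),
                      os.path.join(project_dir, path)):
        if exists(candidate):
            return candidate
    return None
```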
Legacy Keyword format and Stripped Keyword (*.kwd and *.xkwd) files can also be used to open analysis
configuration settings from analyses completed in an earlier version of fe-safe.
Configuration files can be used in command-line or batch processes (see section 23).
The use of configuration files is discussed in detail in Appendix C and Appendix E.
When checked and a model is opened, the groups loaded from that model will automatically be added to
the list of groups in the Group Parameters table in the Fatigue from FEA window. Settings that differ
from the default analysis, material, etc. can be set for each of the groups added. Groups can be added,
removed and reordered via a dialogue accessed from the Manage Groups... button in the Fatigue from
FEA window.
Pre-scan options
Always pre-scan: files will be pre-scanned automatically without prompting the user.
Do not pre-scan: files will not be pre-scanned and the whole file will be loaded each time.
Prompt to pre-scan: the user will be asked each time whether the file(s) should be pre-scanned (only if the
pre-scan file is invalid or not present).
Default pre-scan window options
These options control the default settings for the Read geometry and Detect surface checkboxes in the
Select Datasets to Read dialogue, see Figure 5-3. By default both these options are selected.
Read strains from FE Models
Checking this option reads strains as well as stresses from the FE model. These are used only when
performing strain-based analysis. Note that this option takes effect the next time a model is loaded and
is not applicable to pre-scanned models.
Read forces from FE Models
Checking this option reads forces as well as stresses from the FE model. These are used only when
performing a Verity analysis to get structural stresses. Note that this option takes effect the next time a
model is loaded and is not applicable to pre-scanned models.
Surface finder options
These options are used to configure the surface-finder algorithm: surface elements can be defined as
having either at least one surface node (has one or more nodes on the surface option) or at least one
surface face (has one or more faces on the surface option).
Additionally, the following elements with non-solid geometry can be treated as surface elements: planar
elements (2D elements), elements with reduced geometry (Beam, Pipe, Shell) and any other elements not
classified by fe-safe (Unclassified). Note that when elements with non-solid geometry are used in
conjunction with the has one or more nodes on the surface option, all solid elements sharing their
nodes will be set as surface elements as well.
Skipped nodes are not exported: nodes not analysed (e.g. when analysing weld lines) will not be
assigned any value.
This scale factor is applied to the imported stresses to allow additional phenomena to be incorporated
into the analysis, for example:
corrosion effects
confidence levels
Gating and analysis speed control
The gating parameters allow for a reduction in the amount of time required to run an analysis. A more
detailed discussion about pre-processing time histories is provided in section 13.4.
Gate tensors (as % of max tensor)
Omits samples from the calculated stress or strain histories so as to eliminate cycles whose amplitude is
less than the given percentage of the largest cycle. This is performed separately on each candidate plane
of each loading block. In proportional loading scenarios, moderate gating of the tensors is a safe speed-
up to perform because it ensures that the significant peaks and valleys on each candidate plane are
retained. The default value of 5% can speed up analyses but is not an aggressive value. In non-
proportional loading scenarios when shear-based algorithms (such as Brown-Miller, Fatemi-Socie, etc.)
are applied, the samples where the minimum and maximum normal stresses and strains occur may
differ from the peaks and valleys of the shear strain, and therefore may be missed if gating of tensors is
enabled. Gating of tensors can be used for a quick analysis and followed by a more detailed analysis of
just the critical sections of the component.
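A rough single-channel sketch of amplitude gating is shown below. fe-safe gates on each candidate plane with its own cycle logic; this simplified version just extracts turning points and removes adjacent pairs whose range falls below the gate threshold:

```python
def turning_points(y):
    """Keep the first and last samples plus every slope reversal."""
    tp = [y[0]]
    for i in range(1, len(y) - 1):
        if (y[i] - y[i - 1]) * (y[i + 1] - y[i]) < 0:
            tp.append(y[i])
    tp.append(y[-1])
    return tp

def gate_history(samples, gate_pct):
    """Drop excursions whose range is below gate_pct % of the overall range."""
    threshold = (max(samples) - min(samples)) * gate_pct / 100.0
    tp = turning_points(samples)
    changed = True
    while changed:
        changed = False
        for i in range(1, len(tp) - 2):
            if abs(tp[i + 1] - tp[i]) < threshold:
                del tp[i:i + 2]          # remove the small excursion pair
                tp = turning_points(tp)  # re-extract: neighbours may merge
                changed = True
                break
    return tp
```

For example, with a 20% gate the small 2-to-3 excursion in [0, 10, 2, 3, -10, 0] is removed, leaving the dominant 10 to -10 cycle intact.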
Pre-gate load histories with % gate
Pre-gating of load histories is performed per loading block/event and carries out a multi-channel
peak-valley extraction across all load histories in the block/event. It will generally find the larger cycles but may
produce non-conservative fatigue lives by missing damaging cycles because it does not ensure that all
the significant peaks and valleys on each candidate plane will be retained. Pre-gating of load histories
can be used for a quick analysis and followed by a more detailed analysis of just the critical sections of
the component.
Perform nodal elimination using material’s CAEL
This is also a safe speed-up to perform. For scale-and-combine loading blocks, the worst-case values
in the load histories and the FE datasets are used to estimate the worst possible stress and strain ranges
in the final loading. Any nodes whose worst possible strain or stress range is beneath the constant
amplitude endurance limit of the material are ignored, as no damage can occur. An 80% safety factor is
used.
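The elimination test can be sketched as below. Exactly how fe-safe applies the 80% safety factor is not documented here, so comparing the worst-case range against 80% of the endurance-limit range is an assumption:

```python
def can_eliminate(worst_stress_range, endurance_limit_amplitude, factor=0.8):
    """Skip a node when even its worst possible stress range cannot cause
    damage. The 0.8 factor is assumed to shrink the endurance-limit range
    (twice the amplitude) that the worst case is compared against."""
    return worst_stress_range < factor * (2.0 * endurance_limit_amplitude)
```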
Use trigonometric look-up tables
When the look-up tables are used the trigonometric function results related to the plane being analysed
are acquired from a table of pre-generated values at 0.05 degree intervals.
Disable triaxial stress and strain treatment
When triaxial stresses are detected at a node the critical-plane analysis is performed in each of the
triaxial planes. This can be disabled using this check box. In this case the 'surface' orientation is
identified from the stress tensors (even for strain-based analyses). See Technical Note 3 (TN-003) for an
in-depth discussion of triaxial stress treatment.
Disable failed directional cosines to XYZ
The default method used in fe-safe to evaluate the directional cosines for a node follows the sequence
below:
1. calculate the directional cosines for the largest stress sample in the loading
if 1 fails, then:
2. work through the remaining points in the stress history until a cycle is found for which the
directional cosines are solvable
if 2 fails, then:
3. use directional cosines for the global axis, XYZ.
In most cases this default behaviour will evaluate accurately the directional cosines in either step 1 or
step 2. The user has the option to disable step 3 by selecting Disable failed directional cosines to XYZ.
If this option is selected and the directional cosines cannot be evaluated from the stress history (steps 1
and 2), then the fatigue evaluation for this node is aborted and a "non-fatigue failure" error is recorded
in the log file.
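The fallback sequence can be sketched as follows. The "solvable" test used here (a non-negligible deviatoric part, so that principal directions are meaningful) is an assumption; fe-safe's actual criterion is not documented:

```python
import numpy as np

def direction_cosines(stress_history, tol=1e-9):
    """Follow the documented fallback sequence: try the largest stress sample
    first, then the remaining samples, finally the global XYZ axes
    (returned as the identity matrix)."""
    order = sorted(range(len(stress_history)),
                   key=lambda i: -abs(stress_history[i]).max())
    for i in order:
        s = stress_history[i]
        deviator = s - np.trace(s) / 3.0 * np.eye(3)
        if abs(deviator).max() > tol:
            _, vectors = np.linalg.eigh(s)   # columns are direction cosines
            return vectors
    return np.eye(3)                          # step 3: fall back to global XYZ
```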
temperature limits defined in the material properties. See Section 8.6 for additional details on
extrapolation of material data.
SN data
Checking this item will generate a warning in the case that material data are extrapolated past the SN
data limits defined in the material properties. See Section 8.6 for additional details on extrapolation of
material data.
Stress ratio (R-ratio)
Checking this item will generate a warning in the case that the calculated stress ratio of a loading cycle
is beyond the SN stress-ratio limits defined in the material properties.
PSD section
This tab defines parameters for analysis using PSD data; for more information see section 27.
Figure 5-22
With linear population of missing values the delta between values is uniform, whereas with logarithmic
population of values the values are first converted to logarithmic scale, populated linearly then
converted back to the original scale.
Note that only missing values are populated; if the values of a derived value are changed and the table is
populated a second time, the derived value will not update. If this behaviour is required, any derived
values that need regenerating must be deleted first.
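The two population modes can be sketched for a single column as below (interior gaps only; the end-value extrapolation applied to single columns such as Nf is not reproduced):

```python
import math

def populate_missing(values, logarithmic=False):
    """Fill None entries by interpolating between the nearest defined
    neighbours, either linearly or in log10 space (populate linearly on the
    log scale, then convert back, as described above)."""
    defined = [i for i, v in enumerate(values) if v is not None]
    fwd = math.log10 if logarithmic else (lambda v: v)
    back = (lambda v: 10.0 ** v) if logarithmic else (lambda v: v)
    out = list(values)
    for i, v in enumerate(values):
        if v is None:
            lo = max(j for j in defined if j < i)
            hi = min(j for j in defined if j > i)
            t = (i - lo) / (hi - lo)
            out[i] = back(fwd(values[lo]) + t * (fwd(values[hi]) - fwd(values[lo])))
    return out
```

Linear population of [1, None, 3] gives the uniform delta 1, 2, 3; logarithmic population of [1, None, 100] gives 1, 10, 100.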
Figure 5-23
Nf is a separate data set; as it is a single column, the last two values are extrapolated from the first two.
In this case it would be more appropriate to use logarithmic population of just the Nf column before
linear population of the S values.
For material parameters with constraints on the data, invalid values will be highlighted red.
Figure 5-24
Results directory
This is the default directory for storing the results of signal processing and fatigue analysis from
measured signals, for more details see sections 9, 10, 11 and 12. By default this directory is in the
project directory <ProjectDir>/results, see section 3.
Default values of settings can now be recorded permanently, so that they remain set even after clearing
all settings, see section 5.2.3. The settings in fe-safe are divided into two groups:
Project settings – stores settings related to a particular project, see section 5.6.12. They are
recorded in a series of files under the project directory and are applied wherever that project is
opened, so that the project can be transferred to a different workstation with no extra setup
needed.
User settings – stores user preferences not specific to a single project. They are recorded in the
“user.stli” file in the user directory, and are not typically transferred between workstations.
When clearing settings, the two categories of settings are reset to factory defaults. However, the factory
defaults can be overridden and will apply whenever a new project is started or an existing project is
cleared.
To choose which default settings to record, open the Tools >> Project Default Settings... dialogue (or
Tools >> User Default Settings... dialogue) as shown in Figure 5-25. The tree on the left displays all
settings that are currently different to the factory default. Setting names shown in the list match the
option descriptions used in the GUI for that setting; clicking on a particular setting displays
details of the setting in a panel on the right.
Figure 5-25
The check-boxes next to each setting determine which settings are saved. Only those that are selected
will be recorded in the defaults.
Note: It is currently not possible to control the defaults of group-related settings (e.g. algorithm or
material).
When the Save defaults button is clicked, the selected defaults will be recorded to one of the two
following files (depending on the type):
<UserDir>/project.stld
<UserDir>/user.stld
Import Project … : This allows an archived project to be imported into a new or existing project, see
below
Open FEA Fatigue Definition File … : This can be used to open project settings (see Section 5.6.12)
Save FEA Fatigue Definition File … : This can be used to save project settings in a single file (see
Section 5.6.12)
See Section 23.2 for project command line options, and Section 23.6 for project macro commands.
7 Using safe4fatigue
7.1 Introduction
safe4fatigue is a suite of software for signal processing, graphics display and fatigue analysis of strain gauge
data. The files produced by the fe-safe Exports and Outputs function (see section 22) can also be displayed using
the graphics described in this section.
The functions available include:
File Handling;
Plotting, Printing and Exporting;
File editing;
Amplitude Analysis including File Modification and Digital Filters;
Frequency Domain Analysis;
Fatigue Analysis;
Signal Generation.
A file may contain single or multiple channels of time history data (e.g. measured signals obtained from a data
acquisition system), results produced in safe4fatigue (e.g. a Rainflow cycle histogram, a time-at-level distribution)
and results files produced by the fe-safe Export and Diagnostic options described in section 22.
Section 7.2 gives an overview of the safe4fatigue user interface. Sections 7.3 to 7.6 describe file handling; file
plotting, printing and exporting; and file editing. The Amplitude, Frequency and Fatigue analysis functions, and
Signal Generation, are described in sections 9 to 12.
The Current FE Models window and the Fatigue from FEA dialogue box, normally displayed in fe-safe, are not
required for safe4fatigue analysis; however, the Material Databases window is required for the fatigue analysis functions.
Note that almost all the operations performed in safe4fatigue are written to a macro recording file, and can be
used in batch commands. See section 23 for a description of the macro recording and batch command system.
Files may be plotted by highlighting the file (or the channel in the file) and selecting the icon on the Toolbar,
or selecting View >> Plot (see section 7.5.6)
Multiple files, or multiple channels in files, may be plotted by highlighting the required channels using either the
CTRL key, for highlighting individual channels, or the SHIFT key, for highlighting ranges of channels. This capability
to process multiple files and channels applies to most of the signal manipulation and analysis functions in
safe4fatigue. For example, several channels can be analysed in a single process using the analysis functions
described in sections 9 to 12.
In the following examples a single channel file will be used.
To filter the signal (see section 10) the required channel is highlighted and the required filtering function is
selected. An output file is generated automatically, and its name is displayed in the Generated Results section
of the Loaded Data Files window. The filename shows that the file has been filtered. This information is also
entered into the file header, and can be displayed by accessing the file properties (see section 7.5.21).
To calculate a Rainflow cycle histogram (see section 10) highlight the required signal and select Amplitude >>
Rainflow (and Cycle Exceedence) from Time Histories ….
The results files are generated automatically. The 3-D cycle histogram can be displayed by highlighting the
filename and selecting the Toolbar icon or selecting View >> Plot. This plot can be rotated, scaled and
manipulated (see section 7.5.22).
Results files can be re-scaled, integrated and manipulated using the Amplitude functions (see section 10).
The Loaded Data Files window lists all the open data files. Each data file is the top-level item in the tree and has
a number of signals associated with it as sub-items. Signals can be analysed or plotted by selecting them and then
selecting the required operation. Most operations allow multiple signals to be selected at once, using the standard
Windows functions of <SHIFT> or <CTRL> with mouse clicks.
This window also displays the contents of the Generated Results. Analysis results are placed in the Results
Archive on completion of the analysis. Items in the Results Archive can be plotted and analysed in the same
way as open data files.
A right mouse click over the Loaded Data Files window displays a menu. This duplicates some File menu
options, as well as the following tasks specific to the Loaded Data Files window:
Refresh Refreshes the display of file names.
Expand All Expands all tree items in the window to see the contents of all files.
Collapse All Collapses all tree items to display only file names.
7.5.4 Exit
Select File >> Exit or click the cross in the top right hand corner of the screen to exit fe-safe.
7.5.6 Plot
Select View >> Plot or the main toolbar icon .
This will create a plot window for each of the data signals selected in the Loaded Data Files window.
The plot window toolbar icons provide the following functions:
7.5.11 Print
In the plot window select the toolbar icon to print the active plot window.
7.5.12 Copy
In the plot window select the toolbar icon or select Copy to Clipboard from the context sensitive menu
displayed by right mouse clicking over the active plot window.
The contents of the current plot window are copied to the clipboard for inserting into word processing and
spreadsheet software.
7.5.16 Zooming
The mouse is used to define the required area of the plot.
The Zoom In and Zoom Out options in the context-sensitive menu, displayed by right-clicking over the active
plot window, or the plot window toolbar icons, can be used to zoom in and out of the selected area.
The co-ordinates for the start and end point of the line can be defined.
For line plots this allows the axis limits, log scaling, labels, grids and interpolation modes to be set. For histograms
similar options are available, plus tilt/rotation controls and a check box to toggle between surface and tower plots.
7.5.22 Scrolling/tilting/rotating
These functions are accessed from the plot window toolbar or from the Properties dialogue for a plot window.
For sequential data plots the left and right arrows move forward and backwards one time base.
For histogram plots the left and right arrows control the rotation of a plot and the up and down arrows control
the tilt.
If a histogram is plotted as towers and then the tilt is set to 90, this provides a colour contour plot of the data:
Figure 7.5-13
Enter the text and press OK to add the text to the plot.
7.6.1 Introduction
The file editor is a digital editor that can be used for editing time history and analysis results files. All file formats
can be edited, including matrices from the Rainflow, Markov and other analysis functions. X-Y data files are
excluded.
The editor stores the edits without modifying the input file, until the user selects to exit. An edited file can then be
saved in any supported format. For example, a load history file in ASCII or binary format may be edited then saved
as a binary DAC file. There is no limit to the file length.
A context-sensitive Edit menu is displayed by clicking the right mouse button over the Numerical Listing
window:
After the first piece of data has been edited, the following prompt will be shown:
Figure 7.6-3
Clicking Yes displays the numerical listing next to the signal. The signal is then updated after every edit.
With the cursor over the graphics window, click the right mouse button and select Properties. The properties of
the plotted data can now be edited, for example to plot just the range displayed in the Numerical Listing
window.
8 Material properties
Figure 8-1
Most of the functions described in this section can also be performed using a context-sensitive pop-up menu, which
is available by clicking over the Material Databases window with the right mouse button:
Figure 8-2
The Material Databases window presents the material data in an expandable tree view. Expanding the database
view displays the material records in that database.
Figure 8-3
Similarly, the material’s parameters can be displayed by expanding the material name:
Figure 8-4
Figure 8-5
The new database is added to the tree-view list in the Material Databases window. To add a material to the
database, the Approximate Material function can be used as described in section 8.4.7, below.
Figure 8-6
To filter using a custom sort string, select the Custom option from the Filters drop down menu and then type the
chosen string in the adjacent search box.
To return to showing all materials select the ‘All’ option from the drop down menu.
Figure 8-7
This function uses Seeger’s method (see the Fatigue Theory Reference Manual) to generate approximate fatigue
parameters based on the UTS (tensile strength) and elastic modulus of the material. In this dialogue, the default
system units are used for defining E and UTS. S-N data is also generated.
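For illustration, the Bäumel-Seeger "uniform material law" for plain carbon steels is one published variant of this kind of approximation. The constants below come from that published law; fe-safe's own Seeger implementation and constants may differ, so treat this sketch as illustrative only:

```python
def approximate_steel_parameters(uts, E):
    """Bäumel-Seeger 'uniform material law' for plain carbon steels:
    approximate strain-life and cyclic parameters from UTS and E.
    (Published constants; not necessarily those used by fe-safe.)"""
    ratio = uts / E
    psi = 1.0 if ratio <= 0.003 else 1.375 - 125.0 * ratio
    return {
        "sf_prime": 1.5 * uts,    # fatigue strength coefficient, MPa
        "b": -0.087,              # fatigue strength exponent
        "ef_prime": 0.59 * psi,   # fatigue ductility coefficient
        "c": -0.58,               # fatigue ductility exponent
        "K_prime": 1.65 * uts,    # cyclic strength coefficient, MPa
        "n_prime": 0.15,          # cyclic hardening exponent
    }
```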
Figure 8-8
Figure 8-9
Figure 8-10
The units setting applies only to that material, and applies only to the units used to display and list the material
properties. It has no effect on values stored in the material database, which are always stored in units of MPa and
degrees C.
The parameters E, K’ and n’ are used to define the cyclic stress-strain curve and the hysteresis loops.
The parameters E, sf', ef', b, b2, knee_2nf and c are used to define the strain-life curve. For the strain-life
curve at lives above the specified knee, b2 is used instead of b. This facility is provided to allow for kinks in strain-
life curves observed in some materials. If you do not have such a material you can set the knee to 1e15, then b2
will not be used.
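The curve described above can be sketched as follows. The continuity treatment at the knee (re-anchoring the elastic term so the curve is unbroken) is an assumption about the detail of the implementation:

```python
def strain_amplitude(two_nf, E, sf_prime, b, ef_prime, c, knee=None, b2=None):
    """Strain-life curve ea = (sf'/E)(2Nf)^b + ef'(2Nf)^c.

    Above the knee the elastic slope b is replaced by b2, with the elastic
    term kept continuous at the knee (assumed detail)."""
    if knee is not None and b2 is not None and two_nf > knee:
        elastic = (sf_prime / E) * knee ** b * (two_nf / knee) ** b2
    else:
        elastic = (sf_prime / E) * two_nf ** b
    return elastic + ef_prime * two_nf ** c
```

Setting b2 equal to b (or setting the knee to 1e15, as noted above) reproduces the single-slope curve.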
The Ultimate Tensile Strength, UTS, is used for:
normal stress analyses using Goodman or Gerber mean-stress correction;
any analysis using a user-defined mean-stress correction (see E7.2);
any analysis using a Kt value derived from a curve, (see section 5.5.4).
To define an S-N curve for a material, select the required material in the database. Then double click on either the
sn curve : N Values field, or the sn curve : S Values field. This pops-up an editable table for entering S-N data. If
multiple temperatures have been entered in the Temperature_List field, then the table will have columns for
each defined temperature, for example:
Figure 8-11
Pressing OK, transfers the values to the sncurve : NValues and sncurve : SValues fields as comma-separated lists. If
values are defined for more than one temperature, then the comma separated list of stresses for each temperature
are enclosed in brackets. For the above example the following values are transferred:
Figure 8-13
Edit the list so that it contains an R-ratio corresponding to each S-N curve to be specified, and then click OK.
When either s-n curve: N Values or s-n curve: S Values is double-clicked an editable table will again appear but
now a drop down menu will be available at the top of the window which can be used to select one of the specified
R-ratios. Select each R-ratio in turn to specify stress and N values for each, as before it is possible to specify
different stress values for different temperatures. When all the values have been entered, click OK.
Figure 8-14
When multiple stress ratios have been specified the values will be displayed in the s-n curve R-ratio field as a list of
comma separated values. Values in the s-n curve S Values and s-n curve: N values fields are also displayed as
comma separated lists with values for each R-ratio contained within square brackets and within those values for
different temperatures enclosed in curved brackets (where applicable).
Note that it is possible to specify similar T-N curves derived from torsional loadings. These can be applied in
algorithms which combine shear and normal or hydrostatic stress in variable weightings (e.g. Prismatic Hull, or
Susmel-Lazzarin, or Weld shear methods). Some of these only use the torsional endurance limit, not the full T-N
curve. The curves may be temperature-dependent in the same way as S-N curves, but it is not possible to specify
R-ratio dependent T-N curves, as mean stress effects are usually estimated with respect to normal stress and the
S-N curve. Either the full T-N curve may be specified in a similar manner to the S-N curve, or a constant multiplier
may be supplied, which is used to derive the torsional stress amplitude corresponding to any T-N life value by
applying this factor to the S-N curve.
T-N curve:
tn curve : N Values (units nf; database keyword TN_Curve_N_Values) – life values for the T-N curve
tn curve : T Values (units MPa; database keyword TN_Curve_T_Values) – torsional stress amplitude values for the T-N curve
Or alternatively:
T-N to S-N conversion factor:
TN:s2t (dimensionless; database keyword K_SN_TO_TN) – multiplier to convert S-N curve stress amplitude to T-N curve amplitude for the same life.
If both a T-N curve and a TN:s2t scaling parameter are specified, then the T-N curve will be used.
Note that in earlier versions of fe-safe (prior to 2023-FD01) this parameter appeared in the GUI as TN:k. These
databases will still function correctly as the display name only appears in the GUI and the database parameter
keyword K_SN_TO_TN is unchanged.
As the mean stress effect is usually less prominent in loadings involving compression, separate Walker parameters
can be defined for tensile (stress ratio R≥0) and compressive (stress ratio R<0) loadings.
For further details, see section 14.4.
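The Walker correction with separate tensile and compressive parameters can be sketched as below. Parameter names are illustrative rather than fe-safe database keywords, and the form assumes a positive maximum stress:

```python
def walker_amplitude(s_max, s_min, gamma_tensile, gamma_compressive):
    """Walker equivalent (fully-reversed) stress amplitude,
    s_ar = s_max * ((1 - R) / 2) ** gamma, choosing gamma by the sign of the
    stress ratio R: gamma_tensile for R >= 0, gamma_compressive for R < 0."""
    R = s_min / s_max
    gamma = gamma_tensile if R >= 0.0 else gamma_compressive
    return s_max * ((1.0 - R) / 2.0) ** gamma
```

At R = -1 the correction leaves the amplitude unchanged, whatever gamma is used, since (1 - R)/2 = 1.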
For a local strain analysis the following strain-life parameters must also be defined:
For a Smith-Watson-Topper life analysis the following parameters must also be defined:
Double-clicking on one of these fields displays an editable table for entering pairs of values, for example:
Figure 8-15
These parameters allow a list of Endurance Limit stresses (as maximum stress) and corresponding R values to be
defined. For the above example, the endurance stress is 390 MPa for constant amplitude testing at R=0 and the
endurance stress is 290 MPa for R=-1.
Pressing OK transfers the values to the database fields as a comma-separated list. For the above example the
following values are transferred:
dang van : Endurance Limit Smax (MPa) = 290, 390
dang van : R:SMin/Smax = -1, 0
First define a list of temperatures in the parameter Temperature_List. Double clicking on the
Temperature_List field displays an editable table. Enter the list of temperature values in the table, as shown in
the example below:
Figure 8-16
Pressing OK transfers the values to the Temperature_List field as a comma-separated list, i.e.:
0, 100, 300, 350
Once a temperature list has been entered for a material, each of the fatigue variables defined in 8.5.3 and 8.5.4
require multiple values - one for each temperature. Double clicking on one of these fields displays an editable table
with the correct number of columns. By default, each value is the same, but these can then be edited where
multiple temperature values are known, for example:
Figure 8-17
Pressing OK transfers the values from the table to the selected field, (in this example the Elastic (Young’s) Modulus
field), as a comma-separated list:
69000, 64860, 57270, 49680
These values correspond to the temperatures defined in the temperature list.
Where multiple temperature data is used, each material parameter is linearly interpolated between data points – see
8.6.3.
To define an S-N curve for each temperature, see section 8.5.4.
The parameters E and MATL_POISSON are used to compute the shear modulus. K’ and n’ are used to define the
cyclic stress-strain curve and the hysteresis loops.
The proof stress Rp0.2(MPa) is used to normalise the maximum normal stress and maintain the unit consistency in
the Fatemi-Socie model.
The Fatemi-Socie parameter FS_K describes the influence of the normal stress on fatigue life.
The parameters B0, C0, TAU_F_PRIME and GAMMA_F_PRIME are used to define the torsion strain-life curve.
[Plot: stress amplitude Sa (MPa) against endurance N (cycles), from 1.0E+00 to 1.0E+15 cycles]
Δε/2 = (σf′/E)(2Nf)^b + εf′(2Nf)^c   (equation 3.4 in the Fatigue Theory Reference Manual)
cover values of 2Nf from 1 to the specified endurance limit endurance, so no extrapolation is necessary.
* WARNING * : While processing Element 1.1 data has extrapolated to a lower life than
defined in the material's SN data.
When using a knock-down curve the extrapolation warning limits may be adjusted to the knock-down curve life
limits if they extend further. To ensure the warning is shown when extrapolating beyond the original SN curve data
points, care should be taken to define the knock-down curve within the same limits.
To enable these warnings it is necessary to use the associated checkbox on the Properties tab of the Analysis
Options dialog (accessed from the FEA Fatigue menu). Note that when turned on this can generate a lot of
warnings, but the number of warnings is limited to 500. A final summary of the total number of nodes where
extrapolation occurred will be given at the end of the analysis log.
[Plot: stress amplitude (MPa) vs life (2Nf), log-log axes, with curves at 200°C and 300°C.]
fe-safe also interpolates the yield stress, the ultimate tensile stress, and the endurance-limit endurance.
The interpolation is linear on each parameter. Beyond the extremes of the lowest and highest temperature the
values at the lowest and highest temperatures are used respectively. Each parameter is interpolated independently.
For example, if values of σ'f are defined for 100°C and 300°C:
the value of σ'f at 200°C is the (linear) average of the two specified values;
the value of σ'f at 350°C is the same as the value for 300°C.
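The interpolation scheme described above can be sketched in a few lines (a hypothetical illustration, not fe-safe source code; the pairing of the example modulus values with specific temperatures is assumed):

```python
def interpolate_parameter(temps, values, t):
    """Linearly interpolate a material parameter between defined temperatures,
    clamping to the end values below the lowest and above the highest temperature."""
    if t <= temps[0]:
        return values[0]                     # below the defined range: clamp
    if t >= temps[-1]:
        return values[-1]                    # above the defined range: clamp
    for i in range(len(temps) - 1):
        if temps[i] <= t <= temps[i + 1]:
            frac = (t - temps[i]) / (temps[i + 1] - temps[i])
            return values[i] + frac * (values[i + 1] - values[i])

# Young's modulus list from the example above; the temperatures are assumed
temps = [20.0, 100.0, 200.0, 300.0]
modulus = [69000.0, 64860.0, 57270.0, 49680.0]
```

Each parameter would be interpolated independently in this way, matching the behaviour described in 8.6.3.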
Materials from the highlighted database are displayed in the Material Type drop-down list. A number of different plot
options are available. Some plot options are not applicable to all materials – if an option is not applicable it is
automatically disabled. Any options that are checked but disabled are ignored.
Figure 8-20
For material data defined at multiple temperatures a plot temperature can be defined.
The plot files are added to the Loaded Data Files window and can be plotted and overlaid using the plot functions
described in section 7.5.
[Plot: strain-life curve, strain amplitude εa on log axes.]
Figure 8-21
The equation defining this curve is:
Δε/2 = (σ'f/E)(2Nf)^b + ε'f(2Nf)^c
The strain-life curve can be modified to allow b, and hence σ'f, to have different values above a specified life. This is accomplished by defining the life at the knee (Knee-2nf), and the value of b above the knee (b2). See 8.5.3.
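As a sketch of how the knee modifies the curve (an illustration only; fe-safe's internal handling may differ, and the continuity condition at the knee is an assumption):

```python
def strain_amplitude(two_nf, sf, b, ef, c, E, knee_2nf=None, b2=None):
    """Strain-life curve e_a = (sf/E)(2Nf)^b + ef(2Nf)^c, with an optional
    change of the elastic exponent from b to b2 above the knee."""
    if knee_2nf is not None and two_nf > knee_2nf:
        # assumed continuity at the knee: rescale sf so both branches meet there
        sf_above = sf * knee_2nf ** (b - b2)
        elastic = (sf_above / E) * two_nf ** b2
    else:
        elastic = (sf / E) * two_nf ** b
    return elastic + ef * two_nf ** c
```

Setting b2 = b reproduces the unmodified curve, while b2 = 0 makes the elastic term constant beyond the knee, as in Figure 8-22.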
[Plot: strain-life curves beyond the knee for b2 = 0, b2 = b/2 and b2 = b; strain amplitude εa on log axes.]
Figure 8-22
In the above figure, the strain-life curves for various settings of b2 are shown for a knee in the strain-life curves at an endurance of 2Nf = 10^7 reversals.
[Plot: stress amplitude Sa(SN) (MPa) vs life (2Nf), log-log axes.]
Figure 8-23
The source of this data can be derived from the specified S-N curve, or alternatively from the local strain
parameters using the equation:
Δσ/2 = σ'f(2Nf)^b
fe-safe will select an S-N curve or a σ-2Nf curve depending on the selection of two options:
FEA Fatigue>>Analysis Options…>>Use stress-life curve defined using SN datapoints
FEA Fatigue>>Analysis Options…>>Use sf’ and b if no SN datapoints
An S-N curve will be selected if the Use stress-life curve defined using SN datapoints option is selected, and an S-
N curve is present. This is the only condition for which an S-N curve will be used.
A σ-2Nf curve will be selected if the Use stress-life curve defined using SN datapoints option is not selected.
A σ-2Nf curve will also be selected if the Use stress-life curve defined using SN datapoints option is selected, but there is
no S-N curve present, and the Use sf' and b if no SN datapoints check box is selected. This means that the user
requested an S-N analysis, but as no S-N curve was present fe-safe selected a σ-2Nf curve instead.
If the Use stress-life curve defined using SN datapoints option is selected, and the Use sf’ and b if no SN datapoints
option is not selected, and there is no S-N data present, fe-safe will not start the analysis, and will display a
warning.
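The selection rules above reduce to a small decision function; this sketch simply restates them (the return labels are hypothetical names, not fe-safe output):

```python
def select_curve(use_sn_datapoints, use_sf_and_b, sn_data_present):
    """Return which curve the analysis would use: 'SN', 'sigma-2Nf' or 'error'."""
    if use_sn_datapoints and sn_data_present:
        return "SN"                  # the only condition for using an S-N curve
    if not use_sn_datapoints:
        return "sigma-2Nf"
    if use_sf_and_b:
        return "sigma-2Nf"           # S-N requested but absent: fall back
    return "error"                   # analysis will not start; a warning is shown
```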
If S-N data is used then the label on the material’s data plot is Sa (SN) (as in the above figure). If local strain data
is used the label is Sa (Mat).
Note: S-N data is entered in the material database as Stress amplitude (S) versus endurance Nf cycles. It is always
plotted as Stress amplitude (S) versus endurance 2Nf half-cycles.
[Plot: Smith-Watson-Topper parameter (STW, MPa) on log axes.]
Figure 8-24
σmax·(Δε/2) = ((σ'f)²/E)(2Nf)^2b + σ'f·ε'f(2Nf)^(b+c)
8.7.4 Cyclic and hysteresis loop (‘Twice cyclic stress-strain’) curves
These are plots of the stable cyclic stress-strain curve and the stable hysteresis loop curve.
[Plot: stable cyclic stress-strain curve, stress (MPa) vs strain.]
Figure 8-25
[Plot: cast iron hysteresis loops, stress (MPa) vs strain (µε), showing the graphite effect, bulk response and full response curves.]
Figure 8-26
More details of the equations used in this calculation are provided in the cast iron technical background in section 14.19.
[Plot: Smith-Watson-Topper parameter (STW, MPa) on log axes.]
Figure 8-27
See section 14.19 for more details of the equations used for fatigue analysis of cast irons.
DEFAULT_MSC
# Default MSC or FRF
"C:/SIMULIA/fe-safe/2017_232/database/goodman.msc"
STANDARD_&_GRADE
# BSName
SAE_950C-Manten
Material_Class
# Material Class
Steel (Ductile)
MATL_ALGORITHM
# Algorithm
BrownMiller:-Morrow
MATL_UNITS
# Materials Units
Use system default
Data
# Data_Quality
Use only as an example; Kth; Sbw and Tw have notional values
Comment1
# Comment1
c:\material_data\manten_ref1.html
Revision_Number
# Revision Number
2
Revision_Date
# Revision Date
Wed Jun 10 08:24:28 2015
Revision_History
# Revision History
SN curve modified at v5.01-01
WeibullSlope_BF
# Slope BF
3
WeibullMin_QMUF
# Min QMUF
0.25
TAYLOR_KTH
# Kthreshold@R
5
gi_index
# Grey Iron Index
None
TempList
# Temperature List
0
StrainRateList
# StrainRateList
0
HoursList
# Hours List
0 1
SN_Curve_N_Values
# N Values
1e4 1e7
CPF_TW
# Tw
325
CPF_SBW
# Sbw
325
Const_Amp_Endurance_Limit
# Const Amp Endurance Limit
2.00E+07
MATL_POISSON
# Poissons Ratio
0.33
E
# E
203000
Rp0.2(MPa)
# Proof Stress 0.2%
325
UTS
# UTS
400
K'
# K'
1190
n'
# n'
0.193
Ef'
# Ef'
0.26
c
# c
-0.47
sf'
# sf'
930
b
# b
-0.095
PreSoakFactor
SN_Curve_S_Values
# S Values
363.0 188.3
A material can be imported from a text file using the Material menu item Import Material from Text File. The user is
prompted for the name of the text file to import. The material's name is extracted from the MATERIAL-NAME field. If
a material of the same name already exists, the opened material text file will be archived with the time and date as shown
in Figure 8-28.
Figure 8-28
Figure 8-29
4. Browse to the new .dbase file that was prepared in step 2, and click the Open button.
5. The .dbase file is now shown in the Material Databases window; click the arrow to the left of the folder to
expand the display to show all of the materials.
Figure 8-30
6. An imported database may contain additional materials that do not need to be retained (e.g. duplicates of
materials provided by the local.dbase packaged with the installation of the new version of fe-safe). The surplus
materials can be removed by selecting them in the Material Databases window and either right-clicking on them
and selecting Delete, or pressing the Delete key. As the process cannot be undone, ensure that the correct
material/database is selected before confirming the delete operation with the Yes button on the prompt.
Figure 8-31
8.10 References
8.1 ASME NH, ASME Boiler and Pressure Vessel Code, Division 1, Subsection NH, Class 1 Components in
Elevated Temperature Service, 2001.
8.2 Halford, G. R. and Manson, S. S., Application of a Method of Estimating High-Temperature Low-Cycle
Fatigue Behaviour of Materials, Transactions of the ASME, Vol. 61, 1968.
8.3 Ainsworth, R. A., Budden, P. J., O'Donnell, M. P., Tipping, D. J., Goodall, I. W. and Hooton, D. J.,
Creep-Fatigue Crack Initiation Assessment Procedures, SMIRT 16, 2001, F04/2.
Figure 9.1-1
Example:
Setting the parameters shown in Figure 9.1-1 superimposes two sine waves and one white noise signal, as defined.
The resultant signal is shown in Figure 9.1-2, below:
Figure 9.1-2
Note that for the sine wave function, the specified amplitude is the amplitude of the generated sine wave, whilst for
the white noise function the amplitude refers to the r.m.s. amplitude of the generated Gaussian white noise.
The output signal is written to a DAC format file, and the results added to the Loaded Data Files list. Subsequent
handling of the file (for example plotting, analysis, saving the results as an ASCII file) is discussed in section 7.
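A minimal sketch of the superposition described above (illustrative only; the function and parameter names are assumptions, and the r.m.s. noise amplitude is realised as the standard deviation of Gaussian samples):

```python
import math
import random

def generate_signal(n_points, dt, sines, noise_rms=0.0, seed=0):
    """sines: list of (amplitude, frequency_hz, phase_rad) tuples."""
    rng = random.Random(seed)
    out = []
    for k in range(n_points):
        t = k * dt
        y = sum(a * math.sin(2 * math.pi * f * t + p) for a, f, p in sines)
        out.append(y + rng.gauss(0.0, noise_rms))  # sigma equals the r.m.s. value
    return out
```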
Figure 9.2-1
The function takes a sequence of peak/valleys. A half-cosine is fitted between each peak and valley, by inserting
intermediate data points.
The following parameters can be defined:
the maximum change in value between any two data points (to control the ramp rate);
the minimum number of data points to be inserted between each peak-valley pair (to maintain the shape
of the cosine curve).
The output signal is written to a DAC format file, and the results added to the Loaded Data Files list. Subsequent
handling of the file (for example plotting, analysis, saving the results as an ASCII file) is discussed in section 7.
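The half-cosine fitting can be sketched as follows (a simplified illustration using only the minimum-points parameter; the maximum ramp-rate control described above is omitted):

```python
import math

def half_cosine_fill(peak_valleys, min_points=8):
    """Insert intermediate points so each peak-valley transition follows a half-cosine."""
    out = [peak_valleys[0]]
    for y0, y1 in zip(peak_valleys, peak_valleys[1:]):
        for i in range(1, min_points + 1):
            frac = i / min_points                       # position along the transition
            # half-cosine shape: slope is zero at both the peak and the valley
            out.append(y0 + (y1 - y0) * (1 - math.cos(math.pi * frac)) / 2)
    return out
```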
Figure 9.3-1
For each frequency in the DFT data, a sine wave is constructed with amplitude and phase derived from the real and
complex parts of the input data. The resulting time history signal is produced by combining the sine waves by
superposition, omitting those with amplitude less than ~0.1% of the largest amplitude.
The following parameters can be defined:
the length of the output time history signal in seconds.
the number of data points in the output time history signal per highest frequency cycle.
the label of the Y axis data for the output time history signal.
The output signal is written to a DAC format file, and the results added to the Loaded Data Files list. Subsequent
handling of the file (for example plotting, analysis, saving the results as an ASCII file) is discussed in section 7.
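The reconstruction described above can be sketched as follows (illustrative; the input format and the exact handling of the ~0.1% amplitude cut-off are assumptions):

```python
import math

def dft_to_history(bins, duration, points_per_cycle=16):
    """bins: (freq_hz, real, imag) DFT components. Superpose one sinusoid per
    component, skipping those below ~0.1% of the largest amplitude."""
    comps = [(f, math.hypot(re, im), math.atan2(im, re)) for f, re, im in bins]
    cutoff = 0.001 * max(a for _, a, _ in comps)
    kept = [(f, a, p) for f, a, p in comps if a >= cutoff]
    f_max = max(f for f, _, _ in kept)
    n = int(duration * f_max * points_per_cycle)   # points per highest-frequency cycle
    dt = duration / n
    return [sum(a * math.cos(2 * math.pi * f * k * dt + p) for f, a, p in kept)
            for k in range(n)]
```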
Amplitude
Differentiate (10.3.1). Input: any sequential file type (multi-channel). Parameters: polynomial order (between data points): 1st order or 3rd order. Output: differentiation of the input (.dif, DAC (S)).
Integrate (10.3.2). Input: any sequential file type (multi-channel). Parameters: integration order: #1 (trapezoidal rule), #2 (Simpson's rule) or #3 (3/8th rule); optional integration constant. Output: definite integral of the input (.int, DAC (S)).
Mathematical functions (10.3.3). Input: any sequential file type or range-mean histogram (multi-channel). Parameters: mathematical function: SIN (sine), COS (cosine), TAN (tangent), ASIN (inverse sine), ACOS (inverse cosine), ATAN (inverse tangent), LOG (common logarithm, i.e. log10), 10^X (exponential function base 10), LN (natural logarithm, i.e. loge), EXP or e^X (exponential function), or PI (multiplies input by pi). Output: result of the selected mathematical function (.mth, DAC (S) or DAC (H)).
Scale and offset (10.3.4). Input: any sequential file type or range-mean histogram (multi-channel; see Note 3). Parameters: input constants m, c1, c2 and r must be specified. Output: linear and non-linear scaling of the input, see 10.3.4, below (.dac, DAC (S) or DAC (H)).
Multiply, divide, add or subtract two signals (10.3.5). Input: any sequential file type (2 channels; see Note 4). Parameters: operator: add (+), subtract (-), multiply (×) or divide (÷). Output: result of the selected operation (.dac, DAC (S)).
Concatenate multiple signals (10.3.6). Input: any sequential file type (multi-channel). Output: concatenation of all selected input files in the order they were selected (.dac, DAC (S)).
Spike analysis (10.3.7). Input: any sequential file type (multi-channel). Parameters: number of bins. Output: spike content as a rise-time distribution histogram (.rtd, DAC (S)).
Spike removal (10.3.8). Input: any sequential file type (multi-channel). Parameters: maximum permissible rise. Output: spike-filtered signal, see 10.3.8, below (.dac, DAC (S)).
Frequency
Power spectral density (10.4.2). Input: any sequential file type (multi-channel). Parameters: FFT buffer size (a whole power of 2, between 32 and 2048); buffer overlap (%); normalise analysis; peak hold. Output: power spectral density (PSD) distribution (.psd, DAC (S)).
Cross-spectral density (10.4.3). Input: any two sequential signals of any sequential file type (2 channels). Parameters: FFT buffer size (a whole power of 2, between 32 and 2048); buffer overlap (%); normalise analysis. Outputs: power spectral density (PSD) distribution (.psd, DAC (S)); cross-spectral density (CSD) distribution (.csd, DAC (S)); gain diagram (.gai, DAC (S)); phase diagram (.pha, DAC (S)); coherence diagram (.coh, DAC (S)).
Transfer function (10.4.4). Input: any two sequential signals of any sequential file type (2 channels). Parameters: FFT buffer size (a whole power of 2, between 32 and 2048); buffer overlap (%); normalise analysis.
Cross-spectral density matrix file (10.4.5). Input: any number of sequential signals (multi-channel). Parameters: frequency resolution; specify cross-correlations; output plottable files. Output: power/cross-spectral density distributions for all pairs of input signals in a single ASCII .psd file and (optionally) in separate, plottable ASCII .asc files.
Filtering
Butterworth filtering (10.5.2). Input: any sequential file type (multi-channel). Parameters: filter type: low-pass, high-pass or band-pass; lower cut-off frequency (Hz); upper cut-off frequency (Hz); filter order: #1 (6 dB/octave), #2 (12 dB/octave) or #3 (18 dB/octave); pass-region gain (dB). Output: filtered signal (.dac, DAC (S)).
FFT filtering (10.5.3). Input: any sequential file type (multi-channel). Parameters: definition of up to ten sets of filter coefficients, where each set includes passband region gain (dB), lower cut-off frequency (Hz) and upper cut-off frequency (Hz); filter order. Filter definitions can be saved, loaded and plotted. Output: filtered signal (.dac, DAC (S)).
Note 1: The following descriptors refer to files using the industry standard DAC format - see Appendix E, 205.2.1.
DAC (S) : a single channel sequential file;
DAC (H) : a histogram file;
DAC (XY) : an XY data file;
DAC (Hyst) : an XY data file containing hysteresis loops.
Note 2: Some files require a specified number of input channels. “multi” implies that the function can be applied to multiple sequential input files of mixed formats.
Note 3: The function can be applied to multiple histogram input files.
Note 4: Input files must be of the same length (i.e. contain the same number of data points). If the number of data points is different, then all input signals are cropped to the
same length as the shortest signal.
Note 5: The input file can be of any sequential file type, but must contain PSD information. PSD information produced using one of the frequency-domain algorithms (see 10.4)
will be in DAC (S) format, and have the extension .psd.
Note 6: The input file can be of any sequential file type, but must contain level crossing information. Level crossing information produced using one of the Level Crossing
analysis functions (see 10.3.11 and 10.3.12) will be in DAC (S) format, and have the extension .lca.
10.3.1 Differentiate
This function calculates the derivative of the input using a first or third-order polynomial.
10.3.2 Integrate
This function calculates the definite integral of the input using one of the following methods:
a) Trapezoidal rule (1st order):
∫ xk dt = ((x(k+1) + xk)/2)·dt
b) Simpson's rule (2nd order):
∫ xk dt = ((x(k+2) + 4x(k+1) + xk)/3)·dt
c) Simpson's 3/8th rule (3rd order):
∫ xk dt = ((x(k+3) + 3x(k+2) + 3x(k+1) + xk)·3/8)·dt
where
xk is the kth input data point, and
dt is the interval between data points.
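The three rules can be sketched as a single function operating on a sampled signal (an illustration; fe-safe's handling of leftover points when the sample count does not fit the chosen rule is not documented here, and this sketch simply stops at the last complete group):

```python
def integrate(x, dt, order=1):
    """Definite integral of sampled signal x with spacing dt, using the rule
    indicated by `order` (1 = trapezoidal, 2 = Simpson, 3 = Simpson 3/8ths)."""
    if order == 1:
        return sum((x[k + 1] + x[k]) / 2 * dt for k in range(len(x) - 1))
    if order == 2:   # applied over successive pairs of intervals
        return sum((x[k] + 4 * x[k + 1] + x[k + 2]) / 3 * dt
                   for k in range(0, len(x) - 2, 2))
    if order == 3:   # applied over successive triples of intervals
        return sum((x[k] + 3 * x[k + 1] + 3 * x[k + 2] + x[k + 3]) * 3 * dt / 8
                   for k in range(0, len(x) - 3, 3))
    raise ValueError("order must be 1, 2 or 3")
```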
Limitation
The integration of long data files should be avoided, as even a very small non-zero mean value will cause the
output values to diverge. This effect can be minimised by calculating the mean value of the input file (using the
Statistical Analysis module – see 10.3.23), and subtracting it from each data point (using the Scale and Offset
module – see 10.3.4) to produce a signal whose mean value is close to zero.
Checks are made to ensure the integrity of the scaling operation. An initial check verifies that the scaling
parameters will not cause output values to overflow (i.e. become numerically too large for the computer to
manipulate). A second check prevents negative values being raised to non-integer powers (an operation which is
mathematically undefined).
To distinguish integers from real values in the exponent, r, only the first three decimal places are considered
significant. This avoids unnecessary restrictions caused by rounding.
The first point in the signal is copied to the output file and becomes the current point, P(n).
The signal is read point-by-point and the rise, R, between the current point, P(n), and the next point,
P(n+1), is evaluated.
If the difference, R, is less than the specified maximum permissible rise, Rmax, then the next point
becomes the current point and is written to the output file. Processing continues with the new current
value.
If the difference, R, is greater than Rmax, then the point is considered to form either a part or the whole of a
spike. The current point is held and the next point is incremented to P(n+2).
The rise between the two points, P(n) and P(n+2), is evaluated and compared with twice the maximum
permissible rise value (2×Rmax). If the rise is greater than (2×Rmax) then the next point is incremented
again.
A new rise is evaluated and compared with (3×Rmax).
This process continues until the rise falls below the permitted multiple of the maximum rise. This point is
considered to be the end of the spike.
Assume that a spike is detected between two points P(n) and P(n+m). The module now linearly
interpolates between these two values over (m-1) points and the interpolated values are written to the
output file. The point P(n+m), becomes the current point and is also written to the output file.
The whole process continues with the new current point.
Note:
If the beginning of a spike is detected at point P(n), but the end of the signal is reached (at a point P(n+m)) before
the end of the spike has been determined, then the current data point P(n) is copied to the output file (m) times.
This avoids any inconsistency between the number of data points in the input file and the number of data points in
the output file.
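The procedure described above can be sketched as follows (an approximate reimplementation; boundary behaviour at exact equality with Rmax, and the end-of-signal padding count, are assumptions chosen to keep the input and output the same length):

```python
def remove_spikes(signal, rmax):
    """Replace detected spikes with linearly interpolated values."""
    out = [signal[0]]
    n = 0
    while n < len(signal) - 1:
        if abs(signal[n + 1] - signal[n]) < rmax:
            out.append(signal[n + 1])            # no spike: keep the point
            n += 1
            continue
        # a spike starts at P(n); search forward until the rise from P(n)
        # to P(n+m) falls below m times the permissible rise
        m = 1
        while n + m < len(signal) and abs(signal[n + m] - signal[n]) >= m * rmax:
            m += 1
        if n + m >= len(signal):
            # signal ended inside the spike: pad with the current point
            out.extend([signal[n]] * (len(signal) - 1 - n))
            break
        # interpolate linearly over the (m - 1) interior points, keep P(n+m)
        for i in range(1, m):
            out.append(signal[n] + (signal[n + m] - signal[n]) * i / m)
        out.append(signal[n + m])
        n += m
    return out
```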
Conventional methods approximate time-at-level by counting the data values that fall within any amplitude band,
assuming that the time spent in the band is given by the time between samples. However, such methods tend to
give poor results for short signals.
Instead, this program determines the bins passed through between each data point in the signal, and performs a
linear interpolation to find the time spent traversing each bin.
Figure 10.3.9-1
The time taken for a cycle to cross a particular amplitude band is t. The time spent within a particular amplitude
band for the complete signal is calculated by summing t for all cycles that cross the band.
Figure 10.3.9-2
Because the time spent in a band is dependent on the width of the band, the program produces a time-at-level
density diagram by dividing the time in each band by the width of the band.
The time-at-level result is a distribution whose area represents the total time of the signal (assuming the amplitude
limits encompass the whole signal). The time spent between any two limits is represented by the area between
these limits.
Figure 10.3.9-3
Figure 10.3.9-4
The limits, which can be rounded to give a specific bin width, can be greater than the limits of the input signal or
need not fully encompass the signal. In the latter case, any crossings outside the limits will be ignored.
The program counts the number of times the signal crosses each band in a positive direction. This is equivalent to
DIN45667 which specifies counting positive slope crossings for positive signal values, and negative slope crossings
for negative signal values.
A threshold gate level may be set to reduce the effect of noise in the signal. If noise coincides with a bin boundary,
many crossings may be counted. However, if a gate value is defined, the signal must cross an adjacent bin
boundary for a repeat crossing to be counted.
The figure below shows a level crossing distribution for a Gaussian white noise signal.
From the range and mean, the maximum and minimum values for the cycle can be calculated.
max = mean + range/2        min = mean - range/2
The levels crossed between the cycle maximum and minimum are determined for each cycle in the histogram, to
produce a level crossing distribution.
A level crossing distribution for a cycle histogram will be very similar to that obtained from the original (sequential)
signal – see 10.3.11.
The output of this function may be one or more of the following (user-selectable) options:
Range-mean Rainflow cycle histogram
A range-mean matrix is defined by specifying the number of bins (between 2 and 64), an upper limit and a lower
limit. The width of each bin is (upper limit - lower limit)/(number of bins).
The limits, which can be rounded to give a specific bin width, can be greater than the limits of the input signal or
need not fully encompass the signal. In the latter case, any cycles outside the limits will be ignored. Each range and
mean bin represents the same increment in engineering units. Note that there may be rounding issues around the
bin limits, so care should be taken if specifying limits close to the signal minimum and maximum; for example, if the
exact anticipated maximum is given, the maximum range cycle may be missed, because of rounding errors
in the way that the Rainflow counting is performed on a discretized version of the signal for efficiency. If this
happens, a warning message gives the number of cycles which overflowed the maximum range. If the
lower and upper values are explicitly specified, then it is recommended that a small extra allowance of order R/4096
is made at each bound if it is desired to always count all cycles in the signal (where R is the signal min-to-max
range). The range axis will be between 0 and U-L (where U and L are the upper and lower limits), and the mean
axis will be between L and U. If the values for U and L are not specified explicitly but left at the default Signal Min
and Signal Max, then note that the actual signal bounds applied in the histogram are extended by 5% of the
range at each end. This guarantees that all cycles are counted, but results in bin sizes 10% greater than R/N, where N
is the number of bins. The range axis will extend to 1.1R, and the mean axis will run from Signal Min - 0.05R to Signal
Max + 0.05R.
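The bin layout rules above can be summarised in a short sketch (illustrative only; the function and parameter names are hypothetical):

```python
def histogram_axes(n_bins, lower=None, upper=None, signal_min=0.0, signal_max=1.0):
    """Bin layout for the range-mean histogram. With explicit limits, the bin
    width is (U - L)/n_bins; with the default signal limits, the bounds are
    extended by 5% of the range at each end so that no cycle is lost."""
    if lower is None or upper is None:
        r = signal_max - signal_min
        lower = signal_min - 0.05 * r        # extended bounds guarantee that
        upper = signal_max + 0.05 * r        # all cycles are counted
    bin_width = (upper - lower) / n_bins     # 10% larger than R/n_bins by default
    return lower, upper, bin_width
```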
The range and mean of each closed cycle are determined, and used to position the cycle in the range-mean
histogram.
A gate level may be set to exclude small signal fluctuations. For example, in the following signal extract, the range
A to B is smaller than the gate value, so the peak-valley pair A-B would not be written to the output file.
Gating may be used to reduce the size of a peak/valley file. For a signal from a digital source, gating can be used
to exclude the effects of quantisation noise (which can make almost every point a peak or valley). However, if the
file is to be used in a fatigue analysis, care must be taken not to exclude potentially damaging events. The constant
amplitude endurance limit is not a guide to gate selection, since cycles much smaller than the endurance limit can
cause fatigue damage. For the same reason, it is also potentially dangerous to use gating to produce command
signals for accelerated fatigue tests.
The results of a peak-valley analysis may also be displayed as a peak-valley exceedence diagram. This shows the
number of peaks or valleys which exceed any specified value:
The limits, which can be rounded to give a specific bin width, can be greater than the limits of the input signal or
need not fully encompass the signal. In the latter case, any data-points outside the limits will be ignored. Each
range and mean bin represents the same increment in engineering units.
For the from-to matrix, the algorithm extracts peaks and valleys from the signal. As each turning point is extracted,
the peak and valley are binned in the output matrix.
In the above example, points A and B are binned with point A in the from bin, and point B in the to bin. Then
points B and C are binned with point B in the from bin, and point C in the to bin. The complete signal is analysed in
this way. The result is a ‘from-to’ matrix, as shown in Figure 10.3.19-1. A ‘from-to’ matrix is sometimes referred to
as a Markov matrix.
A gate level may be set to exclude small signal fluctuations. For example, in the following signal extract, the range
A to B is smaller than the gate value, so the peak-valley pair A-B would not be written to the output file.
Range-mean histogram
The matrix for the range-mean histogram is defined in the same way as the matrix for the ‘from-to’ histogram,
above.
For each peak-valley pair, the peak-valley range and mean are calculated, and the peak-valley pair is binned in the
histogram.
For example:
A value is binned for every peak-valley pair in the signal, to produce the range-mean histogram.
10.3.23 Statistics
This function produces a statistical summary for all selected signals. The information is written to an ASCII text file
in a tabular format, and is also displayed in a dialogue box. The name of the text file is displayed in the message log
window.
The following information is produced for each signal (for the selected analysis range):
Max - Maximum amplitude
Max Pos - Position of maximum amplitude (in x-axis units)
Min - Minimum amplitude
Min Pos - Position of minimum amplitude (in x-axis units)
Mean - Arithmetic mean of all data points
SD - Standard deviation of all data points
If y is a data point value and N is the number of points in the signal, then:
mean: ȳ = (Σy)/N
standard deviation: √(Σ(y - ȳ)²/N)
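These two definitions can be sketched directly (population statistics, dividing by N rather than N-1, as in the formulas above):

```python
import math

def signal_stats(y):
    """Mean and (population) standard deviation of a list of data points."""
    n = len(y)
    mean = sum(y) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in y) / n)
    return mean, sd
```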
10.4 Frequency
Figure 10.4.2-1
The user may define
The FFT buffer size in data points (an integer power of 2)
The Buffer overlap (this should be set to 10%)
Normalised analysis
A peak hold PSD instead of an averaged PSD
Whether to include a 10% cosine taper on each buffer.
All plots are shown in Figure 10.4.2-2. The input history was 4 superimposed sine waves of various amplitude and
phase.
[Plots: power spectral density (Power vs Freq:Hz); real and imaginary (brown) FFT components; and FFT magnitude (absReal:FFT) vs Freq:Hz.]
Figure 10.4.2-2
The output from this function is an ASCII file with extension .psd in directory <project_dir>/results/. Its format is
described in Section 27.3.2.
A dialogue appears with the following controls:
A slider control allows selection of the frequency resolution in the output, which should be set fine enough to
distinguish the modes of the Generalised Displacements;
Radio buttons are used to specify whether cross-correlations are required;
A check-box controls output of plottable files. These are not required for frequency-based analysis but are
useful in validating intermediate results. They are ASCII files with extension .asc in directory
<project_dir>/results/. A file is created for each auto- and cross-correlation. These files appear in the
Generated Results part of the Loaded Data Files window.
10.5 Filtering
Figure 10.5.3-1
The current FFT filter is displayed. Clicking Plot Profile creates a gain diagram for the filter, which is added to the
file list in the Loaded Data Files window. The analysis range can also be specified. To filter the signal click OK.
The filter definition can be changed by selecting either Change... (to modify the definition of the current filter) or
New... (to create a new filter definition). These options display the FFT Band Pass Filter Definition dialogue box, as
shown in Figure 10.5.3-2.
Defining an FFT filter
An FFT filter can be defined by:
Opening an existing filter profile definition from an FPD file. The FPD file format is used to save filter
coefficients for the FFT filter.
Opening an existing filter profile definition from a GEN file. This file format is similar to the FPD format, and
is provided for backward compatibility with some earlier fe-safe software.
Opening a gain diagram from a file with extension GAI. A gain diagram can be defined using other signal
processing functions (for example the transfer function) and the file saved (using the Save Data File As
option) as a DAC file with extension *.gai.
Defining new filter coefficients.
Filter coefficients are defined in the FFT Band Pass Filter Definition dialogue box:
Figure 10.5.3-2
The definition is saved using the Save As... option. Saved filter definitions are recalled using the Open... option.
The filter coefficients define the passband, so the gain diagram for the coefficients shown in Figure 10.5.3-2 is as
shown in Figure 10.5.3-3.
Figure 10.5.3-3
11.2.1 Function
Calculates fatigue lives from a time history, using a material’s stress-life (S-N) curve. Input signals may be a stress-
time signal or a peak-picked signal.
11.2.2 Operation
Select:
Gauge Fatigue >> Uniaxial SN Curve Analysis from Time Histories…
This displays the following dialogue:
Goodman mean stress correction or no mean stress correction can be specified, and a stress concentration factor
and analysis range can be entered.
11.2.3 Output
The following results are created:
The cycles and damage histograms are cycle range-mean histograms, in the same units as the signal, 32 bins x 32
bins, scaled to include all cycles. The cycle histogram may be used as an input file for the programs which provide
fatigue analysis of cycle histograms.
The time-correlated damage file gives an indication of where in time the fatigue damage occurs.
The fast-plot signal file contains 2048 data points which provide the same plot display (if not zoomed) as the full
signal file.
The program calculates fatigue endurance using the stress-life curve. Each endurance is obtained by linear
interpolation of the log stress amplitude and log endurance values.
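The log-log interpolation can be sketched as follows (a simplified illustration using the example material's two-point curve; fe-safe's multi-point handling and extrapolation warnings are not reproduced):

```python
import math

def sn_life(stress_amp, s_values, n_values):
    """Endurance from an S-N curve by linear interpolation of log(Sa) vs log(N),
    using the pair of data points that bracket the stress amplitude."""
    logs = [math.log10(s) for s in s_values]
    logn = [math.log10(n) for n in n_values]
    for i in range(len(logs) - 1):
        lo, hi = sorted((logs[i], logs[i + 1]))
        if lo <= math.log10(stress_amp) <= hi:
            frac = (math.log10(stress_amp) - logs[i]) / (logs[i + 1] - logs[i])
            return 10 ** (logn[i] + frac * (logn[i + 1] - logn[i]))
    raise ValueError("stress amplitude outside the S-N data (extrapolation needed)")
```

With the example material's curve (363.0 MPa at 1e4 cycles, 188.3 MPa at 1e7 cycles), a stress amplitude of 363.0 MPa returns an endurance of 1e4 cycles.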
If a stress concentration factor not equal to 1.0 is being applied, the program uses a relationship defined by
Peterson:
Kfn = 1 + (Kf - 1)/(0.915 + 200/(log N)^4)
where
Kf is the stress concentration at 10^7 cycles
Kfn is the value of Kf at endurance N cycles
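Evaluated directly, the relationship reads as below (a sketch; the base-10 logarithm is assumed, which makes Kfn approach Kf near 10^7 cycles):

```python
import math

def kfn(kf, n_cycles):
    """Peterson's relationship as given above: the stress concentration factor
    Kf (defined at 1e7 cycles) adjusted to its value at endurance N."""
    return 1.0 + (kf - 1.0) / (0.915 + 200.0 / math.log10(n_cycles) ** 4)
```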
This gives a linear relationship between a range at a given mean stress, and the range at zero mean stress that
would give the same endurance. For compressive mean stresses the Goodman line has been extended with half
the slope of the original line.
The value of the material UTS, ft, is read from the materials database.
Although the Goodman correction is defined for stress, the program does allow the user to use any other
measured parameter, providing that an appropriate equivalent of ft can be obtained.
If a strain history has been measured, and an S-N curve with a stress-based Goodman mean stress correction is
required, the strain history should be converted to stress. Care should be taken to ensure that, if a linear
conversion is being used, the values do not exceed the elastic limit.
Fatigue lives are calculated using Miner's rule: each cycle contributes damage = 1/Nf, and failure occurs when the total damage sums to 1.0.
Fatigue failure is to be interpreted using the same criteria as were used to define the endurance values on the S-N
curve. If these were lives to crack initiation, then the life calculated by the program will be a calculated life to crack
initiation. If they were lives to component failure, then the life calculated by the program will be a calculated life to
component failure.
Time-correlated fatigue damage (upper graph) with the loading history (lower graph)
11.3.1 Function
Calculates fatigue lives from a Rainflow cycles histogram, using the stress-life (S-N) curve.
11.3.2 Operation
Select:
Gauge Fatigue >> Uniaxial SN Curve Analysis from Histograms…
Goodman mean stress correction or no mean stress correction can be specified, and a stress concentration factor
can be entered.
11.3.3 Output
The screen display shows:
the number of cycles in the histogram;
the fatigue life as repeats of the histogram.
The following results are created:
the name of the cycle histogram;
the name of the stress-life curve;
the stress concentration factor;
the name of the results file;
the number of cycles in the histogram;
the fatigue life as repeats of the histogram.
The fatigue damage is calculated using the mean value of stress range and mean value of mean stress from each
bin of the histogram. This is written to the output damage histogram, extension .dah.
11.4.1 Function
Calculates fatigue lives from a time history, using the stress-life relationships defined in BS5400 part10:1980 for
welded joints. Input signals may be a strain-time or stress-time signal or a peak-picked signal. BS5400 allows use
of histories measured using a strain-gauge rosette. A sensitivity analysis can also be performed.
11.4.2 Operation
Select:
Gauge Fatigue >> BS5400 Welded Joints from Time Histories…
The analysis definition can be configured, including scale sensitivity analysis parameters if required.
11.4.3 Output
The screen display shows:
the number of cycles in the signal
the fatigue life as repeats of the signal (mean life)
the fatigue life as repeats of the signal (design criteria)
The cycles and damage histograms are cycle range-mean histograms, in the same units as the signal, 32 bins x 32
bins, scaled to include all cycles. The cycle histogram may be used as an input file for the programs which provide
fatigue analysis of cycle histograms.
The time-correlated damage file gives an indication of whereabouts in time the fatigue damage occurs.
The fast-plot signal file contains 2048 data points which provide the same plot display (if not zoomed) as the full
signal file.
damage = 1/Nf    for each cycle

and that fatigue failure occurs when total damage = 1.0, so that the life in repeats of the signal is

life = 1.0 / Σ (1/Nf)
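As a minimal sketch (in Python, with illustrative endurance values), the damage summation above is:

```python
def life_in_repeats(cycle_endurances):
    """Miner's rule: each cycle contributes damage 1/Nf; the life in
    repeats of the signal is the reciprocal of the summed damage."""
    damage = sum(1.0 / nf for nf in cycle_endurances)
    return float('inf') if damage == 0.0 else 1.0 / damage

# Three cycles extracted from one repeat of the signal:
print(life_in_repeats([1e5, 2e6, 5e7]))  # roughly 9.5e4 repeats
```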
Fatigue lives are calculated for two criteria - the mean life defined by the S-N curve, and the curve corrected to the
specified design criteria. The fatigue data in BS5400 normally allows for the stress concentration produced at the
weld, and so the stress concentration factor Kt used in the analysis will normally be 1.0. Some component
geometry details or other factors may produce an additional stress concentration at the weld, in which case a
factor greater than 1 should be used. The stresses are multiplied by the value of the stress concentration that is
entered.
The weld classification is defined by a letter: B, C, D, E, F, F2 or G (see section 11 of the Fatigue Theory Reference Manual for details of the weld classification procedure).
The design criterion is defined as the number of standard deviations from the mean life; any value will be accepted by the program.
11.5.1 Function
Calculates fatigue lives from a Rainflow cycle histogram, using the fatigue life data for welded joints in BS5400 Part
10:1980.
11.5.2 Operation
Select:
Gauge Fatigue >> BS5400 Welded Joints from Histograms…
11.5.3 Output
The screen display shows:
the number of cycles in the histogram
the fatigue life as repeats of the histogram (upper estimate)
the fatigue life as repeats of the histogram (lower estimate)
Two fatigue lives are calculated. The first uses the largest strain range represented by each bin in the histogram,
and provides the most conservative life estimate. This is written to the output damage histogram, extension .dhi.
The second estimate uses the smallest strain range represented by each bin in the histogram, and provides the
least conservative life estimate. This is written to the output damage histogram, extension .dlo.
Strains are converted to stresses using the elastic relationship S = E·e.
11.6.1 Function
The program takes 3 channels of strain gauge rosette data and calculates the principal strains or stresses and the
angle between the first strain gauge and the first principal strain or stress. Output is 4-channels of data. For strain
output, the 4th channel contains the principal strain of numerically largest magnitude. For stress output, the 4th
channel contains the value of the principal stress within ±45° of the first strain gauge arm. This stress can be used
as input to the welded joint fatigue programs. The principal values and angles can be plotted or cross-plotted in
fe-safe (see section 7).
11.6.2 Operation
Select three channels from the Loaded Data Files window. Then select:
Select the Rosette Angle (45 or 120 degrees) and the output type, either Principal Strains or Principal Stresses. If
Principal Stresses is selected, both Young's Modulus and Poisson's Ratio can be defined.
11.6.3 Output
Four output files with extension .DAC, containing:
channel 1: the maximum principal strain or stress
channel 2: the minimum principal strain or stress
channel 3: the angle between the maximum principal strain and the first arm of the strain gauge (positive
anti-clockwise)
channel 4: for strain output, the numerically largest value of strain channels 1 and 2
channel 4: for stress output, the value of the principal stress within ±45° of the first strain gauge arm.
For a 45° (rectangular) rosette with arms A, B and C:

ε1 = (εA + εC)/2 + (1/2)·√[ (εA − εC)² + (2εB − εA − εC)² ]

ε2 = (εA + εC)/2 − (1/2)·√[ (εA − εC)² + (2εB − εA − εC)² ]

tan 2θ = (2εB − εA − εC) / (εA − εC)

For a 120° (delta) rosette:

ε1 = (εA + εB + εC)/3 + (√2/3)·√[ (εA − εB)² + (εB − εC)² + (εC − εA)² ]

ε2 = (εA + εB + εC)/3 − (√2/3)·√[ (εA − εB)² + (εB − εC)² + (εC − εA)² ]

tan 2θ = √3·(εC − εB) / (2εA − εB − εC)
The principal stresses are calculated from the principal strains using

σ1 = E/(1 − ν²) · (ε1 + ν·ε2)

σ2 = E/(1 − ν²) · (ε2 + ν·ε1)
BS5400 analysis of welded joints allows input of multiaxial stresses, and recommends using the largest value of
principal stress which is within ±45° of a line perpendicular to the weld. The Strain Gauge Rosette Analysis
module can calculate this value, assuming that εA is the strain gauge arm perpendicular to the weld.
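The rosette equations above can be sketched in code. This is an illustrative implementation of the 45° (rectangular) rosette formulas and the biaxial stress conversion; the E and ν values are assumptions, not database values:

```python
import math

def rectangular_rosette_principals(eA, eB, eC):
    """Principal strains and angle for a 45-degree (rectangular) rosette.

    Returns (e1, e2, theta_deg), where theta is the angle from arm A to
    the first principal strain, positive anti-clockwise.
    """
    mean = 0.5 * (eA + eC)
    radius = 0.5 * math.sqrt((eA - eC) ** 2 + (2.0 * eB - eA - eC) ** 2)
    theta = 0.5 * math.degrees(math.atan2(2.0 * eB - eA - eC, eA - eC))
    return mean + radius, mean - radius, theta

def principal_stresses(e1, e2, E=209000.0, nu=0.3):
    """Biaxial Hooke's law at a free surface; stresses in the same
    units as E (here MPa), strains as absolute strain."""
    c = E / (1.0 - nu ** 2)
    return c * (e1 + nu * e2), c * (e2 + nu * e1)
```

For example, a pure shear state eA = +1000 µE, eB = 0, eC = −1000 µE gives principal strains of ±1000 µE at 0° to arm A.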
12 Fatigue analysis from measured signals [2]: uniaxial strain-life and multiaxial methods
This section discusses the strain-life methods for evaluating fatigue life from measured strains. See the Fatigue
Theory Reference Manual, section 2, for the technical background to strain-life fatigue analysis; note that the
sub-section on cast iron does not apply.
The strain-life relationships are:

Strain-life:  Δε/2 = (σ′f/E)·(2Nf)^b + ε′f·(2Nf)^c

Smith-Watson-Topper:  σmax·Δε/2 = ((σ′f)²/E)·(2Nf)^2b + σ′f·ε′f·(2Nf)^(b+c)

Morrow:  Δε/2 = ((σ′f − σm)/E)·(2Nf)^b + ε′f·(2Nf)^c

where Δε is the strain range for the cycle.
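The strain-life relation can be evaluated, or inverted to find the endurance 2Nf for a given strain amplitude, as in this sketch (the σ′f, b, ε′f and c values are illustrative, not fe-safe database values):

```python
def strain_amplitude(two_nf, E=209000.0, sf=917.0, b=-0.095, ef=0.26, c=-0.47):
    """Strain-life: de/2 = (sf'/E)(2Nf)^b + ef'(2Nf)^c."""
    return (sf / E) * two_nf ** b + ef * two_nf ** c

def endurance(target_amp, lo=1.0, hi=1e9):
    """Solve strain_amplitude(2Nf) = target_amp by bisection; the
    right-hand side decreases monotonically with 2Nf since b, c < 0."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5          # bisect in log space
        if strain_amplitude(mid) > target_amp:
            lo = mid
        else:
            hi = mid
    return lo

two_nf = endurance(0.002)   # reversals to failure at +/-2000 microstrain
```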
Local strains are calculated from the nominal strains using Neuber's rule and the stress concentration factor Kt:

Δσ·Δε = Kt²·ΔS·Δe

where Δε is the local strain range, Δσ is the local stress range, Δe is the nominal strain range and ΔS is the nominal stress range.

The cyclic stress-strain curve is

ε = σ/E + (σ/K′)^(1/n′)

and the hysteresis loop stress-strain curve is

Δε = Δσ/E + 2·(Δσ/2K′)^(1/n′)
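Neuber's rule combined with the cyclic stress-strain curve gives one nonlinear equation for the local stress, which can be solved numerically. A bisection sketch, with illustrative K′ and n′ values:

```python
def cyclic_strain(sigma, E=209000.0, K=1230.0, n=0.161):
    """Cyclic stress-strain curve: e = s/E + (s/K')^(1/n')."""
    return sigma / E + (sigma / K) ** (1.0 / n)

def neuber_local(S, e, Kt=1.0, E=209000.0):
    """Solve sigma * eps = Kt^2 * S * e, with eps on the cyclic curve.
    The product sigma * cyclic_strain(sigma) increases monotonically,
    so simple bisection converges."""
    target = Kt * Kt * S * e
    lo, hi = 1e-6, 10.0 * Kt * S + 1000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * cyclic_strain(mid) < target:
            lo = mid
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return sigma, cyclic_strain(sigma)

# Nominal elastic stress 400 MPa (so e = S/E), with Kt = 2:
sigma, eps = neuber_local(400.0, 400.0 / 209000.0, Kt=2.0)
```

Note that the local stress comes out well below the elastic value Kt·S, which is the purpose of the correction.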
Fatigue lives are calculated using Miner's rule, that for each cycle damage = 1/Nf, and that fatigue crack initiation occurs when total damage = 1.0.
12.3.1 Function
Calculates fatigue lives from a micro-strain-time history, using either the uniaxial strain-life relationship or the
uniaxial Smith-Watson-Topper life relationship. The sensitivity of the analysis to stress concentration and signal
scale factor can also be calculated. Input signals may be a strain-time signal or a peak-picked strain history.
12.3.2 Operation
Select: Gauge Fatigue >> Uniaxial Strain Life from Time Histories…
This displays the following dialogue:
Figure 12.3.2-1
The analysis definition can be configured, including the scale sensitivity analysis parameters if required.
12.3.3 Output
The screen display shows:
the fatigue life as repeats of the signal
the number of cycles in the signal
the name of the signal
[Figure: example results - Rainflow cycle and damage histograms (Range: uE vs Mean: uE), the stress (MPa) and strain (uE) time histories with the time-correlated damage history, and a sensitivity plot of mean life against scale factor.]
12.4.1 Function
Calculates fatigue lives from a micro-strain Rainflow cycles histogram, using the strain-life relationship. Analysis
can use a Smith-Watson-Topper, Morrow or no mean stress correction.
12.4.2 Operation
Select: Gauge Fatigue >> Uniaxial Strain Life from Histograms…
This displays the following dialogue:
Figure 12.4.2-1
12.4.3 Output
The screen display shows:
the most conservative and least conservative estimates of the fatigue life as repeats of the histogram;
the number of cycles in the histogram;
the name of the signal.
12.5.1 Function
For a peak/valley pair of nominal strains and an optional stress concentration factor, the program calculates the
local stress and strain for the peak and valley, and the endurance of the cycle using the strain-life and Smith-
Watson-Topper relationships.
12.5.2 Operation
Select:
Gauge Fatigue >> ‘Quick Look’ Strain Life…
Figure 12.5.2-1
12.5.3 Output
The results are displayed in the Results area at the bottom of the dialogue box.
12.6.1 Function
Converts a time history of local strains measured on one material, into the equivalent local strain history for another
material. Input signals may be a strain-time signal or a peak-picked strain history. Strain histories measured using a
strain-gauge rosette should not be analysed by this program. This operation is essential if local strains have been
measured in a notch and fatigue lives are required for the same geometry in a different material. It would be
prudent to use this program with similar types of material, for example two steels or two aluminium alloys, rather
than with two very dissimilar materials.
12.6.2 Operation
Highlight a time history signal in the Loaded Data Files window, then select:
Gauge Fatigue >> Uniaxial Local Strain Material Conversion…
Figure 12.6.2-1
Select a source material and target material (both must be from the current database).
12.6.3 Output
A time history file containing the converted signal.
The measured strains are converted to nominal stresses and strains using Neuber's rule, implemented as

σ·ε = σe·εe

where ε, σ are the measured strain and associated stress in the first material, and εe, σe are the strain and stress in an elastic material (the 'nominal stress and strain').

Local strains for the second material are then calculated from the nominal stresses and strains using Neuber's rule, implemented as

σ·ε = σe·εe

where ε, σ are now the strain and associated stress in the new material.
The program first searches for the absolute maximum value in the selected section of the signal (positive or
negative). This data point is converted into nominal stress/strain using Neuber's rule and the cyclic stress-strain
curve for material 1, and then converted into stress and strain for material 2.
The program then takes each data point and checks if it has closed a cycle. If not, the data point is converted into
nominal stress/strain using the hysteresis loop curve for material 1, and then into local stress/strain using the
hysteresis loop for material 2.
If a cycle has been closed, material memory is used to position the data point on a new hysteresis loop, and the
nominal strain calculated.
At the end of the selected section of the signal, the program returns to the start point of the section, and carries on
the conversion until the absolute maximum data point is reached.
See the Fatigue Theory Reference Manual, section 2.9.4 (and particularly Figure 2.43 in the Fatigue Theory
Reference Manual) for further details of this method.
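The conversion of a single data point (here the absolute maximum) can be sketched as below; the material constants are illustrative, and the full program additionally tracks hysteresis loops and material memory as described above:

```python
def cyclic_strain(sigma, E, K, n):
    """Cyclic stress-strain curve: e = s/E + (s/K')^(1/n')."""
    return sigma / E + (sigma / K) ** (1.0 / n)

def stress_from_strain(eps, E, K, n):
    """Invert the cyclic curve by bisection (strain is monotonic in stress)."""
    lo, hi = 0.0, E * eps
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cyclic_strain(mid, E, K, n) < eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def convert_point(eps1, mat1, mat2):
    """Convert a measured local strain in material 1 into the equivalent
    local strain in material 2, keeping the Neuber product sigma * eps
    (equal to the nominal elastic product sigma_e * eps_e) constant."""
    product = stress_from_strain(eps1, *mat1) * eps1
    lo, hi = 1e-9, 10.0 * eps1 + 0.1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if stress_from_strain(mid, *mat2) * mid < product:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

steel_1 = (209000.0, 1230.0, 0.161)   # illustrative (E, K', n')
steel_2 = (205000.0, 1000.0, 0.150)
eps2 = convert_point(0.004, steel_1, steel_2)
```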
12.7.1 Function
Calculates fatigue lives from 3 channels of strain or micro-strain strain gauge rosette data. The available algorithms
are normal strain or Brown Miller, with the Morrow or user-defined mean stress corrections, and the stress-life
algorithm for S-N curves with the Goodman, Gerber or user-defined mean stress corrections.
12.7.2 Operation
Select three channels from the Loaded Data Files window. Then select:
Figure 12.7.2-1
In the Gauges Definition group select the units used in the time histories.
Select the required outputs in the Output Options tab. The x-axis of histogram plots can be either the mean of the
damage parameter or the mean stress (see Figure 12.7.3-3).
Select the desired algorithm by clicking on the User algorithm browse button, which displays the following menu:
Figure 12.7.2-2
Details on Normal Strain, Brown Miller and Normal Stress analyses can be found in sections 14.14, 14.16 and 14.7
respectively. Note that in this module the Normal Stress algorithm uses S-N curve data to evaluate fatigue life.
If a user-defined mean stress correction is chosen, the User Defined Mean Stress Correction browse button can be
used to select a file. See section 14.9 for an explanation and Appendix E for the file syntax. For all other mean
stress corrections, see the sections for the main algorithms noted above.
Press the Surface Finish Definition browse button to select a surface finish. This is the same dialogue as
described in 5.6.5.
12.7.3 Output
The screen display shows:
the sources for the time history inputs;
the critical plane angle at which the most damage occurs. This is measured from the first input channel;
the number of cycles on the critical plane;
the life on the critical plane. This is the number of repeats of the time histories.
An example:
Signal name (0 deg) : C:\safeResultsArchive\test49.dac, 1
Signal name (45 deg) : C:\safeResultsArchive\test49.dac, 2
Signal name (90 deg) : C:\safeResultsArchive\test49.dac, 3
Analysis type : Principle Strain Life Analysis from Strain Gauge Histories
Material : SAE_950C-Manten
(from P:\data\safe4test\test.dbase)
Analysis start time : Signal Start
Analysis end time : Signal End
Stress concentration : 1
For a Brown Miller analysis, two damage and two cycle histograms will be produced. They will contain the direct and
shear strains on the critical plane (see Figure 12.7.3-1). There will be three angle and three time plots, one each for
the 1-2, 2-3 and 1-3 planes (see Figure 12.7.3-2).
Figure 12.7.3-1 Brown Miller cycle and damage histograms for direct (left) and shear (right).
Figure 12.7.3-2 Brown Miller damage vs. angle overlay of all 3 planes.
The x-axis of histograms is either the mean normal stress of the cycle (in MPa) or the mean of the damage
parameter (in units of MPa for the Normal Stress analysis and in units of micro-strain for the Normal Strain and
Brown Miller analyses) as shown in Figure 12.7.3-3.
Figure 12.7.3-3 Cycle histograms from the same source. Left with mean damage parameter, right with mean stress.
To form the time-correlated damage file, as each cycle is closed, the times for the three points which form the cycle
are used to position the fatigue damage in time. Half the damage for the cycle is presumed to occur mid-way
between the first two points, and the other half of the damage is presumed to occur mid-way between the 2nd and
3rd points. The damage is added to any previously calculated damage at these points.
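The placement of each closed cycle's damage in time can be sketched as (a hypothetical helper, not fe-safe code):

```python
def add_cycle_damage(damage_history, times, t1, t2, t3, cycle_damage):
    """Split a closed cycle's damage between the midpoints of its three
    defining turning points, accumulating into the nearest samples."""
    for t in (0.5 * (t1 + t2), 0.5 * (t2 + t3)):
        # index of the sample nearest to the midpoint time
        i = min(range(len(times)), key=lambda k: abs(times[k] - t))
        damage_history[i] += 0.5 * cycle_damage

times = [0.0, 1.0, 2.0, 3.0, 4.0]
damage = [0.0] * len(times)
add_cycle_damage(damage, times, 0.0, 2.0, 4.0, 1e-4)
# half the damage lands near t=1.0, the other half near t=3.0
```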
Volume 1 12-14 Copyright © 2023 Dassault Systemes Simulia Corp.
Vol. 1 Section 12 Issue: 24.1 Date: 17.08.23
Fatigue analysis from measured signals [2] : strain-life methods
[Figure: example strain-time histories for the 0, 45 and 90 degree gauge arms, and the resulting time-correlated damage history.]
The FEA stresses are scaled by the applied loading. In practice the stresses could be calculated for any value of
applied load. However, two often-used conditions are:
(a) the FEA stresses are calculated for a unit load, and the loading history contains load values.
(b) the FEA stresses are calculated for the maximum load, and the load history represents each load as a
proportion of the maximum load.
The fatigue life is calculated as a number of repeats of the defined loading. Optionally this may be converted into
user-defined units (miles, hours, etc.) – see section 13.2.
For example, a vehicle engine may have been analysed to provide FE results at 5° intervals of crank angle, through
three revolutions of the crankshaft. The stresses will be contained in 216 stress datasets. These 216 sets of results
can be chained together in sequence, and fe-safe can analyse the sequence of datasets to calculate the fatigue life
at each node.
The stress datasets can be applied in any order; can occur more than once in the sequence; and can have scale
factors applied to them.
Complex sequences of stresses can be built up by superimposing load history (scale-and-combine) loadings and
dataset sequences, providing that the sampling frequencies are the same. Additional scale factors, repeat counts
and multiple sequences can also be incorporated.
Dataset sequence loading can be defined directly (in the user interface), or using the load definition (LDF) file
(section 13.9).
If the loading units are repeats, then the loading is always equivalent to one repeat of the complete loading.
If any other unit is specified, for example miles, then the fatigue loading can be equivalent to any number in that
unit. For example, the above dialogue is defining one repeat of the fatigue loading cycle to be equivalent to 1000
miles.
This setting applies to all lives, including the Factor of Strength (FOS) design life, Probability of Failure target life
and Traffic Light Export life range thresholds.
Technique 1 - Using original time histories in the fatigue loading definition.

Process: For each node, time histories of the principal stresses are calculated from the original histories. The automatic peak-picking routine (part of the fatigue algorithm) extracts fatigue cycles from the time histories of (for example) the shear strain on the plane and the normal stress on the plane.

Gating: In the fatigue analysis, small cycles are removed using an automatically set cycle-omission gate.

Advantages: The preferred method of analysis. No risk of missing peaks or valleys due to the orientation of the principals.

Disadvantages: Slower.

Application: Very much the preferred method of analysis, and the default fe-safe setting.

Technique 2 - Using histories that have been individually peak-valley picked in the fatigue loading definition.

Process: For the whole model, each original history is pre-processed individually using the peak-picking function to extract fatigue cycles. For each node, time histories of the principal stresses are calculated from the peak-picked histories, and the automatic peak-picking routine (part of the fatigue algorithm) extracts fatigue cycles from the time histories of (for example) the shear strain on the plane and the normal stress on the plane.

Gating: Small cycles are removed from the pre-processed history using a user-defined cycle-omission gate level. The level of gating affects the length of the pre-processed history, which has an impact on the speed of the fatigue analysis. In the fatigue algorithm, small cycles are removed using an automatically set cycle-omission gate.

Advantages: The fastest method.

Disadvantages: For multi-channel histories, the phase relationship between channels may be lost. If the cycle-omission gate level is set too high, damaging cycles may be missed. Assumes peaks and valleys in the damage parameter will coincide with peaks and valleys in the loading history.

Application: Should only be used for a single channel of loading. The 'gate' should be chosen to ensure all damaging cycles are retained.

Technique 3 - Using multi-channel histories that have been peak-valley picked.

Process: For the whole model, the original histories are pre-processed using the multi-channel peak-picking function to extract peaks and valleys, whilst maintaining the phase relationship between the cycles. For each node, time histories of the principal stresses are calculated from the peak-picked histories, and the automatic peak-picking routine extracts fatigue cycles as for technique 1.

Gating: As for technique 2, small cycles are removed from the pre-processed history using a user-defined cycle-omission gate level, and in the fatigue algorithm small cycles are removed using an automatically set cycle-omission gate.

Advantages: The phase relationship between the channels is maintained.

Disadvantages: Multi-channel peak-picked histories are longer than histories that are peak-picked individually, since additional points are inserted to maintain the phase relationship between channels.

Application: This method may be used to obtain 'quick-look' results for multi-channel histories, but fatigue hot-spots may be missed and fatigue lives may be in error.
may be imported. In fe-safe the stress data from each step may be listed as datasets 1 to 5, and the strain data
from each step may be listed as datasets 6 to 10, so the matching pairs would be 1 and 6, 2 and 7, etc.
Figure 13.7-1
The current loading configuration is summarised in the Loading Details... tree control.
Figure 13.8.1-1
To delete a block select a block or one of its child items and then select the context menu option Delete Block. If a
high frequency block is selected (or one of its child items) then only the high frequency block will be deleted.
Figure 13.8.2-1
When a high frequency block or one of its child items is selected the context menu option Delete Block can be used
to delete the high frequency block only.
Figure 13.8.3-1
If the selected dataset already has a loading it will be replaced, unless the loading is user defined, in which case the
loading will have to be deleted first. If multiple load histories are selected, the selected dataset list will be duplicated
for each loading.
When a load history is selected the context menu option Delete History will delete the load history from the loading
definition.
Models window will be added to the current block. If no block is selected a block will be appended with the dataset
and loading. The history editing Dataset Embedded Load History dialogue will be shown as in Figure 13.8.4-1.
Figure 13.8.4-1
If the selected dataset already has a loading a prompt to replace the loading will be displayed.
When a user loading is selected the context menu option Delete History will delete the loading from the loading
definition.
Figure 13.8.5-1
When a time history is selected, the context menu option Delete History will delete the time history from the loading
definition.
Figure 13.8.6-1
When a user time history is selected the context menu option Delete History will delete the user time history from
the loading definition.
Figure 13.8.7-1
If a dataset list (the target) was selected in the Fatigue from FEA dialogue then the source dataset will be added in
different ways:
If the source and target are of the same type, the source dataset is added to the target list, e.g. adding
stress dataset 6 to stress datasets ‘1-4, 7-9’ will give ‘1-4, 6-9’.
If the source and target dataset types can be paired (i.e. stress with strain, real with imaginary stress) then
the source dataset is added to the pair dataset list of the correct type, or one is created if there is not one
present. Figure 13.8.7-2 shows the loading in Figure 13.8.7-1 after a stress dataset is added.
Figure 13.8.7-2
As noted above at the start of section 13.8, dataset lists can be edited by double-clicking the item or pressing F2.
When editing dataset sequences in this manner, a continuous list of datasets can be specified with a hyphen, e.g.
datasets 1 through 10 would be ‘1-10’. A list of datasets incrementing or decrementing by a fixed amount can be
specified by adding the increment in parentheses after the end dataset number, e.g. datasets 1, 4, 7 and 10 would
be ‘1-10(3)’.
Note: Even if the increment of the sequence would not include the last dataset specified, it is always included e.g.
Datasets from 20 decreasing by 3 have the sequence 20, 17, 14, 11, 8, 5 and 2, but 20-1(3) would produce the
sequence 20, 17, 14, 11, 8, 5, 2 and 1.
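The dataset-list syntax described above can be sketched as a small parser (a hypothetical helper, not part of fe-safe):

```python
import re

def expand_datasets(spec):
    """Expand a dataset list such as '1-10' or '20-1(3)' into explicit
    dataset numbers. The end dataset is always included, even when the
    increment would step past it."""
    m = re.fullmatch(r'(\d+)-(\d+)(?:\((\d+)\))?', spec.strip())
    if not m:
        return [int(spec)]               # a single dataset number
    start, end, step = int(m.group(1)), int(m.group(2)), int(m.group(3) or 1)
    if start <= end:
        out = list(range(start, end + 1, step))
    else:
        out = list(range(start, end - 1, -step))
    if out[-1] != end:                   # end dataset is always included
        out.append(end)
    return out

print(expand_datasets('1-10(3)'))        # [1, 4, 7, 10]
print(expand_datasets('20-1(3)'))        # [20, 17, 14, 11, 8, 5, 2, 1]
```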
When a dataset is selected the context menu option Delete Dataset will delete the dataset list from the loading
definition. A message box will ask if any associated datasets (e.g. strains) should also be deleted.
Figure 13.8.9-1
If the selected dataset already has a loading, embedded histories are edited while normal histories cause a
message box to be displayed asking if the loading should be replaced.
Figure 13.8.11-1
When a residual dataset is selected the context menu option Delete Dataset will delete both residual datasets from
the loading definition.
Figure 13.8.12-1
Successful editing of an embedded history causes the history to behave like a normal user defined history and no
longer shares the data.
Figure 13.8.14-1
To open a .LDF file select File >> Loadings >> Open FEA Loadings File... or alternatively select the Open
Loadings... from the loading context menu. Then select a file from the Open a Loading Definition File (*.ldf) dialogue
and click Open.
To save the loading to the current profile select the Save to Profile option from the loading context menu or File >>
Loadings >> Save FEA Loadings to Current Profile.
The load definition (LDF) file is a versatile file structure that can be used to define simple and complex loading
situations. In its simplest form, the LDF file can define a constant amplitude loading block. Complex loadings can be
defined as a series of loading blocks.
Each loading block can define a dataset sequence and a set of scale and combine operations between stress
datasets and their associated load histories.
For a sequence of blocks, fatigue cycles resulting from the transitions between blocks can also be included (see
13.9.8).
The LDF format also supports:
superimposition of high frequency loading cycles onto any block;
analysis of datasets from an elastic-plastic FE analysis;
temperature variation - for use in conventional high temperature fatigue analysis (see section 18);
In all cases, the index used to reference stress and strain datasets is the one displayed in fe-safe, (see 13.5,
above).
The loading is defined using a combination of the following definitions:
the BLOCK definition;
the dataset sequence definition;
the load history (scale-and-combine) definition;
the high frequency loading definition;
the temperature variation definition;
the time definition.
The block comments are the last set of consecutive comment lines prior to each BLOCK statement.
scale (position 2, default 1.0, optional) - Scale factor for the block. This is multiplied by the scale factor for any individual items in the block.

dt (position 3, default 0.0, optional) - Block length (the time, in seconds, that the block is equivalent to). This time is equivalent to the n repeats of the block, NOT 1 repeat of the block, as is shown in the Loading Settings window.

temp (position 4, default -300, optional) - Temperature of the block, in °C. If a value of less than -273 is specified, the temperature data will be extracted from the FE model.
This definition is used to combine a stress dataset with the time history of a loading to create a LOAD*DATA set.
Multiple LOAD*DATA sets can be defined and may be added to the loading definition in any order, since they are
combined by superimposing (adding) the time histories of the stress tensors at each point in time, to produce a
history of the stresses for the combined loading.
Load histories can be imported from any supported file format – see section 13.3.
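The superimposition of LOAD*DATA sets amounts to scaling each dataset value by its load history and summing sample by sample. A minimal sketch for a single stress tensor component at one node:

```python
def combined_history(datasets, load_histories):
    """Superimpose LOAD*DATA sets: each dataset value (e.g. one stress
    tensor component at a node, for a unit load) is scaled by its load
    history and the contributions are summed at each point in time."""
    n = len(load_histories[0])
    assert all(len(lh) == n for lh in load_histories)
    return [sum(ds * lh[i] for ds, lh in zip(datasets, load_histories))
            for i in range(n)]

# Two unit-load stress results (MPa) combined with two load histories:
sxx = combined_history([120.0, -40.0],
                       [[0.0, 1.0, 0.5], [1.0, 0.0, 0.5]])
# sxx == [-40.0, 120.0, 40.0]
```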
Example 1:

# Each load history has ten samples
BLOCK n=100, scale=1.0
lh=/data/test.txt, signum=1, ds=4
lh=/data/test.txt, signum=1, ds=4
lh=/data/test.txt, signum=1, ds=4
lh=/data/test.txt, signum=1, ds=4
lhtime=0 5 7 9 10 11 25 27 30 31
END

Example 2:

# Each load history has ten samples
BLOCK n=100, scale=1.0
lh=/data/test.txt, signum=1, ds=4
lh=/data/test.txt, signum=1, ds=4
lh=/data/test.txt, signum=1, ds=4
lh=/data/test.txt, signum=1, ds=4
lhtime=/data/test.txt, signum=1
END

In Example 2, column 1 of test.txt defines the times for each sample in the time history loading, in seconds, for example: 0, 5, 7, 9, 10, 11, 25, 27, 30, 31.
Figure 13.9.5-1 shows the difference between defining the time for a block with the dt parameter and with the
lhtime parameter. The block in both cases is 20 seconds long. For the lhtime parameter the 5 datasets
are spaced at 4 second intervals.
Figure 13.9.5-1 SXX (MPa) against time, comparing dataset placement using dt with placement using lhtime.
Generally this difference has no effect on the fatigue analysis. It does become important, however, if a HFBLOCK
loading is used.
13.9.6 High frequency loading definition (superimposition of high frequency load blocks)
A block containing high frequency cycles can be superimposed on the defined loading in any block. Up to 20 high
frequency load blocks can be superimposed on each main block. Each high frequency block can be built up from
dataset sequences and load history scale-and-combine loads. The high frequency cycle is repeated from the start
of the block to the end of the block.
The definition statements HFBLOCK and HFEND are used to indicate the start and end of the high frequency block
definition.
The length of each high frequency block (HFBLOCK) is defined using the dt parameter. If a high frequency block is
used, the main block must also have its length defined using either the dt or lhtime parameters. The repeat
frequency of the high frequency block is a function of the main block time and the high frequency block time. The
amplitude of the loading is interpolated so that at each point in the main block and the high frequency block, a data
sample is evaluated.
The high frequency block can contain a dataset definition (see example 1, below) or a scale-and-combine definition
(see example 2, below). In both examples, the low frequency block lasts 100 seconds and the high frequency block
lasts 1 second so there are 100 repeats of the high frequency block.
The lhtime parameter does not override the dt parameter for high frequency blocks; this allows a time between
the last and first samples in the high frequency block to be defined. If lhtime is not defined, the samples within the
high frequency block are equally spaced at the times i × dt/nSamples, for i = 0 to nSamples - 1, where nSamples is
the length of the dataset sequence or load histories within the high frequency block.
It should be noted that fe-safe expands the high frequency blocks into a full loading definition for each node prior to
analysis. This is done within the computer’s memory. Hence the limitation that a very long block and a very short
high frequency block will require very large amounts of memory, in some cases far more than is available. For
example a block of 6 months and a high frequency block of 1000 rpm would require in excess of 20 Gbytes of
memory. This limitation should be considered when using the high frequency block facility.
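The quoted figure can be checked with rough arithmetic; the samples-per-cycle and bytes-per-sample values are assumptions made for the estimate:

```python
# Rough order-of-magnitude check of the expanded-loading memory estimate.
seconds = 6 * 30 * 24 * 3600          # main block: roughly 6 months
hf_period = 60.0 / 1000.0             # 1000 rpm -> 0.06 s per cycle
samples_per_cycle = 2                 # assumed: at least a peak and a valley
bytes_per_sample = 6 * 8              # assumed: 6 tensor components x 8 bytes

repeats = seconds / hf_period
memory_gb = repeats * samples_per_cycle * bytes_per_sample / 1e9
print(round(memory_gb))               # roughly 25 GB, i.e. "in excess of 20"
```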
Care should be taken in defining the time for the main block to achieve the required effect. If no data amplitude is
defined at t=0 in the main block (as for the lhtime example in figure 13.9.5-1) then the last amplitude in the block
is wrapped around to take the place of the missing start amplitude. This allows the high frequency amplitudes to be
superimposed upon an amplitude history over the complete loading time of the main block. Figure 13.9.6-1 shows
the same loading as figure 13.9.5-1 with and without a high frequency block superimposed.
2
SXX:MPa
-2
-3
0 5 10 15 20
Time:Secs
Figure 13.9.6-1.
Multi-block complex loading can be built up using this technique. If a section of the analysis contains long flat
plateaus with a high frequency content, these should be reduced to as short a time as possible with a repeat
factor. The two examples below will give identical fatigue lives, but the left-hand example would generate a tensor
history of 3511 samples while the right-hand one would generate only 3 samples; 3 samples will analyse much
more quickly than 3511.
An example of a multi-block loading simulating a number of flight missions is shown below. The left-hand side
shows the mission simulated correctly. The right-hand side shows what would happen, due to wrap-around, if the
samples were not defined at t=0 and t=dt:
Left-hand side (mission simulated correctly):

INIT
Transitions=YES
END
#################
BLOCK, n=1, dt=144
ds=1, scale=0.0
ds=1, scale=3.70
ds=1, scale=0.0
END
###################
BLOCK, n=1, dt=144
ds=1, scale=0.0
ds=1, scale=1.765
lhtime=0,144
HFBLOCK, dt=20
ds=1, lh=p:\data\hf.txt, signum=1
HFEND
END
###################
BLOCK, n=1
ds=1, scale=1.765
ds=1, scale=1.765
lhtime=0,144
HFBLOCK, dt=20
ds=1, lh=p:\data\hf.txt, signum=1
HFEND
END
###################
BLOCK, n=1
ds=1, scale=1.765
ds=1, scale=0.0
lhtime=0,90
HFBLOCK, dt=20
ds=1, lh=p:\data\hf.txt, signum=1
HFEND
END
###################
BLOCK, n=1
ds=1, scale=0.0
ds=1, scale=0.784
lhtime=0,90
HFBLOCK, dt=20
ds=1, lh=p:\data\hf.txt, signum=1
HFEND
END
###################
BLOCK, n=1755
ds=1, scale=0.784
ds=1, scale=0.784
lhtime=0,20
HFBLOCK, dt=20
ds=1, lh=p:\data\hf.txt, signum=1
HFEND
END
###################
BLOCK, n=1
ds=1, scale=0.784
ds=1, scale=0.517
ds=1, scale=2.086
ds=1, scale=0.0
lhtime=0,283,285.5,288
HFBLOCK, dt=20
ds=1, lh=p:\data\hf.txt, signum=1
HFEND
END

The right-hand side is identical except that each lhtime list starts at 1 instead of 0 (lhtime=1,144, lhtime=1,90, lhtime=1,20 and lhtime=1,283,285.5,288), so that no sample is defined at t=0.
The resulting plots of the Sxx transitions for a unit Sxx stress tensor are shown in figure 13.9.7-1. In the upper
plot the spikes at the block edges are caused by the wrap-around technique used when a main block sample is not
defined at t=0.
[Figure 13.9.7-1: two plots of SXX (MPa) for loading blocks #1 to #7; the upper plot shows the wrap-around spikes at the block edges.]
If the defined temperature history is shorter than the loading a warning will be written to the diagnostics log and the
last defined temperature will be used for all subsequent temperatures.
For high frequency blocks, the temperature across the block is not defined. Instead, it is calculated for each repeat
of the block from the temperature definition of the main blocks.
Note:
The definition of fatigue loading for varying temperature, as discussed in this section, is not required for
conventional high temperature fatigue.
INIT
Transitions=Yes
END
The settings block is normally placed at the beginning of the LDF file.
INIT
ds=1, es=2
END
Note that:
no elastic-plastic correction is applied to the residual tensors;
the residual stress is not relaxed for thermo-mechanical analyses;
the residual stress is not scaled during a Factor of Strength (FOS) analysis;
since the residuals are applied as an addition to the mean stress of the cycle, residuals will not be
‘washed-out’ by large cycles.
A diagnostics option is available (Export elastic-plastic residuals), which allows the resolved residual stresses to be
exported – see section 22.
Figure 13.10-1
In an LDF the use of the es keyword in a dataset sequence definition (see 13.9.3) turns off the elastic to
elastic-plastic correction function (i.e. the biaxial “Neuber Rule”) and treats the defined stress and strain datasets
as a stress-strain pair. For the above example:
(Desktop spreadsheet software can make entering long sequences much easier).
Scale factors must not be applied to elastic-plastic FEA results, unless they are used to convert non-standard stress
units to Pa, and strain units to m/m. See section 13.9.3 for the stress and strain scale factors.
Normal Strain, Brown Miller and Maximum Shear Strain analysis methods may be used with elastic-plastic FEA
results.
A range of datasets for both stresses and strains can be used to simplify the definition of the .ldf file.
13.13.1 Using the BLOCK definition, the dataset sequence definition and the load history (scale-and-combine)
definition types
Equivalent loading.
# Sample LDF file
# Block with dataset sequence and load history combines
# using definition parameters
Channel 1 to dataset 3
Channel 3 to dataset 4
Channel 5 to dataset 5
All the histories are to be applied without additional scaling, i.e. with scale factors equal to 1.0
The LDF file would be:

ds=3
ds=4
ds=5
If the three histories are in three separate files (say .dac files), the .ldf file will be
13.13.3 Three superimposed load histories with a repeat count specified, and two initial stress datasets.
In this example, the same load histories as in example 2 are applied. Now, two additional datasets 1 and 2 are to
be inserted at the beginning of the load history, and the section in brackets [ ] is repeated 100 times.
The fatigue life is calculated in repeats of this complete sequence, then optionally converted into user-defined units
(miles, hours, etc.).
Note:
The dtemp datasets need not have the same numbers as the corresponding stress datasets.
For all input files except UNV files, only the maximum temperature will be extracted.
For simple high temperature analysis the temperature datasets do not need to be specified. The analysis options
are used to select or de-select temperature effects.
Scale factors must not be used to re-scale elastic-plastic results. However, scale factors can be used to change
units. Stresses must be in Pascals, strain in units of m/m (not micro-strain).
The scale factor for stresses is defined by scale=, and the scale factor for strain is defined by escale=
For example, if the stresses are in MPa, and the strains in micro-strain :
[Plot of stress (MPa), temperature (deg.) and time (secs) against samples.]

In the following example, the high frequency block is defined by

HFBLOCK dt=0.5
ds=5-6, scale=0.1
ds=7, scale=-0.1
lhtime=0.0 0.2 0.3
HFEND

This block takes 0.5 seconds (dt=0.5). Three stress datasets are applied in sequence (ds=5-6 and ds=7). The times
at which these datasets occur are given in seconds,

lhtime=0.0 0.2 0.3

and in this example the times are unequally spaced.
lhtime=\myfiles\datafile.txt, signum=3
If the time values are equally spaced, only the length of time for the block need be specified.
HFBLOCK dt=0.5
ds=5-6, scale=0.1
ds=7, scale=-0.1
HFEND
The specification of the outer block follows the syntax described in examples 1 to 5. The parameter
lhtime= is used to specify the time values for each dataset.
fe-safe repeats the high frequency block the required number of times. In the above example, the high frequency
datasets would be applied at times of
0.0 0.2 0.3 0.5 0.7 0.8 1.0 1.2 1.3 and so on.
To superimpose these datasets on the low frequency block, the values in the low frequency block are interpolated
to give a value at each time in the high frequency block.
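The repeat-and-superimpose behaviour described above can be sketched in Python; hf_times and interp are illustrative helper functions (not part of fe-safe), reusing the dt=0.5, lhtime=0.0 0.2 0.3 example:

```python
# Sketch: how a high frequency block's sample times repeat across the main
# block, and how the low frequency history is interpolated at those times.

def hf_times(block_dt, hf_dt, lhtime):
    """Repeat the high frequency sample times every hf_dt across the block."""
    times = []
    for k in range(int(round(block_dt / hf_dt))):
        times.extend(k * hf_dt + t for t in lhtime)
    return times

def interp(t, xs, ys):
    """Piecewise-linear interpolation of the low frequency history."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= t <= x1:
            return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
    return ys[-1]

times = hf_times(block_dt=1.5, hf_dt=0.5, lhtime=[0.0, 0.2, 0.3])
# times -> 0.0, 0.2, 0.3, 0.5, 0.7, 0.8, 1.0, 1.2, 1.3 (as listed above)
```

Each low frequency value at those times would then be obtained with interp over the main block samples before the two are added.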
Note that this form of superimposition can produce very long analysis times. Users should experiment with small
groups of elements.
13.13.9 Example LDF file for thermomechanical fatigue analysis including a high frequency block
Consider a node in an FE Model with its stresses and temperatures calculated at 5 increments in time (0, 20, 50,
70, and 90 seconds) as shown below:
And assume that a unit load analysis provided a sixth load case with the stress tensor (1, 0, 0, 0, 0, 0).
To define a loading for the five time increments, and also superimpose the unit load dataset (sixth dataset) scaled
by a load of (0, 2, -2, 3), where the load history is repeated each second: then the LDF file would be:
where the file lhf1.txt would contain the following lines representing the loading applied to the sixth dataset:
0
2
-2
3
The stress (Sxx), temperature and time for the loading would be:
[Plot of Sxx stress (MPa), temperature (deg.) and time (secs) against samples.]
Δε/2 = (σf′/E)(2Nf)^b + εf′(2Nf)^c

Morrow:

Δε/2 = ((σf′ − σm)/E)(2Nf)^b + εf′(2Nf)^c

SWT:

σmax·(Δε/2) = ((σf′)²/E)(2Nf)^2b + σf′εf′(2Nf)^(b+c)

Walker:

Δε/2 = ((1 − R)/2)^(1−γ)·[(σf′/E)(2Nf)^b + εf′(2Nf)^c]
Although these strain-life algorithms are intended for uniaxial stress states, fe-safe uses multiaxial methods to
calculate elastic strains from elastic FEA stresses, and a multiaxial elastic-plastic correction to derive the strain
amplitudes and stress values used in these equations.
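As a rough illustration of how a strain-life equation is used once a strain amplitude is known, the following sketch solves the basic equation for the endurance by bisection; the material constants are invented placeholders, not values from any fe-safe database:

```python
import math

# Sketch (not fe-safe's implementation): evaluate the strain-life equation
#   ea = (sf'/E)(2Nf)^b + ef'(2Nf)^c
# and solve it for the endurance 2Nf by bisection in log space.
E, sf, b, ef, c = 200e9, 900e6, -0.095, 0.26, -0.47   # assumed constants

def strain_amplitude(two_nf):
    """Delta-epsilon/2 as a function of the reversals 2Nf."""
    return (sf / E) * two_nf ** b + ef * two_nf ** c

def solve_life(ea, lo=1.0, hi=1e9, iters=200):
    """strain_amplitude decreases monotonically in 2Nf, so bisect."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)      # geometric mean = log-space bisection
        if strain_amplitude(mid) > ea:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

two_nf = solve_life(0.002)   # reversals to failure at 0.2% strain amplitude
```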
When using the local strain materials data the life curve is defined by the equation:

Δσ/2 = σf′(2Nf)^b
and a multiaxial cyclic plasticity correction is used to convert the elastic FEA stresses to elastic-plastic stress-strain.
Otherwise the life curve is defined by the S-N values defined in the material database, and the plasticity correction
can be optionally performed depending on settings in Analysis Options dialogue [FEA Fatigue >> Analysis Options...],
Stress Analysis tab (see section 5).
Goodman, Gerber, Walker or no mean stress correction can be selected - see sections 14.3 and 14.4.
For the theoretical background to S-N curve analysis for uniaxial stresses, see the Fatigue Theory Reference Manual.
Where:
Δεr/2 is the effective strain amplitude at mean stress = 0
Δε/2 is the strain amplitude
Where 𝛾+ and 𝛾− can be fitted to 𝑅 ≥ 0 and 𝑅 < 0 data, respectively, instead of adopting a single exponent.
Notice that the Walker factor is undefined for R ≥ 1. Moreover, the factor tends to infinity as the stress ratio R
approaches 1.0. This may result in a non-realistic amplification of the cycle amplitude. A cutoff value between 0.0
and 1.0 can be defined to avoid this, but values much smaller than 1.0 may lead to non-conservative results. The
cutoff value can be set in the menu option FEA Fatigue >> Analysis Options... >> Algorithms tab >> Walker submenu.
In general, imposing a cutoff is not needed. However, in strain-life critical plane analyses, some candidate planes
may contain strain-cycles for which the stress amplitude is nearly zero. This situation can lead to a very large
correction factor, resulting in excessive damage.
It is instructive to plot the Walker factor ((1 − R)/2)^(γ−1) as a function of R in order to define a valid cutoff on a
case-by-case basis.
case basis. Figure 14.4-1 below shows the Walker factor with and without a cutoff, considering different values of 𝛾+
and 𝛾− .
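The cutoff behaviour can be sketched as follows; walker_factor is a hypothetical helper, and the gamma and cutoff values are arbitrary choices, not fe-safe defaults:

```python
# Sketch of the Walker factor ((1 - R)/2)**(gamma - 1) with an optional
# cutoff on the stress ratio R.

def walker_factor(R, gamma, r_cutoff=None):
    """Factor applied to the strain amplitude; it diverges as R -> 1."""
    if r_cutoff is not None:
        R = min(R, r_cutoff)
    return ((1.0 - R) / 2.0) ** (gamma - 1.0)

unbounded = walker_factor(0.99, gamma=0.5)              # large, near-singular
bounded = walker_factor(0.99, gamma=0.5, r_cutoff=0.9)  # limited by the cutoff
```

For fully reversed loading (R = -1) the factor is exactly 1, so the amplitude is unchanged.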
The Walker mean stress correction is similar to the Smith-Watson-Topper correction when the additional material
property is set to 0.5. Notice however that the equations are not equivalent, and the two methods may differ
substantially when plastic damage is significant. The user is referred to the references listed below for a broader
discussion on the topic.
The following graphs show examples of the correlation obtained using the Walker parameter.
Copyright © 2023 Dassault Systemes Simulia Corp. Volume 1 14-3
Vol. 1 Section 14 Issue: 24.1 Date: 17.08.23
Fatigue analysis of elastic FEA results
For steels, the following approximation for the Walker parameter has been suggested:

γ = −0.000200·σu + 0.8818 (σu = ultimate tensile strength in MPa; R² = 0.682)

Figure 14.4-5 Trend of Walker parameter with Ultimate Tensile Strength for steels
No such trend has been determined for aluminium alloys:
Figure 14.4-6 Trend of Walker parameter with Ultimate Tensile Strength for aluminium alloys
References:
N. E. Dowling, C. A. Calhoun, and A. Arcari, “Mean Stress Effects in Stress-Life Fatigue and the Walker Equation,”
Fatigue and Fracture of Engineering Materials and Structures, Vol. 32, No. 3, March 2009, pp. 163-179. Also,
Erratum, Vol. 32, October 2009, p. 866.
N. E. Dowling, “Mean Stress Effects in Strain-Life Fatigue,” Fatigue and Fracture of Engineering Materials and
Structures, Vol. 32, No. 12, December 2009, pp. 1004–1019.
M. A. Meggiolaro and J. T. P. de Castro, “An improved strain-life model based on the Walker equation to describe
tensile and compressive mean stress effects,” International Journal of Fatigue, Vol 161, August 2022, pp 106905.
Cycles of von Mises stress are extracted. The endurance is calculated from an S-N curve or from a stress-life curve
derived from local strain materials data. This is also configured from the Analysis Options dialogue.
When using the local strain materials data the life curve is defined by the equation:
Δσ/2 = σf′(2Nf)^b
and a cyclic plasticity correction is used to convert the elastic FEA stresses to elastic-plastic stress-strain.
Otherwise the life curve is defined by the S-N values defined in the material database, and the plasticity correction
can be optionally performed depending on settings in Analysis Options dialogue (FEA Fatigue >> Analysis Options...,
Stress Analysis tab).
The von Mises Stress algorithm is not recommended for general fatigue analysis. See the Fatigue Theory Reference
Manual, Section 7 for a discussion of this algorithm.
For finite life calculations Goodman, Gerber, Walker, User Defined, R ratio SN curves or no mean stress correction
can be selected. See sections 14.3, 14.4, 14.9 and 14.11.
For infinite life calculations (FRF) a user defined, R ratio SN curves, Goodman or Gerber infinite life envelope analysis
can be performed. See section 17.
This algorithm is not recommended because, as with all ‘representative’ stress variables that have their sign defined
by some criterion, there is the possibility of sign oscillation. For the von Mises stress this occurs when the hydrostatic
stress is close to zero (i.e. the two major principal stresses are similar in magnitude and opposite in sign). This is why
using such ‘representative’ stress values for fatigue analysis can cause spurious hot spots.
σm = sgn(I1,d)·√(½[(σxm − σym)² + (σym − σzm)² + (σxm − σzm)² + 6(τxym² + τyzm² + τxzm²)])

σa = √(½[(σxa − σya)² + (σya − σza)² + (σxa − σza)² + 6(τxya² + τyza² + τxza²)])
The above parameters are used to calculate the damage of each potential cycle, i.e. every pair of tensors in the stress
history, using the Walker mean-stress correction with two limitations:
1. If 𝜎𝑚 < 0, i.e. the stress ratio 𝑅 < −1, then a value of 𝑅 = −1 is used. This limits the reduction in damage
attributed to compressive cycles.
2. If 𝜎𝑎 > 𝜎𝑦 , where 𝜎𝑦 is the 0.2% proof stress, an adjustment is made to cycles which are partly compressive
(𝑅 < 0) so that their amplitudes are corrected as if they were fully tensile.
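A minimal sketch of the two cycle parameters defined above, assuming Voigt-style (xx, yy, zz, xy, yz, xz) tensor ordering and taking the sign from the first invariant of the mean tensor:

```python
import math

# Sketch of the Manson-McKnight cycle parameters: the signed von Mises mean
# stress and the von Mises stress amplitude of a pair of stress tensors.

def von_mises(s):
    xx, yy, zz, xy, yz, xz = s
    return math.sqrt(0.5 * ((xx - yy)**2 + (yy - zz)**2 + (xx - zz)**2
                            + 6.0 * (xy**2 + yz**2 + xz**2)))

def manson_mcknight(t1, t2):
    mean = [(a + b) / 2.0 for a, b in zip(t1, t2)]
    amp = [(a - b) / 2.0 for a, b in zip(t1, t2)]
    i1 = mean[0] + mean[1] + mean[2]     # first invariant of the mean tensor
    sigma_m = math.copysign(von_mises(mean), i1)
    sigma_a = von_mises(amp)
    return sigma_m, sigma_a

# Uniaxial cycle 0 <-> 200 MPa: mean 100 MPa, amplitude 100 MPa
sm, sa = manson_mcknight([200.0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0])
```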
The highest damage thus obtained defines the Most Damaging Major Cycle (MDMC). This is then used to define a
coordinate system for Rainflow cycles as follows:
- The principal stress directions are computed for the MDMC;
- An octahedral plane, whose normal is denoted 𝑛𝑜𝑐𝑡 , is defined as the normalised sum of the principal vectors. It
can be shown that the normal stress on this plane is proportional to the hydrostatic stress;
- The shear component of the traction on this plane is calculated. It can be shown that this is proportional to the
von Mises stress;
- The normalised direction of this shear stress is then denoted 𝑛𝜏 .
Now for each stress tensor 𝜎 in the loading history, the Rainflow parameter is given by 𝑛𝜏 𝑇 𝜎 𝑛𝜏 , which is the normal
component in the direction of the maximum octahedral shear stress.
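The geometric claims above can be checked numerically; the sketch below works in principal coordinates (so the principal directions are the coordinate axes) with arbitrary principal stress values:

```python
import math

# Sketch: in principal coordinates, the normal stress on the octahedral plane
# (normal = normalised sum of the principal vectors) equals the hydrostatic
# stress, and the shear traction is proportional to the von Mises stress.

def dot_mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

s1, s2, s3 = 300.0, 100.0, -50.0
sigma = [[s1, 0, 0], [0, s2, 0], [0, 0, s3]]   # tensor in the principal frame
n_oct = [1.0 / math.sqrt(3.0)] * 3             # octahedral plane normal

traction = dot_mat_vec(sigma, n_oct)
normal_stress = sum(t * n for t, n in zip(traction, n_oct))
hydrostatic = (s1 + s2 + s3) / 3.0

# Shear component of the traction on the octahedral plane:
shear = [t - normal_stress * n for t, n in zip(traction, n_oct)]
shear_mag = math.sqrt(sum(c * c for c in shear))
```

The octahedral shear magnitude equals (√2/3) times the von Mises stress, which is the proportionality stated above.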
Once Rainflow cycles have been defined in this way, their damage is calculated using the Manson-McKnight
formulation above.
Note that the most damaging cycle thus identified may not be the same as the Most Damaging Major Cycle defined
above, since the damage parameter differs from the Rainflow parameter. In this case, the MDMC replaces the worst
Rainflow cycle in the Miner’s rule summation for the whole stress history and this is reflected in fe-safe’s standard
Life contour. A second life contour is output by this algorithm which takes no account of the MDMC.
References
J. Z. Gyekenyesi, P. L. Murthy and S. K. Mital, "NASALIFE - Component Fatigue and Creep Life Prediction Program",
National Aeronautics and Space Administration, Cleveland, 2005.
Per the Guideline, the algorithm is suitable for expected lives greater than 10,000 cycles. Unlike the other finite life
algorithms in fe-safe, the FKM Guideline does not provide a prediction of life. Instead the user must provide the
required life for the specified loading, and the algorithm then computes the degree of utilization.
The degree of utilization is based on the ratio of the largest stress amplitude to the variable amplitude fatigue strength.
Assessment of the component fatigue strength is achieved if the largest degree of utilisation is not greater than 1 (or
even a lower value than 1). The results reported by fe-safe are the individual utilization for each principal direction,
and the total combined utilization.
Stress amplitudes extracted by Rainflow counting the loading history are applied to the consistent version of Miner’s
rule (discussed in chapter 4 of the FKM guideline) to produce the variable amplitude fatigue strength. The elementary
version of Miner’s rule is also available as an option via the group properties.
Datasets within a loading block are combined by superposition before cycle extraction by Rainflow counting. To
comply with the FKM Guideline, only proportional loading is valid within a loading block, i.e. the direction of the
principal directions does not change. For non-proportional loading, the datasets should be applied in separate loading
blocks, each with a single superposition to ensure proportionality. For each principal direction, the largest individual
utilization over all loading blocks is reported. A combined degree of utilization (aBK,I, aBK,II, …) will be calculated for
each loading block (I, II, …) and summed to give the overall degree of utilization (Reference [1] page 109).
aBK = aBK,I + aBK,II + …
The degree of utilization is calculated according to the number of repeats specified in the loading definition. The block
repeats property should be combined with the loading block history to specify the number of cycles required for the
analysis. For example, to assess the degree of utilization at the knee point for a steel component with fully reversed
loading, apply 1e6 repeats to the default loading.
Materials can be selected from either the ‘FKM_Fe.dbase’ or ‘FKM_Al.dbase’ material databases for steel/iron and
aluminium materials respectively. Please note: the databases are delivered in the fe-safe product installation directory
under the /database sub-directory. Open the database to access the materials (see section 8 for more details).
Alternatively, materials from the existing databases can be used with the FKM Guideline algorithm by adding the
necessary properties:
fkm : Material Type – Required for use with this algorithm
fkm : Grey Iron Index – Required for GJL material type
fkm : Elongation (%) – Required for Wrought aluminium alloy material type
The relative stress gradient, in the direction normal to the component surface, is calculated automatically for
3-dimensional element types. A maximum search depth is set by the ‘taylor : L (mm)’ material parameter.
Surface roughness for groups analysed with this algorithm must be set using the ‘FKM-Guideline.sfprop’ definition
file available when ‘Define surface finish as a value’ is selected in the Surface Finish Definition dialog. The surface
roughness value Rz is valid in the range 1 to 200 microns.
Other group properties are set through the Group Algorithm Selection dialog. The FKM Guideline options are only
visible when the algorithm is selected or has been specified as the material default algorithm in the database. The
properties should be set according to the guideline document. Note that the coating factor is only applied to aluminium
alloys and that the casting factor is only relevant to cast iron material types.
The guideline considers four separate types of overloading which are accessed via the algorithm selection dialog of
fe-safe as methods of mean stress correction, along with the default method (described below). The characteristics
of the loading history for each method are
Type of overloading F1: constant mean stress
Type of overloading F2: constant stress ratio
Type of overloading F3: constant minimum stress
Type of overloading F4: constant maximum stress
In the case where none of the above conditions apply, the default mean stress correction option for varying mean
stress should be used. In this case the stress ratio of each cycle is made equivalent to that of the largest cycle by
adjusting the stress amplitude according to type of overloading F2.
References
Δσ/2 = σf′(2Nf)^b
and a cyclic plasticity correction is used to convert the elastic FEA stresses to elastic-plastic stress-strain.
Otherwise the life curve is defined by the S-N values defined in the material database, and the plasticity correction
can be optionally performed depending on settings in Analysis Options dialogue [FEA Fatigue >> Analysis Options...],
Stress Analysis tab (see section 5).
For finite life calculations Goodman, Gerber, Walker, Morrow, Morrow B, Smith-Watson-Topper, R ratio SN curves,
User Defined or no mean stress correction can be selected. See sections 14.1, 14.3, 14.4, 14.8, 14.9 and 14.11.
For infinite life calculations (FRF) a user defined, R ratio SN curves, Goodman or Gerber infinite life envelope analysis
are supported, see section 17.
Two non-standard fatigue analyses are also supported. To enable these options, check the Enable non-standard
fatigue modules option on the Legacy tab of the Analysis Options dialogue.
The Buch analysis is a hybrid finite and infinite life calculation, see section 17.
The Haigh diagram creation module (see 14.15.) has now been superseded by the diagnostic option for creating
Haigh and Smith diagrams for all analysis algorithms.
This algorithm can give very non-conservative results for most ductile metals – see the Fatigue Theory Reference
Manual, section 7.
The mean stress axis is made non-dimensional by dividing each mean stress by the material ultimate tensile strength,
UTS. For compressive mean stresses, the ultimate compressive strength, UCS, can be used, provided that the UCS
is defined in the material database.
At a mean stress equal to the material UTS, the allowable stress amplitude is zero, as the material is on the point of failure.
For a cycle (Sa, Sm) the value of the MSC factor is extracted for the value of Sm, and the equivalent stress amplitude
at zero mean stress is:
Sa0 = Sa / MSC

or, if the fatigue algorithm uses strain amplitudes then:

ea0 = ea / MSC
This can also be defined as a Smith diagram.
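A sketch of the look-up, assuming a hypothetical user-defined MSC curve stored as (Sm/UTS, MSC) pairs with linear interpolation between the defined points:

```python
# Sketch of applying a user-defined mean stress correction: the MSC factor is
# interpolated at the cycle's normalised mean stress, then Sa0 = Sa / MSC.

# (Sm/UTS, MSC) pairs: an illustrative Goodman-like straight line.
curve = [(-1.0, 2.0), (0.0, 1.0), (1.0, 0.0)]

def msc_factor(sm_over_uts):
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= sm_over_uts <= x1:
            return y0 + (y1 - y0) * (sm_over_uts - x0) / (x1 - x0)
    raise ValueError("mean stress outside the defined MSC curve")

def equivalent_amplitude(sa, sm, uts):
    """Equivalent stress amplitude at zero mean stress."""
    return sa / msc_factor(sm / uts)

# A cycle with Sm = UTS/2 has MSC = 0.5 here, so Sa0 is doubled:
sa0 = equivalent_amplitude(100.0, 250.0, 500.0)
```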
Each material can have a default user defined MSC. This will be used as the default MSC when the material is
selected for an analysis and also as the infinite life envelope for Haigh and Smith diagram diagnostics.
For details of how to define a mean stress correction curve in fe-safe, see Appendix E.
Figure 14.12-1 Original S-N curve, knock-down curve, modified S-N curve
This option applies to stress-based analyses only where the S-N material data is available. The scaling will not be
applied if the S-N data is derived from the local strain parameters.
For details of how to define a mean stress correction curve in fe-safe, see Appendix E.
For finite life calculations the S-N curve for the Stress ratio of the cycle is used. If the Stress ratio falls
between two known R‐ratios, the S-N data is linearly interpolated between them.
For infinite life calculations the FRF envelope is constructed by looking up the FRF design life on the S-N
curves for the appropriate Stress ratio, then adding the corresponding point to the envelope. If the
highest mean stress on the envelope is less than the UTS, the envelope is taken horizontally out to the
UTS, at which point it drops to 0. If the lowest mean stress on the envelope is greater than the UCS
(which if undefined may take its value from the UTS) then the envelope is taken horizontally out to the
UCS but does not drop down to zero.
This option can only be used with the following stress-based algorithms: von Mises, normal stress and
stress-based Brown Miller.
Figure 14.14-1
The Buch calculation is very similar to the fatigue reserve factor (FRF) calculation described in section 17.3, except
that the envelope is a function of both the material's UTS (Su) and the yield stress (Sy). The yield stress is taken
to be the 0.2% proof stress. (Ref: Buch, A., 'Fatigue Strength Calculation', Trans Tech Publications, 1988, (6) "Effects
of Mean Stress"). This calculation is more conservative than a Goodman calculation for large tensile or large
compressive mean stresses. The infinite life envelope is defined as in Figure 14.14-1. The diagram indicates that if the
stresses are within the shaded area the component will have a calculated infinite life.
The fe-safe analysis calculates a Fatigue Reserve Factor value at the node, using the method described in Section
17.
The Buch method has been extended for use in finite life design. As shown in Figure 14.14-2, curves for different
fatigue endurance values converge to the same curve in the region clipped by the lines joining the yield stresses. It
is not possible to determine a fatigue life in this region, so fe-safe calculates a pseudo-life there. It is assumed
that the S-N curve has a constant slope in the high cycle fatigue region, and the slope b at an endurance of 10⁷ cycles
is used as an inverse power on the factor to obtain the fatigue life.
This method will provide consistent contour plots for FRF and fatigue life calculations performed with the Buch
algorithm. However it should be noted that, for cycles in the ‘clipped’ region, the method will give calculated lives that
are a function of the specified design life. In other words, the fatigue life will change with the design life.
Figure 14.14-2
To allow this algorithm to be selected, check the Enable non-standard fatigue modules option on the Analysis tab of the
Analysis Options dialogue.
Δε/2 = (σf′/E)(2Nf)^b + εf′(2Nf)^c
Morrow, Walker, Smith-Watson-Topper, User Defined or no mean stress correction may be selected. See section
14.11 for a definition of the user-defined MSC. For the Morrow mean stress correction the strain-life equation is
modified to:
Δε/2 = ((σf′ − σm)/E)(2Nf)^b + εf′(2Nf)^c
For the Walker mean stress correction the strain-life equation is modified to:

Δε/2 = ((1 − R)/2)^(1−γ)·[(σf′/E)(2Nf)^b + εf′(2Nf)^c]

Rearranging this equation to show the correction applied to the left-hand side gives:

(2/(1 − R))^(1−γ)·(Δε/2) = (σf′/E)(2Nf)^b + εf′(2Nf)^c
The corrected strain amplitude then forms the damage parameter for the fatigue damage calculations.
Alternatively, an FRF calculation can be used with this algorithm - see section 17.3.
This algorithm can also be used for fatigue analysis of elastic-plastic FEA results. (See section 15).
Fatigue analysis using principal strains can give very non-conservative results for ductile metals. See the Fatigue
Theory Reference Manual, section 7 for the background to this algorithm.
Δγmax/2 = 1.33·(σf′/E)(2Nf)^b + 1.5·εf′(2Nf)^c
Morrow, User Defined or no mean-stress correction may be selected. See section 14.9 for a definition of the
user-defined MSC. For the Morrow mean-stress correction, the strain-life equation is modified to:
Δγmax/2 = 1.33·((σf′ − σm)/E)(2Nf)^b + 1.5·εf′(2Nf)^c
See the Fatigue Theory Reference Manual, section 7-4-4 for the background to this algorithm. Note that fe-safe uses
the value of Poisson’s ratio defined by the material to calculate the coefficient on the elastic term, which differs from
the value 0.3 cited in the Theory manual.
Δγmax/2 + Δεn/2 = 1.665·(σf′/E)(2Nf)^b + 1.75·εf′(2Nf)^c
For infinite-life analysis, Goodman, Gerber or no mean-stress correction may be selected; for finite-life analysis,
Morrow, User-Defined or no mean-stress correction may be selected. See section 14.9 for a definition of the
user-defined MSC. For the Morrow mean-stress correction, the strain-life equation is modified to:
Δγmax/2 + Δεn/2 = 1.665·((σf′ − σm)/E)(2Nf)^b + 1.75·εf′(2Nf)^c
Brown-Miller is the preferred algorithm for most ductile metals at room temperature and is the default algorithm for
most materials in the fe-safe material database. See the Fatigue Theory Reference Manual, section 7-4-7 for the
background to this algorithm. Note that fe-safe uses the value of Poisson’s ratio defined by the material to calculate
the coefficient on the elastic term, which differs from the value 0.3 cited in the Theory manual.
Note that, by default, the Brown-Miller algorithm in fe-safe uses only the values from the cycle end points to obtain
∆𝜀𝑛 and 𝜎𝑚 . Optionally, a search for the minimum and maximum normal stresses and strains between the end points
can be performed over each cycle. This is activated by turning off the option “Only use cycle end points to determine
the minimum and maximum normal stresses and strains over the cycle” in FEA Fatigue >> Analysis Options… >>
Algorithms tab >> Brown-Miller. Similar to the Fatemi-Socie algorithm, an interpolation tolerance (default value 0.05)
is used to determine the cycle closing point (see Figure 14.24-2).
Turning off “Only use cycle end points to determine the minimum and maximum normal stresses and strains over the
cycle” is the recommended option for non-proportional loading scenarios. With this approach, the actual minimum
and maximum normal stresses and strains over a cycle can be correctly determined. This allows the normal strain
range to be computed as Δεn = εn,max − εn,min and the mean stress as σm = (σn,max + σn,min)/2.
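The difference between the two options can be illustrated with a toy normal-strain history (the values are arbitrary):

```python
# Sketch contrasting "cycle end points only" with a search over all points
# between the end points of a cycle.

def strain_range(history, i_start, i_end, endpoints_only=True):
    if endpoints_only:
        points = [history[i_start], history[i_end]]
    else:
        points = history[i_start:i_end + 1]
    return max(points) - min(points)

# Normal strain at each point of a closed cycle; the true extremes lie
# between the end points, so the end-point range underestimates badly:
en = [0.001, 0.004, -0.002, 0.001]
r_endpoints = strain_range(en, 0, 3, endpoints_only=True)   # 0.0
r_full = strain_range(en, 0, 3, endpoints_only=False)       # 0.006
```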
When the Cycle-life table for critical plane or the Cycle-life table for all planes (see section 22.1.7) is exported, extra
information will be available. Namely, the elastic (elasENmin, elasENmax, elasSmin, elasSmax) and the
Neuber-corrected (EN min, EN max, S min, S max) minimum and maximum normal stresses and strains will be reported
along with the points where they occur (Pt ENmin, Pt ENmax, Pt Smin, Pt Smax). If a point of minimum/maximum
happens to be a virtual point (as discussed in Section 14.24 Fatemi-Socie analysis), this will be reported as a decimal
floating-point (index of the previous point plus the interpolation parameter, see Figure 14.24-2).
The weighting factor k depends on the torsional and bending/tension endurance limits:
k = 3(𝜏−1/𝜎−1 − 1/2)

where 𝜎−1 is the endurance limit under fully reversed loading from the S-N curve.
The value of 𝜏−1 is derived similarly from the T-N curve at the endurance limit, although it is also possible to specify a
constant scaling factor s2t to derive the T-N curve from the S-N curve, and hence 𝜏−1 from 𝜎−1.
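A small sketch of the factor, with the s2t fallback treated as an assumption about how 𝜏−1 is derived when no T-N curve exists; the numeric values are illustrative:

```python
# Sketch of the weighting factor k = 3*(tau_-1/sigma_-1 - 1/2); if no T-N
# curve is available, tau_-1 is derived from sigma_-1 via the scaling
# factor s2t.

def weighting_factor_k(sigma_m1, tau_m1=None, s2t=None):
    if tau_m1 is None:
        tau_m1 = s2t * sigma_m1      # derive the torsional limit from s2t
    return 3.0 * (tau_m1 / sigma_m1 - 0.5)

# A von Mises-like material (tau_-1 = sigma_-1 / sqrt(3)) gives k close to 0.232:
k = weighting_factor_k(300.0, s2t=1.0 / 3.0 ** 0.5)
```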
Just as the FRF (see 17.3) can be visualised in terms of Goodman/Haigh diagrams with amplitude and mean stress
axes, so the Dang Van safety factors can be viewed in terms of the deviatoric stress and the hydrostatic stress. The
safety factor 𝑓 above is equivalent to a radial safety factor on this diagram, and if we consider only scaling the
deviatoric component a vertical safety factor can also be calculated similar to the vertical FRF.
On the first pass through the signal, fe-safe considers the elastic shakedown state resulting from the multiaxial load.
The hydrostatic stress is subtracted from the direct stress, and the centre of minimum sphere which bounds the full
signal is estimated. The minimum sphere that bounds the locus of the signal can be considered as the 'yield domain'.
The algorithm first subtracts the hydrostatic stress off each stress tensor in the load history, and then converts these
to a 5D deviatoric vector. The algorithm locates the optimum centre of the 5D point cloud so as to minimise the radius
of an enclosing hypersphere. The method starts with an initial estimate of the centre at the mean position, and a low
value of the enclosing radius. A set of iterations through the history is performed increasing the radius by a small
amount, and moving the centre, for each point that is outside of the current radius estimate. This continues until all
points are within the radius. The iterative update scheme on the centre c and radius r for each point p is
d = p − c
If |d| > r:
r += α(|d| − r);  c += ((1 − α)(|d| − r)/|d|) d
with α = 0.1.
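This first pass can be sketched in NumPy as follows. It is an illustrative sketch of the update scheme described above; the function name, the sweep limit and the small tolerance are assumptions, not fe-safe internals.

```python
import numpy as np

def first_pass_centre(points, alpha=0.1, max_sweeps=500):
    """Sketch of the first-pass estimate of the centre and radius of the
    minimum hypersphere enclosing a 5D deviatoric point cloud."""
    c = points.mean(axis=0)            # initial estimate: mean position
    r = 1e-6                           # low initial value of the radius
    for _ in range(max_sweeps):
        grown = False
        for p in points:
            d = p - c
            dist = np.linalg.norm(d)
            if dist > r + 1e-12:       # point lies outside current sphere
                excess = dist - r
                r += alpha * excess                       # grow radius a little
                c += (1 - alpha) * excess * d / dist      # move centre toward p
                grown = True
        if not grown:                  # all points enclosed: converged
            break
    return c, r
```

Note that for a single outlying point the radius growth and centre move exactly close the gap, so repeated sweeps converge quickly even though each update only treats one point at a time.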
A second pass through the history refines the position of the centre, and calculates the minimum radius of the sphere.
This uses a minmax generalised descent approach proposed by Cerullo [12]. In this method at each iteration to
update the central estimate 𝐜 ∗ we scan through the relevant point set (only points with a distance from the initial
centre in excess of 0.75r) and find the time step 𝑡𝑚 with the maximum difference norm in the deviatoric space, so
tm = argmax over t of |d(t) − c*|
Then c* += γ(d(tm) − c*).
The second phase completes when either the current maximum distance radius increases or the change in the central
estimate is less than δ = γrε with ε = 1E−6. In this final stage of convergence the convergence rate is reduced,
so we use γ = 0.05. Finally, a limit on the number of iterations given by (10/γ)√N is also imposed, where N is the
number of points in the loading, so the method still completes fairly rapidly even when N is large.
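The second-pass refinement can be sketched as below. This is an illustrative NumPy sketch of the min-max descent described above, not fe-safe's exact implementation; the 0.75r screening, the stopping rules and the iteration cap follow the text.

```python
import numpy as np

def refine_centre(dev_pts, c0, r0, gamma=0.05, eps=1e-6):
    """Sketch of the Cerullo-style min-max descent refinement of the
    hypersphere centre in 5D deviatoric space."""
    # only points well away from the initial centre can control the radius
    keep = np.linalg.norm(dev_pts - c0, axis=1) > 0.75 * r0
    pts = dev_pts[keep] if keep.any() else dev_pts
    c = c0.copy()
    delta = gamma * r0 * eps                   # threshold on the centre move
    max_iter = int(10 * np.sqrt(len(pts)) / gamma)
    prev_max = np.inf
    for _ in range(max_iter):
        d = np.linalg.norm(pts - c, axis=1)
        m = int(np.argmax(d))                  # worst (furthest) time step tm
        if d[m] > prev_max:                    # max distance started increasing
            break
        prev_max = d[m]
        step = gamma * (pts[m] - c)            # move centre toward worst point
        c += step
        if np.linalg.norm(step) < delta:       # centre change negligible
            break
    r = np.linalg.norm(pts - c, axis=1).max()  # minimum radius from refined c
    return c, r
```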
Then the stable residual stress tensor ρ* is derived by converting the deviatoric centre c* back to an equivalent tensor,
which is then subtracted from all of the points to determine the deviatoric amplitudes {τa,t^DV}.
On the third pass through the history, the deviatoric Tresca stresses are calculated after subtracting the stable
residual tensor to give the generalized multiaxial shear estimate τa,t^DV as:
τa,t^DV = Tresca(St − I σH,t − ρ*)
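The per-sample shear estimate above can be sketched for a single 3x3 stress tensor as follows. The helper name is illustrative, and the Tresca measure is taken here as half the largest principal-stress difference.

```python
import numpy as np

def dang_van_shear(S_t, rho_star):
    """Sketch of the third-pass shear estimate: subtract the hydrostatic part
    and the stable residual tensor, then take the Tresca shear of the result.
    S_t and rho_star are 3x3 symmetric stress tensors (illustrative helper)."""
    sigma_h = np.trace(S_t) / 3.0                 # hydrostatic stress
    dev = S_t - np.eye(3) * sigma_h - rho_star    # deviatoric minus residual
    eig = np.linalg.eigvalsh(dev)                 # sorted principal values
    return 0.5 * (eig[-1] - eig[0])               # Tresca shear estimate
```

For a pure shear tensor of amplitude τ this returns τ, and for a uniaxial stress σ it returns σ/2, as expected for a Tresca measure.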
The loading path (time history of loading) is plotted on the Dang Van diagram. The vertical component is the deviatoric
Tresca stress and the horizontal component is the hydrostatic stress.
The stress-based factor of strength for any point in the loading is the distance between the loading path and the Dang
Van failure line. A safety factor is calculated for each point in the loading as a ratio with respect to the distance from
the Dang Van line. The safety factors can be expressed radially (w.r.t. the origin, equivalent to the ratio 𝑓 for the worst
point ) or vertically (w.r.t. zero shear stress line).
Safety factors less than one imply yielding and a non-infinite life.
As well as the radial and vertical safety factors, two additional contours are also output for τa,t^DV and σH,t at t = tm. For
example:
The shear endurance limit 𝜏−1 is derived from the T-N curve (torsional form of S-N curve) using the Constant
Amplitude Endurance Limit ( CAEL, 2Nf half-cycles, see 8.5.2) as the shear amplitude where the T-N curve gives a
life equal to Nf full-cycles. Similarly 𝜎−1 is derived from the S-N curve as the (tensile or bending) stress amplitude
where the S-N curve gives a life equal to Nf full-cycles. Note that if either the S-N or T-N curve does not go up to the
CAEL then extrapolation will be used (in log-log space). So if for example the S-N curve is defined between 1E4 and
1E6 cycles but the CAEL is 2E7, then the S-N curve will be logarithmically extrapolated out to 1E7 cycles to obtain
𝜎−1 . Also note that if the S-N and T-N curves specify stress amplitudes at the CAEL, then only these values are
relevant.
Note that older versions of fe-safe did not use τ₋₁ directly, and instead used a Dang Van failure line obtained by fitting
to a set of S-N curves with at least two stress ratios, usually R=0 (constant amplitude) and R=-1. The gradient of this
line is −k, and its intercept τ₋₁. It is possible to revert to this form of material specification by using the Legacy tab under the
Analysis Options dialog; there is a checkbox entitled:
Copyright © 2023 Dassault Systemes Simulia Corp. Volume 1 14-17
Vol. 1 Section 14 Issue: 24.1 Date: 17.08.23
Fatigue analysis of elastic FEA results
Radial Factor
The radial factor is the ratio a/b, shown in Figure 14.19-1.
Figure 14.19-1
The loading path is indicated as a vector. The FRF is calculated for the point closest to the Dang Van infinite life line,
circled in Figure 14.19-1.
Vertical Factor
The FRF is the ratio of b/a, shown in Figure 14.19-2.
Figure 14.19-2
Prior to version 5.2-05 this calculation was only performed for the sample with the worst radial factor. At 5.2-05 this
was modified to perform the calculation for every sample. The old behaviour can be enabled by adding the
keyword “DANGVAN_VERTALLPTS”. With a value of 0, fe-safe will perform the worst-point-only calculation (pre-5.2-05
behaviour), and with a value of 1 (the default 5.2-05+ behaviour) fe-safe will perform the calculation on every point.
14.19.3 Diagnostic output
For each analysis a diagnostics log file is created with the same name as the results file and the extension .log. This
will contain the information displayed in the message log during the analysis.
For a Dang Van analysis, export options include a Dang Van plot and plots of the Hydrostatic pressure and the Local
Shear strain. These should be selected in the Exports and Outputs dialogue, and the Export Dang Van Plots check
box should be checked.
The output files will be written to the same location as the results file, with filenames which contain the results filename
plus the element and node numbers.
e.g. If the output file is /data/test1.fil, then for element 27 node 4 the two created data files will be :
Dang Van Plot /data/test1_DangVan_27-4.dac
Stress tensors, local shear and hydrostatic stress plots /data/test1_S-e_27-4.txt
Both data files can be opened in fe-safe using File >> Data Files >> Open Data File and can then be plotted or listed
(see section 7). Example results are shown in Figure 14.19-3 and Figure 14.19-4.
[Figure 14.19-3: Dang Van plot, Tau(Local) (MPa) against PHydro (MPa)]
[Plot of the stress tensor channels Sxx, Syy, Szz, Sxy, Syz and Sxz, together with Tau and pHydro, against Samples]
Figure 14.19-4 Plot of tensors, Hydrostatic Pressure and Local Shear Stress.
Using the cursor (Ctrl + T) on the Dang Van plot will show the radial and vertical factors calculated on a point by point
basis. The plot below shows an active cursor and a cursor converted to text.
[Dang Van plot of Tau(Local) (MPa) against PHydro (MPa), showing the R=-1 and R=0 lines, an active cursor and a
cursor converted to text: (17.29, 41.015) rad.=4.307 vert.=6.293]
The weighting factor k depends on the torsional and bending/tension endurance limits:
k = 3τ₋₁/σ₋₁ − √3
and 𝜎−1 is the endurance limit under fully reversed loading from the S-N curve.
Some intuition into the Prismatic Hull method can be found by considering some simple cases:
1. For fully reversed pure shear (torsion) stress τa, τa^PH = τa and σH,max = 0, so f = τ₋₁/τa.
2. For fully reversed uniaxial stress σa, σH,max = σa/3 and τa^PH = σa/√3, so f = σ₋₁/σa.
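The algebra of the two cases can be checked numerically. This sketch assumes the safety factor takes the form f = τ₋₁/(τa^PH + k·σH,max), i.e. the residual-stress expression given later with σH,resid = 0; the endurance limits used are illustrative values, not material data.

```python
import math

# Illustrative endurance limits (MPa), not real material data
tau_m1, sigma_m1 = 160.0, 250.0
k = 3 * tau_m1 / sigma_m1 - math.sqrt(3)   # weighting factor k

# Case 1: fully reversed torsion of amplitude tau_a (sigma_H,max = 0)
tau_a = 80.0
f1 = tau_m1 / (tau_a + k * 0.0)
assert math.isclose(f1, tau_m1 / tau_a)     # f = tau_-1 / tau_a

# Case 2: fully reversed uniaxial stress of amplitude sigma_a
sigma_a = 100.0
f2 = tau_m1 / (sigma_a / math.sqrt(3) + k * sigma_a / 3)
assert math.isclose(f2, sigma_m1 / sigma_a) # f = sigma_-1 / sigma_a
```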
Note that the references [1-5] define a symbol τa that is different from the applied shear stress in pure torsion, as in
those references τa is used for a distance in deviatoric space which needs a √2 normalisation factor to get back to
an effective shear stress measure. The fe-safe term τa^PH is equivalent to the term τa/√2 in those references. The
dimensions of the largest enclosing prismatic hull in the deviatoric space can be logged for specific items. See section
22.1.7.
Note that if residual stresses are defined then, as with other FRF type methods, the safety factor is not applied to the
residual hydrostatic stress. Rather than including the residual hydrostatic stress in the kσH,max term, the safety
factor f with a residual stress with associated hydrostatic stress σH,resid is modified to be:
f = (τ₋₁ − kσH,resid)/(τa^PH + kσH,max)
Also note that with compressive (or residual) stresses, a negative hydrostatic stress will increase the safety factor.
The accuracy of the multiaxial Prismatic Hull method was assessed for steels and aluminum alloys in [3], considering
proportional and non-proportional stress and strain controlled tests reported in the literature. For complex non-
proportional histories the Prismatic Hull is generally more conservative than Dang-Van [1,4]. A study of the use of
the Prismatic Hull in assessing the fatigue of a Powertrain diesel crankshaft under peak torque conditions is given in
[4], which found that the Prismatic Hull correctly predicted the crack location. In an extensive survey of infinite life
methods [5] published in 2020, McKelvey et al found that the Prismatic Hull has the best agreement with the data
found in literature, and furthermore was the most computationally efficient.
The T-N curve may be defined explicitly, similarly to the S-N curve, or derived from the S-N curve using a constant
factor TN:s2t on stress (see 8.5.2 and 8.5.4). If no T-N curve is defined, and no such constant factor, then a default
constant factor of 1⁄√3 is used, which implies k=0, so the hydrostatic stress has no effect on the safety factor. This
would only be a sensible assumption for ductile materials. Note that the material databases supplied with fe-safe do
not contain torsional endurance limit data, but the example data in the “Local” database does contain illustrations of
the properties. See 8.5.2 and 8.5.4 for further details.
Finally note that although the full T-N curve may be used in certain finite life methods involving shear, it is only the
value of T at the CAEL that counts for the purposes of the Prismatic Hull. If full T-N data is not known, then the easiest
way to get a required τ₋₁ is to set TN:s2t to the ratio τ₋₁/σ₋₁. Similar considerations also apply to the Susmel-Lazzarin
method discussed in the next section.
Note that the critical plane criterion is to maximize 𝜏𝑎 , not to minimize 𝑓𝑆𝐿 . Susmel argues that it is the shear stress
which is primarily responsible for initiating fatigue cracks, and although higher (tensile) normal stress further
encourages crack opening, the maximum normal stress term is a modifier on the safety factor, rather than being used
as part of the critical plane search. However Dantas et al [9] argued for some modification to this, as there can be
situations, especially under relatively low shear but high normal stress, where the location of the maximum shear plane
is rather ambiguous. There may be a number of planes that have very similar shear stress to the maximum but a wide
variation in maximum normal stress. Dantas et al [9] suggested a two pass approach, first computing the maximum
shear stress plane, and then examining other planes where the shear amplitude was within 99% of the maximum
shear; and finally picking the plane with maximum normal stress out of that subset of planes. This is offered as an
option in fe-safe, though as the critical plane search is usually performed at 10 degree intervals rather than the 1
degree used in [9] the 99% has been relaxed to 98%; furthermore this only supersedes the original plane if the
increase in maximum normal stress decreases the safety factor.
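The two-pass selection described above can be sketched as follows. The data structure is a hypothetical list of per-plane results, and for brevity the sketch omits the final check that the superseding plane must also decrease the safety factor.

```python
def dantas_critical_plane(planes, band=0.98):
    """Sketch of the Dantas et al [9] two-pass plane selection: among planes
    whose shear amplitude is within `band` of the maximum, pick the plane
    with the highest maximum normal stress.
    `planes` is a hypothetical list of (shear_amplitude, max_normal_stress)."""
    tau_max = max(tau for tau, _ in planes)
    candidates = [p for p in planes if p[0] >= band * tau_max]
    return max(candidates, key=lambda p: p[1])   # highest normal stress wins
```

For example, a plane with 99% of the maximum shear but a much higher maximum normal stress would be selected ahead of the pure maximum-shear plane.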
Also note that if the normal stress is purely compressive so that the maximum normal stress is negative, then it is
simply ignored (i.e. we do not allow negative values of 𝜌), in which case
fSL = τ₋₁/τa
Susmel et al [10] later considered an upper limit of validity on 𝜌. This can be particularly important when the shear
stress is low but maximum normal stress is high. For example if 𝜏𝑎 is close to zero then 𝜌 can be very large, and even
tends to infinity for moderate normal stress as the shear term tends to zero. Also at very high normal stress, as the
maximum normal stress approaches the ultimate tensile stress (UTS), we essentially have a static failure and the
problem is no longer a fatigue problem. Therefore Susmel et al suggested [10] an upper bound on 𝜌 given by
ρlim = τ₋₁/(2τ₋₁ − σ₋₁)
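A sketch of the safety factor with this bound applied is given below. It assumes the standard Susmel-Lazzarin form fSL = τ₋₁/(τa + (τ₋₁ − σ₋₁/2)ρ) with ρ = σn,max/τa, which is consistent with the ρ = 0 limit quoted above; the material values in the test are illustrative only.

```python
def susmel_lazzarin_frf(tau_a, sigma_n_max, tau_m1, sigma_m1):
    """Sketch of the Susmel-Lazzarin safety factor with the upper bound on
    rho from [10]; compressive (negative) normal stress is ignored."""
    rho = max(sigma_n_max, 0.0) / tau_a            # rho = sigma_n,max / tau_a
    rho_lim = tau_m1 / (2 * tau_m1 - sigma_m1)     # validity bound on rho
    rho = min(rho, rho_lim)
    return tau_m1 / (tau_a + (tau_m1 - sigma_m1 / 2) * rho)
```

Note that at the bound ρ = ρlim the weighted normal-stress term reduces to τ₋₁/2 regardless of the material constants, so the clamped factor is τ₋₁/(τa + τ₋₁/2).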
Note that if residual stresses are defined, then the residual stress will be implicit in the maximum normal stress term,
but the safety factor applies to the total stress (residual + variable), unlike a conventional FRF.
The weighting factor (τ₋₁ − σ₋₁/2) used on ρ allows the Susmel-Lazzarin algorithm to be tuned to a wide variety of
materials, as the relative weight applied to normal or shear terms varies between ductile and more brittle materials.
The original paper evaluated the method on a wide range of materials including forms of iron. However it is based on
the concept that shear stress is the dominant driver of fatigue, and so may be less appropriate for the most brittle
materials where a normal stress dominated approach may be preferred, or for cast irons for which fe-safe provides
specialised methods. In an extensive survey of infinite life methods [5] published in 2020, McKelvey et al found that
the Susmel-Lazzarin method was one of the better critical plane methods, but became less reliable at high mean
stress; they also noted the range of validity on 𝜌.
Note that there is no explicit mean stress correction used with the Susmel-Lazzarin algorithm, but the mean normal
stress is implicit in the maximum normal term.
The critical plane search considers two sets of planes rotated about a reference axis (in a triaxial search 3 axes are
considered – see Technical Note 3 at the end of the User Guide). Firstly, shear on planes perpendicular to the axis is
considered, rotating about the reference direction (see Figure 14.21-1). Secondly, the shear around the 45 degree
cone centred on the axis is evaluated. The critical plane may be selected either according to the maximum shear
amplitude [6] (the MCR-method generalized shear amplitude on the plane), or according to the method of
Dantas et al [9].
An example is shown below comparing the Susmel-Lazzarin method to the Prismatic Hull. The main hotspot location
is the same, and similar in reserve factor value, but the Susmel-Lazzarin contour is more diffuse, as the normal stress
term weighted by ρ means that the factor does not decay in areas of low shear as much as the Prismatic Hull.
Some recalibration of the default contour viewing scheme in the contour viewer may be desirable (e.g. 1-5 as shown
below rather than 1-10).
Figure 14.21-1
Comparison of Prismatic Hull (left) and Susmel-Lazzarin (right) FRF for a Crankshaft model undergoing
both torsion and bending.
If Susmel-Lazzarin is selected from the algorithm menu, then the consequent Group Algorithm Selection dialog
contains an additional panel at the bottom to select sub-option variants. The Dantas variant on the critical plane
selection is selected via a checkbox, and a combo-box allows selection of the 3 possible means of dealing with high
𝜌 values when maximum normal stress is high compared to shear amplitude.
D = (1 − Di)^(−Pi) / ((Pi + 1) Nfi)
where
D is the damage for the cycle, in the current damage increment;
Di is the damage so far accumulated;
Pi is the damage rate parameter so far;
Nfi is the endurance of the cycle.
Figure 14.23-1
where 𝐺 is the shear modulus (computed internally using the Young’s modulus and the Poisson’s ratio), 𝜏𝑓′ is the
shear fatigue strength coefficient, 𝛾𝑓′ is the shear fatigue ductility coefficient and 𝑏0 and c0 are the shear fatigue
strength and ductility exponents, respectively. The parameter 𝑘 is a material dependent constant and the maximum
normal stress is normalised by the monotonic yield strength 𝜎𝑦 to maintain unit consistency. If test data is not
available, 𝑘 = 0.3 is taken as a default value. Mean stress effects are incorporated by the maximum normal stress
term in the Fatemi-Socie model and further mean stress corrections are not needed.
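The constants named above define the standard Fatemi-Socie relationship, γa(1 + k·σn,max/σy) = (τf′/G)(2Nf)^b0 + γf′(2Nf)^c0. A sketch solving this for life by bisection in log space is given below; the parameter values in the test are illustrative assumptions, not fe-safe defaults.

```python
def fatemi_socie_life(gamma_a, sigma_n_max, G, tau_f, gamma_f, b0, c0,
                      k=0.3, sigma_y=300.0):
    """Solve the Fatemi-Socie life equation for 2Nf (half-cycles) by
    bisection; an illustrative sketch, assuming the standard relationship
    built from the constants named above."""
    lhs = gamma_a * (1 + k * sigma_n_max / sigma_y)   # damage parameter

    def rhs(log2nf):
        n = 10.0 ** log2nf
        return (tau_f / G) * n ** b0 + gamma_f * n ** c0

    lo, hi = 0.0, 12.0              # search between 1 and 1e12 half-cycles
    for _ in range(200):            # rhs is decreasing in life (b0, c0 < 0)
        mid = 0.5 * (lo + hi)
        if rhs(mid) > lhs:
            lo = mid                # damage parameter reached at longer life
        else:
            hi = mid
    return 10.0 ** (0.5 * (lo + hi))
```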
The algorithm specific options consist of the interpolation tolerance used to determine the cycle closing point and the
possibility of using only the cycle end points to extract the normal stress (Figure 14.24-1).
Figure 14.24-1
The default cycle processor of the Fatemi-Socie algorithm is designed to search for the maximum normal stress over
all points between the cycle end points. With this approach, the actual maximum normal stress over a cycle can be
correctly determined. On the other hand, using only the end points to determine the maximum normal stress avoids
a costly search between the end points, but is non-conservative under non-proportional loading conditions.
The tolerance on interpolated end point location is relevant only when not using only the end points to determine the
maximum normal stress within a cycle. Consider the cycle A-B-C illustrated in Figure 14.24-2, the tolerance is used
to check if the end point C is a good approximation for the actual closing point A’.
If (A − C)/(A − B) ≥ tolerance, the virtual point A’ is obtained by linearly interpolating data from points y and z. Notice that this
affects the search for the maximum normal stress. If point C closes the cycle, the search for the maximum normal
stress is performed between points A-B-C. However, if the virtual point A’ closes the cycle, the search is performed
between points A-B-A’.
Figure 14.24-2
When the Cycle-life table for critical plane or the Cycle-life table for all planes (see section 22.1.7) are exported, the
elastic (elasSmax) and the Neuber-corrected (S max) maximum normal stresses are reported along with the point
(Pt Smax) where it occurs. If the maximum normal stress over a cycle occurs at a virtual point such as point A’, this
is reported as the index of the previous point (point y in the example) plus the interpolation parameter t used in the
linear interpolation 𝐴′ = y + t(𝑧 − 𝑦).
Note that no plasticity correction is applied to this method, even if the plasticity correction for S-N data is enabled. It
is primarily intended for medium to high cycle fatigue with low plasticity. It is not recommended for lower cycle fatigue,
for example lives below 1e6, for which the standard strain-based Brown Miller would be more appropriate.
Note that Stress-based Brown-Miller analysis always uses only the information from the cycle end points. The option
“Only use cycle end points to determine the minimum and maximum normal stresses and strains over the cycle” in
FEA Fatigue >> Analysis Options… >> Algorithms tab >> Brown-Miller has no effect over this algorithm.
14.26 References
1. EN Mamiya, JA Araujo, FC Castro (2009). Prismatic Hull – a new measure of shear stress amplitude in multiaxial
high cycle fatigue, International Journal of Fatigue 31, 1144-1153.
2. FC Castro, JA Araujo, EN Mamiya, N Zouain (2009). Remarks on multiaxial fatigue limit criteria based on prismatic
hulls and ellipsoids, International Journal of Fatigue 31, 1875-1881.
3. EN Mamiya, FC Castro, JA Araujo (2014). Recent developments on multiaxial fatigue: the contribution of the
University of Brasilia, Theoretical and Applied Fracture Mechanics.
4. G de Morais Teixeira, J Draper, A Rodrigues et al (2015). Dang-Van, Prismatic Hull and Findley approaches for
high cycle fatigue evaluation of powertrain components, NAFEMS conference, June 2015.
5. S McKelvey, S Zhang, E Subramanian and Y-L Lee (2020). Review and Assessment of Multiaxial Fatigue Limit
Models, SAE Technical Paper 2020-01-0192, doi:10.4271/2020-01-0192.
6. L Susmel and P Lazzarin (2002). A bi-parametric Wöhler curve method for high cycle multiaxial fatigue
assessment, Fatigue and Fracture of Engineering Materials and Structures (25):63-78.
7. IV Papadopoulos (2000). Multiaxial fatigue limit criterion and fatigue life prediction methodology (including stress
gradient effects), Technical report, European Commission Joint Research Centre, DOI 10.13140/2.1.2539.2321,
June 2000.
8. IV Papadopoulos (1998). Critical plane approaches in high cycle fatigue. On the definition of the amplitude and
the mean value of the shear stress acting on the critical plane, Fatigue and Fracture of Engineering Materials and
Structures (21):269-285.
9. AP Dantas, JA Araujo, EN Mamiya, et al (2013). An alternative measure for the shear stress amplitude in critical
plane based multiaxial fatigue models, ICMFF (9), October 2013.
10. L Susmel, R Tovo, P Lazzarin (2005). The mean stress effect on the high-cycle fatigue strength from a multiaxial
fatigue point of view, International Journal of Fatigue 27, 928-943.
11. T Matake (1977). An explanation of fatigue limit under combined stress, Bull JSME (20):257-263.
12. M Cerullo (2013). Application of Dang Van criterion to rolling contact fatigue in wind turbine roller bearings,
Proc 13th International Conference on Fracture, Beijing, China.
[Strain-life curves eA@Kt=1, eA@Kt=1.5, eA@Kt=2 and eA@Kt=2.5 plotted against Life:2nf]
Figure 15.5-1: Strain-life curves degraded by the effect of surface finish factor Kt,
Kt=1 is the upper curve.
The degraded strain-life curve is calculated at increments on the original strain-life curve (see section 14 for the
equation defining the strain-life curve) as follows:
1. At a given life (nf) extract the strain amplitude (ea).
2. Use the cyclic stress-strain curve to evaluate the associated stress (S) and hence calculate the Neuber’s
product (np).
3. Divide the Neuber’s product (np) by the square of the surface finish factor (Kt) to give the effective Neuber’s
product (np’).
4. Evaluate the strain amplitude (ea’) and the stress (S’) for the applied surface finish factor associated with life
(nf) using the cyclic stress-strain curve and the effective Neuber’s product (np’).
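The steps above can be sketched as follows. The Ramberg-Osgood constants are illustrative assumptions, and the bisection solver merely stands in for whatever curve evaluation fe-safe performs internally.

```python
def ro_strain(stress, E=200e3, K=1200.0, n=0.12):
    """Illustrative Ramberg-Osgood cyclic stress-strain curve (MPa units)."""
    return stress / E + (stress / K) ** (1.0 / n)

def degrade_point(ea, Kt):
    """Sketch of the Kt degradation of one strain-life point: form the
    Neuber product, divide by Kt**2, and re-solve on the cyclic curve."""
    def solve_stress(target, f):
        lo, hi = 0.0, 1e5               # bisect increasing f for f(s) = target
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    S = solve_stress(ea, ro_strain)                       # stress at ea
    np_eff = S * ea / Kt ** 2                             # effective Neuber product
    S2 = solve_stress(np_eff, lambda s: s * ro_strain(s)) # s*eps(s) is increasing
    return np_eff / S2, S2                                # degraded (ea', S')
```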
For the Brown-Miller and Maximum Shear Strain algorithms, the same ratio of ea/ea’ and S/S’ are used to correct
the algorithms’ life curves (see section 14) for the surface finish factor.
Note that the above procedure is not applicable to the modified strain-life equation used for the Smith-Watson-
Topper mean-stress correction and therefore surface finish effects are not supported with that mean-stress
correction in conjunction with elastic-plastic FEA results. fe-safe issues a validation error in that case. The Walker
mean-stress correction with exponents γ=0.5 (section 8.5.6) is similar to Smith-Watson-Topper and supports
surface finish effects. Notice however that the equations are not exactly equivalent (see Section 14.4).
Figure 15.5-2 shows an example of the calculated ratios for Manten at a Kt of 1.2. Above lives of 1e10 there is no
plasticity, so Kt is applied as a factor directly to the stresses and strains. As the lives get shorter and the plasticity
becomes more significant, Kt has an increasing effect on the strains and a diminishing effect on the stresses. At
lives close to one repeat, the effect on the strain has increased to 1.36 and that on the stresses has reduced to
1.06.
[Plot of the ratios ea/ea' (upper curve) and S/S' (lower curve) against Nf, falling from about 1.36 towards 1]
Figure 15.5-2 : Effect of Kt on stress (lower curve) and strain (upper curve).
For a particular analysis, diagnostics can be exported displaying the original life curves, modified life curves and the
relationship between the two. See section 15.7 for more information.
[Plot of the effective strain-life reduction, de (uE)]
This table can be exported using the diagnostics tools. See section 15.7 for more information.
The User-defined mean-stress correction modifies the strain amplitude by a factor extracted from the User-defined
mean-stress curve. This is simulated in the elastic-plastic analysis by iterating until the stress factor for the Kt, the
correction to the strain amplitude for the mean stress, and the strain amplitude all stabilise for the evaluated life.
15.7 Diagnostics
Two sets of diagnostics specific to elastic-plastic analysis with a surface finish effect are provided. Each is
controlled from the Exports and Outputs dialogue. This dialogue is obtained by selecting Exports ... from the
Fatigue from FEA dialogue.
Selecting the Export material diagnostics? checkbox will turn both sets of diagnostics on. The diagnostics apply to
the items (nodes or elements) specified in the List of Items text field (see section 22 for a more in-depth description
of this field).
The first diagnostics are written to the .log file (See section 22.3.2 for more information). For each diagnosed node
a table is written as below.
Temperature : 0.00
Kt : 1.20
CAEL amp. : 141.48
Algorithm : NormalStrain
NOTES: Morrow column show 1e6*(SNf)^b with bm or ms correction (It may not be used)
S scaler column indicates how stress @ Kt=1 and actual Kt compare
Nf Life in repeats.
ea@Kt=1 (ea) The strain amplitude for the given life evaluated from the life equations. (See figure
15.6-1)
ea@kt (ea’) The degraded strain amplitude for the specified Kt and life. (See figure 15.6-1)
Ratio The strain ratio ea/ea’. (See figure 15.6-2)
Morrow The effective reduction in the strain-life curve (in uE) for each tensile MPa of mean
stress at the specified life. (See figure 15.6-3)
S scaler The stress ratio (S/S’) for the given life. (See figure 15.6-2)
The second set of diagnostics is the plottable files (see section 22 for more information). For each diagnosed
node a plot file is created. If the plot file is opened for a particular node after the analysis is completed (using the
File >> Data Files >> Open Data File ... option) it will contain 3 data channels as shown in Figure 15.7-1.
Figure 15.7-1
In this case the diagnostics file was from element 340 node 3. The first channel contains Life information and the
second and third channels contain strain amplitude information for the original and degraded strain-life curves. The
Life and ea channels can be cross-plotted to create a strain-life curve plot as in Figure 15.7-2.
Note: If a Kt of 1 is specified no diagnostics will be displayed for a node.
Figure 15.7-2
[Figure 16.1-1: S-N curves for the weld classifications, stress range against endurance]
See the Fatigue Theory Reference manual for a discussion of the fatigue analysis of welded joints.
The curves have a constant slope between 10^5 and 10^7 cycles, where the stress-life relationship is defined by the
equation (for the mean life):
N = K0/S^m
where
N is the endurance in cycles;
S is the nominal stress range;
K0 is the constant for a particular weld classification;
m is the slope of the S-N curve on log-log axes. For most curves, m has a value of 3, from the Paris
crack growth law.
The curve between 10^5 and 10^7 cycles is defined from experimental test data. The curves were extended for longer
lives using theoretical calculation. The life to crack initiation for welded joints is a small part of the total life, as most
welded joints contain cracks or crack-like defects produced during manufacture. The life is therefore dominated by
the propagation of these cracks. Although the defect may initially be small and therefore not affected by small
cycles, the larger cycles present in the applied loading may propagate the defect, and as the defect size increases it
will be propagated by smaller cycles. The concept of an endurance limit therefore is not appropriate.
The result is that if all the cycles fall below the stress level for 10^7 cycles, the stress history can be considered non-
damaging. If larger cycles exist, all cycles must be considered, and the S-N curve is extended indefinitely, with the
value of m increased to (m + 2). For very large stress ranges, the curve is extended back at the slope of m until
static strength limitations apply.
NOTE: In fe-safe, for N > 10^7 cycles the value of m is increased to (m + 2), creating a ‘flatter’ curve. For N < 10^5 cycles
the curves are linearly extrapolated (in log-log terms) back to 1 cycle.
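The extended S-N evaluation above can be sketched as a single helper. K0 here is a placeholder value, not a weld-class constant from BS5400; the slope change at the 10^7-cycle stress range follows the note above.

```python
def weld_endurance(S, K0, m=3.0):
    """Sketch of the extended weld S-N relationship: N = K0/S^m down to the
    stress range at 1e7 cycles, then the shallower slope (m + 2) beyond.
    K0 is an illustrative placeholder for the weld-class constant."""
    S0 = (K0 / 1e7) ** (1.0 / m)       # stress range at N = 1e7 cycles
    if S >= S0:
        return K0 / S ** m             # primary slope m
    return 1e7 * (S0 / S) ** (m + 2)   # extended slope (m + 2) beyond 1e7
```

The two branches meet continuously at S0, where both expressions give N = 10^7.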
16.2 Operation
The dialogue box is displayed by double-clicking Algorithm in the Fatigue from FEA dialogue box, and then
selecting BS5400 Weld Life (CP).
Figure 16.2-1
The user must define the weld class. This defines the S-N curve to be used for the analysis of the model, or the
element group. The S-N curves are shown in Figure 16.1-1. The user should be familiar with the weld classification
selection procedure discussed in BS5400/BS7608 available from BSI.
Note that a different weld class can be defined for each element group.
The user must also select the design criteria. This parameter defines the probability of failure, in terms of the
number of standard deviations below the mean life. A value of zero produces a mean life (50% probability)
calculation. Example design criteria are:
To enable FOS calculations check the Perform Factor of Strength (FOS) Calculations box. This will enable the
target life field to be set. The target life can be a finite life specified in the chosen life units, or ‘infinite’ life based on
the endurance limit for the material.
The factor of strength (FOS) is the factor which, when applied to either the loading, or to the elastic stresses in the
finite element model, will produce the required target life at the node. The FOS is calculated at each node, and the
results written as an additional value to the output file. The FOS values can be plotted as contour plots.
The limits of the FOS values can be configured in the Band Definitions for FOS Calculations region of the Analysis
Options dialogue, Safety Factors tab.
This procedure applies to all analyses, except stress-based analysis using the Buch mean stress correction.
Note: The critical plane is recalculated for each new factor at the node. If a constant critical plane is assumed, the
FOS may be unrealistically high. For example, application of the FOS to the mean stress on another plane may
cause this stress to exceed the material tensile strength. To avoid this type of problem, the critical plane is
constantly recalculated.
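The search described above can be sketched as an iteration on a scale factor. Here `life_fn` is a hypothetical callback that stands in for the full fatigue calculation (including the critical-plane recalculation at each factor); the bracket and iteration count are assumptions.

```python
def factor_of_strength(life_fn, target_life, lo=0.01, hi=100.0, iters=60):
    """Sketch of an FOS search: find the scale factor on the elastic stresses
    that yields the target life. Assumes life_fn(scale) is decreasing in
    scale and that the target life is bracketed by [lo, hi]."""
    for _ in range(iters):
        mid = (lo * hi) ** 0.5          # geometric midpoint: lives span decades
        if life_fn(mid) > target_life:
            lo = mid                    # life still too long: can scale up
        else:
            hi = mid
    return (lo * hi) ** 0.5
```

For a Basquin-like relationship life = 1e6/scale^3, a target of 1e5 repeats gives a factor of 10^(1/3), about 2.154.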
17.2.1 Modification of Factor of Strength (FOS) Calculation when using Buch Mean Stress Correction
When a Factor of Strength (FOS) analysis is performed using Buch Mean Stress Correction, the FOS is modified as
described below. This analysis is effectively a hybrid of a FOS calculation and an FRF calculation.
FOS values are calculated using both the Goodman and Buch mean stress corrections. The Goodman calculation
follows the procedure described above, i.e. the stress history is repeatedly re-scaled and the life recalculated.
The FOS value may also be calculated from the Buch diagram. Referring to Figure 17.2-3, the FOS is the ratio A/B.
For variable amplitude stress histories, the value of the FOS is calculated for the cycle that gives the lowest value of
this ratio.
The lowest value of the FOS from the Goodman and Buch calculations is written to the output file.
[Goodman diagram of Sa against Sm, showing the endurance limit line and the distances AH/BH (horizontal),
AV/BV (vertical) and AR/BR (radial) from the infinite life line to a cycle]
Figure 17.3-1
The ratio of the distance to the infinite life line and the distance to the cycle (Sa, Sm) is calculated for each
extracted cycle, to produce four reserve factors, as follows:
Horizontal FRF: FRFH = AH/BH
Vertical FRF: FRFV = AV/BV
Radial FRF: FRFR = AR/BR
Worst FRF: the worst of the above 3 factors.
The following rules are followed when calculating the Horizontal FRF in fe-safe:
1. The Worst Horizontal FRF is taken to be the lowest value from any of the extracted cycles, including
negative values.
2. When the mean stress is to the left of the reference origin axis, fe-safe uses the first line segment with
a) a point to the left of the origin
b) a positive gradient
c) amplitudes that bound the cycle’s amplitude.
3. When the mean stress is to the right of the reference origin axis, fe-safe uses the first line segment with
a) a point to the right of the origin
b) a negative gradient
c) amplitudes that bound the cycle’s amplitude.
The FRF infinite life curve is defined using the same format rules as the user defined MSC, (see Appendix E). To
convert the factors in the envelope to amplitudes, multiply the factors by the amplitude that would cause failure at
the target life. The target life is specified in the Factor of Strength dialogue when an analysis using the FRF option
is selected. The target life is substituted into the life equation for the analysis type to calculate the amplitude that
would cause failure at that target life.
At each node, the worst-case reserve factor is calculated, for each of the four FRF types (horizontal, vertical, radial
and the worst of the 3). The limitations of this analysis are discussed in section 17.5.
A generalisation of the Goodman diagram radial FRF is also provided for other mean stress corrections that cannot
be represented in this form, for example a Walker correction, which depends on the R-ratio. There is no straightforward
geometrical interpretation on a Goodman-type diagram, but an FRF scaling factor can still be mathematically
defined for general mean stress corrections. Consider first a general mean stress correction function which
converts an amplitude and mean to an equivalent zero-mean-stress amplitude:
Sa′ = f(Sa, Sm)
We may also have a residual stress R which affects the mean, so more generally we have
Sa′ = f(Sa, Sm + R)
The generalisation of the radial FRF for a general mean stress correction function is to seek the scaling factor ρ
which shifts the scaled, corrected stress to the target endurance limit, i.e.
f(ρSa, ρSm + R) = σ−1
Note that the residual stress is not scaled, which can make the equation lack an analytical solution for some
functions (e.g. Walker). However even then, solution by a numerical solver (e.g. Newton-Raphson) is
straightforward.
Note that the algebraic solution for the Goodman MSC (with R = 0) is
ρ = (Sa/σ−1 + Sm/U)^−1
This is exactly the same as the geometric ratio of the radial distance to the Goodman line, but the geometric
equivalent definition of radial FRF is tied to this particular form of mean stress correction, whereas the general
definition above applies to any form of mean stress correction.
With residual stress R this generalises to
ρ = (U − R)·σ−1 / (U·Sa + σ−1·Sm)
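As an illustration, the closed-form Goodman scaling factor above can be evaluated directly. This is a minimal sketch; the function and variable names are illustrative and are not part of fe-safe:

```python
def goodman_radial_frf(sa, sm, sigma_e, uts, residual=0.0):
    """Radial FRF scaling factor for the Goodman MSC.

    sa, sm   : cycle stress amplitude Sa and mean stress Sm
    sigma_e  : fully reversed amplitude at the target life (sigma_-1)
    uts      : ultimate tensile strength U
    residual : residual stress R (note: R is not scaled by rho)
    """
    # rho = (U - R) * sigma_-1 / (U*Sa + sigma_-1*Sm)
    return (uts - residual) * sigma_e / (uts * sa + sigma_e * sm)

# With R = 0 this reduces to rho = (Sa/sigma_-1 + Sm/U)**-1:
rho = goodman_radial_frf(100.0, 50.0, 200.0, 600.0)
assert abs(rho - 1.0 / (100.0 / 200.0 + 50.0 / 600.0)) < 1e-12
```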
The Morrow and Morrow-B MSCs have the same form as Goodman, but the UTS U is replaced with 𝜎𝑓′ or 𝜎𝑓 .
For Smith-Watson-Topper with residual stress R we have to solve
(ρ(Sa + Sm) + R)·ρSa = σ−1²
which is quadratic in ρ. We take the minimum positive root as the solution. If there is no positive solution then the
FRF is set to zero and a warning is issued; this will indicate purely compressive stress states.
For Walker with zero residual stress we have
ρ = σ−1 / ((Sa + Sm)^(1−γ) · Sa^γ)
With residual stress R we have to solve
(ρ(Sa + Sm) + R)^(1−γ) · (ρSa)^γ = σ−1
There is no analytical solution, but it is straightforward to use Newton-Raphson to solve for 𝜌.
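The Newton-Raphson solution for the Walker case can be sketched as follows. This is an illustration of the iteration, not fe-safe's internal implementation; the function name, tolerances and starting guess are assumptions:

```python
def walker_radial_frf(sa, sm, sigma_e, gamma, residual=0.0,
                      rho0=1.0, tol=1e-10, max_iter=50):
    """Solve (rho*(Sa+Sm) + R)**(1-gamma) * (rho*Sa)**gamma = sigma_-1 for rho.

    Assumes tensile stresses and a positive starting guess, so that the
    fractional powers remain real during the iteration.
    """
    rho = rho0
    for _ in range(max_iter):
        smax = rho * (sa + sm) + residual          # scaled maximum stress
        f = smax ** (1.0 - gamma) * (rho * sa) ** gamma - sigma_e
        # derivative of the left-hand side with respect to rho
        df = ((1.0 - gamma) * (sa + sm) * smax ** (-gamma) * (rho * sa) ** gamma
              + gamma * sa * smax ** (1.0 - gamma) * (rho * sa) ** (gamma - 1.0))
        step = f / df
        rho -= step
        if abs(step) < tol * max(abs(rho), 1.0):
            return rho
    raise RuntimeError("Newton-Raphson failed to converge")
```

With R = 0 the iteration reproduces the closed-form Walker solution given above.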
The generalised FRF is only computed for the radial FRF; the vertical and horizontal contours will be zero filled
when using SWT, Walker or Morrow mean stress corrections.
Finally note that further generalisations of the FRF concept are provided by the following infinite life algorithms:
Dang Van
Prismatic Hull
Susmel Lazzarin
See sections 14.19-14.21 for details.
The analysis is selected from the drop-down menu associated with the user-defined algorithm in the Group
Algorithm Selection dialogue box.
To enable Failure Rate calculations check the box marked Perform Failure Rate for Target Lives Calculations.
The failure rate for target lives calculates the % probability of failure at the specified lives (user-defined life units).
For each of the list of target lives a contour plot will be created indicating the % probability of failure at that life. This
percentage can either be the % of components that will fail (Failure Rate) or the % that will survive (Reliability Rate)
depending upon whether or not the check box Calculate Reliability Rate instead of Failure Rate is checked.
The failure rates are calculated as follows:
(i) The assumption is made that for failure rate analysis to be useful the component must fail in the elastic
area of the strain-life curve.
(ii) A normal or Gaussian distribution is applied to the variation in loading. The % standard deviation of loading
is defined, representing the variability of the value of the load amplitude relative to the amplitude defined.
For non-constant amplitude loading the code derives an equivalent constant amplitude loading.
(iii) A Weibull distribution is applied to the material strength. This is defined by three parameters:
o The Weibull mean:
This is the strength at which the life curve exceeds the target life. This value is derived from the
material data and the specified target life. The Weibull distribution is centred on this value.
o The Weibull slope, Bf :
This is a shape parameter that varies the probability density.
The value of Bf is defined in the material database using the weibull : Slope BF parameter, (see section
8).
Examples of the effect of Bf on the shape of the distribution are shown in Figure 17.4-2.
[Figure: Weibull probability density plotted against normalised strength, for slope values bf = 1.1, 1.5, 2, 2.5 and 3.2]
Figure 17.4-2
o The Weibull minimum, Qmuf:
This parameter defines the lower edge of the strength distribution:
as the lower edge of the distribution tends towards zero amplitude, Qmuf tends towards zero;
as the distribution gets narrower, Qmuf tends towards one.
For convenience, the minimum parameter is expressed as a ratio of the fatigue strength (i.e. it is
normalised by dividing it by the mean strength at the target life).
The value of Qmuf is defined in the material database using the weibull : Min QMUF parameter, (see
section 8).
(iv) The overlap area of the normal distribution of loading and the Weibull distribution of fatigue strength is
calculated for each of the target lives. This represents the probability of failure, as illustrated in Figure
17.4-3, below.
Figure 17.4-3
Note that in Figure 17.4-3, for illustrative purposes, the two distributions are plotted on a linear scale, whilst the
strain axis is shown plotted on a logarithmic scale.
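Step (iv) can be illustrated numerically. The sketch below assumes a three-parameter Weibull strength distribution whose location is Qmuf times the mean strength, and integrates the product of the load probability density and the strength cumulative distribution; fe-safe's exact numerical procedure may differ, and all names are illustrative:

```python
import math
from statistics import NormalDist

def failure_probability(load_mean, load_sd_pct, str_mean, bf, qmuf, n=2000):
    """P(strength <= load): overlap of the load and strength distributions.

    load_sd_pct : standard deviation of loading, as a % of load_mean
    str_mean    : Weibull mean (strength at the target life)
    bf, qmuf    : Weibull slope and normalised minimum
    """
    sd = load_mean * load_sd_pct / 100.0
    loc = qmuf * str_mean                        # lower edge of the Weibull
    scale = (str_mean - loc) / math.gamma(1.0 + 1.0 / bf)
    load = NormalDist(load_mean, sd)

    def strength_cdf(x):
        return 0.0 if x <= loc else 1.0 - math.exp(-((x - loc) / scale) ** bf)

    # trapezoidal integration of f_load(x) * F_strength(x) over +-6 sigma
    lo, hi = load_mean - 6.0 * sd, load_mean + 6.0 * sd
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * load.pdf(x) * strength_cdf(x)
    return total * h
```

When the strength distribution lies far above the loading distribution the overlap, and hence the failure probability, tends to zero; when it lies far below, the failure probability tends to one.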
Figure 17.4-4 illustrates the effect of varying Qmuf on the probability of failure (at lives of 1e6, 1e7 and 1e8), for a
component with a life of 1e7.
[Figure: probability of failure (%) plotted against Qmuf, for target lives nf = 1e6, 1e7 and 1e8]
Figure 17.4-4
Figure 17.5-1
The most severe cycle, i.e. the one that comes closest to the Goodman line, is plotted on the Goodman diagram. A
line is drawn through this point (either vertically, or from the origin). This indicates how much the stress could be
increased before it touches the Goodman line. If any cycle crosses the Goodman line the component would not
have an infinite life. As all the other cycles in the signal are smaller, they will still be below the endurance limit and
contribute no damage. Therefore, the ratio A/B (shown in Figure 17.5-1) indicates the factor of strength.
When designing for finite life, the same method cannot be used (except for constant amplitude loading). Consider
the case below in Figure 17.5-2, where there is 1 occurrence of the largest cycle, and (say) 100 occurrences of the
next smallest cycle, shown grey. The target life is (say) 1e5 repeats of the signal.
Figure 17.5-2
Under the applied loading, the smaller (grey) cycles would be assumed to be non-damaging. The Goodman
analysis would then use the ratio A/B to estimate the factor of strength (FRF). However, scaling the applied loading
by this FRF would now make the smaller cycles damaging. As there are many more of these, the FRF would be
greatly overestimated, and the analysis would be unsafe.
The same limitations apply to the use of Gerber diagrams to calculate FRF’s.
For these reasons, it is strongly recommended that Factors of Strength (FOS) are calculated, instead of FRF’s.
FOS values are calculated as described in section 17.2, and summarised below:
For a FOS calculation, fe-safe calculates the fatigue life. It then applies a scale factor to the elastic stresses in the
stress history, and re-calculates the plasticity. The fatigue life is re-calculated. This process is repeated until a scale
factor is found which, when applied to the stresses, gives a calculated life equal to the target life. This scale factor is
the FOS.
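The iterative FOS search described above may be sketched as follows, with a hypothetical life_of callable standing in for the full fe-safe re-analysis (including the plasticity correction); the bisection scheme and tolerances are illustrative assumptions:

```python
def factor_of_strength(life_of, target_life, lo=0.01, hi=100.0, tol=1e-4):
    """Find the stress scale factor giving a calculated life equal to the
    target life.

    life_of : callable returning the fatigue life for a given scale factor
              applied to the elastic stresses (a stand-in for the full
              fe-safe re-analysis)
    """
    # fatigue life decreases as the stress scale factor increases,
    # so bisection on the scale factor is sufficient
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if life_of(mid) > target_life:
            lo = mid          # life too long: stresses can be scaled up
        else:
            hi = mid          # life too short: stresses must be scaled down
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# toy Basquin-type life model: N = 1e12 * (scale * 100 MPa)**-3,
# so a scale factor of 1.0 gives exactly the target life of 1e6
fos = factor_of_strength(lambda s: 1e12 * (s * 100.0) ** -3.0, 1e6)
```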
The FOS analysis is the method recommended in fe-safe because it is equally applicable to both complex loading
and constant amplitude loading, and to both finite and infinite life design.
Note: The comparison between FRFs calculated using the Goodman technique and the more rigorous fe-safe FOS
method will only agree for infinite life design, and only for constant amplitude loading. For other cases the results
will not agree, for the reasons outlined above. Note also that fe-safe reduces the endurance limit when the largest
cycle in the stress history becomes damaging.
Copyright © 2023 Dassault Systemes Simulia Corp. Volume 1 17-9
Vol. 1 Section 17 Issue: 24.1 Date: 17.08.23
Design life fatigue methods
18.2.4 Loading
The loading may consist of:
elastic FEA ‘unit load’ stresses with time histories of loading: ‘scale and combine.’
elastic or elastic-plastic FEA stresses as a dataset sequence
elastic-plastic FEA stresses and mechanical strains as a dataset sequence
In all three cases the loading is added using the methods outlined in section 13. Mechanical strains exclude
thermal strain.
The definition of fatigue loading for varying temperature, as discussed in section 13, is not required for
conventional high-temperature fatigue.
Note: In the conventional high-temperature fatigue analysis described here, a single adjustment is made at each
node, to the maximum temperature at that node.
When temperature datasets are read from the source model, the following is applied:
o If temperature is not set in the loading block, fe-safe assumes the worst-case scenario, where the
maximum temperature from all temperature datasets open in fe-safe is determined and applied.
o If temperature is set in the loading block, fe-safe applies the block temperature.
o For multiple-block loading, the transition block (if enabled) will use the maximum temperature from
all blocks in the loading definition.
18.2.6 Analysis
The analysis proceeds as a normal fe-safe analysis.
Conventional high-temperature fatigue will not be carried out if the option Disable temperature-based analysis,
on the General tab of the FEA Fatigue >> Analysis Options dialogue, is checked.
18.3.1 DTMF
The DTMF technology can be used for localized or widespread plasticity and creep. It includes the effect of
temperatures on fatigue properties and can include:
the impact of the phasing between the temperature and stress/strain history;
creep and its interaction with fatigue; and
oxidation and its interaction with fatigue.
It requires an elastic-plastic or elastic-viscoplastic FEA analysis. The FEA data needed for DTMF are:
stresses;
strains (elastic and plastic); and
temperatures.
An example of the loading definition is as follows:
The damage calculation equations are defined in the 3DEXPERIENCE documentation https://help.3ds.com.
Choose:
3DEXPERIENCE
Select the release of interest. DTMF is documented from R2023x onwards.
Select V+R Simulation >> Structures >> Mechanical Scenario Creation >> Durability >>
High-Temperature Fatigue Workflows >> About DTMF
The required material properties are listed in section 8.5.17 of this guide.
An example material database, DTMF.dbase, is included in the fe-safe installation. It contains just a single material
to be used for getting started with DTMF. The values are not representative of any known material.
When you assign the DTMF algorithm to a group, you will also be able to:
Override the crack failure length, af, defined in the material assigned to the group
Turn off creep-fatigue interaction effects
Turn off oxidation-fatigue interaction effects
Turn off crack closure effects
19.1 Introduction
fe-safe can analyse loading defined by a Power Spectral Density diagram (PSD). The PSD is a description of the
loading in the frequency domain. See the Signal Processing Reference Manual for the theoretical background to the
PSD.
The analysis assumes that although the loading has been defined in the frequency domain, the component is
‘rigid’, i.e. the stresses in the component are linearly related to the magnitude of the applied load. The analysis
applies to a single PSD of loading.
fe-safe transforms the PSD into a Rainflow cycle histogram. The method generates cycle ranges, but does not
generate cycle mean values, so all cycles are at zero mean. Fatigue analysis from a cycle histogram is faster
than the analysis of the load history from which it was obtained, although this difference may only be noticeable
on larger FEA models. Because the sequence of events is not retained in the cycle histogram, a strain-life analysis
will be less precise (see the Fatigue Theory Reference Manual for a description of strain-life analysis from cycle
histograms). Because no cycle mean values are generated, the method is most suited to the analysis of welded
joints, where the effects of mean stress are not significant.
19.2 Background
fe-safe transforms the PSD into a Rainflow cycle histogram. The method generates cycle ranges, but does not
generate cycle mean values. All cycles are therefore at zero mean.
The Rainflow cycle histogram is re-formatted as an LDF file. The fe-safe analysis then proceeds as for any other
LDF file (see section 13 for a description of the LDF file format).
The Fatigue Theory Reference Manual describes the theoretical background to the use of PSD’s to define fatigue
loading, and gives the method for transforming a PSD into Rainflow cycles. The method was derived for loading
which is a Gaussian process, and which is stationary (i.e. its statistical properties do not vary with time). See the
Signal Processing Reference Manual for a description of Gaussian processes. The method has been shown to be
quite tolerant, in that acceptable fatigue lives can often be obtained for processes which are not strictly Gaussian
and not stationary. However, the user should always validate the analysis. The validation method is described in
section 19.4.
19.3 Operation
The PSD must be in one of the file formats supported by fe-safe, and must consist of values of (load)2/Hz, at equal
intervals of frequency (Hz), with the first value at zero Hz. The interval between frequency values must be defined.
The PSD can be plotted and listed (see section 7).
Figure 19.3-1
The dialogue requests that the user define the time (in seconds) to be represented by the cycle histogram. The
output file is a range-mean cycle histogram (.cyh).
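Frequency-domain cycle-counting methods of this kind are built on the spectral moments of the PSD. The sketch below computes the moments m0, m1, m2 and m4 from a PSD sampled as described above; it illustrates the quantities involved rather than fe-safe's actual transform, which is given in the Fatigue Theory Reference Manual:

```python
def spectral_moments(psd, df):
    """Spectral moments m0, m1, m2 and m4 of a PSD sampled at equal
    frequency intervals df, starting at 0 Hz (values in (load)^2/Hz)."""
    moments = {}
    for n in (0, 1, 2, 4):
        # trapezoidal integration of f**n * G(f)
        vals = [(i * df) ** n * g for i, g in enumerate(psd)]
        moments[n] = df * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return moments

# toy 5-point PSD at 1 Hz intervals, starting at 0 Hz
m = spectral_moments([0.0, 2.0, 4.0, 2.0, 0.0], 1.0)
rms = m[0] ** 0.5                                  # RMS of the underlying signal
irregularity = (m[2] ** 2 / (m[0] * m[4])) ** 0.5  # irregularity factor
```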
19.3.2 Converting the Rainflow cycle histogram to a loading definition (LDF) file
The cycle histogram is transformed into an LDF file using the Amplitude >> Convert Rainflow to LDF for FEA
Fatigue menu option.
Figure 19.3-2
The user may define whether to take the upper edge of each range bin as the load range, or to use the centre of
each range bin, and must enter the number of the FEA stress data set to be analysed.
The LDF file-name is auto-generated, with extension .ldf. The user may wish to shorten or change the filename.
The file is self-documented, and contains one block for each non-zero bin in the histogram. An example, showing
the header and the first three blocks, is given below.
The LDF file can then be used as the load definition in fe-safe - see section 5. See section 13 for a description of
the LDF file format.
For the analysis of non-welded components, the user should also check the importance of mean stresses by
analysing the load history with and without a mean stress correction. This could be done using (for example) the
S-N Curve Analysis from Time Histories function (see section 11), with a suitably scaled S-N curve.
21.2 Terminology
This section defines some of the terms used in the fe-safe/Rotate module.
Angle of symmetry, S :
the angle of each symmetrical segment.
Master segment :
the segment being rotated.
Master segment angle, M :
the angle of the master segment, (equal to the angle of symmetry, S).
Solution :
a set of static stress results produced (for the whole model) by the FE package, for a particular orientation of
the model.
Rotated solution :
a solution produced as if the model had been rotated by an angle equal to the rotated solution angle, R.
Figure 21.2-1
21.3 Method
Consider a component that exhibits axial symmetry. The component can be divided into a number of axially
symmetrical segments. By definition, these segments are of equal shape and size, but differ in their orientation
about an axis. To take advantage of the axial symmetry of the segments, the elements and nodes in each segment
must be identical - see section 21.4.
Consider a simple two-dimensional model as shown in Figure 21.3-1, below:
Figure 21.3-1
The model has four modes of axial symmetry - i.e. the model has four segments of equal shape and size. Assume
that the model has been prepared with identical elements and nodes in each segment.
One of the segments is defined as the master segment, (see the guide to terminology in section 21.2). To
distinguish the master segment from the rest of the model it must be allocated a unique named element group, or
groups, in the FE solution – see sections 21.4.3 and 21.4.4, below.
If any elements in the model do not form part of the axially symmetric region, then these must be excluded during
the fe-safe/Rotate read process by defining one or more element groups that contain the elements to be excluded
– see section 21.4.5, below.
The model is now loaded and constrained for a particular axial orientation. An FE solution of the static stresses
under these conditions is produced, and written to an FE results file.
In fe-safe, fe-safe/Rotate is used to import the FE stress results for the model. fe-safe/Rotate produces a sequence
of additional stress results as if the model had been rotated through a sequence of angles.
The first fatigue data set uses stress data from the elements in the master segment.
To produce the additional fatigue data sets, fe-safe/Rotate first has to determine associated elements from each of
the other segments for every element in the master segment. In this example, where there are four axially
symmetrical segments, fe-safe/Rotate finds three elements associated with each element in the master segment.
The first associated element (the element from segment #2) is the equivalent element that lies 90° (360°/4)
clockwise from the element in the master segment. Similarly, the second and third associated elements, from
segments #3 and #4, are the equivalent elements that lie at 180° and 270° from the element in the master
segment.
When fe-safe/Rotate searches for an associated element it accepts as the closest match the element whose
centroid is nearest to the target location. If the centroid of the matched element is further away from the target
location than a specified tolerance, then fe-safe/Rotate displays a warning, for example:
where n is the number of matched elements that are out of tolerance, and x is the specified tolerance.
By default, the tolerance, T, is calculated by fe-safe/Rotate as a function of the number of elements in the master
segment, NE, and the number of segments, NS, where:
T = 10 / ( NE × NS )
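The nearest-centroid matching described above may be sketched as follows for a two-dimensional model rotated about the z axis. The data structures and function name are illustrative, not fe-safe/Rotate internals:

```python
import math

def match_elements(master, others, angle_deg, tol):
    """For each master-segment element centroid (x, y), find the element
    in another segment whose centroid is nearest to the master centroid
    rotated by the segment angle; count matches outside the tolerance."""
    a = math.radians(angle_deg)
    matches, out_of_tol = [], 0
    for eid, (x, y) in master.items():
        # target: master centroid rotated about the axis of symmetry
        tx = x * math.cos(a) - y * math.sin(a)
        ty = x * math.sin(a) + y * math.cos(a)
        best = min(others,
                   key=lambda e: math.hypot(others[e][0] - tx, others[e][1] - ty))
        if math.hypot(others[best][0] - tx, others[best][1] - ty) > tol:
            out_of_tol += 1          # would trigger the warning above
        matches.append((eid, best))
    return matches, out_of_tol
```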
The name of the data set describes it in an abbreviated form. For example, the following data set name:
fe-safe/Rotate automatically produces a load definition (LDF) file that is used by fe-safe when performing the
fatigue analysis. The LDF file comprises a loading block containing a data set sequence. The data set sequence lists
the stress data sets that define the variation in load over a sequence of angles.
For the example in Figure 21.3-1, the data set sequence would list four fatigue data sets, DS1 to DS4, describing a
complete rotation in four steps: 0° >> 90° >> 180° >> 270°.
The LDF file can be modified as necessary, for example to incorporate scaling information. However, it is important
that the order of data sets and blocks is preserved.
If the desired rotation increment (i.e. the angle between fatigue data sets - see 21.2) is smaller than the angle of
symmetry, then fe-safe/Rotate can be instructed to consider more than one solution.
Each solution takes advantage of the axial symmetry of the component, requiring a single static FE analysis to
define the loading for a full revolution. The FE results for the first solution are prepared by considering the model
in its original orientation. The next solution is prepared as if the model has been rotated through the rotated
solution angle, R.
The master segment angle, M, must be an integral multiple of the rotated solution angle, R, i.e.
R × i = M, where i is an integer.
Consider a case similar to the model in Figure 21.3-1 but this time with the addition of three rotated solutions, as
in Figure 21.3-2, below:
Figure 21.3-2
Again the model has four modes of axial symmetry, but we now need to consider four separate FE stress solutions.
Performing a stress analysis with the model in its original orientation produces the first solution. The second
solution is produced by loading and constraining the model as if it had been rotated through 22.5° (90° / 4).
Similarly the third and fourth solutions are produced as if the model had been rotated through 45° and 67.5°,
respectively.
In this example, the fatigue data sets are derived from the FE stress solutions and written to fatigue data sets in
the following order:
For fatigue   Stresses are       ...prepared as if   Tensors are    ...and are    ...then written     The equivalent
data set...   read from FE       the model had       read from      rotated       to associated       model rotation
              stress solution... been rotated        elements in    through...    elements in         angle is...
                                 through...          segment...                   segment...
1             1                  0°                  1              0°            1                   0.0°
2             1                  0°                  2              90°           1                   90.0°
3             1                  0°                  3              180°          1                   180.0°
4             1                  0°                  4              270°          1                   270.0°
5             2                  22.5°               1              0°            1                   22.5°
9             3                  45°                 1              0°            1                   45.0°
13            4                  67.5°               1              0°            1                   67.5°
The LDF file, automatically generated by fe-safe/Rotate, reconstructs the fatigue data sets in the correct sequence
to simulate rotation of the model. The data sets constitute a single loading block, with the following sequence:
BLOCK n = 1
ds = 1
ds = 5
ds = 9
ds = 13
ds = 2
ds = 6
ds = 10
ds = 14
ds = 3
ds = 7
ds = 11
ds = 15
ds = 4
ds = 8
ds = 12
ds = 16
END
The advantages of using fe-safe/Rotate can be clearly seen in this example, where sixteen fatigue data sets have
been created in the FED file, at equivalent rotational intervals of 22.5°, from just four sets of FE stress data.
- since the radial symmetry of each segment and the half-model mirror symmetry are not exclusive, each half-
model segment must be symmetrical about its own radial centre-line;
- the radial boundaries of half-model segments must not overlap;
- for each matched element, fe-safe/Rotate also performs a node match, since the node order is likely to
change because of the mirroring process.
To create a half model with identical segments (see the example in Figure 21.4.2-1):
- create the geometry for a half-segment and mesh it (the light grey area in Figure 21.4.2-1);
- create a mirror copy of the meshed half-segment (the dark grey area in Figure 21.4.2-1);
- the two mirror-image half-segments constitute one full segment;
- if the model has an even number of segments, duplicate the full segment to create the remainder of the half-
model;
- if the model has an odd number of segments, duplicate the half-segments (mirrored and unmirrored, as
appropriate) to create the remainder of the half-model.
Figure 21.4.2-1
fe-safe uses the term “group” to describe either a list of element numbers (i.e. an ‘element group’) or a list of node
numbers (i.e. a ‘node group’).
fe-safe/Rotate supports only element-nodal data. Therefore, in this context, we are concerned only with element
groups.
The semantics used to describe element groups differ in different FE packages – this is discussed in Appendix G.
Ansys
Ansys does not export element and node groups directly to the RST file. Therefore, groups are supported in Ansys
by the use of the material number.
Abaqus
In Abaqus, element groups are referred to as “Element Sets”.
*NODE FILE
COORD
Figure 21.5-1
The name of the FE results file is entered at the top of the dialogue. Clicking on the button labelled ‘ . . . ’ allows
the user to browse for a file.
The fe-safe/Rotate module currently supports Ansys RST results files (*.rst) and Abaqus FIL (*.fil) files
(binary and ASCII) containing element-nodal data.
The axis of rotational symmetry should be entered. The FE model must be axially symmetric about one of the
global Cartesian axes, i.e. the rotational axis of the FE model must coincide with one of the Cartesian axes. Models
whose axes are parallel to, but not coincident with, one of the global Cartesian axes are not supported.
The number of segments and the number of solutions in each segment should be entered. There must be at least
one set of FE stress results in the FE results file for each solution.
If there are more sets of stresses in the FE results file than the number of solutions entered, then fe-safe/Rotate
assumes that the additional sets apply to an additional load case. Therefore, the number of result sets must be an
integral multiple of the number of solutions. If not, then fe-safe/Rotate returns an error when it attempts to read
the model.
A user-defined warning tolerance can be entered. If the warning tolerance is left blank then fe-safe/Rotate
calculates a tolerance criterion automatically - see 21.3.
The master segment must be defined, as described in section 21.4.4.
Groups that should be excluded from the rotational region should be defined as described in section 21.4.5.
To append a load case to an existing model (loaded using fe-safe/Rotate), select the Append model to existing
rotational definition option. The appended model must have the same master segment definitions and axis of
rotation as the original model. Therefore, if the append option is selected, the file name control is enabled, but all
other controls in the dialogue are disabled.
The model is loaded by clicking on the OK button. Here there is the option to pre-scan the file in case not all
datasets are required. As fe-safe/Rotate loads the model, information about the file and the data that it contains is
written to the file:
<ProjectDir>\Model\reader.log.
This information is also displayed in the Message Log window.
When the model has finished loading, a summary of the open model appears in the Current FE Models window,
showing the loaded datasets and element group information.
fe-safe/Rotate also produces a load definition (LDF) file that is used by fe-safe when performing the fatigue
analysis. The loading details are automatically reconfigured to use the LDF file.
Skip Matched Elements [ROTATIONAL_SKIPMATCHEDELS ] - this option is used to improve the time taken to match
elements in the master segment to elements in the other segments. The default option is to skip matched
elements - in other words to not attempt to match elements if they have already been matched. This can
considerably reduce the number of matching operations, depending on the geometry, the number of segments,
and so on.
Forces Rotate [ROTATIONAL_FORCESROTATE ] - sets the method by which rotated solutions are applied. The default
method (forces rotate) assumes that the rotated solutions are prepared as if the model has rotated through a
specified angle.
The alternative method (model rotates) is not available in this release.
FED Diagnostics Level [ROTATIONAL_FEDDIAGLEVEL ] - this facility enables diagnostic values to be exported and
viewed in an FE viewer. The following options are available:
ROTATIONAL_FEDDIAGLEVEL may be set to a value from 0 to 6, each value selecting a different diagnostic function.
Rotational Diagnostics Level [ROTATIONAL_DIAGLEVEL ] - this facility enables diagnostic values to be exported to
the reader.log file. The following options are available:
Option   Switch value   Function
2        4              list node reference number, true node number and node coordinates
14       16384          write tables of rotated tensors for each element (per data set)
Table 21.6.2-3
The ROTATIONAL_DIAGLEVEL keyword can be used to set any combination of the above options by adding the
switch values for the required options. For example, to select options 10, 11 and 14, set
ROTATIONAL_DIAGLEVEL to 19456 (= 1024 + 2048 + 16384).
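The switch values follow a simple bitmask pattern: consistent with the values shown above, each option n has switch value 2**n, so the keyword value is the sum of the switch values for the required options:

```python
def diag_level(options):
    """Keyword value for a set of diagnostic options: the sum of the
    switch values, where option n has switch value 2**n."""
    return sum(2 ** n for n in options)

assert diag_level([10, 11, 14]) == 19456   # 1024 + 2048 + 16384
assert diag_level([2]) == 4                # the node-listing option
```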
Figure 22.1.1-1
fe-safe currently allows a maximum of 64 scalars to be exported, but note that some options correspond to multiple
scalars, e.g. vectors and per-block contours. If the resulting number of scalars exceeds this limit, then the selected
contours will be truncated.
Life or LOG10(Life)
This contour indicates the number of repeats of the loading definition which will cause a fatigue failure. However,
when editing the loading, it is possible to assign an interval, e.g. in hours or miles, which corresponds to one repeat,
so that life is then reported in hours or miles. This is achieved by double-clicking on Loading is equivalent to 1
Repeats under the Settings node of the loading definition. A dialogue appears in which a numerical scale and a
description of the units may be set.
In the event of an item experiencing zero damage, a particular value indicating infinite life will be reported, which is
configured using setting [job.infinite life value]. Reserved value -1 indicates that the material’s value of the
Constant Amplitude Endurance Limit should be reported. If none is defined, a hard-coded value of 1e15 is used.
By default, the contour of fatigue lives is in Log base 10, for the best post-processing. Linear versus logarithmic
contour output is controlled by selecting Analysis Options from the FEA Fatigue menu and toggling the option
Export logarithmic lives to results file on the Export tab (see section 5 for more details).
Note that logarithms are not used in the progress table and analysis summary which appear in the analysis log and
the Message Log window.
Damage
This contour indicates the fatigue damage that arises from a single repeat of the loading. Damage is defined such
that values exceeding unity indicate a fatigue failure. Since most fatigue algorithms accumulate damage according
to Miner’s rule, damage is then the reciprocal of the fatigue life (in repeats). Damage is calculated by multiplying the
damage for each block by its number of repeats (decremented when transitions are used) and summing them, i.e.
Miner’s rule is implicit. Thus the use of multiple loading blocks, or multiple repeats of a block, is not appropriate
for algorithms which do not use Miner’s rule.
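The damage summation may be sketched as follows (the transition-block decrement of the repeat counts is omitted for brevity, and the names are illustrative):

```python
def total_damage(block_damages, repeats):
    """Miner's rule: damage per repeat of the loading definition is the sum
    over blocks of (damage for one pass of the block) * (repeats of the
    block)."""
    return sum(d * n for d, n in zip(block_damages, repeats))

# two loading blocks with different per-pass damages and repeat counts
damage = total_damage([2e-7, 5e-8], [3, 10])
life_in_repeats = 1.0 / damage       # life is the reciprocal of damage
```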
Traffic Lights
This is a traffic-light style contour plot of the fatigue lives.
Upper and lower design life thresholds (in user-selected units) can be entered.
The values exported to this contour are:
0 (zero) – for a node or element that fails to achieve the design life;
0.5 – for a node or element that may or may not achieve the design life (further analysis is necessary);
1 – for a node or element that clearly exceeds the design life.
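The mapping may be sketched as follows; the treatment of a life exactly equal to a threshold is an assumption made for illustration:

```python
def traffic_light(life, lower, upper):
    """Map a fatigue life to the traffic-light contour values:
    0   -> fails to achieve the design life (below the lower threshold)
    0.5 -> may or may not achieve the design life (between thresholds)
    1   -> clearly exceeds the design life (above the upper threshold)"""
    if life < lower:
        return 0.0
    if life > upper:
        return 1.0
    return 0.5

assert traffic_light(1e5, 1e6, 1e7) == 0.0
assert traffic_light(5e6, 1e6, 1e7) == 0.5
assert traffic_light(1e8, 1e6, 1e7) == 1.0
```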
FRF Contours
These four parameters are selected by default. However these contours will only be exported in the case that an
infinite life Fatigue Reserve Factor (FRF) analysis is selected in configuration options (see sections 5 and 17 for
more details).
Maximum temperature
The maximum temperature used for each analysis item (and used in the calculation of the temperature-dependent
UTS and FS contours above) can also be output. The item temperature for each block is taken from the non-
varying temperature assigned in the loading, if one is set. Otherwise, FE temperature datasets are used. In the
latter case, the datasets used depend on the algorithm: for FKM and plug-in algorithms, the largest temperature in
datasets attached to the loading block is used, but for other algorithms, all temperature datasets loaded from the FE
solution are used and none need be attached to the block. The block temperature is then the largest of the
datasets used; the temperature reported is the maximum over all blocks.
Critical planes
For plane-based algorithms that calculate the fatigue life, it is possible to export a vector that is the normal of the
critical plane, scaled by (1/Nf).
For a single loading block configuration the two export options are identical and only one will be exported.
Depending on the format of the results file and position of the data, the results will be exported as:
A vector field (available only in Abaqus .odb and I-DEAS .unv)
A tensor field with the results on the diagonal of the tensor and the other components zeroed.
Three scalar fields.
For result files that support both vector and tensor fields, an option is given for choosing a tensor field rather than
the vector.
Note: Vector plotting is disabled by default. Please contact your local support office for enabling instructions.
22.1.2 Histories
This tab allows the export of history plot files for the analysis to be defined. These plot files relate to the whole
analysis and generally have one sample per node. The created plot files can be plotted using the Loaded Data Files
window and Plot menu options. If any of the check boxes are selected then a whole-analysis plot file is created. Its
name is created by appending ‘-histories.txt’ to the specified output file name. For example, for the output file
\data\results.odb the history file \data\results.odb-histories.txt will be created.
Figure 22.1.2-1
Haigh diagram
For each node the worst cycle’s mean stress and damage parameter amplitude are cross-plotted – this is named
‘Haigh-all items’. The damage parameter amplitude varies from algorithm to algorithm, see table 22.1.1-1. If
multiple algorithms are used within a single analysis then the amplitude for each node will be a different parameter,
i.e. stresses for some and strains for others.
Under certain conditions an infinite-life envelope is added to the history file – this will be named ‘Infinite life Haigh
diagram for ... .’ The conditions are that:
the specified analysis has a user-defined MSC or an infinite-life FRF envelope;
the default MSC is defined for the materials used in the analysis. (If more than one material is used in the
analysis there will be an infinite life envelope for each material).
The infinite-life envelope will use the damage parameter amplitude at the FRF design life or the constant-amplitude
endurance limit life to scale the non-dimensional MSC or FRF.
See figure 22.1.2-3.
Figure 22.1.2-3
An overlay plot of the infinite-life envelope and the Haigh diagram is shown in figure 22.1.2-4. Each cross
represents the most damaging cycle for a node. There are about 80000 nodes on this plot.
[Plot: Haigh diagram overlay - damage parameter amplitude (amp) against Sm:MPa from -400 to 400, with the
infinite-life envelope SAE_950C-Manten@1E7 and nodes Node#400027.1 and Node#400159.1 marked]
Figure 22.1.2-4
The two worst nodes from the whole analysis are marked on the plots. Using the cursor facility allows the node ID for
any cycle to be viewed.
Smith diagram
For each node, the worst cycle’s mean stress and the stresses at each of the turning points are cross-plotted – this
is named ‘Smith-all items.’ Under the same conditions as those defined in the previous section, an infinite-life
envelope is also added to the history file – this is named ‘Infinite life Smith diagram for ....’ See figure 22.1.2-3.
Smith diagrams can only be created for analyses that use stress as the damage parameter; cross plotting the
turning points in strain would be meaningless.
An overlay plot of the infinite-life envelope and the Smith diagram is shown in figure 22.1.2-5.
[Plot: Smith diagram overlay - S:MPa from -500 to 400, with the infinite-life envelope SAE_950C-Manten@1E7 and
node Node#400027.1 marked]
Figure 22.1.2-5
The worst node in the analysis is marked on the plots. Using the cursor facility allows the node ID for any cycle to
be viewed. The two turning points for the most damaged node (400027.1) are marked as ‘important’ tags. All the
other turning points are marked with normal tags that can be seen using the cursor facility.
The same criterion is used to evaluate the worst cycle for the Smith diagram as for the Haigh diagram.
22.1.3 Worst-Item Histories
This tab allows the export of history plot files for the most damaging item in the analysis. If a finite-life calculation is
being performed and there is no damage then the plots will not be created. The plot files can be plotted using the
Loaded Data Files window and Plot menu options. If any of the check-boxes are selected then a whole-analysis
plot file is created. Its name is created by appending ‘-histories.txt’ to the specified output file name. For example,
for output file \data\results.odb the history file \data\results.odb-histories.txt will be created.
Figure 22.1.3-1
Figure 22.1.2-3 shows a history plot file that contains both the worst-item histories and the whole-analysis histories.
In this example the channels named ‘****for Element 1.3’ are the worst-item history plots.
The definition of the most damaging item neglects any non-fatigue failure items that occur when Ignore non-fatigue
failure items (overflows) is checked. If two items have the same life/FRF values then the first encountered is
deemed to be the worst.
Haigh diagram
The Haigh diagram contains all the damaging cycles on the critical plane for the most damaging item in the
analysis. Tags indicating the sample numbers for the turning points in the loading are stored with each item. Zero
is the first sample in the loading. For infinite-life calculations (FRF), if a residual stress is included in the analysis
then the mean value imparted by this residual is also shown on the Haigh diagram, as shown in figure 22.1.3-2.
Copyright © 2023 Dassault Systemes Simulia Corp. Volume 1 22-7
Vol. 1 Section 22 Issue: 24.1 Date: 17.08.23
Diagnostic techniques including additional outputs
A sample Haigh diagram is shown below with several of the tags converted to text using the context menu item
Convert Cursor Values to Text.
[Plot: Haigh diagram - Sa:MPa against Sm:MPa from -400 to 400, with the envelope SAE_950C-Manten@1E7 and
the tag (50, 0) Residual marking the mean stress imparted by the residual]
Figure 22.1.3-2
Smith diagram
The Smith diagram also contains all the damaging cycles on the critical plane for the most damaging item in the
analysis. Each cycle has a sample for each turning point in stress. Tags indicating the sample numbers for the
turning point in the loading are stored with each item. The Smith diagram for the same analysis as in 22.1.3-2 is
shown in figure 22.1.3-3.
[Plot: Smith diagram - S:MPa from -200 to 400 against Sm:MPa, with the envelope SAE_950C-Manten@1E7 and
the tag (50, 0) Residual]
Figure 22.1.3-3
Von Mises
The von Mises stress for the worst item can also be exported. The way in which the sign of the von Mises stress is
assigned is controlled from the von Mises tab in the Analysis Options dialogue. If this is using the Hydrostatic
stress then the label of this plot will be SvM-Hy:MPa and if this is the Largest Principal stress then the label will be
SvM-LP:MPa. As with all ‘representative’ stress variables that have their sign defined by some criterion, there is
the possibility of sign oscillation. For the von Mises stress this occurs when the Hydrostatic stress is close to zero
(i.e. the major two principal stresses are similar in magnitude and opposite). This is why using such ‘representative’
stress values for fatigue analysis can cause spurious hot-spots. In areas where this could occur, the von Mises
stress plot will mark the sample with a black filled circle as shown in figure 22.1.3-4. A threshold criterion is used to
identify samples where the sign is questionable. This criterion is when the hydrostatic stress is less than 2.5% of
the von Mises stress.
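The sign assignment and the 2.5% threshold can be sketched as follows, using the hydrostatic-stress option (an illustration only; the helper name is hypothetical):

```python
import math

def signed_von_mises(s):
    """Signed von Mises stress for a tensor s = (sxx, syy, szz, sxy, syz, szx).

    The sign is taken from the hydrostatic stress. The returned flag marks
    samples where |hydrostatic| < 2.5% of the von Mises value, i.e. where
    the sign is questionable.
    """
    sxx, syy, szz, sxy, syz, szx = s
    vm = math.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2))
    hyd = (sxx + syy + szz) / 3.0
    sign = -1.0 if hyd < 0.0 else 1.0
    questionable = abs(hyd) < 0.025 * vm
    return sign * vm, questionable
```

A uniaxial state gives an unambiguous sign; a pure-shear state (hydrostatic stress zero) is flagged as questionable.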
[Plot: signed von Mises history S-vM-LP:MPa from -150 to 200, with a questionable-sign sample tagged
(581, 91.025) ?+- SP=55.4 -49.7 0.0]
Figure 22.1.3-4
Displaying the cursor values at one of these black circles indicates the principal stresses at the sample.
22.1.4 Log
The Log tab allows text-based diagnostics relating to the whole analysis to be written to the .log text file. Note that
these diagnostics do not appear in the Message Log window, but can be viewed after the analysis is complete by
clicking the View log button in the Analysis completed dialogue.
The log file can be viewed in a text-editor. The name of the log file is derived from the output file name. For
example, if the output file name is:
c:\data\testResults_01.fil
then the text-based diagnostics are written to the file:
c:\data\testResults_01.log
Figure 22.1.4-1
Material Diagnostics
This allows the detailed material parameters to be dumped to the analysis log.
Items with worst n lives
A table of the worst n items can be created for the analysis, where n is an integer set in the dialogue. A sample
table is shown below:
677.10 738.5 735.10 740.1 738.9 738.10 735.8 735.7 735.3 735.2
The %est. Amp/End. Amp column indicates a nodal elimination estimate that was made for the particular nodes
(See the next section).
The list of items can be used in conjunction with the List of Items tab to just re-analyse the worst n nodes when
trying what-if scenarios.
For each item, the amplitude of the estimated worst cycle is compared with that of the mean-stress-corrected
constant-amplitude endurance limit (CAEL). This is used to eliminate items from the fatigue analysis. In this table,
items are sorted by this ratio, which is listed as a percentage.
41882 of 42996 items were eliminated from the analysis on the basis of the endurance limit.
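This elimination test can be sketched as follows (illustrative only; the functions are hypothetical, and the ratio corresponds to the %est. Amp/End. Amp column):

```python
def elimination_ratio_pct(estimated_worst_amp, corrected_cael_amp):
    """Ratio of the estimated worst-cycle amplitude to the mean-stress-corrected
    CAEL amplitude, expressed as a percentage."""
    return 100.0 * estimated_worst_amp / corrected_cael_amp

def should_eliminate(estimated_worst_amp, corrected_cael_amp):
    """Items whose estimated amplitude falls below the endurance limit can be
    eliminated from the fatigue analysis."""
    return elimination_ratio_pct(estimated_worst_amp, corrected_cael_amp) < 100.0
```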
Critical Distance summary
At the end of the analysis, a summary is produced of the number of items for which each diagnostic
code was produced. During the analysis, a line is written to the log for each such node. See Section
26.3.
22.1.5 List of Items
For the diagnostics selected in the Log for Items and Histories for Items tabs, this tab defines the applicable
elements or nodes. This tab can also be used to limit the analysis to just the listed set of element or node IDs using
the Only analyse listed items check-box.
If the analysis is to be limited to a few elements or nodes, it may be worth writing the results to either the fe-safe
results file format (.fer) or to an ASCII text file (.csv) to avoid the overhead associated with exporting results to an
FE format.
Figure 22.1.5-1
Items not prefixed with ‘e’ or ‘n’ are interpreted as nodes or elements depending on the context, i.e.:
elements if stresses are elemental;
nodes if stresses are nodal.
See Appendix G for a description of how these terms relate to individual FEA suites.
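This interpretation of item-list tokens can be sketched as follows (a hypothetical helper; IDs are kept as strings so that el.node forms such as 65.1 are preserved):

```python
def classify_item(token, stresses_are_elemental):
    """Interpret an item-list token as an element or node ID.

    'e'/'n' prefixes are explicit; unprefixed IDs follow the stress position.
    """
    token = token.strip()
    if token.startswith("e"):
        return ("element", token[1:])
    if token.startswith("n"):
        return ("node", token[1:])
    return ("element" if stresses_are_elemental else "node", token)
```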
The following syntax rules apply to the item list:
each item in the list must be separated from the next by a comma;
Figure 22.1.6-1
Dang-Van plots
See section 14.12.3.
Damage-vs-plane plots
Damage-vs-plane plots indicate the damage calculated for each angle in an axial plane-search.
The angle is measured from the orientation of the principals at the reference sample. The reference sample is the
stress tensor used to evaluate the orientation of the surface.
If the surface is in the xy-plane of the untransformed FE data, then the name of the damage channel will include the
angle between the x-axis and the critical plane. e.g. (NOTE: ang X->C/P 70 degs) indicates that the critical plane is
70 degrees clockwise from the x-axis.
For shear-based algorithms there will be a data series for each of the three shear-types 1-3, 2-3 and 1-2. If the
code performs a triaxial plane-search then these will be repeated for each search axis. These are repeated for
each loading block.
A sample from a Brown-Miller analysis is shown in figure 21.6.1-2.
[Plot: damage (log scale, 1e-15 to 1e-9) against plane angle (0 to 150 degrees) for the three shear types
1-3, 2-3 and 1-2]
Figure 21.6.1-2
Haigh diagram for critical plane
This is the same as the Worst Node Histories Haigh diagram except for the specified ID. See section 22.1.3.
Smith diagram for critical plane
This is the same as the Worst Node Histories Smith diagram except for the specified ID. See section 22.1.3.
Load histories
This exports the full fatigue loading stress tensors. For fatigue analysis from elastic-plastic stress-strain pairs the
strains are also exported. These are the stresses (and strains) prior to the code applying a plasticity correction. If
there are multiple blocks in the analysis then there will be 6 stress tensor channels per block.
Evaluated principals
These are the evaluated principal stresses and strains SP1, SP2, SP3, eP1, eP2 and eP3. No plasticity correction
is applied to these outputs i.e. for fatigue analyses from elastic FEA, these are elastic values. The angle between
SP1 at any point in the loading and in the reference sample is indicated by the channel Theta. The reference
sample is the one used by the code to evaluate the orientation of the surface. SP1 and SP2 are in the surface
being analysed and SP3 is out of it.
Note that these are not true principals because the out-of-surface shear components (yz and zx) are neglected, so
that principal 3 is always in the direction of the local z-axis (the surmised surface normal).
If the fatigue loading contains multiple blocks then there will be one set of principals for each block.
If the stress history is triaxial then the principals will be repeated for each of the triaxial surfaces that the code
analyses. See Technical Note 3 (TN-003) for some examples of triaxial stress diagnostics.
Figure 22.1.7-1
Element | Block | Group | T | Stress | Num | Shear | CP / X->Ang | Critical plane: | History length: | Surf
| | | degC | state | planes | type | deg | X->CP | Y->CP | Z->CP | pre-gating | post-gating | eval
--------------------------------------------------------------------------------------------------------------------------------------
e65.1 | 1 | Default | 0.000 | 2D | 171 | 1-2 | 102 | -0.202 | 0.979 | 0.000 | 4 | 4 | undef
e65.2 | 1 | Default | 0.000 | 2D | 171 | 1-2 | -79 | 0.199 | -0.980 | 0.000 | 4 | 4 | undef
e65.3 | 1 | Default | 0.000 | 2D | 171 | 1-2 | -87 | 0.060 | -0.998 | 0.000 | 4 | 4 | undef
e65.4 | 1 | Default | 0.000 | 2D | 171 | 1-2 | 93 | -0.058 | 0.998 | 0.000 | 4 | 4 | undef
Column 4 indicates the temperature at the node.
Column 5 indicates the state of the stress history. In this example it is 2D, but the plane-search for Brown-Miller is always triaxial.
Column 8 is only applicable when the plane-normal lies close to the xy-plane in the material coordinate system, i.e. in the untransformed data from the Finite Element analysis, as
would be expected for shell elements. In this case, the angle between the plane normal and the material x-axis is shown. This is not generally the same as the angle used to
designate the plane in the local coordinate system assigned by fe-safe.
Columns 9 to 11 indicate the orientation of the critical plane. See Technical Note 3 (TN-003) for more information on diagnostic options for triaxial stresses.
Block-life table
This table lists the damage caused by each loading block, expressed as a fatigue life (Nf) and taking into account its number of repetitions.
BLOCK-BY-BLOCK LIFE TABLE for Element [0]7273.1
Plane-life table
In the Log for Items tab, check Plane-life table, then press OK. The .log file after the analysis will contain a tabulation of damage per plane for the specified items.
Analysis planes are typically designated by an angle denoting a rotation from the x-axis towards the y-axis in the coordinate system derived from the reference tensor, in which the
surmised surface normal is identified with the z-axis. For shear-based algorithms, “planes” are further specified using shear-types 1-2, 2-3 and 1-3, which define both a plane-normal
and a perpendicular shear direction. In the case of triaxial plane searches, the designations 1, 2 or 3 denote the Cartesian axis about which the axial plane search is conducted; they
map to z, x and y respectively.
Triax Plane | Shear Plane | C/P Ang | Life | Cycle | Pt 1 | Pt 2 | S 1 | S 2 | elasE1 | elasE2 | elasS1 | elasS2
| | deg | Repeats | uE | uE | | | MPa | MPa | uE | uE | MPa | MPa
--------------------------------------------------------------------------------------------------------------------------------------
1 | 2-3 | 0 | 2.78e+04 | 0.000 | 10603.794 | 1 | 8 | 0.000 | 54.479 | 0 | 10604 | 0 | 54
1 | 2-3 | 10 | 3.20e+04 | 0.000 | 10189.168 | 1 | 8 | 0.000 | 52.842 | 0 | 10189 | 0 | 53
1 | 2-3 | 10 | 3.26e+04 | 0.000 | 10189.168 | 5 | 4 | 0.000 | 36.029 | 0 | 10189 | 0 | 36
1 | 2-3 | 170 | 3.20e+04 | 0.000 | 10189.170 | 1 | 8 | 0.000 | 52.842 | 0 | 10189 | 0 | 53
1 | 2-3 | 170 | 3.26e+04 | 0.000 | 10189.170 | 5 | 4 | 0.000 | 36.029 | 0 | 10189 | 0 | 36
1 | 2-3 | 180 | 2.78e+04 | 0.000 | 10603.794 | 1 | 8 | 0.000 | 54.479 | 0 | 10604 | 0 | 54
1 | 2-3 | 180 | 2.82e+04 | 0.000 | 10603.794 | 5 | 4 | 0.000 | 37.145 | 0 | 10604 | 0 | 37
1 | 1-2 | 20 | 1.81e+04 | 0.000 | 11889.271 | 1 | 8 | 0.000 | 96.258 | 0 | 11889 | 0 | 96
1 | 1-2 | 20 | 1.86e+04 | 0.000 | 11889.271 | 5 | 4 | 0.000 | 65.631 | 0 | 11889 | 0 | 66
1 | 1-2 | 30 | 1.47e+04 | 0.000 | 12716.273 | 1 | 8 | 0.000 | 81.816 | 0 | 12716 | 0 | 82
1 | 1-2 | 30 | 1.50e+04 | 0.000 | 12716.273 | 5 | 4 | 0.000 | 55.784 | 0 | 12716 | 0 | 56
1 | 1-2 | 40 | 1.66e+04 | 0.000 | 12309.319 | 1 | 8 | 0.000 | 64.100 | 0 | 12309 | 0 | 64
1 | 1-2 | 40 | 1.69e+04 | 0.000 | 12309.319 | 5 | 4 | 0.000 | 43.705 | 0 | 12309 | 0 | 44
1 | 1-2 | 50 | 2.70e+04 | 0.000 | 10717.494 | 1 | 8 | 0.000 | 45.247 | 0 | 10717 | 0 | 45
2 | 1-3 | 120 | 1.24e+06 | 0.000 | 4556.987 | 1 | 8 | 0.000 | 54.528 | 0 | 4557 | 0 | 55
2 | 1-3 | 120 | 1.29e+06 | 0.000 | 4556.987 | 5 | 4 | 0.000 | 37.178 | 0 | 4557 | 0 | 37
2 | 1-3 | 130 | 1.23e+06 | 0.000 | 4562.030 | 1 | 8 | 0.000 | 54.560 | 0 | 4562 | 0 | 55
2 | 1-3 | 130 | 1.29e+06 | 0.000 | 4562.030 | 5 | 4 | 0.000 | 37.200 | 0 | 4562 | 0 | 37
2 | 1-3 | 140 | 1.57e+06 | 0.000 | 4379.894 | 1 | 8 | 0.000 | 54.593 | 0 | 4380 | 0 | 55
2 | 2-3 | 0 | 1.72e+05 | 0.000 | -6681.233 | 1 | 8 | 0.000 | 54.674 | 0 | -6681 | 0 | 55
2 | 2-3 | 0 | 1.77e+05 | 0.000 | -6681.233 | 5 | 4 | 0.000 | 37.278 | 0 | -6681 | 0 | 37
2 | 2-3 | 10 | 1.84e+05 | 0.000 | -6586.351 | 1 | 8 | 0.000 | 54.668 | 0 | -6586 | 0 | 55
2 | 2-3 | 10 | 1.89e+05 | 0.000 | -6586.351 | 5 | 4 | 0.000 | 37.274 | 0 | -6586 | 0 | 37
2 | 1-2 | 30 | 1.08e+07 | 0.000 | -3390.028 | 1 | 8 | 0.000 | 0.292 | 0 | -3390 | 0 | 0
2 | 1-2 | 30 | 1.08e+07 | 0.000 | -3390.028 | 5 | 4 | 0.000 | 0.199 | 0 | -3390 | 0 | 0
2 | 1-2 | 40 | 1.34e+07 | 0.000 | -3296.901 | 1 | 8 | 0.000 | 0.229 | 0 | -3297 | 0 | 0
2 | 1-2 | 40 | 1.34e+07 | 0.000 | -3296.901 | 5 | 4 | 0.000 | 0.156 | 0 | -3297 | 0 | 0
3 | 1-3 | 30 | 3.09e+05 | 0.000 | 5931.051 | 1 | 8 | 0.000 | 41.054 | 0 | 5931 | 0 | 41
3 | 1-3 | 30 | 3.17e+05 | 0.000 | 5931.051 | 5 | 4 | 0.000 | 27.991 | 0 | 5931 | 0 | 28
3 | 1-3 | 40 | 3.10e+05 | 0.000 | 5948.045 | 1 | 8 | 0.000 | 32.164 | 0 | 5948 | 0 | 32
3 | 1-3 | 40 | 3.16e+05 | 0.000 | 5948.045 | 5 | 4 | 0.000 | 21.930 | 0 | 5948 | 0 | 22
3 | 2-3 | 0 | 2.01e+04 | 0.000 | 11652.672 | 1 | 8 | 0.000 | 54.674 | 0 | 11653 | 0 | 55
3 | 2-3 | 0 | 2.04e+04 | 0.000 | 11652.672 | 5 | 4 | 0.000 | 37.278 | 0 | 11653 | 0 | 37
3 | 2-3 | 10 | 2.21e+04 | 0.000 | 11332.929 | 1 | 8 | 0.000 | 53.031 | 0 | 11333 | 0 | 53
3 | 2-3 | 10 | 2.24e+04 | 0.000 | 11332.929 | 5 | 4 | 0.000 | 36.158 | 0 | 11333 | 0 | 36
3 | 2-3 | 170 | 2.21e+04 | 0.000 | 11332.930 | 1 | 8 | 0.000 | 53.031 | 0 | 11333 | 0 | 53
3 | 2-3 | 170 | 2.24e+04 | 0.000 | 11332.930 | 5 | 4 | 0.000 | 36.158 | 0 | 11333 | 0 | 36
PRISMATIC HULL DEVIATORIC SPACE TABLE for Element 101.1 All Blocks
τ_aPH = √( Σ_{i=1..5} s_ai² ) / √2
Where s_ai is the Samp value of dimension i in the table above (i.e. half the range of that deviatoric dimension of the
prismatic hull).
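As a check of the formula, a direct implementation (assuming the five Samp values are available as a list):

```python
import math

def tau_a_ph(s_amp):
    """Prismatic-hull shear-stress amplitude from the five deviatoric
    semi-ranges (Samp values)."""
    if len(s_amp) != 5:
        raise ValueError("expected the five deviatoric dimensions")
    return math.sqrt(sum(s * s for s in s_amp)) / math.sqrt(2.0)
```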
PSD items
Lists, for each block in a PSD analysis:
The 0th, 1st, 2nd and 4th moments
The number of peaks per second
Upward mean crossings per second
The irregularity factor
The central frequency
RMS stress
See Section 19 for more details of frequency-domain fatigue analyses.
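These quantities follow from the spectral moments of the stress PSD. A sketch using the standard frequency-domain definitions is shown below (fe-safe's exact conventions may differ in detail; the central frequency is omitted here as its definition varies):

```python
import math

def _trapezoid(y, x):
    # simple trapezoidal integration over sampled values
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def psd_statistics(freq, psd):
    """Spectral moments m0, m1, m2, m4 of a one-sided stress PSD and the
    derived quantities listed above."""
    m = {n: _trapezoid([f ** n * g for f, g in zip(freq, psd)], freq)
         for n in (0, 1, 2, 4)}
    return {
        "moments": m,
        "upward_mean_crossings_per_s": math.sqrt(m[2] / m[0]),  # E[0]
        "peaks_per_s": math.sqrt(m[4] / m[2]),                  # E[P]
        "irregularity_factor": m[2] / math.sqrt(m[0] * m[4]),   # E[0]/E[P]
        "rms_stress": math.sqrt(m[0]),
    }
```

For a flat (white) PSD the irregularity factor tends towards sqrt(5)/3, i.e. about 0.745.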
Dataset stresses
Tabulates all stress, strain and temperature datasets loaded from the FE model, both in the model’s physical units
and in the units chosen for exports.
Elastic-plastic residuals
For each loading block, tabulates the residual stress and strain normal to each analysis plane.
Sample Sxx Syy Sxy Szz Syz Sxz Exx Eyy Exy Ezz Eyz Exz
MPa MPa MPa MPa MPa MPa uE uE uE uE uE uE
1 -13 -119 -3 -37 -4 -1 520 -1434 -127 78 -133 -46
2 -11 -1 -1 -1 -0 -1 -98 -18 -41 53 -16 -22
In-surface principals
Tabulates idealised principal stresses and strains. They are idealised in the sense that a surface normal is first surmised from the stress history. The history is then
rotated into a local coordinate system in which the z-axis denotes the surface normal and the yz and zx shear components are neglected. The principals reported are
then the normals in the z-axis (principal 3) and two orthogonal directions in the xy plane (principals 1 and 2). The angle theta indicates the rotation of principal 1 in the
xy-plane from x towards y.
Note that this treatment is applied even when the item does not lie on the surface, or when no surface detection has been performed.
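With the yz and zx shears neglected, the in-plane principal computation reduces to the standard 2D transformation (sketch only; the helper is hypothetical):

```python
import math

def in_surface_principals(sxx, syy, sxy):
    """In-plane principals SP1 >= SP2 and the rotation theta (degrees)
    of principal 1 from the local x-axis towards y."""
    centre = 0.5 * (sxx + syy)
    radius = math.hypot(0.5 * (sxx - syy), sxy)
    theta = 0.5 * math.degrees(math.atan2(2.0 * sxy, sxx - syy))
    return centre + radius, centre - radius, theta
```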
The resulting table is shown below. SP* denotes a stress and eP* a strain. The second and third sets of in-surface principals indicate that fe-safe has treated this
node as triaxial and determined principals relative to all three Cartesian axes. See Technical Note 3 (TN-003) for treatment of triaxial stresses.
After each axis, the determination of the type of stress history is shown, e.g. Proportional, Non-Proportional (constant-direction principals) or Non-
Proportional. The first two are used to reduce the number of planes to be analysed.
IN-SURFACE PRINCIPALS for Element 65.1, Block 1 Triaxial Plane 1 of 3 Sample indices are one-based.
Pt | SP1 | SP2 | SP3 | eP1 | eP2 | eP3 | theta | Reference Sample? | Constant Direction? | Proportional?
| MPa | MPa | MPa | uE | uE | uE | deg | | |
-------------------------------------------------------------------------------------------------------------------
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | No | Yes | No
2 | 29.45 | -0.00549 | 0 | 2806 | -842.2 | 0 | 0 | No | Yes | No
3 | 29.45 | -0.00549 | 0 | 2806 | -842.2 | 0 | 0 | No | Yes | No
4 | 74.29 | 0.2656 | 0 | 7069 | -2098 | 0 | 0 | Yes | Yes | No
IN-SURFACE PRINCIPALS for Element 65.1, Block 1 Triaxial Plane 2 of 3 Sample indices are one-based.
Pt | SP1 | SP2 | SP3 | eP1 | eP2 | eP3 | theta | Reference Sample? | Constant Direction? | Proportional?
| MPa | MPa | MPa | uE | uE | uE | deg | | |
-------------------------------------------------------------------------------------------------------------------
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | No | Yes | No
2 | -0.004309 | 0 | 29.45 | -842 | 0 | 2805 | 0 | No | Yes | No
3 | -0.004309 | 0 | 29.45 | -842 | 0 | 2805 | 0 | No | Yes | No
4 | 0.2656 | 0 | 74.29 | -2098 | 0 | 7069 | 0 | Yes | Yes | No
IN-SURFACE PRINCIPALS for Element 65.1, Block 1 Triaxial Plane 3 of 3 Sample indices are one-based.
Pt | SP1 | SP2 | SP3 | eP1 | eP2 | eP3 | theta | Reference Sample? | Constant Direction? | Proportional?
| MPa | MPa | MPa | uE | uE | uE | deg | | |
-------------------------------------------------------------------------------------------------------------------
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | No | Yes | No
2 | 29.45 | 0 | -0.004309 | 2805 | 0 | -842 | 0 | No | Yes | No
3 | 29.45 | 0 | -0.004309 | 2805 | 0 | -842 | 0 | No | Yes | No
4 | 74.29 | 0 | 0.2656 | 7069 | 0 | -2098 | 0 | Yes | Yes | No
SENSITIVITY ANALYSIS for Element 1.1 (The life is for 1 repeat of the block (i.e. n=1); it does
not consider the n value if this is an LDF analysis)
It should be noted that performing a load sensitivity analysis on a large number of the items in your model could
increase the overall analysis time substantially.
Figure 22.2.1-1
The left tab is used to define the influence coefficients. The top grid defines the loads (or datasets) whose
contribution is required - these are the dataset numbers in the Current FE Models window.
To add new loads press the + button at the top of the dialogue. This will display the Add IC Load dialogue, as
shown in Figure 22.2.1-2:
Figure 22.2.1-2
The Description and Units are text strings that will be displayed in the output matrices. Multiple loads (datasets) can
be added by specifying a range or list of datasets. Dataset ranges are specified using a ‘-’ (minus) character, e.g.:
1-17, 24, 27.
Pressing the OK button adds the new loads to the loads grid. The Load #, Description and Units columns are
editable.
In-cell editing is supported.
Pressing the <<<<<< button on the grid for a load definition will set the description to that associated with the
dataset in the Current FE Models window.
The Load # defines a unique identification number for the load (this can be just the dataset number).
To edit multiple loads simultaneously, select the required rows with the left mouse button. To
highlight additional loads after the first one has been highlighted, hold down the CTRL key on the keyboard and
click on the additional rows using the left mouse button. When the required loads (rows) are highlighted, click on
the header of the appropriate column to edit that parameter. The relevant dialogue will be displayed. Only columns
marked with a * can be edited in this manner - see Figure 22.2.1-3.
Figure 22.2.1-3
Value Meaning
All All layers in the shell.
Not a shell The element is not a shell
A number Defines the surface of interest
Gauge type. The valid values are shown in the table below:
Value Meaning
Single Stress A single or one-armed stress gauge.
Single Strain A single or one-armed strain gauge.
Rosette Strain A rosette strain gauge – three gauges are created at 0°, 45° and 90° to the specified
orientation.
Stress Tensor Gauges are simulated as the three stress tensors Sxx, Syy and Sxy. The orientation
is ignored.
Angle. This is the angle from the x-axis to the first arm of the gauge. 0° is along the x-axis and 90° is along
the y-axis. This is ignored for the Stress Tensor gauge type.
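The strain seen by a gauge arm follows the usual rotation of the in-plane strain tensor. A sketch is shown below (helper names are hypothetical; gxy is the engineering shear strain):

```python
import math

def gauge_strain(exx, eyy, gxy, angle_deg):
    """Normal strain seen by a single gauge arm at angle_deg from the x-axis."""
    t = math.radians(angle_deg)
    return (exx * math.cos(t) ** 2 + eyy * math.sin(t) ** 2
            + gxy * math.sin(t) * math.cos(t))

def rosette(exx, eyy, gxy, angle_deg):
    """Three arms at 0, 45 and 90 degrees to the specified orientation."""
    return [gauge_strain(exx, eyy, gxy, angle_deg + a) for a in (0.0, 45.0, 90.0)]
```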
To add new gauges click the + button above the locations grid of the dialogue. This will display the Add Gauges
dialogue, shown in Figure 22.2.1-4.
Figure 22.2.1-4
Element ranges and lists are specified using a ‘-’ (minus) character, e.g.:
1-17, 24, 27.
Pressing the OK button adds the new gauges to the grid. The Surface, Gauge Type and Angle columns are
editable.
In-cell and multi-gauge editing are supported in the same way as for the load definitions (described above) – see
Figures 22.2.1-5 and 22.2.1-6.
The Open ... and Save ... buttons allow the influence gauge definitions to be saved to a file and reloaded at a later
date. See section 22.4 for the format of these files.
INFLUENCE COEFFICIENTS
# indicates that at a gauge location there are out of plane direct stresses (Szz != 0)
! indicates that at a gauge location there are out of plane shear stresses (Syz != 0 or Szx != 0)
Figure 22.2.2-1
22.2.3 Influence coefficient matrix
For each specified gauge type there are a number of responses for each load case. The table below indicates the
number of responses and their names.
The influence coefficient matrix can be exported in three formats as outlined in the following sections.
When out-of-surface direct stresses occur at a gauge location a ‘#’ character is added to the IC for the gauge and
when out-of-surface shear stresses occur a ‘!’ character is added to the IC for the gauge. An example of the output
written to the analysis log file is shown in Figure 22.2.2-1. This includes the out-of-surface markers.
In addition to the matrix, a summary of the influence coefficients will be added to the .log file.
If there were gauges defined in the influence coefficients definition that were not included in the analysis then a
message is appended to the .log file similar to the one below:
4 4 5.1717898E-03
5 1 4.2707700E-02
5 2 0.2475940
5 3 -2.3491900E-02
5 4 -3.5227101E-02
6 1 4.2707700E-02
6 2 0.2475940
6 3 -2.3491900E-02
6 4 -3.5227101E-02
7 1 -0.1698290
7 2 -0.7558005
7 3 -1.0061632E-02
7 4 0.1075660
8 1 -0.1698290
8 2 -0.7558005
8 3 -1.0061632E-02
8 4 0.1075660
9 1 -0.3743764
9 2 -2.074844
9 3 0.1298187
9 4 0.2787820
10 1 -0.3743764
10 2 -2.074844
10 3 0.1298187
10 4 0.2787820
11 1 -1.7923901E-02
11 2 -0.1415415
11 3 -3.8885854E-02
11 4 -1.2738341E-02
12 1 -1.7923901E-02
12 2 -0.1415415
12 3 -3.8885854E-02
12 4 -1.2738341E-02
[Plots: 3D histograms of positive (+ IC:uE) and negative (- IC:uE) influence coefficients, binned by magnitude,
against Load ID and Response #]
Figure 22.2.6-1
This can also be displayed in tabular format as below, where the columns are loads, and the rows are responses.
Figure 22.2.6-2
22.3 Gauges
At a node the strains or stresses in a particular direction can be exported using the gauges facility. If a plasticity
correction is performed within the fatigue analysis this will be included in the calculation of the gauge value. Any
surface finish factor will be ignored. Residual stresses will be included. For analysis with multiple blocks the gauge
output will be a concatenation of a single repeat of each block.
Only shells, membranes and other two-dimensional elements with coordinate systems defined in the surface of the
element should be analysed, i.e. the surface of the component is the XY surface of the element.
Models containing 3D elements should be skinned with membrane or shell elements.
This 2D limitation is so that the gauge orientation can be specified in a straightforward manner.
This module will enable the comparison of measured strains and those evaluated in the fatigue analysis software.
22.3.1 Defining gauges
The Influence Coefficients and Gauges dialogue shown in Figure 22.3.1-1 is displayed by pressing the Gauges, Inf
Coeffs ... button on the Fatigue from FEA dialogue.
Figure 22.3.1-1
The right tab of the Influence Coefficients and Gauges dialog is used to define the gauges. The grid displays the
gauge locations. Each location is defined by:
An element or node number. If a particular node on an element is required then the syntax el.node is used.
Value Meaning
All All layers in the shell.
Not a shell The element is not a shell
A number Defines the surface of interest
Gauge type. The valid values are shown in the table below:
Value Meaning
Single Stress A single or one-armed stress gauge.
Single Strain A single or one-armed strain gauge.
Rosette Strain A rosette strain gauge – three gauges are created at 0°, 45° and 90° to the specified
orientation.
Angle. This is the angle from the x-axis to the first arm of the gauge. 0° is along the x-axis and 90° is along
the y-axis.
To add new gauges press the + button beneath the gauges grid. This will display the dialogue shown in Figure
22.3.1-2.
Figure 22.3.1-2
Element ranges and lists are specified using a ‘-’ (minus) character, e.g.:
1-17, 24, 27.
Pressing the OK button adds the new gauges to the grid. The Surface, Gauge Type and Angle columns are
editable.
In-cell and multi-gauge editing are supported in the same way as for the load definitions (see 22.2.1, above). Only
columns marked with a * can be edited in this manner. See Figures 22.3.1-3 and 22.3.1-4.
The gauge outputs will be written to the plot file for a node. The plot file names are derived as described in section
22.1.2. One plottable output will be created for each arm of the gauge. These will be named as follows:
Name Description
EPS_gauge_ang Elastic-plastic strains
SIG_gauge_ang Elastic-plastic stresses
E_gauge_ang Elastic strains
S_gauge_ang Elastic stresses
Where an elastic-plastic correction is performed in the fatigue software, or the input stresses and strains are
interpreted as elastic-plastic, then the elastic-plastic versions of stress and strain will be written. The table below
shows this in more detail. An × denotes that an algorithm does not support a particular analysis.
Figure 22.3.2-1
Where a plasticity correction is performed the strain and stress gauge outputs will vary from the “Normals”
described in section 22.1. In section 22.1 elastic stresses and strains are exported when a plasticity correction is
performed.
After the analysis is complete, a node's plot file for a specified gauge can be opened in the Loaded Data Files
window, using File >> Data Files >> Open Data File ...:
Figure 22.3.2-2
The example in Figure 22.3.2-2 was created with a rosette strain gauge and 3 single stress gauges. The gauges
are plotted in Figure 22.3.2-3 below.
[Plot: gauge outputs against samples (0 to 600): EPS0, EPS45 and EPS90 in uE; SIG0, SIG45 and SIG90 in MPa.]
Figure 22.3.2-3
In addition to the plottable outputs the .log file will contain a summary of the gauges defined for an analysis.
If there were gauges that were not a part of the analysis then a message similar to the one shown below will be
added to the .log.
WARNING:The following ids defined in your Gauges were not part of your analysis :
67
When out-of-surface direct stresses occur at a gauge location a ‘#’ character is added to the gauge name and when
out-of-surface shear stresses occur a ‘!’ character is added to the gauge name. An example of the output of surface
markers is shown in Figure 22.3.2-4.
Figure 22.3.2-4
[Plot: SIG0 (MPa) against EPS0 (uE), showing coarsely defined hysteresis loops.]
Figure 22.3.3-1
The Gauge sample interpolation factor (see Figure 22.3.1-1) can be used to insert extra samples between each of
the samples in the loading to provide better hysteresis loop definition. Figure 22.3.3-2 shows the same loading as
Figure 22.3.3-1 with an interpolation factor of 10.
[Plot: SIG0 (MPa) against EPS0 (uE), with an interpolation factor of 10.]
Figure 22.3.3-2
It should be noted that for fatigue analysis from elastic-plastic FEA results the interpolation factor does not improve
the hysteresis loop shapes as a plasticity correction is not applied in the fatigue software. Superimposing the
interpolated and non-interpolated outputs shows the areas between the peaks and valleys forming the shapes of
the hysteresis loops - see Figure 22.3.3-3.
[Plot: interpolated and non-interpolated EPS0 (uE) and SIG0 (MPa) against samples 40 to 47.]
Figure 22.3.3-3
For non-proportional constant amplitude loading sample hysteresis loops look like Figure 22.3.3-4. These are
similar to those shown in Socie and Marquis (Ref. 22.1).
[Plot: constant amplitude hysteresis loops for the 0°, 45° and 90° directions.]
Figure 22.3.3-4
For more complex loading, effects such as backward hysteresis loops can be seen. These occur where the strain
increases on a plane as the stress reduces (or vice versa). An example of a section of loading where this occurs is
shown below in Figure 22.3.3-4 (eP1 and SP1 are the lower plots in each segment). The plots are of elastic
principals which, in this example, are not changing direction. The first principal (eP1 and SP1) exhibits backward
hysteresis behaviour due to the overriding effect of the Poisson's strains. SP2 is much bigger than SP1 in the
displayed area.
[Plot: SP1 and SP2 (MPa) and eP1 and eP2 (uE) against samples 78 to 88. Note: eP1 and SP1 are the lower plots in each segment; S decreases as e increases.]
Figure 22.3.3-4
Cross plotting the elastic stress and strains and the elastic-plastic stress and strains in the direction of eP1 and SP1
displays the backward hysteresis loops - see Figure 22.3.3-5.
Figure 22.3.3-5
where:
load_case_id Specifies the input FEA load case. The fe-safe dataset number.
load_number This is a unique ID for the load to be used for the output .inf file.
Note: If the definition is for gauge outputs rather than an influence coefficient matrix then LOAD commands are
ignored.
22.4.2 SINGLE
This defines a single strain gauge output, in the format:
where:
el_num Is the element number for the gauge.
Surface Defines the surface. This is either:
where the parameters are the same as those for the SINGLE command.
where the parameters are the same as those for the SINGLE command.
where the parameters are the same as those for the SINGLE command.
Figure 22.5.1-1
The Find Hotspots dialogue can then be used to define the criterion for detecting hotspots:
Contour variable
The required variable can be selected from the drop-down list. This list will include all contour variables that were
requested for exporting, see Section 22.1.1.
Critical value for criterion
The numerical value for the criterion used to determine hotspots. This value references data in the selected contour
variable.
Figure 22.5.1-3
The window will be updated after all hotspots have been detected. Names of detected hotspots are based on the
name of the contour variable used, _LT_ or _GR_ (for less than and greater than), and the value of the criterion used.
Summary information about each hotspot group is revealed by clicking the [+] symbol; this includes:
the id of the hotspot group;
the global id of the worst node in the group;
the value at the worst node and the number of nodes in the group.
22.5.2 Using Hotspots
Following Hotspot detection, right-click in the Current FE Models window and select Use Hotspots… to open the Use
Hotspots dialogue:
The Use Hotspots dialogue can then be used to select hotspots to be converted to element groups; by default all
detected hotspots are selected. A Union group containing all selected hotspots can also be created. Clicking OK
will create element groups from the selected hotspots, which can then be used for a subsequent fatigue analysis
configuration:
Figure 22.5.2-2
The .fer file can now be saved to another output format. For example, to save to an OP2 file:
select File >> Save FE Fatigue Results as...;
set the results file to the .fer file just created, as described above;
set the output file to have the desired extension, e.g. myResults.op2;
click Save.
22.8 References
22.1 Socie D F and Marquis G B
Multiaxial Fatigue
SAE International, 2000, pp 286.
where
<token> is the token for the required function,
for example:
the token RAINFL performs the Rainflow and Cycle Exceedence function;
the token RF2LDF performs the Convert Rainflow to LDF function.
and
<arg_1>, <arg_2>, ......., <arg_n> is a comma-separated list of arguments required by the
function.
Example 1:
The following macro script was saved when the Rainflow and Cycle Exceedence function was used on the
file whitelon.dac to produce a Rainflow histogram, then the Convert Rainflow to LDF function was
used to produce an LDF file from the resulting Rainflow:
Example 2:
The script in Example 1 was saved and edited so that the same functions would be processed, but on
different input files. This time the Rainflow and Cycle Exceedence function was used on the file
sinlong.dac to produce a Rainflow histogram, sinlon_rainflow_01.cyh. This file was then
converted to an LDF file called ldf_from_sinlon_rainflow_01.ldf using the Convert Rainflow to
LDF function.
Support for macros in the command line is also available using the macro= command line option – see section 23.2
below.
fe-safe
Following the token, comma-delimited arguments are entered as described in section 23.2, below. The parameters
supported within macros are: j=, v=, b=, o=, log=, <kwd>=, material= and mode=. If values of arguments contain
any spaces they should be surrounded by double quotes e.g. macro=”c:\My Documents\test2.fil”. File
references should include a full path, on Windows the path should include the drive letter. See Running fe-safe from
the command line below for examples.
pre-scan
Following the token, commands and corresponding arguments and values are entered as described in section 23.4,
below. The commands supported within macros are: files, position, select, deselect, open, append, and delete. A
pre-scan token cannot be used in the same line with any other token.
Pre-scan commands and arguments can be entered on separate lines, in the form of the token followed by a
command, or can be entered all on one line beginning with the token and followed by a comma-separated list of
commands (with their arguments separated by spaces).
Pre-scanning in a macro represents a method to extract datasets from the source FE model which is described in
section 5.
groups
Following the token, commands and corresponding arguments and values are entered as described in section 23.5,
below. The commands supported within macros are: load, save, and list. A group token cannot be used in the same
line with any other token.
Group commands and arguments can be entered on separate lines, in the form of the token followed by a command,
or can be entered all on one line beginning with the token and followed by a comma-separated list of commands
(with their arguments separated by spaces).
Defining element or node groups in a macro represents a method of Managing groups used for FEA fatigue
analysis which is described in section 5.
23.1.10 Combining pre-scanning, user defined groups, and FEA fatigue analysis in a macro
An example of combining pre-scanning, user defined groups, and FEA fatigue analysis in a macro is shown below:
The path to a setting is the hierarchy used in the settings files. To aid this, where settings can be changed within
the UI, tooltips display the settings path to use in a macro, e.g. the setting to read strain datasets while
performing a full read is [project.model.extract strains], as shown in Figure 23.1.11.1.
Figure 23.1.11.1
Where a setting path is unambiguous, part or all of the path prefix can be omitted, e.g. as ‘extract strains’ is unique,
[model.extract strains] or [extract strains] can be used in place of [project.model.extract strains].
A secondary way to reduce duplication in a macro and increase readability is to use the with syntax, where a
setting name prefixed with ‘.’ is relative to the most recently defined with settings path, e.g. the following:
[job.exports.plots.Haigh] = true
[job.exports.plots.Smith] = true
[job.exports.plots.principals] = false
[job.exports.plots.damage] = true
Can be replaced with:
with [job.exports.plots]
.[Haigh] = true
.[Smith] = true
.[principals] = false
.[damage] = true
The setting type determines what values are allowed. In all cases, white-space after the equals symbol (=) and
at the end of the line is ignored. Where the value includes spaces, double quotes can be used to delimit the value.
For string, file and directory setting types the value can contain dynamically resolved tokens:
Environment variables using ${NAME} syntax e.g. [source file] = ${TEMP}/myfile.odb
The current macro directory can be referenced with <%macro_dir> e.g. [source file] = <%macro_dir>/myfile.odb
Paths are automatically resolved to the correct / or \; any drive letter is stripped on Linux, and on Windows a missing
drive is replaced with the setting [UNIX drive] (which defaults to c:)
For arrays of settings such as for groups and materials, the settings can be accessed using (number) e.g. the first
groups algorithm can be set using [groups(1).algorithm] = “WeldLife”
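Drawing the above together, a settings block in a macro might look like the following sketch (the file name, plot choices and group algorithm shown here are hypothetical and for illustration only):

[source file] = <%macro_dir>/myfile.odb
with [job.exports.plots]
.[Haigh] = true
.[damage] = true
[groups(1).algorithm] = “WeldLife”

This combines the with syntax, the <%macro_dir> token and an array access, all as described above.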
Note: the examples below assume that the fe-safe application is run by typing fe-safe_cl on the command line.
On Windows platforms this is found in the fe-safe installation exe sub-directory.
On Linux platforms there is a script in the base fe-safe installation directory called
fe-safe_cl – see section 3.
On Windows platforms, if the main fe-safe executable fe-safe.exe is used for macro or batch processing instead of
fe-safe_cl.exe, a dialogue pops up showing the command being executed. Any messages generated are displayed
in the pop-up console window. This is a legacy option and does not support all available features.
Each command-line parameter that has a value is of the format parameter=value. If value contains any spaces
it should be surrounded by double quotes e.g. macro=”c:\My Documents\My Macro.macro”.
File references should include the full path. On Windows the path should include the drive letter, e.g.:
Note: While macros run by executables fe-safe and fe-safe_cl require commas between parameters (see
Section 23.1.2), the command line does not need them (though it is unaffected by them).
Command-line parameters fall into two categories: process commands and optional parameters. The supported
process commands are:
If a macro (macro=), an FE model load (j=), a Verity analysis (v=) or a fatigue analysis (b=) is specified, the
command(s) will be processed. A macro command cannot be run with any other command line parameters; all
other parameters will be ignored except –project, –h and –v (see below).
The other three commands may be specified in any order, but will always be executed in the order of: loading the
FE model, performing a Verity analysis then performing a fatigue analysis.
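As a sketch of such an invocation (the file names here are hypothetical), a single command line specifying a model load, a fatigue analysis and an output file might look like:

fe-safe_cl j=/data/test1.fil b=/data/job.stlx o=/data/results.csv

Whatever order the parameters are typed in, the FE model is loaded first and the fatigue analysis is run afterwards, as described above.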
If no processes are specified, fe-safe will display the help screen.
Referencing a project definition file (*.stlx) using the fatigue analysis parameter (b=) will cause the loaded settings
to overwrite the current project and job settings. As the file is opened, any paths defined in the file are interpreted
assuming the following path hierarchy:
Absolute path (as defined in the .stlx file)
Location of the .stlx file
Current project path
Any paths defined in the referenced .ldf file will also be interpreted in a similar way and the loading definition will
then be saved as the new current.ldf (for the current job).
Legacy Keyword format and Stripped Keyword (*.kwd and *.xkwd) files can also be used as the value of the
fatigue analysis parameter (b=) from analyses completed in an earlier version of fe-safe.
<kwd>=<value> Overrides the setting having legacy keyword <kwd> with <value>
-import_project <ProjectArc> Imports the project archive into the current project directory; it will overwrite any
existing files. <ProjectArc> can be relative to the current working directory.
-macro_check=<checktype> Checks that the macro can be successfully run, rather than executing it
-macro_exit=<condition> Sets the condition for stopping execution (or checking) of a macro
material=<mode> Forces material data to ‘refresh’ from the database, use ‘cached’ data from the .stlx file, or
‘auto’ decide (default)
-overwrite_project When importing a project archive with the –project option, any existing files will be overwritten.
-project <ProjectArc> Overrides the project directory to <ProjectArc> stripped of its file suffix, and imports the
project archive into the new project directory. The import will abort if there are existing files.
<ProjectArc> can be relative to the current working directory.
-project <ProjectDir>
The current location of the project directory is overridden, see section 5 for more details.
[<setting>]=<value>
Changing a setting value can be done via the [<setting>]= command, however there are a number of restrictions
compared to changing a setting from within a macro:
Setting names containing spaces must be enclosed in double quotes
Accessing setting arrays (e.g. the groups) can only be done via index and not via a name
Spaces prior or following the = are not allowed
Values with spaces in must be enclosed in double quotes
Using a comma, double quotes or any platform special characters in a value is not possible
<kwd>=<value>
Changing a keyword value can be done via the <kwd>= command, group keywords are set using the suffix .n for
group n e.g. MyKeyword.3=MyValue will set keyword ‘MyKeyword’ in group 3 to ‘MyValue’. If a keyword file is
loaded, any keyword set on the command line takes precedence.
-l <location>
This can be used to redirect the licence server location for the session. A hostname (or IP address) should be
passed, with an optional port number (e.g. MYHOSTNAME@7171).
log=<logfile>
During an analysis with the b= command, the log file can be redirected with the log= command rather than its name
being paired with that of the output file.
-macro_check=<checktype>
This option can be used to check the macro for several types of errors. Running a check will not change project
settings or create any files. The following checks are supported:
Check for syntax errors using -macro_check=syntax. These errors include unknown commands and
formatting errors. By default a syntax error will cause the macro check to stop; see -macro_exit.
Check semantic errors using -macro_check=semantics. This includes checking that command arguments
don’t conflict, for the existence of input files and that output file names are viable. Note that for complex
commands some of these types of errors will only be detected when executing the command. Checking for
semantic errors will also check for syntax errors.
Check licensing errors using -macro_check=licence. This checks for basic licensing requirements. These
do not include add-ons used in a fatigue analysis, as the settings are not changed and so cannot be used to
determine the state when all commands would have been run. Checking for licensing errors will also check for
semantic and syntax errors.
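For example, to check a macro for semantic (and therefore also syntax) errors without executing it, a command of the following form could be used (the macro path is hypothetical):

fe-safe_cl macro=/data/my_macros/test.macro -macro_check=semantics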
-macro_exit=<exitcondition>
This option can be used to change the condition under which a macro run (or check) is stopped:
To continue to the end of a macro regardless of any errors, use -macro_exit=macro_end.
To stop running a macro when a syntax error is encountered, use -macro_exit=syntax_error. This is the
default.
To stop running a macro when a semantic error is encountered, use -macro_exit=semantic_error. This
will also stop if a syntax error is encountered.
To stop running a macro when a licensing error is encountered, use -macro_exit=licence_error. This
will also stop if a semantic or syntax error is encountered.
To stop running a macro when macro command fails, use -macro_exit=execute_error. This will also
stop if a licensing, semantic or syntax error is encountered.
material=<mode>
Forces the material data:
to be reloaded from the relevant database (material=refresh) – any material keywords set on the command
line will be ignored;
to use data from the .stlx file (material=cached) – required properties can be modified from the command
line;
to be reloaded from the relevant database unless material keywords are set on the command line, in which
case data from the .stlx file will be used (material=auto). This is the default option.
mode=<mode>
Specifies that the model(s) being opened with the j= command should be loaded:
via the Rotate module (mode=rotate);
with the geometry information of the first model (mode=geometry);
with geometry and surface-detection (mode=surface);
as frequency-domain data (mode=psd).
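For example, to load a model with geometry and surface detection, a command of the following form could be used (the file name is hypothetical):

fe-safe_cl j=/data/test1.fil mode=surface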
o=<outputfile>
When performing the analysis with the b= command the output file name specified in the keyword file is overridden.
This will also override the analysis log file name unless the log= command is used.
-s
Stops most messages from being echoed to the console, though some errors will still be displayed.
-v
Displays the version of fe-safe and exits.
-w
Causes fe-safe to wait for the <Enter> key to be pressed before exiting.
-timeout <minutes>
The default command-line fe-safe licence time-out is 15 minutes; this option can be used to change the time-out to
<minutes> minutes. This changes the timeout for all future runs.
Example 1:
The macro script in Example 2, above, was run from a Windows command prompt, using:
fe-safe_cl macro=c:\my_macros\macro_02.macro
Example 2:
The following command is entered in a Linux console window at the shell prompt:
This loads the project definition file /data/test.stlx, then loads FE analysis results from the two FIL
model files, test1.fil and test2.fil. Fatigue analysis results are written to the file /data/res.csv.
The program exits when the analysis is complete.
Example 3:
The following command is entered in a Linux console window at the shell prompt:
This reloads the FE analysis results referenced in the project definition file /data/test.stlx, applies
the settings in the project definition except that the elastic modulus for element group 2 is modified to
200000 using the ELASMOD keyword.
where fe-safe_cl is the fe-safe_cl.exe executable, or an alias to the script fe-safe_cl (see 23.2, above).
The script my_batch_linux.sh can be run from the command line by typing:
./my_batch_linux.sh
As the script is executed, each command line launches an instance of fe-safe, executes the analysis then shuts
down fe-safe before the next line executes. So, in this example fe-safe would be launched and shut down four
times.
“\data\test_models\model_01.fil” or “C:\data\test_models\model_01.fil”
OR
”c:\My Documents\projects\project_1\model_01.fil”
OR if c:\My Documents\projects\project_1 is the current project
”.\model_01.fil”.
The files will be pre-scanned in the order of keyhole_01.fil followed by keyhole.op2. If geometry
import or surface detection options are requested in the fatigue analysis (using the mode= parameter) the
required data would be loaded from the first model, if available.
Those commands do not load any data into fe-safe immediately – appropriate datasets must next be
selected and then opened.
The selected datasets are all of the steps in the pre-scanned file, less the last step. The parameter step
and the value last are part of a list of parameters and values that can be used with the select or
deselect commands:
Parameter Value
step Step number n, step name, first, last or all
inc Increment number n, first, last or all
time Time t, first, last or all
ds Dataset number n or dataset name, first, last or all
source The file name filename of a file pre-scanned using the pre-scan files command, including
the full path, e.g.: "c:\my_files\*.fil", if more than one file was pre-scanned.
Alternatively use first, last or all
type Result type: all, stress, strain, force, temperature, history, misc and/or custom(CustomName)
Note: Number n can be an integer n, or a range n-m, e.g.: 2-25 or 1-6(2).
Names name or filename are case sensitive and are set as text strings within double quotes,
with optional ‘*’ wildcards, e.g.: “*heat*”.
Time t must be a real number and include the decimal point, even for 0, e.g.: 0.0
Custom variable CustomName refers to the data type name used in CMF algorithms.
For example:
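As a sketch of the syntax described above, selecting all of the steps in the pre-scanned file and then deselecting the last step might be written (the dataset choices are illustrative):

pre-scan select step all
pre-scan deselect step last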
Optionally, select and deselect commands can be used with two special qualifiers, geometry and
detect-surface. These can be used to control geometry-reading and surface-detection in the same
way as in the pre-scan dialogue, see section 5.7.2. For example:
The position command is used to control the position from which data is read from FEA result files. Available
arguments are: elemental, nodal, integration, centroidal or element-and-centroidal.
For example:
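As a sketch (the choice of position is illustrative), reading data at the nodal position might be written:

pre-scan position nodal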
The above commands do not load any data into fe-safe immediately – appropriate datasets must next be
opened.
The optional append command can be used to append additional datasets to the datasets already opened, for
example:
“\data\test_models\model_01.fil” or “C:\data\test_models\model_01.fil”
OR
”c:\My Documents\model_01.fil”.
To load an existing fe-safe ASCII (*.csv, *.txt, *.asc) or binary (*.grp) group file, a groups token should be used,
followed by the load command, appropriate filename and optional parameter defaulttype, to identify whether
the group contains nodes or elements. If the group type is not set it will default to elemental. For example:
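As a sketch of the syntax described above (the file name is hypothetical, and the exact argument layout should be checked against your fe-safe version), loading an ASCII group file and treating its contents as nodal might be written:

groups load “/data/my_groups.csv” defaulttype nodal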
Group definitions from FEA model files are automatically extracted when such files are loaded into fe-safe. To load
an FEA model file the following command can be used:
fe-safe j=/data/test1.fil
To save existing groups, the groups token should be used, followed by the save command, optional parameter
binary, to control whether group information is to be saved to a binary (*.grp) file, and the target filename. For
example:
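As a sketch (the file name is hypothetical), saving the current groups to a binary group file might be written:

groups save binary “/data/my_groups.grp”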
To select, deselect, and remove groups from the group parameters list, the groups token should be used, followed
by the list command, the select, deselect or remove parameter and a group name. For more information on
managing groups see section 5. Group names are case sensitive and are set as text strings within double quotes,
with optional ‘*’ wildcards, e.g.: “GROUP*”. An all operator can be used instead of a group name to manage all
existing groups. For example:
fe-safe j=/data/test1.fil
groups list select all
Selects all loaded groups for the fatigue analysis.
fe-safe j=/data/test1.fil
groups list select “GROUP2”
groups list select “GROUP*”
Selects a group named GROUP2, followed by all other groups with names starting in GROUP for the fatigue
analysis.
Note: Selection order dictates positions of the selected groups in their parameters list. For more information see
section 5.
fe-safe j=/data/test1.fil
groups list deselect “GROUP3”
A group named GROUP3 will not be used to set the fatigue analysis options.
To create new groups, the groups token should be used, followed by the create command, the name of the new
group and then the equation representing the new group’s contents (this is identical to the contents when
creating an advanced group, see section 5). Optionally the create command can be followed by ,
type=elemental or , type=nodal – this sets the default group type and is required when specific items are used
to identify the type. For example:
groups create “NewGroup” “GroupA AND GroupB”
This will create a group called NewGroup containing items common to both group GroupA and group GroupB.
There are a number of special identifiers that can be used to specify mesh based groups:
Identifier name (case-insensitive) Definition
from_mesh(all) All elements if default group type is elements, otherwise
all nodes
from_mesh(elements) All elements
from_mesh(nodes) All nodes
from_mesh(surface) All surface elements if default group type is elements,
otherwise all surface nodes
from_mesh(solids) All solid elements
from_mesh(shells) All shell elements
from_mesh(brick) All brick elements
from_mesh(wedge) All wedge elements
from_mesh(octahedral) All octahedral elements
from_mesh(pyramid) All pyramid elements
from_mesh(tetrahedron) All tet elements
from_mesh(quadrilateral shell) All quad shell elements
from_mesh(triangular shell) All triangular shell elements
from_mesh(quadrilateral) All quad elements
from_mesh(triangular) All triangular elements
from_mesh(beam) All beam elements
from_mesh(conn) All connector elements
from_mesh(unsupported) All unsupported/unclassified elements
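For example, combining the create command with the mesh identifiers above, a group of all surface solid elements might be defined as follows (the group name is illustrative):

groups create “SurfaceSolids” “from_mesh(surface) AND from_mesh(solids)”, type=elemental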
This command changes the current project to <Project Directory>. If this is not an existing project, a new project will
be created. If for any reason the specified project directory is invalid, e.g. permissions restrictions, the project will
not be changed.
This command creates a new project; the directory can either be specified using the optional parameter <Optional
Project Directory> or based on the <Project Archive> file path, stripped of all extensions. The archive is then
extracted to the new project, which then becomes the current project.
If the new project directory is invalid or would cause any files to be overwritten, the operation is aborted and no
change will occur.
This command imports the project settings file and replaces settings values for all settings listed in the settings file.
If all other settings should be at their defaults, call CLEARKWD first.
This command imports the archive into the current project; any existing files will be overwritten.
This command exports the project to an .stlx file. The optional Project can be replaced with User for the user settings.
This command exports the current project to <Project Location>. The project location is treated as a directory if it is
an existing directory or the file path is not an existing file and it does not end in 7z. If this is the case, the export will
be treated as a project copy to the directory, otherwise the project location is treated as the file name of a project
archive to be created. In either case, if there are files that exist that would be overwritten, the export is aborted – this
can be prevented by calling macro command rm or rmdir to remove any existing file or directory.
There are several categories of project file. By default, all except any external FE models are exported. If there is
missing project model data (e.g. no FESAFE.FED), then any external FE models will be selected instead.
Files external to the project that are selected for export will be copied to a location relative to the exported project,
e.g. exporting to c:\Archive\project_01 will cause external files to be copied to c:\Archive\project_01\external_files
(or one of its subdirectories). The exported project settings will reflect the new relative locations which the external
files are now in.
Optionally the categories of project files selected to be exported can be changed using the token names:
Macro token name Affected project files
Project The project settings and all miscellaneous project files
Fe_model The source FE models
Datasets The loaded datasets
Mesh The loaded mesh
Groups The loaded groups
Job The job settings and all miscellaneous job files
Loading Any used ldf or hldf file in the project directory
Histories Any history files used that are in the project directory
External_Loading Any used ldf or hldf file outside the project directory
External_Histories Any history files used that are outside the project directory
Results The results stored in the intermediate results file
Exported_Results The results exported to the target output file
Export_Prereq The file required to create the exported results file
Result_Diag The diagnostic results files created during the analysis
Categories are separated from the Export command and each other by commas, e.g.
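As a sketch (the target path and chosen categories are illustrative, and the exact command form should be checked against your fe-safe version), an export restricted to the project settings, job settings and results might be written:

export c:\Archive\project_01, Project, Job, Results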
Error: Couldn't parse line copy again.log again3.log, correct format is:
[cp|copy] <source>, <dest>[, <error if source does not exist=No/Yes>[, <remove
destination if already exists=No/Yes>]]
In this case, the error arises because the arguments of a macro command must be separated by commas. To
correct the problem, add the comma in line 1 of the macro file and try again.
The source and destination arguments are required. These must be files, not directories. Fully-qualified paths or
relative path references can be used. For instance:
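Based on the format shown above (the file names are illustrative):

copy again.log, again3.log
cp c:\data\run.log, c:\backup\run.log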
The argument to display errors is specific to cases when the file does not exist; the default is No. Syntax errors will
still be displayed. For instance:
An error caused by the source not being present will not stop the macro from processing additional lines. For
instance:
The argument to overwrite is also described as an option to remove the destination file if it already exists; the
default is No.
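Based on the format shown above, the following sketch (file names illustrative) would copy the file and remove an existing destination first, without raising an error if the source were missing:

copy run.log, backup.log, No, Yes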
[del|rm] <file>
Fully-qualified paths or relative path references can be used. Wildcards such as * cannot be used, for instance to
remove all files in the project directory or to remove all *.log files in a subdirectory.
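For instance (file names illustrative), each of the following removes a single named file; a wildcard such as *.log would not be accepted:

del results_old.csv
rm c:\data\temp\scratch.log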
A syntax error will be caused but the macro will continue processing commands if the argument is a directory, not a
file. For instance:
In the example above, one line in the macro failed, but the remaining lines were completed and the macro ran to
completion.
Fully-qualified paths or relative path references can be used. Wildcards such as * cannot be used, for instance to
remove all directories in the project directory or to remove all subdirectories in a fully qualified path.
The argument to display errors is specific to cases when the directory does not exist; the default is No. Syntax
errors will still be displayed.
Fully-qualified paths or relative path references can be used. Wildcards such as * cannot be used, for instance to
remove all directories in the project directory. Additional directories to remove may optionally be specified.
Using fe-safe, these variations can be accounted for through the capabilities of nodal property mapping:
Material properties can be defined independently for each node on the model using property mapping.
The property map can include material properties for all or just part of the model, e.g.: a heat treated region of a
shaft. If properties for a node are not specifically included in the property map, then the properties of the
material that are set in the Group Parameters region (see section 5) will be used based on the group the node
is part of.
A property map does not have to include all material properties – just those that vary spatially. For example, it is
possible that only a mechanical property such as UTS is affected. Alternatively, a fatigue property such as the
tabular stress-life endurance curve may be affected. All other properties for the node will come from the
material set in the Group Parameters table (see section 5 for details) based on the group the node is part of.
Any material parameter defined in fe-safe can be used in a property map (see section 8 for details). The effect
of the mapped property on fatigue results will depend on how each property is used in fe-safe, for instance UTS
is frequently used to determine surface finish factor. See sections 14 and 15 for fatigue analysis of Elastic and
Elastic-Plastic FEA results respectively.
Temperature-dependent variation with property mapping is comprehensive and powerful:
o Not all nodes have to use temperature-dependent properties, and those that do can have a different
number of temperatures listed. Properties will be interpolated as described in section 8.
o The nodal properties can be temperature-dependent, even if the main properties for the material are
not temperature-dependent. For example, the nodal property map may contain temperature-
dependent UTS and nothing else, whereas the properties of the material defined for the group that the
node belongs to could be defined only at one temperature (for instance at room temperature). In this
example, temperature-dependent UTS will be used, even though the other properties are isothermal.
o Such variation makes use of existing conventional high temperature fatigue in fe-safe (see section 18
for details).
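As a rough illustration of how a temperature-dependent mapped property might be interpolated (the precise scheme used by fe-safe is described in section 8, so this sketch is illustrative only, and the function name is hypothetical), the following linearly interpolates a mapped Young's modulus between the listed temperatures, clamping outside the defined range. The node-550 values are taken from the examples later in this section:

```python
def interp_property(temps, values, t):
    """Piecewise-linear interpolation of a temperature-dependent
    property; values are clamped outside the defined range."""
    if t <= temps[0]:
        return values[0]
    if t >= temps[-1]:
        return values[-1]
    for i in range(len(temps) - 1):
        if temps[i] <= t <= temps[i + 1]:
            frac = (t - temps[i]) / (temps[i + 1] - temps[i])
            return values[i] + frac * (values[i + 1] - values[i])

# Young's modulus (MPa) for node 550, defined at three temperatures (deg C)
temps = [20.0, 200.0, 350.0]
e_vals = [203000.0, 190820.0, 168490.0]
print(interp_property(temps, e_vals, 275.0))  # midway between 200 and 350 deg C
```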
The nodal properties will appear in the tree view in the Current FE Models window. Beneath the Nodal Properties
heading, the path and name of the opened file are shown, as well as the first node defined, with the properties list for
that node. Opening an NPD file enables nodal property mapping by setting a keyword (NODALPROPS=) that references
the fully qualified path and file name of the *.npd file. This keyword can be used to reference nodal property definitions
during command-line or macro analyses; for more information see section 23.
By selecting Close Nodal Properties from the pop-up menu this information is removed from the tree and the
analysis keyword is cleared. Note that when a new FE model is opened the nodal properties are automatically
cleared.
Figure 24.3-1
Once the geometry has been read a summary will appear in the “Open FE Models” model tree as shown in
Figure 24.3-2:
Figure 24.3-2
The metadata section should contain entries describing all properties to be modified using the nodal property
file and the syntax should be identical to the corresponding entries of the material properties of interest in the
*.template file.
The NPD file includes sections for:
Temperature metadata (optional)
List of temperatures (optional)
Nodal property metadata
List of nodal properties (varying by temperature if specified)
The recognised keywords, in the order in which they should appear in an NPD file, are shown in the examples below.
For the optional temperature section these are TEMPERATURE_METADATA_START, TEMPERATURE_METADATA_END,
TEMPERATURE_LIST_START and TEMPERATURE_LIST_END.
Each metadata section can contain a metadata definition of one or more material parameters. For the Temperature
metadata section, this is limited to the Temperature_List variable only. For the Nodal metadata section, any variable
(including Temperature_List) that can be included in a material database in fe-safe can be referenced. This is done
by accessing the metadata section from an existing material database file (*.dbase).
For example, many commonly used material properties for fatigue analysis in fe-safe are included in the local
database, accessible in the Local Directory as an ASCII file <LocalDir>\local.dbase. A copy of the local
database can be made and accessed to find examples of metadata lines corresponding to material properties of
interest for property mapping. Find each variable on its own line, and copy the lines of interest to build the metadata
sections of a nodal property definition (NPD) file.
The first column in the table of nodal properties contains each node number for which nodal properties are defined;
subsequent columns (tab-delimited) should contain the relevant data in the same order as defined in the metadata
section. If a temperature list is specified, multiple values for each variable, corresponding to the
temperatures, should be listed in space-delimited form.
Below are a few examples to show the use of metadata and the corresponding values listed at a short subset of
nodes in an FE model. The example files are available from the directory <DataDir>\NPD and can be opened
using the right-mouse button in the Current FE Models window and selecting Open Nodal Properties....
#<document link title> <keyword> <unused> <display text> <units> <size> <Extra_info>
NODAL_METADATA_START
BSName STANDARD_&_GRADE UNUSED BSName None 72
E E UNUSED ~~gen~:~E MPa 32000 "Edit=Table2d,...
UTS UTS UNUSED ~~gen~:~UTS MPa 32000 "Edit=Table2d,...
NODAL_METADATA_END
Note: some lines above have been truncated to fit the page. A sample file can be found in the directory
<DataDir>\NPD to examine the full metadata definition for each parameter, and consider the impact of tab and
space delimiting.
The lines in the nodal metadata section came from a material template file and reference the two variables: Young’s
Modulus (E) and Ultimate Tensile Strength (UTS) in the nodal list.
The first column in the table of nodal properties contains labels that are the node numbers, subsequent columns
(tab-delimited) should contain the relevant data in the same order as defined in the metadata section. For example,
for node 550 the Young’s modulus was set to 203000 MPa and the Ultimate tensile strength was set to 400 MPa.
These columns were separated from each other by tabs.
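A minimal sketch of parsing such a nodal list row (tab-delimited columns; variable order taken from the metadata section, here E then UTS). The function name and dictionary layout are illustrative, not fe-safe internals:

```python
def parse_nodal_list(lines):
    """Parse tab-delimited nodal-list rows: the node number comes
    first, then one value per metadata variable (here E, then UTS)."""
    props = {}
    for line in lines:
        node, e, uts = line.rstrip("\n").split("\t")
        props[int(node)] = {"E": float(e), "UTS": float(uts)}
    return props

# Node 550: E = 203000 MPa, UTS = 400 MPa (tab-separated columns)
print(parse_nodal_list(["550\t203000\t400"]))
```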
Once opened in the GUI the metadata above, and values for the first node defined in the Nodal List are shown in
the Current FE Models window as follows in Figure 24.4-1:
Figure 24.4-1
24.4.3 Example 2 - NPD file with temperature list defined for all nodes
For Nodal List data, when a temperature list has been defined, each variable column holds one value per
temperature. This means that for the example below, at node 550, Young's Modulus (E) is defined at the three
temperatures (20, 200, and 350) as (203000, 190820, and 168490) respectively.
An example nodal property definition with a defined temperature metadata and temperature list is shown below;
note that some lines have been truncated to fit the page:
#<document link title> <keyword> <unused> <display text> <units> <size> <Extra_info>
TEMPERATURE_METADATA_START
BSName STANDARD_&_GRADE UNUSED BSName None 72
Temperature_List TempList UNUSED ~~gen~:~Temperature~List deg.C 200 "Edit=...
TEMPERATURE_METADATA_END
#<List of temperatures>
TEMPERATURE_LIST_START
Temperatures 20 200 350
TEMPERATURE_LIST_END
#<document link title> <keyword> <unused> <display text> <units> <size> <Extra_info>
NODAL_METADATA_START
BSName STANDARD_&_GRADE UNUSED BSName None 72
E E UNUSED ~~gen~:~E MPa 32000 "Edit=Table2d,...
UTS UTS UNUSED ~~gen~:~UTS MPa 32000 "Edit=Table2d,...
SN_Curve_S_Values SN_Curve_S_Values UNUSED ~sn~curve~:~S~Values MPa 32000...
SN_Curve_N_Values SN_Curve_N_Values UNUSED ~sn~curve~:~N~Values nf 32000...
NODAL_METADATA_END
Note: some lines above have been truncated to fit the page. A sample file can be found in the directory
<DataDir>\NPD to examine the full metadata definition for each parameter, and consider the impact of tab
and space delimiting and parentheses.
The lines in the temperature and nodal metadata sections came from a material template file and reference the
variables in the temperature and nodal lists. In this example a temperature list of 20, 200, 350 degrees C was
defined (note the list is space delimited).
All nodal properties should contain space-delimited lists of data corresponding to the temperatures in the
temperature list. Multi-dimensional properties (e.g. S-N curve datapoints) should be grouped by temperature in
parentheses, and each group should be tab-delimited. For node 550, for example, Young's Modulus (E) was set
to 203000 MPa at 20 degrees C, 190820 MPa at 200 degrees C, and 168490 MPa at 350 degrees C. The values
in this list were separated from each other by spaces, while the list of moduli was separated by tabs from the
column indicating the node number and from the list of Ultimate Tensile Strengths. Tabular stress-life data for
node 550, for example, was defined as 400 MPa at 1e4 cycles and 400 MPa at 1e7 cycles, for 20 degrees C.
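The delimiting rules above can be sketched as follows: columns are tab-delimited, and each scalar property column carries one space-delimited value per temperature (the parenthesised multi-dimensional groups are not handled in this sketch; names are illustrative):

```python
def parse_temp_row(line):
    """Split one nodal-list row: tab-delimited columns, where each
    scalar property column holds one space-delimited value per
    temperature in the temperature list."""
    cols = line.rstrip("\n").split("\t")
    node = int(cols[0])
    per_temp = [[float(v) for v in col.split()] for col in cols[1:]]
    return node, per_temp

# Node 550: E at 20, 200 and 350 deg C (a single tab-delimited column)
node, (e_list,) = parse_temp_row("550\t203000 190820 168490")
print(node, e_list)
```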
Once opened in the GUI the metadata above, and values for the first node defined in the Nodal List are shown
in the Current FE Models window as follows in Figure 24.4-2:
Figure 24.4-2
24.4.4 Example 3 – NPD file with temperature lists varying at each node
An alternative approach to defining temperature-dependent material properties is to omit the separate
temperature list and specify a different temperature list for each node, as follows; note that some lines have
been truncated to fit the page:
#<document link title> <keyword> <unused> <display text> <units> <size> <Extra_info>
NODAL_METADATA_START
BSName STANDARD_&_GRADE UNUSED BSName None 72
Temperature_List TempList UNUSED ~~gen~:~Temperature~List deg.C 200...
E E UNUSED ~~gen~:~E MPa 32000 "Edit=Table2d,...
UTS UTS UNUSED ~~gen~:~UTS MPa 32000 "Edit=Table2d,...
NODAL_METADATA_END
Note: some lines above have been truncated to fit the page. A sample file can be found in the directory
<DataDir>\NPD to examine the full metadata definition for each parameter, and consider the impact of tab
and space delimiting.
The lines in the nodal metadata section (including a temperature list variable) came from a material template
file and reference the variables in the nodal list. In this example a temperature list of 20, 200, 350 degrees C is
defined for node 550 only, and a different temperature list is specified at each node in the nodal list.
All nodal properties should contain space-delimited lists of data, and the columns following the temperature list
should correspond respectively to the temperatures in the temperature list column. Multi-dimensional properties
(e.g. S-N curve datapoints) should be grouped by temperature in parentheses, and each group should be tab-
delimited. For node 550, for example, Young's Modulus (E) was set to 203000 MPa at 20 degrees C, 190820 MPa
at 200 degrees C, and 168490 MPa at 350 degrees C. The values in this list were separated from each other
by spaces, while the list of moduli was separated by tabs from the column indicating the node number and from
the list of Ultimate Tensile Strengths. To show the flexibility of varying temperature lists, node 163 had moduli
defined at 40, 250, and 400 degrees C instead.
Note that in Example 3, each node includes a temperature list of three temperatures. In fact, each node can
have a different number of temperatures in its list; in such a case, the data in each column varies accordingly.
A sample file can be found in the directory <DataDir>\NPD to examine an example in which the temperature
lists have a different length for each node.
Once opened in the GUI the metadata above, and values for the first node defined in the Nodal List are shown
in the Current FE Models window as follows in Figure 24.4-3:
Figure 24.4-3
fe-safe supports finite life analysis using the TCD, and it is selected on the Group Algorithm Selection dialog in a
similar way to infinite life. But because fe-safe does not support a life-dependent critical distance, its use must also
be explicitly enabled for the project using an option under the Analysis Options dialog (see next section). We also
recommend that a full elastic-plastic FEA is used for such applications, as in [5]. When strain-based methods are used,
the strain tensors in an elastic-plastic loading will be interpolated at the TCD point in a similar way to the stress
tensors.
It is also possible to choose between the Point Method (default) or the Line Method using radio buttons on the TCD
section of the Algorithms tab of the FEA Fatigue >> Analysis Options dialogue window:
Enhanced safety factor calculations using TCD can be limited to nodes whose surface FOS/FRF lies between specified
values; nodes outside those values will be omitted from the calculation. By default the limits are applied
within the thresholds of 0 and 10 (for both FRF and FOS). Note that the FOS bands are configured separately under
the FOS sub-dialog of the same Algorithms tab, and the default FOS bands would mean all surface nodes
default to being processed through TCD.
Note: The bands are applied as strict inequalities, so if the upper limit is, say, 10 and the surface FRF is exactly 10
(possibly by applying a limit), then no TCD calculation is performed at this node, and so there will be no value in the
TCD contour. If a complete contour is required then deselect the checkbox, or apply an upper limit strictly
above that of the surface FRF or FOS, but be aware that this may increase run times significantly.
The TCD can be enabled for finite life methods (see 26.3), including strain-based algorithms, but it is recommended
that an elastic-plastic FEA is used [5]. The Allow TCD for finite life checkbox selects finite life TCD in principle for the
project, but individual groups must also have Run TCD selected with their algorithm in the same way as for infinite
life methods. If the project setting to Allow TCD for finite life is Off then a note will also be displayed below the Run
TCD checkbox to advise the user to allow finite life TCD under Analysis Options. A desired life range for the standard
surface life within which TCD will be run may also be selected. This allows items of already high life away from a
hotspot area to be ignored, which can significantly reduce analysis times. Also note that the finite life TCD will not be
run if the surface life is already at the material CAEL.
whereas factors below one will be under-estimated because L should really have been increased. fe-safe does not
model the variation of L with life, for which additional material parameters would be required, and a more complex
recursive calculation. Therefore, care must be taken in interpreting safety factor values for finite life analyses. A
pragmatic approach is to define L not strictly for the target life itself, but for an increased target life appropriate to the
desired safety factor. Assume that you have data representing L vs. Nf and that the S-N curve takes the form
Sa = C·Nf^(−1/m) near the target life. For a safety factor of 1.1 and a target life of 1E6, look up L for an increased
target life Nf = (1.1)^m × 1E6. For m = 3, this corrected life would be (1.1)^3 × 1E6 = 1.331E6. If the infinite life
value of L is simply retained, then the results will generally be conservative (at least for factors close to 1), as L
will have been under-estimated.
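The corrected-life lookup described above can be expressed directly; the function name is illustrative, and the exponent convention matches the S-N form Sa = C·Nf^(−1/m):

```python
def corrected_target_life(target_life, safety_factor, m):
    """Increased target life at which to look up L, so that the
    retained L suits the desired safety factor: Nf' = SF**m * Nf."""
    return (safety_factor ** m) * target_life

print(corrected_target_life(1e6, 1.1, 3))  # approx 1.331E6, as in the text
```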
Although TCD may be run on any FOS and most infinite life FRF-type analyses, note that the value of L used might
have to be adapted to the algorithm. As TCD is intended for high cycle fatigue it should generally be used with stress
methods such as Normal-Stress for brittle materials, or Susmel-Lazzarin or the Prismatic Hull for more ductile
materials. Furthermore, if L has been estimated under loading tests for normal stress using S-N curves, then it may
not be accurate for shear-oriented strain methods such as Brown-Miller.
For finite life analyses an elastic-plastic FEA is recommended [5], and any plastic strain datasets will be interpolated
(or integrated for Line Method) in the same way as the stress tensors. Finite Life analyses with TCD can be performed
using the following life algorithms:
Normal Stress
Normal Strain
Brown-Miller
Stress-based Brown-Miller
Maximum Shear
Cast Iron
26.5.1 TCD may not yet be used for the following methods:
FKM
Verity and other Weld methods
K = σ√(πa)
This parameter can be used to predict crack growth due to fatigue, which will only occur when the range ΔK of stress
intensity exceeds a threshold ΔKth, a material property that is constant for a given stress ratio R = σmin / σmax
= Kmin / Kmax. This property is defined in the fe-safe material database for the case of R = −1, corresponding to zero
mean stress, and is denoted taylor : Kthreshold@R:-1, in units MPa·m^1/2. fe-safe can use this to calculate the critical
distance parameter L (see section 26.10 below), or alternatively the critical distance can be directly specified as the
material property taylor : L (mm).
26.6.1 Mean stress correction
The Critical Distance calculation corrects stress-cycles for mean-stress effects using the same mean stress correction
(MSC) that was selected for the surface analysis. For more information on the mean stress corrections and required
material properties see sections 14 and 8 respectively.
26.7 Outputs
Critical Distance Radial FRF or FOS values are output as contour fields when their surface FOS/FRF factor
counterparts are calculated by an analysis and selected for export as contours. The worst Critical Distance factors
are also reported in the analysis summary:
Figure 26-4
Similarly the life (and damage) contours are also duplicated for the TCD if using a finite life analysis.
fe-safe optionally outputs two additional contours called CritDist-Success and CritDist-Diagnostics so that any
problem nodes can be identified. These can be selected via the Contours tab of the Exports and Outputs dialog,
opened via the Exports… button of the Analysis Settings tab of the Fatigue from FEA dialog. The difference between
the success and diagnostics contours is that the former gives a simple summary of success or failure, whereas the
latter gives detailed reasons for the failures. The coding of the success contour is
0 = Failure
1 = Warning (Calculation succeeded but with a warning)
2 = Success
A complete description of all the diagnostic codes is given in section 26.9 below. In brief, negative codes are used if
the calculation did not proceed at all (e.g. the node was out of the defined FRF band, or required material data was
unavailable), zero if the calculation succeeded, and positive for a warning or error encountered during the calculation.
A possible cause of error is when the stress gradient path leaves the model before reaching the specified critical
distance (e.g. when analysing thin structures).
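The sign convention for the diagnostic codes can be summarised in a small helper (illustrative only; the actual codes are listed in section 26.9):

```python
def classify_diagnostic(code):
    """CritDist-Diagnostics sign convention: negative codes mean the
    calculation did not proceed at all, zero means success, and
    positive codes mean a warning or error during the calculation."""
    if code < 0:
        return "not attempted"
    if code == 0:
        return "success"
    return "warning/error"

print([classify_diagnostic(c) for c in (-1, 0, 10)])
```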
If the Export Critical Distance summary checkbox has been selected on the Log tab of the Exports and Outputs dialog,
then a warning summary will appear at the end of the analysis giving the total number of problem nodes under each
category, and the corresponding diagnostic code. The diagnostic code may be useful when viewing the diagnostic
contour (details in section 26.9 below). If the Export Critical Distance summary checkbox has been selected, then
further details on each node with a Critical Distance warning or error are written to the fe-safe log file. Each such node
has a line in the log giving node ID, numeric diagnostic code and short text explanation. The number of nodes in any
failure category in this file is limited to 1000.
An example comparing the conventional surface FRF with the Critical Distance FRF contour is shown below for an
open-source crank throw model (Figure 26-5). It can be seen that the worst-case FRF region is improved on the
Critical Distance contour (the red hotspots disappear).
Figure 26-5
It is also possible to produce more detailed information about the calculation and the stress tensors
interpolated along the stress gradient path. Values are output at element boundaries. These additional outputs can
be selected for specific items by specifying the required item IDs in the List of Items tab in the Exports dialog box.
If the Critical Distance items checkbox is selected on the Log for Items tab, then further details will be written to the
log file detailing:
the surface node’s coordinates and surface-normal;
the elements intersected by the critical path, with topology information;
If the Critical Distance stress tensors checkbox is selected on the Histories for Items tab, then each node in the list of
items has a second plottable text file created, listing the six tensor components of the interpolated stress or strain for
each dataset in the loading, with the dataset number. These plottable text files likewise appear in the results directory
and start with the results output model name, followed by TCDTensor and the item ID (e.g.
crankshaft_Results.odb-TCDTensors_[0]e578.1.txt). Note that for a dataset sequence loading, this
essentially just duplicates the TCD version of the Load Histories file. The Load Histories is clearer on the separation
of the stress and strain datasets. The Critical Distance stress tensors output has been retained from the legacy TCD
implementation, but generally the use of Load Histories is recommended instead. We intend to supplement the Critical
Distance stress tensors in a future release, by adding tensors sequences along the critical path (e.g. at element
boundaries). Also note that under the Log for Items, the Dataset Stresses option (under Stress/Strain) outputs both
surface and TCD stress and strain tensors to the log file.
Otherwise the analysis proceeds but may fail with one of the following errors.
Error Code Value Meaning
Error_Path_Left_Model 10 The critical path appropriate to the method (PM or LM) normal to the node left the model (or entered a hole).
Error_Missing_Stress 11 An element along the path had no associated stress data (or not all its corner nodes had stress data).
Error_Bad_Mesh 12 There were mesh inconsistencies during ray tracing, or the surface normal never entered the model.
Error_Singular_Matrix 13 The geometry of an element led to a non-invertible matrix during stress interpolation, probably due to a collapsed element with several coinciding nodes.
Error_Msc_Failed 14 Mean stress correction failed.
Error_Internal_Problem 20 Any other undiagnosed problem.
Certain FE analysis packages may use quadratic elements but only export stresses at corner nodes. This still allows
a successful analysis (although with linear interpolation), but there may be some warnings in the log file indicating
nodes with missing stress values.
26.9.2 Visualising Problem Node Cases
As stated above, nodes where the Critical Distance calculation could not be performed will receive a diagnostic code
in the CritDist-Diagnostic contour (if this contour was selected for output on the Contours tab). The warning summary
at the end of the analysis will give the corresponding diagnostic code for each error type (if the Export Critical Distance
summary checkbox has been selected on the Log tab). If it is desired to investigate where these nodes are in the
model then a suitable post-processor may be used to view the contour. It will normally be necessary to adjust the
contour legend in the post-processor to highlight the error code of interest. For example when analysing a thread
model, fe-safe reported that there were several hundred nodes where the critical path left the model (diagnostic code
10). The results file was loaded into a suitable post-processing tool, where the diagnostic contour could be visualised.
The colour coding was adjusted to highlight the diagnostic code 10 value, and creating a plane cut through the model
showed that these problem cases lie along the edge of the thread where the critical distance is greater than the thread
thickness.
Note: Some post-processors may produce spurious local interpolation effects when displaying integer diagnostic
codes as floating point values. Averaging should be switched off if possible to negate these effects.
Figure 26.9-1 Diagnostic contour showing location of “path left model” nodes on a thread
Critical distances can vary from less than 0.1mm for high strength steels to 4mm for some grey irons.
For sharper notches (i.e. at higher values of Kt) there will be a bigger difference between the stresses at the surface
and the stresses at the critical distance. Hence there is more chance that the crack will not propagate. This difference
will be greater for lower strength materials because the critical distance is greater. Critical distance methods are
therefore most applicable to relatively sharp notches in cast irons, but may have an effect on other materials as well
depending on Kt.
The benefit of using critical distances is that higher stresses may be allowed: it may be possible to increase the
stresses to the value at which the crack will just not propagate. However, the designer is then moving from a
‘crack initiation’ design criterion to one in which small cracks are allowed.
Critical Distance methods are described in detail in Ref. 26.1. Critical distance parameters for many materials are
given in Ref. 26.2. If no critical distance (L) material property is specified in the material database (see section 8 of
the fe-safe User Guide), then the critical distance is calculated using the threshold value of the crack growth:

L = (1/π) · (ΔKth / Δσo)²

Where:
L is the critical distance for the material and
Δσo is the stress range at the constant amplitude endurance limit (CAEL) from a conventional uniaxial stress S-
N curve at zero mean stress. Note that even if L is instead specified as a material property, Δσo is still calculated
from the CAEL, as it is also needed for the FRF (or FOS) factor calculation.
A review of Critical Distance applications is given in Ref. 26.3.
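The critical distance relation L = (1/π)·(ΔKth/Δσo)² is straightforward to evaluate. The values below are hypothetical, chosen only to show the order of magnitude (ΔKth in MPa·m^1/2, Δσo in MPa, giving L in metres):

```python
import math

def critical_distance(delta_k_th, delta_sigma_o):
    """L = (1/pi) * (dK_th / d_sigma_o)**2."""
    return (delta_k_th / delta_sigma_o) ** 2 / math.pi

# Hypothetical steel-like values: dK_th = 10 MPa*m^0.5, d_sigma_o = 400 MPa
L_m = critical_distance(10.0, 400.0)
print(L_m * 1000.0)  # critical distance expressed in mm
```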
26.10.1 Cast Irons
There is an exception made in the calculation of Δσo when the Cast Iron algorithm is used. The Smith-Watson-
Topper (SWT) life curve is used instead of the S-N curve in such cases to convert the CAEL to Δσo.
The CAEL (n, say) is converted to an equivalent Grey Iron SWT value thus:
GSWT = A·n^b,
where A and b are the SWTLifeCurveCoeff and SWTLifeCurveExponent material properties (see section 8 of the fe-safe
User Guide), e.g. b = −0.25 for Downing : GreyIron.
Then, assuming elasticity in the SWT stress-strain product, fe-safe sets Δσo using Young’s modulus E as follows:
Δσo = 2√(E·GSWT)
This gives for the Cast Iron algorithm:

L = ΔK² / (4πE·A·n^b)
It is recommended that iron materials have the material database property for L explicitly specified whenever possible.
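The Cast Iron conversion, Δσo = 2√(E·GSWT) with GSWT = A·n^b and L = ΔK²/(4πE·A·n^b), can be sketched as follows. All numeric values here are hypothetical; property names follow the text:

```python
import math

def delta_sigma_o_cast_iron(E, A, b, n):
    """Convert the CAEL (n cycles) to a stress range via the Grey Iron
    SWT life curve GSWT = A * n**b, assuming elasticity:
    d_sigma_o = 2 * sqrt(E * GSWT)."""
    return 2.0 * math.sqrt(E * A * n ** b)

def critical_distance_cast_iron(delta_k_th, E, A, b, n):
    """L = dK_th**2 / (4 * pi * E * A * n**b)."""
    return delta_k_th ** 2 / (4.0 * math.pi * E * A * n ** b)

# Hypothetical grey-iron-like values
E, A, b, n = 100000.0, 10.0, -0.25, 1e7
print(delta_sigma_o_cast_iron(E, A, b, n))
```

Substituting the SWT-derived Δσo into L = (1/π)·(ΔKth/Δσo)² recovers the same result, so the two forms are consistent.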
26.10.2 Critical Distance Methods
The Point Method (PM) postulates that the condition for fatigue failure is that the stress range at a distance L/2 (critical
distance) from the crack tip exceeds the fatigue strength Δσo, the stress range that corresponds to infinite life
according to the material S-N curve, see Figure 26.6-2 below.
Thus, the stress range at a distance L/2 from the surface may be compared with Δσo to compute Fatigue Reserve
Factors (FRF) or Factors of Strength (FOS).
Similarly, the Line Method (LM) uses the mean of the stress-range integrated over a path of length 2L along the
normal to the surface, see Figure 26.6- below.
When using the Line Method the dataset tensors for each node are integrated along the line and used instead of the surface
dataset tensors when evaluating the loading in the fatigue calculations.
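The Line Method averaging can be sketched as follows, assuming the interpolated stress is sampled at a set of depths along the inward normal (trapezoidal rule; names and values illustrative):

```python
def line_method_mean(depths, stress, L):
    """Mean of the stress integrated over a path of length 2L along
    the inward surface normal (trapezoidal rule over the samples)."""
    total = 0.0
    for i in range(len(depths) - 1):
        total += 0.5 * (stress[i] + stress[i + 1]) * (depths[i + 1] - depths[i])
    return total / (2.0 * L)

# Linearly decaying stress from 300 MPa at the surface over 2L = 1.0 mm
print(line_method_mean([0.0, 0.5, 1.0], [300.0, 250.0, 200.0], 0.5))
```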
26.11 References
26.1. Taylor D. The Theory of Critical Distances. A New Perspective in Fracture Mechanics.
Elsevier, 2007
26.2. Susmel L. Multiaxial notch fatigue.
Woodhead, 2009
26.3. Susmel, L. (2008). The theory of critical distances: a review of its applications in fatigue. Engineering Fracture
Mechanics, 75(7), 1706-1724.
26.4. Susmel, L., & Taylor, D. (2012). A critical distance/plane method to estimate finite life of notched components
under variable amplitude uniaxial/multiaxial fatigue loading. International Journal of Fatigue, 38, 7-24.
26.5. Susmel, L., & Taylor, D. (2010). An elasto-plastic reformulation of the theory of critical distances to estimate
lifetime of notched components failing in the low/medium-cycle fatigue regime. Journal of Engineering Materials and
Technology, 132(2).
There is a choice of analysis algorithms to calculate expected life once a suitable PSD response has been
calculated:
The Dirlik algorithm [1] (this is the default)
The Tovo-Benasciutti algorithm [2-4] with fixed (per node) mean stress defined by residual stress.
The Tovo-Benasciutti algorithm [2-4] with randomly distributed mean stress centred on the specified
residual stress.
The Bendat method, intended for narrowband response PSDs.
The Steinberg method
The Wirsching-Light method (bandwidth correction to Bendat)
The Dirlik algorithm only considers cycle amplitudes, so if residual stresses are present one of the Tovo-Benasciutti
methods should be used. Note that if no residual stress is defined then the Tovo-Benasciutti algorithm will use a zero
overall mean, but even the fixed-mean option may still give slightly different results from Dirlik because the amplitude
distribution is slightly different.
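All of the frequency-domain damage methods above operate on spectral moments of the response PSD. A minimal sketch of computing the k-th moment from sampled PSD data (trapezoidal rule; the data values are hypothetical):

```python
def spectral_moment(freqs, psd, k):
    """k-th spectral moment m_k = integral of f**k * G(f) df,
    evaluated by the trapezoidal rule over the PSD samples."""
    total = 0.0
    for i in range(len(freqs) - 1):
        y0 = freqs[i] ** k * psd[i]
        y1 = freqs[i + 1] ** k * psd[i + 1]
        total += 0.5 * (y0 + y1) * (freqs[i + 1] - freqs[i])
    return total

# Hypothetical narrowband stress response PSD (MPa^2/Hz)
freqs = [10.0, 20.0, 30.0]
psd = [0.0, 2.0, 0.0]
m0 = spectral_moment(freqs, psd, 0)  # variance of the stress response
print(m0)
```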
Four methods are available for computing response PSDs:
Von Mises stress for ductile metals.
A normal-stress critical plane algorithm.
A shear-stress critical plane algorithm.
A combined shear and normal stress critical plane algorithm.
The normal-stress critical plane algorithm searches a full hemisphere, but to obtain a reasonable computation time,
the shear algorithm searches a more restricted set of critical planes, which are planes at 90 degrees or 45 degrees
to the surface normal. Since this implies that the surface normal at each node is defined, the shear stress PSD
algorithm can only be run on the surface group. The combined shear and normal stress algorithm is a kind of
modified shear algorithm. The set of evaluated critical planes is still exactly as for the shear case, but a contribution
of normal stress (projected onto each plane) is added with configurable weighting k (default 0.25).
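As an illustration only (the exact combination fe-safe applies is not documented here), adding a weighted normal-stress contribution with the default weighting k = 0.25 might look like:

```python
def combined_plane_response(shear_psd, normal_psd, k=0.25):
    """Illustrative sketch: add a weighted normal-stress contribution
    (projected onto the plane) to the shear response on each
    candidate critical plane. Not the documented fe-safe formula."""
    return [s + k * n for s, n in zip(shear_psd, normal_psd)]

print(combined_plane_response([4.0, 8.0], [2.0, 4.0]))
```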
There is also a special case of applying PSD methods to weld fatigue using modal structural stresses derived using
the Battelle Structural Stress method (Verity) applied to modal forces. This will be automatically selected when
applying PSD to a Verity-derived weld group. There is an option to pick the modal stresses from either normal or
shear (along weld line) structural stress.
The PSD approach may also be used in FOS calculations on expected life.
The fe-safe analysis procedure is shown in Figure 27.1-1. In summary, fe-safe processes the FEA results and the
user-supplied PSDs (and CSDs, if available) by calculating the response PSDs at each node. This response data is
then used by the fatigue damage algorithm.
Figure 27.1-1 Outline of data interaction during the fe-safe PSD calculation procedure (for a typical multi-channel,
single loading block example). Note that the purple text indicates input data, the green text indicates calculated data
and the red text indicates output.
Figure 27.2-1 Abaqus input file extracts to request the necessary output, including (a) modal stress data and (b)
fe-safe Generalized Displacement data for a PSD analysis.
The content of each .mcf file does not explicitly state the associated channel (loading location and direction). Such
information is necessary for a multi-channel PSD analysis since the calculations outlined in Figure 27.1-1 are
carried out on a per-channel basis. To overcome this problem the following naming convention must be obeyed. For
n channels there will be n .mcf files. It is expected that each .mcf file has a unique channel-specific number at the
end of its name, located between a ‘_’ and the file extension. It is assumed that such channel identifiers are
numbered in a continuous manner (from 1 to n); e.g. if a .rst file has two associated .mcf files then these files
should be called x_1.mcf and y_2.mcf (where x and y denote valid file names).
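The naming convention can be checked programmatically; a sketch (function name and file names illustrative):

```python
import re

def channel_number(filename):
    """Extract the channel identifier located between the final '_'
    and the file extension, e.g. 'x_1.mcf' -> 1."""
    m = re.search(r"_(\d+)\.[^.]+$", filename)
    if m is None:
        raise ValueError("no channel suffix in " + filename)
    return int(m.group(1))

# Sort hypothetical .mcf files into channel order 1..n
files = sorted(["lateral_2.mcf", "vertical_1.mcf"], key=channel_number)
print(files)
```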
Some users have reported that, when working in ANSYS Workbench, additional columns of data can appear in
ANSYS mcf files. It is believed that these represent an additional base motion of the structure, similar to the Abaqus
mode 0 which is generally ignored in PSD analyses. By default the extra data causes an error when loading the
model, as there is an inconsistent number of data columns. However it is possible to suppress the error and force fe-safe to
ignore the extra columns by selecting a checkbox on the ANSYS RST Interface Options dialog (accessed from the
FEA Fatigue menu), as illustrated below.
If this option is selected, then it is assumed that for n modes, the first n pairs of columns after the first (which is the
frequency column) represent the required MPF data, and any further columns are ignored; a warning is still given
on model load that extra columns were detected.
The content of each .pch file does not explicitly state the associated channel (loading location and direction). Such
information is necessary for a multi-channel PSD analysis since the calculations outlined in Figure 27.1-1 are
carried out on a per-channel basis. To overcome this problem the following naming convention must be obeyed. For
n channels there will be n .pch files. It is expected that each .pch file has a unique channel-specific number at the
end of its name, located between a ‘_’ and the file extension. It is assumed that such channel identifiers are
numbered in a continuous manner (from 1 to n), e.g. say an .op2 file has two associated .pch files then these files
should be called x_1.pch and y_2.pch (where x and y denote valid file names).
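The naming convention above can be checked programmatically before an analysis is set up. The following Python sketch (the function name and error messages are illustrative, not part of fe-safe) extracts the channel identifier from each file name and verifies that the identifiers run continuously from 1 to n:

```python
import re

def channel_ids(filenames, ext=".pch"):
    """Extract the channel number from names like 'x_1.pch', 'y_2.pch'
    (digits between the last '_' and the extension) and check that the
    ids run continuously from 1 to n.  Returns the names in channel order."""
    ids = {}
    for name in filenames:
        m = re.search(r"_(\d+)" + re.escape(ext) + r"$", name)
        if not m:
            raise ValueError(f"{name} does not follow the *_<channel>{ext} convention")
        ids[int(m.group(1))] = name
    if sorted(ids) != list(range(1, len(filenames) + 1)):
        raise ValueError("channel ids must be numbered continuously from 1 to n")
    return [ids[i] for i in sorted(ids)]
```

For example, `channel_ids(["load_2.pch", "base_1.pch"])` returns the names re-ordered by channel number.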
The complex Generalized Displacements being imported into fe-safe can be expressed in either polar or
rectangular form (this data will be converted to rectangular form for use in the PSD loading process in fe-safe). By
default, an Abaqus Steady State Dynamics Analysis exports such data in polar form, i.e. with modulus and
argument components (where the angles are expressed in degrees). Meanwhile, the default settings for an ANSYS
Harmonic Analysis or NASTRAN Frequency Response Analysis result in complex-valued data that is exported in
rectangular form, i.e. with real and imaginary components. With the above in mind, it is imperative that the
appropriate Complex number notation radio button is selected by the user.
Finally, in the Files that provide Power Spectral density (PSD) data section select the files containing PSD data.
Click OK, then the option to pre-scan the file will be displayed and the procedure for Selecting datasets to read will
proceed as with other pre-scanning operations (see section 5). Note that if applying PSD methods to welds using
the Battelle Structural Stress (Verity) method, it is also necessary to select Force datasets in the pre-scan to read in
the modal force datasets used in Verity.
Figure 27.2-2 Open Finite Element Model for PSD Analysis dialogue box.
\[
\begin{pmatrix}
PSD_{11}(f_j) & 0 & \cdots & 0 \\
0 & PSD_{22}(f_j) & \cdots & \vdots \\
\vdots & \vdots & \ddots & 0 \\
0 & \cdots & 0 & PSD_{nn}(f_j)
\end{pmatrix}
\tag{1}
\]
where j = 1, …, m and the 𝑃𝑆𝐷(𝑓) entries represent the PSD terms. Such data can be viewed as a single fe-safe
loading block (see section 13) and can be used in combination with the modal stresses and generalized
displacements in order to calculate the response PSD (per node).
where the 𝐶𝑟𝑜𝑠𝑠(𝑓) entries represent the complex CSDs in rectangular form, i.e. such data is assumed to have real
and imaginary components. If the user possesses CSD data then further cases, or loading blocks, may be
constructed by creating case-specific combinations of matrix (2). Note that the matrix is Hermitian [7]. So, given n
sets of PSD data, i.e. one set per channel, calculations can be implemented for any unique combination of cross
correlation components above the matrix diagonal, over m discrete frequencies.
To clarify the above, consider a three channel example where PSD spectra are provided over, say, 100 discrete
frequencies. Here, a loading block that neglects the contribution of the cross correlation terms will make use of the
diagonal terms only, i.e. the following matrix will be formed (at run-time)
\[
\begin{pmatrix}
PSD_{11}(f_j) & 0 & 0 \\
0 & PSD_{22}(f_j) & 0 \\
0 & 0 & PSD_{33}(f_j)
\end{pmatrix}
\tag{3}
\]
If CSD data is available (over the entire frequency range) then seven further loading blocks can be created by
considering any unique combination of the three components above the matrix diagonal, e.g.
Given suitable modal stress data, the FEA Fatigue analysis process can then be implemented (per loading block).
Note: Given matrix (2), it is possible to use the coherence function [8] to provide a quantitative estimate of causality
between two sets of PSD spectra (per loading block); i.e. at frequency j the cross correlation term at row p, column
q should satisfy
\[
0 \le \left| Cross_{pq}(f_j) \right|^2 \Big/ \left( PSD_{pp}(f_j)\, PSD_{qq}(f_j) \right) \le 1 . \tag{6}
\]
Failure to satisfy this inequality will indicate that unsuitable PSD data has been provided by the user.
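This coherence check can also be applied to user data before submitting an analysis. The Python sketch below is illustrative only (fe-safe performs its own validation); it tests inequality (6) at every supplied frequency:

```python
def coherence_ok(psd_pp, psd_qq, csd_pq, tol=1e-12):
    """Check the coherence inequality |Cross_pq(f)|^2 <= PSD_pp(f)*PSD_qq(f)
    at every frequency.  psd_pp and psd_qq are real PSD values; csd_pq holds
    the complex CSD values.  A small tolerance absorbs rounding error."""
    return all(abs(c) ** 2 <= p * q + tol
               for p, q, c in zip(psd_pp, psd_qq, csd_pq))
```

Data failing this check indicates an inconsistent PSD/CSD set, as noted above.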
To provide the necessary input data to fe-safe, the user must create a file for every loading block under
consideration, e.g. the cases characterised by matrices (3) to (5) would require three files (see section 27.3.1).
Each file should be an ASCII file using ANSI encoding with a .psd extension and should contain PSDs and CSDs (if
available) over the frequency range of interest, which in turn will indicate the matrix configuration (per loading
block). PSD files should be formatted as follows:
Related PSD and cross correlation data (for n channels) should be written in a single file.
Comment lines start with a ‘#’ and may be used throughout.
The first non-empty, non-comment line must be a header specifying the number of channels n:
Number of channels [|:|=] n
The following header lines are optional but, where used, must appear in the order listed. Whitespaces are
optional.
The names of the channels may be specified using
Channel names [:|=] "name1"[, "name2"[, "name3"...]]
i.e. a comma-separated list of names, each enclosed in double quotes. If they are specified, the number of
names must equal the number of channels and the names must be unique. Comparison is case-insensitive.
The associated signal length may be specified in the form:
Exposure [time|duration] [:|=] t [|s|seconds|mn|minutes|h|hours]
where t is the signal length and missing units are interpreted as seconds. Note that a legacy variant
Exposure time(seconds) = t is no longer supported.
For future use, frequency units may be specified using Frequency units [:|=] Hz, but currently Hz (the
default) is the only supported unit.
Power density units may be specified using:
Power density units [:|=]
[|N2_Hz|m2_Hz|mm2_Hz|m2_s2_Hz|mm2_s2_Hz|m2_s4_Hz|mm2_s4_Hz|g2_Hz]
The default is m2_s4_Hz.
If and only if the power density units are specified as g2/Hz, the acceleration due to gravity, g, is not taken as a
constant but must then be specified in m/s²:
g [:|=] <acceleration>
After the header line(s), n sets of 2-columned PSD data must be provided, i.e. columns of frequency and real-
valued data. Each set must be separated by either an empty line or a comment line.
After the sets of PSD data, further 3-columned sets of CSD data, i.e. frequency, real and imaginary-valued
data, may be defined. If there is no CSD data, i.e. a loading block characterised by matrix (1) (see section
27.3.1), then the space after the last PSD data set should be empty (or contain comments). Alternatively, if
the user wants to supply CSD data, i.e. any unique loading block configuration characterised by matrix (2) (see
section 27.3.1), then n(n-1)/2 sets of 3-columned data must be given in ascending column-, then row-order.
The associated complex-conjugate entries, i.e. those below the diagonal in matrix (2), are not required.
The same set of frequencies must be used for each power- and cross-spectrum.
The frequencies need not be evenly spaced.
A single row of zeroes (per matrix entry) is sufficient to represent zero-valued CSD data over the entire
frequency range.
An example of a typical file is displayed in Figure 27.3-1. Note that this file contains data for 1000 frequencies
based on the template represented by matrix (5) (see section 27.3.1). Ellipsis has been used to abbreviate this
sample; it does not form part of the file-format.
Number of channels = 3
Channel names: "Acceleration X", "Acceleration Y", "Acceleration Z"
Exposure time = 32.768s
Frequency units : Hz
Power density units= g2_Hz
g: 9.8066
#PSD_0_0_frq PSD_0_0_real
0.1 0.0014311
0.2 0.0001870
...
100 0.0000001
#PSD_1_1_frq PSD_1_1_real
0.1 0.0014895
0.2 0.0001932
...
100 0.0000021
#PSD_2_2_frq PSD_2_2_real
0.1 0.0014821
0.2 0.0003933
...
100 0.0000003
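As an illustration of this layout, a minimal reader for the header lines and the n two-column PSD sets might look like the following Python sketch. The function and its parsing choices are hypothetical and do not reproduce fe-safe's own parser; CSD (three-column) sets are not handled here:

```python
def parse_psd(text):
    """Parse the header lines and the two-column PSD data sets from the
    ASCII .psd layout described above.  Blank or comment lines separate
    data sets; header values follow either ':' or '='."""
    header, sets, current = {}, [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            if current:                  # a blank/comment line ends a data set
                sets.append(current)
                current = []
            continue
        for key in ("Number of channels", "Channel names", "Exposure",
                    "Frequency units", "Power density units", "g"):
            if line.lower().startswith(key.lower()):
                header[key] = line.split(":", 1)[-1].split("=", 1)[-1].strip()
                break
        else:
            freq, value = line.split()  # two-column frequency/PSD row
            current.append((float(freq), float(value)))
    if current:
        sets.append(current)
    return header, sets
```

The returned header dictionary and list of per-channel (frequency, value) sets mirror the file structure described above.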
CSD (or single PSD) format files can be derived from time signals using the Cross-Spectral Density Matrix File
option in the Frequency menu. This uses a 10% buffer overlap with cosine tapering and defaults to 1024 bins in the
FFT buffer (which may be reduced for short signals with fewer than 1024 datapoints). The exposure time is also
written to the PSD file header. The output .psd file is created in the project results directory (<project>/results) and
users may wish to rename it. Note that for single-channel signals there is also an option on the Frequency menu
called Power Spectrum Density (PSD), which generates a single-channel PSD file in .dac format. This is less
convenient for PSD fatigue analysis, as it would be necessary to manually convert the file format by re-saving in
some ASCII format and then providing the necessary headers and frequency column. It is therefore recommended
that the general Cross-Spectral Density Matrix File option be used to generate PSD files from time signals, even for
the simple case of a single signal. The frequency-domain signal-processing options are
described further in Chapter 10.
When multiple PSD files are being used, i.e. when there is more than one loading block per analysis:
the frequency values per .psd file must match those specified in other .psd files;
the frequencies in a .psd file may differ from those specified in the generalized displacement data.
The response PSD frequency set is restricted to the larger lower bound of the .psd and generalized displacement
data and the smaller upper bound (so no extrapolation is performed), and is set to the union of the input PSD and
generalized displacement data frequency sets lying within these joint bounds. The response PSD for each analysis
node at each such frequency is computed by combining the input PSD channels, the generalized displacements
and the modal stress tensors, with interpolation as required and appropriate projection in the critical-plane
approaches. Algorithm details are given in [9,10].
Note that the number of frequencies used must be between 3 and 32767 in each of the supplied
PSDs and in the Generalized Displacement data for each channel. When the data is combined, the final number of
combined frequencies must also not exceed the 32767 limit.
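The frequency-set combination described above can be sketched as follows. This is an illustrative Python function, not fe-safe's implementation; the error message is assumed:

```python
def response_frequencies(psd_freqs, gd_freqs):
    """Union of the two frequency sets restricted to their joint bounds,
    i.e. the larger lower bound and the smaller upper bound, so that no
    extrapolation is ever required."""
    lo = max(min(psd_freqs), min(gd_freqs))
    hi = min(max(psd_freqs), max(gd_freqs))
    merged = sorted(set(psd_freqs) | set(gd_freqs))
    out = [f for f in merged if lo <= f <= hi]
    if not 3 <= len(out) <= 32767:
        raise ValueError("combined frequency count outside the 3..32767 limits")
    return out
```

For example, PSD frequencies 1..4 Hz combined with generalized displacement frequencies 2.5..5 Hz yield the union restricted to [2.5, 4] Hz.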
A description of each method is beyond the scope of this document (see refs. [2,9,10,11] for further details).
However, note that in fe-safe the granularity of the critical plane search can be varied by selecting the Critical plane
search count field in FEA Fatigue->Analysis Options->General tab (see section 5). For most cases the default value
of 18 (which leads to a search increment of 10 degrees) should suffice. The combined normal and shear algorithm
is taken from Macha & Nieslony [11], and uses the same set of critical planes as the shear algorithm. So this can be
viewed as a kind of modified shear method, where some contribution of the normal stress 𝜎𝑛 to the damage is
included. The normal contribution is controlled by a configurable parameter k (in [0,1]) which can be set in the
above dialog (default 0.25). For ductile materials a similar parameter in the Findley (time-domain) algorithm is in the
range [0.2,0.3]. The damage parameter is in effect:
\[
\frac{2\tau_s + k\,\sigma_n}{1 + k}
\]
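As a worked illustration of this parameter (the function name is assumed, not taken from fe-safe):

```python
def combined_parameter(tau_s, sigma_n, k=0.25):
    """Macha-Nieslony style combined normal and shear damage parameter,
    (2*tau_s + k*sigma_n) / (1 + k), with k in [0, 1] (default 0.25)."""
    return (2.0 * tau_s + k * sigma_n) / (1.0 + k)
```

With tau_s = 100 and sigma_n = 50 at the default k = 0.25, the parameter evaluates to 170.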
For weld groups created by running the Verity Weld Preparation stage (see Verity in fe-safe User Guide), the
selection between normal and shear weld structural stress replaces the normal choice of 4 methods of PSD
response. Note that when applying PSD methods to weld structural stresses a complex form of the Equivalent
Structural Stress (ESS) transformation is applied on a per channel basis to the modal structural stress of each
channel; the response PSD is then computed by summing over ESS per channel. There is a bending ratio per
channel when applying the I(r) function in the ESS transformation. Note that this is not strictly the same as applying
the ESS function as part of the damage integral, due to the non-linearity of the I(r) function in the ESS
transformation, but has been found in internal testing to be a good enough approximation for membrane stress
dominated welds. However for welds with multiple channels and a high bending ratio (high bending structural stress
compared to membrane, e.g. r > 0.5) it is recommended that PSD methods be used to identify hotspots, and a
restricted time domain analysis be used for more precise life prediction at the weld hotspots.
The Implement Von Mises-based nodal filtering check box is a potential speed-up option which is available when a
critical plane option is selected. If checked, fe-safe will implement 'nodal filtering': response
PSD moments will be initially calculated (for all nodes) by using the Von Mises stress, and only nodes with
significant stress (i.e. finite life below constant amplitude endurance limit (CAEL)) will be further processed using a
critical plane search. In models where most of the lives are infinite, this allows faster processing of the majority of
nodes which undergo no (or low) damage. More precisely, nodes with very low stress (RMS below 15% of CAEL
fatigue strength) are immediately filtered out, whereas nodes with obviously significant stress (RMS exceeding 40%
of CAEL fatigue strength) are immediately passed on for critical plane processing. Nodes with RMS values in
between these thresholds have an approximate life calculated using a conservative narrowband approximation of
Bendat [12], for which an analytical solution is available for expected life, with a 20% error margin applied to the
Von Mises RMS. If this conservative life is below the CAEL then the critical plane processing is invoked.
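The three-way filtering decision can be summarised in the following Python sketch. The thresholds follow the text above; the function, its return labels and the CAEL life argument are illustrative:

```python
def filter_decision(rms_vm, cael_stress, conservative_life, cael_life=1e7):
    """Nodal filtering sketch: Von Mises RMS below 15% of the CAEL fatigue
    strength is filtered out; above 40% goes straight to the critical-plane
    search; in between, a conservative narrowband (Bendat-style) life
    estimate (with margin applied by the caller) decides."""
    if rms_vm < 0.15 * cael_stress:
        return "filtered"
    if rms_vm > 0.40 * cael_stress:
        return "critical-plane"
    return "critical-plane" if conservative_life < cael_life else "filtered"
```

Only nodes returned as "critical-plane" proceed to the full critical plane search.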
The Use log interpolation checkbox determines the form of frequency interpolation applied to the input PSD function
and generalized displacements. Selection of this checkbox will result in interpolation in log-log space rather than
linear interpolation. This is generally preferable for PSD functions specified on a coarse frequency
grid, especially if the PSD function was originally specified in log-log space; it may also suit
narrowband PSDs. Note that complex terms are interpolated in Cartesian form when interpolated
linearly, but in polar coordinates for log interpolation, for which the magnitude is log interpolated but the phase is
linearly interpolated. This corresponds to the generalization of log to the complex plane (as log(re^iθ) = log(r) + iθ).
This applies to cross-correlated channels with CSD terms or complex generalized displacements. The default
interpolation in fe-safe versions 2022x and later is to use logarithmic (checkbox selected), whereas in 2021x it is
linear for backwards compatibility. Note that if an older project is opened in 2022x with no setting specified, the
earlier linear default form is retained, whereas new projects created in 2022x default to logarithmic.
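A minimal sketch of this complex log interpolation over a single frequency interval (the function name is illustrative):

```python
import cmath
import math

def interp_complex_log(f, f0, f1, z0, z1):
    """Interpolate a complex spectral term between (f0, z0) and (f1, z1):
    the magnitude is interpolated in log-log space and the phase linearly,
    matching log(r*e^(i*theta)) = log(r) + i*theta."""
    t = (math.log(f) - math.log(f0)) / (math.log(f1) - math.log(f0))
    r = math.exp((1.0 - t) * math.log(abs(z0)) + t * math.log(abs(z1)))
    theta = (1.0 - t) * cmath.phase(z0) + t * cmath.phase(z1)
    return cmath.rect(r, theta)
```

At the geometric midpoint of the interval the magnitude is the geometric mean of the endpoint magnitudes and the phase is the arithmetic mean of the endpoint phases.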
The damage integral for the Dirlik algorithm is affected by the setting of the RMS stress cut-off multiple. It is
recommended that the default value be normally retained. Also note that these settings (cut-off and Number of
stress range intervals) are only applied to the Dirlik algorithm. The damage is upper bounded at the value implied
by the limit, and the remaining tail of the stress PDF is integrated using this damage upper bound (or 1 if the
damage would be more than 1). Also note that Dirlik’s algorithm is defined in terms of stress ranges (not
amplitudes), and so the limit in the case of Dirlik is applied to the stress range (not amplitude). Hence the default
setting of 10 can be thought of as covering 5 standard deviations of the amplitude distribution. The Tovo-Benasciutti
method has a more complicated way of handling the integral, and limits are affected by the mean stress under
consideration. Therefore for Tovo-Benasciutti the limits are always the lower of the SN curve intercept point or 5
Copyright © 2023 Dassault Systemes Simulia Corp. Volume 1 27-11
Vol. 1 Section 27 Issue: 24.1 Date: 17.08.23
Fatigue analysis using PSD data
RMS values, subsequently modified by the current mean. Finally the number of stress range intervals is also only
applied to Dirlik, since with Tovo-Benasciutti or the simple narrowband methods there is a closed form for the
integral for single-segment SN curves, and otherwise a lower number of 100 intervals is used when also doing a
double integral over the randomly varying mean. If running Dirlik on a large model a small speed-up can be
obtained by reducing the Number of stress range intervals. It can typically be dropped to 100 without materially
affecting accuracy, but values under 50 are not recommended.
There is a further option, selectable by checkbox, to apply a further bound to the Dirlik damage integral at the
Ultimate Tensile Strength (UTS) of the configured material. If the UTS is lower than the SN curve intercept point,
then the effect is to use the UTS in place of the SN curve intercept as an additional bound on the upper limit of the
integral, after which the tail is treated as having damage of 1. Use of this option is usually over-conservative at low
life, as for most materials the UTS is lower than the SN curve intercept, but is provided for backwards compatibility
with earlier versions of fe-safe (6.5-02, 6.5-03, 2016), or for when a material’s SN curve is not regarded as valid
beyond the UTS. For the medium to high life region, use of this option will have little or no effect, as the stress
range limit would already be below the UTS.
fe-safe calculates fatigue results using either the Dirlik method [1] or the Tovo-Benasciutti method [2-4], or earlier
basic methods (Bendat, Steinberg or Wirsching-Light). All provide a closed form solution to estimate the Probability
Density Function (PDF) p(S) of stress range S from the spectral moments of the response PSD, and hence
calculate a histogram of Rainflow cycle ranges. Expected fatigue damage can be calculated from this cycle
histogram by integration of D(S)p(S), where D(S) is the damage incurred by a cycle of range S. Earlier versions of
fe-safe (up to fe-safe 2016) provided only Dirlik’s algorithm for converting the response PSD spectral moments to a
PDF. Dirlik’s PDF is a semi-empirical mixture model of three distributions which suffers from two issues:
a) It only assesses cycle amplitudes, and there is no adjustment for cycle means, neither random variation in the
mean, nor a non-zero global mean due to residual stress effects.
b) The Dirlik formula is semi-empirical and although it appears to work fairly well, it lacks a sound theoretical basis.
These drawbacks were addressed in the work of Tovo and Benasciutti culminating in the paper published as [2];
further theoretical details are given in [3] and [4]. Note, however, that the theoretical justifications given by
Benasciutti in his PhD thesis [4] are for stationary Gaussian processes.
The method sums a weighted combination of two damage terms: a narrowband component, and a wider band
range counting component. Both are Rayleigh distributions in amplitude, but with different variances, and the
second also has a Gaussian PDF on the cycle mean.
The selection of Dirlik or Tovo-Benasciutti method is made by double clicking on the Algorithm tab in the Analysis
Settings tab of fe-safe. This results in a PSD-specific algorithm dialog popping up as shown below.
If a Tovo-Benasciutti algorithm is selected then the two radio button pairs for the mean stress variability model and
the mean stress correction are activated. The mean stress used in Tovo-Benasciutti can either be set to a fixed
value determined by the residual stress, or this can be used as the centre of a Gaussian distribution used to model
the stochastic effect of random variation in individual cycles.
The mean stress correction employed in the work of Tovo & Benasciutti takes the Goodman/Morrow form for
positive mean m:
\[
S' = S \left( 1 - m / S_L \right)^{-1}
\]
The limit stress 𝑆𝐿 can be set to either the stress which gives damage of 1 on the SN curve (Morrow, the default), or
the UTS (Ultimate Tensile Strength), which is the (typically-over-conservative) Goodman correction. Alternatively
fe-safe also offers a more flexible form of mean stress correction, using a User-Defined Mean Stress Correction,
supplied in a .msc file. This provides a piece-wise linear generalisation of the Goodman diagram, and can also
model the effect of negative (compressive) residual stress. See section 14.11 for details (or Appendix E for the file
format).
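The positive-mean Goodman/Morrow-form correction above can be expressed as a small helper (illustrative only, not fe-safe code):

```python
def mean_corrected_range(S, m, S_L):
    """Goodman/Morrow-form mean stress correction S' = S / (1 - m/S_L)
    for positive mean m below the limit stress S_L.  As m approaches S_L
    the corrected range tends to infinity, hence the guard."""
    if not 0 <= m < S_L:
        raise ValueError("mean must satisfy 0 <= m < S_L")
    return S / (1.0 - m / S_L)
```

For example, a range of 100 with mean 50 and limit stress 200 is corrected to 400/3 ≈ 133.3, reflecting the increased damage attributed to the tensile mean.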
When the stochastic mean form of Tovo-Benasciutti is used, then as well as integrating the expected damage over
the Rayleigh distribution of stress, the (wide band) range counting component is also integrated over the Gaussian
distribution of mean stress. This will produce more damage than using a fixed mean. Note that the stress correction
can asymptote to infinity as the mean stress approaches the limit amplitude. This can be a problem in the stochastic
mean form of Tovo-Benasciutti, where even if the process mean is below the limit, the random distribution can have
a tail in excess of the limit. In these circumstances fe-safe always constrains the computed damage at 1 so that
random mean contributions in the tail do not produce absurd contributions to the expected damage integral. The
stress integral is always upper bounded at the SN curve intercept, and any remaining PDF tail is simply assigned
an effective damage of 1 (i.e. the component can only be destroyed once).
Note that when the shear algorithm is used, then the S-N curve used for the damage function is based on normal
stress, but the shear is converted to an equivalent normal stress by doubling it so that in effect equivalent normal
stress is given by (see [11]):
𝜎𝑒𝑞𝑣 = 2𝜏
If a non-default surface finish is specified (Kt>1), then Kt is used to scale the stress integration axis, so for stress
range S with probability density p(S), the damage term is D(KtS). When performing a FOS analysis, the evaluated
scale factor is applied to the stress axis of the damage integral in a similar way, rather than scaling the input loading
(as that would be equivalent to a quadratic scaling). Note that the mean stress is not multiplied by Kt.
fe-safe also provides simpler earlier methods: Bendat (narrowband), Wirsching-Light, and Steinberg. The Bendat
method [12] is only really valid for narrowband response spectra and tends to be over-conservative. Note that the
Bendat method can also be regarded as a limiting case of Tovo-Benasciutti with no mean stress correction, as the
latter method incorporates a narrowband Rayleigh distribution identical to Bendat, which should dominate as
bandwidth tends to zero. The Wirsching-Light method is a kind of broadband correction to Bendat; the Bendat
damage is multiplied by a correction factor depending on the bandwidth (and also the S-N curve exponent). The
review by Quigley et al [13] discusses the Bendat and Wirsching-Light methods, as well as Dirlik and Tovo-
Benasciutti. Typically Wirsching-Light gives similar results to Tovo-Benasciutti (with low mean stress), but the latter
is more mathematically principled and may work better on complex multi-modal spectra. The Steinberg method is
even simpler than Bendat, and assumes a Gaussian distribution of stress amplitude, simplified to only 3 integration
points at 1, 2, and 3 RMS values. It tends to be over-conservative. This method is included for historic completeness
and for comparison with other PSD fatigue codes, but is not recommended, although it may be slightly faster than
the other methods, as the integration is performed in a trivial manner without the use of any special functions (e.g.
gamma functions).
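The Steinberg three-band calculation can be sketched as follows, assuming the customary Gaussian band weights of 68.3%, 27.1% and 4.33% at the 1, 2 and 3 RMS levels (the helper and its 'life' callback are illustrative, not fe-safe's implementation):

```python
def steinberg_damage(cycles, rms, life):
    """Steinberg three-band sketch: the stress amplitude is assumed Gaussian
    and collapsed onto the 1, 2 and 3 RMS levels with weights 68.3%, 27.1%
    and 4.33%.  'life' maps a stress amplitude to cycles-to-failure (the
    S-N curve); Miner summation gives the expected damage."""
    weights = ((1.0, 0.683), (2.0, 0.271), (3.0, 0.0433))
    return sum(w * cycles / life(k * rms) for k, w in weights)
```

This makes the over-conservatism easy to see: all cycles are pushed up to one of only three discrete amplitude levels.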
To allow fe-safe to calculate expected damage the following information must be provided [1]:
a) Material parameters to define the S-N curve for the material (see section 8).
b) PSD loading block exposure time, i.e. the amount of time that the component is exposed to the load case.
This may be provided in the header of the PSD file, or specified later in the loading definition.
c) Suitable settings for the granularity of the integration step and a value of 𝑘 to define the maximum stress
range, i.e. 𝑘 ∙ 𝑅𝑀𝑆 (only used in the Dirlik method; a value of 𝑘 ≥ 10 is recommended).
If no S-N curve is provided then the strain-life curve may be used instead with an elastic conversion. Like other fe-safe
stress algorithms, this depends on the setting of the Use Sf’ and b if no SN datapoints checkbox (see Stress
Analysis under the Algorithms tab of the Analysis Options dialog). Also note that multi-segment SN curves may be
used. The damage function defined in references [1] and [2] is a fixed power law, equivalent to a single segment SN
curve, but fe-safe will perform the PDF integral using a more general multi-segment SN curve if required. However
this will result in a somewhat slower run-time, especially if the stochastic mean Tovo-Benasciutti option is used.
Note that the Bendat, Wirsching-Light and Steinberg methods are defined using a fixed SN curve slope. If one of
these methods is selected for a material with a multi-segment SN curve then an averaged slope is used between 1
and 5 RMS using weights estimated from the damage contribution using Bendat (using a complete gamma
function). As the damage contribution weights depend on the SN curve slope, a recursive method of estimation is
used, with the initial weights computed from the SN curve slope on the initial segment.
To calculate safety factors for infinite life, a FOS calculation at infinite life should be used, rather than the FRF
calculation provided in some earlier versions of fe-safe (6.5-00 and 6.5-01). This has been removed because there
were statistical difficulties in providing an accurate standard deviation scaling (a value for 𝑘 ) for the FRF over long
time scales, and the FOS calculation takes better account of smaller cycles. Note however that the FOS scaling
produces the desired target life as the expected life, but due to random variability that may not be the life actually
achieved in any specific instance. It is therefore recommended that a slightly conservative approach to FOS
calculations be adopted.
If there are significant residual stresses present then one of the Tovo-Benasciutti algorithms should be used, as any
overall mean effects will be ignored in Dirlik. The residual stress can be set on a group-wise basis by either using
the Residual Stress column of the Analysis Settings tab (assumed isotropic), or by providing a residual stress
dataset in an appended finite element model. The modal analysis datasets must always be loaded first using Open
Finite Element Model For PSD Analysis… Then, if there is a dataset relating to a residual stress analysis, that
may be loaded using Append Finite Element Model… from the File menu. Then the residual dataset may be added
to the Transitions Block on the Loading Settings tab using Replace Residual Dataset on the popup menu (the
required dataset must be first selected). Note that this option was originally provided for elastic-plastic analyses,
and therefore a limitation of the user interface is that an associated strain dataset must also be supplied, even
though this will not be used in the PSD analysis (see section 13 for details of Defining elastic-plastic residual
stresses). The residual stress tensor is projected onto the required critical plane when running critical plane
searches to obtain the mean stress used in the Tovo-Benasciutti algorithm. When the stochastic mean option is
selected the expected damage is integrated over both amplitude and randomised mean centred on the overall
mean for the residual. The Tovo-Benasciutti algorithm defines a Gaussian distribution for the actual mean of a
random cycle, but this is centered on the defined residual. If a Von Mises analysis is performed then there is no
direction onto which the residual tensor should be projected, so the trace of the tensor is used instead.
The damage integrals over amplitude and mean are limited by a stress limit set to the smaller of the UTS and the
stress amplitude at which the damage is one. This is used to limit the damage integration at 𝑆𝐿 − |𝑚|.
For the fixed mean variant 𝑚 is always 𝑚𝑐 (derived from the appropriate residual if defined, otherwise zero). For
randomised mean, an outer integration loop is performed over the mean (for range-mean damage term) using the
Gaussian PDF of mean stress (which is centred on 𝑚𝑐 ), see equation (42) in [2]. The general form for a signal of
time length T seconds is:
\[
E[D] = w\,d_{nb} + (1 - w)\int G_\mu(m, m_c)\left[\int_{S_0}^{S_L-|m|} D\!\left(S'(S,m)\right) R_a(S)\,dS + D_L\left(1 - \rho_a(S_L-|m|)\right)\right] dm
\]
where the first term represents the expected narrowband damage derived from integrating the narrowband
Rayleigh distribution with the (mean stress corrected) damage function 𝐷(𝑆 ′ (𝑚)); 𝐺𝜇 (𝑚, 𝑚𝑐 ) is the Gaussian pdf of
the mean stress; 𝑅𝑎 (𝑆) is the Rayleigh pdf of the range counted damage for amplitude (see equation (21) in [2])
with cdf 𝜌𝑎 (𝑆); and the limiting damage is 𝐷𝐿 . Note that fe-safe does not accrue damage at low stress; a lower
bound stress 𝑆0 is calculated based on the CAEL (this is passed to PSD as a material property and is normally a
fixed fraction of the CAEL stress).
The narrowband damage 𝑑𝑛𝑏 is obtained by a straightforward integration of the narrowband pdf, so
\[
d_{nb} = \int_{S_0}^{S_L-|m_c|} D\!\left(S'(S,m_c)\right) R_{nb}(S)\,dS + D_L\left(1 - \rho_{nb}(S_L-|m_c|)\right)
\]
where \(R_{nb}(S)\) and \(\rho_{nb}(S)\) are the narrowband Rayleigh pdf and its cdf.
27.6 References
[1] T. Dirlik, “Application of Computers in Fatigue Analysis”, University of Warwick Thesis, 1985.
[2] D Benasciutti and R Tovo, 2006. "On fatigue damage computation in random loadings with threshold level and
mean value influence." Struct. Durability Health Monitoring 2 (2006): 149-164.
[3] D Benasciutti and R Tovo. "Rainflow cycle distribution and fatigue damage in Gaussian random loadings."
Internal report No. 129, Dipartimento di Ingegneria, Università degli Studi di Ferrara, Italy, 2004.
[4] D Benasciutti. Fatigue Analysis of Random Loadings. PhD Thesis, 2004, University of Ferrara, Italy.
[5] Abaqus software documentation.
[6] ANSYS software documentation.
[7] H. Anton, “Elementary Linear Algebra”, John Wiley & Sons, 2000.
[8] C. Lalanne, “Mechanical Vibration and Shock Analysis, Random Vibration”, Wiley, 2013.
[9] G.M. Teixeira et al., “Random Vibration Fatigue – Frequency Domain Critical Plane Approaches”, ASME,
IMECE2013-62607, 2013.
[10] G.M. Teixeira et al., “Random Vibration Fatigue-A Study Comparing Time Domain and Frequency Domain
Approaches for Automotive Applications”, No. 2014-01-0923, SAE Technical Paper, 2014.
[11] E Macha and A Nieslony. "Critical plane fatigue life models of materials and structures under multiaxial
stationary random loading: the state of the art in Opole Research Centre CESTI and directions of future activities."
International Journal of Fatigue, 39:95-102, 2012.
[12] Bendat JS. (1964). “Probability functions for random responses.” NASA report on contract NAS-5-4590.
[13] Quigley JP, Lee Y-L, Wang L (2016). Review and Assessment of Frequency-Based Fatigue Damage Models.
SAE International Journal of Materials and Manufacturing-V125-5, 2016.
fe-safe has a materials approximation algorithm, accessible from the 'Options' button in the materials database.
This generates strain-life data for steels and for aluminium alloys, using the material's elastic modulus E and
ultimate tensile strength. This algorithm has been shown to be reliable for a range of commonly used steels and
aluminium alloys.
However, the user may have additional information available. In particular, a traditional S-N curve may be available
for a cylindrical specimen tested at zero mean stress under axial loading. This note suggests a method for
incorporating this information. Reference should be made to the Fatigue Theory Reference Manual section 3 for
background information.
First, run the materials approximation algorithm in the materials database, using the appropriate values of E and
ultimate tensile strength.
The stress-life curve may be defined as shown in Figure 1.1. In the high-cycle regime, say between 10^5 and 10^7
cycles, the slope of the S-N curve and the slope of the local stress-life curve will be very similar. The parameter b
may therefore be obtained from the S-N curve, and will replace the value calculated from the approximation
algorithm.
The S-N curve may also define the stress amplitude at 10^7 cycles, or some other high-cycle endurance. With
reference to Figure 1.1, adjust the stress-life curve to pass through the known data point, keeping the slope b
obtained in the previous paragraph. This will produce a revised value of the fatigue strength coefficient σ'f.
These parameters can replace the values generated by the materials approximation program.
The remaining parameters for the strain-life curve generated from the materials approximation routine can be
accepted.
An adjustment to the value of σ'f implies that the relative values of elastic and plastic strain have changed. The
value of n' should be re-calculated using

    n' = b / c

and the value of K' should be replaced by

    K' = σ'f / (ε'f)^n'
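The adjustment described above can be sketched in a few lines. This is an illustration only: all numerical values below are hypothetical placeholders, to be replaced by the constants produced by the materials approximation algorithm and by the user's own S-N test data.

```python
# Hypothetical inputs -- substitute your own values.
b = -0.09      # slope taken from the measured S-N curve
c = -0.55      # fatigue ductility exponent from the approximation algorithm
ef = 0.45      # fatigue ductility coefficient epsilon'_f from the approximation
Sa = 250.0     # known stress amplitude (MPa) at the high-cycle endurance
Nf = 1.0e7     # cycles at that endurance

# Basquin relation: Sa = sigma'_f * (2*Nf)**b, so the revised sigma'_f is
sf = Sa / (2.0 * Nf) ** b

# Re-calculate n' and K' so the elastic and plastic curves remain compatible
n_prime = b / c                  # n' = b / c
K_prime = sf / ef ** n_prime     # K' = sigma'_f / (epsilon'_f)**n'
```

The revised values sf, n_prime and K_prime then replace those generated by the materials approximation program.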
1 Introduction
Most fatigue analysis is performed using stresses from an elastic FEA. The conversion from elastically-calculated
FEA stresses to elastic-plastic stress-strains is carried out in the fatigue software. The two essential features of the
fatigue modelling process are (a) an elastic-plastic conversion routine, and (b) a kinematic hardening model. A
common elastic-plastic conversion routine is Neuber’s rule, and although other methods are available, these will
all be referred to as Neuber’s rule in this document.
In implementing Neuber’s rule, each node is treated as a separate entity. The elastic to elastic-plastic conversion
cannot therefore allow for the fact that stresses may redistribute from one node to another as a result of yielding.
Normally this is an acceptable approximation, because yielding generally occurs in notches. However, there may be
instances where gross yielding occurs on a component, and stresses redistribute from one area to another. This
may require an elastic-plastic FEA.
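As a concrete illustration of the per-node conversion, a minimal Neuber correction can be sketched as follows. The cyclic properties E, K' and n' below are hypothetical, and the bisection solver is an illustrative sketch, not fe-safe's internal implementation.

```python
# Hypothetical cyclic material properties (MPa).
E, K, n = 200000.0, 1200.0, 0.2

def ro_strain(s):
    """Cyclic stress-strain curve (Ramberg-Osgood form)."""
    return s / E + (s / K) ** (1.0 / n)

def neuber(s_elastic, tol=1e-9):
    """Solve sigma * epsilon(sigma) = s_elastic**2 / E by bisection.

    The product sigma*epsilon is monotonic in sigma, and the elastic-plastic
    stress cannot exceed the elastically-calculated stress, so the root lies
    in [0, s_elastic]."""
    target = s_elastic ** 2 / E
    lo, hi = 0.0, s_elastic
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * ro_strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For an elastically-calculated stress well below yield, the routine returns essentially the same stress; above yield it returns the reduced elastic-plastic stress at that node.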
In order to set up an elastic-plastic FEA correctly, it is important to appreciate the methods used in the fatigue
software. These are described below.
2 Kinematic hardening
The Fatigue Theory Reference Manual, pages 2-20 to 2-22, shows an example of the stress-strain response to a
sequence of elastic-plastic strains, for uniaxial stresses. The response has been calculated using a kinematic
hardening model.
The example is reproduced below (retaining the figure numbers from the user manual).
Example 2.1
Figure 2.31 shows a short time history of local strain.
The strain values are:
The strain range from E to F is (0.0014 - (-0.001)) = 0.0024. On the hysteresis loop curve with its
origin at point E, the stress range is 415.1 MPa, and the stress at F is (239.1 - 415.1) = -176 MPa.
The strain range from F to A closes the cycle E-F. Its strain range is 0.0024 and the maximum stress
at E is 239.1 MPa. Using material memory, the stress at A is calculated using a hysteresis loop curve
with its origin at D. The strain range from D to A is 0.0055, and the stress at A is 321.1 MPa. This
strain range has closed the largest cycle in the signal, that from A-D-A. Its strain range is 0.0055,
and the maximum stress at A is 321.1 MPa.
A summary of the three cycles is shown in Figure 2.33, and in the table below.
Cycle    Strain range    Max. stress (MPa)
B-C      0.0024          189.9
E-F      0.0024          239.1
A-D      0.0055          321.1
Important features of kinematic hardening are illustrated in Figure 2.33. These are
1. Once a closed hysteresis loop has occurred, for example the loop B-C, the ‘material memory’ phenomenon
occurs, in that the material’s stress-strain response from A to D is calculated as though the closed loop B-C had
not occurred.
2. Subsidiary loops (B-C and E-F) have some plasticity associated with them. Isotropic hardening would not
produce this effect, because with isotropic hardening the material’s yield stress increases to encompass the
largest event experienced so far, and so subsidiary cycles would be elastic.
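The hysteresis-loop branches used in the example above can be computed from the cyclic stress-strain curve via the Masing hypothesis (loop branches are the cyclic curve scaled by a factor of two). The sketch below solves for the stress range corresponding to a given strain range; the cyclic properties E, K' and n' are hypothetical, so the result will not reproduce the 415.1 MPa of the manual's example.

```python
# Hypothetical cyclic material properties (MPa).
E, K, n = 200000.0, 1200.0, 0.2

def masing_strain(ds):
    """Strain range on a hysteresis loop branch (Masing hypothesis):
    the cyclic curve with stress and strain both doubled."""
    return ds / E + 2.0 * (ds / (2.0 * K)) ** (1.0 / n)

def stress_range(de, lo=0.0, hi=5000.0, tol=1e-9):
    """Bisect for the stress range giving strain range de (monotonic)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if masing_strain(mid) < de:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ds = stress_range(0.0024)   # stress range for the B-C / E-F strain range
```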
Kinematic hardening is illustrated further in the Fatigue Theory Reference Manual, pages 7-40 to 7-43.
Note that in fatigue analysis, ‘yielding’ is considered to occur at stresses much lower than the 0.2% proof stress. In
fe-safe, the yield stress is taken to be the stress at which the difference between the elastically-calculated stress
and the elastic-plastic stress is 1% of the elastically-calculated stress.
3 Sequence effects
Before the large event X-Y, the small cycles have a zero mean stress. After X-Y, the mean stress for the smaller
cycles has been increased. If the loading represents a ‘day in the life’ of the component, this effect will only occur
on the first ‘day’. After this, all the small cycles will have the higher mean stress.
Fatigue software simulates this effect by starting and finishing the analysis at the numerically largest strain (or
stress). The sequence would be analysed as though it consisted of the strain history shown below, i.e. starting and
finishing at point X.
(Figure: the re-ordered strain history, starting and finishing at point X, with the excursions to Y between repeats.)
Assuming that the fatigue life will be many repeats of this loading, the procedure produces the correct mean
stresses for all repeats except the first part of the first repeat. This is considered an acceptable approximation.
In modelling a fatigue loading sequence in elastic-plastic FEA, it is important that this procedure is followed. In the
example above, it may be necessary to model the sequence up to point X in Figure 2.34, or to model an initial
occurrence of point X. The sequence up to the next occurrence of point X should then be modelled. The sequence
of stress/strain from X to X (as shown above) is required for the fatigue analysis.
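The re-ordering described above can be sketched as a simple rotation of the repeating history so that it starts and finishes at the numerically largest value. This is an illustrative sketch of the principle, not fe-safe's internal routine.

```python
def reorder_to_peak(history):
    """Rotate a repeating load history so it starts at the numerically
    largest (largest absolute) value, then close the sequence by repeating
    that peak at the end, as described above."""
    i = max(range(len(history)), key=lambda k: abs(history[k]))
    rotated = history[i:] + history[:i]
    return rotated + [rotated[0]]   # finish the repeat back at the peak

# Toy sequence: small cycles around a large event X (= 5) / Y (= -3).
seq = [0, 1, -1, 1, -1, 5, -3, 5, 1, -1, 1]
reordered = reorder_to_peak(seq)
```

Analysing `reordered` gives the correct mean stresses for every repeat of the loading except the first part of the first repeat.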
4 Materials data
Many materials cyclically harden or cyclically soften during the first few cycles of fatigue loading, until a stable
cyclic stress-strain response is attained (see the Fatigue Theory Reference Manual, page 3-3). Fatigue analysis is
carried out using the stable cyclic properties, and it is important that these stable cyclic properties are also used in
the elastic-plastic FEA. Conventional monotonic properties should not be used.
5 Discussion
It is clear from the above that care is needed when setting up elastic-plastic FEA for subsequent fatigue analysis.
Even when this is done, a series of presentations at user conferences has suggested that elastic-plastic FEA does
not generate stress/strain sequences that match those generated by fatigue analysis software. This seems to be
related to the way that kinematic hardening for cyclic loading is implemented in the FEA software. As a result,
users may see a lack of comparability between the fatigue lives calculated from an elastic-plastic FEA and those
calculated from elastic FEA.
1 Introduction
This technical note provides an outline of how fe-safe deals with triaxial stress states. These can happen on the
surface of components where contact occurs.
fe-safe uses the stress tensor history built by combining the stresses from the Finite Element datasets and load
histories to identify the orientation of the surface of the component. The assumption is that two of the principal
stresses will lie in the surface of the component and the third will be perpendicular to it. The two in-surface
principals may change direction within the surface during the whole loading sequence, but the out-of-plane principal
will not. This is shown for a three-sample dataset sequence in the figure below. NOTE: The surface is hatched.
Where the third principal is insignificant, the stress state is identified as 2-dimensional.
Where the out-of-plane principal stress is significant but the surface shear stresses are not significant, fe-safe treats
this as a two-dimensional stress state.
Otherwise, the stress tensor history is marked as triaxial and the fatigue calculations are performed using plane
searches about three axes. The worst damage on any of the planes is stored.
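The decision logic described above can be sketched as follows. The 5% significance threshold and the component layout are assumptions for illustration; fe-safe's internal criterion may differ.

```python
def classify_surface_stress(tensor_history, tol=0.05):
    """Classify a surface stress-tensor history as '2D' or 'triaxial'.

    Each sample is (sxx, syy, szz, sxy, syz, sxz), with the components
    aligned so that z is the surface normal. A component is treated as
    significant if it exceeds tol times the largest in-plane stress
    (an assumed criterion, for illustration only)."""
    ref = max(max(abs(s[0]), abs(s[1]), abs(s[3]))
              for s in tensor_history) or 1.0
    szz_significant = any(abs(s[2]) > tol * ref for s in tensor_history)
    shear_significant = any(abs(s[4]) > tol * ref or abs(s[5]) > tol * ref
                            for s in tensor_history)
    # Triaxial only when both the out-of-plane principal and the surface
    # shear stresses are significant; otherwise treated as 2D.
    return 'triaxial' if (szz_significant and shear_significant) else '2D'
```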