
Artificial Neural network-based prediction of SCF at the Intersection of CFST Y-joints

Abstract

Stress concentration factors (SCFs) are used to quantify the hot-spot stress in tubular joints with circular hollow sections for fatigue assessment, and they are usually obtained by finite element analysis or specimen testing. Design specifications recommend complex formulas to calculate the SCFs at specific locations on the weld toe of the intersection line of tubular joints for individual load cases. To improve the fatigue performance of the joint, concrete is filled into the chord to form a concrete-filled steel tube (CFST) joint. This study investigates the capability of a back-propagation neural network (BPNN) model to calculate the SCFs of CFST Y-joints. Three hundred FE numerical models were analyzed to evaluate the effects of different geometrical parameters on the SCFs of CFST Y-joints, and the FEA results were used to train and test the neural networks. The nonlinear mapping relationships between the influencing variables and the SCF distributions were established. The results show that the SCFs of CFST Y-joints predicted by the BPNN models are close to the FE results, and that a properly trained and well-calibrated BPNN can be a reliable alternative to complicated SCF equations for predicting the SCF distribution along the intersection line of CFST Y-joints.

1 Introduction

With their high strength-to-weight ratio, tubular structures are widely used in various engineering structures. They are mainly fabricated from circular hollow section (CHS) members by welding the brace members to the surface of the chord members, resulting in so-called tubular joints. When tubular structures are subjected to cyclic loads, fatigue cracks usually initiate from the surface of the tubular joints due to high stress concentrations and inherent welding defects. At present, stress-based methods and fracture mechanics-based methods are available for the fatigue life evaluation of welded joints (Wei et al., 2017; Qian et al., 2014). Because fatigue cracks always initiate from the weld toes in CHS welded joints, hot-spot stress (HSS) Shs–N curves are recommended by most specifications to evaluate the fatigue life of CHS joints. The HSS includes all the stress concentration features of the weld detail except those due to the local weld toe geometry. Stress concentration factors (SCFs), defined as the ratio of the HSS range to the nominal stress range, are generally applied to describe the HSS distributions of tubular joints. SCFs of tubular joints can be obtained by finite element analysis (FEA) or specimen testing, and SCF calculation formulas are usually derived by multiple regression analysis of the design parameters of tubular joints. Since the 1970s, many studies on the SCFs of CHS joints have been carried out and different SCF calculation formulas have been proposed (Kuang et al., 1975; Ahmadi, 2016; Cheng et al., 2018). Some of these SCF equations have been recommended by institutes such as the International Institute of Welding (IIW) (1999), the Committee for International Development and Education on Construction of Tubular structures (CIDECT) (Zhao et al., 2000), the American Petroleum Institute (API) (1993) and Det Norske Veritas (DNV) (2008). To improve the fatigue performance of tubular joints, concrete is filled into the chords to form concrete-filled steel tube (CFST) joints. At present, SCFs of CFST joints are generally determined from the design formulas of CHS joints by introducing an equivalent thickness. In addition, several new SCF calculation formulas have been proposed based on test and FEA results for CFST T-joints (Tong et al., 2017; Xu et al., 2015), CFST N-joints (Kim et al., 2014) and CFST K-joints (Chen et al., 2016). Experiments on CFST Y-joints have also indicated that some well-established SCF equations are consistent for the braces but very conservative for concrete-filled chords (Yang et al., 2016).

As an important branch of artificial intelligence (AI), the artificial neural network (ANN) is a powerful tool for predicting nonlinear relationships by simulating the biological structure of the human brain (Rafiq et al., 2001). In recent years, many types of ANN models have been proposed to solve complicated engineering problems. Neural network-based estimation of SCFs for steel multi-planar tubular XT-joints was proposed by Chiew et al. (2001), and neural network-based evaluation of SCF distributions at the intersection of tubular X-joints was proposed by Choo et al. (2007). A neural network-based SCF assessment of a T-welded joint was carried out by Dabiri et al. (2017). A new formulation of the flexural overstrength factor for steel beams by means of ANN was presented by Güneyisi et al. (2014), and ANN-based shear strength predictions of steel-concrete composite structures have also been investigated (Allahyari et al., 2018; Wei et al., 2016; Safa et al., 2016). In addition to regression analysis and function approximation, ANNs have been applied to damage identification and fatigue life prediction; for example, Dunga developed a robust crack detection method using transfer learning as an alternative to training an original neural network (Dunga et al., 2019).

According to design specifications, very complicated equations are recommended to calculate the SCFs at the crown and saddle points on the weld toe of the intersection line of CFST joints for individual load cases. In this paper, an alternative approach is presented to predict the SCF distributions at the intersection of CFST Y-joints using a back-propagation neural network (BPNN).

The ANN technique was used to simulate the relationships between the basic variables and the SCFs of CFST joints. Based on the finite element analysis results, 300 training samples were used to train the BPNN prediction model and to investigate the parameters affecting the SCFs of CFST Y-joints. The well-trained BPNN was then used to evaluate the SCF distributions at the intersections of CFST Y-joints subjected to three types of independent loadings or combined loadings. Comparison of the BPNN predictions with the FEA results shows that the BPNN evaluation is feasible and that the prediction accuracy improves as the number of training samples increases.

2 SCFs distribution of CFST Y-joints

2.1 Influencing factors on SCFs of tubular joints

Specimen test results have shown that the higher the stress concentration, the lower the fatigue resistance of welded joints. As shown in Fig. 1, a CFST chord and a CHS brace are connected in a CFST Y-joint. Existing research results show that the SCFs of CHS joints are related to several dimensionless parameters of the joint (Wei et al., 2018). For Y-joints, the dimensionless parameters α, β, γ, τ and θ affect the SCFs; their geometric meanings are shown in Fig. 1.

Fig. 1 The dimensionless parameters of the CFST joint

It is commonly recognized that stress concentration is caused by changes in stiffness. The stress concentration behavior and the different stress components are illustrated in Fig. 2.

Fig. 2 Stress of welded tubular joints

In CFST trusses, the global stiffness of the truss members, the stiffness of the tubular joints and the local stiffness of the tube walls all affect the SCFs of CFST joints. The chord and braces are joined by the intersection weld, and the chord acts like an elastic foundation for the braces. The points on the intersection line of the chord have the same displacement because the axial stiffness of the brace is higher than that of the chord. Since the 1980s, many studies of the SCFs of tubular joints have been carried out using finite element methods, model tests and dimensionless analysis, and the relationships between SCFs and design parameters have been established. IIW and CIDECT have issued fatigue design specifications for tubular joints, in which recommended SCF formulas for typical tubular joints are given.

2.2 SCFs distribution of CHS Y-joints and CFST Y-joints

To investigate the local deformation and SCF distributions of CHS Y-joints and CFST Y-joints, FE models were developed using solid elements, as shown in Fig. 3(a). The welds are simplified to a triangular section based on the AWS D1.1 specification (2015), as shown in Fig. 3(b). To ensure calculation accuracy, the FE mesh size near the weld is 2 mm, while a larger mesh size is used in the region away from the weld to improve computational efficiency.

Fig. 3 Finite element model

In the FEA, the brace is subjected to an axial force and the two ends of the chord are fixed (Fig. 4). For the CFST joints, the contact and friction between the steel tube and the concrete are considered in the calculation. The material and geometric parameters are listed in Table 1. The HSS was calculated by the surface extrapolation method given in the IIW specification: two or three extrapolation points are placed on the surface along the direction perpendicular to the weld toe, and the stresses at these points are substituted into the recommended formulas to extrapolate the HSS at the weld toe.
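For illustration, a minimal Python sketch of the linear surface extrapolation and of the SCF definition is given below. The extrapolation-point distances (0.4t and 1.0t from the weld toe) are illustrative assumptions rather than the exact values prescribed by the specification.

```python
# Minimal sketch of linear surface extrapolation of the hot-spot stress (HSS)
# to the weld toe, in the spirit of the IIW recommendation used in this study.
# The extrapolation-point distances (0.4t and 1.0t) are illustrative assumptions.

def hot_spot_stress(sigma_a, sigma_b, t, a_factor=0.4, b_factor=1.0):
    """Linearly extrapolate surface stresses to the weld toe.

    sigma_a : stress at the extrapolation point closer to the weld toe
    sigma_b : stress at the extrapolation point farther from the weld toe
    t       : wall thickness of the member carrying the hot spot
    """
    x_a, x_b = a_factor * t, b_factor * t      # distances from the weld toe
    slope = (sigma_b - sigma_a) / (x_b - x_a)  # stress gradient along the surface
    return sigma_a - slope * x_a               # value extrapolated to x = 0 (weld toe)


def scf(hss_range, nominal_range):
    """Stress concentration factor: ratio of the HSS range to the nominal stress range."""
    return hss_range / nominal_range
```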

Fig. 4 Loads and boundary conditions

Table 1 FEM parameters

When the brace is subjected to axial loads, global bending and local radial deformation typically occur in the chord of a CHS joint, as shown in Fig. 5(a) and Fig. 6(a). The reaction force of the chord is non-uniform along the intersection line and varies with the non-uniform stiffness distribution. Filling the chord with concrete changes the stiffness distribution of the tubular joint: as the bending stiffness of the chord increases, the bending deformation of the chord decreases, as shown in Fig. 5(b), and since the radial deformation of the chord is restricted by the internal concrete, the radial stiffness of the chord increases, as shown in Fig. 6(b).

Fig. 5 Vertical deformation of chord

Fig. 6 Radial deformation of chord (mm)

Compared with CHS joints, the deformations along the brace axis are smaller at the crown points (root and toe) of the CFST joint. The rigidity at the crown point may be close to or even exceed that at the saddle point. The reaction force distribution along the chord tends to become uniform, and the location of the maximum reaction force may shift from the saddle to the crown. In addition, the bending stress caused by local bending of the chord wall and the additional stress caused by radial deformation of the chord section decrease significantly.

Based on the FEA results, the SCF distributions of a CHS Y-joint and a CFST Y-joint are shown in Fig. 7. After the chord is filled with concrete, the SCF distributions of both the chord and the brace change, and the SCFs of the CFST Y-joint are significantly smaller than those of the CHS Y-joint. The position of the maximum SCF moves from the saddle point in the CHS Y-joint to a point between the crown toe and the saddle point in the CFST Y-joint. Existing fatigue test results of CFST joints show that the initial fatigue crack does not appear at the saddle point as in CHS joints, but starts somewhere near the crown toe, and its location changes with the angle θ.

Fig. 7 SCFs of tubular joints

In most design specifications, an equivalent thickness is introduced: the internal concrete is converted into an equivalent wall thickness of the chord, and the SCFs of the grouted tubular joint are then obtained from the recommended formulas for CHS joints. The SCFs of the CHS joints and the CFST joints calculated according to the CIDECT specification are listed in Table 2. For the CHS joints, the results from the recommended formulas agree well with the FEA results. For the CFST joints, there are significant differences between the recommended formula results and the FEA results. In addition, the recommended formulas only give the SCFs at the saddle point and the crown point; they cannot distinguish between the crown toe and the crown root, nor determine the maximum value of the SCFs. In short, the formulas proposed by the design specifications cannot be reliably used for the SCF analysis of CFST joints.

Table 2 SCF on key points

3 BPNN model

3.1 Structure of BPNN

At present, research on machine learning and deep learning and their applications is receiving increasing attention, and as one of the basic algorithms of AI, the ANN has again become a research hotspot. Based on the multi-layer perceptron, ANNs appeared in the 1940s to simulate several basic characteristics of human brain function; an ANN is an adaptive nonlinear dynamic system composed of many simple basic components (neurons). Since ANNs can fit the nonlinear relationships between input and output variables, they have great potential in areas such as regression analysis, classification, pattern recognition and function approximation. Although each processor in an ANN maintains only one piece of dynamic information and performs only a few simple calculations, the ANN can achieve self-learning by adjusting its weights and biases. Among the various ANN architectures, the BPNN model is the most widely used in industry.

The BPNN is a multi-layer feed-forward neural network trained by the error back-propagation algorithm. A BPNN model consists of at least three hierarchical layers of neurons: an input layer, one or more hidden layers and an output layer. Every neuron in the input layer sends its output to every neuron in the hidden layer, and every neuron in the hidden layer sends its output to every neuron in the output layer. The configuration of the BPNN is shown in Fig. 8. The number of neurons in the input layer equals the number of input variables, and the number of neurons in the output layer equals the number of output variables. The number of neurons in the hidden layer can be varied according to the complexity of the problem and the size of the input data. The BPNN model has some attractive features: a nonlinear mapping from multiple inputs to multiple outputs is constructed automatically by a trained network; the trained network generalizes to unseen inputs; and the trained network operates quickly in application.

Fig. 8 Structure of BPNN

The BPNN is trained by repeatedly presenting a series of input/output data sets (samples) to the network. The network gradually learns the mapping relationships between inputs and outputs by adjusting the weights to minimize the error between the actual and predicted outputs of the training set. After the learning process is completed, the network weights are fixed, and only the forward calculation is required for pattern recognition and prediction, which can be performed very quickly.

3.2 Learning and predicting of BPNN

The BPNN has great advantages for problems in which many factors influence the process and the result, the process itself is poorly understood, and test or analytical data are available. A flow chart of the learning and predicting process of the BPNN is shown in Fig. 9.

Fig. 9 Learning and predicting process of BPNN

In a three-layer BPNN, the input layer has n neurons, xi = (x1, x2, …, xn); the hidden layer has d neurons, hj = (h1, h2, …, hd); and the output layer has m neurons, yk = (y1, y2, …, ym). Wij is the weight of the connection between input layer neuron i and hidden layer neuron j, and θj is the bias of hidden layer neuron j. Wjk is the weight of the connection between hidden layer neuron j and output layer neuron k, and θk is the bias of output layer neuron k.

3.2.1 Feed-forward algorithm

Hidden layer neurons: \({h}_j=f\left(\sum \limits_{i=1}^n{W}_{ij}{x}_i-{\theta}_j\right)\)

Output layer neurons: \({y}_k=f\left(\sum \limits_{j=1}^d{W}_{jk}{h}_j-{\theta}_k\right)\)

The error of output layer neurons can be expressed as:

$$e=\frac{1}{2}\sum \limits_k^m{\left({t}_k-{y}_k\right)}^2=\frac{1}{2}\sum \limits_k^m{\left[{t}_k-f\left(\sum \limits_{j=1}^d{W}_{jk}f\left(\sum \limits_{i=1}^n{W}_{ij}{x}_i-{\theta}_j\right)-{\theta}_k\right)\right]}^2$$

where tk is the desired output and yk is the predicted output.

The total error over all samples is E, and training continues until \(E=\sum \limits_{i=1}^P{e}_i<\varepsilon\), where P is the number of samples and ε is the prescribed error tolerance.
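For illustration, the feed-forward calculation and the sample error above can be written as a short Python sketch (the prediction program in Section 4 was developed in Python). Using the sigmoid as the activation function f is an assumption here, consistent with the y(1 − y) derivative terms used in Section 3.2.2.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_ij, theta_j, W_jk, theta_k, f=sigmoid):
    """Forward pass of the three-layer BPNN described above.

    x       : input vector, shape (n,)
    W_ij    : input-to-hidden weights, shape (n, d)
    theta_j : hidden-layer biases, shape (d,)
    W_jk    : hidden-to-output weights, shape (d, m)
    theta_k : output-layer biases, shape (m,)
    """
    h = f(x @ W_ij - theta_j)   # hidden layer: h_j = f(sum_i W_ij x_i - theta_j)
    y = f(h @ W_jk - theta_k)   # output layer: y_k = f(sum_j W_jk h_j - theta_k)
    return h, y

def sample_error(t, y):
    """Squared error of one sample: e = 1/2 * sum_k (t_k - y_k)^2."""
    return 0.5 * np.sum((t - y) ** 2)
```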

3.2.2 Error Back Propagation

The error term of the output layer neurons can be expressed as:

$${\delta}_k=\left({t}_k-{y}_k\right){y}_k\left(1-{y}_k\right)$$

Thus, the weights between the hidden layer neurons and the output layer neurons are updated as:

$${W}_{jk}\left({n}_0+1\right)={W}_{jk}\left({n}_0\right)+\eta \sum \limits_{p_i=1}^P{\delta}_k{h}_j$$

where n0 is the iteration number and η is the learning rate.

The biases of the output layer neurons are updated as:

$${\theta}_k\left({n}_0+1\right)={\theta}_k\left({n}_0\right)+\eta \sum \limits_{p_i=1}^P{\delta}_k$$

The error term of the hidden layer neurons can be expressed as:

$${\delta}_j={h}_j\left(1-{h}_j\right)\sum \limits_{k=1}^m{\delta}_k{W}_{jk}$$

Thus, the weights between the input layer neurons and the hidden layer neurons are updated as:

$${W}_{ij}\left({n}_0+1\right)={W}_{ij}\left({n}_0\right)+\eta \sum \limits_{p_i=1}^P{\delta}_j{x}_i$$

The biases of the hidden layer neurons are updated as:

$${\theta}_j\left({n}_0+1\right)={\theta}_j\left({n}_0\right)+\eta \sum \limits_{p_i=1}^P{\delta}_j$$
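The update equations above can be collected into a single batch step. The following Python sketch assumes the sigmoid activation implied by the y(1 − y) and h(1 − h) terms; note that with the net input written as ΣWx − θ (Section 3.2.1), the error gradient with respect to θ equals +δ, so gradient descent decreases the biases by ηδ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_epoch(samples, W_ij, theta_j, W_jk, theta_k, eta=0.1):
    """One batch update of the weights and biases following Section 3.2.2.

    samples : iterable of (x, t) pairs (input vector, target vector)
    eta     : learning rate
    """
    dW_ij = np.zeros_like(W_ij); dth_j = np.zeros_like(theta_j)
    dW_jk = np.zeros_like(W_jk); dth_k = np.zeros_like(theta_k)
    for x, t in samples:
        h = sigmoid(x @ W_ij - theta_j)                  # forward pass, hidden layer
        y = sigmoid(h @ W_jk - theta_k)                  # forward pass, output layer
        delta_k = (t - y) * y * (1.0 - y)                # output-layer error term
        delta_j = h * (1.0 - h) * (W_jk @ delta_k)       # hidden-layer error term
        dW_jk += np.outer(h, delta_k); dth_k += delta_k  # accumulate over samples
        dW_ij += np.outer(x, delta_j); dth_j += delta_j
    W_jk += eta * dW_jk; theta_k -= eta * dth_k          # weight and bias updates
    W_ij += eta * dW_ij; theta_j -= eta * dth_j
    return W_ij, theta_j, W_jk, theta_k
```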

3.2.3 Activation function

In order to improve the ability of neural networks to solve nonlinear problems, nonlinear activation functions are essential. The sigmoid function, the tanh function and the rectified linear unit (ReLU) function are commonly used in ANNs. Different activation functions have different properties, so the activation function should be chosen according to the characteristics of the problem being solved. The activation functions are compared in Fig. 10, and their derivatives are compared in Fig. 11.

Fig. 10 Activation function curves

Fig. 11 Derivative curves of the activation functions

The mathematical expressions of the activation functions and their derivatives are given in Table 3. Neural networks are optimized with some form of gradient descent, so the activation function must be differentiable. When the input value is very large or very small, the derivatives of the tanh and sigmoid functions approach zero, so the gradients of the weights also approach zero; in this case the gradient update becomes very slow, which is often referred to as the vanishing gradient problem. The ReLU function improves the computational efficiency of ANNs because it is simple to evaluate, and it avoids the vanishing gradient problem of the sigmoid and tanh functions; it has been widely applied in ANNs in recent years.

Table 3 Activation functions and their derivatives
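The following Python sketch lists the three activation functions compared in Figs. 10 and 11 together with their derivatives, expressed in the form commonly tabulated.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)            # approaches zero for large |z| (vanishing gradient)

def tanh_prime(z):
    return 1.0 - np.tanh(z) ** 2    # also saturates for large |z|

def relu(z):
    return np.maximum(0.0, z)

def relu_prime(z):
    return (z > 0).astype(float)    # constant gradient of 1 for positive inputs
```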

4 SCFs prediction based on BPNN

Predicting the SCFs of CFST Y-joints is a difficult task because of the many factors affecting the stress distribution and their uncertainties. It is commonly accepted that specimen tests and FEA are the best ways to obtain accurate SCFs. Since specimen tests are restricted by time and cost despite their reliability, more and more researchers use FEA to investigate the SCFs of CFST joints, while studies using ANNs to predict the SCFs of CFST joints remain relatively scarce. In this study, a prediction program based on a BPNN was developed in the Python programming language to predict the SCFs along the brace-chord intersection of CFST Y-joints.

4.1 BPNN for SCFs prediction of CFST Y-joints

A three-layer BPNN with one hidden layer was established, as shown in Fig. 12. Combining the FEA results and existing research results, six key parameters were selected as the input layer units, and the SCFs of the brace and the chord were taken as the two output layer units. Besides the four dimensionless design parameters α, β, γ and τ, θ describes the angle between the chord and the brace, and φ locates a point on the intersection weld by angle: φ = 0 is the crown root, φ = π/2 is the saddle point, and φ = π is the crown toe.

Fig. 12 BPNN for SCF prediction
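As an illustration, the six input variables can be assembled into a single input vector for the network of Fig. 12; the helper function and the example values (a parameter combination from Section 4.2) are for demonstration only.

```python
import numpy as np

def joint_input(alpha, beta, gamma, tau, theta, phi):
    """Assemble the six input-layer variables of the BPNN (Fig. 12).

    theta is the brace-chord angle; phi locates a point on the intersection weld:
    phi = 0 -> crown root, phi = pi/2 -> saddle point, phi = pi -> crown toe.
    """
    return np.array([alpha, beta, gamma, tau, theta, phi])

# Example: the saddle point of a joint with alpha = 12.5, beta = 0.6, gamma = 15,
# tau = 0.8 and theta = 60 degrees.
x = joint_input(12.5, 0.6, 15.0, 0.8, np.deg2rad(60.0), np.pi / 2.0)
```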

Generally, neural networks with too few hidden neurons cannot capture small changes in the general trend of the predicted responses. Patterson et al. (1996) and Wythoff et al. (1993) recommend determining the number of hidden layers or hidden neurons by trial and error, that is, beginning with a small network and adding neurons and connections until the performance is satisfactory. In this paper, ten neurons are used in the hidden layer.

Three hundred FEA results were collected as training samples, and another 30 FEA results were used as target data to verify the accuracy of the prediction program. To improve the learning efficiency and predictive ability of the BPNN, the input and output data are normalized to the range of 0 to 1 using simple linear algebraic equations before training starts.
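A minimal sketch of this linear scaling is given below; the paper states only that the data are linearly mapped to the range 0 to 1, so column-wise min-max scaling is an assumption.

```python
import numpy as np

def normalize(data):
    """Min-max normalization of each column (variable) to the range [0, 1]."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo), lo, hi

def denormalize(scaled, lo, hi):
    """Map normalized values back to the original physical range."""
    return scaled * (hi - lo) + lo
```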

4.2 SCFs prediction results

By learning from the data of the 300 FEA results, the SCFs of CFST Y-joints were predicted by the BPNN. The error distributions between the FEA results and the BPNN predictions for the different activation functions are shown in Fig. 13. The tanh activation function gives better prediction accuracy than the other activation functions, and the prediction accuracy for the chord is better than that for the brace.

Fig. 13 Error distribution from BPNN prediction

Based on the BPNN prediction results with the tanh activation function, the numbers of samples at different error levels are shown as histograms in Fig. 14, where the X-axis is the error level and the Y-axis is the number of predicted samples. For the SCFs of the chord, more than 95% of the BPNN predictions are greater than the FEA results, and more than 90% of the predictions have an error of less than 20%. For the SCFs of the brace, more than 95% of the BPNN predictions are greater than the FEA results, and nearly 80% of the predictions have an error of less than 20%.

Fig. 14 Error distribution from BPNN prediction with tanh function
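As an illustration, error statistics of the kind reported in Fig. 14 can be computed as follows; the relative-error definition used here (deviation from the FEA value divided by the FEA value) is an assumption.

```python
import numpy as np

def error_summary(scf_fea, scf_bpnn, tol=0.20):
    """Relative-error statistics of the kind reported in Fig. 14.

    scf_fea  : array of SCFs from the FE analyses (reference values)
    scf_bpnn : array of SCFs predicted by the trained BPNN
    Returns the fraction of predictions above the FEA value and the
    fraction with a relative error below `tol`.
    """
    rel_err = (scf_bpnn - scf_fea) / scf_fea
    frac_above_fea = np.mean(rel_err > 0.0)           # predictions greater than FEA
    frac_within_tol = np.mean(np.abs(rel_err) < tol)  # predictions within +/- 20%
    return frac_above_fea, frac_within_tol
```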

Figure 15 shows the variation of the SCFs of CFST Y-joints with the different input variables. Comparing the FEA results with the BPNN predictions in Fig. 15(a), the trend of the SCFs with φ agrees well from the crown root (φ = 0°) to the crown toe (φ = 180°) for α = 12.5, β = 0.6, γ = 15, τ = 0.8 and θ = 60°, although the bimodal shape of the FEA results is not well reproduced by the BPNN predictions. In Fig. 15(b), the trend of the SCFs with β agrees well for α = 12.5, γ = 20, τ = 0.8, θ = 60° and φ = 150°. In Fig. 15(c), the trend of the SCFs with γ agrees well for α = 12.5, β = 0.6, τ = 0.8, θ = 60° and φ = 150°. In Fig. 15(d), the trend of the SCFs with τ agrees well for α = 12.5, β = 0.6, γ = 17.5, θ = 60° and φ = 150°.

Fig. 15 SCFs of CFST Y-joints with different input variables

5 Conclusion

(1) Filling the chord with concrete changes the stiffness distribution of the tubular joint. As the bending stiffness and the radial stiffness of the chord increase, the SCFs of CFST Y-joints become significantly smaller than those of CHS Y-joints.

(2) In the CFST Y-joint, the rigidity at the crown points may be close to or even exceed that at the saddle point. The position of the maximum SCF moves from the saddle point in the CHS Y-joint to a point between the crown toe and the saddle point in the CFST Y-joint.

(3) A three-layer BPNN with six input layer units and two output layer units was established; after learning from the data of 300 FEA results, it can be used to predict the SCF distributions of the chord and brace in CFST Y-joints.

(4) Compared with the other two activation functions, the tanh function gives better prediction accuracy, and the prediction accuracy for the chord is better than that for the brace. With the tanh function, more than 95% of the BPNN predictions are greater than the FEA results and more than 85% of the predictions have an error of less than 20%.

(5) Comparing the FEA results with the BPNN predictions, the variation trends of the SCFs with the different input variables are generally consistent, and the prediction accuracy of the BPNN can be improved by increasing the number of training samples in the future. The BPNN can be a reliable alternative to complicated SCF equations for predicting the SCF distribution along the intersection line of CFST Y-joints.

Availability of data and materials

The data and materials in current study are available from the corresponding author on reasonable request.

Abbreviations

SCF: Stress concentration factor
CFST: Concrete-filled steel tube
BPNN: Back-propagation neural network
FE: Finite element
FEA: Finite element analysis
CHS: Circular hollow section
HSS: Hot-spot stress
IIW: International Institute of Welding
API: American Petroleum Institute
DNV: Det Norske Veritas
CIDECT: Committee for International Development and Education on Construction of Tubular structures
AI: Artificial intelligence
ANN: Artificial neural network

References


Acknowledgments

The research reported herein has been conducted as part of the research projects granted by the National Natural Science Foundation of China (NSFC 52078424). The authors would like to thank NSFC for the financial support of this research.

Funding

This study is supported by the National Natural Science Foundation of China (No. 52078424).

Author information

Authors and Affiliations

Authors

Contributions

Lin Xiao: developing the BPNN analysis program, performing FEA and parametric study, writing the original draft. Xing Wei: providing guidance in methodology development, substantially revising the draft, and providing financial support. Junming Zhao: performing FEA and parametric study, collecting and analyzing the finite element calculation results. Zhirui Kang: collecting and analyzing the finite element calculation results. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Xing Wei.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Xiao, L., Wei, X., Zhao, J. et al. Artificial Neural network-based prediction of SCF at the Intersection of CFST Y-joints. ABEN 3, 6 (2022). https://doi.org/10.1186/s43251-022-00056-z



  • DOI: https://doi.org/10.1186/s43251-022-00056-z

Keywords