Hindawi Publishing Corporation
The Scientific World Journal
Volume 2014, Article ID 193284, 9 pages
http://dx.doi.org/10.1155/2014/193284

Research Article

Structural Damage Identification Based on Rough Sets and Artificial Neural Network

Chengyin Liu,1,2 Xiang Wu,3 Ning Wu,1 and Chunyu Liu1

1 Harbin Institute of Technology, Shenzhen Graduate School, Shenzhen 518055, China
2 Key Laboratory of C&PC Structures, Southeast University, Nanjing 211189, China
3 State Key Laboratory of Robotics and System, Harbin Institute of Technology, Dazhi Street, Nangang District, Harbin 150001, China

Correspondence should be addressed to Ning Wu; [email protected]

Received 27 February 2014; Accepted 22 April 2014; Published 11 June 2014

Academic Editor: Hua-Peng Chen

Copyright © 2014 Chengyin Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper investigates potential applications of rough set (RS) theory and the artificial neural network (ANN) method to structural damage detection. An information entropy based discretization algorithm in RS is applied for dimension reduction of the original damage database obtained from finite element analysis (FEA). The proposed approach is tested on a 14-bay steel truss model for structural damage detection. The experimental results show that damage features can be extracted efficiently by the combined use of the RS and ANN methods even when the volume of measurement data is enormous and contains uncertainties.

1. Introduction

Structures are vulnerable to actions such as impact, earthquakes, and hurricanes. It is therefore crucial for decision makers to know the damage and health status of a structure in time, so that the necessary maintenance can be carried out. Recently, more and more innovative structural damage detection techniques have been applied to existing structures for structural health monitoring (SHM), especially large-scale structures, and many of the testing methods are nondestructive [1–3]. Attention has been drawn to how to use the available measurement data to produce results with less uncertainty regardless of measurement noise and environmental variation, such as changing temperature, moisture, and load conditions [4]. Many approaches have been proposed to address the problem of inaccurate measurements; for example, Sohn et al. proposed a probabilistic damage detection methodology to reduce measurement noise [5]. Worden and Dulieu-Barton investigated the influence of uncertainties both in practical measurement and in the finite element model used for damage detection [6], and a statistical method was proposed to resolve the inaccuracy resulting from modeling and measurement errors [7]. In recent studies, intelligent information processing techniques such as the autoregressive integrated moving average model, linear regression, ANN methods, and grey models have been introduced into SHM applications.

ANN methods have been used extensively in structural damage identification. In practice, damage indexes are first extracted using signal processing techniques such as the wavelet transform and Fourier analysis; ANN models are then built to detect structural damage from those indexes. It is widely accepted that ANN methods help to achieve greater accuracy in structural damage detection. However, ANN has two obvious drawbacks when applied to large amounts of data [8, 9]. The first is that training an ANN model with a large amount of data is time consuming, and the second is that an ANN cannot provide an analytical solution. Consequently, a reliable ANN model that can select the relevant factors automatically from historical data is required.

As a useful mathematical tool, RS theory exploits indiscernibility relations and data pattern comparison based on the concept of an information system with indiscernible data, where the data are uncertain or inconsistent. The characteristic of RS theory is to create approximate descriptions of objects for data analysis, optimization, and recognition, and it does not require prior knowledge. RS theory can therefore evaluate the importance of various attributes and retain the key attributes with no additional knowledge except for the supplied data [10]. To date, the RS approach has been applied in many domains, such as machine fault diagnosis, stock market forecasting, decision support systems, medical diagnosis, data filtration, and software engineering [11–14].

The classical RS model can only process categorical features with discrete values. For RS based damage index selection in structural damage identification, a discretization algorithm is therefore required to partition the value domains of real-valued variables into intervals that serve as categorical features. Many discretization methods for numerical attributes have been proposed in recent years, including the equal distance, equal frequency, and maximum entropy methods [9]. However, discretization of numerical attributes may cause information loss because the degrees of membership of numerical values to discretized values are not considered [15, 16]. Recently, a discretization algorithm based on information entropy has been reported to be a promising mechanism for measuring uncertainty in RS. Information entropy has been widely employed in RS, and different information entropy models have been proposed. In particular, Düntsch and Gediga presented a well-justified information entropy model for the measurement of uncertainty in RS [17].

A novel application integrating RS theory and ANN is presented in this paper for structural health monitoring and damage detection, particularly for problems with large amounts of measurement data with uncertainties. The objective of the paper is to study how the RS and ANN techniques can be combined to detect structural damage. The method consists of three stages. First, RS is applied to find the relevant factors among the structural modal parameters derived from structural vibration responses. Then, the relevant information is fed to the ANN as input. Finally, a synthesized RS-ANN model based on the data-fusion technique is used to assess the structural damage.

This paper is organized as follows. In Section 2, a brief introduction to the fundamentals of RS with information entropy is presented, and an overview of the ANN method is given in Section 3. A three-stage damage detection model using the combined RS and ANN technique is presented in Section 4. A laboratory experiment on a 14-bay truss model is carried out to test and validate the proposed method in Section 5. Finally, concluding remarks are summarized in Section 6.


2. Information Entropy Based RS Theory

Table 1: Decision table general form.

Target | Condition attributes C_1 ... C_n | Decision attribute d
x_1 | u_{1,1} ... u_{1,n} | v_1
x_2 | u_{2,1} ... u_{2,n} | v_2
... | ... | ...
x_m | u_{m,1} ... u_{m,n} | v_m

RS theory was proposed by Pawlak [18] as a new mathematical tool for reasoning about vagueness, uncertainty, and imprecise information. In this section, we introduce the concepts of decision table, discretization algorithm, and information entropy in RS theory and explain their relationships.

2.1. RS Theory. The following definitions are used.


Definition 1. A decision table is a knowledge representation system in RS theory given by a quadruple $(X, R, V, f)$, where $X$ is a set of targets and $R$ is a set of attributes, $R = C \cup D$; $C$ and $D$ are the condition attribute set and the decision attribute set, respectively. $V = \cup V_r$ is the set of attribute value ranges, where $V_r$ is the range of attribute $r$. $f : X \times R \to V$ is an information function, which assigns a value of each attribute to each target. Table 1 shows a typical decision table.

Definition 2. Let $X$ be a domain of discourse and let $P$ and $Q$ be equivalence relations on $X$. The $P$-positive region of $Q$ is the union of all objects of $X$ that can be classified into equivalence classes of $X/Q$ using the knowledge $X/P$; that is,

$$\mathrm{POS}_P(Q) = \bigcup_{Z \in X/Q} P(Z), \qquad (1)$$

where $P(Z)$ denotes the $P$-lower approximation of $Z$.

Definition 3. Let $P$ and $Q$ be equivalence relations on $X$. If (2) is satisfied, then $r \in P$ is said to be $Q$-dispensable in $P$; otherwise, $r \in P$ is $Q$-indispensable in $P$. If every $r \in P$ is $Q$-indispensable in $P$, then $P$ is said to be independent with respect to $Q$:

$$\mathrm{POS}_P(Q) = \mathrm{POS}_{P - \{r\}}(Q). \qquad (2)$$

Definition 4. If $S \subseteq P$ is $P$-independent and $\mathrm{POS}_S(Q) = \mathrm{POS}_P(Q)$, then $S$ is said to be a $Q$-reduct of $P$, denoted $\mathrm{RED}_Q(P)$, and the union of all $Q$-indispensable attributes of $P$ is called the $Q$-core of $P$, denoted $\mathrm{CORE}_Q(P)$. The two notions are related by

$$\mathrm{CORE}_Q(P) = \bigcap \mathrm{RED}_Q(P). \qquad (3)$$
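To make Definitions 2–4 concrete, the sketch below computes a positive region and the attribute subsets that preserve it for a small hypothetical decision table. The rows, attribute names, and the omission of the independence (minimality) check of Definition 3 are our simplifications, not part of the paper.

```python
from itertools import combinations

# Hypothetical toy decision table (Definition 1): condition attributes C1-C3
# and a decision attribute d; the rows are illustrative, not from the paper.
rows = [
    {"C1": 1, "C2": 0, "C3": 1, "d": "damaged"},
    {"C1": 1, "C2": 0, "C3": 0, "d": "damaged"},
    {"C1": 0, "C2": 1, "C3": 1, "d": "intact"},
    {"C1": 0, "C2": 1, "C3": 0, "d": "intact"},
]

def partition(attrs):
    """Equivalence classes of the universe induced by a set of attributes."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(classes.values())

def positive_region(p_attrs, q_attrs):
    """POS_P(Q) of eq. (1): union of P-classes fully contained in some Q-class."""
    q_classes = partition(q_attrs)
    pos = set()
    for c in partition(p_attrs):
        if any(c <= q for q in q_classes):
            pos |= c
    return pos

full_pos = positive_region(["C1", "C2", "C3"], ["d"])
# Subsets preserving the positive region (candidate reducts, Definition 4);
# the minimality check is omitted for brevity.
preserving = [s for k in (1, 2, 3) for s in combinations(["C1", "C2", "C3"], k)
              if positive_region(list(s), ["d"]) == full_pos]
print(full_pos, preserving)
```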

2.2. Discretization Algorithm Based on Information Entropy. Let $U \subseteq X$ be a subset with $|U|$ instances, and let $k_j$ be the number of instances in $U$ whose decision value is $j$ ($j = 1, 2, \ldots, r(d)$). The information entropy of this subset is

$$H(U) = -\sum_{j=1}^{r(d)} p_j \log_2 p_j, \qquad p_j = \frac{k_j}{|U|}. \qquad (4)$$

In general, $H(U) \ge 0$. A small information entropy indicates that a few decision values predominate and the complexity is low; in particular, if all the decision values are the same, $H(U) = 0$.
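As a quick illustration of (4), the snippet below (ours; the decision labels are invented) computes H(U) for a toy decision column.

```python
from collections import Counter
from math import log2

# Toy decision column for a subset U; the labels are hypothetical damage classes.
decisions = ["bay2", "bay2", "bay2", "bay5", "bay7"]
counts = Counter(decisions)
H = -sum((k / len(decisions)) * log2(k / len(decisions)) for k in counts.values())
print(f"H(U) = {H:.3f} bits")  # equals 0 only when every decision in U is the same
```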


Table 2: Structural damage database.

Case | Bay | Position | Degree | f1 | f2 | f3 | ... | c1 | ... | c13
1 | 2 | 1 | 5% | 8.76 | 32.34 | 61.57 | ... | 0.006 | ... | 0.006
2 | 2 | 1 | 10% | 8.73 | 32.31 | 61.49 | ... | 0.006 | ... | 0.006
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
684 | 13 | 3 | 95% | 7.84 | 24.53 | 52.32 | ... | 0.006 | ... | 0.018

(f1–f3: first three natural frequencies; c1–c13: mode curvature components 1–13.)

Table 3: Minimum property set 1 after reduction.

Case | Span | Position | Degree | f1 | f2 | f3 | s1 | ... | s12
1 | 2 | 1 | 5% | 8.76 | 32.34 | 61.57 | 0.003 | ... | 0.003
2 | 2 | 1 | 10% | 8.73 | 32.31 | 61.49 | 0.004 | ... | 0.003
... | ... | ... | ... | ... | ... | ... | ... | ... | ...
684 | 13 | 3 | 95% | 7.84 | 24.53 | 52.32 | 0.003 | ... | 0.018

(s1–s12: first order strain mode components 1–12.)

Table 4: Minimum property set 2 after reduction.

Case | Span | Position | Degree | f1 | f2 | f3 | s1 | ... | s12
1 | 2 | 1 | 5% | 8.76 | 32.34 | 61.57 | 0.009 | ... | −0.006
2 | 2 | 1 | 10% | 8.73 | 32.31 | 61.49 | 0.012 | ... | −0.006
... | ... | ... | ... | ... | ... | ... | ... | ... | ...
684 | 13 | 3 | 95% | 7.84 | 24.53 | 52.32 | 0.006 | ... | −0.036

(s1–s12: second order strain mode components 1–12.)

Table 5: Minimum property set 3 after reduction.

Case | Span | Position | Degree | f1 | f2 | f3 | s1 | ... | s12
1 | 2 | 1 | 5% | 8.76 | 32.34 | 61.57 | 0.009 | ... | 0.009
2 | 2 | 1 | 10% | 8.73 | 32.31 | 61.49 | 0.015 | ... | 0.009
... | ... | ... | ... | ... | ... | ... | ... | ... | ...
684 | 13 | 3 | 95% | 7.84 | 24.53 | 52.32 | 0.009 | ... | 0.054

(s1–s12: third order strain mode components 1–12.)

Table 6: Sections of each attribute set and condition attributes.

Attribute set | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12
Set 1, DB | 3 | 2 | 5 | 5 | 6 | 5 | 7 | 7 | 5 | 5 | 7 | 7 | 5 | 6 | 5
Set 2, DB | 3 | 2 | 5 | 9 | 7 | 8 | 9 | 10 | 11 | 11 | 9 | 9 | 8 | 7 | 9
Set 3, DB | 3 | 2 | 5 | 8 | 7 | 7 | 8 | 10 | 10 | 10 | 10 | 8 | 7 | 7 | 8
Set 1, DP | 4 | 10 | 10 | 3 | 5 | 2 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 5 | 3
Set 2, DP | 4 | 10 | 10 | 1 | 2 | 4 | 4 | 3 | 5 | 5 | 3 | 4 | 4 | 2 | 1
Set 3, DP | 4 | 10 | 10 | 1 | 1 | 1 | 5 | 4 | 3 | 3 | 4 | 5 | 1 | 1 | 1

(f1–f3: natural frequencies; s1–s12: strain mode components; DB: discretized for damage bay; DP: discretized for damage position.)

Table 7: Attribute set 1 rules generation for damage bay.

Case | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | DB
1 | 3 | 2 | 3 | 4 | 3 | 3 | 2 | 3 | 3 | 3 | 4 | 2 | 3 | 3 | 3 | 2
2 | 3 | 2 | 3 | 5 | 3 | 3 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 3 | 2 | 2
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
139 | 3 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 3 | 3 | 5 | 12
140 | 2 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 5 | 12

(f1–f3: natural frequencies; s1–s12: first order strain mode components; DB: damage bay.)


Table 8: Attribute set 2 rules generation for damage bay.

Case | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | DB
1 | 3 | 2 | 3 | 1 | 1 | 2 | 3 | 2 | 3 | 8 | 6 | 6 | 6 | 5 | 6 | 2
2 | 3 | 2 | 3 | 1 | 2 | 2 | 3 | 2 | 3 | 8 | 6 | 6 | 6 | 5 | 6 | 2
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
194 | 3 | 1 | 1 | 6 | 5 | 6 | 6 | 7 | 7 | 3 | 3 | 4 | 3 | 3 | 1 | 12
195 | 2 | 1 | 1 | 6 | 5 | 6 | 6 | 7 | 7 | 3 | 3 | 5 | 3 | 4 | 1 | 12

(f1–f3: natural frequencies; s1–s12: second order strain mode components; DB: damage bay.)

Table 9: Attribute set 3 rules generation for damage bay.

Case | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | DB
1 | 3 | 2 | 3 | 7 | 5 | 5 | 6 | 7 | 9 | 5 | 3 | 1 | 1 | 1 | 1 | 2
2 | 3 | 2 | 3 | 7 | 5 | 5 | 6 | 7 | 10 | 5 | 3 | 1 | 1 | 1 | 1 | 2
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
229 | 3 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 5 | 10 | 7 | 6 | 5 | 5 | 8 | 12
230 | 2 | 1 | 1 | 2 | 1 | 1 | 1 | 3 | 6 | 10 | 7 | 6 | 5 | 5 | 8 | 12

(f1–f3: natural frequencies; s1–s12: third order strain mode components; DB: damage bay.)

For a breakpoint $c_i^a$ of attribute $a$, let $l_j^U(c_i^a)$ denote the number of instances in $U$ with decision value $j$ ($j = 1, 2, \ldots, r(d)$) whose value of $a$ is less than $c_i^a$, and let $r_j^U(c_i^a)$ denote the corresponding number of instances whose value of $a$ is greater than $c_i^a$. Let

$$l^U(c_i^a) = \sum_{j=1}^{r(d)} l_j^U(c_i^a), \qquad r^U(c_i^a) = \sum_{j=1}^{r(d)} r_j^U(c_i^a). \qquad (5)$$

The breakpoint $c_i^a$ therefore divides the set $U$ into two subsets $X_l$ and $X_r$. Let

$$H(X_l) = -\sum_{j=1}^{r(d)} p_j \log_2 p_j, \quad p_j = \frac{l_j^U(c_i^a)}{l^U(c_i^a)}, \qquad H(X_r) = -\sum_{j=1}^{r(d)} q_j \log_2 q_j, \quad q_j = \frac{r_j^U(c_i^a)}{r^U(c_i^a)}. \qquad (6)$$

The information entropy of the breakpoint $c_i^a$ with respect to the set $U$ is then

$$H^U(c_i^a) = \frac{|X_l|}{|X|} H(X_l) + \frac{|X_r|}{|X|} H(X_r). \qquad (7)$$

Assume that $L = \{Y_1, Y_2, \ldots, Y_m\}$ is the set of equivalence classes selected from the decision table; the information entropy of a new breakpoint $c \notin P$ can be written as

$$H(c, L) = H^{Y_1}(c) + H^{Y_2}(c) + \cdots + H^{Y_m}(c). \qquad (8)$$

Let $P$ be the set of chosen breakpoints, $L$ the set of equivalence classes induced by the breakpoint set $P$, $S$ the set of initial candidate breakpoints, and $H$ the information entropy of the decision table. The discretization algorithm can then be expressed as follows.

Step 1. $P = \emptyset$; $L = \{X\}$; $H = H(X)$.

Step 2. For each $c \in S$, calculate $H(c, L)$.

Step 3. If $H \le \min\{H(c, L)\}$, stop.

Step 4. Select the breakpoint $c_{\min}$ that minimizes $H(c, L)$ and add it to $P$; set $H = H(c_{\min}, L)$ and $S = S - \{c_{\min}\}$.

Step 5. For every equivalence class $U \in L$, if $c_{\min}$ divides $U$ into $X_1$ and $X_2$, delete $U$ from $L$ and add $X_1$ and $X_2$ to $L$.

Step 6. If every equivalence class in $L$ has the same decision value, stop; otherwise go to Step 2.
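The sketch below implements our reading of Steps 1–6 for a single numerical attribute. The candidate-breakpoint choice (midpoints between consecutive distinct sorted values), the function names, and the toy data are assumptions, not taken from the paper.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(U) of eq. (4) for a list of decision values."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values()) if n else 0.0

def cut_entropy(cut, block):
    """H^U(c) of eq. (7): size-weighted entropy of the two halves a cut creates."""
    left = [d for v, d in block if v <= cut]
    right = [d for v, d in block if v > cut]
    n = len(block)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

def discretize(values, decisions):
    """Greedy breakpoint selection for one numerical attribute (Steps 1-6)."""
    data = sorted(zip(values, decisions))
    # Initial candidate breakpoints S: midpoints between distinct sorted values.
    cand = [(a + b) / 2 for (a, _), (b, _) in zip(data, data[1:]) if a != b]
    chosen, blocks = [], [data]                      # P = empty set, L = {X}
    H = entropy(decisions)                           # H = H(X)
    while cand and any(len({d for _, d in b}) > 1 for b in blocks):
        # H(c, L) of eq. (8): sum of eq. (7) over the current equivalence classes.
        scores = {c: sum(cut_entropy(c, b) for b in blocks) for c in cand}
        best = min(scores, key=scores.get)
        if H <= scores[best]:                        # Step 3: no further gain
            break
        H = scores[best]                             # Step 4
        chosen.append(best)
        cand.remove(best)
        blocks = [part for b in blocks for part in   # Step 5: split the classes
                  ([(v, d) for v, d in b if v <= best],
                   [(v, d) for v, d in b if v > best]) if part]
    return sorted(chosen)                            # Step 6 ends the loop

# Toy first-frequency column with the damaged bay as the decision attribute.
print(discretize([8.76, 8.73, 8.41, 7.98, 7.84],
                 ["bay2", "bay2", "bay7", "bay13", "bay13"]))
```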

3. Artificial Neural Network (ANN)

An artificial neural network (ANN) is an information processing paradigm inspired by biological nervous systems such as the brain. Although ANNs model the mechanism of the brain, they have no analytical functional form, and therefore ANNs are data based rather than model based. An ANN is usually composed of a large number of highly interconnected processing elements (neurons) working in unison to solve a specific problem. The ANN used in this study is arranged in three layers of neurons, namely, the input, hidden, and output layers. The input layer introduces the model inputs, and the middle layer of hidden units feeds into the output layer through variable weight connections. The ANN learns by adjusting the values of these weights through a back-propagation algorithm that permits error corrections to be fed back through the layers; the output layer provides the estimates of the network. ANNs are renowned for their ability to learn and generalize from example data, even when the data are noisy and incomplete. This ability has led to investigations into the application of ANNs to automated knowledge acquisition. They also help to discern patterns among input data, require fewer assumptions, and achieve a higher degree of prediction accuracy.
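A minimal NumPy sketch of a three-layer back-propagation network of the kind described above; the layer sizes, sigmoid hidden units, linear output layer, learning rate, and random training data are placeholders, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder training set: 15 damage indexes in, 3 damage quantities out.
X = rng.normal(size=(200, 15))
Y = rng.normal(size=(200, 3))

# Input -> hidden (sigmoid) -> output (linear) weights and biases.
W1, b1 = rng.normal(scale=0.1, size=(15, 20)), np.zeros(20)
W2, b2 = rng.normal(scale=0.1, size=(20, 3)), np.zeros(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.01
for epoch in range(500):
    H = sigmoid(X @ W1 + b1)            # forward pass through the hidden layer
    out = H @ W2 + b2                   # network estimate
    err = out - Y                       # output error to be back-propagated
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * H * (1 - H)       # error fed back through the hidden layer
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2; b2 -= lr * err.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * dH.mean(axis=0)
```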


Table 10: Attribute set 1 rules generation for damage position.

Case | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | DP
1 | 3 | 10 | 8 | 2 | 3 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 3 | 1 | 1
2 | 3 | 9 | 7 | 3 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
251 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 3
252 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 3

(f1–f3: natural frequencies; s1–s12: first order strain mode components; DP: damage position.)

Table 11: Attribute set 2 rules generation for damage position.

Case | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | DP
1 | 3 | 10 | 8 | 1 | 1 | 1 | 2 | 1 | 1 | 3 | 1 | 3 | 3 | 1 | 1 | 1
2 | 3 | 9 | 7 | 1 | 1 | 1 | 2 | 1 | 1 | 3 | 1 | 3 | 3 | 1 | 1 | 1
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
253 | 1 | 1 | 1 | 1 | 1 | 3 | 3 | 1 | 3 | 1 | 1 | 2 | 2 | 1 | 1 | 3
254 | 1 | 1 | 1 | 1 | 1 | 3 | 3 | 1 | 3 | 1 | 1 | 3 | 3 | 1 | 1 | 3

(f1–f3: natural frequencies; s1–s12: second order strain mode components; DP: damage position.)


4. The Hybrid Method

A common advantage of RS and ANN is that they do not need any additional information about the data, such as probabilities in statistics or grades of membership in fuzzy-set theory [19]. RS has proved very effective in many practical applications. However, in RS theory the deterministic mechanism for describing errors is too straightforward [20], so the rules generated by RS are often unstable and have low classification accuracy; consequently, RS alone cannot identify structural damage with high accuracy. ANN is generally considered to be among the most powerful classifiers owing to its low classification-error rate and robustness to noise, but its knowledge is buried in its structure and weights [21, 22], and it is often difficult to extract rules from a trained ANN.

The combination of RS and ANN is therefore natural because of their complementary features. One typical approach is to use RS as a preprocessing tool for the ANN [12, 23]. RS theory provides useful techniques to remove irrelevant and redundant attributes from a large database with many attributes, whereas ANN can approximate complex functions and possesses good robustness to noise. In practice, SHM systems produce vast amounts of sensor data that are typically updated every few minutes, and one of the most important capabilities of RS theory is the reduction of the decision table in terms of both attributes and objects, thereby reducing redundancy.

This paper develops the structural damage model by using the RS methodology to reduce the dimension of the structural damage database before applying the ANN method. First, the following reductions are derived based on RS theory: attribute reduction, object reduction, and rule generation. Object reduction removes redundant objects (rows) from the database, and rule generation produces If-Then rules from the database. The ANN is then trained to predict the damage conditions, as sketched below.


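The three-stage workflow can be pictured as follows; the attribute indices standing in for the RS reduct and the use of scikit-learn's MLPRegressor in place of the authors' back-propagation network are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stage 1 (placeholder): a full damage database of condition attributes.
X_full = rng.normal(size=(300, 121))     # e.g. 121 damage-index columns
y = rng.normal(size=300)                 # e.g. damage degree as the target

# Stage 2 (stand-in for the RS reduct): keep only the attribute subset that the
# rough-set reduction would retain; the column indices here are purely illustrative.
kept = [0, 1, 2, 50, 51, 52, 53]         # e.g. three frequencies plus strain modes
X_reduced = X_full[:, kept]

# Stage 3: train the ANN on the reduced decision table only.
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X_reduced, y)
print(model.predict(X_reduced[:2]))
```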

5. Experimental Validation

5.1. Test Structure. The test structure is a steel truss with 14 bays, shown in Figure 1. Each bay is 585 mm long, 490 mm wide, and 350 mm high. In total, the steel truss has 52 longitudinal rods, 50 crosswise rods, and 54 diagonal rods. Each rod is forged from steel pipe with a hollow circular section, an outer diameter of 18 mm, and an inner diameter of 12 mm. The node boards use equilateral angle steel, and the rods are bolted to the node boards. Damage to the structure is simulated with two kinds of reduced-thickness rods, one 2 mm thick and the other 1 mm thick. Accelerometers are mounted on each node of the structure as shown in Figure 2. The sampling interval of measurements retrieved from the data acquisition system is 5 min.

5.2. Establishment of Damage Database. An FE model was built to simulate the test structure, as shown in Figure 3. In this study, three types of damage condition are investigated: damage bay, damage position, and damage degree. Since the end bays have no upper rod, the damaged bay starts from the second bay, so 12 bays are assumed to be damaged. In these bays, damage positions in the upper rod, diagonal rod, and bottom rod are all considered. The damage degree ranges from 5% to 95% in steps of 5%, giving 19 different damage degrees. Combining these three damage factors yields 684 damage conditions in total.
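The case count follows directly from the three factors; a quick enumeration (the variable names are ours) confirms that 12 bays × 3 positions × 19 degrees = 684.

```python
from itertools import product

bays = range(2, 14)                            # 12 damaged bays (bay 2 to bay 13)
positions = ["upper", "diagonal", "bottom"]    # coded 1, 2, 3 in Table 2
degrees = [d / 100 for d in range(5, 100, 5)]  # 5% ... 95%, 19 levels

cases = list(product(bays, positions, degrees))
print(len(cases))                              # 684
```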


Table 12: Attribute set 3 rules generation for damage position.

Case | f1 | f2 | f3 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | DP
1 | 3 | 10 | 8 | 1 | 1 | 1 | 4 | 4 | 3 | 3 | 1 | 1 | 1 | 1 | 1 | 1
2 | 3 | 9 | 7 | 1 | 1 | 1 | 4 | 4 | 3 | 3 | 1 | 1 | 1 | 1 | 1 | 1
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ...
273 | 3 | 2 | 3 | 1 | 1 | 1 | 1 | 1 | 3 | 3 | 4 | 4 | 1 | 1 | 1 | 3
274 | 2 | 2 | 3 | 1 | 1 | 1 | 1 | 2 | 3 | 3 | 4 | 4 | 1 | 1 | 1 | 3

(f1–f3: natural frequencies; s1–s12: third order strain mode components; DP: damage position.)

Table 13: Damage identification by using attribute set 1.

Case | Expected bay | Expected position | Expected degree | Recognized bay | Recognized position | Recognized degree
1 | 7 | upper | 28.8% | 7.32 | 1.28 | 15.96%
2 | 7 | upper | 62.2% | 6.93 | 1.02 | 48.53%
3 | 7 | diagonal | 28.8% | 7.23 | 1.97 | 30.68%
4 | 7 | diagonal | 62.2% | 7.08 | 2.01 | 57.75%
5 | 7 | bottom | 28.8% | 7.11 | 3.33 | 10.84%
6 | 7 | bottom | 62.2% | 7.07 | 3.13 | 20.68%
7 | 5 | upper | 28.8% | 5.12 | 1.17 | 40.52%
8 | 5 | upper | 62.2% | 4.88 | 0.74 | 63.84%
9 | 5 | diagonal | 28.8% | 4.53 | 1.45 | 82.56%
10 | 5 | diagonal | 62.2% | 4.54 | 1.15 | 35.53%
11 | 5 | bottom | 28.8% | 5.16 | 2.63 | 68.45%
12 | 5 | bottom | 62.2% | 5.22 | 3.04 | 72.34%

(Recognized positions are the numeric codes 1 = upper, 2 = diagonal, 3 = bottom.)

Figure 1: Test structure.

Figure 2: Accelerometer.

According to the FEA results, 13 structural damage indexes are extracted, including the first three natural frequencies, the first three strain modes, the first three vibration mode shapes, the modal assurance criterion (MAC), the coordinate modal assurance criterion (COMAC), the curvature mode, and the natural frequency square. These indexes, together with the damage conditions, form a structural damage database (decision table) with 684 rows and 124 columns. Table 2 lists part of the database. Note that in the damage position column, the numbers 1, 2, and 3 represent the upper rod, diagonal rod, and bottom rod, respectively.

5.3. Attribute Reduction. The application of RS to data reduction involves three steps, described below.

5.3.1. Step 1: Reduction of the Decision Table. The damage database is reduced in batches, as shown in Tables 3, 4, and 5. The reduced database shows that the data volume has been greatly reduced, and the core of the database is the first three natural frequencies. To ensure the integrity of the damage indexes, a less aggressive reduction is adopted and additional condition attributes are retained. Three minimum property sets are obtained in total: the first three frequencies with the first order strain mode (set 1), the first three frequencies with the second order strain mode (set 2), and the first three frequencies with the third order strain mode (set 3).

5.3.2. Step 2: Discretization of the Reduced Decision Table. Through the discretization of the three attribute sets, a set of reduced decision tables is obtained. The attribute sets (1, 2, and 3) are discretized according to the decision attributes, the damage bay (DB) and the damage position (DP), respectively. Table 6 summarizes the intervals of each condition attribute resulting from the discretization of the three attribute sets. For the decision attribute of damage bay, there are considerably more intervals for the strain mode condition attributes than for the natural frequency condition attributes, whereas for the decision attribute of damage position there are more intervals for the natural frequency condition attributes than for the strain mode condition attributes. This result demonstrates that the strain mode carries more weight in identifying the damaged bay, while the natural frequency carries more weight in identifying the damage position.

5.3.3. Step 3: Rules Generation. Rules generation is a key step in RS analysis. In this study, the rules are generated from the discretized decision tables in the form of knowledge. According to the exclusive rule extraction method, rows with identical condition and decision attributes are removed, and the simplified decision tables shown in Tables 7, 8, 9, 10, 11, and 12 are obtained. These decision tables demonstrate that every remaining damage case is unique.
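One way (ours, not prescribed by the paper) to picture the generated If-Then rules: each surviving row of a discretized decision table maps interval codes to a damage condition. The values below are taken from the first row of Table 7.

```python
# A discretized row from Table 7 (the keys are our labels; values are interval codes).
row = {"f1": 3, "f2": 2, "f3": 3, "s1": 4, "s2": 3}
decision = {"damage_bay": 2}

conditions = " AND ".join(f"{attr} in interval {code}" for attr, code in row.items())
print(f"IF {conditions} THEN damage_bay = {decision['damage_bay']}")
```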


Table 14: Damage identification by using attribute set 2.

Case | Expected bay | Expected position | Expected degree | Recognized bay | Recognized position | Recognized degree
1 | 7 | upper | 28.8% | 6.24 | 1.34 | 13.41%
2 | 7 | upper | 62.2% | 7.42 | 1.34 | 35.42%
3 | 7 | diagonal | 28.8% | 7.14 | 1.84 | 42.52%
4 | 7 | diagonal | 62.2% | 6.73 | 1.93 | 45.14%
5 | 7 | bottom | 28.8% | 7.24 | 2.04 | 85.31%
6 | 7 | bottom | 62.2% | 7.21 | 3.54 | 51.96%
7 | 5 | upper | 28.8% | 4.76 | 1.42 | 45.15%
8 | 5 | upper | 62.2% | 4.62 | 0.67 | 13.56%
9 | 5 | diagonal | 28.8% | 5.25 | 2.02 | 41.08%
10 | 5 | diagonal | 62.2% | 5.11 | 1.44 | 68.28%
11 | 5 | bottom | 28.8% | 5.62 | 3.52 | 49.31%
12 | 5 | bottom | 62.2% | 6.21 | 3.13 | 25.19%

Table 15: Damage identification by using attribute set 3.

Case | Expected bay | Expected position | Expected degree | Recognized bay | Recognized position | Recognized degree
1 | 7 | upper | 28.8% | 7.22 | 1.17 | 58.74%
2 | 7 | upper | 62.2% | 5.82 | 0.74 | 20.84%
3 | 7 | diagonal | 28.8% | 6.81 | 1.88 | 66.68%
4 | 7 | diagonal | 62.2% | 7.03 | 2.35 | 59.52%
5 | 7 | bottom | 28.8% | 6.81 | 2.63 | 60.39%
6 | 7 | bottom | 62.2% | 7.59 | 3.04 | 63.84%
7 | 5 | upper | 28.8% | 4.84 | 1.14 | 62.56%
8 | 5 | upper | 62.2% | 5.04 | 1.36 | 83.15%
9 | 5 | diagonal | 28.8% | 5.14 | 2.44 | 39.16%
10 | 5 | diagonal | 62.2% | 5.15 | 1.97 | 43.18%
11 | 5 | bottom | 28.8% | 4.91 | 3.74 | 25.44%
12 | 5 | bottom | 62.2% | 4.70 | 3.02 | 72.82%

Figure 3: Test structure FE model.




Tables 7 to 12 show that, after rules generation, the number of rows in each table is reduced to less than half of the original number, and each attribute set yields its own damage identification rules. On average, fewer rules are generated for the damage bay than for the damage position, which indicates that identifying the damaged bay is easier than identifying the damage position.

5.4. Identification of Structural Damage Using ANN. In this section, a back-propagation ANN is applied to the reduced database for further identification of structural damage. The reduced database in terms of attributes can be regarded as the best subset of variables that describes the structural damage database completely. This reduction in the number of attributes decreases the time of the decision-making process and consequently reduces the cost of the analysis. As mentioned above, the three attribute sets are chosen as the input, and the three damage conditions are chosen as the output to train the ANN model. The back-propagation network computes the weights recursively from the last layer backward to the first layer. Using real data obtained from the experimental testing, the experimental measurements are fed into the input layer of the trained ANN to identify the structural damage. The results in Tables 13, 14, and 15 show that the RS method determines the group of input variables and generates the structural damage rule sets before the ANN is used. Although the performance of the ANN model in identifying the damage degree is not very good, the hybrid method proposed in this paper helps to construct a good identification model for structural damage, offering excellent performance in identifying the damaged bay and damage position of the test structure.

6. Conclusions

In this paper, a novel method combining RS and ANN is applied to the identification of structural damage. The study uses RS theory and integrates an inductive reduction algorithm and a discretization algorithm based on information entropy to improve the ANN model for structural damage identification. Through a detailed experimental analysis of a 14-bay truss structure, this paper presents and discusses the conversion of damage indexes into RS objects, the selection of predictor variables, the removal of redundant attributes from the information table, and rules generation. The experimental data are preprocessed and reduced by RS before the ANN is used to identify damage in the truss structure. The identification accuracy is mainly attributed to RS, since it can remove redundant attributes without losing classification information. Furthermore, the improvement in tolerance and accuracy achieved with the proposed method shows the great potential of integrating various techniques to improve the performance of any individual technique.

Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work is supported by the Programme of Introducing Talents of Discipline to Universities (Grant no. B07018), the Natural Science Foundation of China (Grants nos. 60772072 and 51108129), the Key Laboratory of C&PC Structures of the Ministry of Education, the Guangdong Major Science and Technology Plan (Grant no. 2007AA03Z117), the Shenzhen Fundamental Research Project (Grant no. JC201105160538A), and the Shenzhen Overseas Talents Project (Grant no. KQCX20120802140634893).

References

[1] C. R. Farrar, S. W. Doebling, and D. A. Nix, "Vibration-based structural damage identification," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 359, no. 1778, pp. 131–149, 2001.
[2] S. W. Doebling and C. R. Farrar, "The state of the art in structural identification of constructed facilities," Tech. Rep., Los Alamos National Laboratory, Los Alamos, NM, USA, 1999.
[3] S. W. Doebling, C. R. Farrar, and M. B. Prime, "A summary review of vibration-based damage identification methods," Shock and Vibration Digest, vol. 30, no. 2, pp. 91–105, 1998.
[4] C. R. Farrar and K. Worden, "An introduction to structural health monitoring," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 365, no. 1851, pp. 303–315, 2007.
[5] H. Sohn, C. R. Farrar, F. M. Hemez, D. D. Shunk, D. W. Stinemates, and R. N. Brett, "A review of structural health monitoring literature: 1996–2001," Tech. Rep. LA-13976-MS, Los Alamos National Laboratory Report, 2003.
[6] K. Worden and J. M. Dulieu-Barton, "An overview of intelligent fault detection in systems and structures," Structural Health Monitoring, vol. 3, no. 1, pp. 85–98, 2004.
[7] C. R. Farrar, S. W. Doebling, P. J. Cornwell, and E. G. Straser, "Variability of modal parameters measured on the Alamosa Canyon Bridge," in Proceedings of the 15th International Modal Analysis Conference, pp. 257–263, February 1996.
[8] P. Lingras, "Comparison of neofuzzy and rough neural networks," Information Sciences, vol. 110, no. 3-4, pp. 207–215, 1998.
[9] B. S. Ahn, S. S. Cho, and C. Y. Kim, "Integrated methodology of rough set theory and artificial neural network for business failure prediction," Expert Systems with Applications, vol. 18, no. 2, pp. 65–74, 2000.
[10] X. Xiang, J. Zhou, C. Li, Q. Li, and Z. Luo, "Fault diagnosis based on Walsh transform and rough sets," Mechanical Systems and Signal Processing, vol. 23, no. 4, pp. 1313–1326, 2009.
[11] F. E. H. Tay and L. Shen, "Fault diagnosis based on rough set theory," Engineering Applications of Artificial Intelligence, vol. 16, no. 1, pp. 39–43, 2003.
[12] R. Zhou and J. G. Yang, "The research of engine fault diagnosis based on rough sets and support vector machine," Transactions of CSICE, vol. 24, no. 4, pp. 379–383, 2006 (Chinese).
[13] J. R. Li, L. P. Khoo, and S. B. Tor, "RMINE: a rough set based data mining prototype for the reasoning of incomplete data in condition-based fault diagnosis," Journal of Intelligent Manufacturing, vol. 17, no. 1, pp. 163–176, 2006.
[14] Z. Geng and Q. Zhu, "Rough set-based heuristic hybrid recognizer and its application in fault diagnosis," Expert Systems with Applications, vol. 36, no. 2, pp. 2711–2718, 2009.

[15] R. Jensen and Q. Shen, "Semantics-preserving dimensionality reduction: rough and fuzzy-rough-based approaches," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 12, pp. 1457–1471, 2004.
[16] Q. Hu, D. Yu, J. Liu, and C. Wu, "Neighborhood rough set based heterogeneous feature subset selection," Information Sciences, vol. 178, no. 18, pp. 3577–3594, 2008.
[17] I. Düntsch and G. Gediga, "Uncertainty measures of rough set prediction," Artificial Intelligence, vol. 106, no. 1, pp. 109–137, 1998.
[18] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic, 1991.
[19] R. Li and Z.-O. Wang, "Mining classification rules using rough sets and neural networks," European Journal of Operational Research, vol. 157, no. 2, pp. 439–448, 2004.
[20] J. Bazan, A. Skowron, and P. Synak, "Dynamic reducts as a tool for extracting laws from decisions tables," in Proceedings of the Symposium on Methodologies for Intelligent Systems, pp. 346–355, 1994.
[21] M. W. Craven and J. W. Shavlik, "Using neural networks for data mining," Future Generation Computer Systems, vol. 13, no. 2-3, pp. 211–229, 1994.
[22] H. Lu, R. Setiono, and H. Liu, "Effective data mining using neural networks," IEEE Transactions on Knowledge and Data Engineering, vol. 8, no. 6, pp. 957–961, 1996.
[23] R. W. Swiniarski and L. Hargis, "Rough sets as a front end of neural-networks texture classifiers," Neurocomputing, vol. 36, pp. 85–102, 2001.
