Foreign-language translation section:
English original
Mine-hoist fault-condition detection based on
the wavelet packet transform and kernel PCA
Abstract: A new algorithm was developed to correctly identify fault conditions and accurately monitor fault development in a mine hoist. The new method is based on the Wavelet Packet Transform (WPT) and Kernel Principal Component Analysis (KPCA). For nonlinear monitoring systems the key to fault detection is extracting the main features. The wavelet packet transform is a signal-processing technique with excellent time-frequency localization characteristics and is suitable for analysing time-varying or transient signals. KPCA maps the original input features into a higher-dimensional feature space through a nonlinear mapping, and the principal components are then found in that higher-dimensional feature space. The KPCA transformation was applied to extract the main nonlinear features from experimental fault-feature data after the wavelet packet transformation. The results show that the proposed method affords credible fault detection and identification.
Key words: kernel method; PCA; KPCA; fault condition detection
1 Introduction
Because a mine hoist is a very complicated and variable system, it will inevitably develop faults during long-term running under heavy load. This can lead to equipment damage, work stoppage, and reduced operating efficiency, and may even pose a threat to the safety of mine personnel. Therefore, the identification of running faults has become an important component of the safety system. The key technique for hoist condition monitoring and fault identification is extracting information from the features of the monitoring signals and then offering a judgment. However, there are many variables to monitor in a mine hoist, and there are many complex correlations between the variables and the working equipment. This introduces uncertain factors and information manifested in complex forms, such as multiple or associated faults, which makes fault diagnosis and identification considerably difficult [1]. There are currently many conventional methods for extracting mine-hoist fault features, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) [2], and these methods have been applied to the actual process. However, they are essentially linear transformation approaches, whereas the actual monitoring process includes nonlinearity to different degrees. Researchers have therefore proposed a series of nonlinear methods involving complex nonlinear transformations. Furthermore, these nonlinear methods are confined to fault detection: fault variable separation and fault identification are still difficult problems. This paper describes a hoist fault-diagnosis feature-extraction method based on the Wavelet Packet Transform (WPT) and Kernel Principal Component Analysis (KPCA). We extract the features by WPT and then extract the main features using a KPCA transform, which projects low-dimensional monitoring data samples into a high-dimensional space. We then perform a dimension reduction and reconstruction back from the singular kernel matrix, after which the target feature is extracted from the reconstructed nonsingular matrix. In this way the extracted target feature is distinct and stable. By comparing the analyzed data we show that the method proposed in this paper is effective.
2 Feature extraction based on WPT and KPCA
2.1 Wavelet packet transform
The wavelet packet transform (WPT) [3], a generalization of wavelet decomposition, offers a rich range of possibilities for signal analysis. The frequency bands of a hoist-motor signal collected by the sensor system are wide, and the useful information hides within a large amount of data. In general, some frequencies of the signal are amplified and some are depressed by the information. That is to say, these broadband signals contain a large amount of useful information, but the information cannot be obtained directly from the data. The WPT is a fine signal-analysis method that decomposes the signal into many layers and gives a better resolution in the time-frequency domain. After decomposition, the useful information within the different frequency bands is expressed by different wavelet coefficients. The concept of "energy information" is introduced to identify new information hidden in the data; an energy eigenvector is then used to quickly mine the information hiding within the large amount of data. The algorithm is:
Step 1: Perform a 3-layer wavelet packet decomposition of the echo signals and extract the signal characteristics of the eight frequency components, from low to high, in the 3rd layer.
Step 2: Reconstruct the coefficients of the wavelet packet decomposition. Use $S_{3j}$ (j = 0, 1, …, 7) to denote the reconstructed signal of each frequency-band range in the 3rd layer. The total signal can then be denoted as:
$S = \sum_{j=0}^{7} S_{3j}$    (1)
Step 3: Construct the feature vectors of the echo signals of the GPR. When the coupled electromagnetic waves are transmitted underground they meet various inhomogeneous media, so the energy distribution of the echo signals in each frequency band will be different. Assume that the energy corresponding to $S_{3j}$ (j = 0, 1, …, 7) can be represented as $E_{3j}$ (j = 0, 1, …, 7), and denote the magnitudes of the discrete points of the reconstructed signal $S_{3j}$ by $x_{jk}$ (j = 0, 1, …, 7; k = 1, 2, …, n), where n is the length of the signal. Then we can get:
$E_{3j} = \int |S_{3j}(t)|^{2}\,dt = \sum_{k=1}^{n} |x_{jk}|^{2}$    (2)
Given that only a 3-layer wavelet packet decomposition of the echo signals was made, the second-order statistical characteristics of the reconstructed signal are also regarded as a feature vector, in order to describe the change of each frequency component in more detail:
(3)
Step 4: The $E_{3j}$ are often large, so we normalize them. Let $E = \left(\sum_{j=0}^{7} |E_{3j}|^{2}\right)^{1/2}$; the derived feature vector is then, at last:
$T = [E_{30}/E,\ E_{31}/E,\ \ldots,\ E_{37}/E]$    (4)
The signal is decomposed by the wavelet packet transform and the useful characteristic-information feature vectors are then extracted through the process given above. Compared to other traditional methods, such as the Hilbert transform, approaches based on WPT analysis are preferred because of the agility of the process and its systematic decomposition.
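To make the feature-extraction procedure above concrete, the following is a minimal sketch in Python using NumPy and PyWavelets. The db4 mother wavelet and the use of the level-3 packet coefficients (rather than the fully reconstructed band signals) to compute the band energies are illustrative assumptions, not choices stated in the paper.

```python
import numpy as np
import pywt

def wpt_energy_features(signal, wavelet="db4", level=3):
    """3-layer wavelet packet decomposition -> normalized band-energy vector T."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    # The eight level-3 nodes, ordered from low to high frequency.
    nodes = wp.get_level(level, order="freq")
    # Band energies E_3j, here approximated from the packet coefficients.
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    # Normalize so that large absolute energies do not dominate the feature vector.
    total = np.sqrt(np.sum(energies ** 2))
    return energies / total

# Usage with a synthetic vibration-like signal (placeholder, not the paper's data).
t = np.linspace(0.0, 1.0, 1024)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
T = wpt_energy_features(x)
print(T)  # eight normalized band energies, from low to high frequency
```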
2.2 Kernel principal component analysis
Kernel principal component analysis applies kernel methods to principal component analysis [4–5].
The principal components are the diagonal elements obtained after the covariance matrix has been diagonalized. Generally speaking, the first N values along the diagonal, corresponding to the largest eigenvalues, carry the useful information in the analysis. PCA solves for the eigenvalues and eigenvectors of the covariance matrix by solving the characteristic equation [6]:
$\lambda v = C v$    (5)
where finding the eigenvalues $\lambda$ and the eigenvectors $v$ is the essence of PCA.
Let the nonlinear transformation $\Phi: R^{N} \rightarrow F$, $x \mapsto X$, project the original space into the feature space F. The covariance matrix C of the original space then has the following form in the feature space:
$\bar{C} = \frac{1}{M}\sum_{j=1}^{M}\Phi(x_{j})\Phi(x_{j})^{\mathrm{T}}$    (6)
Nonlinear principal component analysis can then be considered as principal component analysis of $\bar{C}$ in the feature space F. Obviously, all the eigenvalues $\lambda$ of $\bar{C}$ and eigenvectors $V \in F \setminus \{0\}$ satisfy $\lambda V = \bar{C}V$. All of the solutions lie in the subspace spanned by $\Phi(x_{1}), \ldots, \Phi(x_{M})$, which transforms the problem into

$\lambda\,(\Phi(x_{k}) \cdot V) = (\Phi(x_{k}) \cdot \bar{C}V), \quad k = 1, 2, \ldots, M$    (7)

and means that there exist coefficients $\alpha_{i}$ (i = 1, 2, …, M) such that
$V = \sum_{i=1}^{M}\alpha_{i}\Phi(x_{i})$    (8)
From Eqs.(6), (7) and (8) we can obtain:
$M\lambda\sum_{i=1}^{M}\alpha_{i}(\Phi(x_{k}) \cdot \Phi(x_{i})) = \sum_{i=1}^{M}\alpha_{i}\sum_{j=1}^{M}(\Phi(x_{k}) \cdot \Phi(x_{j}))(\Phi(x_{j}) \cdot \Phi(x_{i}))$    (9)
where k = 1, 2, …, M. Define A as an M×M matrix whose elements are

$A_{ij} = (\Phi(x_{i}) \cdot \Phi(x_{j}))$    (10)

From Eqs.(9) and (10) we can obtain $M\lambda A\alpha = A^{2}\alpha$, which is equivalent to

$M\lambda\alpha = A\alpha$    (11)

Let $\lambda_{1} \geq \lambda_{2} \geq \ldots \geq \lambda_{M}$ be A's eigenvalues and $\alpha^{1}, \alpha^{2}, \ldots, \alpha^{M}$ the corresponding eigenvectors.
To perform the principal component extraction we only need to calculate the projections of the test points onto the eigenvectors $V^{k}$ in F that correspond to nonzero eigenvalues. Defining this projection as $(V^{k} \cdot \Phi(x))$, it is given by:

$(V^{k} \cdot \Phi(x)) = \sum_{i=1}^{M}\alpha_{i}^{k}(\Phi(x_{i}) \cdot \Phi(x))$    (12)
To extract the principal components in this form we would need to know the exact form of the nonlinear mapping, and as the dimension of the feature space increases the amount of computation grows exponentially. Because Eq.(12) involves only inner-product computations, according to the Hilbert-Schmidt principle we can find a kernel function that satisfies the Mercer conditions and makes $K(x_{i}, x) = (\Phi(x_{i}) \cdot \Phi(x))$. Then Eq.(12) can be written:

$(V^{k} \cdot \Phi(x)) = \sum_{i=1}^{M}\alpha_{i}^{k}K(x_{i}, x)$    (13)
Here $\alpha^{k}$ is an eigenvector of A. In this way the dot products are computed in the original input space, and the specific form of $\Phi(x)$ need not be known: the mapping $\Phi(x)$ and the feature space F are completely determined by the choice of kernel function [7–8].
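A small numerical check of the kernel trick may help here: for a second-order polynomial kernel, the kernel value computed in the input space equals the inner product of explicitly mapped vectors in F. The polynomial kernel is used only for illustration; the experiments in Section 3 use a Gaussian kernel.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for a 2-D input: (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = np.dot(phi(x), phi(y))  # inner product computed in the feature space F
kernel = np.dot(x, y) ** 2         # kernel K(x, y) = (x . y)^2 computed in input space
print(explicit, kernel)            # both print 16.0
```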
2.3 Description of the algorithm
The algorithm for extracting the target features used in fault-diagnosis recognition is:
Step 1: Extract the features by WPT;
Step 2: Calculate the kernel matrix, K, from the samples in the original input space; and
Step 3: Calculate the kernel matrix after zero-mean processing (centering) of the mapped data in the feature space;
Step 4: Solve the characteristic equation $M\lambda\alpha = A\alpha$;
Step 5: Extract the k major components using Eq.(13) to derive a new vector. Because the kernel function used in KPCA meets the Mercer conditions, it can be used instead of the inner product in the feature space. It is not necessary to consider the precise form of the nonlinear transformation: the mapping function can be nonlinear and the dimensions of the feature space can be very high, but the main feature components can still be obtained effectively by choosing a suitable kernel function and kernel parameters [9].
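As an illustration of Steps 2–5, here is a minimal NumPy sketch of KPCA with a Gaussian kernel. The sigma value, the random placeholder data, and the particular normalization of the $\alpha^{k}$ vectors are assumptions made for the example, not values from the paper.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma):
    """Kernel matrix with entries exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_features(X, n_components, sigma=1.2):
    M = X.shape[0]
    A = gaussian_kernel_matrix(X, sigma)              # Step 2: kernel matrix in input space
    ones = np.ones((M, M)) / M
    A_c = A - ones @ A - A @ ones + ones @ A @ ones   # Step 3: zero-mean (centering) in F
    eigvals, eigvecs = np.linalg.eigh(A_c)            # Step 4: eigenproblem of the kernel matrix
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the k largest eigenvalues
    eigvals, alphas = eigvals[order], eigvecs[:, order]
    alphas = alphas / np.sqrt(np.maximum(eigvals, 1e-12))  # scale the alpha^k vectors
    return A_c @ alphas                               # Step 5: projections as in Eq.(13)

# Usage with placeholder data standing in for the WPT fault features.
X = np.random.randn(300, 16)
Z = kpca_features(X, n_components=8)
print(Z.shape)  # (300, 8)
```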
3 Results and discussion
The most common faults of a mine hoist reveal themselves in the frequency content of the equipment's vibration signals, so the experiment used the vibration signals of a mine hoist as test data. The collected vibration signals were first processed by the wavelet packet transform. Then, by observing the different time-frequency energy distributions in one level of the wavelet packet and extracting the features of the running motor, we obtained the original data sheet shown in Table 1. The fault-diagnosis model is used for fault identification or classification.
Experimental testing was conducted in two parts. The first part compared the performance of KPCA and PCA for feature extraction from the original data, namely the distribution of the projections of the main components of the tested fault samples. The second part compared the performance of classifiers constructed after extracting features by KPCA or PCA; the minimum-distance and nearest-neighbour criteria were used for the classification comparison, which also tests the KPCA and PCA performance. In the first part of the experiment, 300 fault samples were used to compare KPCA and PCA feature extraction. To simplify the calculations a Gaussian kernel function was used:
$K(x, y) = \exp\!\left(-\frac{\|x - y\|^{2}}{2\sigma^{2}}\right)$
The value of the kernel parameter, $\sigma$, was varied between 0.8 and 3 in steps of 0.4 once the number of reduced dimensions had been fixed, and the best correct classification rate at that dimension is reported as the accuracy of the classifier with the best classification results. In the second part of the experiment, the classifiers' recognition rates after feature extraction were examined. Comparisons were done in two ways: the minimum-distance and the nearest-neighbour criteria. 80% of the data were selected for training and the other 20% were used for testing. The results are shown in Tables 2 and 3.
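A sketch of this second part of the experiment, assuming scikit-learn is available and interpreting the minimum-distance criterion as a nearest-centroid classifier and the nearest-neighbour criterion as 1-NN; the feature matrix and labels below are random placeholders rather than the paper's Table 1 data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

# Placeholder KPCA (or PCA) projections of the 300 fault samples and their classes.
features = np.random.randn(300, 8)
labels = np.random.randint(0, 3, size=300)

# 80% of the data for training, 20% for testing, as in the experiment.
X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.2, random_state=0)

for name, clf in [("minimum distance", NearestCentroid()),
                  ("nearest neighbour", KNeighborsClassifier(n_neighbors=1))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```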
From Tables 2 and 3 it can be concluded that KPCA takes less time and has relatively higher recognition accuracy than PCA.
4 Conclusions
A kernel principal component analysis fault-feature extraction method was described. The problem is first transformed from a nonlinear space into a linear higher-dimensional space, and the higher-dimensional feature space is then operated on by taking the inner product with a kernel function. This cleverly avoids the complex computation and overcomes the difficulties of high dimensionality and local minimization. As can be seen from the experimental data, compared to traditional PCA the KPCA analysis greatly improves feature extraction and the efficiency of recognizing fault states.
References
[1] Ribeiro R L. Fault detection of open-switch damage in
voltage-fed PWM motor drive systems. IEEE Trans
Power Electron, 2003, 18(2): 587–593.
[2] Sottile J. An overview of fault monitoring and diagnosis
in mining equipment. IEEE Trans Ind Appl, 1994, 30(5):
1326–1332.
[3] Peng Z K, Chu F L. Application of wavelet transform in
machine condition monitoring and fault diagnostics: a
review with bibliography. Mechanical Systems and Signal
Processing, 2003(17): 199–221.
[4] Roth V, Steinhage V. Nonlinear discriminant analysis
using kernel function. In: Advances in Neural Information
Proceeding Systems. MA: MIT Press, 2000: 568–
574.
[5] Twining C, Taylor C. The use of kernel principal component
analysis to model data distributions. Pattern
Recognition, 2003, 36(1): 217–227.
[6] Muller K R, Mika S, Ratsch G, et al. An introduction to
kernel-based learning algorithms. IEEE Trans on Neural
Network, 2001, 12(2): 181.
[7] Xiao J H, Fan K Q, Wu J P. A study on SVM for fault
diagnosis. Journal of Vibration, Measurement & Diagnosis,
2001, 21(4): 258–262.
[8] Zhao L J, Wang G, Li Y. Study of a nonlinear PCA fault
detection and diagnosis method. Information and Control,
2001, 30(4): 359–364.
[9] Xiao J H, Wu J P. Theory and application study of feature
extraction based on kernel. Computer Engineering,
2002, 28(10): 36–38.