Digital Image Processing and Edge Detection (Graduation Thesis Foreign-Language Translation)
Translation of Foreign-Language Material

Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
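To make these definitions concrete, the minimal sketch below (our illustration, not part of the source text; the array values are invented) stores a small digital image as a finite array of discrete gray levels and reads the intensity at one pair of coordinates:

```python
import numpy as np

# A digital image: a finite grid of discrete gray levels.
# Here f is an 8-bit grayscale image of 4 x 4 pixels (values 0-255).
f = np.array([[ 12,  30,  30,  12],
              [ 30, 200, 200,  30],
              [ 30, 200, 200,  30],
              [ 12,  30,  30,  12]], dtype=np.uint8)

x, y = 1, 2                # a pair of spatial (plane) coordinates
print(f[x, y])             # the intensity (gray level) at (x, y) -> 200
print(f.shape, f.dtype)    # a finite number of elements with discrete values
```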
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in the earliest stages of its development, with progress having been much slower than originally anticipated.
The area of image analysis (also called image understanding) is in between image processing and computer vision. There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects.

As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense."
As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value. The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
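As one concrete illustration (a minimal sketch of our own, not a technique prescribed by the text), a linear contrast stretch maps the occupied gray-level range of an image onto the full 0-255 scale:

```python
import numpy as np

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    """Linearly map the image's min..max gray levels onto 0..255.

    A deliberately simple enhancement: it makes obscured detail easier
    to see, but whether the result "looks better" remains subjective.
    """
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float64) - lo) * (255.0 / (hi - lo))
    return out.astype(np.uint8)
```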
Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.
So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge Detection
Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges. Instead they are normally affected by one or several of the following effects:

1. focal blur caused by a finite depth-of-field and finite point spread function;
2. penumbral blur caused by shadows created by light sources of non-zero radius;
3. shading at a smooth object edge;
4. local specularities or interreflections in the vicinity of object edges.

A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.
To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.

5 7 6 4 152 148 149

If the intensity difference were smaller between the 4th and the 5th pixels, and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.
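The point is easy to verify numerically. The short sketch below (illustrative only; the threshold value 20 is an arbitrary choice of ours) computes the differences between neighbouring pixels of this signal:

```python
import numpy as np

signal = np.array([5, 7, 6, 4, 152, 148, 149])

# First-order differences between neighbouring pixels.
diffs = np.diff(signal)
print(diffs)           # [  2  -1  -2 148  -4   1]

# The jump of 148 between the 4th and 5th pixels dwarfs every other
# difference, so a wide range of thresholds would mark an edge there.
edge_positions = np.flatnonzero(np.abs(diffs) > 20)  # threshold 20 is arbitrary
print(edge_positions)  # [3] -> between indices 3 and 4 (the 4th and 5th pixels)
```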
There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as described in the section on differential edge detection below.

As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction). The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.
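As an illustration of this pipeline, the sketch below smooths an image with a Gaussian filter and then estimates the gradient with Sobel kernels. The Sobel kernels and the sigma value are our choices for the example; the text does not prescribe a particular derivative filter:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Sobel kernels: one common choice for estimating the x- and y-derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def edge_strength(img: np.ndarray, sigma: float = 1.4):
    """Gaussian smoothing followed by a first-order edge-strength measure."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma)
    gx = convolve(smoothed, SOBEL_X)   # gradient estimate in the x-direction
    gy = convolve(smoothed, SOBEL_Y)   # gradient estimate in the y-direction
    magnitude = np.hypot(gx, gy)       # gradient magnitude (edge strength)
    direction = np.arctan2(gy, gx)     # gradient direction (local orientation)
    return magnitude, direction
```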
Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise and to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.

If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick, and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition, and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.
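A sketch of that procedure follows, assuming `mag` and `direction` come from a gradient computation such as the `edge_strength` sketch above (a name of our own); the angle-to-neighbour mapping implements the 45-degree quantization just described:

```python
import numpy as np

def non_maximum_suppression(mag: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Keep a pixel only if its gradient magnitude is a local maximum
    along the (45-degree-quantized) gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    # Round the gradient direction to multiples of 45 degrees (mod 180).
    angle = np.rad2deg(direction) % 180.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:        # ~0 deg: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                    # ~45 deg: compare diagonals
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                   # ~90 deg: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                             # ~135 deg: other diagonal
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]         # local maximum: keep
    return out
```

The per-pixel loop is written for clarity rather than speed; the suppressed output is zero everywhere except on one-pixel-thin ridges of the gradient magnitude.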
A commonly used approach to handle the problem of choosing appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to lie in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without every noisy pixel in the image being marked as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.
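The sketch below implements this idea with stack-based region growing over the 8-connected neighbourhood rather than literal pixel-by-pixel path tracing; the two thresholds `low` and `high` are free parameters that must still be chosen empirically:

```python
import numpy as np

def hysteresis_threshold(mag: np.ndarray, low: float, high: float) -> np.ndarray:
    """Two-threshold edge tracking: start edges at pixels above `high`,
    then grow them through 8-connected neighbours above `low`."""
    strong = mag >= high                       # definite edge starts
    weak = mag >= low                          # pixels an edge may pass through
    edges = np.zeros(mag.shape, dtype=bool)
    stack = list(zip(*np.nonzero(strong)))     # seed points above the upper threshold
    while stack:
        i, j = stack.pop()
        if edges[i, j]:
            continue
        edges[i, j] = True
        for di in (-1, 0, 1):                  # visit the 8 neighbours
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < mag.shape[0] and 0 <= nj < mag.shape[1]
                        and weak[ni, nj] and not edges[ni, nj]):
                    stack.append((ni, nj))
    return edges
```

Typical usage would chain the sketches above: gradient computation, non-maximum suppression, then hysteresis on the thinned magnitude image.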
Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.

We can conclude that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus, we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative. The definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient. Second-order derivatives are obtained using the Laplacian.
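As a closing illustration of the second-derivative definition, here is a minimal sketch of our own that marks sign changes of the Laplacian; the 4-neighbour Laplacian kernel and the Gaussian pre-smoothing sigma are assumptions of the example, not prescriptions of the text:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# A standard discrete 4-neighbour approximation of the Laplacian.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def zero_crossings(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Mark pixels where the (smoothed) second derivative changes sign."""
    lap = convolve(gaussian_filter(img.astype(np.float64), sigma), LAPLACIAN)
    sign = lap > 0
    zc = np.zeros(img.shape, dtype=bool)
    # A zero-crossing: the Laplacian changes sign between a pixel and
    # its right or lower neighbour.
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return zc
```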