畢 業(yè) 設(shè) 計(jì)(論 文)外 文 參 考 資 料 及 譯 文
譯文題目: Vehicle Navigation Using Global Views(使用全局視圖的汽車導(dǎo)航)
學(xué)生姓名:
專 業(yè):
所在學(xué)院:
指導(dǎo)教師:
職 稱:
Vehicle Navigation Using Global Views
1 Introduction
Driver inattention is a major contributor to highway crashes. The National Highway Traffic Safety Administration estimates that at least 25% of police-reported crashes involve some form of driver inattention. Driving is a process that requires a driver to distribute his/her attention among different sub-tasks. First of all, a driver needs to pay attention to issues directly related to safety, including the surrounding traffic, dashboard displays, and other information arriving from the road such as traffic lights and road signs. In addition, the driver may choose to talk to a passenger, listen to the radio, or talk on a cell phone. Situation awareness therefore plays an important role in driving safety. In this research, we are developing technologies that provide a driver with information about the dynamic surroundings of the vehicle while he/she is driving, in order to enhance his/her situation awareness.
Situation awareness is defined as the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. Sensing and representing information is key to situation awareness when driving a vehicle. A lot of research has been directed towards improving in-vehicle information presentation. Green et al. surveyed early studies on human factor tests of navigation displays. They described objectives, principles, and guidelines for the design of in-vehicle devices. Dale et al. investigated the problem of generating natural route descriptions for navigational assistance. Lee et al. developed a situationally appropriate map system for drivers.
Navigation user interfaces have changed dramatically over the last few years due to the availability of electronic maps and the Global Positioning System (GPS). The displays in current GPS navigation systems show the location of a vehicle on a graphical map in a way that is similar to looking straight down at a paper map. Recently, several companies, such as Microsoft and Google, have started providing global view maps, such as aerial imagery maps, satellite imagery maps, and bird's eye view maps. For example, in the bird's eye view mode, Microsoft's Windows Live Local consists of high resolution aerial imagery taken from four directions at an angle rather than straight down. Besides the GPS, a vehicle can also obtain information about the driving environment from other sensors, such as video cameras mounted at various positions, thermal infrared imagers, RADAR, LIDAR, and ultrasonic sensors. Among these sensors, video cameras are attractive from a packaging and cost perspective. Recent advances in computer vision and image processing technologies have made it possible to apply video-based sensors along with the GPS in driving assistance applications.
Here, we propose a novel method to enhance situation awareness by dynamically providing a global view of the surroundings for drivers. The surroundings of a vehicle are captured by an omnidirectional vision system mounted on top of the vehicle. In order to obtain high-quality images of the surroundings, we use an omnidirectional vision system consisting of multiple cameras, rather than the catadioptric camera used by most existing systems for intelligent vehicles. The video stream from the camera system is processed to detect nearby vehicles and obstacles. The positions of these detected objects are overlaid on a global view map around the vehicle. We derive the mapping between the omnidirectional vision system and the global view map. This map can be projected onto a Head-Up Display (HUD) on the windshield to provide a realistic perspective view of the driving environment. By looking at the display, a driver can have a global picture of the situation and is more likely to produce a good driving strategy.
The rest of this chapter is organized as follows: Sect. 2 describes the problem and the proposed approach. Section 3 discusses the imaging model of our camera system. Section 4 presents the panoramic Inverse Perspective Mapping (pIPM). Section 5 shows how to implement the pIPM. Section 6 introduces the elimination of the wide-angle lens' radial error. In Sect. 7, we illustrate the proposed method with an example that maps vehicles detected from the video stream captured by an omnidirectional vision system onto the Google Earth map.
2 The Problem and Proposed Approach
The field of view, the part of the observable world that is seen at any given moment, plays an important role in driving safety. While a human has an almost 180-degree forward-facing field of view, his/her binocular vision, which is important for depth perception, covers only 140 degrees of it. In a driving situation, it is desirable to have a complete 360-degree field of view. In order to expand a driver's field of view, automobile manufacturers have equipped vehicles with a rear-view mirror and side mirrors. More recently, rear-view video cameras have been added to many new model cars to extend the rear-view mirror by showing the road directly behind the car. These camera systems are usually mounted on the bumper or lower parts of the car, allowing better rear visibility. However, looking at mirrors moves a driver's attention away from the road.
Adding sensors and devices in a vehicle can potentially lead to more distractions. Inattention is one of the leading causes of car accidents, estimated to account for 25% of all road traffic accidents. Our goal, therefore, is to increase a driver's field of view without adding distraction sources. In this approach, we propose to capture the surroundings of a vehicle with an omnidirectional vision system mounted on top of the vehicle and to display the dynamic global view on the windshield using an HUD. In this way, a driver can obtain a global view of the surroundings without shifting his/her attention away from the view in front of the vehicle.
Omnidirectional vision systems have previously been used in intelligent vehicle applications such as vehicle tracking, indoor parking lot monitoring, and driver monitoring. These applications used different omnidirectional sensors, such as wide Field-Of-View (FOV) dioptric cameras, catadioptric cameras, Pan-Tilt-Zoom (PTZ) cameras, and polydioptric cameras. Both wide FOV dioptric cameras and catadioptric cameras have limitations. First, their images are heavily distorted, and much time must be spent correcting the distortion. Second, they cannot provide high resolution images of the surroundings. PTZ cameras are often used in environment surveillance by moving the cameras. Although PTZ cameras can provide high resolution images, the mechanical motion of the cameras causes slow system response. Instead of using these cameras, we will use an omnidirectional vision system consisting of multiple cameras to capture a full view of the surroundings of a vehicle simultaneously, with resolution up to 1600 × 320.
However, the panoramic video stream from the omnidirectional camera cannot be easily understood by a driver, so we map the driving situation onto a global view map. That is, we automatically extract objects (vehicles, pedestrians, etc.) from the video stream and mark their positions on the global view map. We use a hypothesis-validation structure to detect the vehicles surrounding a host vehicle. Without loss of generality, in this approach we have utilized Google Earth, which provides high-quality, high-resolution aerial and satellite images of highways, streets, and more. In addition, the data import feature of Google Earth makes it possible to sense and represent the dynamic information surrounding a host vehicle and to import our custom geographic data into the Google Earth application.
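Google Earth ingests custom geographic data as KML, so detected objects can be pushed onto the map by emitting a small KML file. The sketch below is a minimal illustration of that import path; the helper names and the sample coordinates are ours, not from the chapter.

```python
def vehicle_placemark(name, lon, lat):
    """Minimal KML Placemark for one detected vehicle.

    KML is the data format Google Earth imports; the tag layout below
    is standard KML 2.2. Coordinates are longitude,latitude,altitude.
    """
    return (
        '<Placemark>'
        f'<name>{name}</name>'
        '<Point>'
        f'<coordinates>{lon},{lat},0</coordinates>'
        '</Point>'
        '</Placemark>'
    )

def kml_document(placemarks):
    """Wrap placemarks in a KML document that Google Earth can open."""
    body = ''.join(placemarks)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f'<Document>{body}</Document></kml>')

# One detected vehicle near an illustrative longitude/latitude:
doc = kml_document([vehicle_placemark("vehicle-1", -79.9427, 40.4433)])
```

Writing `doc` to a `.kml` file and opening it in Google Earth places a marker at the given position; regenerating the file per video frame yields the dynamic overlay described above.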
3 The Panoramic Imaging Model
In this section, we describe the mathematical model of panoramic imaging and provide a context for the mapping between the panoramic image and the global electronic map. There are three different coordinate systems, as shown in Fig. 3.1. XvYvZv is the vehicle coordinate system. XcYcZc is the coordinate system for individual camera c in the camera array, where c = 0, ..., N − 1 and N is the number of cameras. UOV is the image coordinate system of camera c. Let r denote the radius of the camera array, and θ = 2π/N the shift angle. The 3D coordinate of the camera array center is [l, d, h]^T in the vehicle coordinate system. The orientation of camera c is defined by two rotation angles αc and βc, as shown in Fig. 3.1(a) and (b).
Fig. 3.1 Geometric relationship among the vehicle, camera array, and image coordinate system of each individual camera: (a) front view, (b) image plane of an individual camera, (c) aerial view, (d) camera array layout
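As a concrete reading of this geometry, the sketch below places the N cameras evenly on a circle of radius r around the array center [l, d, h] in the vehicle coordinate system. The horizontal-circle assumption and the function name are ours; the per-camera rotation angles αc and βc are omitted here.

```python
import numpy as np

def camera_positions(N, r, center):
    """Position of each camera in the vehicle coordinate system.

    The N cameras sit evenly on a circle of radius r whose centre is
    center = [l, d, h]; camera c is rotated by c * (2*pi/N) from
    camera 0. The circle is assumed to lie in a horizontal plane
    (constant height h), which matches Fig. 3.1(d)'s array layout.
    """
    l, d, h = center
    theta = 2.0 * np.pi / N          # shift angle between adjacent cameras
    positions = np.empty((N, 3))
    for c in range(N):
        positions[c] = [l + r * np.cos(c * theta),
                        d + r * np.sin(c * theta),
                        h]
    return positions
```

For example, `camera_positions(8, 0.1, [0.0, 0.0, 1.5])` returns eight points 0.1 m from the array axis at roof height 1.5 m, one per 45-degree slice.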
4 The Panoramic Inverse Perspective Mapping (pIPM)
4.1 The Mapping Relationship Between Each Image and a Panoramic Image
In this section, we build the mapping relationship between the image of each camera and the panoramic image. Let (uc, vc) represent the image coordinates of the cth camera. Define the cylindrical panoramic image coordinates captured by all N cameras as (θp, vp), where θp ∈ (0, 2π) is the panning angle shown in Fig. 4.1. Accordingly, we obtain the mapping between the cth camera coordinates (uc, vc) and the panoramic image pixel coordinates (θp, vp) from Fig. 4.1.
Fig. 4.1 The illustration of FOV of the panoramic camera in the vehicle coordinate system
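A minimal sketch of the (θp, vp) → (uc, vc) relationship, assuming ideal pinhole cameras with focal length f (in pixels) and principal point (u0, v0), with camera c's optical axis pointing at panning angle (c + 0.5)·2π/N. These imaging parameters and the slice layout are illustrative assumptions, not values from the chapter.

```python
import math

def pano_to_camera(theta_p, v_p, N, f, u0, v0):
    """Map a cylindrical panorama coordinate (theta_p, v_p) to a pixel
    of one of the N cameras.

    Assumed model: each pinhole camera has focal length f (pixels) and
    principal point (u0, v0); camera c's optical axis points at panning
    angle (c + 0.5) * 2*pi/N, so it covers the angular slice
    [c*2*pi/N, (c+1)*2*pi/N).
    """
    theta = 2.0 * math.pi / N                 # angular slice per camera
    c = int(theta_p // theta) % N             # camera that sees theta_p
    phi = theta_p - (c + 0.5) * theta         # angle off that camera's axis
    u_c = u0 + f * math.tan(phi)              # cylinder column -> image column
    v_c = v0 + (v_p - v0) / math.cos(phi)     # cylinder row -> image row
    return c, u_c, v_c
```

On a camera's optical axis (φ = 0) the mapping is the identity, and the tan/sec factors grow toward the slice boundaries, which is the usual cylindrical-to-perspective relationship.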
5 The Implementation of the pIPM
5.1 The Field of View of N Cameras in the Vehicle Coordinate System
The first step in the implementation of the pIPM is to determine the Field of View (FOV), xv ∈ [−Hg/2, Hg/2], yv ∈ [−Wg/2, Wg/2], shown in Fig. 5.1.
5.2 Calculation of Each Interest Point’s View Angle in the Vehicle Coordinate System
For each point in the vehicle coordinate system, we calculate its view angle and determine the corresponding mapping camera. In Fig. 5.1, XvOYv is the vehicle coordinate system and θg is the view angle; we calculate θg from xv and yv.
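The per-point computation described above can be sketched as follows; taking the view angle as atan2 of the ground coordinates and assigning each point to the camera whose 2π/N slice contains it are assumptions of this sketch.

```python
import math

def mapping_camera(x_v, y_v, N):
    """View angle of a ground point and the camera that images it.

    For a point (x_v, y_v) in the vehicle coordinate system, the view
    angle theta_g is measured around the camera array; the point is
    then assigned to the camera whose 2*pi/N slice contains theta_g.
    Slice c is assumed to span [c*2*pi/N, (c+1)*2*pi/N).
    """
    theta_g = math.atan2(y_v, x_v) % (2.0 * math.pi)  # view angle in [0, 2*pi)
    c = int(theta_g // (2.0 * math.pi / N)) % N       # responsible camera
    return theta_g, c
```

Iterating this over a grid covering xv ∈ [−Hg/2, Hg/2], yv ∈ [−Wg/2, Wg/2] yields, for every ground point in the FOV, the camera whose image must be inverse-perspective-mapped there.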
6 The Elimination of Wide-Angle Lens’ Radial Error
Due to the limit on the number of cameras, we often use wide-angle lenses to increase the angular field of view. However, a wide-angle lens introduces radial distortion.
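The chapter does not reproduce its distortion model here; a common choice is the polynomial radial model x_d = x_u(1 + k1·r² + k2·r⁴), which can be inverted by fixed-point iteration as sketched below. The model choice and iteration count are assumptions; the coefficients k1, k2 would come from lens calibration.

```python
def undistort_point(x_d, y_d, k1, k2=0.0, iters=10):
    """Remove radial lens distortion from a normalized image point.

    Uses the polynomial model x_d = x_u * (1 + k1*r^2 + k2*r^4), where
    r^2 = x_u^2 + y_u^2, and inverts it by fixed-point iteration,
    starting from the distorted point itself. For the mild distortion
    typical of calibrated lenses this converges in a few iterations.
    """
    x_u, y_u = x_d, y_d
    for _ in range(iters):
        r2 = x_u * x_u + y_u * y_u
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x_u, y_u = x_d / scale, y_d / scale
    return x_u, y_u
```

Applying this to every pixel coordinate before the pIPM removes the radial error, so straight lane markings map back to straight lines on the ground plane.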
7 Combining Panoramic Images with Electronic Maps
Electronic map services such as Microsoft Virtual Earth and Google Earth can help reduce a driver's load by providing high quality electronic routes and turn-by-turn directions. For example, Fig. 7.1 shows the route, generated by Google Earth, around Carnegie Mellon University.
We can further reduce a driver's cognitive load by combining the images captured by the omnidirectional camera with the electronic map in real time. In particular, we perform image analysis to detect surrounding objects such as vehicles and pedestrians, and display the detected objects on the electronic map.
In this approach, we mainly focus on vehicle detection. Our vehicle detection approach includes two basic phases. In the hypothesis generation phase, we first determine the Regions of Interest (ROI) in an image according to lane vanishing points. From the analysis of horizontal and vertical edges in the image, a vehicle hypothesis list is generated for each ROI. In the hypothesis validation phase, we have developed a vehicle validation system using a Support Vector Machine (SVM) and Gabor features.
Fig. 7.1 Google global navigation map
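As a toy illustration of the hypothesis generation phase, the sketch below forms per-column horizontal- and vertical-edge profiles of a grayscale ROI and groups strong columns into candidate vehicle intervals. The combined-energy threshold heuristic is ours; the chapter's actual detector additionally uses lane vanishing points for ROI selection and SVM/Gabor validation.

```python
import numpy as np

def hypothesize_vehicles(roi, thresh=0.5):
    """Edge-based vehicle hypothesis generation (toy sketch).

    Vehicles tend to produce strong horizontal edges (bumper, roof)
    and vertical edges (body sides), so we compute per-column edge
    energies and group contiguous strong columns into candidate
    intervals (x_start, x_end) within the ROI.
    """
    g = roi.astype(float)
    # per-column mean of |d/dy|: horizontal-edge energy
    h_edges = np.abs(np.diff(g, axis=0)).mean(axis=0)
    # per-column mean of |d/dx|: vertical-edge energy (padded to width)
    v_edges = np.zeros(g.shape[1])
    v_edges[:-1] = np.abs(np.diff(g, axis=1)).mean(axis=0)
    strong = (h_edges + v_edges) > thresh
    # group contiguous strong columns into hypothesis intervals
    hyps, start = [], None
    for x, s in enumerate(strong):
        if s and start is None:
            start = x
        elif not s and start is not None:
            hyps.append((start, x))
            start = None
    if start is not None:
        hyps.append((start, len(strong)))
    return hyps
```

Each returned interval would then be cropped, described by Gabor features, and passed to the SVM validator in the second phase.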
使用全局視圖的汽車導(dǎo)航
1介紹
駕駛員注意力不集中是公路交通事故的一個(gè)主要原因。美國國家公路交通安全管理局估計(jì),警察報(bào)告的事故中至少有25%涉及某種形式的駕駛員注意力不集中。駕駛是一個(gè)需要駕駛員把他/她的注意力分配到不同子任務(wù)上的過程。首先,駕駛員需要注意一些直接關(guān)係到安全的問題,包括周圍的交通、儀表板顯示,以及道路上湧入的其他信息,如交通信號燈和道路標(biāo)志。此外,司機(jī)可能選擇跟一位乘客談話,聽收音機(jī),接打電話。因此,情況意識在行車安全中起著重要的作用。在這項(xiàng)研究中,我們正在開發(fā)相關(guān)技術(shù),在駕駛員行車時(shí)為其提供車輛周圍動(dòng)態(tài)環(huán)境的信息,從而提高他/她的情況意識。
情況意識是指在一定的時(shí)間和空間範(fàn)圍內(nèi)對環(huán)境要素的感知、對其含義的理解,以及對其近期狀態(tài)的預(yù)測。在駕駛汽車時(shí),感知和表示信息是情況意識的關(guān)鍵。許多研究已致力於改進(jìn)車內(nèi)信息的呈現(xiàn)方式。格林等人綜述了關(guān)於導(dǎo)航顯示人因測試的早期研究。他們描述了車載設(shè)備設(shè)計(jì)的目標(biāo)、原則和指導(dǎo)方針。戴爾等人研究了為導(dǎo)航輔助生成自然路線描述的問題。李等人開發(fā)了適合駕駛情境的地圖系統(tǒng)。
由於電子地圖和全球定位系統(tǒng)(GPS)的普及,導(dǎo)航用戶界面在過去幾年中發(fā)生了巨大變化。當(dāng)前GPS導(dǎo)航系統(tǒng)的顯示器以類似於俯視紙質(zhì)地圖的方式,在圖形化地圖上顯示汽車的位置。最近,有幾家公司,如微軟和谷歌,已經(jīng)開始提供全局視圖地圖,如航空影像地圖、衛(wèi)星影像地圖和鳥瞰視圖地圖。例如,在鳥瞰視圖模式下,微軟的Windows Live Local由從四個(gè)方向以一定角度(而不是垂直向下)拍攝的高分辨率航空影像組成。除了GPS,汽車還可以從其他傳感器獲得行駛環(huán)境的信息,如安裝在不同位置的攝像機(jī)、紅外熱像儀、雷達(dá)、激光雷達(dá)和超聲波傳感器。在這些傳感器中,攝像機(jī)從封裝和成本的角度看是有吸引力的。計(jì)算機(jī)視覺和圖像處理技術(shù)的最新進(jìn)展使得在駕駛輔助應(yīng)用中將視頻傳感器與GPS結(jié)合使用成為可能。
在這裡,我們提出了一種新穎的方法,通過為駕駛員動(dòng)態(tài)提供周圍情況的全局視圖來提高情況意識。車輛的周圍情況將由安裝在車頂?shù)娜轿灰曈X系統(tǒng)捕獲。為了獲得高質(zhì)量的周邊圖像,我們使用由多個(gè)相機(jī)組成的全方位視覺系統(tǒng),而不是大多數(shù)現(xiàn)有智能汽車系統(tǒng)所使用的折反射式相機(jī)。來自相機(jī)的視頻流經(jīng)過處理以檢測附近的車輛和障礙物。這些被檢測物體的位置將疊加在車輛的全局視圖地圖上。我們推導(dǎo)了全方位視覺系統(tǒng)和全局視圖地圖之間的映射。這張地圖可以投影到擋風(fēng)玻璃上的平視顯示器(HUD)上,提供非常逼真的駕駛環(huán)境透視圖。通過查看顯示器,駕駛員可以掌握全局情況,並有可能制定出良好的駕駛策略。
本章的其餘部分組織如下:第2節(jié)描述了問題和所提出的方法;第3節(jié)討論了我們相機(jī)系統(tǒng)的成像模型;第4節(jié)給出了全景逆透視映射(pIPM);第5節(jié)說明如何實(shí)現(xiàn)pIPM;第6節(jié)介紹了廣角鏡頭徑向誤差的消除;在第7節(jié)中,我們通過一個(gè)例子來闡明所提出的方法:將從全方位視覺系統(tǒng)捕獲的視頻流中檢測到的車輛映射到谷歌地球地圖上。
2問題和提出的解決措施
視界是指在任何特定時(shí)刻所能觀察到的世界的那一部分,它在駕駛安全中扮演著重要的角色。雖然人擁有近180度的前向視野,但對深度知覺十分重要的雙眼視覺只覆蓋其中140度。在駕駛情況下,擁有完整的360度視野是很理想的。為了擴(kuò)大駕駛員的視野,汽車製造商已經(jīng)在車輛上安裝了後視鏡和側(cè)視鏡。最近,後視攝像機(jī)已被添加到許多新型汽車上,通過直接顯示車後的路況增強(qiáng)後視鏡的能力。這些相機(jī)系統(tǒng)通常安裝在汽車保險(xiǎn)槓或較低的部分,以獲得更好的後方能見度。但是,看後視鏡會使駕駛員的注意力離開道路。
在汽車中添加傳感器和設(shè)備可能會導(dǎo)致更多的干擾。注意力不集中是造成車禍的主要原因之一,約佔(zhàn)所有交通事故的25%。因此,我們的目標(biāo)是在不增加干擾源的情況下擴(kuò)大駕駛員的視野。在這種方法中,我們提出用安裝在汽車頂部的全方位視覺系統(tǒng)捕捉車輛的周圍情況,並使用HUD將動(dòng)態(tài)全局視圖顯示在擋風(fēng)玻璃上。通過這種方式,駕駛員無需將注意力從車前方移開,就能獲得周圍環(huán)境的全局視圖。
全方位視覺系統(tǒng)之前已被用於智能汽車應(yīng)用,如車輛跟蹤、室內(nèi)停車場監(jiān)控和駕駛員監(jiān)控等。這些應(yīng)用使用了不同的全方位傳感器,如寬視場(FOV)折射式相機(jī)、折反射式相機(jī)、平移-俯仰-變焦(PTZ)攝像機(jī)和多折射式相機(jī)。寬視場折射式相機(jī)和折反射式相機(jī)都有一定的局限性。首先,它們的圖像失真嚴(yán)重,我們不得不花很多時(shí)間來糾正失真。其次,它們不能提供周邊環(huán)境的高分辨率圖像。PTZ攝像機(jī)通常通過轉(zhuǎn)動(dòng)攝像機(jī)用於環(huán)境監(jiān)控。雖然PTZ攝像機(jī)能夠提供高分辨率圖像,但攝像機(jī)的機(jī)械運(yùn)動(dòng)使系統(tǒng)響應(yīng)變慢。我們不使用這些相機(jī),而是使用由多個(gè)攝像頭組成的全方位視覺系統(tǒng),以高達(dá)1600×320的分辨率同時(shí)捕捉車輛周圍的全景。
但是,來自全方位相機(jī)的全景視頻流不容易被駕駛員直接理解,所以我們將駕駛情況映射到全局視圖地圖上。也就是說,我們自動(dòng)從視頻流中提取物體(汽車、行人等),並在全局視圖地圖上標(biāo)記它們的位置。我們使用假設(shè)-驗(yàn)證結(jié)構(gòu)來檢測主車附近的車輛。不失一般性,在這種方法中,我們利用了谷歌地球,它可以為我們提供高質(zhì)量、高分辨率的航空和衛(wèi)星圖像,包括公路、街道等。此外,谷歌地球的數(shù)據(jù)導(dǎo)入功能使得感知和表示主車周圍的動(dòng)態(tài)信息並將我們的自定義地理數(shù)據(jù)導(dǎo)入谷歌地球應(yīng)用成為可能。
3全景成像模型
在這一節(jié)中,我們描述了全景成像的數(shù)學(xué)模型,並為全景圖像和全球電子地圖之間的映射提供背景。圖3.1所示為三種不同的坐標(biāo)系。XvYvZv是車輛坐標(biāo)系。XcYcZc是相機(jī)陣列中單個(gè)相機(jī)c的坐標(biāo)系,其中c = 0, …, N − 1,N是相機(jī)的數(shù)量。UOV是相機(jī)c的圖像坐標(biāo)系。設(shè)r表示相機(jī)陣列的半徑,θ = 2π/N為偏移角。在車輛坐標(biāo)系中,相機(jī)陣列中心的三維坐標(biāo)是[l, d, h]^T。相機(jī)c的方向由兩個(gè)旋轉(zhuǎn)角αc和βc定義,如圖3.1(a)和(b)所示。
圖3.1單個(gè)攝像機(jī)下汽車坐標(biāo)系、相機(jī)陣列坐標(biāo)系和圖像坐標(biāo)系之間的幾何關(guān)系:
(a) 前視圖 (b) 單個(gè)攝像機(jī)的圖像平面 (c)空中視圖 (d) 相機(jī)陣列布局
4全景逆透視映射(pIPM)
4.1每個(gè)圖像和全景圖像之間的映射關(guān)系
在這一節(jié)中,我們建立每個(gè)相機(jī)的圖像和全景圖像之間的映射關(guān)係。設(shè)(uc, vc)表示第c個(gè)相機(jī)的圖像坐標(biāo)。定義由全部N個(gè)相機(jī)捕獲的柱面全景圖像坐標(biāo)為(θp, vp),其中搖攝角θp ∈ (0, 2π)如圖4.1所示。由此,我們根據(jù)圖4.1得到第c個(gè)相機(jī)的坐標(biāo)(uc, vc)和全景圖像像素坐標(biāo)(θp, vp)之間的映射關(guān)係。
圖4.1 從一個(gè)單一的圖像映射到全景圖像
5 pIPM的實(shí)施
5.1汽車坐標(biāo)系中N個(gè)相機(jī)的視野區(qū)域
實(shí)施pIPM的第一步就是確定視野區(qū)域(FOV),xv ∈ [−Hg/2, Hg/2],yv ∈ [−Wg/2, Wg/2],如圖5.1所示。
5.2計(jì)算在汽車坐標(biāo)系中每一個(gè)作用點(diǎn)的視角
對於車輛坐標(biāo)系中的每一點(diǎn),我們計(jì)算其視角並確定相應(yīng)的映射相機(jī)。在圖5.1中,XvOYv是車輛坐標(biāo)系,θg是視角,我們用xv和yv計(jì)算θg。
圖5.1汽車坐標(biāo)系中對全景相機(jī)坐標(biāo)的說明
6消除廣角鏡頭的徑向誤差
由於相機(jī)數(shù)量的限制,我們經(jīng)常使用廣角鏡頭來增大視場角。但是,廣角鏡頭會引起徑向誤差。
7將全景圖與電子地圖相結(jié)合
電子地圖服務(wù),如微軟的虛擬地球和谷歌地球,可以通過提供高質(zhì)量的電子路線和逐向?qū)Ш街敢齺頊p輕駕駛員的負(fù)擔(dān)。例如,圖7.1顯示了由谷歌地球生成的卡耐基梅隆大學(xué)周邊的路線。
我們可以通過實(shí)時(shí)結(jié)合全方位攝像機(jī)捕獲的圖像和電子地圖來進(jìn)一步降低駕駛者的認(rèn)知負(fù)荷。特別是,我們執(zhí)行圖像分析來檢測周圍的物體如汽車和行人,并在電子地圖上顯示檢測到的對象。
在這種方法中,我們主要集中在車輛檢測上。我們的車輛檢測方法包括兩個(gè)基本階段。在假設(shè)生成階段,我們首先根據(jù)車道消失點(diǎn)在圖像中確定感興趣區(qū)域(ROI)。通過分析圖像中的水平和垂直邊緣,為每個(gè)ROI生成車輛假設(shè)列表。在假設(shè)驗(yàn)證階段,我們利用支持向量機(jī)(SVM)和Gabor特徵開發(fā)了車輛驗(yàn)證系統(tǒng)。
圖 7.1 谷歌全球?qū)Ш降貓D