Design of a ROS-Based Robot Path Navigation System (with 4 CAD Drawings)
ROS-Based Autonomous Indoor Navigation Simulation Using a SLAM Algorithm
Abstract—In this paper we examine the ability of a SLAM-based mobile robot to map and navigate an indoor environment. The system is built on the Robot Operating System (ROS) framework. The robot model is created with the Gazebo package and simulated in Rviz. Mapping is performed with the open-source GMapping algorithm. The aim of this paper is to evaluate the mapping, localization, and navigation of a mobile robot model in an unknown environment.
Keywords—Gazebo; ROS; Rviz; GMapping; laser scan; navigation; SLAM; robot model; packages.
Introduction
In the modern world, the demand for machines keeps growing because robots are less likely to make mistakes. Robotics research and applications range from healthcare to artificial intelligence. Many robots have also entered daily life and greatly ease it, but how do they work? Are they really like humans? Can they truly perceive their surroundings? In fact they cannot: unless a robot is given some sensing capability, it cannot understand its environment. Different sensors, such as LIDAR, RGB-D cameras, inertial measurement units (IMUs), and sonar, can provide this sensing capability. Using sensors and a mapping algorithm, a robot can build a map of its surroundings and localize itself within that map, while continuously checking the environment for dynamic changes. Our goal is to build an autonomous navigation platform for indoor applications. In this paper, we evaluate the efficiency of a SLAM (Simultaneous Localization and Mapping) based robot model implemented in ROS (Robot Operating System) by measuring the travel time the robot model takes to reach its destination. The tests are performed in a virtual environment created in Rviz, and travel times are measured while placing different dynamic obstacles for different destinations in the map.
Motivation
Working with robots involves many sensors, and every process must be handled in real time. To use sensors and actuators that need to be updated every 10-50 milliseconds, we need an operating system that can meet this requirement, and the Robot Operating System (ROS) provides the architecture to achieve it. First, ROS is open source, with a large amount of code contributed by strong research institutions that anyone can readily reuse in their own projects. In addition, robotics engineers previously lacked a common platform for collaboration and communication, which delayed the adoption of robotic butlers and other related developments. Over the past decade, robotic innovation has accelerated with the advent of ROS, in which engineers can build robot applications and programs. Robot navigation is a very broad topic on which most researchers in robotics focus. For a mobile robot system to be autonomous, it must analyze data from different sensors and make decisions in order to navigate in an unknown environment. ROS helps us solve various problems related to mobile robot navigation, and these techniques are not restricted to a particular robot but can be reused across different development projects in robotics.
Related Work
In research paper [1], the authors use the GMapping algorithm with ROS for localization and navigation. GMapping builds the map from the laser scan data produced by a LIDAR sensor. The map is continuously monitored with OpenCV face detection and the Corobot platform to recognize people and navigate through the working environment. The authors of research paper [2] describe two cooperative robots based on ROS, mapping, and localization. The robots drive themselves and work in unknown areas; the algorithm used in that project is also SLAM. There, the main task of the robots is to pick up three blocks and arrange them in a predetermined pattern, and the robots were built for this purpose with the support of the ROS platform. In research paper [3], the authors created a simulation of a manipulator and illustrated how to implement robot control in a short time. Using ROS and the Gazebo package, they built a 7-DOF pick-and-place robot model and found a robot controller that takes less time. Research paper [5] compares three SLAM algorithms, Core SLAM, GMapping, and Hector SLAM, through simulation. The best algorithm is then used to test an unmanned ground vehicle (UGV) in different terrains for defense missions. Through simulation experiments they compared the performance of the different algorithms and built a robotic platform that performs localization and mapping. The authors of research paper [6] built a navigation platform using an automated vision and navigation framework; with ROS, the open-source GMapping bundle was used for Simultaneous Localization and Mapping (SLAM). With this setup and Rviz, a TurtleBot 2 was implemented, and replacing the laser range finder with a Kinect sensor reduced the cost. Journal paper [9] deals with indoor navigation based on the sensors found in smartphones, where the smartphone serves as both the measurement platform and the user interface. The authors of journal paper [10] implemented a 6-degree-of-freedom (DOF) pose estimation method and an indoor wayfinding system for the visually impaired. The floor plane is extracted from the 3-D camera's point cloud and added as a landmark node to the 6-DOF SLAM graph to reduce errors; roll, pitch, yaw, X, Y, and Z are the six axes, and the user interface is through sound. Journal paper [11] explains why indoor environments are difficult for an autonomous quadcopter: since the experiments are indoors, GPS cannot be used, so a combination of a laser range finder, an XSens IMU, and a laser mirror is used to build a 3-D map and localize within it, and the quadcopter navigates using a SLAM algorithm. In paper [12], the authors describe a fixed-path algorithm and the characteristics of a wheelchair that uses it, with the help of simulation techniques. The authors of paper [13] describe an autonomous navigation platform built on Arduino and the use of the I2C protocol to interface components such as a digital compass and a rotary encoder to compute distance. In paper [14], the authors created an autonomous mobile robot using the Fuzzy Toolbox in Matlab and used it for path planning; 24 fuzzy rules are executed on the robot. The authors of paper [15] create an object-level map of an indoor space using RFID ultra-high-frequency passive tags and readers, and they report that the method maps a large indoor area in a cost-effective manner.
System
A. ROS
The story of ROS began in the mid-2000s, when Stanford University was building systems to support the Stanford AI Robot and the Personal Robots Program. In 2007, Willow Garage, a company located in Menlo Park, California, joined the development by contributing substantial resources, furthering the development of a flexible, dynamic software system for robotics and drawing additional resources and expertise from the many research efforts involved. The system was developed under the BSD license and gradually attracted more practitioners; over time it has become a widely used platform in the robotics research community. In 2013, core development and maintenance of ROS were transferred to the Open Source Robotics Foundation, which continues to run it today. ROS is now used by thousands of users around the world, from hobby projects to large-scale industrial automation systems.
The Robot Operating System (ROS) is free and open-source software and one of the most popular middleware frameworks for robot programming. ROS comes with a message-passing interface, tools, package management, hardware abstraction, and more, and it provides various libraries, packages, and integration tools for robot applications. Because ROS is a message-passing interface that provides inter-process communication, it is commonly referred to as middleware. ROS offers many facilities that help researchers develop robot applications. In this work, ROS is the main foundation because it publishes messages between different nodes in the form of topics and has a distributed parameter system. ROS also provides cross-platform operability, modularity, and concurrent resource handling. It simplifies the overall operation of a system by ensuring that threads do not read and write shared resources directly but instead only publish and subscribe to messages. ROS also lets us create a virtual environment, generate a robot model, implement the algorithms, and visualize everything in the virtual world rather than implementing the whole system on hardware; the system can therefore be improved accordingly and gives better results when it is finally deployed on hardware. With this basic understanding of the ROS structure established, the autonomous navigation features can be described: the autonomous navigation process in ROS is implemented in the navigation stack, which needs different pieces of information in order to compute a correct path to the desired destination.
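The publish/subscribe mechanism described above can be illustrated with a minimal rospy sketch. This is not taken from the paper; the node and topic names (talker, listener, chatter) are arbitrary examples, and a standard ROS 1 installation is assumed.

```python
#!/usr/bin/env python
# Minimal illustration of the ROS publish/subscribe model described above.
# Node and topic names are arbitrary examples, not from the paper.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))
        rate.sleep()

def listener():
    rospy.init_node('listener')
    # The callback runs whenever a new message arrives on the topic.
    rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()

if __name__ == '__main__':
    talker()  # run listener() as a second node in another terminal
```

Because the two nodes only exchange messages over the topic, neither needs to know how the other is implemented, which is the decoupling the paragraph above refers to.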
B. Gazebo
Gazebo is a robot simulator. It enables the user to create complex environments and provides the opportunity to simulate a robot in the environment that was created. In Gazebo, the user can build a robot model and integrate sensors into it in three-dimensional space. For the environment, the user can create a platform and place obstacles in it. For the robot model, the user can use a URDF file and define the robot's links; by specifying the links, the degree of movement of each part of the robot can be given. The robot model created for this study is a differential-drive robot with two wheels, a laser, and a camera. A sample environment is created in Gazebo for the robot to move around in and map accordingly. In this environment, several objects were placed at random locations in the area to be mapped; these objects are treated as static obstacles.
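As a hedged illustration of how a URDF-described robot ends up inside a running Gazebo world, the sketch below calls the spawn service provided by the gazebo_ros package. The URDF file name, model name, and spawn pose are assumptions for the example; the paper does not give them.

```python
#!/usr/bin/env python
# Illustrative sketch: spawning a URDF robot model into a running Gazebo world
# via the /gazebo/spawn_urdf_model service from the gazebo_ros package.
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose

rospy.init_node('spawn_diff_drive_robot')
rospy.wait_for_service('/gazebo/spawn_urdf_model')
spawn = rospy.ServiceProxy('/gazebo/spawn_urdf_model', SpawnModel)

# The URDF path is an assumption; the paper's actual file name is not given.
with open('diff_drive_robot.urdf') as f:
    urdf_xml = f.read()

pose = Pose()
pose.position.x = 0.0   # spawn at the world origin as an example
pose.position.y = 0.0

spawn(model_name='diff_drive_robot',
      model_xml=urdf_xml,
      robot_namespace='/',
      initial_pose=pose,
      reference_frame='world')
```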
C. SLAM
An autonomous robot should be able to explore its surroundings safely without colliding with people or running into objects. Simultaneous Localization and Mapping (SLAM) enables a robot to accomplish this by learning what the surroundings look like (mapping) and where it is relative to those surroundings (localization). SLAM can be implemented with different types of 1D, 2D, and 3D sensors, such as acoustic sensors, laser range sensors, stereo vision sensors, and RGB-D sensors. ROS can be used to implement different SLAM algorithms such as GMapping, Hector SLAM, KartoSLAM, Core SLAM, and Lago SLAM. The gmapping package in ROS provides tools for creating a 2D map from laser and odometry data. A SLAM algorithm builds a map of an unknown environment while performing localization within that area; once the unknown area has been mapped and the robot knows its position relative to the map, path planning and navigation can be carried out. SLAM is therefore an essential component of autonomous robot navigation. The laser input requires a fixed, horizontally mounted laser range finder. SLAM is also important for avoiding obstacles as the robot travels.
KartoSLAM, Hector SLAM, and GMapping perform better than the other algorithms. From the standpoint of map accuracy these algorithms perform quite similarly, but they are conceptually different: Hector SLAM is EKF based, GMapping is based on RBPF occupancy-grid mapping, and KartoSLAM is based on graph-based mapping. For a robot with limited processing power, GMapping can still perform well. The gmapping package in ROS provides laser-based SLAM (Simultaneous Localization and Mapping) through a ROS node called slam_gmapping.
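While slam_gmapping is running, the map it builds is published as a nav_msgs/OccupancyGrid on the /map topic. The following sketch (not from the paper) subscribes to that topic and reports how much of the grid has been explored, which can be a convenient way to watch mapping progress during a run.

```python
#!/usr/bin/env python
# Illustrative sketch: monitoring the occupancy grid published by slam_gmapping
# on /map and reporting the fraction of cells that are no longer unknown.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    cells = grid.data                      # -1 unknown, 0 free, 100 occupied
    known = sum(1 for c in cells if c >= 0)
    frac = float(known) / max(len(cells), 1)
    rospy.loginfo('map %dx%d, resolution %.3f m, %.1f%% explored',
                  grid.info.width, grid.info.height,
                  grid.info.resolution, 100.0 * frac)

rospy.init_node('map_monitor')
rospy.Subscriber('/map', OccupancyGrid, on_map)
rospy.spin()
```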
A SLAM algorithm can be broken down into the following five important steps (a minimal pose-update sketch follows the list):
1. Data acquisition: measurement data are collected from sensors such as cameras or laser scanners.
2. Feature extraction: distinctive, recognizable keypoints and features are selected from the collected data.
3. Feature association: keypoints and features from previous measurements are associated with the most recent ones.
4. Pose estimation: the robot's new pose is estimated from the relative transformation between associated keypoints and features together with the robot's previous position.
5. Map adjustment: based on the new pose and the corresponding measurements, the map is updated accordingly.
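As a purely illustrative companion to step 4, the toy sketch below propagates a planar pose from differential-drive odometry. Real SLAM back ends fuse such a motion prediction with the feature associations from step 3; this only shows the arithmetic of the prediction, with made-up velocities.

```python
import math

def update_pose(x, y, theta, v, omega, dt):
    """Toy odometry-based pose prediction (step 4): propagate a planar pose
    given linear velocity v, angular velocity omega, and time step dt."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: 0.2 m/s forward while turning at 0.1 rad/s, for 10 steps of 0.1 s.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, v=0.2, omega=0.1, dt=0.1)
print(pose)
```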
D. Rviz
Rviz is a visualization tool in which sensor data can be viewed in a 3D environment; for example, if a Kinect is attached to the robot model in Gazebo, the laser scan values can be visualized in Rviz. From the laser scan data a map can be built and used for autonomous navigation. In Rviz we can access and graphically represent values such as camera images and laser scans, and this information can be used to build point clouds and depth images. In Rviz, coordinates are known as frames. Many displays can be selected for viewing in Rviz, each showing data from a different sensor, and any of these data can be shown by clicking the Add button. The Grid display gives the ground plane or reference, the LaserScan display shows data from the laser scanner and is of type sensor_msgs/LaserScan, the PointCloud display shows the positions given by the program, and the Axes display gives the reference point.
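To make the LaserScan display concrete, the sketch below publishes a synthetic sensor_msgs/LaserScan that Rviz can show once a LaserScan display on the /scan topic is added and the fixed frame matches. The frame name "laser", the topic, and the constant 2 m ranges are assumptions for illustration only.

```python
#!/usr/bin/env python
# Illustrative sketch: publishing a synthetic sensor_msgs/LaserScan so that a
# LaserScan display added in Rviz (topic /scan, frame "laser") shows data.
import math
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('fake_scan_publisher')
pub = rospy.Publisher('/scan', LaserScan, queue_size=10)
rate = rospy.Rate(10)

while not rospy.is_shutdown():
    scan = LaserScan()
    scan.header.stamp = rospy.Time.now()
    scan.header.frame_id = 'laser'          # must exist in the TF tree / fixed frame
    scan.angle_min = -math.pi / 2
    scan.angle_max = math.pi / 2
    scan.angle_increment = math.pi / 180.0   # 1-degree resolution
    scan.range_min = 0.1
    scan.range_max = 10.0
    n = int((scan.angle_max - scan.angle_min) / scan.angle_increment)
    scan.ranges = [2.0] * n                  # pretend everything is 2 m away
    pub.publish(scan)
    rate.sleep()
```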
Implementation
The environment in which the robot model performs navigation is created in Gazebo, and the robot model that was created is imported into it. The robot model consists of two wheels, two caster wheels for ease of movement, and a camera attached to the model. A Hokuyo laser sensor is then added to the robot, and the corresponding plugins are included in the Gazebo files. The Hokuyo laser provides the laser data used to create the map. Using the gmapping package and adding the necessary parameters, a map is created in Rviz. Initially, the robot model is driven into every corner of the environment until a complete map has been created, using the "teleop_key" package, in which the robot is controlled with the keyboard. The final map generated in Rviz turns out to be very similar to the environment created in Gazebo. For visualization in Rviz, the necessary topics were selected and added: the Hokuyo laser sensor used in this robot model publishes its laser data on the "/scan" topic, which is selected as the LaserScan topic in Rviz, and in a similar way the "/map" topic is added for the map. The generated map is saved with the map_server package available in ROS. Once the map has been generated and saved, the robot is ready for the navigation stack packages to be incorporated.
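The paper drives the robot with a "teleop_key" package; the sketch below only illustrates the underlying idea of keyboard teleoperation: keys are mapped to geometry_msgs/Twist messages on /cmd_vel, where the topic name and the speed values are assumptions about the differential-drive plugin rather than details given in the paper.

```python
#!/usr/bin/env python
# Minimal keyboard-teleop sketch: map keys to Twist commands on /cmd_vel.
import sys, termios, tty
import rospy
from geometry_msgs.msg import Twist

KEY_BINDINGS = {'w': (0.2, 0.0), 's': (-0.2, 0.0), 'a': (0.0, 0.5), 'd': (0.0, -0.5)}

def read_key():
    # Read one character from the terminal in raw mode.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

rospy.init_node('simple_teleop')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
while not rospy.is_shutdown():
    key = read_key()
    if key == 'q':          # quit
        break
    linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    pub.publish(cmd)
```

Once the environment has been driven through, the finished map can be saved from the command line with the map_server package mentioned above, e.g. `rosrun map_server map_saver -f <map_name>` (the map name is up to the user).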
It is important to note that the robot cannot navigate unless the map is provided to it. The navigation stack packages, together with amcl, provide a probabilistic localization system that allows the robot to move in a 2D environment. The robot is then ready to navigate anywhere in the created map. The robot's destination can be given with the 2D Nav Goal option in Rviz, which essentially gives the robot a goal: the user clicks the desired area on the map and also indicates the orientation the robot should take. The blue line is the actual path the robot has to follow to reach the destination. Because of certain parameters the robot may not follow the exact path given to it, but it always tries to follow it by constantly re-planning its route. The node graph shows the different topics being published and subscribed to by the different nodes; the /move_base node subscribes to several topics, such as odometry, velocity commands, the map, and the goal, which provide the data the robot base needs to navigate the environment.
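Clicking 2D Nav Goal in Rviz ultimately results in a goal being sent to the move_base action server. The sketch below does the same thing programmatically through actionlib; the goal coordinates are made-up examples, not destinations from the paper.

```python
#!/usr/bin/env python
# Illustrative sketch: sending a navigation goal to move_base via actionlib,
# equivalent to clicking "2D Nav Goal" in Rviz. Coordinates are examples only.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0       # example destination in the map frame
goal.target_pose.pose.position.y = 1.5
goal.target_pose.pose.orientation.w = 1.0    # face along +x

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Navigation finished with state %d', client.get_state())
```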
Evaluation of Results
To evaluate the performance of ROS- and SLAM-based GMapping and navigation, specific environments were created. In each environment, different parameters were examined, such as how well the SLAM-generated map represents reality and the time the robot needs to reach a given destination. In addition, dynamic obstacles were placed on the robot's navigation path to measure the time the robot needs to re-plan its route onto another path. Several destinations were used to test the algorithm: when destination A is set as the robot's goal, SLAM finds the shortest path based on the previously generated map, but when a dynamic obstacle is placed on that path, the laser sensor scans the area and the map is updated with the detected obstacle. Once the map is updated, SLAM finds the next shortest path to the destination.
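The travel-time measurements described here could be scripted along the lines of the sketch below, which repeats the same move_base goal and averages the time to completion; the destination coordinates are illustrative, and the 10 trials mirror the averaging described in the conclusion.

```python
#!/usr/bin/env python
# Hypothetical timing harness for the travel-time evaluation described above:
# send the same move_base goal repeatedly and average the time to completion.
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def timed_trial(client, x, y):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    start = rospy.Time.now()
    client.send_goal(goal)
    client.wait_for_result()
    elapsed = (rospy.Time.now() - start).to_sec()
    return elapsed, client.get_state() == GoalStatus.SUCCEEDED

rospy.init_node('travel_time_eval')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

# Example destination; 10 trials as reported in the conclusion.
times = [timed_trial(client, 2.0, 1.5)[0] for _ in range(10)]
rospy.loginfo('mean travel time: %.2f s', sum(times) / len(times))
```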
Conclusion
To verify the performance of ROS- and SLAM-based mapping and navigation, in this project the robot was driven through a specific environment created in the Rviz simulator and through its map. After the map was created and a goal point was fixed, the time the robot took to reach the destination was measured, and the average was taken over 10 trials. The same procedure was repeated with different destination points. In some cases obstacles were also introduced, so that the robot had to find another path and travel along it. A second environment was created and tested in the same way, and the time required to reach the destination was measured.
From this study it can be observed that the robot gives a good response time and needs only a reasonable amount of time to cover the distance from the starting point to the destination. As the distance increases, the time taken increases as well. When the map contains obstacles, the robot finds the shortest path around them, and if an additional obstacle is introduced, the robot stops and recomputes a new path.
Referenced Foreign-Language Paper (Original)
[1] International Journal of Pure and Applied Mathematics, Vol. 118, No. 7, 2018, pp. 199-205.
ROS based Autonomous Indoor Navigation Simulation Using SLAM Algorithm
Rajesh Kannan Megalingam, Chinta Ravi Teja, Sarath Sreekanth, Akhil Raj
Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Kerala, India.
Abstract—In this paper, we are checking the flexibility of a SLAM based mobile robot to map and navigate in an indoor environment. It is based on the Robot Operating System (ROS) framework. The model robot is made using gazebo package and simulated in Rviz. The mapping process is done by using the GMapping algorithm, which is an open source algorithm. The aim of the paper is to evaluate the mapping, localization, and navigation of a mobile robotic model in an unknown environment. Keywords—Gazebo; ROS; Rviz; Gmapping; laser scan; Navigation; SLAM; Robot model; Packages.
I. INTRODUCTION
In the modern world, the need for machines are increasing due to the probability of making mistakes by the robot is less. The research and application of robotics are from healthcare to artificial intelligence. A robot can’t understand the surroundings unless they are given some sensing power. We can use different sensors like LIDAR, RGB-D camera, IMU (inertial measurement units) and sonar to give the sensing power. By using sensors and mapping algorithms a robot can create a map of the surroundings and locate itself inside the map. The robot will be continuously checking the environment for the dynamic changes happening there. Our aim was to build an autonomous navigation platform for indoor application. In this paper, we are checking the efficiency of a SLAM (Simultaneous Localization and Mapping) based robot model implemented in ROS (Robot Operating System) by measuring the travel time taken by the robot model to reach the destination. The test is done in a virtual environment created by Rviz. By placing different dynamic obstacles for different destinations in the map, the travel time is measured.
II. MOTIVATION
Working with the robots need a lot of sensors and every process needs to be handled in real time. To use the sensors and actuators which needs to be updated every 10-50 milliseconds we need a type of operating system that gives this kind of privilege. Robot Operating System (ROS) provides us with the architecture to achieve this. ROS is open source and there are a lot of codes available from good research institutes which one can readily use and implement in their own projects. Further robot’s engineers earlier lacked a common platform for collaboration and communication which delayed the adoption of robotic butlers and other related developments. The robotic innovation has quickly paced up since last decade with the advent of ROS wherein the engineers can build robotic apps and programs. Robot navigation is a very wide topic which most of the researchers are concentrating in the field of robotics. For a mobile robot system to be autonomous, it has to analyze data from different sensors and perform decision making in order to navigate in an unknown environment. ROS helps us in solving different problems related to the navigation of the mobile robot and also the techniques are not restricted to a particular robot but are reusable in different development projects in the field of robotics.
III. RELATED WORKS
In the research paper [1], the Authors use ROS with a gmapping algorithm to localize and navigate. Gmapping algorithm uses laser scan data from the LIDAR sensor to make the map. The map is continuously monitored by OpenCV face detection and corobot to identify human and navigate through the working environment. The authors of research paper [2] explain about 2 cooperative robots which work based on ROS, mapping, and localization. These robots are self-driving and working in unknown areas. For this project also the algorithm used is SLAM. Here the main tasks of the robots are to pick up three block pieces and to arrange them in a predetermined manner. With the help of the ROS, they made robots for this purpose. In the research paper [3], the Authors created a simulation of the manipulator and illustrated the methods to implement robot control in a short time. Using ROS and gazebo package, they build a model of pick and place robot with 7 DOF. They managed to find a robot control which takes less time. A research paper [5] compares 3 SLAM algorithms core SLAM, Gmapping, and Hector SLAM using simulation. The best algorithm is used to test unmanned ground vehicles(UGV) in different terrains for defense missions. Using simulation experiments they compared the performance of different algorithms and made a robotic platform which performs localization and mapping. The authors of the research paper [6], made a navigation platform with the use of automated vision and navigation framework, With the use of ROS, the open source GMapping bundle was used for Simultaneous Localization and Mapping (SLAM). Using this setup with rviz, the turtlebot 2 is implemented. Using a Kinect sensor in place of laser range finder, the cost is reduced. The journal [9], deals with indoor navigation based on sensors that are found in smart phones. The smartphone is used as both a measurement platform and user interface. The Author of the journal [10] implemented a 6-degree of freedom (DOF) pose estimation (PE) method and an indoor wayfinding system for the visually impaired. The floor plane is extracted from the 3-D camera’s point cloud and added as a landmark node into the graph for 6-DOF SLAM to reduce errors. roll, pitch, yaw, X, Y, and Z are the 6 axes. The user interface is through sound. Journal [11] explains why the indoor environment is difficult for an autonomous quadcopter. Since the experiment is done indoor they couldn’t use GPS, they used a combination of a laser range finder, XSens IMU, and laser mirror to make 3-D map and locate itself inside it. The quadcopter is navigating using SLAM algorithm.In paper [12] the authors describe fixed path algorithm and characteristics of the wheelchair which uses this with the help of simulation techniques. The authors of paper [13] explain about an auto navigation platform made in Arduino and the use of ani2c protocol to interface components like adigital compass and a rotation encoder to calculate the distance. In the paper [14], using Fuzzy toolbox in Matlab the authors created an autonomous mobile robot and uses the robot for path planning. 24 fuzzy rules on the robot are carried out. The authors of the paper [15], creates an object level mapping of an indoor space using RFID ultra-high frequency passive tags and readers. they say the method is used to map a large indoor area in a cost-effective manner.
IV. SYSTEM
A. ROS Robotic Operating System (ROS) is a free and open-source and one of the most popular middlewares for robotics programming. ROS comes with message passing interface, tools, package management, hardware abstraction etc. It provides different libraries, packages and several integration tools for the robot applications. ROS is a message passing interface that provides inter-process communication so it is commonly referred as middleware. There are numerous facilities that are provided by ROS which helps researchers to develop robot applications. In this research work, ROS is considered as the main base because it publishes messages in the form of topics in between different nodes and has a distributed parameter system. ROS also provides Interplatform operability, Modularity, Concurrent resource handling. ROS simplifies the whole process of a system by ensuring that the threads aren't actually trying to read and write to shared resources, but are rather just publishing and subscribing to messages. ROS also helps us to create a virtual environment, generate robot model, implement the algorithms and visualize it in the virtual world rather than implementing the whole system in the hardware itself. Therefore, the system can be improved accordingly which provides us a better result when it is finally implemented it in the hardware.
B. Gazebo The gazebo is a robot simulator. Gazebo enables a user to create complex environments and gives the opportunity to simulate the robot in the environment created. In Gazebo the user can make the model of the robot and incorporate sensors in a three-dimensional space. In the case of the environment, the user can create a platform and assign obstacles to that. For the model of the robot, the user can use the URDF file and can give links to the robot. By giving the link we can give the degree of movement for each part of the robot. The robot model which is created for this research is a differential drive robot with two wheels, Laser, and a camera on it as shown in Fig. 1. A sample environment is created in the Gazebo for the robot to move and map accordingly. The sample map is shown in Fig. 2. In this environment, several objects were placed randomly where the map is created along with it the objects as these objects were considered as static obstacles.
C. SLAM Autonomous robots should be capable of safely exploring their surroundings without colliding with people or slamming into objects. Simultaneous localization and mapping (SLAM) enable the robot to achieve this task by knowing how the surroundings look like (mapping) and where it stays with respect to the surrounding (localization). SLAM can be implemented using different types of 1D, 2D and 3D sensors like acoustic sensor, laser range sensor, stereo vision sensor and RGB-D sensor. ROS can be used to implement different SLAM algorithms such as Gmapping, Hector SLAM, KartoSLAM, Core SLAM, Lago SLAM. KartoSLAM, Hector SLAM, and Gmapping are better in the group compared to others. These algorithms have a quite similar performance from map accuracy point of view but are actually conceptually different. That’s, Hector SLAM is EKF based, Gmapping is based on RBPF occupancy grid mapping and KartoSLAM in based on thegraph-based mapping. Gmapping can perform well for a less processing power robot. The mapping package in ROS provides laser-based SLAM (Simultaneous Localization and Mapping), as the ROS node called slam_gmapping.
D. Rviz Rviz is a simulator in which we can visualize the sensor data in the 3D environment, for example, if we fix a Kinect in the robot model in the gazebo, the laser scan value can be visualized in Rviz. From the laser scan data, we can build a map and it can be used for auto navigation. In Rviz we can access and graphically represent the values using camera image, laser scan etc. This information can be used to build the point cloud and depth image. In rviz coordinates are known as frames. We can select many displays to be viewed in Rviz they are data from different sensors. By clicking on the add button we can give any data to be displayed in Rviz. Grid display will give the ground or the reference. Laser scan display will give the display from the laser scanners. Laser scan displays will be of the type sensor msgs/Laser scans. Point cloud display will display the position that is given by the program. Axes display will give the reference point.
V. IMPLEMENTATION
The environment for the robot model to perform the navigation is created in the gazebo and the robot model which was created is imported into the environment. The robot model consists of two wheels, two caster wheels for the ease of movement and a camera is attached to the robot model. Later the Hokuyo Laser is added to the robot and plugins were incorporated into the gazebo files. Hokuyo laser provides laser data which can be used for creating the map. Using the Gmapping packages a map is created in the Rviz by adding the different parameters that are necessary. The Fig. 3, shows the initial generation of the map when launched. Initially, the robot model is moved to every corner of the environment until a full map is created using the “teleop_key” package where the robot is controlled using the keyboard. As shown in the Fig. 4, the final generated map in the Rviz which is very much similar to the created environment in the gazebo. For visualization in Rviz, necessary topics were selected and added. The Hokuyo laser sensor which is used in this robot model publishes the laser data in the form of the topic “/scan” which is selected as a topic of laser scan in rviz. In a similar way for creating the map, “/map” topic is added. The generated map is saved using the map_server package that is available in the ROS. Once the map is generated and saved the robot is now ready for the incorporation of navigation stack packages
It is very important to note that a robot cannot be navigated without feeding the map to it. Navigation stack packages by using amcl were used which provides a probabilistic localization system for a robot to move in a 2D. Now, the robot is ready to navigate anywhere in the created map. The destination for the robot can be given using the 2D nav goal option in the Rviz which basically acknowledges the robot with a Goal. The user has to click on the desired area in the map and should also point out the orientation of the robot that it has to be in. The blue line is the actual path that the robot has to follow to reach the destination. The robot may not follow the exact path that is given to it due to some of the parameters but it always tries to follow it by rerouting itself constantly. The node graph that is shown in the Fig. 5, indicates the different topics that are being published and subscribed to the different nodes. The /move_base node is subscribed to several topics like odometry, velocity commands, map, goal, these topics gives the necessary data for the base of the robot to navigate in the environment.
VI. EVALUATION OF THE RESULTS
In order to evaluate the performance of ROS and slam based Gmapping and navigation, specific environments were created. In each environment, different parameters like how well the SLAM generated maps represent reality, the time it took for the robot to reach the given destination. Also, the dynamic obstacles were placed in the robot's navigation path to