A Low Cost Experimental Telerobotic System
A. Zaatri
Mechanical Department Laboratory
University Mentouri of Constantine, Algeria
http://www.infopoverty.net/new/Confs/IWC_05/docs/Zaatri.doc
Abstract
This paper presents the development of a low cost experimental telerobotic system built with local means in an emerging country (Algeria). From a remote site, a webcam sends images of a robot manipulator over the Internet to the control site, where a human operator remotely controls this robot in order to achieve pick-and-place tasks. Several control modes have been implemented and tested, such as mouse-click, image-based and gesture-based modes.
Very encouraging pedagogical results have been obtained in this attractive and complex field of modern technology.
1. Introduction
In developing countries, very hard constraints and difficulties are imposed on students and researchers, usually leading to inadequate pedagogic results, especially when attempting to learn about and experiment with complex modern systems. These constraints may stem from the lack of budgets, from a discouraging bureaucratic environment, from a mismatch between university and industry, etc.
One interesting and challenging field for students in emerging countries to investigate and experiment with is the design of modern technology applications, such as the development of low cost experimental telerobotic systems. Indeed, this helps them understand and master how to combine both engineering and information technologies in order to build complex systems.
In this context, some pedagogic telerobotic systems are available through the Internet, such as the mobile robot Xavier [1] and the ABB web robot of the University of Western Australia [2]. However, as far as we know, none are available in developing countries. Therefore, to introduce this challenging technology, a didactic program has been launched based on the following steps:
- build robot arm manipulators;
- build a pan-tilt unit (ptu) for controlling a webcam's orientation;
- implement the robot control software;
- implement the communication software connecting the robot site and the operator site via the Internet;
- implement and remotely test some control modes.
2. The Experimental Telerobotic System
The telerobotic system is composed, at the remote site, of a robot arm manipulator and of a ptu that controls the orientation of a webcam. Both the robot arm manipulator and the ptu have been designed and built in our laboratory. The arm manipulator is a serial robot of type RRR with three degrees of freedom. It holds a gripper. The ptu enables horizontal and vertical orientations. The articulations are motorised with very economical DC motors. Figure 1 shows the ptu holding the webcam as well as the robot arm manipulator. Likewise, the electronic command unit for robot control was implemented in our laboratory with very cheap components.
Figure 1 . The telerobotic remote system
Since no hardware for signal acquisition is available at this stage, the electronic command unit simply uses the parallel port of the PC to select and activate the DC motors in an on-off way.
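The on-off activation described above can be sketched as a mapping from a (motor, direction) pair to the byte written to the parallel port's data register. The two-bits-per-motor layout below is an assumption made for illustration; the paper does not specify the actual wiring, and the platform-specific port write itself (e.g. a native call in C) is omitted.

```java
// Sketch of on-off motor selection over the PC parallel port.
// Assumed bit layout of the 8-bit data register (not from the paper):
//   bit 2*m   -> rotate motor m in the "forward" direction
//   bit 2*m+1 -> rotate motor m in the "backward" direction
public class MotorCommand {
    public static final int STOP_ALL = 0x00; // clearing the register stops every motor

    public static int commandByte(int motor, boolean forward) {
        if (motor < 0 || motor > 3)
            throw new IllegalArgumentException("motor must be 0..3");
        int bit = 2 * motor + (forward ? 0 : 1);
        return 1 << bit;
    }

    public static void main(String[] args) {
        System.out.println(commandByte(0, true));   // 1
        System.out.println(commandByte(2, false));  // 32
    }
}
```

Because the motors are driven in an on-off way, stopping a motion is just writing `STOP_ALL` back to the port.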
At the local site stands the human operator, who remotely directs the tasks via a Graphical User Interface (GUI). This GUI follows a user-centred design. It provides facilities to remotely control both the robot arm manipulator and the pan-tilt unit for selecting views. Mouse-click-based control, image-based control and gesture-based control have been implemented and tested. Figure 2 shows the operator at the local site and the video stream that enables him to carry out tasks.
Figure 2 . The telerobotic local site
Two PCs are used, one at the local site and the other at the remote site. The interconnection between these sites is based on TCP/IP sockets. The software is mainly written in Java, while some low-level functions are written in C.
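The socket interconnection can be sketched as follows in Java, the system's main language. The line-oriented command/acknowledgement protocol shown here is an assumption; the paper does not describe its wire format.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the local/remote interconnection over TCP/IP sockets.
// A server thread stands in for the robot site and acknowledges each
// command line sent by the operator site (protocol assumed, not the paper's).
public class CommandLink {
    public static String sendCommand(String host, int port, String command) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(command);       // operator site sends one command line
            return in.readLine();       // and waits for the acknowledgement
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // robot-site listener, any free port
        new Thread(() -> {
            try (Socket c = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                out.println("ACK " + in.readLine());
            } catch (IOException ignored) {}
        }).start();
        System.out.println(sendCommand("127.0.0.1", server.getLocalPort(), "MOVE J1 +"));
        server.close();
    }
}
```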
For economic reasons, we have so far implemented only the direct geometric model and the inverse geometric model. Of course, the system is not accurate, since these models take no account of gravity and there is no feedback. Nevertheless, these simple models enable some pick-and-place tasks to be achieved.
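Assuming a base rotation followed by two links in a vertical plane (one common RRR layout; the paper gives neither the exact geometry nor the link lengths, so the dimensions below are placeholders), the direct and inverse geometric models can be sketched as:

```java
// Sketch of the direct and inverse geometric models of a 3-DOF RRR arm:
// joint 1 rotates the base, joints 2 and 3 move two links in a vertical
// plane. Geometry and link lengths are assumptions, not the paper's values.
// As in the paper, there is no dynamics and no gravity compensation.
public class Rrr {
    static final double L1 = 0.10, L2 = 0.20, L3 = 0.15; // metres (assumed)

    // Direct model: joint angles (rad) -> end-effector position {x, y, z}.
    public static double[] direct(double t1, double t2, double t3) {
        double r = L2 * Math.cos(t2) + L3 * Math.cos(t2 + t3); // radial reach
        return new double[] {
            r * Math.cos(t1),
            r * Math.sin(t1),
            L1 + L2 * Math.sin(t2) + L3 * Math.sin(t2 + t3)
        };
    }

    // Inverse model (one of the two elbow solutions), valid inside the workspace.
    public static double[] inverse(double x, double y, double z) {
        double t1 = Math.atan2(y, x);
        double r = Math.hypot(x, y), h = z - L1;
        double c3 = (r * r + h * h - L2 * L2 - L3 * L3) / (2 * L2 * L3);
        double t3 = Math.acos(Math.max(-1, Math.min(1, c3)));
        double t2 = Math.atan2(h, r)
                  - Math.atan2(L3 * Math.sin(t3), L2 + L3 * Math.cos(t3));
        return new double[] { t1, t2, t3 };
    }
}
```

A quick consistency check is to run a point through `inverse` and back through `direct`; the round trip should reproduce the point exactly for any reachable target.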
3. Control Modes
To remotely achieve tasks, we have implemented the following three control modes.
3.1. Mouse click commands
The mouse-click control mode enables the control of both the robot and the ptu by simple mouse clicks on appropriate buttons of a panel. Each button represents a specific function or a specific direction of motion. Figure 3 shows the control panels of the arm manipulator and of the ptu.
Figure 3 . Control Panels (robot and ptu)
To achieve tasks with this mode, the operator directs the robot by a series of clicks on the appropriate buttons.
3.2. Image Based Commands
The image-based control mode enables high-level control. Within this mode, the operator directs the robot towards locations in 2D or 3D space simply by pointing at their images with mouse clicks [3]. This mode has also been used to control the Marsokhod robot [4].
3.3. Gesture commands
The operator stands in front of the webcam and moves an object in a certain direction. An algorithm using the KLT tracker [5] determines the direction of the motion, which serves to orient the robot in the corresponding direction.
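The last step can be sketched as a direction vote over tracked feature displacements. The KLT tracker itself [5] is not reproduced here; we assume it has already produced per-feature pixel displacements (dx, dy), and the averaging-and-quantization step below is our own simplification.

```java
// Sketch of turning KLT feature displacements into a discrete robot command.
// Input: displacements[i] = {dx, dy} in pixels for each tracked feature
// (assumed to come from an external KLT tracker, which is not shown here).
public class GestureDirection {
    public enum Dir { LEFT, RIGHT, UP, DOWN, NONE }

    public static Dir dominantDirection(double[][] displacements, double minMagnitude) {
        double sx = 0, sy = 0;
        for (double[] d : displacements) { sx += d[0]; sy += d[1]; }
        sx /= displacements.length;
        sy /= displacements.length;
        if (Math.hypot(sx, sy) < minMagnitude) return Dir.NONE; // ignore jitter
        if (Math.abs(sx) >= Math.abs(sy)) return sx > 0 ? Dir.RIGHT : Dir.LEFT;
        // image y grows downward, so a positive dy means a downward gesture
        return sy > 0 ? Dir.DOWN : Dir.UP;
    }
}
```

The `minMagnitude` threshold filters out small tracker noise so that only a deliberate gesture moves the robot.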
4. Experiments
Various experiments have been carried out involving the described control modes.
4.1 Mouse-click control experiments
Within this control mode, the operator can carry out pick-and-place tasks, such as picking a box from a table and placing it at another location.
In practice, the operator manages the task by clicking on selected buttons of the graphical panel in order to direct the robot towards the object of interest. Once the end-effector is positioned near the box, the operator activates the gripper to pick this object up. Then the operator moves the robot towards the position where the box has to be left. Once this position is reached, the operator deactivates the gripper in order to release the box. Figure 4 illustrates our experimental robot performing a pick-and-place task.
Figure 4 . The robot performing a task
Many experiments have been carried out with different students. It turns out that this mode is intuitive and very easy to learn.
On the other hand, difficulties arise from the fact that the operator has to direct tasks by controlling each degree of freedom independently. One main advantage is that the operator compensates for the uncertainties and the robot's inaccuracy.
4.2 Image-based control experiments
Many experiments have been carried out using image-based control. In practice, this control mode is used to send the robot to some location. First, an image of the remote site is grabbed. Then the operator selects an object of interest. The stereovision software extracts the coordinates of this object, which are used to move the robot towards the object in the real world.
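For the 2D case, the click-to-target mapping can be sketched as a simple affine calibration of the table plane (a scale and an offset per axis, with assumed values). This is our simplification for illustration; the paper's stereovision software for the 3D case is not reproduced.

```java
// Sketch of the 2D image-based command: a mouse click (u, v) on the grabbed
// image is mapped to a target (x, y) on the table plane.
// All calibration constants below are assumed values, not the paper's.
public class ImageToWorld {
    static final double SX = 0.002, SY = 0.002;   // metres per pixel (assumed)
    static final double OX = -0.30, OY = -0.20;   // world coords of pixel (0, 0) (assumed)

    // Clicked pixel -> target position {x, y} on the table plane.
    public static double[] target(int u, int v) {
        return new double[] { OX + SX * u, OY + SY * v };
    }
}
```

The resulting (x, y), together with a fixed grasp height, feeds the inverse geometric model that moves the robot. The quality of this mapping depends entirely on the webcam calibration, which is exactly where the cheap hardware hurts the results below.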
In practice, inaccuracy has negatively influenced our results, because of the simplicity of the models, the lack of feedback, and the difficulty of calibrating cheap webcams. As a consequence, the implementation of image-based control in 2D space has provided better results than that in 3D space.
4.3 Gesture-based control Experiments
Experiments have been carried out within this control mode. The operator generates a series of movements in different directions. The software analyses the image stream and moves the robot in the corresponding directions.
This control mode offers the advantage that the operator does not need to touch the computer. Another advantage is the possibility of using this technique for robot programming by human demonstration. Nevertheless, some difficulties related to image processing and environmental issues limit the capability of this control mode.
5. Conclusion
A low cost pedagogic experimental telerobotic system built in our laboratory has effectively been used to carry out simple pick-and-place experiments. We have implemented and tested three control modes, namely mouse-click-based control, image-based control and gesture-based control.
Experiments have shown that the main issue remains the poor accuracy of the telerobotic system. This issue can be overcome by adding equipment such as accurate motors and cameras, by implementing dynamic robot models and by using feedback control.
One important added value would be to combine these modes in order to build a multimodal interface.
References
[1] R. Simmons et al., "Xavier: an autonomous mobile robot on the web", Robotics and Automation Magazine, 2000, pp. 733-739.
[2] B. Dalton, "Techniques for web telerobotics", Department of Mechanical and Materials Engineering, University of Western Australia, 2001.
[3] A. Zaatri and M. Oussalah, "Integration and design of multimodal interfaces for supervisory control systems", Information Fusion, 2003, 4(2), pp. 135-150.
[4] D. Wettergreen, H. Thomas and M. Bualat, "Initial results from vision-based control of the Ames Marsokhod rover", IEEE International Conference on Intelligent Robots and Systems, Grenoble, Sep. 1997.
[5] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", International Joint Conference on Artificial Intelligence, 1981, pp. 674-679.