A Low Cost Experimental Telerobotic System
A. Zaatri
Mechanical Department Laboratory
University Mentouri of Constantine, Algeria
http://www.infopoverty.net/new/Confs/IWC_05/docs/Zaatri.doc
Abstract
This paper presents the development of a low cost experimental telerobotic system built with local means in an emerging country (Algeria). From a remote site, a webcam sends images of a robot manipulator through the Internet to the control site, where a human operator remotely drives this robot in order to achieve pick-and-place tasks. Several control modes have been implemented and tested: mouse-click, image-based and gesture-based modes.
Very encouraging pedagogical results have been obtained in this attractive and complex field of modern technology.
1. Introduction
In developing countries, very hard constraints and difficulties are imposed on students and researchers, usually leading to inadequate pedagogic results, especially when attempting to learn about and experiment with complex modern systems. These constraints may stem from the lack of budget, from a discouraging bureaucratic environment, from a mismatch between university and industry, etc.
One interesting and challenging field for students in emerging countries to investigate and experiment with concerns the design of modern technology applications, such as the development of low cost experimental telerobotic systems. Indeed, this helps students understand and master how to combine engineering and information technologies in order to build complex systems.
In this context, some pedagogic telerobotic systems are available through the Internet, such as the mobile robot Xavier [1] and the ABB web robot of the University of Western Australia [2]. However, as far as we know, none are available in developing countries. Therefore, to introduce this challenging technology, a didactic program has been launched based on the following steps:
- build a robot arm manipulator;
- build a pan-tilt unit (ptu) for controlling the orientation of a webcam;
- implement the robot control software;
- implement the communication software connecting the robot site and the operator site via the Internet;
- implement and remotely test some control modes.
2. The Experimental Telerobotic System
The telerobotic system is composed, at the remote site, of a robot arm manipulator and of a ptu that controls the orientation of a webcam. Both the robot arm manipulator and the ptu have been designed and built in our laboratory. The arm manipulator is a serial robot with three degrees of freedom of type RRR. It holds a gripper. The ptu enables horizontal and vertical orientations. The articulations are motorised with very economical DC motors. Figure 1 shows the ptu holding the webcam as well as the robot arm manipulator. Likewise, the electronic command unit for robot control was implemented in our laboratory with very cheap components.
Figure 1 . The telerobotic remote system
Since no signal acquisition hardware is available at this stage, the electronic command unit simply uses the parallel port of the PC to select and activate the DC motors in an on-off way.
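The on-off activation scheme can be sketched as follows. This is a minimal illustration under assumed conventions not detailed in the paper: two data-register bits per motor, one to enable it and one to set its direction; the actual write to the parallel port would be done by a native low-level routine such as the paper's C functions.

```java
// Sketch of an on-off motor command encoding for the PC parallel port.
// Assumption (not from the paper): bit 2n enables motor n, bit 2n+1 sets
// its direction. The actual port write is left to native low-level code.
class MotorCommand {

    /** Build the 8-bit data-register value driving a single motor. */
    static int commandByte(int motor, boolean on, boolean forward) {
        int b = 0;
        if (on) {
            b |= 1 << (2 * motor);                    // enable bit for this motor
            if (forward) b |= 1 << (2 * motor + 1);   // direction bit
        }
        return b & 0xFF; // the parallel port data register is 8 bits wide
    }
}
```

Because each motor is only switched on or off, no speed control is possible; the operator stops a motion by releasing the corresponding command, which matches the on-off behaviour described above.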
At the local site stands the human operator, who remotely directs the tasks via a Graphical User Interface (GUI). This GUI follows a user-centred design. It provides facilities to remotely control both the robot arm manipulator and the pan-tilt unit for selecting views. Mouse-click-based control, image-based control and gesture-based control have been implemented and tested. Figure 2 shows the operator at the local site and the video stream that enables him to carry out tasks.
Figure 2 . The telerobotic local site
Two PCs are used, one at the local site and the other at the remote site. The interconnection between these sites is based on TCP/IP sockets. The software is mainly written in Java, while some low level functions are written in C.
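The socket link between the two sites can be sketched in Java as a simple line-oriented exchange. The port choice, the command string and the acknowledgement protocol below are illustrative assumptions, not the paper's actual protocol.

```java
import java.io.*;
import java.net.*;

// Sketch of the operator-site / robot-site command link over a TCP socket,
// assuming a line-oriented protocol (one command per line, one ACK back).
class CommandLink {

    /** Operator side: send one command line and wait for the acknowledgement. */
    static String sendCommand(String host, int port, String cmd) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(cmd);       // e.g. "MOVE J1 +" to jog joint 1 (hypothetical command)
            return in.readLine();   // robot site acknowledges
        }
    }

    /** Robot side: accept one connection and acknowledge its command line. */
    static void serveOnce(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println("ACK " + in.readLine());
        }
    }

    /** Self-test on the loopback interface: serve in a thread, send one command. */
    static String demo() {
        try {
            ServerSocket server = new ServerSocket(0); // any free port
            new Thread(() -> {
                try { serveOnce(server); } catch (IOException e) { /* sketch: ignore */ }
            }).start();
            return sendCommand("127.0.0.1", server.getLocalPort(), "MOVE J1 +");
        } catch (IOException e) {
            return "ERROR " + e.getMessage();
        }
    }
}
```

In such a design the image stream and the command channel are independent, so the robot site can keep sending webcam frames while commands arrive on the socket.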
For economic reasons, we have so far implemented only the direct geometric model and the inverse geometric model. Of course, the system is not accurate, since these models do not take the effect of gravity into account and there is no feedback. Nevertheless, these simple models make it possible to achieve some pick-and-place tasks.
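For a 3-DOF RRR arm of this kind, the direct and inverse geometric models can be sketched as follows, assuming a base rotation about the vertical axis followed by two revolute joints moving the links in a vertical plane. The link lengths are illustrative placeholders, not the robot's actual dimensions.

```java
// Sketch of the direct and inverse geometric models of a 3-DOF RRR arm:
// q1 rotates the base about the vertical axis; q2, q3 move links of
// lengths L1, L2 in a vertical plane. Link lengths are assumed values.
class ArmModel {
    static final double L1 = 0.30, L2 = 0.25; // link lengths in metres (assumed)

    /** Direct model: joint angles (rad) -> end-effector position {x, y, z}. */
    static double[] forward(double q1, double q2, double q3) {
        double r = L1 * Math.cos(q2) + L2 * Math.cos(q2 + q3); // radial reach
        double z = L1 * Math.sin(q2) + L2 * Math.sin(q2 + q3); // height
        return new double[] { r * Math.cos(q1), r * Math.sin(q1), z };
    }

    /** Inverse model (elbow-down branch): position -> joint angles {q1, q2, q3}. */
    static double[] inverse(double x, double y, double z) {
        double q1 = Math.atan2(y, x);
        double r = Math.hypot(x, y);
        // law of cosines gives cos(q3) from the distance to the target
        double d = (r * r + z * z - L1 * L1 - L2 * L2) / (2 * L1 * L2);
        double q3 = Math.atan2(-Math.sqrt(1 - d * d), d); // elbow-down solution
        double q2 = Math.atan2(z, r)
                  - Math.atan2(L2 * Math.sin(q3), L1 + L2 * Math.cos(q3));
        return new double[] { q1, q2, q3 };
    }
}
```

Since these are purely geometric models, link flexion under gravity and motor dead zones are ignored, which is consistent with the inaccuracy reported in the experiments.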
3. Control Modes
To remotely achieve tasks, we have implemented the following three control modes.
3.1. Mouse click commands
The mouse-click control mode enables the control of the robot as well as of the ptu by simple mouse clicks on appropriate buttons of a panel. Each button represents a specific function or a specific direction of motion. The frames shown in Figure 3 are the control panels of the arm manipulator and of the ptu.
Figure 3 . Control Panels (robot and ptu)
To achieve tasks with this mode, the operator directs the robot by a series of clicks on the appropriate buttons.
3.2. Image Based Commands
The image-based control mode enables high level control. Within this mode, the operator directs the robot towards locations in 2D or 3D space simply by pointing at their images with a mouse click [3]. This mode has also been used to control the Marsokhod robot [4].
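In the 2D case, the mapping from a clicked pixel to a target on the worktable can be sketched under a strong simplifying assumption: the webcam looks straight down on the table, so a pixel maps to table coordinates by a scale and an offset. The calibration constants below are hypothetical placeholders, not values measured for the paper's setup.

```java
// Sketch of the 2D image-based mode: map a clicked pixel (u, v) to a
// target (x, y) on the table plane, assuming an overhead camera and a
// linear calibration. Constants are illustrative, not from the paper.
class ImageToWorld {
    static final double METRES_PER_PIXEL = 0.002; // assumed camera scale
    static final double X0 = -0.32, Y0 = -0.24;   // assumed table position of pixel (0, 0)

    /** Convert a mouse click in the image to a goal point for the arm. */
    static double[] pixelToTable(int u, int v) {
        return new double[] { X0 + u * METRES_PER_PIXEL, Y0 + v * METRES_PER_PIXEL };
    }
}
```

The 3D case additionally needs stereovision to recover depth, which is where cheap, poorly calibrated webcams degrade the result, as reported in the experiments.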
3.3. Gesture Commands
The operator stands in front of the webcam and moves an object in a certain direction. An algorithm based on the KLT tracker [5] determines the direction of the motion, which serves to move the robot in the corresponding direction.
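The direction decision can be sketched as follows: given the positions of the tracked features in two frames (the KLT tracking itself is assumed already done), average their displacement and keep the dominant axis. The discretisation into four commands is an illustrative choice, not necessarily the paper's.

```java
// Sketch of turning KLT feature tracks into a discrete robot command:
// average the displacement of the tracked points between two frames and
// keep the dominant axis. Image y grows downward, as in most image APIs.
class GestureDirection {

    /** prev/curr: tracked feature coordinates {x, y} in two consecutive frames. */
    static String dominantDirection(double[][] prev, double[][] curr) {
        double dx = 0, dy = 0;
        for (int i = 0; i < prev.length; i++) {
            dx += curr[i][0] - prev[i][0];
            dy += curr[i][1] - prev[i][1];
        }
        dx /= prev.length; // mean horizontal displacement
        dy /= prev.length; // mean vertical displacement
        if (Math.abs(dx) >= Math.abs(dy))
            return dx >= 0 ? "RIGHT" : "LEFT";
        return dy >= 0 ? "DOWN" : "UP";
    }
}
```

Averaging over all tracked points gives some robustness to individual tracking failures, but lighting changes and background motion can still confuse the decision, in line with the limitations noted below.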
4. Experiments
Various experiments have been carried out involving the described control modes.
4.1. Mouse-click control experiments
Within this control mode, the operator can carry out pick-and-place tasks such as picking a box up from a table and placing it at another location.
In practice, the operator manages the task by clicking on selected buttons of the graphical panel in order to direct the robot towards the object of interest. Once the end-effector is positioned near the box, the operator activates the gripper to pick the object up. Then, the operator moves the robot towards the position where the box has to be left. Once this position is reached, the operator deactivates the gripper in order to release the box. Figure 4 illustrates our experimental robot performing a pick-and-place task.
Figure 4 . The robot performing a task
Many experiments have been carried out with different students. It turns out that this mode is intuitive and very easy to learn.
On the other hand, difficulties arise from the fact that the operator has to direct tasks by controlling each degree of freedom independently. One main advantage is that the operator compensates for the uncertainties and the inaccuracy of the robot.
4.2. Image-based control experiments
Many experiments have been carried out using image-based control. In practice, this control mode is used to send the robot to some location. First, an image of the remote site is grabbed. Then, the operator selects an object of interest. The stereovision software extracts the coordinates of this object, which are used to move the robot towards the object in the real world.
In practice, inaccuracies have negatively influenced our results, because of the simplicity of the models, the lack of feedback and the calibration of cheap webcams. As a consequence, the implementation of image-based control in 2D space has provided better results than that in 3D space.
4.3. Gesture-based control experiments
Experiments have been carried out within this control mode. The operator generates a series of movements in different directions. The software analyses the image stream and moves the robot in the corresponding directions.
This control mode offers the advantage that the operator has no physical contact with the computer. Another advantage is the possibility of using this technique for robot programming by human demonstration. Nevertheless, some difficulties related to image processing and environmental issues limit the capability of this control mode.
5. Conclusion
A low cost pedagogic experimental telerobotic system built in our laboratory has effectively been used to carry out simple pick-and-place experiments. We have implemented and tested three control modes, namely mouse-click-based control, image-based control and gesture-based control.
Experiments have shown that the main issue remains the poor accuracy of the telerobotic system. This issue can be overcome by adding some equipment such as accurate motors and cameras, by implementing dynamic robot models and by using feedback control.
One important added value would be to combine these modes in order to build a multimodal interface.
References
[1] R. Simmons et al., “Xavier: an autonomous mobile robot on the web”, Robotics and Automation Magazine, 2000, pp. 733-739.
[2] B. Dalton, “Techniques for web telerobotics”, Department of Mechanical and Materials Engineering, University of Western Australia, 2001.
[3] A. Zaatri and M. Oussalah, “Integration and design of multimodal interfaces for supervisory control systems”, Information Fusion, 2003, 4(2), pp. 135-150.
[4] D. Wettergreen, H. Thomas and M. Bualat, “Initial results from vision-based control of the Ames Marsokhod rover”, IEEE International Conference on Intelligent Robots and Systems, Grenoble, Sep. 1997.
[5] B.D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision”, International Joint Conference on Artificial Intelligence, 1981, pp. 674-679.