Mobile Robot System to Aid the Daily Life of Bedridden Persons: Foreign Literature Translation
MOBILE ROBOT SYSTEM TO AID THE DAILY LIFE FOR BEDRIDDEN PERSONS

Takashi KOMEDA, Hiroaki MATSUOKA, Yasuhiro OGAWA, Mitsuyoshi FUJII, Tateki UCHIDA, Masao MIYAGI, Hiroyuki KOYAMA and Hiroyasu FUNAKUBO
SHIBAURA INSTITUTE OF TECHNOLOGY, FUKASAKU 307, OMIYA, 330, JAPAN

ABSTRACT

People who care for bedridden patients bear great burdens in mind and body. As the number of elderly people in the Japanese population increases, the care of bedridden patients will become an increasingly serious social problem. We are trying to address this problem by developing a small mobile robot system for bedridden patients. The purpose of the system is to pick up a small object placed somewhere inside the room and bring it back semi-automatically. The mobile robot consists of a manipulator, a visual sensor unit and a mobile unit. The system gives the patient information about the surroundings through a camera and a monitor. When the patient designates a target object on the monitor, the system measures its 3-dimensional location; the mobile unit then approaches the object, and the manipulator picks it up and carries it back to the patient. The system is controlled by image information and also by the human operator through an image-based interface.

1. INTRODUCTION

Helping bedridden patients generally places great burdens on the helper in mind and body. With the elderly population increasing, this helping work will become a social problem. We are trying to solve this problem by developing a small mobile robot system [1]. The purpose of the system is to fetch a target object placed somewhere inside the room semi-automatically. Of course, many kinds of work are involved in helping bedridden persons, for example changing clothes, giving medicine and cleaning the bed. We believe, however, that even if the machine performs only one type of work in place of a human, the mental and physical burdens on the helper should decrease.

2. SYSTEM CONSTRUCTION

Fig. 1 shows an overview of the mobile robot. The robot consists of a manipulator, a visual sensor unit and a mobile unit. The system gives the patient information about the surroundings through the camera and the monitor. When the patient designates a target object on the monitor, the system measures its 3-dimensional location. The mobile unit then approaches the target object, and the manipulator picks it up and carries it back to the patient. The mobile unit is 600 mm wide, 800 mm long and 550 mm high, and it has space to carry the controller and the batteries. The mobile unit is driven by two DC servo motors and two wheels, and each motor has a pulse encoder for position control. The manipulator is about 1255 mm long and has 4 degrees of freedom and a hand; its payload capacity is about 3 kgf. The actuator of each joint is a DC servo motor with a pulse encoder for position control. A pair of strain gauges is attached to the wrist part of the manipulator, so the weight of a target object can be measured and used in the feedback loop of the controller.

3. VISUAL SENSOR

The visual sensor system consists of a CCD camera and an image processor controlled by a personal computer. The camera is attached to the wrist part of the manipulator, so the camera and the manipulator can be moved simultaneously. The system gives surrounding information to the operator through the camera and the monitor. When a target object is designated, the system measures its 3-dimensional location. The target objects for this mobile robot system are things for daily use, which differ in size and shape, and it is difficult for image processing to recognize such objects and measure their locations in real time. To solve this problem, we put the objects for daily use on a tray that has a cylindrical grip of fixed size, and this grip is the target object of the visual sensor. We can therefore easily measure the size of the target on the image plane as a number of pixels of the CCD camera, and calculate the distance to the object using a simple formula (Fig. 2). The measurement of the 3-dimensional distance from the image and the subsequent visual tracking of the target object can both be done in real time. Monitor screen No. 2 is the image 0.5 s after monitor screen No. 1. Monitor screen No. 1 shows the center and the feature points of the target object; monitor screen No. 2 shows that the visual sensor has found a new center and new feature points by scanning the image. After another 0.5 s, the visual sensor will find yet another new center and feature points.
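As an illustration of the distance calculation described above, the following minimal sketch assumes a simple pinhole camera model; Fig. 2 and the authors' exact formula are not reproduced in this text, and the grip width and focal length below are hypothetical calibration constants, not values from the paper.

# Minimal sketch (assumptions, not the paper's formula): estimating the
# distance to the tray grip from its apparent width in the image, using a
# pinhole camera model. GRIP_WIDTH_MM and FOCAL_LENGTH_PX are hypothetical.

GRIP_WIDTH_MM = 30.0      # real width of the cylindrical grip (assumed)
FOCAL_LENGTH_PX = 800.0   # camera focal length expressed in pixels (assumed)

def distance_to_grip(grip_width_px: float) -> float:
    """Return the estimated camera-to-grip distance in millimetres.

    With a pinhole camera, an object of real width W at distance D projects
    to w = f * W / D pixels on the image plane, so D = f * W / w.
    """
    if grip_width_px <= 0:
        raise ValueError("grip must be visible (width in pixels > 0)")
    return FOCAL_LENGTH_PX * GRIP_WIDTH_MM / grip_width_px

if __name__ == "__main__":
    # Under these assumed calibration values, a grip that appears 20 pixels
    # wide would be about 1200 mm away.
    print(round(distance_to_grip(20.0)))  # -> 1200

Because the grip has a known, fixed size, a single width measurement in pixels is enough for this estimate; no stereo camera is needed.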
4. VISUAL TRACKING

The mobile unit is controlled by a personal computer, a control board and motor drivers. The computer calculates the number of drive pulses for each drive motor from the location of the target object and sends this information to the control board. The control board generates a DDA pulse distribution so that the two motors run synchronously [2]. The motor drivers observe the feedback pulses to perform rate control and position control. However, even if the two motors of the mobile unit are controlled perfectly from the feedback pulses, the mobile unit does not always move exactly as commanded, because of differences in the diameters of the drive wheels and in the floor surface condition. We therefore use a visual tracking method to solve this problem. The mobile robot receives image information from the CCD camera, and this information is renewed every 0.5 seconds. While the mobile unit approaches the target object, the computer calculates the position of the target object on the monitor every 0.5 seconds. If the position shifts from the center of the monitor, the mobile unit is commanded to change direction so as to bring the target back to the center. This method keeps the object continuously in the center of the monitor (Fig. 3). The method was verified by measuring the relation between the center of the target object on the monitor and the heading of the mobile robot. Fig. 4 shows one of the results for an approach from about 1800 mm. The center of the target object was at about the 360-pixel point (the full image width is 512 pixels) when the mobile robot started to approach the target, and was caught at the center of the monitor (the 256-pixel point) after the robot had run about 750 mm. The method keeps the center within 5 pixels over the run from 750 mm to 1800 mm, which is sufficient precision for this mobile robot.
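The paper does not give the control law that converts the pixel offset into a steering command, so the following is only a minimal proportional sketch under stated assumptions. The 512-pixel image width and 256-pixel center come from the text; K_TURN, the forward speed and the wheel-speed interface are hypothetical.

# Minimal sketch (not the authors' controller): a proportional steering
# correction that turns the differential-drive mobile unit so the target
# stays at the image center, recomputed once per 0.5 s image update.

IMAGE_CENTER_PX = 256   # center of the 512-pixel-wide image
K_TURN = 0.5            # assumed gain: wheel-speed change (mm/s) per pixel of error

def wheel_speeds(target_center_px: float, forward_speed: float = 100.0):
    """Return (left, right) wheel speeds in mm/s for one control cycle.

    A target to the right of the image center (pixel > 256) makes the left
    wheel faster and the right wheel slower, turning the robot toward it.
    """
    error_px = target_center_px - IMAGE_CENTER_PX
    correction = K_TURN * error_px
    return forward_speed + correction, forward_speed - correction

if __name__ == "__main__":
    # Example from the text: the target initially appears near the 360-pixel point.
    left, right = wheel_speeds(360.0)
    print(left, right)  # -> 152.0 48.0: the robot steers toward the target

Renewing this correction at every 0.5 s image update is what compensates for unequal wheel diameters and floor conditions, since the error is always measured from the camera rather than from the wheel encoders alone.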
5. CONTROLLER AND INTERFACE

Our mobile robot can fetch the target semi-automatically from an arbitrary position inside the room. However, if the operator wants a specific target, he must communicate with the robot through the computer. For example, if the target is to the right of the robot, he must click an arrow on the monitor with the mouse cursor to turn the robot until the target appears on the monitor, and he must also tell the robot which object is the target. Furthermore, this robot is used by operators who are not robotics professionals and who are handicapped. We have therefore developed an interface system between the robot and the operator based on the following concepts.

(1) Easy and simple. The operator can control the robot as easily as a radio-controlled car, and the input method must be simple.

(2) Safety. High speed is not necessary, and for safety the robot can interrupt its movement whenever the robot or the operator recognizes a dangerous motion.

The robot has a small camera attached to the wrist part of the manipulator, so the operator can see the direction of the end effector and the scene through the monitor. However, the surrounding information from the camera alone is not sufficient to control the robot; the operator also needs information about the status of the robot. For example, if the robot reaches the upper limit of its working space, it cannot move even if the operator commands it to move up.

Under a multi-CPU configuration, each task handles one robot motion, and the operator can choose any task according to his needs. With this system the operator simply selects the desired robot motion task from the several existing tasks (Fig. 7). These tasks are not always active, but they are always on standby to receive a command from the operator. This means that if the operator finds it necessary, he can interrupt the current motion and execute another task that escapes from a dangerous path.

It is also necessary to ensure safety when the robot itself detects a danger during a movement, that is, when not the operator but the robot recognizes the danger. In our system a pair of strain gauges is placed on the wrist part of the manipulator; the robot measures the load at the wrist with these strain gauges and recognizes a danger when an excessive force arises at this part. Using the time-slice method, the robot motion can be interrupted at once when the robot recognizes a danger. In this way the system watches for danger on both the operator's side and the robot's side.
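The following is a minimal sketch, under assumptions, of the time-sliced safety supervision just described: in every control slice the wrist load from the strain gauges and an operator stop request are checked before the current motion task is advanced. The load limit, read_wrist_load() and the MotionTask interface are hypothetical; the paper does not describe its software structure.

# Minimal sketch (assumptions, not the authors' implementation) of
# time-sliced safety supervision: the wrist load and the operator's stop
# request are checked once per slice, and the active motion task is
# interrupted if either side recognizes a danger.

from typing import Callable

LOAD_LIMIT_KGF = 3.0   # assumed limit near the 3 kgf payload capacity

class MotionTask:
    """Stand-in for one of the standby motion tasks selectable by the operator."""
    def step(self) -> bool:
        """Advance the motion by one time slice; return False when finished."""
        return False
    def stop(self) -> None:
        """Halt the motion immediately."""

def supervise(task: MotionTask,
              read_wrist_load: Callable[[], float],
              operator_stop: Callable[[], bool]) -> str:
    """Run a task one time slice at a time, interrupting it on danger."""
    while True:
        if operator_stop():                        # danger recognized by the operator
            task.stop()
            return "stopped by operator"
        if read_wrist_load() > LOAD_LIMIT_KGF:     # danger recognized by the robot
            task.stop()
            return "stopped by robot (excessive wrist load)"
        if not task.step():                        # task completed normally
            return "completed"

A concrete task would subclass MotionTask and implement step() and stop() for the actual motor hardware; the point of the sketch is only that both the operator and the robot can interrupt the motion between time slices.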
6. CONCLUSIONS

(1) We have developed a mobile robot system which consists of a manipulator, a mobile unit and a visual sensor, and which is controlled by image information.
(2) We have developed a visual tracking method to correct the direction of motion; it is simple, fast in response and accurate enough for this system.
(3) We have developed an interface system between our mobile robot and the operator. This interface can supervise each robot motion task and display the motion graphically using real status information.

We checked the functions of the system: the robot was simple and easy to operate, and we confirmed that the interface is useful for safety. In the future we will make the system more interactive and try to apply it to obstacle avoidance.

ACKNOWLEDGMENT

We wish to thank the Welfare Equipment Development Center of Japan for supporting this project.

REFERENCES

1. T. Komeda et al., "Mobile robot system to aid the daily life of bedridden persons in the private house", Proceedings of the 2nd European Conference on the Advancement of Rehabilitation Technology, 24.4 (1993).
2. T. Komeda et al., "Mobile robot system to aid the daily life of bedridden persons in the private house (2nd report)", Proceedings of the 3rd European Conference on the Advancement of Rehabilitation Technology, pp. 179-181 (1995).