Design of an Omnidirectional Mobile Robot Chassis Based on Mecanum Wheels (includes 9 CAD drawings, a defense presentation, and a foreign-literature translation)
8th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2017

The Control System Based on Extended BCI for a Robotic Wheelchair

Timofei I. Voznenko, Eugene V. Chepin, and Gleb A. Urvanov
National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow, Russia

Abstract

In most cases, the movement of a wheelchair is controlled by the disabled person using a joystick or by an accompanying person. Severely disabled patients need alternative control methods that do not rely on the wheelchair joystick, because using it is undesirable or impossible for them. In this article we present the implementation of a robotic wheelchair, based on a powered wheelchair, that is controlled not by the joystick but by an onboard computer that receives and processes data from the extended brain-computer interface (extended BCI). By this term we mean a control system for a robotic complex with simultaneous, independent, alternative control channels. In this version of the robotic wheelchair, the BCI works together with voice and gesture control channels.

Keywords: extended brain-computer interface, robotic wheelchair, control channel, robotics

1 Introduction

Technical projects on the development of robotic wheelchairs have been carried out since the last century. Modern mobile robotic complexes (MR), which include robotic wheelchairs, are complex heterogeneous hardware and software systems, and they should provide a level of comfort and reliability of control appropriate to their field of application. By the term MR we mean a robotic system with a powerful, versatile, and inexpensive onboard miniature computer with a modern CPU, which allows modern peripherals to be connected to the system without the restrictions typical of microcontrollers. Because of this, it is possible to use the widest possible set of software, more memory, and parallel programming techniques to achieve real-time mode (RTM).

The robotic wheelchair (hereinafter the "chair") is designed for patients with severe disorders of the musculoskeletal system and of other body functions (hands, speech, hearing, etc.). The patient is the operator of the chair, sitting in it and controlling it via the control system; for clarity we will call this person the "patient". In parallel, the chair can be supervised by a specialist-operator who monitors it remotely. This is possible thanks to a parallel chair-control channel provided by a Wi-Fi connection between the onboard computer and a remote Tracking and Control Station (TCS). The TCS is a remote computer through which the specialist-operator can control the chair, "intercepting" control from the patient-operator if necessary; a minimal sketch of such an interception mechanism is given below.
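The paper does not specify how this interception is implemented, so the following Python fragment is only a hypothetical sketch of one possible arrangement: the onboard computer arbitrates between the patient's channels and a line-oriented TCP link to the TCS, and the TAKEOVER/RELEASE messages, the port number, and the command names are all assumptions introduced here for illustration.

```python
# Hypothetical sketch (not from the paper): an onboard arbiter that lets a remote
# TCS "intercept" control from the patient over a Wi-Fi TCP link.
import socket
import threading

class ControlArbiter:
    def __init__(self):
        self._lock = threading.Lock()
        self._tcs_has_control = False

    def patient_command(self, cmd: str) -> bool:
        """Forward a patient command unless the TCS has taken over."""
        with self._lock:
            if self._tcs_has_control:
                return False              # suppressed: the TCS is in control
        self._send_to_drive(cmd)
        return True

    def tcs_message(self, msg: str) -> None:
        """Handle one line received from the TCS."""
        with self._lock:
            if msg == "TAKEOVER":
                self._tcs_has_control = True
                return
            if msg == "RELEASE":
                self._tcs_has_control = False
                return
        self._send_to_drive(msg)          # a direct driving command from the TCS

    def _send_to_drive(self, cmd: str) -> None:
        print("drive <-", cmd)            # placeholder for the chair's control unit

def serve_tcs(arbiter: ControlArbiter, host: str = "0.0.0.0", port: int = 5050) -> None:
    """Accept one TCS connection and feed its line-oriented messages to the arbiter."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as lines:
            for line in lines:
                arbiter.tcs_message(line.strip())

if __name__ == "__main__":
    arbiter = ControlArbiter()
    threading.Thread(target=serve_tcs, args=(arbiter,), daemon=True).start()
    arbiter.patient_command("FORWARD")    # passes as long as the TCS has not intercepted
```

The only design point the sketch tries to capture is that interception is a state of the arbiter rather than a separate data path: while the TCS holds control, locally generated commands are simply discarded.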
2 Related Works

Patients who use wheelchairs are in most cases quite satisfied with them, provided the chair is equipped with an electric drive and a joystick control system. An example of such a robotic wheelchair, called Wheelesley, is described by Yanco [1]. This chair gives the patient additional assistance during driving by providing the lower level of navigation.

One of the modern trends in the development of robotic wheelchairs is represented by projects such as the Chiba robotic wheelchair described by Morales et al. [2] and Szondy [3]. The main objective of these projects is a control system for the mechatronics of the chair that effectively overcomes obstacles in its way, such as stairs and curb stones. However, some patients are unable to control a chair even with such functions. For these patients, the way to improve their quality of life is to develop control systems for robotic wheelchairs that allow them to control their own wheelchair using the modest capabilities they have: weak hands, voice, and so on.

There is a great variety of ways to control an MR. The most common is direct control by the joystick, as described by Jawawi et al. [4]. However, an Arduino connected directly to the servomotor of the joystick opens up the possibility of controlling the chair by such methods as the brain-computer interface (BCI), voice, or gesture control. Each MR carries onboard a powerful general-purpose computer. During operation it interacts with an external computing unit, the Tracking and Control Station (TCS). The TCS allows the operator to control the MR's work remotely. It is also possible to send data to the TCS to gather statistics, on the basis of which the parameter values can be changed to improve the quality of the MR's control. The global problem of increasing the intelligence of such complex systems is extremely relevant nowadays, especially in the transition to controlling teams of robots, as proposed, for example, by Bereznyak et al. [5]. For some applications of MRs the control loop includes a human operator. In medicine, for example, it is extremely important to implement a control system that keeps the traditional paradigm of "control commands" but realises it on the basis of non-traditional methods of controlling complex systems. There are several works describing BCI-controlled wheelchairs, but the control accuracy is not high enough for reliable control: for example, 50% and above as described by Ng et al. [6], and 79.38% as presented by Achic et al. [7]. In order to increase the control accuracy of a BCI-controlled robotic wheelchair, we propose using other control channels such as voice commands or gestures. In this project we propose a new non-traditional control method that we call the "extended BCI", which involves the operation of three control channels in parallel: voice commands, gestures, and the BCI. The urgency and necessity of this control method are determined by its field of application: medical robotics.

3 Theory

3.1 The Non-Traditional Methods of MR Control

By the traditional method of MR control we mean control using commands passed from the operator to the MR's control system via some interface. By non-traditional control methods we mean, in this article, the following: the BCI, voice control, and control with gestures. The BCI is an interface that provides direct transmission of information from the brain to a computing device, as described by Trofimov and Skrugin [8]. Recently, different companies have developed portable BCIs such as NeuroSky, MindFlex, and Emotiv, as described by Stamps and Hamam [9]. Some of these neurocomputing interfaces allow one not only to obtain EEG (electroencephalogram) data but also to obtain data about the emotional state of the operator, as used, for example, by Chepin et al. [10] and Voznenko et al. [11]. Voice control is a way of interaction between a person and a computer by voice. This method of control is based on processing the audio signal coming from a microphone; the speech recognition system uses phonemes and a grammar, following Lamere et al. [12]. The gesture recognition system was developed by Chistjakov et al. [13] in the "Robotics" laboratory of the NRNU MEPhI. The algorithm detects when the hand enters a graphics region (a specified area) corresponding to a particular gesture. This system is installed in the wheelchair to control its movement by the hand gestures and finger movements of the patient.
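The gesture channel is described above only at the level of "the hand enters a region mapped to a gesture". The sketch below illustrates that idea and nothing more: the region layout, the command names, and the hand-position input format are assumptions, not the actual method of Chistjakov et al. [13].

```python
# Minimal sketch of region-based gesture detection: a gesture is recognised when
# the tracked hand centre falls inside a predefined screen region.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Region:
    name: str            # command associated with the region
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, point: Tuple[float, float]) -> bool:
        x, y = point
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Example layout in normalised image coordinates (assumed, not from the paper).
REGIONS = [
    Region("FORWARD", 0.35, 0.00, 0.65, 0.25),
    Region("LEFT",    0.00, 0.35, 0.25, 0.65),
    Region("RIGHT",   0.75, 0.35, 1.00, 0.65),
    Region("STOP",    0.35, 0.75, 0.65, 1.00),
]

def classify_hand_position(point: Optional[Tuple[float, float]]) -> Optional[str]:
    """Return the command of the region the hand is in, or None."""
    if point is None:                    # hand not detected in this frame
        return None
    for region in REGIONS:
        if region.contains(point):
            return region.name
    return None

if __name__ == "__main__":
    print(classify_hand_position((0.5, 0.1)))   # -> FORWARD
    print(classify_hand_position((0.5, 0.5)))   # -> None (neutral zone)
```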
3.2 Extended BCI

The main scientific and engineering idea of the described project is the development of the control system. The general ideology of the project is based on the concept of the "extended BCI" proposed by Trofimov and Skrugin [8], Dyumin et al. [14], Chepin et al. [15], and Urvanov et al. [16], by which we mean the presence in the control system of the following robot-control channels: the BCI, voice control, and control with gestures. The term "extended BCI" was introduced to emphasise that, in addition to the control channel based on the BCI, there are other, less common channels working in parallel. The BCI is the main control method, but in practice there are situations when a particular patient controls the wheelchair more effectively using voice commands and/or gestures. With this approach, it is necessary to solve the problem of choosing the most appropriate control channel. In addition, the architecture should make it possible to take into account the disease peculiarities of a particular patient and to develop a decision-making mechanism based on the analysis of information from all control channels.

3.3 The Task of Decision-Making

The concept of the "extended interface" includes several different channels through which the operator controls the MR. The general scheme of the decision-making system based on data from the extended BCI is presented in Figure 1. When more than one data channel is used to control the MR, the decision-making problem has to be solved. In general, the problem looks as follows: there are several control channels, and it is necessary to decide which command should be executed at a given time. The decision-making system should have the following features (a minimal illustrative sketch is given below):

1. It should be deterministic. If we know the specifics of how the components of the extended-BCI system work, it is possible to build a deterministic decision-making system that takes these features into account.
2. It should support varying degrees of credibility for the control channels. Since the channels have different degrees of credibility, it is logical that the information coming into the decision-making system should carry different weights.
3. It should support asynchronous data input. Although part of the data is received synchronously and over a long period of time, decisions based on asynchronously incoming information of greater relative value should be taken in a timely manner.
4. It should work with continuous processes. For example, when working with thought-images it is important to record not only the current state but also the dynamic characteristics of the process and the previous state. Thus, the developed decision-making method should be similar to an automaton with memory.

A prototype of the decision-making system satisfying all these requirements was implemented using priority accounting based on channel accuracy.

Figure 1: General problem statement (control channels 1 to N feed the decision-making system, which produces the control signal).
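The paper states only that the prototype uses "priority accounting based on channel accuracy", without giving the algorithm. The following Python sketch is therefore an assumption-laden illustration of one scheme that satisfies the four requirements above: channels deliver (command, confidence) events asynchronously, each event is weighted by a per-channel credibility value, the previous decision is kept as memory, and ties are broken deterministically. The credibility values and the command vocabulary are invented for the example.

```python
# Illustrative decision-maker: deterministic, credibility-weighted, asynchronous,
# and with memory of the previous decision. Not the authors' implementation.
import queue
import time
from dataclasses import dataclass

# Per-channel credibility, e.g. measured recognition accuracy (assumed values).
CHANNEL_CREDIBILITY = {"bci": 0.6, "voice": 0.9, "gesture": 0.8}

@dataclass
class ChannelEvent:
    channel: str        # "bci", "voice" or "gesture"
    command: str        # e.g. "FORWARD", "LEFT", "STOP"
    confidence: float   # the channel's own confidence in [0, 1]
    timestamp: float

class DecisionMaker:
    def __init__(self, window: float = 0.5):
        self.events: "queue.Queue[ChannelEvent]" = queue.Queue()  # asynchronous input
        self.window = window           # how long an event stays relevant, in seconds
        self.last_command = "STOP"     # memory of the previous state

    def submit(self, event: ChannelEvent) -> None:
        """Called asynchronously by each control channel."""
        self.events.put(event)

    def decide(self, now: float) -> str:
        """Deterministically pick the command with the highest weighted score."""
        scores = {}
        while not self.events.empty():
            ev = self.events.get_nowait()
            if now - ev.timestamp > self.window:
                continue               # stale event, ignore
            weight = CHANNEL_CREDIBILITY[ev.channel] * ev.confidence
            scores[ev.command] = max(scores.get(ev.command, 0.0), weight)
        if not scores:
            return self.last_command   # nothing new: keep the previous decision
        # Iterate over sorted commands so that ties are resolved deterministically.
        self.last_command = max(sorted(scores), key=lambda cmd: scores[cmd])
        return self.last_command

if __name__ == "__main__":
    dm = DecisionMaker()
    t = time.time()
    dm.submit(ChannelEvent("bci", "FORWARD", 0.7, t))
    dm.submit(ChannelEvent("voice", "STOP", 0.8, t))
    print(dm.decide(t))    # -> STOP (0.9 * 0.8 outweighs 0.6 * 0.7)
```

Keeping the previous command when no fresh events arrive is what gives the "automaton with memory" behaviour required by item 4; a TCS override could be modelled in the same scheme simply as an extra channel with credibility 1.0.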
4 Implementation

The current "chair" hardware-software complex consists of:

1. The wheelchair with the "Titan" LY-103-120 electric drive, designed for independent movement indoors and on hard-surfaced roads by disabled people with diseases of the musculoskeletal system and injuries of the lower extremities (Figure 2a). Instead of joystick control, the chair has a control unit that interfaces with a port of the onboard computer.
2. The neural interface based on the Emotiv Epoc (Figure 2b) and its software module. The Emotiv Epoc BCI makes it possible to obtain not only the fact that the user has thought of a trained thought-image but also a quantitative assessment of that fact (its power).
3. The basic equipment of the operator's video interface with the robot, used for the gesture recognition system (a stereo camera for capturing the movements and gestures of the operator and a set of individual cameras). Arm tracking is performed with an Intel RealSense camera, shown in Figure 2c.
4. The basic equipment of the operator's audio interface with the chair. Voice recognition is performed using the English phonetics of the Sphinx-4 library developed by Lamere et al. [12]. The dictionary consists of words matched to the available phonetics (a sketch of how the outputs of these channels can be fed to the decision-making system is given below).

Figure 2: a) The "chair", b) the Emotiv Epoc BCI, c) the Intel RealSense camera.
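How the raw outputs of these components are converted into commands is not detailed in the paper, so the fragment below is a hypothetical piece of glue code: it normalises a detected thought-image with its power, a recognised dictionary word, and a detected gesture region into uniform (channel, command, confidence) triples of the kind consumed by the decision-making sketch in Section 3.3. The thresholds, mappings, and input formats are assumptions; the real Emotiv, RealSense, and Sphinx-4 interfaces are not reproduced here.

```python
# Hypothetical normalisation of the three channels' raw outputs into uniform events.
from typing import Optional, Tuple

Event = Tuple[str, str, float]   # (channel, command, confidence)

# Assumed mapping from trained Emotiv thought-images to chair commands.
THOUGHT_IMAGE_COMMANDS = {"push": "FORWARD", "left": "LEFT", "right": "RIGHT"}

# Assumed voice dictionary: words matched to the available English phonetics.
VOICE_COMMANDS = {"go": "FORWARD", "left": "LEFT", "right": "RIGHT", "stop": "STOP"}

def from_bci(thought_image: str, power: float, threshold: float = 0.3) -> Optional[Event]:
    """Use the quantitative 'power' of a detected thought-image as the confidence."""
    command = THOUGHT_IMAGE_COMMANDS.get(thought_image)
    if command is None or power < threshold:
        return None
    return ("bci", command, min(power, 1.0))

def from_voice(hypothesis: str, score: float) -> Optional[Event]:
    """Map a recognised dictionary word to a command."""
    command = VOICE_COMMANDS.get(hypothesis.lower())
    return None if command is None else ("voice", command, score)

def from_gesture(region_name: Optional[str]) -> Optional[Event]:
    """A detected gesture region maps directly to its command."""
    return None if region_name is None else ("gesture", region_name, 1.0)

if __name__ == "__main__":
    print(from_bci("push", power=0.55))    # ('bci', 'FORWARD', 0.55)
    print(from_voice("stop", score=0.9))   # ('voice', 'STOP', 0.9)
    print(from_gesture("LEFT"))            # ('gesture', 'LEFT', 1.0)
```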
5 Results and Conclusion

The "chair" project has received quite wide coverage in the Russian and foreign mass media: several reports on Russian TV channels, in particular on NTV [17], as well as in the press of Spain and Spanish-speaking countries, for example in El Diario de Hoy [18] and El Universal [19], and in China (Science and Technology Daily [20]).

Acknowledgments

We would like to thank the RFBR for supporting this project through grant No. 14-07-00843, "The intelligent robotic wheelchair".

References

[1] H. A. Yanco. Wheelesley: A robotic wheelchair system: Indoor navigation and user interface. In Assistive Technology and Artificial Intelligence, pages 256–268. Springer, 1998.
[2] R. Morales, A. Gonzalez, V. Feliu, and P. Pintado. Environment adaptation of a new staircase-climbing wheelchair. Autonomous Robots, 23(4):275–292, 2007.
[3] D. Szondy. Chiba robotic wheelchair turns wheels into legs. Gizmag, October 17, 2012, www.gizmag.com, 2012.
[4] D. N. Jawawi, K. Kamal, M. A. S. Talab, M. Z. M. Zaki, N. M. Hamdan, R. Mohamad, R. Mamat, and S. Sabil. A Robotic Wheelchair Component-Based Software Development. INTECH Open Access Publisher, 2011.
[5] I. S. Bereznyak, E. V. Chepin, and A. A. Dyumin. The actions language as a programming framework for cloud robotics applications. In Cloud System and Big Data Engineering (Confluence), 2016 6th International Conference, pages 119–124. IEEE, 2016.
[6] D. W. K. Ng, Y. W. Soh, and S. Y. Goh. Development of an autonomous BCI wheelchair. In Computational Intelligence in Brain Computer Interfaces (CIBCI), 2014 IEEE Symposium on, pages 1–4. IEEE, 2014.
[7] F. Achic, J. Montero, C. Penaloza, and F. Cuellar. Hybrid BCI system to operate an electric wheelchair and a robotic arm for navigation and manipulation tasks. In Advanced Robotics and its Social Impacts (ARSO), 2016 IEEE Workshop on, pages 249–254. IEEE, 2016.
[8] A. G. Trofimov and V. I. Skrugin. Brain-computer interfaces: review. Information Technologies, (2):2–11, 2011.
[9] K. Stamps and Y. Hamam. Towards inexpensive BCI control for wheelchair navigation in the enabled environment: a hardware survey. In International Conference on Brain Informatics, pages 336–345. Springer, 2010.
[10] E. V. Chepin, A. A. Dyumin, G. A. Urvanov, and T. I. Voznenko. The improved method for robotic devices control with operator's emotions detection. In NW Russia Young Researchers in Electrical and Electronic Engineering Conference (EIConRusNW), 2016 IEEE, pages 173–176. IEEE, 2016.
[11] T. I. Voznenko, G. A. Urvanov, A. A. Dyumin, S. V. Andrianova, and E. V. Chepin. The research of emotional state influence on quality of a brain-computer interface usage. Procedia Computer Science, 88:391–396, 2016.
[12] P. Lamere, P. Kwok, E. Gouvea, B. Raj, R. Singh, W. Walker, M. Warmuth, and P. Wolf. The CMU Sphinx-4 speech recognition system. In IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2003), Hong Kong, volume 1, pages 2–5, 2003.
[13] I. S. Chistjakov, G. A. Urvanov, D. V. Bajkov, and E. V. Chepin. Sistema upravlenija robotizirovannym kreslom pri pomoshhi zhestov (The system of control of a robotic wheelchair using gestures). Vestnik natsional'nogo issledovatel'skogo yadernogo universiteta "MIFI", 5(4):381–388, 2016.
[14] A. A. Dyumin, P. S. Sorokoumov, E. V. Chepin, and G. A. Urvanov. Architecture and prototype of human-machine interface with mobile robotic device. Vestnik natsional'nogo issledovatel'skogo yadernogo universiteta "MIFI", 2(3):376–380, 2013.
[15] E. V. Chepin, A. A. Dyumin, P. S. Sorokoumov, and G. A. Urvanov. A prototype of the brain-computer interface for mobile robot. In The 15th International Workshop on Computer Science and Information Technologies (CSIT 2013), volume 2, pages 202–206. CSIT, 2013.
[16] G. A. Urvanov, V. V. Dan'shin, A. A. Dyumin, and E. V. Chepin. The system of human interaction as an agent of mobile robotic system. Software Systems and Computational Methods, (1):45–51, 2015.
[17] V Moskve ispytali invalidnoe kreslo, upravljaemoe siloj mysli (In Moscow, a wheelchair controlled by the power of thought was tested). http://www.ntv.ru/novosti/1604180/, 2016.
[18] La silla de ruedas del futuro será guiada por el pensamiento y las emociones (The wheelchair of the future will be guided by thought and emotions). http://www.elsalvador.com/articulo/internacional/silla-ruedas-del-futuro-sera-guiada-por-pensamiento-las-emociones-109909, 2016.
[19] La silla de ruedas del futuro será guiada por el pensamiento (The wheelchair of the future will be guided by thought). http://www.eluniversal.com/noticias/estilo-vida/silla-ruedas-del-futuro-sera-guiada-por-pensamiento_305388, 2016.
[20] Invalidnoe kreslo, upravljaemoe siloj mysli (Wheelchair controlled by the power of thought). http://inosmi.ru/science/20160617/236896139.html, 2016.
8th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2017

The Control System Based on Extended BCI for a Robotic Wheelchair

Timofei I. Voznenko, Eugene V. Chepin, and Gleb A. Urvanov
National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow, Russia

Abstract

In most cases, the movement of a wheelchair is controlled by the disabled person using a joystick or by an accompanying person. Severely disabled patients need alternative control methods that do not use the wheelchair joystick, because for these patients the joystick is undesirable or impossible to use. In this article we present the implementation of a robotic wheelchair, based on a powered wheelchair, that is controlled not by the joystick but by an onboard computer that receives and processes data from the extended brain-computer interface (extended BCI). By this term we mean a control system for a robotic complex with simultaneous, independent, alternative control channels. In this version of the robotic wheelchair, the BCI works together with the voice and gesture control channels.

Keywords: extended brain-computer interface, robotic wheelchair, control channel, robotics

1 Introduction

Technical projects on the development of robotic wheelchairs have been carried out since the last century. Modern mobile robotic complexes (MR), which include robotic wheelchairs, are complex heterogeneous hardware and software systems, and they should provide a certain level of comfort and reliability of control that meets the needs of their field of application. By the term MR we mean a robotic system with a powerful, versatile, and inexpensive onboard miniature computer with a modern CPU, which allows modern peripherals to be connected to the system without the restrictions typical of microcontrollers. Because of this, the widest possible set of software, more memory, and parallel programming techniques can be used to achieve real-time mode (RTM). The robotic wheelchair (hereinafter the "chair") is designed for patients with severe diseases of the musculoskeletal system and disorders of other body functions (hands, speech, hearing, etc.). The patient is the operator sitting in the chair and controlling it through the control system; for clarity we call this person the "patient". At the same time, the chair can be controlled by a specialist-operator who monitors it remotely. This is possible because of a parallel chair-control channel provided by the Wi-Fi connection between the onboard computer and a remote Tracking and Control Station (TCS). The TCS is a remote computer through which the specialist-operator can control the chair, "intercepting" control from the patient if necessary.

2 Related Works

In most cases patients are quite satisfied with the wheelchairs they use, provided the chair is equipped with an electric drive and a joystick control system. Yanco [1] describes an example of such a robotic wheelchair, called Wheelesley. This chair gives the patient additional assistance during driving by providing the lower level of navigation. One of the modern trends in the development of robotic wheelchairs is represented by projects such as the Chiba robotic wheelchair described by Morales et al. [2] and Szondy [3]. The main objective of these projects is a control system for the mechatronics of the chair that effectively overcomes obstacles such as stairs and curb stones. However, some patients cannot control a chair even with such functions. For these patients, the way to improve their quality of life is to develop control systems for robotic wheelchairs that let them control their own wheelchair using their limited capabilities: weak hands, voice, and so on. There are many ways to control an MR; the most common is direct control by the joystick, as described by Jawawi et al. [4]. However, an Arduino connected directly to the servomotor of the joystick opens up the possibility of controlling the chair by methods such as the brain-computer interface (BCI), voice, or gesture control. Each MR carries onboard a powerful general-purpose computer. During operation it interacts with an external computing unit, the Tracking and Control Station (TCS). The TCS allows the operator to control the MR's work remotely. Data can also be sent to the TCS to gather statistics, on the basis of which parameter values can be changed to improve the quality of the MR's control. The global problem of increasing the intelligence of such complex systems is extremely relevant nowadays, especially in the transition to controlling teams of robots, as proposed, for example, by Bereznyak et al. [5]. For some applications the control loop of an MR includes a human operator. In medicine, for example, it is extremely important to implement a control system that keeps the traditional paradigm of "control commands" but realises it on the basis of non-traditional methods of controlling complex systems. Several works describe BCI-controlled wheelchairs, but the control accuracy is not high enough for reliable control: for example, 50% and above as described by Ng et al. [6], and 79.38% as presented by Achic et al. [7]. In order to increase the control accuracy of a BCI-controlled robotic wheelchair, we propose using other control channels such as voice commands or gestures. In this project we propose a new non-traditional control method that we call the "extended BCI", which involves three control channels operating in parallel: voice commands, gestures, and the BCI. The urgency and necessity of this control method are determined by its field of application: medical robotics.

3 Theory

3.1 The Non-Traditional Methods of MR Control

By the traditional control method we mean passing commands from the operator to the MR's control system through some interface. By non-traditional control methods we mean, in this article, the following: the BCI, voice control, and gesture control. The BCI is an interface that provides direct transmission of information from the brain to a computing device [8]. Recently, different companies have developed portable BCIs such as NeuroSky, MindFlex, and Emotiv [9]. Some of these neurocomputing interfaces can obtain not only EEG (electroencephalogram) data but also data about the emotional state of the operator, as used, for example, by Chepin et al. [10] and Voznenko et al. [11]. Voice control is a way for a person and a computer to interact by voice. This control method is based on processing the audio signal coming from a microphone; the speech recognition system uses phonemes and a grammar [12]. The gesture recognition system was developed by Chistjakov et al. [13] in the "Robotics" laboratory of the NRNU MEPhI. The algorithm detects when the hand enters the graphics region (a specified area) corresponding to a particular gesture. This system is installed on the wheelchair to control its movement by the hand gestures and finger movements of the patient.

3.2 Extended BCI

The main scientific and engineering idea of the described project is the development of the control system. The general ideology of the project is based on the concept of the "extended BCI" proposed by Trofimov and Skrugin [8], Dyumin et al. [14], Chepin et al. [15], and
Urvanov et al. [16], by which we mean the presence in the control system of the following robot-control channels: the BCI, voice control, and gesture control. The term "extended BCI" was introduced to emphasise that, in addition to the control channel based on the BCI, there are other, less common channels working in parallel. The BCI is the main control method, but in practice there are situations when a particular patient controls the wheelchair more effectively using voice commands and/or gestures. With this approach it is necessary to solve the problem of choosing the most appropriate control channel. In addition, the architecture should make it possible to take into account the disease peculiarities of the particular patient and to develop a decision-making mechanism based on the analysis of the information from all control channels.

3.3 The Task of Decision-Making

The concept of the "extended interface" includes several different channels through which the operator controls the MR. The general scheme of the decision-making system based on data from the extended BCI interface is shown in Figure 1. When more than one data channel is used to control the MR, the decision-making problem must be solved. In general the problem looks as follows: there are several control channels, and it is necessary to decide which command should be executed at a given time. The decision-making system should have the following features:

1. It should be deterministic. If we know the specifics of how the components of the extended-BCI system work, a deterministic decision-making system can be built that takes these features into account.
2. It should support different degrees of credibility for the control channels. Since the channels have different degrees of credibility, it is logical that the information entering the decision-making system should carry different weights.
3. It should support asynchronous data input. Although part of the data is received synchronously and over a long period of time, decisions based on asynchronously incoming information of greater relative value should be taken in a timely manner.
4. It should work with continuous processes. For example, when working with thought-images it is important to record not only the state but also the dynamic characteristics of the process and the previous state. Thus, the developed decision-making method should be similar to an automaton with memory.

A prototype of the decision-making system satisfying all these requirements was implemented using priority accounting based on channel accuracy.

Figure 1: General problem statement (control channels 1 to N feed the decision-making system, which produces the control signal).

4 Implementation

The current "chair" hardware-software complex consists of:

1. The wheelchair with the "Titan" LY-103-120 electric drive, designed for the independent movement of disabled people with diseases of the musculoskeletal system and injuries of the lower extremities, indoors and on hard-surfaced roads (Figure 2a). Instead of joystick control, the chair has a control unit that interfaces with a port of the onboard computer.
2. The neural interface based on the Emotiv Epoc (Figure 2b) and its software module. The Emotiv Epoc BCI makes it possible to obtain not only the fact that the user has thought of a thought-image but also a quantitative assessment of that fact (its power).
3. The basic equipment of the operator's video interface, used for the development of the gesture recognition system (a stereo camera for capturing the movements and gestures of the operator and a set of individual cameras). Arm tracking is performed with an Intel RealSense camera, as shown in Figure 2c.
4. The basic equipment of the operator's audio interface with the chair.