Controlling Devices Using Biological Signals

Regular Paper

Alexandre Santos Brandão1,*, Leonardo Bonato Felix2, Daniel Cruz Cavalieri3, Antonio Mauricio Ferreira Leite Miranda de Sá4, Teodiano Freire Bastos-Filho5 and Mário Sarcinelli-Filho6

1,2 Departamento de Engenharia Elétrica, Universidade Federal de Viçosa, Minas Gerais, Brazil
3,5,6 Programa de Pós-graduação em Engenharia Elétrica, Universidade Federal do Espírito Santo, Brazil
4 Programa de Engenharia Biomédica, Instituto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro, Brazil
* Corresponding author. E-mail: [email protected]

Received 10 March 2011; Accepted 15 March 2011

Abstract: Knowing that driving a conventional wheelchair can be difficult or even impossible for people with impairments, this work presents an overview of some strategies developed to aid these people. Within this context, a myoelectric eye-blink system and an iris-tracking system to guide a robotic wheelchair are briefly described. Furthermore, some comments on EEG-based systems are also presented. Finally, a robotic wheelchair navigation system capable of reaching a desired pose in a planar environment while avoiding static and dynamic obstacles is presented.

Keywords: People with impairments, Robotic wheelchair, Semi-autonomous navigation

1. Introduction

There are many people with either lower or upper extremity impairments or severe motor dysfunctions, for whom driving a conventional wheelchair is quite difficult or even impossible. An alternative to help these people overcome such difficulties is to develop a robotic wheelchair system. Such a system commonly integrates a sensing subsystem, a navigation and control module and a user-machine interface to guide the wheelchair in an

autonomous or semi-autonomous way. In the autonomous mode, the robotic wheelchair goes to the desired place without user involvement in the vehicle control, whereas in the semi-autonomous mode the user shares the high-level control with the robotic wheelchair system and should therefore retain some motor skills. For safe navigation, both modes need an obstacle avoidance strategy for unknown obstacles, which can be static or dynamic. Some human-machine interfaces (HMI) used to guide a robotic wheelchair in an autonomous or semi-autonomous way, as well as an aid navigation system, are presented in the sequel.

The most important evaluation factors for robotic wheelchair systems are safety and ease of operation. Some improvements can be obtained by providing autonomy to the system. There are many works dealing with obstacle avoidance for robotic wheelchairs based on infrared, ultrasonic, vision and other sensors (1; 2; 24; 26). In all of these researches, the ultimate goal is to develop a wheelchair that automatically takes the user to a desired pose. However, in addition to going to certain designated places, users sometimes wish to move around the environment as freely as possible, and the system should be capable of supporting this. In this case, a good human interface becomes the key factor. Instead of a joystick, as on conventional powered wheelchairs, voice can be used to issue commands (25; 29; 46). In the Wheelesly robotic wheelchair system (52), the user controls the chair by choosing a command icon, focusing his/her gaze on the screen; a set of five electrodes placed on the user's head both measures the eye movements and detects the command choice. The work developed in (27) uses the face direction to transmit the intentions of the user to the system; besides, autonomous navigation capabilities are implemented based on ultrasonic sensors and an environment-observing camera placed strategically on the robotic wheelchair.

In our proposed system, as in the work developed in (3), myoelectric eye blinks, iris tracking and a brain-computer interface are used to choose symbols on a Personal Digital Assistant (PDA) and to start an action. The PDA presented in Figures 1(a) and 1(b) is installed onboard the wheelchair in such a way that it is always visible to the impaired individual seated on it. It provides a graphic interface containing the possible options for the operator, including the pre-programmed movements of the wheelchair, a virtual keyboard for text editing, and symbols to express some basic needs or feelings of the impaired individual, such as sleep, drink, eat, feeling cold or hot, etc. For each of these cases, a specific option is selected using a procedure that scans the rows and columns in which the icons are distributed on the screen of the PDA (once the desired screen is presented); a sketch of such a scanning loop is shown below. A voice player confirms the chosen option, providing feedback to the user and allowing communication with the people around as well.
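To make the row-column scanning concrete, the sketch below cycles through the rows of an icon grid and then through the columns of the chosen row. The dwell time, the grid contents and the `selected()` trigger callback (e.g., driven by a detected blink) are illustrative assumptions, not the paper's implementation.

```python
import itertools
import time

def scan_icons(grid, selected, dwell=1.0):
    """Row-column scanning sketch: highlight each row in turn; once the user
    triggers a selection, scan the columns of that row and return the icon.
    `selected()` is a placeholder callback returning True on a trigger
    (e.g., a recognized eye blink)."""
    for r in itertools.cycle(range(len(grid))):
        print("row:", grid[r])               # stand-in for highlighting the row
        time.sleep(dwell)
        if selected():
            for c in itertools.cycle(range(len(grid[r]))):
                print("icon:", grid[r][c])   # stand-in for highlighting the icon
                time.sleep(dwell)
                if selected():
                    return grid[r][c]

# Example call (a real trigger would come from the blink recognizer):
# scan_icons([["forward", "back"], ["left", "right"]], selected=blink_detected)
```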

2. Myoelectric eye blink signal system

The idea of this system is to recognize the eye blinks contained in the myoelectric signal (MES) acquired at specific places on the face. A set of electrodes is located as shown in Figure 1(a), in order to record signals in a differential way. In other words, the differential signal is obtained by using electrodes above the right and left eyes and another one on the ear (which is used as reference, due to the absence of muscle interference there). Figure 2 shows typical eye blinking signals.

After recording the myoelectric signal, Algorithm 1 is applied to find the peaks and positions of the eye blinks in the preprocessed MES samples; results are shown in Figure 3. In order to avoid detecting natural eye blinks, a threshold is established to disregard signals lower than 35% of the maximum peak: below this threshold a blink is regarded as noise and is thus disregarded by the system.

The next step of the system is to determine the time interval of the eye blink, based on the peak detection. An angular variation approach using tangent computation is applied to y3(n) to determine the behavior near the peak and then to indicate the start and end points of an eye blink. Finally, an Artificial Neural Network (ANN) classifier is trained to recognize right, left and natural eye blinks. In order to compare the performance of back-propagation algorithms on this problem, Bayesian Regularization (BR), Resilient Backpropagation (RP) and Scaled Conjugate Gradient (SCG) are used. A set of 630 preprocessed samples of right and left eye blinks and random noise (used to emulate natural eye blinks) is used for the training and validation process. The ANN structure adopted is 40-4-3 for the input, hidden and output layers, respectively. After training and validation, an accuracy of 99.6% is obtained with the RP algorithm.
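As a rough sketch of the 40-4-3 classifier, the snippet below trains a single-hidden-layer network with scikit-learn on placeholder data. The BR, RP and SCG trainers compared in the paper are not available in this library, so a generic gradient solver stands in, and the random features and labels are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder for the 630 preprocessed feature vectors of length 40
# (right blinks, left blinks and noise emulating natural blinks).
rng = np.random.default_rng(0)
X = rng.normal(size=(630, 40))
y = rng.choice(["right", "left", "natural"], size=630)

# 40-4-3 topology: 40 inputs, one hidden layer with 4 neurons, 3 output classes.
clf = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="adam", max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))   # predicted blink classes for the first samples
```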

(a) Based on the eye-blink myoelectric signal.

(b) Based on eye tracking.

Figure 1. The structure of the proposed HMI.


A Pioneer 2-DX mobile robot simulator is used to validate the proposal. It is worth mentioning that such a vehicle has the same kinematic model as a wheelchair. In the simulation environment, developed in MATLAB, the user can choose the presence or absence of obstacles in the navigation scene, as well as determine the position of such obstacles. It is also possible to vary the linear and angular velocities of the mobile robot, by choosing a velocity that best matches the processing effort of the computer used, providing easy visualization of the real-time simulation. Figure 4 shows the desired and traveled paths executed by the mobile robot according to the commands generated by the Automatic Eye-blink Recognition (AER) system. Figure 1(a) shows the structure of the interface based on eye blinks used to guide a robotic wheelchair in a semi-autonomous way.

Figure 2. Typical eye blinking signals (right- and left-eye MES amplitude, in mV, versus time, in s).

Figure 3. Results using the algorithm based on the Pan-Tompkins method, where y0 is the normalized signal, y1 is the signal after application of the Pan-Tompkins derivative, y2 represents the square of the signal and y3 is the signal after the application of the moving-window integration filter.

Algorithm 1. Define the eye blink peak position in MES samples
1: Receive the eye-blink sample ⇒ y(n)
2: Find the greatest absolute peak of the input signal ⇒ max |y(n)|
3: Normalize the input signal ⇒ y0(n) = y(n) / max |y(n)|
4: Apply the Pan-Tompkins derivative ⇒ y1(n) = (1/8)[2y0(n) − y0(n−1) − y0(n−3) + 2y0(n−4)]
5: Compute the energy of the signal ⇒ y2(n) = y1²(n)
6: Filter y2(n) using a moving-window integration filter ⇒ y3(n) = (1/N)[y2(n−(N−1)) + y2(n−(N−2)) + · · · + y2(n)]
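A minimal NumPy sketch of Algorithm 1 combined with the 35% rejection threshold described earlier. The derivative coefficients are used exactly as printed above, while the window length N and the simple local-maximum peak picking are illustrative assumptions.

```python
import numpy as np

def detect_eye_blink_peaks(y, N=15, threshold_ratio=0.35):
    """Sketch of Algorithm 1: locate eye-blink peaks in an MES sample.

    y: 1-D array of myoelectric samples.
    N: moving-integration window length (illustrative value).
    threshold_ratio: peaks below 35% of the maximum are treated as
    natural blinks/noise and discarded, as described in the text.
    """
    y = np.asarray(y, dtype=float)
    y0 = y / np.max(np.abs(y))                      # step 3: normalization

    # Step 4: derivative filter with the coefficients as printed above
    y1 = np.zeros_like(y0)
    for n in range(4, len(y0)):
        y1[n] = (2*y0[n] - y0[n-1] - y0[n-3] + 2*y0[n-4]) / 8.0

    y2 = y1 ** 2                                    # step 5: energy (squaring)

    # Step 6: moving-window integration filter
    y3 = np.convolve(y2, np.ones(N) / N, mode="same")

    # Peak picking with the 35% threshold
    thr = threshold_ratio * y3.max()
    peaks = [n for n in range(1, len(y3) - 1)
             if y3[n] > thr and y3[n] >= y3[n-1] and y3[n] > y3[n+1]]
    return y3, peaks
```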


Desired    Recognized
front      front
right      right
front      front
left       left
front      front
left       left
front      front
front      front
stop       stop

Figure 4. Simulation result. Left: the table of desired and recognized blink commands used to follow a specified path with the proposed AER system. Right: the path traveled by the Pioneer 2-DX mobile robot during the simulation.

3. Iris-tracking system

Despite the effectiveness of the system described in Section 2, there are some problems associated with eye blinks, such as muscle spasms and the inability to perform an eye blink (due to severe motor disabilities, for example). In this context, an HMI based on iris tracking, shown in Figure 1(b), is proposed.

Figure 5. Webcam strategically adapted to a goggle.

A webcam (strategically adapted to a goggle, as seen in Figure 5) delivers digital images of the eye, whose iris should be tracked. First, a threshold is applied to discern the iris from the other parts of the face (Figure 6). However, in order to avoid detecting eyebrows and eyelashes, morphologic and edge filters are also applied to enhance the regions of interest of the image. In this case, the Random Circular Hough Transform (RCHT) (34) and the Canny filter are used. Thus, each circle in the binarized image is parameterized by two values (xc, yc) representing the coordinates of its center. Assuming initially that three pixels of coordinates (x, y) lie on the edge of the image, at a certain empirically determined distance from each other, it is possible to compute the RCHT. Figure 7 illustrates an example of detecting the center of the circle.

Figure 6. (a) Original image. (b) Binary image.

Thus, considering the distance between two points, one has

(x − x1)² + (y − y1)² = (x − x2)² + (y − y2)²,   (1)

or

x(2x2 − 2x1) + y(2y2 − 2y1) + x1² + y1² − y2² − x2² = 0.   (2)

Simplifying (2), one gets

A1 = 2x2 − 2x1,  B1 = 2y2 − 2y1,  C1 = x1² + y1² − y2² − x2².   (3)

Observing Figure 7, one realizes that this line is the perpendicular bisector of the chord, passing through its midpoint P1; evaluating it at the center results in

A1 xc + B1 yc + C1 = 0.   (4)

Applying the same technique to the midpoint P2, one gets

A2 xc + B2 yc + C2 = 0,   (5)

where

A2 = 2x3 − 2x2,  B2 = 2y3 − 2y2,  C2 = x2² + y2² − y3² − x3².   (6)

From (4) and (5), the center is defined by

xc = (−C1 − C2 − yc(B1 + B2)) / (A1 + A2),  yc = (A1C2 − A2C1) / (B1(A1 + A2) − A1(B1 + B2)),   (7)

and the radius is given by

r = √((x − xc)² + (y − yc)²).   (8)

After determining the center and radius by (7) and (8), the region of interest (ROI) of the iris is obtained from the average of the centers found by the RCHT, and the mean radius is determined from the average of the radii. Figure 8 illustrates the application of the technique described.
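The core computation of the RCHT, one candidate circle from three edge pixels following Eqs. (1)-(8), can be sketched as below. The linear system (4)-(5) is solved directly via Cramer's rule instead of the substitution form of Eq. (7), and the collinearity guard is an added assumption.

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Center and radius of the circle through three edge pixels,
    following Eqs. (1)-(8). Returns (xc, yc, r), or None if the
    points are (nearly) collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A1, B1 = 2*x2 - 2*x1, 2*y2 - 2*y1            # Eq. (3)
    C1 = x1**2 + y1**2 - x2**2 - y2**2
    A2, B2 = 2*x3 - 2*x2, 2*y3 - 2*y2            # Eq. (6)
    C2 = x2**2 + y2**2 - x3**2 - y3**2
    # Solve A1*xc + B1*yc + C1 = 0 and A2*xc + B2*yc + C2 = 0 (Eqs. (4)-(5))
    det = A1*B2 - A2*B1
    if abs(det) < 1e-9:
        return None                               # collinear points: no circle
    xc = (B1*C2 - B2*C1) / det
    yc = (A2*C1 - A1*C2) / det
    r = np.hypot(x1 - xc, y1 - yc)                # Eq. (8)
    return xc, yc, r
```

In the RCHT, this computation is repeated for many randomly sampled triples of edge pixels, and the resulting centers and radii are averaged to define the ROI, as described above.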


Figure 7. (a) Example of a circle with points detected on its edge. (b) Lines drawn from randomly selected points on the edge of the circle, with the perpendicular bisectors intersecting at the center of the circle.

Figure 8. Iris image obtained from the application of the Canny filter. Highlighted: the center obtained from the average of the centers calculated using the RCHT.

Despite the ease of obtaining the parameters of the circumference, the calculation of the centroid is affected by illumination, making the traced path very noisy. In order to smooth such effects, a Kalman filter is applied to the coordinates xc and yc. Figure 9 illustrates an eye-tracking task, showing the coordinates of the centroid of the ROI obtained with and without applying the filter.

The HMI used to select symbols on the pictographic display is the same one based on MES, with the eye blink replaced by the iris displacement. In contrast with the previous approach, the interface based on eye tracking proved to be a simple and inexpensive alternative compared with the commercial systems in the literature. By using the Canny filter combined with the Random Circular Hough Transform, it was possible to detect the iris of the eyeball and thus determine a region of interest, decreasing the influence of eyebrows and eyelashes on the calculation of the centroid of the iris. The use of the Kalman filter enabled fine-tuning of the eye tracking.
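The paper applies a Kalman filter to the centroid coordinates xc and yc; the sketch below realizes this with a constant-velocity state model and illustrative noise levels q and r, which is one common formulation, not necessarily the authors' exact one.

```python
import numpy as np

def kalman_smooth_centroid(track, q=1e-3, r=4.0):
    """Constant-velocity Kalman filter over (xc, yc) centroid measurements.

    track: iterable of (xc, yc) pixel coordinates from the RCHT.
    q, r: illustrative process/measurement noise levels (assumptions).
    """
    F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is measured
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = np.zeros(4), np.eye(4) * 100.0
    out = []
    for z in np.asarray(track, float):
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (z - H @ x)                          # update with measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```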

4. EEG-based systems

In cases of major severity, neither eye nor iris movements can be reliably recorded. In such a case, the electroencephalogram (EEG) can provide sources of information for an HMI. In fact, all EEG-based HMI systems share the same source of information: the evoked potential (EP). In response to external sensory stimuli, the brain responds with an EP. These responses carry important information about the respective sensory pathway and are also useful in posterior fossa tumor surgery (20; 48), audiometry (17; 44), activation in epilepsy (4; 7) and surgical monitoring (38; 50).

Since the signal-to-noise ratio (SNR) of brain EPs is usually low when they are immersed in the EEG, only the evaluation of such responses over several repetitions may reveal their time and spectral contents. If the stimuli are presented at a low repetition rate, there is sufficient time between stimuli, and each individual response arises and vanishes within this interval. This kind of response is called a transient EP and is suitable for the analysis of the response's waveform, e.g., propagation delay and peaks. According to (8), the upper rate limit to elicit such responses is about 2 Hz. When the stimuli are presented at a sufficiently high rate, the transient EPs overlap in time and the frequency content of the response basically lies at the frequency of the stimulation; in this case, a steady-state EP is obtained.

When someone uses the EPs - e.g., related to sensory or imagery tasks and recorded from the scalp - to control devices, such a strategy is called a Brain-Computer Interface (BCI) (12; 18; 30), as shown in Figure 10. A BCI aims at accomplishing direct communication between patients and their external environment, especially people with severe motor limitations (22).

All BCI investigations share two principal signal processing steps: feature extraction and classification (35; 40). In the first step, many mathematical techniques are used to extract information from the EEG data. Among those techniques, one can mention band powers (43), Power Spectral Density (PSD) (9), time-frequency features (51) and autoregressive (AR) model parameters (42). Among the techniques used in BCI to interpret and classify features, the most relevant are linear classifiers (37), neural networks (19) and nonlinear Bayesian classifiers (31). Regarded as a promising technique, a nonlinear Bayesian classifier known as the Hidden Markov Model (HMM) can be used to model spontaneous EEG events for BCI applications (39; 47; 49). Furthermore, the Magnitude-Squared Coherence (MSC) has shown good results for objective response detection during sensory stimulation (11; 14; 36) and imaginary movement (45; 49).
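As an illustration of objective response detection with the MSC, the sketch below estimates the coherence between a periodic reference and a synthetic noisy recording at the stimulation frequency using SciPy; the sampling rate, the 8 Hz stimulation and the synthetic data are assumptions, not the experimental setup of the cited works.

```python
import numpy as np
from scipy.signal import coherence

fs = 512.0                        # sampling rate (assumption)
f_stim = 8.0                      # stimulation rate in Hz (assumption)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)

stimulus = np.sin(2 * np.pi * f_stim * t)                  # periodic reference
eeg = 0.1 * np.sin(2 * np.pi * f_stim * t) + rng.normal(size=t.size)  # weak response in noise

f, msc = coherence(stimulus, eeg, fs=fs, nperseg=1024)
bin_stim = np.argmin(np.abs(f - f_stim))
print(f"MSC at {f_stim} Hz: {msc[bin_stim]:.3f}")          # compare to a detection threshold
```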


Figure 9. Original coordinates xc and yc from the ROI, and the same coordinates filtered using the Kalman filter.

Figure 10. The structure of a typical BCI.

5. Robotic wheelchair navigation system

As mentioned before, a robotic wheelchair system is commonly composed of a sensing subsystem, a navigation and control module and a human-machine interface to guide the vehicle in an autonomous or semi-autonomous way (5; 15; 33; 41; 53). Both methods are proposed to aid handicapped people during their navigation in a structured or semi-structured environment, while avoiding static and dynamic obstacles.

Nowadays the strategies to avoid dynamic obstacles can be split into two approaches: model-based and learning-based (10). The first one uses mathematical models to represent the movement of the vehicle and the movement of the obstacles in the environment, as well as to describe collision possibilities, and finally to give a solution to avoid them. In turn, the learning-based approach applies knowledge obtained in real situations to "learn" the way to avoid dynamic obstacles.

Considering obstacle avoidance tasks for robotic wheelchairs, in (28) an assistance control mode generates a collision-free path in a smart structured environment. In (13) a sonar array provides proximity information in a semi-structured environment, and the distance between the wheelchair and the closest static obstacle is used in a force-feedback control through an analog joystick. In turn, in (32) a supervisor algorithm takes the distance information from the ultrasound sensors to decide about the safety of the control signals sent to the wheelchair through an analog joystick; if there is any collision risk, the control action is disregarded (canceled). The occupation grid technique is used in (21) to define the direction of greatest freedom for navigation of the robotic wheelchair. In this case, the filling of the occupation cells is done according to measurements provided by 3D infrared laser sensors.

In this section, an obstacle avoidance strategy for both static and dynamic obstacles in a semi-structured environment is proposed. The model-based approach is used to represent the movement of the obstacles. In contrast with (10), where a main obstacle (the one closest to the vehicle or the fastest on a collision course) is selected and then the safe path is defined, here all obstacles are considered to define the avoidance pathway. A weighting factor ck, similar to the fictitious force strategy (23), is used to give more importance to the obstacles that are closer and faster with respect to the wheelchair. Another important contribution is the local mapping, which provides 360-degree distance information around the vehicle (named virtual omnidirectional sensor), thus improving the tangential escape strategy when only static obstacles are considered. Also, the physical dimensions of the wheelchair are taken into account in all calculations, i.e., the vehicle is no longer considered as a point in the environment; thus it is possible to predict which part of the border of the vehicle is closer to an obstacle. In addition, the position of the laser scanner (in the front part of the vehicle) relative to the control point is also dealt with. Finally, it is important to mention that a stability analysis of the system is also performed, based on the theory of Lyapunov, considering an analytical approach for the saturation of the control signals sent to the robotic wheelchair.

First, the control algorithm designed seeks a desired point, which can be a specific place in a house. Then, a laser scanner mounted onboard, in the front part of the vehicle, is used to estimate the position and the velocity of the obstacles in a semi-structured environment. Once the movement model of the obstacles is estimated, the collision points between the robotic wheelchair and the obstacles are calculated. In the presence of dynamic obstacles, the obstacle avoidance strategy sets the velocity commands based on such collision points, allowing the wheelchair to overtake the most critical obstacle or to wait to be overtaken by it, i.e., increasing or decreasing the vehicle velocity, respectively.

Considering navigation between two points, a robotic wheelchair should seek a goal while avoiding any obstacle during navigation. Any obstacle inside a circle of radius dobs around the vehicle (named the safety zone) should be avoided by using the tangential escape strategy (16). After leaving the obstacle behind, the vehicle resumes the search for the goal. If no obstacle is found inside the safety zone, the vehicle keeps navigating towards the target. One can note that, after avoiding the obstacles in the environment, the wheelchair-target distance is continuously reduced until the target is reached. As proposed in (6), the control algorithm is asymptotically stable in the sense of Lyapunov, since the kinematic model of the vehicle is considered during the design of the controllers. Figure 11(b) illustrates the position control scheme used to reach a desired position in the environment, which can be a specific place in a house; in other words, each room defines coordinates that should be reached by the robotic wheelchair in an intelligent house. It is worth highlighting that the desired position is constantly updated during obstacle avoidance tasks.

In order to avoid static and dynamic obstacles during navigation, the proximity information is given by a laser scanner onboard the vehicle. Such a range sensor is mounted on the front part of the wheelchair and delivers 181 range measurements at a sample rate of 10 Hz, with 1 degree of resolution, covering a semi-circle in front of the vehicle. When only static obstacles are considered, the tangential escape strategy proposed in (16) is implemented here to provide a safe path through reactive navigation (without previous planning), whose main idea is to follow paths that are tangential to the boundary of the obstacles being avoided (a minimal sketch of this idea is given below).

On the other hand, when dynamic obstacles are present in the navigation scene, it is necessary to find the obstacles in the workspace and to identify which of them are moving. In order to compute the resulting velocity vector, it is assumed that the proximity measurement vector delivered by the laser scanner is a discrete function, which can be differentiated.
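The tangential escape idea referenced above (16) can be sketched as follows: if the closest obstacle falls inside the safety zone, the desired heading is rotated so that the vehicle moves tangentially to the obstacle boundary; otherwise the vehicle heads straight to the goal. The side-selection rule and the purely heading-based formulation are simplifying assumptions of this sketch, not the authors' exact control law.

```python
import numpy as np

def tangential_escape_heading(theta_goal, theta_obs, d_obs, d_safe):
    """Sketch of tangential escape.

    theta_goal: heading to the target (rad); theta_obs: bearing of the
    closest obstacle (rad); d_obs: its distance; d_safe: safety radius.
    """
    if d_obs >= d_safe:
        return theta_goal                 # no obstacle in the safety zone
    # Escape direction: perpendicular to the obstacle bearing, on the side
    # that deviates least from the goal heading (an assumption of this sketch).
    tangents = (theta_obs + np.pi / 2, theta_obs - np.pi / 2)
    return min(tangents,
               key=lambda t: abs(np.angle(np.exp(1j * (t - theta_goal)))))
```

Feeding the returned heading to the orientation controller reproduces the contour-following behavior: as the obstacle bearing changes, the tangent direction slides along the boundary until the direction to the goal becomes free.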


(a) The robotic wheelchair.

(b) The position control scheme adopted.

Figure 11. The test platform developed in the Laboratory of Intelligent Automation of the Federal University of Espírito Santo, Vitória ES, Brazil.

(a) Laser range measurements and their discrete derivative, for the scene shown at right.

(b) Snapshot of the robot navigation during obstacle identification at two consecutive instants.

Figure 12. The strategy to identify dynamic obstacles and to predict their velocities.

Figure 12(a) illustrates a laser scan of the workspace at a specific time instant, together with its discrete derivative. Looking at Figure 12(a) (top), an obstacle can be identified in the laser scan whenever a beam returns a value lower than the maximum one adopted (in this case, dmax = 5 m). Moreover, in the difference between two consecutive polar measurements, shown in Figure 12(a) (bottom), one can observe a negative peak followed by a positive one, which indicate where the detection of an obstacle starts and ends, respectively. Figure 12(b) illustrates the obstacles identified at the instants k and k+1. Knowing their positions at two consecutive instants and the sample period of the system, it is possible to compute their velocities, since Ok[k] can be related to Ok[k+1]. The arrows shown in Figure 12(b) indicate the velocities of the obstacles, which are used to define the collision points with the vehicle and then to establish the safer path for it.
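A compact sketch of this detection step, under the stated assumption that the scan can be differentiated; the derivative threshold `jump` and the naive one-to-one pairing of starts and ends are illustrative assumptions.

```python
import numpy as np

def segment_obstacles(scan, d_max=5.0, jump=0.5):
    """Find obstacle start/end beam indices in a 181-beam laser scan from
    its discrete derivative: a negative jump marks the start of an obstacle
    and a positive jump marks its end, as described above."""
    scan = np.minimum(np.asarray(scan, float), d_max)
    d = np.diff(scan)
    starts = np.where(d < -jump)[0] + 1
    ends = np.where(d > jump)[0]
    return list(zip(starts, ends[:len(starts)]))

def obstacle_velocity(center_k, center_k1, Ts=0.1):
    """Velocity of an obstacle matched across two consecutive scans,
    with Ts = 0.1 s for the 10 Hz scanner."""
    return (np.asarray(center_k1) - np.asarray(center_k)) / Ts
```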

Commonly, the laser scanner is mounted on the front of the wheelchair, providing range measurements over the 180° horizon ahead of it. Its position on the vehicle creates blind zones where it is not possible to detect obstacles, for example on both sides of the robotic wheelchair. Such blind zones can cause lateral collisions during navigation and/or abrupt obstacle detections during vehicle rotation, thus directly affecting the performance of the obstacle avoidance strategy. To improve the strategy, a local mapping is also proposed, which creates a virtual omnidirectional sensor responsible for providing 360-degree distance information around the vehicle (see the sketch after this passage).

In this proposal, all obstacles are considered to define the avoidance pathway, and a weighting factor is used to give more importance to the obstacles that are closer and faster with respect to the wheelchair.

It is important to highlight that the physical dimensions of the wheelchair are taken into account in all calculations, i.e., it is no longer considered as a point in the environment. Details of this proposal can be found in (6).
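The virtual omnidirectional sensor can be sketched as follows: points from recent frontal scans are kept in the world frame and re-queried as a 360-bin polar scan around the current pose. Ignoring the scanner's mounting offset is a simplification of this sketch (the paper accounts for it), and the bin resolution is an assumption.

```python
import numpy as np

def scan_to_world_points(scan, pose, d_max=5.0):
    """Convert a 181-beam frontal scan to world-frame obstacle points.
    pose = (x, y, theta) of the wheelchair; the scanner offset from the
    control point is ignored here."""
    x, y, theta = pose
    beams = np.deg2rad(np.arange(-90, 91)) + theta       # beam directions
    rng = np.asarray(scan, float)
    mask = rng < d_max                                   # keep only real returns
    return np.c_[x + rng[mask] * np.cos(beams[mask]),
                 y + rng[mask] * np.sin(beams[mask])]

def virtual_omni_scan(points, pose, n_bins=360, d_max=5.0):
    """Build a 360-degree 'virtual scan' around the current pose from the
    accumulated world-frame points of recent scans."""
    x, y, theta = pose
    virtual = np.full(n_bins, d_max)
    dx, dy = points[:, 0] - x, points[:, 1] - y
    dist = np.hypot(dx, dy)
    bins = np.rad2deg(np.arctan2(dy, dx) - theta).astype(int) % n_bins
    for b, d in zip(bins, dist):
        virtual[b] = min(virtual[b], d)                  # closest return per bin
    return virtual
```

Keeping the points of the last few scans in a buffer and rebuilding the virtual scan each control cycle yields distance information beside and behind the wheelchair, where the physical scanner is blind.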


Figure 13. Simulation results (snapshots (a)-(c)): the robotic wheelchair overtakes the dynamic obstacle and leaves the static obstacle behind.

Figure 14. Simulation results (snapshots (a)-(c)): the robotic wheelchair waits to be overtaken by the dynamic obstacle and reaches the target after avoiding the static obstacle.

A robotic wheelchair simulator developed at the Federal University of Espírito Santo is used to run the simulations and validate the proposal. It is important to mention that such a simulator runs in real time and considers the dynamic model of the vehicle, not only its kinematics. In every simulation the robotic wheelchair starts at the position (0 m, 0 m) and should reach the target at the coordinates (4 m, 5.5 m) while avoiding static and dynamic obstacles. The semi-structured environment used in the simulations has one static obstacle located at the position (2.75 m, 3.7 m), plus the walls around the vehicle. A dynamic obstacle, initially positioned at the coordinates (4.9 m, 2.2 m) in the first simulation and at (3.7 m, 2.2 m) in the second one, moves from right to left with a constant velocity vo1 of 200 mm/s. For better understanding, each simulation is presented in three parts, with a snapshot taken every 2.5 seconds to illustrate the current situation of the navigation.

Figure 13 shows the path traveled by the vehicle (solid line) during the execution of the task and the movement of the obstacle (from the white to the black rectangle). In this simulation, one can note that the robotic wheelchair avoids the dynamic obstacle by passing in front of it. The cross and circle marks shown in the figure indicate the estimated positions of the obstacles; when one of the marks is not shown in Figure 13, the dynamic obstacle is no longer "seen" by the laser scanner. It is also important to mention that the star mark represents the control point h. In contrast with Figure 13, Figure 14 shows a situation where the wheelchair waits for the dynamic obstacle to overtake it. It is also important to highlight that in both simulations the robotic wheelchair avoids the static obstacles safely (including the walls of the environment), even when the laser scanner cannot detect possible obstacles. In such cases, the local mapping is extremely helpful with regard to safe navigation.

6. Concluding remarks

This work presents an aid navigation system and some human-machine interfaces used to guide a robotic wheelchair in an autonomous or semi-autonomous way. In this context, myoelectric eye blinks, iris tracking and a brain-computer interface are used to choose symbols on a Personal Digital Assistant and to start an action.

The first system presented consists of recognizing the eye blinks contained in the myoelectric signal acquired at specific places on the face by a set of electrodes. In the second system, some problems associated with eye blinks, such as muscle spasms and the inability to perform an eye blink (due to severe motor disabilities, for example), are overcome by using an iris-tracking-based system composed of a webcam (strategically adapted to a goggle), which delivers digital images of the eye. Subsequently, some comments on Brain-Computer Interfaces are made for situations where neither eye nor iris movements can be reliably recorded. In such cases,


the electroencephalogram provides sources of information for the HMI through evoked potentials. Finally, an obstacle avoidance strategy for both static and dynamic obstacles in a semi-structured environment is presented, which aids handicapped people during autonomous or semi-autonomous navigation of the robotic wheelchair.

7. Acknowledgments

The authors thank CNPq, CAPES, FAPEMIG and FAPERJ for supporting this work.

8. References

[1] Argyros, A. A. & Bergholm, F. (1999). Combining central and peripheral vision for reactive robot navigation, Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR'99), Fort Collins, pp. 23–25.
[2] Argyros, A., Georgiadis, P., Trahanias, P. & Tsakiris, D. P. (2001). Semi-autonomous navigation of a robotic wheelchair, Journal of Intelligent and Robotic Systems 34.
[3] Bastos-Filho, T. F., Ferreira, A., Celeste, W. C., Cavalieri, D. C., de la Cruz, C., Sarcinelli-Filho, M., Soria, C. M., Perez, E. & Cheein, F. A. (2010). Robotic wheelchair controlled by a multimodal interface, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2010), Workshop on Multimodal Human-Robot Interfaces, Anchorage, Alaska.
[4] Bickford, R. G., Daly, D. & Keith, H. M. (1953). Convulsive effects of light stimulation in children, Journal of the Acoustical Society of America 86(2): 170–183.
[5] Bourhis, G., Horn, O., Habert, O. & Pruski, A. (2001). An autonomous vehicle for people with motor disabilities, IEEE Robotics and Automation Magazine 8(1): 20–28.
[6] Brandao, A. S., de la Cruz, C., Bastos-Filho, T. F. & Sarcinelli-Filho, M. (2010). A strategy to avoid dynamic and static obstacles for robotic wheelchairs, Proceedings of the 19th IEEE International Symposium on Industrial Electronics (ISIE'10), pp. 3553–3558.
[7] Brazzo, D., Di Lorenzo, G., Bill, P., Fasce, M., Papalia, G., Veggiotti, P. & Seria, S. (2011). Abnormal visual habituation in pediatric photosensitive epilepsy, Clinical Neurophysiology 122(1): 16–20.
[8] Chiappa, K. H. (1997). Evoked Potentials in Clinical Medicine, 3rd edn, Lippincott Williams & Wilkins.
[9] Chiappa, S. & Bengio, S. (2004). HMM and IOHMM modeling of EEG rhythms for asynchronous BCI systems, Proceedings of the European Symposium on Artificial Neural Networks, pp. 199–204.
[10] de Almeida Neto, A., Heimann, B., Nascimento-Jr., C. L. & Goes, L. C. S. (2003). Avoidance of multiple dynamic obstacles, Proceedings of the XVII International Congress of Mechanical Engineering (ABCM 2003), Vol. 1, São Paulo, Brazil, pp. 33–39.
[11] Dobie, R. A. & Wilson, M. J. (1989). Analysis of auditory evoked potentials by magnitude squared coherence, Ear and Hearing 10(1): 2–13.
[12] Ebrahimi, T., Vesin, J. M. & Garcia, G. (2003). Brain computer interface in multimedia communication, IEEE Signal Processing Magazine 20(1): 14–24.
[13] Fattouh, A., Sahnoun, M. & Bourhis, G. (2004). Force feedback joystick control of a powered wheelchair: Preliminary study, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 3, pp. 2640–2645.
[14] Felix, L. B., Miranda de Sá, A. M. F. L., Infantosi, A. F. C. & Yehia, H. C. (2007). Multivariate objective response detectors (MORD): Statistical tools for multichannel EEG analysis during rhythmic stimulation, Annals of Biomedical Engineering 35(3): 443–452.
[15] Ferreira, A., Celeste, W. C., Cheein, F. A., Bastos-Filho, T. F., Sarcinelli-Filho, M. & Carelli, R. (2008). Human-machine interface based on EMG and EEG applied to robotic systems, Journal of NeuroEngineering and Rehabilitation 5: 10.
[16] Ferreira, A., Pereira, F. G., Vassallo, R. F., Bastos-Filho, T. F. & Sarcinelli-Filho, M. (2008). An approach to avoid obstacles in mobile robot navigation: The tangential escape, Journal of Control and Automation, Brazilian Society of Automatics (SBA) 19(4): 395–405.
[17] Finneran, J. J. (2009). Evoked response study tool: A portable, rugged system for single and multiple auditory evoked potential measurements, Journal of the Acoustical Society of America 126(1): 491–500.
[18] Gao, X., Xu, D., Cheng, M. & Gao, S. (2003). A BCI-based environmental controller for the motion-disabled, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11(2): 137–140.
[19] Garrett, D., Peterson, D. A., Anderson, C. W. & Thaut, M. H. (2003). Comparison of linear, nonlinear, and feature selection methods for EEG signal classification, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11(2): 141–144.
[20] Goto, T., Muraoka, H., Kodama, K., Hara, Y., Yako, T. & Hongo, K. (2010). Monitoring of motor evoked potential for the facial nerve using a cranial peg-screw electrode and "threshold level" stimulation method, Skull Base: An Interdisciplinary Approach 20(6): 429–433.
[21] Hoey, J., Gunn, D., Mihailidis, A. & Elinas, P. (2006). Obstacle avoidance wheelchair system, Proceedings of the International Conference on Robotics and Automation (ICRA'06), Orlando, Florida, USA.
[22] Hoffmann, U., Vesin, J. M., Ebrahimi, T. & Diserens, K. (2008). An efficient P300-based brain-computer interface for disabled subjects, Journal of Neuroscience Methods 167(1): 115–125.
[23] Hogan, N. (1985). Impedance control: An approach to manipulation, ASME Journal of Dynamic Systems, Measurement, and Control 107: 1–23.
[24] Ju, J. S., Shin, Y. & Kim, E. Y. (2009). Intelligent wheelchair (IW) interface using face and mouth recognition, Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI '09), ACM, New York, NY, USA, pp. 307–314.
[25] Katevas, N. I., Sgouros, N. M., Tzafestas, S. G., Papakonstantinou, G., Beattie, P., Bishop, J. M., Tsanakas, P. & Koutsouris, D. (1997). The autonomous mobile robot SENARIO: A sensor-aided intelligent navigation system for powered wheelchairs, IEEE Robotics & Automation Magazine 4(4): 60–70.


[26] Kobayashi, Y., Kinpara, Y., Shibusawa, T. & Kuno, Y. (2009). Robotic wheelchair based on observations of people using integrated sensors, Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'09), IEEE Press, Piscataway, NJ, USA, pp. 2013–2018.
[27] Kuno, Y., Nakanishi, S., Murashima, T., Shimada, N. & Shirai, Y. (1999). Intelligent wheelchair based on the integration of human and environment observations, Proceedings of the International Conference on Information, Intelligence, and Systems, p. 342.
[28] Levine, S. P., Bell, D. A., Jaros, L. A., Simpson, R. C., Koren, Y. & Borenstein, J. (1999). The NavChair assistive wheelchair navigation system, IEEE Transactions on Rehabilitation Engineering, Vol. 7, pp. 443–451.
[29] Levine, S. P., Bell, D. A., Jaros, L. A., Simpson, R. C., Koren, Y. & Borenstein, J. (1999). The NavChair assistive wheelchair navigation system, IEEE Transactions on Rehabilitation Engineering 7: 443–451.
[30] Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F. & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain-computer interfaces, Journal of Neural Engineering 4(2): R1–R13.
[31] Lowne, D. R., Roberts, S. J. & Garnett, R. (2010). Sequential non-stationary dynamic classification with sparse feedback, Pattern Recognition 43(3): 897–905.
[32] Lu, T., Yuan, K., Zhu, H. & Hu, H. (2005). An embedded control system for intelligent wheelchair, Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Shanghai, China.
[33] Mazo, M. (2001). An integral system for assisted mobility, IEEE Robotics and Automation Magazine 8(1): 46–56.
[34] McLaughlin, R. A. (1998). Randomized Hough transform: Improved ellipse detection with comparison, Pattern Recognition Letters 19(3-4): 299–305.
[35] Middendorf, M., McMillan, G., Calhoun, G. & Jones, K. S. (2000). Brain-computer interfaces based on the steady-state visual-evoked response, IEEE Transactions on Rehabilitation Engineering 8(2): 211–214.
[36] Miranda de Sá, A. M. F. L., Infantosi, A. F. C. & Simpson, D. M. (2002). Coherence between one random and one periodic signal for measuring the strength of responses in the electroencephalogram during sensory stimulation, Medical and Biological Engineering and Computing 40(1): 99–104.
[37] Muller, K. R., Anderson, C. W. & Birch, G. E. (2003). Linear and nonlinear methods for brain-computer interfaces, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11(2): 165–169.
[38] Nuwer, M. R., Dawson, E. G., Carlson, L. G., Kanim, L. E. A. & Shierman, J. E. (1995). Somatosensory evoked potential spinal cord monitoring reduces neurologic deficits after scoliosis surgery: Results of a large multicenter survey, Electroencephalography and Clinical Neurophysiology 96(1): 6–11.
[39] Obermaier, B., Guger, C. & Pfurtscheller, G. (1999). Hidden Markov models used for the off-line classification of EEG data, Biomedizinische Technik 44(6): 158–162.
[40] Ochoa, J. B. (2002). EEG Signal Classification for Brain Computer Interface Applications, PhD thesis, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland.
[41] Parikh, S., Grassi, V., Kumar, V. & Okamoto, J. (2007). Integrating human inputs with autonomous behaviors on an intelligent wheelchair platform, IEEE Intelligent Systems 22(2): 33–41.
[42] Penny, W. D., Roberts, S. J., Curran, E. A. & Stokes, M. J. (2000). EEG-based communication: A pattern recognition approach, IEEE Transactions on Rehabilitation Engineering 8(2): 214–215.
[43] Pfurtscheller, G. & Neuper, C. (1997). Motor imagery activates primary sensorimotor area in humans, Neuroscience Letters 239(2-3): 65–68.
[44] Picton, T. W., Woods, D. L., Baribeau-Braun, J. & Healey, T. M. G. (1977). Evoked-potential audiometry, Journal of Otolaryngology 6(2): 90–119.
[45] Santos Filho, S. A., Tierra-Criollo, C. J., Souza, A. P., Pinto, M. A. S., Lima, M. L. C. & Manzano, G. M. (2009). Magnitude squared of coherence to detect imaginary movement, EURASIP Journal on Advances in Signal Processing (534536): 1–12.
[46] Simpson, R. & Levine, S. (1997). Adaptive shared control of a smart wheelchair operated by voice control, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2, Grenoble, France, pp. 622–626.
[47] Sitaram, R., Zhang, H. H., Guan, C. T., Thulasidas, M., Hoshi, Y., Ishikawa, A., Shimizu, K. & Birbaumer, N. (2007). Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain-computer interface, NeuroImage 34(4): 1416–1427.
[48] Soustiel, J. F., Hafner, H., Chistyakov, A. V., Guilburd, J. N., Zaaroor, M., Yussim, E. & Feinsod, M. (1993). Monitoring of brain-stem trigeminal evoked potentials: Clinical applications in posterior fossa surgery, Electroencephalography and Clinical Neurophysiology 88(4): 255–260.
[49] Souza, A. P., Santos-Filho, S. A., Xavier, P. A. M., Felix, L., Maia, C. A. & Tierra-Criollo, C. J. (2010). Modeling movement and imaginary movement EEG by means of hidden Markov models and coherence, Proceedings of the ISSNIP Biosignals and Biorobotics Conference, pp. 86–91.
[50] Thuet, E. D., Winscher, J. C., Padberg, A. M., Bridwell, K. H., Lenke, L. G., Dobbs, M. B., Schootman, M. & Luhmann, S. J. (2010). Validity and reliability of intraoperative monitoring in pediatric spinal deformity surgery: A 23-year experience of 3436 surgical cases, Spine 35(20): 1880–1886.
[51] Wang, T., Deng, H. & He, B. (2004). Classifying EEG-based motor imagery tasks by means of time-frequency synthesized spatial patterns, Clinical Neurophysiology 115(2): 2744–2753.
[52] Yanco, H. A. & Gips, J. (1997). Preliminary investigation of a semi-autonomous robotic wheelchair directed through electrodes, Proceedings of the Rehabilitation Engineering Society of North America 1997 Annual Conference, pp. 414–416.

[53] Zeng, Q., Teo, C., Rebsamen, B. & Burdet, E. (2008). A collaborative wheelchair system, IEEE Transactions on Neural Systems and Rehabilitation Engineering 16(2): 161–170.