Emotion recognition based on EEG signals during watching video clips

EMOTION RECOGNITION BASED ON EEG SIGNALS DURING WATCHING VIDEO CLIPS

KESRA NERMEND, AKEEL ALSAKAA, ANNA BORAWSKA, PIOTR NIEMCEWICZ

Summary

Brain signal analysis for human emotion recognition plays an important role in psychology, management and human-machine interface design. The electroencephalogram (EEG) is a reflection of brain activity – by studying and analysing these signals we are able to perceive changes in emotional state. To do so, it is necessary to select the appropriate EEG channels, which are placed mostly on the frontal part of the head. In this paper we used video stimuli to induce a happy or sad mood in 20 participants. To classify the emotions experienced by the volunteers, we applied five different classification methods, taking into account all features extracted from the signals. We observed that the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) obtained the highest emotion recognition accuracy.

Keywords: cognitive neuroscience, EEG, emotion recognition, classification

Introduction

In recent years, assessment of human emotions from the electroencephalogram (EEG) has become one of the most actively researched areas in developing intelligent man-machine interfaces. Emotions play a major role in many aspects of our daily lives, including decision making, perception, learning, rational thinking and actions. Assessing emotions is key to understanding human beings. In most emotion recognition experiments, images, sounds and video clips are used to stimulate the minds of participants. Acquiring brain activity signals in a form that enables a sufficient level of recognition and classification is very difficult due to various sources of noise, such as eye blinking, muscle movements and surrounding electrical devices (Yuan-Pin Lin et al., 2010).
Many papers report emotion recognition based on contrasting emotions, as these are the easiest to diagnose and the least likely to be confused with other emotional states (Hosseini, 2012; Landowska, Szwoch, Szwoch, Wróbel, & Kołakowska, 2012). Such contrasting emotions include, among others, fear, tension, discomfort, confusion, sadness and nervousness on the negative side, and elation, serenity, calm, comfort, relaxation and activity on the positive side. Together they form the so-called 2D valence-arousal model, illustrated in figure (1).



Figure 1. 2D valence-arousal model Source: (Hosseini, 2012) In the scope of related scientific research for mood recognition with the use of video clips to stimulate emotion of the participant brain, works of several researchers can be mentioned. Murugappan & Murugappan(2013), for example, were using video clips and music stimuli to determine emotions for five different moods (happy, surprised, frightened, disgusted and neutral), collecting the EEG signals from 20 subjects using high resolution equipment (62 channels). They were extracting the features during pre-processing, these features are mapped into the corresponding emotions using two classifiers: K-Nearest Neighbours and Probabilistic Neural Network. They focused only on the beta band and got a maximum mean classification accuracy of 91.33 %. Takahashi (2004) in his work investigates emotion recognition of five emotional states (joy, anger, sadness, fear, and relaxation) by using Support Vector Machines. He observed how multimodal bio-potential signals are effective for emotion recognition. By experimenting with recognition of emotions, he attained a recognition rate of 41.7% for all researched emotions. Other researchers, Bajaj and Pachori (2013), worked to select specific features that have more effect on the signals during audio-video stimuli. These features are used as an input to Multiclass Least Squares Support Vector Machine (MC-LS-SVM) for classification of human emotions. This method has provided an accuracy of 80.83% for the classification of human emotions from EEG signals. Most of research focuses on one or a few features that are extracted from signals to obtain the best recognition, like: high alpha, low alpha, high beta, low beta or only gamma. In this paper we use a broader set of features that are extracted from brain signals to be more comprehensive in diagnosing mood.


Studies & Proceedings of Polish Association for Knowledge Management Nr 86, 2017

In the first section of the paper, we describe a scenario that uses video stimuli to induce two moods: happy and sad. In the second section we explain the EEG data acquisition. The third section illustrates the automatic artefact correction and rejection methods used in the research. Section four shows how feature extraction is performed, and the final section presents the application of classification algorithms well established in the literature.

1. Experiment Design

The aim of the experiment is to recognize the mood (happy or sad) from brain signals recorded with EEG electrodes attached to a person's scalp. The experiment consists of displaying one of two videos – funny or doleful. With these videos we try to induce the specified mood in the participants. Both video clips can be found on the YouTube website1. The duration of each clip was about 4 minutes and 50 seconds. Figure (2) presents a collection of pictures from the funny and sad movies, respectively.

Figure 2. Five pictures as stimuli elicited from video clips
Source: own elaboration.

Before displaying the video, we showed a black screen for two minutes to calm down each participant and to focus his/her attention on the middle of the screen (Kong, Zhao, Hu, Vecchiato, & Babiloni, 2013; Hosseini & Khalilzadeh, 2010). In order to obtain a subjective opinion about the participant's mood, we asked him/her a self-assessment question, answered on a scale of 0–9: a value of 0 represents a sad mood, 5 a neutral one and 9 a happy mood. This self-report provides a supplementary comparison between the signals obtained from the brain and the participant's own assessment. It is carried out twice during the experiment. Figure (3) illustrates the scenario workflow of the proposed experiment.

1 https://www.youtube.com/watch?v=OxXmMbviaRo&feature=youtu.be, https://www.youtube.com/watch?v=4AvJrr50c7w&feature=youtu.be


[Figure 3 workflow: Start → User Information (Participant Data) → Scenario Explanation → Determining the Mood (0–9) → Displaying Black Screen → Video Stimuli (Happy or Sad) → EEG signals (EDF file) → Determining the Mood (0–9) → End]

Figure 3. Experiment workflow
Source: own elaboration.

2. EEG Data Acquisition

EEG signals were recorded from a group of 20 healthy participants (13 men, 7 women), with an average age of 33.95 years. Each participant sat in front of a computer and first entered some basic information: gender, age, language, nationality and profession. The experiment started by displaying, for 30 seconds, an explanation on the screen of what the participant is expected to do during the experiment. Signal acquisition was done using a Contec KT88 device. In our experiment we used the 10–20 international system shown in figure (4). This system is based on the relationship between the location of the electrodes and the underlying areas of the cerebral cortex ("10" and "20" refer to inter-electrode distances of 10% or 20% (Bhoria, Singal, & Verma, 2012)).


Figure 4. Labels for points according to the 10–20 electrode placement system
Source: [12].

In our experiment we recorded signals from the electrodes marked Fp1, Fp2, F7, F3, Fz, F4 and F8. We chose electrodes placed on the front of the head because, from a neuroscientific point of view, the main functions of the frontal lobe are related to emotional responding (Papousek, Reiser, Weber, Freudenthaler & Schulter, 2012). The sample rate was set to 200 samples per second, the maximum rate for the Contec KT88 device. The obtained data were exported to an EDF (European Data Format) file2 to enable more convenient processing in EEGLAB3 and the Matlab environment.

3. EEG Data Pre-Processing

In the pre-processing phase we performed two essential steps: data filtering, to eliminate noise consisting of frequencies outside the standard range, and artefact removal, to diminish nested signals produced by eye blinks and muscle movements.

2 http://www.edfplus.info/specs/edf.html
3 EEGLAB is an interactive Matlab toolbox for processing continuous and event-related EEG, MEG and other electrophysiological data using independent component analysis (ICA), time/frequency analysis, and other methods including artefact rejection ("Getting Started – SCCN", n.d.).


3.1. Data Filtering

An EEG recording is composed of several different types of signals, each with a different frequency range. Digital filtering is used to retain only the frequency components of interest and to remove other data, whether noise or physiological signals outside the range of interest. It is important to note that the way in which data is filtered depends largely on the sampling rate at which the original data was acquired (Widmann, Schröger, & Maess, 2014). The digital filters that can be implemented are Infinite Impulse Response (IIR) or Finite Impulse Response (FIR) filters. In most publications, applying an IIR filter requires a cut-off (threshold) given in Hz. We applied low- and high-pass filters to remove data with frequencies below 0.4 Hz and above 50 Hz (Nitschke, Miller, Cook, 1998). Filters were applied to the signal of each channel separately to make the filtering process faster.

3.2. Artefact Removal

When brain activity signals are recorded through electrodes placed on the scalp, eye blinks and muscle movements contaminate the EEG signal. This adds noise on top of that from surrounding electric fields, whose magnitude is greater than the desired electrical activity of the brain (Joyce, Gorodnitsky, & Kutas, 2004). The electrical signals emanating from eye movements and blinks produce a signal known as the electrooculogram (EOG). A fraction of the EOG contaminates the electrical activity of the brain, and these contaminating potentials are commonly referred to as ocular artefacts (OA). During data acquisition, OA are often dominant over other signals. Hence, devising a method for the successful removal of OA from EEG recordings is still a major challenge (Krishnaveni, Jayaraman, Aravind, Hariharasudhan, & Ramadoss, 2006; Naraharisetti, 2010).
A similar situation occurs with muscle artefacts – although not as problematic as ocular ones, they also have to be removed from the EEG signal. Figure 5a shows a segment of EEG signal corrupted by ocular artefacts and Figure 5b a segment corrupted by muscle artefacts.
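Returning to the filtering step of Section 3.1, the 0.4–50 Hz band-pass can be sketched as follows. This is a minimal illustration using a zero-phase Butterworth IIR filter from SciPy; the paper does not specify the filter type or order, so those choices are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(channel, fs=200.0, low=0.4, high=50.0, order=4):
    """Zero-phase Butterworth band-pass applied to a single EEG channel."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, channel)  # filtfilt avoids phase distortion

# Synthetic 10-second channel: a 10 Hz "alpha" rhythm plus slow drift.
fs = 200.0
t = np.arange(0, 10, 1 / fs)
noisy = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 0.05 * t)
filtered = bandpass(noisy, fs)  # drift below 0.4 Hz is strongly attenuated
```

Applying the filter per channel, as the authors do, keeps memory use low and parallelises trivially.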


Figure 5. a) EEG recording corrupted by ocular artefacts, b) EEG recording corrupted by muscle artefacts
Source: own elaboration.

A variety of methods have been proposed for correcting ocular and muscle artefacts. One common strategy is artefact rejection. The rejection of epochs contaminated with OA is very laborious


and time consuming and often results in a considerable loss in the amount of data available for analysis. Other widely used methods for removing artefacts are based on regression techniques in the time or frequency domain (Yuan-Pin Lin et al., 2010). However, regression-based artefact removal eliminates the neural potentials common to the reference electrodes and the other frontal electrodes, which may cause problems. In our experiment we used automatic artefact removal based on Blind Source Separation (BSS) techniques. The main method applied was wavelet-enhanced Independent Component Analysis (wICA; Castellanos & Makarov, 2006), which has proven useful for the suppression of artefacts in EEG recordings in both the time and frequency domains.

4. Feature Extraction and Classification

Feature extraction helps us to elicit useful information from the EEG signal. The features are characteristics of the signal in the frequency domain; based on these features we distinguish between different emotions. This section describes how the features are extracted and then classified.

4.1. Feature Extraction

Using the discrete wavelet transform (DWT), the signal of each channel is decomposed into five levels with the Daubechies-8 (db8) or Symlets-8 (sym8) wavelet function (Al-kadi & Marufuzzaman, 2013; Murugappan, Nagarajan, & Yaacob, 2010). Selecting a suitable wavelet function and decomposition level is important for the analysis of the signal. The scope of interest ranges between 0–50 Hz. We used a decomposition level of five, because all other ranges are considered noise or are used for other purposes, such as epilepsy monitoring (Joyce et al., 2004). On the other hand, the decomposition of the signal depends on the sampling frequency used for the recording; in our experiment this was 200 samples per second.
This value was determined by the characteristics of the EEG recording device. An exemplary signal reconstruction from one-dimensional (1-D) wavelet coefficients is illustrated in figure (6). To obtain successful results we applied the db8 function. The elicited wavelet coefficients provide a compact representation of the energy distribution of the EEG signals in the time and frequency domains (Hosseini, 2012). Table (1) displays the frequencies corresponding to the various decomposition levels for the db8 wavelet function at a sampling frequency of 200 Hz.
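The mapping between decomposition level and frequency band follows from the sampling rate alone, since each DWT level halves the remaining band. A short sketch of the ideal dyadic edges (Table 1 lists slightly adjusted empirical ranges):

```python
def dwt_band_edges(fs, levels):
    """Ideal frequency bands of a dyadic DWT: detail Dk covers
    (fs/2**(k+1), fs/2**k); the approximation A_levels covers the rest."""
    bands = {}
    upper = fs / 2.0  # Nyquist frequency
    for k in range(1, levels + 1):
        bands[f"D{k}"] = (upper / 2.0, upper)
        upper /= 2.0
    bands[f"A{levels}"] = (0.0, upper)
    return bands

bands = dwt_band_edges(fs=200, levels=5)
# D2 -> (25.0, 50.0) Hz (gamma range), D4 -> (6.25, 12.5) Hz (alpha range)
```

With fs = 200 Hz and five levels, D5–D2 line up with the delta-to-gamma bands of Table 1 and D1 (above 50 Hz) holds mostly noise, which is why five levels suffice here.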


Figure 6. 1-D wavelet coefficients
Source: ("Reconstruct single branch from 1-D wavelet coefficients – MATLAB wrcoef," n.d.).

Table 1. Frequency bands according to decomposition level

Seq. | Decomposition level | Frequency bandwidth (Hz) | Band
1    | A5                  | 0.4–3.5                  | Delta
2    | D5                  | 3.5–6.5                  | Theta
3    | D4                  | 6.5–11.5                 | Alpha
4    | D3                  | 11.5–23.5                | Beta
5    | D2                  | 23.5–50                  | Gamma
6    | D1                  | 50–64                    | Noise

Source: own elaboration.

Data above 50 Hz is noise and is not considered part of the signal features. Using the Fast Fourier Transform (FFT), we obtained a high-order spectrum for each band, which served as the input values for the classification phase.

4.2. Emotion Recognition and Classification

Classifying brain signals is a difficult task, and many techniques exist for implementing such classification. Trying to obtain better results for emotion classification, we used several supervised algorithms, and we obtained the best results with the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA). Among the different supervised classifiers, SVM is the one that performs significantly better than the others. The SVM algorithm was proposed by Vapnik (1979). To understand the concept of SVM, consider binary classification in the simple case of two-dimensional, linearly separable training samples, as in Figure (7).
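The per-band features of Section 4.1 can be sketched as a simple FFT band-power computation over one 90-second segment. This is a simplified illustration (mean spectral power with the Table 1 band edges), not the authors' exact higher-order-spectrum pipeline:

```python
import numpy as np

BANDS = {"delta": (0.4, 3.5), "theta": (3.5, 6.5), "alpha": (6.5, 11.5),
         "beta": (11.5, 23.5), "gamma": (23.5, 50.0)}  # Table 1 edges

def band_powers(segment, fs=200.0):
    """Mean power of each EEG band, taken from the FFT power spectrum."""
    power = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return {band: power[(freqs >= lo) & (freqs < hi)].mean()
            for band, (lo, hi) in BANDS.items()}

fs = 200.0
t = np.arange(0, 90, 1 / fs)              # one 90-second segment
segment = np.sin(2 * np.pi * 10 * t)      # dominant 10 Hz (alpha) activity
features = band_powers(segment, fs)       # 5 values; x 7 channels = 35 features
```

Five band values per channel times seven channels reproduces the 35-feature vector used later in the classification experiments.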


Equation (1) expresses a simple representation of the EEG data to be classified:

k = {(x1, y1), (x2, y2), …, (xn, yn)},   (1)

where x is the input vector (the high-order spectrum from the feature extraction phase) and y is the class label, 1 or 2 (1 stands for the happy and 2 for the sad mood). A discriminating function can be defined as in equation (2):

f(x) = SVMstruct(w, x):  f(x) > 0.5 → x belongs to class 1;  f(x) < 0.5 → x belongs to class 2,   (2)

where w determines the orientation of the discriminant plane. There is an infinite number of planes that could correctly classify the training data, each defining a margin of a separating hyperplane. An optimal classifier finds the hyperplane that generalises best, i.e. the one equidistant from, and as far as possible from, each set of points. Optimal separation is achieved when there is no separation error and the distance between the closest vector and the hyperplane is maximal (Stoean & Stoean, 2014).

Figure 7. SVM hyperplane separating two classes
Source: [18].

Obtaining an optimal classification result with SVM alone is still difficult, so we also applied other algorithms, such as LDA – a technique used to find a linear combination of features that describes and separates two or more classes of objects. The resulting combination is used as a linear classifier. LDA assumes that the items of each class are normally distributed. For instance, in a two-class dataset, suppose the a priori probabilities for class 1 and class 2 are p1 and p2 respectively, and the individual class means and the overall mean are written as m1, m2 and m (Balakrishnama & Ganapathiraju, 1998):

m = p1 · m1 + p2 · m2   (3)

LDA is based on the concept of searching for a linear combination of attributes that best separates two classes (0 and 1) of a binary attribute; it is mathematically robust and often produces models whose accuracy is as good as that of more complex methods (Sayad, 2010).
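Equation (3) can be checked numerically. The priors and class means below are illustrative values, not taken from the study:

```python
import numpy as np

p1, p2 = 0.5, 0.5                 # a priori class probabilities (assumed)
m1 = np.array([2.0, 4.0])         # mean feature vector of class 1 (happy)
m2 = np.array([6.0, 8.0])         # mean feature vector of class 2 (sad)

m = p1 * m1 + p2 * m2             # equation (3): overall mean
# With equal priors, m is simply the midpoint of the two class means.
```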


We also tested another three algorithms (K-Nearest Neighbours, Naïve Bayes and a Probabilistic Neural Network), but they did not give good results in determining the correct classes for most of the datasets. We chose the SVM and LDA algorithms over all the others because they achieved the best results.

5. Experimental Results and Discussion

Our experiment produced two main results: the features extracted from the brain signals, and the classification and recognition of emotions made on the basis of these features. We divided the recording corresponding to each displayed video into three parts, because the raw EEG data was too long to compare as a whole and it was difficult to assign an entire recording to a single mood – a participant may be influenced by only some episodes of the clip. To partition the raw EEG data into three equal parts, we omitted the first and last 10 seconds, so that each part consists of 90 seconds. The total number of participants was 20, distributed into two groups: the first 10 were stimulated by the funny video (class 1) and the second 10 by the sad video (class 2). We used 35 EEG signal features for each part (5 frequency bands multiplied by 7 channels). The data of each participant were saved in an EDF file, and each file was split into three parts. We used 70% of the obtained data for training and 30% for the testing dataset. We applied the most popular supervised signal classification algorithms to achieve an optimal result; the research showed that SVM and LDA are the best.
The other algorithms that we tried – naïve Bayes, k-Nearest Neighbours and the Probabilistic Neural Network (PNN) – did not always reach more than a 50% recognition rate, as illustrated in table (2).

Table 2. Emotion classification results for each algorithm

Classification Algorithm | Training set | Test set | Emotions recognized | Rate
SVM | 42 | 18 | 13 | 72.22%
LDA | 42 | 18 | 13 | 72.22%
KNN | 42 | 18 | 12 | 66.66%
NB  | 42 | 18 | 6  | 33.33%
PNN | 42 | 18 | 4  | 22.22%

Source: own elaboration.

The results show that with the SVM and LDA classification algorithms we were able to recognize the mood of 13 of the 18 participants belonging to the testing dataset. All these results were extracted from the analysis of the brain signals (raw EEG data) alone, independently of the self-assessment data entered by the participants during the recording; those data were used only to verify the impact of the videos on the participants' moods.
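The 70/30 evaluation above can be sketched with scikit-learn's SVM and LDA implementations. The data here is synthetic (60 segments of 35 features, mimicking the study's dimensions), so the resulting accuracies are illustrative only, not the paper's figures:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 60 segments (20 participants x 3 parts), 35 features
# each (5 bands x 7 channels); the two mood classes differ in their means.
X = np.vstack([rng.normal(0.0, 1.0, (30, 35)),    # class 1: happy
               rng.normal(1.0, 1.0, (30, 35))])   # class 2: sad
y = np.array([1] * 30 + [2] * 30)

# 70/30 split, as in the experiment (42 training / 18 test segments).
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

results = {}
for name, clf in [("SVM", SVC(kernel="linear")),
                  ("LDA", LinearDiscriminantAnalysis())]:
    results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
print(results)
```

Swapping in `KNeighborsClassifier` or `GaussianNB` reproduces the kind of comparison summarised in Table 2.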


6. Conclusion

This paper contributes to recognizing human mood on the basis of brain signals. The conducted research leads to the conclusion that using the entire set of features extracted from the brain signals makes it difficult to achieve high classification accuracy; better results can probably be achieved by focusing on the most significant bands, such as alpha and beta. In our experiment, increasing the size of the SVM and LDA training sets did not affect the final result much. In future work, this study can be extended to an emotion indicator that expresses the strength of an emotion more precisely, e.g. very happy versus only happy.

Bibliography

[1] Al-kadi, M., & Marufuzzaman, M. (2013). Effectiveness of Wavelet Denoising on Electroencephalogram Signals. JART, 11(February), 156–160.
[2] Balakrishnama, S., & Ganapathiraju, A. (1998). Linear Discriminant Analysis – a Brief Tutorial. Compute, 11, 1–9.
[3] Bhoria, R., Singal, P., & Verma, D. (2012). Analysis of effect of sound levels on EEG. International Journal of Advanced Technology & Engineering Research (IJATER), 121–124.
[4] Castellanos, N. P., & Makarov, V. A. (2006). Recovering EEG brain signals: Artifact suppression with wavelet enhanced independent component analysis. Journal of Neuroscience Methods, 158(2), 300–312.
[5] Getting Started – SCCN. (n.d.). Retrieved July 27, 2015, from http://sccn.ucsd.edu/wiki/Getting_Started.
[6] Hosseini, S. A. (2012). Classification of Brain Activity in Emotional States Using HOS Analysis. International Journal of Image, Graphics and Signal Processing, 4(1), 21–27.
[7] Hosseini, S. A., & Khalilzadeh, M. A. (2010). Emotional stress recognition system using EEG and psychophysiological signals. Biomedical Engineering and Computer Science (ICBECS), 2010 International Conference on, 1–6.
[8] Joyce, C. A., Gorodnitsky, I. F., & Kutas, M. (2004). Automatic removal of eye movement and blink artifacts from EEG data using blind component separation. Psychophysiology, 41(2), 313–325.
[9] Kong, W., Zhao, X., Hu, S., Vecchiato, G., & Babiloni, F. (2013). Electronic evaluation for video commercials by impression index. Cognitive Neurodynamics, 7(6), 531–535.
[10] Krishnaveni, V., Jayaraman, S., Aravind, S., Hariharasudhan, V., & Ramadoss, K. (2006). Automatic identification and removal of ocular artifacts from EEG using wavelet transform. Measurement Science Review, 6(4), 45–57.
[11] Landowska, A., Szwoch, M., Szwoch, W., Wróbel, M. R., & Kołakowska, A. (2012). Emotion recognition and its applications.
[12] Murugappan, M., Nagarajan, R., & Yaacob, S. (2010). Discrete Wavelet Transform Based Selection of Salient EEG Frequency Band for Assessing Human Emotions. Universiti Malaysia ….
[13] Naraharisetti, K. V. P. (2010). Removal of ocular artifacts from EEG signal using joint approximate diagonalization of eigen matrices (JADE) and wavelet transform. Canadian Journal on Biomedical, 1(4), 56–60.
[14] Nitschke, J., Miller, G., Cook, E. (1998). Digital filtering.


[15] Papousek, I., Reiser, E. M., Weber, B., Freudenthaler, H. H., & Schulter, G. (2012). Frontal brain asymmetry and affective flexibility in an emotional contagion paradigm. Psychophysiology, 49(4), 489–98.
[16] Reconstruct single branch from 1-D wavelet coefficients – MATLAB wrcoef. (n.d.). Retrieved January 18, 2016, from http://www.mathworks.com/help/wavelet/ref/wrcoef.html
[17] Sayad, S. (2010). Real Time Data Mining.
[18] Stoean, C., & Stoean, R. (2014). Support Vector Machines and Evolutionary Algorithms for Classification. Single or Together.
[19] Taywade, S. A. (2012). A Review: EEG Signal Analysis With Different Methodologies.
[20] Vapnik, V. (1979). Support Vector Machines.
[21] Widmann, A., Schröger, E., & Maess, B. (2014). Digital filter design for electrophysiological data – a practical approach. Journal of Neuroscience Methods, 1–16.
[22] Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann, & Jyh-Horng Chen. (2010). EEG-Based Emotion Recognition in Music Listening. IEEE Transactions on Biomedical Engineering, 57(7), 1798–1806.


ROZPOZNAWANIE EMOCJI W OPARCIU O SYGNAŁY EEG PODCZAS OGLĄDANIA WIDEOKLIPÓW

Abstract (translated from Polish)

The analysis of brain signals for emotion recognition plays a significant role in psychology, management and the design of human-machine interfaces. The electroencephalogram is a reflection of brain activity – by studying and analysing these signals, changes in emotional state can be perceived. To achieve this, it is necessary to select the appropriate EEG channels, which are located mainly in the frontal part of the skull. In the presented article, a study was conducted in which a happy or sad mood was induced in 20 subjects by means of video clips. To classify the emotions experienced by the volunteers, five different methods were used, taking into account all the features extracted from the recorded signals. The best results were achieved with the Support Vector Machine (SVM) method and linear discriminant analysis.

Keywords: cognitive neuroscience, EEG, emotion recognition, classification

Kesra Nermend
Katedra Metod Komputerowych w Ekonomii Eksperymentalnej
Uniwersytet Szczeciński, Wydział Nauk Ekonomicznych i Zarządzania
e-mail: [email protected]

Akeel Alsakaa
Wydział Informatyki
Uniwersytet w Karbali, Irak
e-mail: [email protected]

Anna Borawska
Katedra Metod Komputerowych w Ekonomii Eksperymentalnej
Uniwersytet Szczeciński, Wydział Nauk Ekonomicznych i Zarządzania
e-mail: [email protected]

Piotr Niemcewicz
Katedra Metod Komputerowych w Ekonomii Eksperymentalnej
Wydział Nauk Ekonomicznych i Zarządzania
Uniwersytet Szczeciński
e-mail: [email protected]
