Giving feedback to participants
You will often want to provide participants with feedback, to let them know how well they are doing. This is a courtesy towards the participants, because most people find it unpleasant to perform a task without getting feedback, and it also improves their motivation. OpenSesame offers several ways to provide feedback.
- The difference between feedback and sketchpad items
- Relevant variables
- Feedback after a block of trials
- Feedback after a single trial
- Resetting and manipulating feedback variables
The difference between feedback and sketchpad items
To provide feedback, you will generally use the feedback item rather than the sketchpad item. These two items are quite similar, but they differ in when stimulus preparation occurs.
Relevant variables
OpenSesame automatically keeps track of a number of feedback variables, such as acc, avg_rt, accuracy, and average_response_time, which are described elsewhere in the documentation.
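As a minimal sketch, assuming you are working from an inline_script item and using the legacy self.get() API shown in the listings below, you can inspect these variables like this:
# Print the standard feedback variables to the debug window.
# They are 'NA' (see Listing 2) until at least one response has been collected.
print('Average response time: %s' % self.get('avg_rt'))
print('Accuracy: %s' % self.get('acc'))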
Feedback after a block of trials
It is common to provide feedback after every block of trials. This way you don’t overwhelm the participant with feedback on every trial, which may disrupt the flow of the experiment (although it may be useful in some cases). To do this, construct a sequence that contains a single block of trials (typically a loop item), followed by a feedback item. It is also convenient to add a reset_feedback item just before the block_loop. This prevents carry-over effects, for example from responses that have been collected during the instructions.
Figure 1. Providing feedback after a block of trials using a feedback item.
In the feedback item, you can add some text. You can use the variables described above using the [variable name] notation.
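For example, the text of a feedback item could read something like the following (just a sketch; the exact wording is up to you):
End of block!
Your average response time was [avg_rt] ms
Your accuracy was [acc] %
Press any key to continue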
Figure 2. You can use a number of standard feedback variables, such as avg_rt and acc.
You can also use an inline_script item, inserted immediately before the feedback item, to provide custom types of feedback. For example, if you want to provide a warning when accuracy drops below 75%, you could insert the following inline_script before the feedback item.
# Select a feedback message based on the current accuracy
if self.get('acc') > 90:
    exp.set('feedback_msg', 'Excellent, well done!')
elif self.get('acc') > 75:
    exp.set('feedback_msg', 'Pretty good')
else:
    exp.set('feedback_msg', 'Come on, you can do better!')
Listing 1. Using an inline_script item to provide custom feedback.
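In the feedback item itself, you can then display the message by including [feedback_msg] in its text.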
Feedback after a single trial
Sometimes you want to give the participant feedback after every trial. It’s probably wise to use subtle feedback in this case, so you don’t disrupt the flow of the experiment. What I often do is briefly (500 ms, say) present a green or red fixation dot, depending on whether the participant responded correctly. The easiest way to do this is by adding both a red and a green fixation dot to the trial_sequence, and executing only one of them, depending on the value of the correct variable.
Figure 3. Providing feedback after each trial using Run if statements.
In this case, you can use a sketchpad item, because you don’t change the contents of the canvas depending on the participant’s response. You only change which of the two sketchpads, both of which have been constructed in advance, will be shown: green_fixation or red_fixation.
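Assuming the two sketchpads are indeed called green_fixation and red_fixation, their Run if fields could be set to something like [correct] = 1 and [correct] = 0, respectively.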
You can also present full feedback after every trial, using a feedback item inserted after the response item (such as a keyboard_response), as shown in [Figure 1].
Resetting and manipulating feedback variables
Feedback variables, such as average_response_time and accuracy, are reset when a feedback item is called (assuming that the Reset feedback variables box is checked) and wherever you insert a reset_feedback plug-in. However, you can also manipulate the feedback variables using an inline_script.
For example, the following script resets the feedback variables:
# Reset the counters to zero
exp.set('total_responses', 0)
exp.set('total_correct', 0)
exp.set('total_response_time', 0)
# The summary variables are undefined until a response has been collected
exp.set('average_response_time', 'NA')
exp.set('avg_rt', 'NA')
exp.set('accuracy', 'NA')
exp.set('acc', 'NA')
Listing 2. Resetting feedback with an inline_script item.
And the following script updates the feedback variables based on a response:
response_time = 1000 # Assume that the RT was 1000ms
correct = 1 # Assume that the response was correct
# Update the counters
exp.set('total_responses', self.get('total_responses')+1)
exp.set('total_correct', self.get('total_correct')+correct)
exp.set('total_response_time', self.get('total_response_time')+response_time)
# Recompute the average response time and the accuracy (as a percentage)
avg_rt = self.get('total_response_time')/self.get('total_responses')
acc = 100.*self.get('total_correct')/self.get('total_responses')
exp.set('average_response_time', avg_rt)
exp.set('avg_rt', avg_rt)
exp.set('accuracy', acc)
exp.set('acc', acc)
Listing 3. Updating feedback with an inline_script item.
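After a script like this has been executed, variables such as [avg_rt] and [acc] in a feedback item will reflect the values that you have set yourself.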