Press "Enter" to skip to content

You shall not pass (yet): Iteratively training participants with conditional branching on FindingFive

A typical learning experiment is split into two phases: a training phase, where participants familiarize themselves with the learning task, and a test phase, where their performance is evaluated. Researchers usually want to know which participants are the learners – those who successfully “get” the task during the training phase – and which are the non-learners – those who end up performing essentially at random during the test phase.

The old way: run the same training phase for everyone and exclude non-learners at the analysis stage

In most cases, data collected from non-learners would be irrelevant to the research question. Researchers therefore need to identify non-learners and exclude their data from analysis. This is usually done post hoc, by examining each participant’s performance during the training phase: those who fall below an arbitrarily set threshold (e.g., a minimum accuracy on a 2AFC task) are flagged as non-learners and excluded from the data.
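As a concrete illustration, this post-hoc exclusion step often boils down to something like the sketch below. The data format and the 0.8 cutoff are made-up assumptions used only for illustration.

```python
# Hypothetical post-hoc exclusion: flag participants whose training
# accuracy falls below an arbitrary threshold and drop them from analysis.
# The data format and the 0.8 cutoff are illustrative assumptions.

ACCURACY_THRESHOLD = 0.8  # e.g., 80% correct on a 2AFC training task

# Per-participant lists of trial-level correctness (1 = correct, 0 = incorrect)
training_responses = {
    "p01": [1, 1, 0, 1, 1, 1, 1, 1],
    "p02": [0, 1, 0, 0, 1, 0, 1, 0],
}

def training_accuracy(responses):
    """Proportion of correct responses in the training phase."""
    return sum(responses) / len(responses)

learners = {
    pid for pid, responses in training_responses.items()
    if training_accuracy(responses) >= ACCURACY_THRESHOLD
}
non_learners = set(training_responses) - learners
print("Excluded as non-learners:", sorted(non_learners))
```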

A fixed training phase is too restrictive

This post-hoc exclusion strategy is fairly popular and logistically easy to implement. However, it has a significant drawback: by running every participant through the exact same procedure, researchers must make an arbitrary tradeoff between the length of the training phase and the effectiveness of the training.

If the training phase is too short, researchers run the risk of losing many participants as non-learners. If it is too long, the overall study becomes longer and more costly, and learners who got the point of the task early on have to suffer through the same training over and over again.

Neither scenario is ideal.

FindingFive supports iterative training that varies in length

Here comes FindingFive to save the day. Building on the conditional branching feature we released just over a month ago, we have now implemented a neat feature that allows researchers to train participants on the same block of trials iteratively, until they reach a certain accuracy threshold. With FindingFive’s latest feature, researchers can now:

  • Specify the minimum number of training iterations – so that everyone gets at least some training. This minimum can be just one iteration in most cases.
  • Specify the maximum number of training iterations – so that the training does end at some point, even if some participants never achieve the learner accuracy threshold.
  • Specify whether the accuracy is evaluated on all training iterations or just the last one – since in some cases it makes more sense to emphasize the improvement in learning rather than the overall average learning performance.

These functions are achieved by specifying the “iterations” dictionary of an accuracy branching block. You can read more about the technical details in our API documentation here.
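To give a feel for the shape of such a block, here is a minimal sketch. Apart from the “iterations” dictionary itself, every field name and value below is an illustrative assumption rather than the actual FindingFive grammar; the API documentation has the authoritative syntax.

```python
# A hypothetical accuracy branching block with iterative training.
# Only the "iterations" dictionary is actually named in this post; every
# other field name and value below is an illustrative assumption -- see
# the FindingFive API documentation for the real study grammar.
training_block = {
    "type": "accuracy branching",      # assumed label for this kind of block
    "trial_templates": ["training_trial"],
    "threshold": 0.8,                  # assumed: accuracy required to count as a learner
    "iterations": {
        "min": 1,            # everyone completes the training block at least once
        "max": 5,            # training ends after 5 runs even if the threshold is never met
        "evaluate": "last",  # assumed: score only the last iteration ("all" = every iteration)
    },
}
```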

Learners and non-learners can be diverted into different branches

Inevitably, there will be participants whose learning performance falls short of the accuracy threshold even after the maximum number of training iterations. Researchers can simply end the study for these non-learners while letting learners continue.

Because this iterative training feature is built on conditional branching, achieving this is straightforward: researchers simply define one branch for learners and another for non-learners. In the learner branch, the study goes on; in the non-learner branch, researchers can add some debriefing information and end the study right away.
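In the same hypothetical notation as above (field names are again illustrative assumptions, not the actual grammar), the two branches might look roughly like this:

```python
# Hypothetical branch definitions for learners vs. non-learners.
# Field names are illustrative assumptions; the actual branch syntax is
# described in the FindingFive API documentation.
branches = {
    "learner": {
        # Learners continue with the rest of the study.
        "blocks": ["test_block", "final_debrief_block"],
    },
    "non_learner": {
        # Non-learners see a short debrief and the study ends immediately.
        "blocks": ["non_learner_debrief_block"],
        "end_study": True,  # assumed flag: terminate the study after this branch
    },
}
```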

Setting up branches is also described in detail in our API documentation.

Are you excited?

We certainly are. We hope this feature will prove useful to a lot of our researchers. As always, feel free to reach out to us at researcher.help@findingfive.com with questions and comments!
