But in general, using Builder will be faster, less bug-prone, and easier to share with non-coders.
Once we have our experiment written in JS, we need a way to “host” it online. Pavlovia is:
A secure server for running experiments and storing data.
A git-based version control system.
A huge open-access library of experiments (that you can add to!)
If the task you need doesn’t already exist - push your own! Before you get started, make sure you:
Have a fresh folder that contains only one .psyexp file and the resources needed by that file.
It can also be helpful to make sure your folder is not in a location already under git version control.
Once you have made your experiment and checked that your local folder is organised neatly (with one .psyexp file in this location), you’re ready to sync your project to Pavlovia!
Once you have synced your study you will find it in your Dashboard on pavlovia.org under “Experiments”.
Piloting versus running - piloting produces a token that lets you run your study for free for one hour, and a data file downloads automatically so that you can inspect it. Running generates a URL to share with participants - no data will be downloaded locally from that link.
CSV or Database - CSV generates one .csv file per participant, which is sent to your GitLab repository (so it will be public if you make the repo public). Database appends all participants’ data to a single file (which is not sent to GitLab).
Inside the experiment settings of PsychoPy you can configure the online settings of your experiment.
Let’s quickly make a basic experiment and put it online:
Make a new .psyexp file with some text that simply reads “Hello, I’m online!”
Sync that experiment to pavlovia.org
Go to your experiment dashboard to find your experiment
Set your study to pilot mode and check that it runs by selecting “Pilot”.
Redirect your participants to PsychoPy.org when they complete the task, and to pavlovia.org if they do not complete it.
Let’s try putting the task we made in day 1 online and getting some data together!
So your task was running perfectly offline, then you pushed it online, and it doesn’t work - why? There are lots of reasons something might not work online, but the most common errors are coding errors.
The PsychoJS library doesn’t yet contain everything in PsychoPy, for several reasons:
Does a component “make sense” online? e.g. Grating stimuli ideally require a luminance-calibrated monitor - does your participant have a photometer at home? Input/output components that connect with EEG might not make sense online either.
PsychoJS is younger than PsychoPy! (but we’re making good progress!)
When we add code components we have the choice to add code as either:
Py - pure Python
JS - pure JavaScript
Both - write the Python and JavaScript versions side by side
Auto → JS - write Python and have it automatically translated (transpiled) to JavaScript
The last option is very cool and useful - but it can catch people out if something doesn’t translate smoothly!
Update to the latest release! Version 2021.2 improved transpiling a lot, and you can save a lot of manual debugging online by using that version.
Always check the status of online options before making your experiment
Push your experiment little and often (don’t make your full experiment perfectly locally and then try to push it online)
Read the crib sheet
Check out the psychoJS documentation
The forum is always there!
There are several kinds of error we might encounter when getting online, but generally these fall into three categories (you can find a useful tutorial here)
“ReferenceError: X is not defined”
Using python modules
This is a rarer one, but handy to know about. Another reason a semantic error could occur is if you have created a custom function that can’t be accessed from within the location it is called.
Generally, when you make custom variables in code components, PsychoPy will identify those and automatically declare each variable before the experiment initialises, i.e. var myVariable1 will be declared at the start of the experiment. If this doesn’t happen, you might need to add the variable yourself in the “Before Experiment” tab.
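As a concrete sketch (the variable name here is hypothetical), initialising the variable in a “Before Experiment” code tab guarantees it exists before anything tries to use it:

```python
# "Before Experiment" tab of a code component (hypothetical variable name).
# Giving the variable an initial value here ensures it is declared
# (var myVariable1 in the transpiled JS) before the experiment initialises.
myVariable1 = 0
```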
Generally PsychoPy will try to find all the resources you need automatically and load them, but there are some cases where this might not work.
Incorrect file extension
Your image is a “.jpeg” but you have accidentally used the extension “.png”
Resources defined through code
If a resource is defined through code rather than from a conditions file or component field, then PsychoPy can fail to “prepare” for the eventuality that the resource is needed. In cases like this, it is always a good idea to manually add any additional resources you might need to the “Additional resources” section of the experiment settings when configuring online settings.
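For example (the filenames here are hypothetical), an image path built in code cannot be predicted from a component field or conditions file:

```python
# Hypothetical example: the image filename only exists once this code runs,
# so PsychoPy cannot know in advance which file needs to be downloaded.
trial_number = 3
image_file = "face_" + str(trial_number) + ".png"
# Fix: list face_1.png, face_2.png, ... manually under
# Experiment Settings > Online > Additional resources.
```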
A type error occurs when we use an object in a way that isn’t valid for its type - for example, treating something that isn’t a constructor as one.
This can also occur because something exists in PsychoPy that does not exist in PsychoJS. For example, core.Clock() is not a constructor in JS because Clock lives in the util module of PsychoJS, i.e. util.Clock(). The crib sheet can be helpful in these cases.
Still relevant to 2021.2.2
Even though we’ve improved the transpiler, there are some things that either still need updating or that we can’t expect to transpile, e.g. whole Python libraries like numpy. So if you are using such functions you will need to find the JS equivalent and add it to your experiment yourself. We would also then need to change the code type to “Both”.
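As a sketch of what this looks like in practice (using the standard-library random module rather than numpy), the Python tab of a “Both” code component might shuffle a list, while the JS tab supplies the equivalent by hand:

```python
# Python tab of a "Both" code component (sketch)
import random

values = [1, 2, 3, 4, 5]
random.shuffle(values)  # a numpy call here would NOT auto-translate to JS
# In the JS tab you would write the equivalent yourself, e.g. (assuming
# PsychoJS's util module is in scope):
#   util.shuffle(values);
```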
For faster access look up the keyboard shortcut for your specific operating system/browser!
The developer tools are particularly helpful for syntax errors - for example, when the experiment stalls at “initialising the experiment” with no error message and things just “don’t work”.
You can open the developer tools in your browser (the crib sheet gives tips on how to do this on different browsers/operating systems). This will tell us where (i.e. on which line) our error is occurring. Remember, exporting to code is a one-way street, so whilst it is useful to look into the code, we really recommend fixing errors back in Builder where possible.
If you are ever unsure where to look in your builder experiment for an error, you can look for the line that indicates what routine this code is being executed in.
If you ever make a change in your experiment and it isn’t reflected in your online experiment, it is very likely you need to clear your browser cache. How this is done can vary browser to browser - so do search how to do that on your specific operating system/browser.
Think Escape room, but with bugs…
I am going to give you an experiment with 4 levels, each level contains a bug. Use the skills that we have learnt to find each bug and progress to the next level.
To start, fork or download this experiment.
console.log(): the equivalent of print() in Python. Useful when a variable doesn’t appear as you expect - you can print values to your console and check they are updating as expected.
window.object = object: pass an object to the window for inspection, e.g. pass a component by replacing object with the name of your component. Useful for seeing what attributes and methods an object has.
window.open('myURL'): open a new window, e.g. a questionnaire (note: can be blocked as a pop-up for some Mac users).
alert(): display a pop-up alert to the participant.
prompt('Please enter your name', 'default'): retrieve some info from the participant via a pop-up.
confirm('Please click OK!'): display a pop-up box with OK or Cancel.
If you are running your study in full-screen mode, these will break into window mode*
expInfo['frameRate'] might be useful for checking the participant’s screen refresh rate.
Remember that this is a one-way street! Don’t be tempted to alter the JS code if you want to continue making edits in Builder - implement code from within Builder itself!*
Pavlovia uses a powerful git-based system for storage and version control. Some of the benefits of using this include:
Fork existing projects
Easy sharing of your task (open science)
Add lab members to projects
Pavlovia uses a git-based system for version control, hosted on GitLab. You can see when the last changes were made to the task by looking at the commit history.
If you click on the change you can see deletions and insertions. You can browse the repository at that point in history to retrieve past files!
To add members to your own project, you can use the settings>members option where you can search and invite collaborators.
You can change the visibility of your task at any time under permissions. Remember: once you make your project “public”, the data files stored there will also be public (unless your data saving mode is set to database).
When we take a study online, it is often important to automate group assignment in some way. At the moment, Pavlovia does not have an “out-of-box” solution for this - but there are several ways to approach this.
Quite often, researchers think that if they have several groups they will need several Pavlovia projects (one per group). This is often inefficient and can become quite confusing when collating the data. Instead, we can make a single experiment and start by using the principles we learned in Block designs and counterbalancing.
When sharing a study with a participant, we can auto-complete fields in the startup GUI using query strings. You can provide info to your experiment by appending your experiment URL with
?participant=1&group=A - where “participant” and “group” correspond to parameter names.
There is no limit on the number of parameter names that you provide, so long as each parameter is separated by an ampersand (&).
Thanks to query strings we can generate several URLs for the same project but for each group. For example, you might have 4 groups and therefore share the URLs:
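A minimal sketch of generating those URLs (the base URL below is a placeholder - substitute your own study’s run URL):

```python
# Build one recruitment URL per group using query strings.
# The base URL is hypothetical, not a real study.
base = "https://run.pavlovia.org/yourName/yourStudy/"
groups = ["A", "B", "C", "D"]
urls = [base + "?group=" + g for g in groups]
for url in urls:
    print(url)
```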
If you are using this approach and sharing URLs on recruitment websites, you would need to be careful that the same participants do not complete several URLs (i.e. complete your study several times in different groups). If you are using Prolific for recruitment there is guidance on how to do this here.
A slightly more efficient way might be to generate sequential participant IDs and use that to assign to groups. For this, Wakefield Morys-Carter has developed an external app (Morys-Carter, 2021) to assist.
So, if your experiment URL is https://pavlovia.org/a/b, then use https://moryscarter.com/vespr/pavlovia.php?folder=a&experiment=b/
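In other words (sticking with the placeholder names “a” and “b” from the example above), the VESPR URL is built from the two parts of your Pavlovia run URL:

```python
# Rewrite a Pavlovia run URL https://pavlovia.org/<folder>/<experiment>
# into the VESPR sequential-ID URL ("a" and "b" are placeholders).
folder, experiment = "a", "b"
vespr_url = ("https://moryscarter.com/vespr/pavlovia.php"
             "?folder=" + folder + "&experiment=" + experiment + "/")
```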
Inside PsychoPy, we could then use the code component:
```python
if int(expInfo['participant']) % 2 == 0:
    expInfo['group'] = 'A'  # assign even IDs to group A
else:
    expInfo['group'] = 'B'  # assign odd IDs to group B
```
We then would not need the parameter “group” in our experiment settings (because this parameter assignment through code would overwrite it anyway).
Counterbalancing with more than 2 groups online is a little more complex. We can use the sequential participant ID method but we need to be more careful. If we had 40 participants, in python, we could write:
```python
# Makes a long list of length 4 * 10
groups = ['A', 'B', 'C', 'D'] * 10
# python indexing starts at 0 but participant IDs start at 1, so the first
# element would be skipped; add a value to compensate
groups.append('A')
# use the participant ID to index from this list
expInfo['group'] = groups[int(expInfo['participant'])]
```
```javascript
// Makes a long list of length 4 * 10
groups = Array(10).fill(['A', 'B', 'C', 'D']).flat();
// indexing starts at 0 but participant IDs start at 1, so the first
// element would be skipped; push a value to compensate
groups.push('A');
// use the participant ID to index from this list
thisGroup = groups[Number.parseInt(expInfo["participant"])];
expInfo["group"] = thisGroup;
```
console.log('Group: ', expInfo['group']);
Problem: the tool described so far is great and free, but it does not take into account how many participants have completed. So it is still important to manually check how many complete data sets you have for each group.
We do hope to have an out-of-box solution to this in future, but we are very grateful for alternative solutions contributed by the community. In particular, Wakefield Morys-Carter has developed a Study Portal to help group counterbalancing. Taking into account participant completion is a paid feature, but at a low cost (£10).
If you are using the licensed features of the Study Portal to assign participants to group - do not use code within your experiment to assign group based on participant ID.
This allows tracking of how many participants from each group have completed and how many timed out:
Other features the Study Portal could help with:
You can watch a presentation of the portal here.
There are several other tools that can be useful including:
Let’s practice debugging errors, then play with advanced plugins we can use online (Advanced online).
Then we will try Coding a full experiment.