Drift and Noise Models

Leaky/Unstable integrator

Leaky/unstable integrators are implemented in DriftLinear. For a leaky integrator, set the parameter x to be less than 0. For an unstable integrator, set the parameter x to be greater than 0. For example:

from pyddm import Model
from pyddm.models import DriftLinear
model = Model(drift=DriftLinear(drift=0, t=.2, x=.1))

Try it out with:

from pyddm import Model, Fittable, DriftLinear
from pyddm.plot import model_gui
model = Model(drift=DriftLinear(drift=Fittable(minval=0, maxval=2),
                                t=Fittable(minval=0, maxval=2),
                                x=Fittable(minval=-1, maxval=1)),
              dx=.01, dt=.01)
model_gui(model)
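To build intuition for the two regimes, the linear drift formula can be sketched outside PyDDM: the instantaneous drift is the base drift plus one coefficient times the current position and another times elapsed time, so a negative position coefficient pulls the integrator back toward zero (leak), while a positive one pushes it further away (instability). A minimal standalone sketch (the function and keyword names here are illustrative, not part of the PyDDM API):

```python
def linear_drift(x, t, drift=0.0, x_coef=0.0, t_coef=0.0):
    """Instantaneous drift depending linearly on position x and time t."""
    return drift + x_coef * x + t_coef * t

# Leaky integrator: negative x coefficient opposes displacement from zero.
leaky = linear_drift(x=0.5, t=0.0, x_coef=-1.0)    # pulls back toward 0
# Unstable integrator: positive x coefficient amplifies displacement.
unstable = linear_drift(x=0.5, t=0.0, x_coef=1.0)  # pushes away from 0
```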

Sine wave evidence

We use evidence in the form of a sine wave as an example of how to construct a new model class.

Suppose we have a task where evidence varies according to a sine wave which has a different frequency on different trials. The frequency is a feature of the task, and will be the same for all components of the model. Thus, it is a “condition”. By contrast, how strongly the animal weights the evidence is not observable and only exists internal to the model. It is a “parameter”, or something that we must fit to the data. This model can then be defined as:

import numpy as np
from pyddm.models import Drift
class DriftSine(Drift):
    name = "Sine-wave drifts"
    required_conditions = ["frequency"]
    required_parameters = ["scale"]
    def get_drift(self, t, conditions, **kwargs):
        return np.sin(t*conditions["frequency"]*2*np.pi)*self.scale

In this case, frequency is externally provided per trial, and is thus defined as a condition. By contrast, scale is a parameter to fit, and is thus defined as a parameter. We then use the DriftSine class to define the model:

from pyddm import Model
model = Model(name='Sine-wave evidence',
              drift=DriftSine(scale=0.5))
sol = model.solve(conditions={"frequency": 5})

The model is solved and the result is saved in the variable “sol”, from which the probability of each choice and other outputs can be retrieved. Finally, note that the conditions, being externally defined (e.g. trial-by-trial), must be passed in the call to model.solve. The parameters, such as scale, are defined within the respective classes. Depending on the context, a parameter can be either a constant (as done here) or a Fittable object, if fitting to data is required.
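The expression in get_drift can be checked in isolation: at a quarter period the sine peaks, so the drift equals scale, and at t = 0 the drift is zero. A quick standalone check of the same formula (the helper name here is illustrative):

```python
import numpy as np

def sine_drift(t, frequency, scale):
    # Same expression as DriftSine.get_drift above
    return np.sin(t * frequency * 2 * np.pi) * scale

scale, frequency = 0.5, 5
peak_t = 1 / (4 * frequency)   # quarter period: sin(pi/2) = 1
assert np.isclose(sine_drift(peak_t, frequency, scale), scale)
assert np.isclose(sine_drift(0.0, frequency, scale), 0.0)
```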

Try it out with:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(drift=DriftSine(scale=Fittable(minval=0, maxval=2)),
              dx=.01, dt=.01)
model_gui(model, conditions={"frequency": [0, 4, 8]})

Coherence-dependent drift rate

from pyddm import Drift
class DriftCoherence(Drift):
    name = "Drift depends linearly on coherence"
    required_parameters = ["driftcoh"] # <-- Parameters we want to include in the model
    required_conditions = ["coh"] # <-- Task parameters ("conditions"). Should be the same name as in the sample.
    
    # We must always define the get_drift function, which is used to compute the instantaneous value of drift.
    def get_drift(self, conditions, **kwargs):
        return self.driftcoh * conditions['coh']

Try it out with:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(drift=DriftCoherence(driftcoh=Fittable(minval=0, maxval=1)),
              dx=.01, dt=.01)
model_gui(model, conditions={"coh": [0, .25, .5]})

Biased coherence-dependent drift rate

To model tasks in which some trials are expected to have a biased drift rate (e.g. tasks with a reward bias or strong trial history effects), we may want the drift rate to be biased in a certain direction (called “me” in some models) depending on task parameters. In the case of reward bias, whereby high reward trials have a higher drift rate, we need two task parameters: one describing the coherence of the trial (“coh”) and one indicating whether the trial was high reward (1) or low reward (0) (“highreward”). Then we have two parameters: the amount by which coherence impacts the drift rate (“driftcoh”) and the amount of reward bias (“rewbias”).

from pyddm import Drift
class DriftCoherenceRewBias(Drift):
    name = "Drift depends linearly on coherence, with a reward bias"
    required_parameters = ["driftcoh", "rewbias"] # <-- Parameters we want to include in the model
    required_conditions = ["coh", "highreward"] # <-- Task parameters ("conditions"). Should be the same name as in the sample.
    
    # We must always define the get_drift function, which is used to compute the instantaneous value of drift.
    def get_drift(self, conditions, **kwargs):
        rew_bias = self.rewbias * (1 if conditions['highreward'] == 1 else -1)
        return self.driftcoh * conditions['coh'] + rew_bias

Try it out with:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(drift=DriftCoherenceRewBias(
                        driftcoh=Fittable(minval=0, maxval=1),
                        rewbias=Fittable(minval=0, maxval=1)),
              dx=.01, dt=.01)
model_gui(model, conditions={"coh": [0, .25, .5], "highreward": [0, 1]})

Coherence-dependent drift rate with leak

from pyddm import Drift
class DriftCoherenceLeak(Drift):
    name = "Leaky drift depends linearly on coherence"
    required_parameters = ["driftcoh", "leak"] # <-- Parameters we want to include in the model
    required_conditions = ["coh"] # <-- Task parameters ("conditions"). Should be the same name as in the sample.
    
    # We must always define the get_drift function, which is used to compute the instantaneous value of drift.
    def get_drift(self, x, conditions, **kwargs):
        return self.driftcoh * conditions['coh'] + self.leak * x
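Because get_drift now receives the current position x, a negative leak makes the drift oppose displacement from zero (mean-reverting), while a positive leak amplifies it. A quick standalone check of the same expression (the function name is illustrative):

```python
def leaky_coh_drift(x, coh, driftcoh, leak):
    # Same computation as DriftCoherenceLeak.get_drift above
    return driftcoh * coh + leak * x

# Negative leak: the further from zero, the more the drift pulls back.
assert leaky_coh_drift(x=1.0, coh=0, driftcoh=0.5, leak=-0.5) == -0.5
assert leaky_coh_drift(x=-1.0, coh=0, driftcoh=0.5, leak=-0.5) == 0.5
# Positive leak: displacement is amplified instead.
assert leaky_coh_drift(x=1.0, coh=0, driftcoh=0.5, leak=0.5) == 0.5
```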

Try it out with:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(drift=DriftCoherenceLeak(driftcoh=Fittable(minval=0, maxval=1),
                                       leak=Fittable(minval=-1, maxval=1)),
              dx=.01, dt=.01)
model_gui(model, conditions={"coh": [0, .25, .5]})

Urgency gain function

To implement a gain function, we want to scale both the drift rate and the noise by some function. First, let’s define such a function to be a simple linear ramp.

def urgency_gain(t, gain_start, gain_slope):
    return gain_start + t*gain_slope

Now we can define a Drift model component which uses this.

from pyddm import Drift
class DriftUrgencyGain(Drift):
    name = "drift rate with an urgency function"
    required_parameters = ["snr", "gain_start", "gain_slope"]
    def get_drift(self, t, **kwargs):
        return self.snr * urgency_gain(t, self.gain_start, self.gain_slope)

We also need to define a Noise model component which uses it. This allows us to keep the signal-to-noise ratio (SNR) fixed.

from pyddm import Noise
class NoiseUrgencyGain(Noise):
    name = "noise level with an urgency function"
    required_parameters = ["gain_start", "gain_slope"]
    def get_noise(self, t, **kwargs):
        return urgency_gain(t, self.gain_start, self.gain_slope)
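Because drift and noise share the same gain function, their ratio equals snr at every time point, which is the sense in which the SNR stays fixed. A standalone check of that invariant (urgency_gain is repeated here so the snippet is self-contained):

```python
def urgency_gain(t, gain_start, gain_slope):
    return gain_start + t * gain_slope

snr, gain_start, gain_slope = 1.5, 0.5, 1.0
for t in [0.0, 0.25, 1.0]:
    drift = snr * urgency_gain(t, gain_start, gain_slope)
    noise = urgency_gain(t, gain_start, gain_slope)
    assert abs(drift / noise - snr) < 1e-12  # ratio is snr at every t
```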

Finally, here is an example of how we would use this. Note that it utilizes shared parameters:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
gain_start = Fittable(minval=0, maxval=1)
gain_slope = Fittable(minval=0, maxval=2)
m = Model(drift=DriftUrgencyGain(snr=Fittable(minval=0, maxval=2),
                                 gain_start=gain_start,
                                 gain_slope=gain_slope),
          noise=NoiseUrgencyGain(gain_start=gain_start,
                                 gain_slope=gain_slope),
          dt=.01, dx=.01)
model_gui(model=m)

Across-trial variability in drift rate

While across-trial variability in drift rate is possible in PyDDM, it is not efficient or ergonomic. Unlike variability in starting position or non-decision time, the distribution of drift rates must be discretized, and each must be run separately. Here, we demonstrate how to do this for a uniform distribution.

In order to run such a model, first you must prepare your Sample by running it through the following function. This makes one duplicate of each of your data points for each discretization point. As a result, note that the likelihood, BIC, and other summary statistics about the data or fit quality may be inaccurate.

import copy
import pyddm as ddm
import numpy as np
RESOLUTION = 11
def prepare_sample_for_variable_drift(sample, resolution=RESOLUTION):
    new_samples = []
    for i in range(0, resolution):
        choice_upper = sample.choice_upper.copy()
        choice_lower = sample.choice_lower.copy()
        undecided = sample.undecided
        conditions = copy.deepcopy(sample.conditions)
        conditions['driftnum'] = (np.asarray([i]*len(choice_upper)),
                                  np.asarray([i]*len(choice_lower)),
                                  np.asarray([i]*undecided))
        new_samples.append(ddm.Sample(choice_upper, choice_lower, undecided, choice_names=sample.choice_names, **conditions))
    new_sample = new_samples.pop()
    for s in new_samples:
        new_sample += s
    return new_sample

After using the above function to generate a new sample, use a class such as the following, which provides a uniformly-distributed drift rate.

class DriftUniform(ddm.Drift):
    """Drift with trial-to-trial variability.
    
    Note that this is a numerical approximation to trial-to-trial
    drift rate variability and is inefficient.  It also requires
    running the "prepare_sample_for_variable_drift" function above on
    the data.
    """
    name = "Uniformly-distributed drift"
    resolution = RESOLUTION # Number of bins, should be an odd number
    required_parameters = ['drift', 'width'] # Mean drift and the width of the uniform distribution
    required_conditions = ['driftnum']
    def get_drift(self, conditions, **kwargs):
        stepsize = self.width/(self.resolution-1)
        mindrift = self.drift - self.width/2
        return mindrift + stepsize*conditions['driftnum']
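The discretization in get_drift maps driftnum = 0 .. resolution-1 onto evenly spaced drift rates spanning [drift - width/2, drift + width/2], centered on the mean. A standalone check of that mapping with the same arithmetic:

```python
drift, width, resolution = 1.0, 0.5, 11
stepsize = width / (resolution - 1)
mindrift = drift - width / 2
drifts = [mindrift + stepsize * i for i in range(resolution)]

assert abs(drifts[0] - 0.75) < 1e-9    # lower edge: drift - width/2
assert abs(drifts[-1] - 1.25) < 1e-9   # upper edge: drift + width/2
assert abs(sum(drifts) / len(drifts) - drift) < 1e-9  # mean equals drift
```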

This can be used like any other Drift class. For example:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(drift=DriftUniform(drift=Fittable(minval=0, maxval=2),
                                 width=Fittable(minval=0, maxval=2)),
              dx=.01, dt=.01)
model_gui(model, conditions={"driftnum": list(range(0, 11))})

Moment-to-moment observations

Suppose we have measured a parameter at each moment in each trial and would like to compute a drift rate based upon it. Let’s suppose we measured the subject’s skin conductance response (SCR) while performing the task, and we hypothesize that evidence integration is stronger when there is a higher SCR. We can discretize the SCR signal (say, into 100 ms bins) and use this to determine the drift rate. This means that there will only be one trial per condition, and that each value of the condition will be a tuple of SCR values. Note that it must be a tuple, and cannot be a list.

Let’s save the SCR into the “signal” condition in the code below. We can write:

class DriftMomentToMoment(ddm.Drift):
    """Drift rate which depends on trial-wise observations over time"""
    name = "Moment-to-moment drift"
    BINSIZE = .1 # 100 ms per bin
    required_parameters = ['drift_multiplier'] # How much to scale moment-to-moment drift
    required_conditions = ['signal'] # should be a list of values which determine the moment-to-moment drift
    def get_drift(self, t, conditions, **kwargs):
        bin_number = int(t//self.BINSIZE) # Which bin are we currently in?
        n_bins = len(conditions['signal']) # Total number of bins for this condition
        # If we are currently in a bin which exceeds the total bins, fix to the last bin
        if bin_number >= n_bins:
            bin_number = n_bins-1
        # Compute the moment-to-moment drift
        return conditions['signal'][bin_number] * self.drift_multiplier

This can be used in a model for any sample which has the “signal” condition, where “signal” is a tuple of values, giving the magnitude of the SCR at each timepoint (spaced BINSIZE apart).
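The bin lookup in get_drift can be checked in isolation: the bin index is t // BINSIZE, clamped to the last bin once t runs past the end of the signal. A standalone sketch of the same lookup (the function name is illustrative):

```python
def signal_at(t, signal, binsize=0.1):
    # Same lookup as DriftMomentToMoment.get_drift above
    bin_number = int(t // binsize)
    n_bins = len(signal)
    if bin_number >= n_bins:
        bin_number = n_bins - 1   # clamp to the last bin
    return signal[bin_number]

signal = (0.2, 0.5, 0.9)                 # 3 bins of 100 ms each
assert signal_at(0.05, signal) == 0.2    # first bin
assert signal_at(0.15, signal) == 0.5    # second bin
assert signal_at(5.0, signal) == 0.9     # past the end: clamped to last bin
```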

Alternatively, we could extend this beyond just drift rate. Suppose we hypothesized that, instead of drift rate, the urgency signal was related to pupil diameter. We can write:

def signal_to_urgency(t, signal, binsize=.1):
    bin_number = int(t//binsize) # Which bin are we currently in?
    n_bins = len(signal) # Total number of bins for this condition
    # If we are currently in a bin which exceeds the total bins, fix to the last bin
    if bin_number >= n_bins:
        bin_number = n_bins-1
    return 1 + signal[bin_number]

class DriftUrgencyMomentToMoment(ddm.Drift):
    """Drift rate which varies over time, differently for each trial"""
    name = "Moment-to-moment urgency drift"
    required_parameters = ['snr'] # How much to scale moment-to-moment drift
    required_conditions = ['signal'] # should be a list of values which determine the moment-to-moment drift
    def get_drift(self, t, conditions, **kwargs):
        return signal_to_urgency(t, conditions['signal']) * self.snr

class NoiseUrgencyMomentToMoment(ddm.Noise):
    """Noise rate which varies over time, differently for each trial"""
    name = "Moment-to-moment urgency noise"
    required_parameters = [] # No parameters are needed for the noise component
    required_conditions = ['signal'] # should be a list of values which determine the moment-to-moment drift
    def get_noise(self, t, conditions, **kwargs):
        return signal_to_urgency(t, conditions['signal'])

Then, we could create a model using the following:

from pyddm import Model, Fittable
from pyddm.plot import model_gui
m = Model(drift=DriftUrgencyMomentToMoment(snr=Fittable(minval=0, maxval=2)),
          noise=NoiseUrgencyMomentToMoment(),
          dt=.01, dx=.01)
model_gui(m, conditions={"signal": [(.1, .1, .1, .1, .1, .1, 0), (0.0, 1, 2, 3), (0, 0.0, 0, 0, 1)]})