# Recipes for Initial Conditions

## General use of initial conditions

Initial conditions can be included in a model by passing an `InitialCondition` object directly to the Model constructor. For example, for a uniform distribution centered at 0, do:

```python
from pyddm import Model
from pyddm.models import ICRange
model = Model(IC=ICRange(sz=.2))
```
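
To see what a discretized starting distribution looks like, we can reproduce the uniform case by hand. A minimal pure-Python sketch (assuming `sz` is the half-width of the support, and using a hypothetical grid from -1 to 1, to stay independent of any particular pyddm version):

```python
# Sketch: build the discretized uniform starting-point distribution by hand.
# Mimics ICRange(sz=.2), assuming sz is the half-width of the support.
dx = 0.01
x = [-1 + i * dx for i in range(201)]   # hypothetical grid from -1 to 1
sz = 0.2
# Equal mass on every grid cell within the support; dx/10 padding guards
# against floating point error at the edges
pdf = [1.0 if abs(xi) <= sz + dx / 10 else 0.0 for xi in x]
total = sum(pdf)
pdf = [p / total for p in pdf]          # normalize so the masses sum to 1
print(round(sum(pdf), 6))               # 1.0
```

The normalization at the end guarantees the masses sum to 1, which is what every `get_IC()` method must return.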

## Biased Initial Conditions

Often we want to model side biases, whether they arise naturally or are introduced experimentally through asymmetric reward or stimulus probabilities. The most popular way to model a side bias is a starting position closer to the boundary representing that side.

To do this, we must first include a condition in our dataset describing whether the correct answer was the left side or the right side. Suppose we have a sample which has the `left_is_correct` condition, which is 1 if the correct answer is on the left side, and 0 if the correct answer is on the right side. Now, we can define an `InitialCondition` object which uses this information. We do this by defining a `get_IC()` method. This method should generate a discretized probability distribution for the starting position. Here, we want this distribution to be a single point `x0`, the sign of which (positive or negative) depends on whether the correct answer is on the left or right side. The function receives the support of the distribution in the `x` argument. We can model this with:

```python
import numpy as np
from pyddm import InitialCondition
class ICPointSideBias(InitialCondition):
    name = "A starting point with a left or right bias."
    required_parameters = ["x0"]
    required_conditions = ["left_is_correct"]
    def get_IC(self, x, dx, conditions):
        start = np.round(self.x0/dx)
        # Positive bias for left choices, negative for right choices
        if not conditions['left_is_correct']:
            start = -start
        shift_i = int(start + (len(x)-1)/2)
        assert shift_i >= 0 and shift_i < len(x), "Invalid initial conditions"
        pdf = np.zeros(len(x))
        pdf[shift_i] = 1. # Initial condition at x=self.x0.
        return pdf
```

Then we can compare the distribution of left-correct trials to that of right-correct trials:

```python
from pyddm import Model
from pyddm.plot import plot_compare_solutions
import matplotlib.pyplot as plt
model = Model(IC=ICPointSideBias(x0=.3))
s1 = model.solve(conditions={"left_is_correct": 1})
s2 = model.solve(conditions={"left_is_correct": 0})
plot_compare_solutions(s1, s2)
plt.show()
```

We can also see these directly in the model GUI:

```python
from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(IC=ICPointSideBias(x0=Fittable(minval=0, maxval=1)),
              dx=.01, dt=.01)
model_gui(model, conditions={"left_is_correct": [0, 1]})
```
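
The grid-index arithmetic used in `ICPointSideBias.get_IC()` can be checked in isolation. A pure-Python sketch with a hypothetical 201-point grid from -1 to 1 (so `dx=.01` and x=0 sits at the middle index):

```python
# Verify the grid-index arithmetic used in ICPointSideBias.get_IC.
dx = 0.01
x = [-1 + i * dx for i in range(201)]   # 201-point grid from -1 to 1
x0 = 0.3

start = round(x0 / dx)                  # 30 grid steps from the center
center = (len(x) - 1) // 2              # index of x = 0

left_i = int(center + start)            # left correct: positive shift
right_i = int(center - start)           # right correct: sign flipped
print(x[left_i], x[right_i])            # ~0.3 and ~-0.3
```

So a single unit of probability mass lands on the grid cell nearest `x0`, mirrored across zero when the correct side flips.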

To represent the initial condition more accurately, we can split the probability mass between the two grid points neighboring the desired starting position, in proportion to their distance from it:

```python
import numpy as np
from pyddm import InitialCondition
class ICPointSideBiasInterp(InitialCondition):
    name = "A Dirac delta function at a position dictated by the left or right side."
    required_parameters = ["x0"]
    required_conditions = ["left_is_correct"]
    def get_IC(self, x, dx, conditions):
        start_in = np.floor(self.x0/dx)
        start_out = np.sign(start_in)*(np.abs(start_in)+1)
        w_in = np.abs(start_out - self.x0/dx)
        w_out = np.abs(self.x0/dx - start_in)
        if not conditions['left_is_correct']:
            start_in = -start_in
            start_out = -start_out
        shift_in_i = int(start_in + (len(x)-1)/2)
        shift_out_i = int(start_out + (len(x)-1)/2)
        if w_in > 0:
            assert shift_in_i >= 0 and shift_in_i < len(x), "Invalid initial conditions"
        if w_out > 0:
            assert shift_out_i >= 0 and shift_out_i < len(x), "Invalid initial conditions"
        pdf = np.zeros(len(x))
        pdf[shift_in_i] = w_in # Initial condition at the inner grid point next to x=self.x0.
        pdf[shift_out_i] = w_out # Initial condition at the outer grid point next to x=self.x0.
        return pdf
```

Try it out with:

```python
from pyddm import Model, Fittable
from pyddm.plot import model_gui
model = Model(IC=ICPointSideBiasInterp(x0=Fittable(minval=0, maxval=1)),
              dx=.01, dt=.01)
model_gui(model, conditions={"left_is_correct": [0, 1]})
```

In practice, these two approaches give very similar results, but the interpolated version has a smoother derivative with respect to `x0`, which may be useful for gradient-based fitting methods (not used by default).
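
To see why the interpolated version is smoother, note that as `x0` varies the rounded index of the first recipe jumps in steps of `dx`, while the interpolation weights `w_in` and `w_out` vary continuously and always sum to 1. A pure-Python sketch for positive `x0` (where the outer neighbor is simply `start_in + 1`):

```python
import math

dx = 0.01
for x0 in (0.302, 0.305, 0.308):
    start_in = math.floor(x0 / dx)          # inner grid point (toward 0)
    start_out = start_in + 1                # outer neighbor (positive x0 only)
    w_in = abs(start_out - x0 / dx)         # weight on the inner point
    w_out = abs(x0 / dx - start_in)         # weight on the outer point
    assert abs(w_in + w_out - 1) < 1e-9     # weights always sum to 1
    print(round(w_in, 3), round(w_out, 3))
```

As `x0` slides from one grid point to the next, the mass transfers gradually rather than jumping all at once.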

## Fixed ratio instead of fixed value

When fitting both the initial condition and the bound height, it can be preferable to express the initial condition as a proportion of the total distance between the bounds. This ensures that the initial condition will always stay within the bounds, preventing errors in fitting.

```python
import numpy as np
from pyddm import InitialCondition
class ICPointSideBiasRatio(InitialCondition):
    name = "A side-biased starting point expressed as a proportion of the distance between the bounds."
    required_parameters = ["x0"]
    required_conditions = ["left_is_correct"]
    def get_IC(self, x, dx, conditions):
        x0 = self.x0/2 + .5 # rescale from [-1, 1] to [0, 1]
        # Bias > .5 for left side correct, bias < .5 for right side correct.
        # On the original scale, positive bias for left, negative for right.
        if not conditions['left_is_correct']:
            x0 = 1-x0
        shift_i = int((len(x)-1)*x0)
        assert shift_i >= 0 and shift_i < len(x), "Invalid initial conditions"
        pdf = np.zeros(len(x))
        pdf[shift_i] = 1. # Initial condition at x = self.x0 * B.
        return pdf
```

Try it out with:

```python
from pyddm import Model, Fittable
from pyddm.models import BoundConstant
from pyddm.plot import model_gui
model = Model(IC=ICPointSideBiasRatio(x0=Fittable(minval=-1, maxval=1)),
              bound=BoundConstant(B=Fittable(minval=.1, maxval=2)),
              dx=.01, dt=.01)
model_gui(model, conditions={"left_is_correct": [0, 1]})
```
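
The key property of the ratio parameterization can be verified without pyddm: any `x0` in [-1, 1] maps to a valid grid index, no matter what bound height generated the grid. A pure-Python sketch with a hypothetical 201-point grid:

```python
dx = 0.01
x = [i * dx for i in range(201)]      # stand-in grid; actual extent depends on B

for x0_raw in (-1.0, -0.5, 0.0, 0.5, 1.0):
    x0 = x0_raw / 2 + 0.5             # rescale from [-1, 1] to [0, 1]
    shift_i = int((len(x) - 1) * x0)  # fraction of the way across the grid
    assert 0 <= shift_i < len(x)      # always a valid index, for any bound
    print(x0_raw, shift_i)
```

Because the index is a fraction of `len(x)` rather than a fixed distance, changing the fitted bound height can never push the starting point outside the grid.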

## Biased Initial Condition Range

We can combine a side bias with starting-point variability by shifting a uniform distribution of half-width `sz` toward the correct side:

```python
import numpy as np
import scipy.stats
from pyddm import InitialCondition
class ICPointRange(InitialCondition):
    name = "A shifted side-biased uniform distribution"
    required_parameters = ["x0", "sz"]
    required_conditions = ["left_is_correct"]
    def get_IC(self, x, dx, conditions, *args, **kwargs):
        # Check for valid initial conditions
        assert abs(self.x0) + abs(self.sz) < np.max(x), \
            "Invalid x0 and sz: distribution goes past simulation boundaries"
        # Positive bias for left correct, negative for right
        x0 = self.x0 if conditions["left_is_correct"] else -self.x0
        # Pad the width by dx/10 so grid points at the upper edge of the
        # support are not excluded by floating point error
        pdf = scipy.stats.uniform(x0 - self.sz, 2*self.sz + dx/10).pdf(x)
        return pdf/np.sum(pdf)
```

Try it out with constant drift using:

```python
from pyddm import Model, Fittable, DriftConstant
from pyddm.plot import model_gui
model = Model(drift=DriftConstant(drift=1),
              IC=ICPointRange(x0=Fittable(minval=0, maxval=.5),
                              sz=Fittable(minval=0, maxval=.49)),
              dx=.01, dt=.01)
model_gui(model, conditions={"left_is_correct": [0, 1]})
```
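
The resulting distribution can be reproduced by hand to confirm its support. A pure-Python sketch mimicking `ICPointRange(x0=.3, sz=.1)` on a hypothetical grid from -1 to 1, using the same small padding trick in place of `scipy.stats.uniform`:

```python
dx = 0.01
x = [-1 + i * dx for i in range(201)]
x0, sz = 0.3, 0.1

# Uniform mass over [x0 - sz, x0 + sz], normalized to sum to 1; dx/10 padding
# keeps the edge grid points inside the support despite floating point error
pdf = [1.0 if x0 - sz - dx/10 <= xi <= x0 + sz + dx/10 else 0.0 for xi in x]
total = sum(pdf)
pdf = [p / total for p in pdf]
support = [xi for xi, p in zip(x, pdf) if p > 0]
print(round(min(support), 2), round(max(support), 2))   # 0.2 0.4
```

The mass is spread evenly over `[x0 - sz, x0 + sz]`, shifted to the side of the correct answer.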

## Cauchy-distributed Initial Conditions

The starting point can also be drawn from a heavy-tailed distribution, such as a Cauchy distribution centered at zero, truncated at the bounds by the normalization step:

```python
import numpy as np
import scipy.stats
from pyddm import InitialCondition
class ICCauchy(InitialCondition):
    name = "Cauchy distribution"
    required_parameters = ["scale"]
    def get_IC(self, x, dx, *args, **kwargs):
        pdf = scipy.stats.cauchy(0, self.scale).pdf(x)
        return pdf/np.sum(pdf)
```

Try it out with:

```python
from pyddm import Model, Fittable, BoundCollapsingLinear
from pyddm.plot import model_gui
model = Model(IC=ICCauchy(scale=Fittable(minval=.001, maxval=.3)),
              bound=BoundCollapsingLinear(t=0, B=1),
              dx=.01, dt=.01)
model_gui(model)
```
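
The same evaluate-and-normalize pattern works for any density. A pure-Python sketch reproducing the idea behind `ICCauchy` without scipy, using the closed-form Cauchy density 1/(πγ(1+(x/γ)²)):

```python
import math

dx = 0.01
x = [-1 + i * dx for i in range(201)]
scale = 0.1

# Cauchy density evaluated on the grid and renormalized: truncation at the
# bounds is handled implicitly by the division, as in ICCauchy above
pdf = [1 / (math.pi * scale * (1 + (xi / scale) ** 2)) for xi in x]
total = sum(pdf)
pdf = [p / total for p in pdf]
print(round(sum(pdf), 6))   # 1.0
```

Any density with tails extending past the bounds is implicitly truncated and renormalized by the final division, so `get_IC()` still returns a valid discretized distribution.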