# patsy.builtins API reference¶

This module defines some tools that are automatically made available to code evaluated in formulas. You can also access it directly; use from patsy.builtins import * to import the same variables that formula code receives automatically.

patsy.builtins.I(x)

The identity function. Simply returns its input unchanged.

Since Patsy’s formula parser ignores anything inside function call syntax, this is useful for ‘hiding’ arithmetic operations from it. For instance:

y ~ x1 + x2


has x1 and x2 as two separate predictors. But in:

y ~ I(x1 + x2)


we instead have a single predictor, defined to be the sum of x1 and x2.
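Conceptually, I is nothing more than the identity function. A minimal sketch (not patsy’s actual implementation, which also carries docstring metadata):

```python
def I(x):
    """Identity: return the input unchanged.

    Inside a formula, Python (not the formula parser) evaluates the
    argument, so I(x1 + x2) yields the numeric sum rather than two
    separate model terms.
    """
    return x

# Outside a formula it behaves like plain Python evaluation:
x1, x2 = 2, 3
print(I(x1 + x2))  # 5: one value, i.e. one predictor, not two
```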

patsy.builtins.Q(name)

A way to ‘quote’ variable names, especially ones that do not otherwise meet Python’s variable name rules.

If x is a variable, Q("x") returns the value of x. (Note that Q takes the string "x", not the value of x itself.) This works even if instead of x, we have a variable name that would not otherwise be legal in Python.

For example, if you have a column of data named weight.in.kg, then you can’t write:

y ~ weight.in.kg


because Python will try to find a variable named weight, that has an attribute named in, that has an attribute named kg. (And worse yet, in is a reserved word, which makes this example doubly broken.) Instead, write:

y ~ Q("weight.in.kg")


and all will be well. Note, though, that this requires embedding a Python string inside your formula, which may require some care with your quote marks. Some standard options include:

my_fit_function("y ~ Q('weight.in.kg')", ...)
my_fit_function('y ~ Q("weight.in.kg")', ...)
my_fit_function("y ~ Q(\"weight.in.kg\")", ...)


Note also that Q is an ordinary Python function, which means that you can use it in more complex expressions. For example, this is a legal formula:

y ~ np.sqrt(Q("weight.in.kg"))
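The real Q resolves the name in the formula’s evaluation environment; purely as an illustration, here is a toy stand-in that resolves names against an explicit dict (the data argument and lookup logic are simplifications, not patsy’s API):

```python
def toy_Q(name, data):
    """Toy stand-in for patsy's Q: fetch a value by its (possibly
    non-identifier) column name instead of evaluating it as Python code."""
    return data[name]

data = {"weight.in.kg": [70.0, 82.5, 65.3]}
# data["weight.in.kg"] is a perfectly legal dict lookup, even though
# weight.in.kg would be a broken attribute chain as Python code.
print(toy_Q("weight.in.kg", data))
```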

class patsy.builtins.ContrastMatrix(matrix, column_suffixes)

A simple container for a matrix used for coding categorical factors.

Attributes:

matrix

A 2d ndarray, where each column corresponds to one column of the resulting design matrix, and each row contains the entries for a single categorical variable level. Usually n-by-n for a full rank coding or n-by-(n-1) for a reduced rank coding, though other options are possible.

column_suffixes

A list of strings to be appended to the factor name, to produce the final column names. E.g. for treatment coding the entries will look like "[T.level1]".
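The interplay between matrix and column_suffixes can be sketched with a small stand-in container (a simplification: the real class validates its arguments and stores a 2d ndarray):

```python
class ToyContrastMatrix:
    """Minimal stand-in: rows = factor levels, columns = design-matrix columns."""
    def __init__(self, matrix, column_suffixes):
        self.matrix = matrix                    # list of rows, one per level
        self.column_suffixes = column_suffixes  # one suffix per column

    def column_names(self, factor_name):
        # Each final column name is the factor name plus a suffix.
        return [factor_name + s for s in self.column_suffixes]

# Reduced-rank treatment coding for levels a1, a2, a3 (a1 = reference):
cm = ToyContrastMatrix(
    [[0, 0],   # a1 (reference level)
     [1, 0],   # a2
     [0, 1]],  # a3
    ["[T.a2]", "[T.a3]"],
)
print(cm.column_names("C(a, Treatment)"))
```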

class patsy.builtins.Treatment(reference=None)

Treatment coding (also known as dummy coding).

This is the default coding.

For reduced-rank coding, one level is chosen as the “reference”, and its mean behaviour is represented by the intercept. Each column of the resulting matrix represents the difference between the mean of one level and this reference level.

For full-rank coding, classic “dummy” coding is used, and each column of the resulting matrix represents the mean of the corresponding level.

The reference level defaults to the first level, or can be specified explicitly.

# reduced rank
In [1]: dmatrix("C(a, Treatment)", balanced(a=3))
Out[1]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Treatment)[T.a2]  C(a, Treatment)[T.a3]
1                      0                      0
1                      1                      0
1                      0                      1
Terms:
'Intercept' (column 0)
'C(a, Treatment)' (columns 1:3)

# full rank
In [2]: dmatrix("0 + C(a, Treatment)", balanced(a=3))
Out[2]:
DesignMatrix with shape (3, 3)
C(a, Treatment)[a1]  C(a, Treatment)[a2]  C(a, Treatment)[a3]
1                    0                    0
0                    1                    0
0                    0                    1
Terms:
'C(a, Treatment)' (columns 0:3)

# Setting a reference level
In [3]: dmatrix("C(a, Treatment(1))", balanced(a=3))
Out[3]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Treatment(1))[T.a1]  C(a, Treatment(1))[T.a3]
1                         1                         0
1                         0                         0
1                         0                         1
Terms:
'Intercept' (column 0)
'C(a, Treatment(1))' (columns 1:3)

In [4]: dmatrix("C(a, Treatment('a2'))", balanced(a=3))
Out[4]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Treatment('a2'))[T.a1]  C(a, Treatment('a2'))[T.a3]
1                            1                            0
1                            0                            0
1                            0                            1
Terms:
'Intercept' (column 0)
"C(a, Treatment('a2'))" (columns 1:3)


Equivalent to R’s contr.treatment. The R documentation suggests that using Treatment(reference=-1) will produce contrasts that are “equivalent to those produced by many (but not all) SAS procedures”.
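The reduced-rank matrices above can be reproduced by hand. A plain-Python sketch (patsy’s own implementation differs, but the arithmetic is the same):

```python
def treatment_coding(levels, reference=0):
    """Reduced-rank treatment coding: one indicator column per
    non-reference level; the reference level's row is all zeros."""
    others = [i for i in range(len(levels)) if i != reference]
    return [[1 if i == j else 0 for j in others] for i in range(len(levels))]

# Matches the reduced-rank example above (reference = a1):
print(treatment_coding(["a1", "a2", "a3"]))
# Moving the reference to index 1 reproduces the Treatment(1) example:
print(treatment_coding(["a1", "a2", "a3"], reference=1))
```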

code_with_intercept(levels)
code_without_intercept(levels)
class patsy.builtins.Poly(scores=None)

Orthogonal polynomial contrast coding.

This coding scheme treats the levels as ordered samples from an underlying continuous scale, whose effect takes an unknown functional form that is Taylor-decomposed into the sum of linear, quadratic, etc. components.

For reduced-rank coding, you get a linear column, a quadratic column, and so on, up to one less than the number of levels (for n levels, polynomial degrees 1 through n-1).

For full-rank coding, the same scheme is used, except that the zero-order constant polynomial is also included. I.e., you get an intercept column included as part of your categorical term.

By default the levels are treated as equally spaced, but you can override this by providing a value for the scores argument.

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Poly)", balanced(a=4))
Out[1]:
DesignMatrix with shape (4, 4)
Intercept  C(a, Poly).Linear  C(a, Poly).Quadratic  C(a, Poly).Cubic
1           -0.67082                   0.5          -0.22361
1           -0.22361                  -0.5           0.67082
1            0.22361                  -0.5          -0.67082
1            0.67082                   0.5           0.22361
Terms:
'Intercept' (column 0)
'C(a, Poly)' (columns 1:4)

# Full rank
In [2]: dmatrix("0 + C(a, Poly)", balanced(a=3))
Out[2]:
DesignMatrix with shape (3, 3)
C(a, Poly).Constant  C(a, Poly).Linear  C(a, Poly).Quadratic
1           -0.70711               0.40825
1           -0.00000              -0.81650
1            0.70711               0.40825
Terms:
'C(a, Poly)' (columns 0:3)

# Explicit scores
In [3]: dmatrix("C(a, Poly([1, 2, 10]))", balanced(a=3))
Out[3]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Poly([1, 2, 10])).Linear  C(a, Poly([1, 2, 10])).Quadratic
1                       -0.47782                           0.66208
1                       -0.33447                          -0.74485
1                        0.81229                           0.08276
Terms:
'Intercept' (column 0)
'C(a, Poly([1, 2, 10]))' (columns 1:3)


This is equivalent to R’s contr.poly. (But note that in R, full rank encodings are always dummy-coded, regardless of what contrast you have set.)
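The columns above can be derived by orthonormalizing successive powers of the scores against the constant column. A pure-Python sketch using classical Gram-Schmidt, assuming the default equally spaced scores (this is the underlying linear algebra, not patsy’s implementation):

```python
import math

def poly_contrasts(n):
    """Orthonormal polynomial contrast columns for n equally spaced levels."""
    scores = list(range(n))
    basis = [[1.0] * n]  # start from the constant column
    cols = []
    for degree in range(1, n):
        v = [float(s) ** degree for s in scores]
        for b in basis:  # subtract projections onto earlier columns
            dot = sum(vi * bi for vi, bi in zip(v, b))
            nrm = sum(bi * bi for bi in b)
            v = [vi - dot / nrm * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        v = [vi / norm for vi in v]
        basis.append(v)
        cols.append(v)
    return cols  # [linear, quadratic, ...] columns

linear, quadratic, cubic = poly_contrasts(4)
print([round(v, 5) for v in linear])  # [-0.67082, -0.22361, 0.22361, 0.67082]
```

These match the C(a, Poly).Linear and C(a, Poly).Quadratic columns in the reduced-rank example above.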

code_with_intercept(levels)
code_without_intercept(levels)
class patsy.builtins.Sum(omit=None)

Deviation coding (also known as sum-to-zero coding).

Compares the mean of each level to the mean-of-means. (In a balanced design, compares the mean of each level to the overall mean.)

For full-rank coding, a standard intercept term is added.

One level must be omitted to avoid redundancy; by default this is the last level, but this can be adjusted via the omit argument.

Warning

There are multiple definitions of ‘deviation coding’ in use. Make sure this is the one you expect before trying to interpret your results!

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Sum)", balanced(a=4))
Out[1]:
DesignMatrix with shape (4, 4)
Intercept  C(a, Sum)[S.a1]  C(a, Sum)[S.a2]  C(a, Sum)[S.a3]
1                1                0                0
1                0                1                0
1                0                0                1
1               -1               -1               -1
Terms:
'Intercept' (column 0)
'C(a, Sum)' (columns 1:4)

# Full rank
In [2]: dmatrix("0 + C(a, Sum)", balanced(a=4))
Out[2]:
DesignMatrix with shape (4, 4)
C(a, Sum)[mean]  C(a, Sum)[S.a1]  C(a, Sum)[S.a2]  C(a, Sum)[S.a3]
1                1                0                0
1                0                1                0
1                0                0                1
1               -1               -1               -1
Terms:
'C(a, Sum)' (columns 0:4)

# Omit a different level
In [3]: dmatrix("C(a, Sum(1))", balanced(a=3))
Out[3]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Sum(1))[S.a1]  C(a, Sum(1))[S.a3]
1                   1                   0
1                  -1                  -1
1                   0                   1
Terms:
'Intercept' (column 0)
'C(a, Sum(1))' (columns 1:3)

In [4]: dmatrix("C(a, Sum('a1'))", balanced(a=3))
Out[4]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Sum('a1'))[S.a2]  C(a, Sum('a1'))[S.a3]
1                     -1                     -1
1                      1                      0
1                      0                      1
Terms:
'Intercept' (column 0)
"C(a, Sum('a1'))" (columns 1:3)


This is equivalent to R’s contr.sum.
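The reduced-rank sum coding above follows a simple rule: each retained level gets an indicator column, and the omitted level’s row is all -1. A sketch (assuming, as in patsy’s default, that the last level is omitted):

```python
def sum_coding(n_levels, omit=None):
    """Reduced-rank deviation coding: identity rows for retained
    levels, a row of -1s for the omitted level (default: last)."""
    omit = n_levels - 1 if omit is None else omit
    kept = [i for i in range(n_levels) if i != omit]
    return [
        [-1] * (n_levels - 1) if i == omit
        else [1 if i == j else 0 for j in kept]
        for i in range(n_levels)
    ]

# Matches the reduced-rank example above (4 levels, last omitted):
print(sum_coding(4))
# Omitting level index 1 reproduces the Sum(1) example:
print(sum_coding(3, omit=1))
```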

code_with_intercept(levels)
code_without_intercept(levels)
class patsy.builtins.Helmert

Helmert contrasts.

Compares the second level with the first, the third with the average of the first two, and so on.

For full-rank coding, a standard intercept term is added.

Warning

There are multiple definitions of ‘Helmert coding’ in use. Make sure this is the one you expect before trying to interpret your results!

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Helmert)", balanced(a=4))
Out[1]:
DesignMatrix with shape (4, 4)
Intercept  C(a, Helmert)[H.a2]  C(a, Helmert)[H.a3]  C(a, Helmert)[H.a4]
1                   -1                   -1                   -1
1                    1                   -1                   -1
1                    0                    2                   -1
1                    0                    0                    3
Terms:
'Intercept' (column 0)
'C(a, Helmert)' (columns 1:4)

# Full rank
In [2]: dmatrix("0 + C(a, Helmert)", balanced(a=4))
Out[2]:
DesignMatrix with shape (4, 4)
Columns:
['C(a, Helmert)[H.intercept]',
'C(a, Helmert)[H.a2]',
'C(a, Helmert)[H.a3]',
'C(a, Helmert)[H.a4]']
Terms:
'C(a, Helmert)' (columns 0:4)
(to view full data, use np.asarray(this_obj))


This is equivalent to R’s contr.helmert.
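The Helmert columns above follow a fixed pattern: the column for level k+1 has -1 for each earlier level, k at level k+1, and 0 afterwards. A plain-Python sketch:

```python
def helmert_coding(n_levels):
    """Reduced-rank Helmert coding: column k (k = 1..n-1) contrasts
    level k+1 against the mean of levels 1..k."""
    return [
        [-1 if i < k else (k if i == k else 0) for k in range(1, n_levels)]
        for i in range(n_levels)
    ]

# Matches the reduced-rank example above for 4 levels:
print(helmert_coding(4))
```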

code_with_intercept(levels)
code_without_intercept(levels)
class patsy.builtins.Diff

Backward difference coding.

This coding scheme is useful for ordered factors, and compares the mean of each level with the preceding level. So you get the second level minus the first, the third level minus the second, etc.

For full-rank coding, a standard intercept term is added (which gives the mean value for the first level).

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Diff)", balanced(a=3))
Out[1]:
DesignMatrix with shape (3, 3)
Intercept  C(a, Diff)[D.a1]  C(a, Diff)[D.a2]
1          -0.66667          -0.33333
1           0.33333          -0.33333
1           0.33333           0.66667
Terms:
'Intercept' (column 0)
'C(a, Diff)' (columns 1:3)

# Full rank
In [2]: dmatrix("0 + C(a, Diff)", balanced(a=3))
Out[2]:
DesignMatrix with shape (3, 3)
C(a, Diff)[D.a1]  C(a, Diff)[D.a2]  C(a, Diff)[D.a3]
1          -0.66667          -0.33333
1           0.33333          -0.33333
1           0.33333           0.66667
Terms:
'C(a, Diff)' (columns 0:3)
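The backward-difference entries above come from a closed form: for n levels, column j (j = 1..n-1) equals (j-n)/n for rows before level j+1 and j/n from level j+1 onward. A sketch:

```python
def diff_coding(n_levels):
    """Reduced-rank backward difference coding for n ordered levels."""
    n = n_levels
    return [
        [j / n if i >= j else (j - n) / n for j in range(1, n)]
        for i in range(n)
    ]

# Matches the reduced-rank example above for 3 levels:
for row in diff_coding(3):
    print([round(v, 5) for v in row])
```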

code_with_intercept(levels)
code_without_intercept(levels)
patsy.builtins.C(data, contrast=None, levels=None)

Marks some data as being categorical, and specifies how to interpret it.

This is used for three reasons:

• To explicitly mark some data as categorical. For instance, integer data is by default treated as numerical. If you have data that is stored using an integer type, but where you want patsy to treat each different value as a different level of a categorical factor, you can wrap it in a call to C to accomplish this. E.g., compare:

dmatrix("a", {"a": [1, 2, 3]})
dmatrix("C(a)", {"a": [1, 2, 3]})

• To explicitly set the levels or override the default level ordering for categorical data, e.g.:

dmatrix("C(a, levels=['a2', 'a1'])", balanced(a=2))

• To override the default coding scheme for categorical data. The contrast argument can be any of the coding schemes documented above (Treatment, Poly, Sum, Helmert, Diff), given either as a class or as an instance (e.g. Treatment(reference=1)), or an explicit ContrastMatrix object.
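The numeric-versus-categorical distinction in the first bullet can be illustrated without patsy: treated numerically, a column enters the design matrix as-is; treated categorically, each distinct value becomes an indicator column. A schematic sketch (not patsy’s actual expansion logic, which also handles contrasts and level ordering):

```python
a = [1, 2, 3, 2]

# Treated as numeric ("a"): one column, values used directly.
numeric_column = [float(v) for v in a]

# Treated as categorical ("C(a)"): one indicator column per level.
levels = sorted(set(a))
indicator_rows = [[1 if v == lvl else 0 for lvl in levels] for v in a]

print(numeric_column)   # [1.0, 2.0, 3.0, 2.0]
print(indicator_rows)   # e.g. the row for value 2 is [0, 1, 0]
```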

patsy.builtins.center(x)

A stateful transform that centers input data, i.e., subtracts the mean.

If input has multiple columns, centers each column separately.

Equivalent to standardize(x, rescale=False).

patsy.builtins.standardize(x, center=True, rescale=True, ddof=0)

A stateful transform that standardizes input data, i.e. it subtracts the mean and divides by the sample standard deviation.

Either centering or rescaling or both can be disabled by use of keyword arguments. The ddof argument controls the delta degrees of freedom when computing the standard deviation (cf. numpy.std()). The default of ddof=0 produces the maximum likelihood estimate; use ddof=1 if you prefer the square root of the unbiased estimate of the variance.

If input has multiple columns, standardizes each column separately.

Note

This function computes the mean and standard deviation using a memory-efficient online algorithm, making it suitable for use with large incrementally processed data-sets.
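The one-pass computation mentioned in the note is typically done with Welford’s online algorithm, which updates a running mean and a running sum of squared deviations one observation at a time. A sketch of that algorithm (not patsy’s internal code):

```python
import math

class OnlineStandardizer:
    """Welford's algorithm: one-pass, numerically stable mean/variance."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self, ddof=0):
        return math.sqrt(self.m2 / (self.n - ddof))

s = OnlineStandardizer()
for x in [1.0, 2.0, 3.0, 4.0]:
    s.update(x)
print(s.mean)         # 2.5
print(s.std(ddof=0))  # population std: sqrt(1.25)
```

With ddof=0 this is the maximum likelihood estimate described above; ddof=1 gives the square root of the unbiased variance estimate.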

patsy.builtins.scale(*args, **kwargs)

An alias for standardize(x, center=True, rescale=True, ddof=0); it accepts the same arguments and behaves identically. See standardize() above for details.

patsy.builtins.bs(x, df=None, knots=None, degree=3, include_intercept=False, lower_bound=None, upper_bound=None)

Generates a B-spline basis for x, allowing non-linear fits. The usual usage is something like:

y ~ 1 + bs(x, 4)


to fit y as a smooth function of x, with 4 degrees of freedom given to the smooth.

Parameters:

• df – The number of degrees of freedom to use for this spline. The return value will have this many columns. You must specify at least one of df and knots.

• knots – The interior knots to use for the spline. If unspecified, then equally spaced quantiles of the input data are used. You must specify at least one of df and knots.

• degree – The degree of the spline to use.

• include_intercept – If True, the resulting spline basis will span the intercept term (i.e., the constant function). If False (the default), it will not, which is useful for avoiding overspecification in models that include multiple spline terms and/or an intercept term.

• lower_bound – The lower exterior knot location.

• upper_bound – The upper exterior knot location.

A spline with degree=0 is piecewise constant with breakpoints at each knot, and the default knot positions are quantiles of the input. So if you find yourself wanting to quantize a continuous variable into num_bins equal-sized bins with a constant effect across each bin, you can use bs(x, num_bins - 1, degree=0). (The - 1 is because one degree of freedom will be taken by the intercept; alternatively, you could leave the intercept term out of your model and use bs(x, num_bins, degree=0, include_intercept=True).)
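The quantize-into-bins trick can be sketched without scipy: compute quantile breakpoints and emit one indicator column per bin. This is a schematic of what a degree-0 basis achieves (nearest-rank quantiles for simplicity), not patsy’s actual spline code:

```python
def quantile_bins(x, num_bins):
    """Piecewise-constant 'spline': indicator columns for quantile bins."""
    xs = sorted(x)
    # Interior breakpoints at equally spaced quantiles (nearest-rank).
    cuts = [xs[int(len(xs) * k / num_bins)] for k in range(1, num_bins)]

    def bin_of(v):
        return sum(v >= c for c in cuts)  # index of v's bin

    return [[1 if bin_of(v) == b else 0 for b in range(num_bins)] for v in x]

x = [0.1, 0.4, 0.5, 0.7, 0.9, 1.0]
print(quantile_bins(x, 3))  # each row has a single 1 marking its bin
```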

A spline with degree=1 is piecewise linear with breakpoints at each knot.

The default is degree=3, which gives a cubic b-spline.

This is a stateful transform (for details see Stateful transforms). If knots, lower_bound, or upper_bound are not specified, they will be calculated from the data and then the chosen values will be remembered and re-used for prediction from the fitted model.

Using this function requires scipy be installed.

Note

This function is very similar to the R function of the same name. In cases where both produce output at all (e.g., R’s bs will raise an error if degree=0, while patsy’s will not), they should produce identical output given identical input and parameter settings.

Warning

I’m not sure on what the proper handling of points outside the lower/upper bounds is, so for now attempting to evaluate a spline basis at such points produces an error. Patches gratefully accepted.

New in version 0.2.0.

patsy.builtins.cr(x, df=None, knots=None, lower_bound=None, upper_bound=None, constraints=None)

Generates a natural cubic spline basis for x (with the option of absorbing centering or more general parameter constraints), allowing non-linear fits. The usual usage is something like:

y ~ 1 + cr(x, df=5, constraints='center')


to fit y as a smooth function of x, with 5 degrees of freedom given to the smooth, and centering constraint absorbed in the resulting design matrix. Note that in this example, due to the centering constraint, 6 knots will get computed from the input data x to achieve 5 degrees of freedom.

Note

This function reproduces the cubic regression splines ‘cr’ and ‘cs’ as implemented in the R package ‘mgcv’ (GAM modelling).

Parameters:

• df – The number of degrees of freedom to use for this spline. The return value will have this many columns. You must specify at least one of df and knots.

• knots – The interior knots to use for the spline. If unspecified, then equally spaced quantiles of the input data are used. You must specify at least one of df and knots.

• lower_bound – The lower exterior knot location.

• upper_bound – The upper exterior knot location.

• constraints – Either a 2-d array defining general linear constraints (that is, np.dot(constraints, betas) is zero, where betas denotes the array of initial parameters, corresponding to the initial unconstrained design matrix), or the string 'center' indicating that a centering constraint should be applied (this constraint will be computed from the input data, remembered, and re-used for prediction from the fitted model). The constraints are absorbed into the resulting design matrix, which means that the model is actually rewritten in terms of unconstrained parameters. For more details see Spline regression.

This is a stateful transform (for details see Stateful transforms). If knots, lower_bound, or upper_bound are not specified, they will be calculated from the data and then the chosen values will be remembered and re-used for prediction from the fitted model.

Using this function requires scipy be installed.

New in version 0.3.0.

patsy.builtins.cc(x, df=None, knots=None, lower_bound=None, upper_bound=None, constraints=None)

Generates a cyclic cubic spline basis for x (with the option of absorbing centering or more general parameter constraints), allowing non-linear fits. The usual usage is something like:

y ~ 1 + cc(x, df=7, constraints='center')


to fit y as a smooth function of x, with 7 degrees of freedom given to the smooth, and centering constraint absorbed in the resulting design matrix. Note that in this example, due to the centering and cyclic constraints, 9 knots will get computed from the input data x to achieve 7 degrees of freedom.

Note

This function reproduces the cyclic cubic regression splines ‘cc’ as implemented in the R package ‘mgcv’ (GAM modelling).

Parameters:

• df – The number of degrees of freedom to use for this spline. The return value will have this many columns. You must specify at least one of df and knots.

• knots – The interior knots to use for the spline. If unspecified, then equally spaced quantiles of the input data are used. You must specify at least one of df and knots.

• lower_bound – The lower exterior knot location.

• upper_bound – The upper exterior knot location.

• constraints – Either a 2-d array defining general linear constraints (that is, np.dot(constraints, betas) is zero, where betas denotes the array of initial parameters, corresponding to the initial unconstrained design matrix), or the string 'center' indicating that a centering constraint should be applied (this constraint will be computed from the input data, remembered, and re-used for prediction from the fitted model). The constraints are absorbed into the resulting design matrix, which means that the model is actually rewritten in terms of unconstrained parameters. For more details see Spline regression.

This is a stateful transform (for details see Stateful transforms). If knots, lower_bound, or upper_bound are not specified, they will be calculated from the data and then the chosen values will be remembered and re-used for prediction from the fitted model.

Using this function requires scipy be installed.

New in version 0.3.0.

patsy.builtins.te(s1, .., sn, constraints=None)

Generates a smooth of several covariates as a tensor product of the bases of marginal univariate smooths s1, .., sn. The marginal smooths are required to transform input univariate data into some kind of smooth function basis, producing a 2-d array output with the (i, j) element corresponding to the value of the j th basis function at the i th data point. The resulting basis dimension is the product of the basis dimensions of the marginal smooths. The usual usage is something like:

y ~ 1 + te(cr(x1, df=5), cc(x2, df=6), constraints='center')


to fit y as a smooth function of both x1 and x2, with a natural cubic spline basis for the x1 marginal smooth, a cyclic cubic spline basis for the x2 marginal smooth, and the centering constraint absorbed into the resulting design matrix.

Parameters:

• constraints – Either a 2-d array defining general linear constraints (that is, np.dot(constraints, betas) is zero, where betas denotes the array of initial parameters, corresponding to the initial unconstrained design matrix), or the string 'center' indicating that a centering constraint should be applied (this constraint will be computed from the input data, remembered, and re-used for prediction from the fitted model). The constraints are absorbed into the resulting design matrix, which means that the model is actually rewritten in terms of unconstrained parameters. For more details see Spline regression.

Using this function requires scipy be installed.

Note

This function reproduces the tensor product smooth ‘te’ as implemented in the R package ‘mgcv’ (GAM modelling). See also ‘Generalized Additive Models’, Simon N. Wood, 2006, pp. 158-163.

New in version 0.3.0.