patsy API reference

This is a complete reference for everything you get when you import patsy.

Basic API

patsy.dmatrix(formula_like, data={}, eval_env=0, NA_action='drop', return_type='matrix')

Construct a single design matrix given a formula_like and data.

Parameters:
  • formula_like – An object that can be used to construct a design matrix. See below.
  • data – A dict-like object that can be used to look up variables referenced in formula_like.
  • eval_env – Either an EvalEnvironment which will be used to look up any variables referenced in formula_like that cannot be found in data, or else a depth represented as an integer which will be passed to EvalEnvironment.capture(). eval_env=0 means to use the context of the function calling dmatrix() for lookups. If calling this function from a library, you probably want eval_env=1, which means that variables should be resolved in your caller’s namespace.
  • NA_action – What to do with rows that contain missing values. You can "drop" them, "raise" an error, or for customization, pass an NAAction object. See NAAction for details on what values count as ‘missing’ (and how to alter this).
  • return_type – Either "matrix" or "dataframe". See below.

The formula_like can take a variety of forms. You can use any of the following:

  • (The most common option) A formula string like "x1 + x2" (for dmatrix()) or "y ~ x1 + x2" (for dmatrices()). For details see How formulas work.
  • A ModelDesc, which is a Python object representation of a formula. See How formulas work and Model specification for experts and computers for details.
  • A DesignMatrixBuilder.
  • An object that has a method called __patsy_get_model_desc__(). For details see Model specification for experts and computers.
  • A numpy array_like (for dmatrix()) or a tuple (array_like, array_like) (for dmatrices()). These will have metadata added, representation normalized, and then be returned directly. In this case data and eval_env are ignored. There is special handling for two cases:
    • DesignMatrix objects will have their DesignInfo preserved. This allows you to set up custom column names and term information even if you aren’t using the rest of the patsy machinery.
    • pandas.DataFrame or pandas.Series objects will have their (row) indexes checked. If two are passed in, their indexes must be aligned. If return_type="dataframe", then their indexes will be preserved on the output.

Regardless of the input, the return type is always either:

  • A DesignMatrix, if return_type="matrix" (the default)
  • A pandas.DataFrame, if return_type="dataframe".

The actual contents of the design matrix are identical in both cases, and in both cases a DesignInfo object will be available in a .design_info attribute on the return value. However, for return_type="dataframe", any pandas indexes on the input (either in data or directly passed through formula_like) will be preserved, which may be useful for e.g. time-series models.
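
For concreteness, here is a minimal sketch of the most common call pattern (the variable names and data are invented for illustration):

import numpy as np
from patsy import dmatrix

data = {"x1": [1, 2, 3, 4], "x2": [1.5, 0.5, 2.0, 1.0]}
mat = dmatrix("x1 + x2", data)
mat.design_info.column_names        # ['Intercept', 'x1', 'x2']
df = dmatrix("x1 + x2", data, return_type="dataframe")  # requires pandas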

New in version 0.2.0: The NA_action argument.

patsy.dmatrices(formula_like, data={}, eval_env=0, NA_action='drop', return_type='matrix')

Construct two design matrices given a formula_like and data.

This function is identical to dmatrix(), except that it requires (and returns) two matrices instead of one. By convention, the first matrix is the “outcome” or “y” data, and the second is the “predictor” or “x” data.

See dmatrix() for details.
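
A hedged sketch of the same pattern with an outcome variable (names and data invented):

from patsy import dmatrices

data = {"y": [1, 2, 3, 4], "x1": [1, 0, 1, 0], "x2": [2.0, 3.0, 4.0, 5.0]}
y, X = dmatrices("y ~ x1 + x2", data)
# y has the single column 'y'; X has columns ['Intercept', 'x1', 'x2']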

patsy.incr_dbuilders(formula_like, data_iter_maker, eval_env=0, NA_action='drop')

Construct two design matrix builders incrementally from a large data set.

incr_dbuilders() is to incr_dbuilder() as dmatrices() is to dmatrix(). See incr_dbuilder() for details.

patsy.incr_dbuilder(formula_like, data_iter_maker, eval_env=0, NA_action='drop')

Construct a design matrix builder incrementally from a large data set.

Parameters:
  • formula_like – Similar to dmatrix(), except that explicit matrices are not allowed. Must be a formula string, a ModelDesc, a DesignMatrixBuilder, or an object with a __patsy_get_model_desc__ method.
  • data_iter_maker – A zero-argument callable which returns an iterator over dict-like data objects. This must be a callable rather than a simple iterator because sufficiently complex formulas may require multiple passes over the data (e.g. if there are nested stateful transforms).
  • eval_env – Either a EvalEnvironment which will be used to look up any variables referenced in formula_like that cannot be found in data, or else a depth represented as an integer which will be passed to EvalEnvironment.capture(). eval_env=0 means to use the context of the function calling incr_dbuilder() for lookups. If calling this function from a library, you probably want eval_env=1, which means that variables should be resolved in your caller’s namespace.
  • NA_action – An NAAction object or string, used to determine what values count as ‘missing’ for purposes of determining the levels of categorical factors.
Returns:

A DesignMatrixBuilder

Tip: for data_iter_maker, write a generator like:

def iter_maker():
    for data_chunk in my_data_store:
        yield data_chunk

and pass iter_maker (not iter_maker()).
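
Putting the pieces together, here is a hedged sketch of incremental processing; the chunks list stands in for whatever actually produces your data in pieces:

from patsy import incr_dbuilder, build_design_matrices

chunks = [{"x": [1.0, 2.0, 3.0]}, {"x": [4.0, 5.0, 6.0]}]

def iter_maker():
    for data_chunk in chunks:
        yield data_chunk

# One or more passes over the data to set up stateful transforms, etc.:
builder = incr_dbuilder("center(x)", iter_maker)
# Then build the design matrix chunk by chunk:
mats = [build_design_matrices([builder], chunk)[0] for chunk in chunks]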

New in version 0.2.0: The NA_action argument.

exception patsy.PatsyError(message, origin=None)

This is the main error type raised by Patsy functions.

In addition to the usual Python exception features, you can pass a second argument to this function specifying the origin of the error; this is included in any error message, and used to help the user locate errors arising from malformed formulas. This second argument should be an Origin object, or else an arbitrary object with a .origin attribute. (If it is neither of these things, then it will simply be ignored.)

For ordinary display to the user with default formatting, use str(exc). If you want to do something cleverer, you can use the .message and .origin attributes directly. (The latter may be None.)
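
A small illustrative sketch of catching and displaying a PatsyError (the malformed formula here is invented, and should trigger a parse error):

from patsy import dmatrix, PatsyError

try:
    dmatrix("x +", {"x": [1, 2, 3]})
except PatsyError as e:
    print(str(e))    # message plus a caret marker showing where the problem is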

Convenience utilities

patsy.balanced(factor_name=num_levels[, factor_name=num_levels, ..., repeat=1])

Create simple balanced factorial designs for testing.

Given some factor names and the number of desired levels for each, generates a balanced factorial design in the form of a data dictionary. For example:

In [1]: balanced(a=2, b=3)
Out[1]: 
{'a': ['a1', 'a1', 'a1', 'a2', 'a2', 'a2'],
 'b': ['b1', 'b2', 'b3', 'b1', 'b2', 'b3']}

By default it produces exactly one instance of each combination of levels, but if you want multiple replicates this can be accomplished via the repeat argument:

In [1]: balanced(a=2, b=2, repeat=2)
Out[1]: 
{'a': ['a1', 'a1', 'a2', 'a2', 'a1', 'a1', 'a2', 'a2'],
 'b': ['b1', 'b2', 'b1', 'b2', 'b1', 'b2', 'b1', 'b2']}

patsy.demo_data(*names, nlevels=2, min_rows=5)

Create simple categorical/numerical demo data.

Pass in a set of variable names, and this function will return a simple data set using those variable names.

Names whose first letter falls in the range “a” through “m” will be made categorical (with nlevels levels). Those that start with a “p” through “z” are numerical.

We attempt to produce a balanced design on the categorical variables, repeating as necessary to generate at least min_rows data points. Categorical variables are returned as a list of strings.

Numerical data is generated by sampling from a normal distribution. A fixed random seed is used, so that identical calls to demo_data() will produce identical results. Numerical data is returned in a numpy array.

Example:
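
(A hedged sketch; the numerical values depend on demo_data's fixed random seed and are not shown here.)

from patsy import demo_data

d = demo_data("a", "b", "x")
sorted(d.keys())   # ['a', 'b', 'x']
d["a"]             # ['a1', 'a1', 'a2', 'a2', 'a1', 'a1', 'a2', 'a2'] -- categorical
d["x"]             # numpy array of 8 draws from a normal distribution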

Design metadata

class patsy.DesignInfo(column_names, term_slices=None, term_name_slices=None, builder=None)

A DesignInfo object holds metadata about a design matrix.

This is the main object that Patsy uses to pass information to statistical libraries. Usually encountered as the .design_info attribute on design matrices.

Here’s an example of the most common way to get a DesignInfo:

In [1]: mat = dmatrix("a + x", demo_data("a", "x", nlevels=3))

In [2]: di = mat.design_info

column_names

The names of each column, represented as a list of strings in the proper order. Guaranteed to exist.

In [1]: di.column_names
Out[1]: ['Intercept', 'a[T.a2]', 'a[T.a3]', 'x']

column_name_indexes

An OrderedDict mapping column names (as strings) to column indexes (as integers). Guaranteed to exist and to be sorted from low to high.

In [1]: di.column_name_indexes
Out[1]: OrderedDict([('Intercept', 0), ('a[T.a2]', 1), ('a[T.a3]', 2), ('x', 3)])

term_names

The names of each term, represented as a list of strings in the proper order. Guaranteed to exist. There is a one-to-many relationship between terms and columns – each term generates one or more columns.

In [1]: di.term_names
Out[1]: ['Intercept', 'a', 'x']

term_name_slices

An OrderedDict mapping term names (as strings) to Python slice() objects indicating which columns correspond to each term. Guaranteed to exist. The slices are guaranteed to be sorted from left to right and to cover the whole range of columns with no overlaps or gaps.

In [1]: di.term_name_slices
Out[1]: OrderedDict([('Intercept', slice(0, 1, None)), ('a', slice(1, 3, None)), ('x', slice(3, 4, None))])

terms

A list of Term objects representing each term. May be None, for example if a user passed in a plain preassembled design matrix rather than using the Patsy machinery.

In [1]: di.terms
Out[1]: [Term([]), Term([EvalFactor('a')]), Term([EvalFactor('x')])]

In [2]: [term.name() for term in di.terms]
Out[2]: ['Intercept', 'a', 'x']

term_slices

An OrderedDict mapping Term objects to Python slice() objects indicating which columns correspond to which terms. Like terms, this may be None.

In [1]: di.term_slices
Out[1]: OrderedDict([(Term([]), slice(0, 1, None)), (Term([EvalFactor('a')]), slice(1, 3, None)), (Term([EvalFactor('x')]), slice(3, 4, None))])

builder

A DesignMatrixBuilder object that can be used to generate more design matrices of this type (e.g. for prediction). May be None.

A number of convenience methods are also provided that take advantage of the above metadata:

describe()

Returns a human-readable string describing this design info.

Example:

In [1]: y, X = dmatrices("y ~ x1 + x2", demo_data("y", "x1", "x2"))

In [2]: y.design_info.describe()
Out[2]: 'y'

In [3]: X.design_info.describe()
Out[3]: '1 + x1 + x2'

Warning

There is no guarantee that the strings returned by this function can be parsed as formulas. They are best-effort descriptions intended for human users.

linear_constraint(constraint_likes)

Construct a linear constraint in matrix form from a (possibly symbolic) description.

Possible inputs:

  • A dictionary which is taken as a set of equality constraints. Keys can be either string column names, or integer column indexes.
  • A string giving an arithmetic expression referring to the matrix columns by name.
  • A list of such strings which are ANDed together.
  • A tuple (A, b) where A and b are array_likes, and the constraint is Ax = b. If necessary, these will be coerced to the proper dimensionality by appending dimensions with size 1.

The string-based language has the standard arithmetic operators, / * + - and parentheses, plus “=” is used for equality and “,” is used to AND together multiple constraint equations within a string. If no = appears in some expression, then that expression is assumed to be equal to zero. Division is always float-based, even if __future__.division isn’t in effect.

Returns a LinearConstraint object.

Examples:

di = DesignInfo(["x1", "x2", "x3"])

# Equivalent ways to write x1 == 0:
di.linear_constraint({"x1": 0})  # by name
di.linear_constraint({0: 0})  # by index
di.linear_constraint("x1 = 0")  # string based
di.linear_constraint("x1")  # can leave out "= 0"
di.linear_constraint("2 * x1 = (x1 + 2 * x1) / 3")
di.linear_constraint(([1, 0, 0], 0))  # constraint matrices

# Equivalent ways to write x1 == 0 and x3 == 10
di.linear_constraint({"x1": 0, "x3": 10})
di.linear_constraint({0: 0, 2: 10})
di.linear_constraint({0: 0, "x3": 10})
di.linear_constraint("x1 = 0, x3 = 10")
di.linear_constraint("x1, x3 = 10")
di.linear_constraint(["x1", "x3 = 0"])  # list of strings
di.linear_constraint("x1 = 0, x3 - 10 = x1")
di.linear_constraint(([[1, 0, 0], [0, 0, 1]], [0, 10]))

# You can also chain together equalities, just like Python:
di.linear_constraint("x1 = x2 = 3")

slice(columns_specifier)

Locate a subset of design matrix columns, specified symbolically.

A patsy design matrix has two levels of structure: the individual columns (which are named), and the terms in the formula that generated those columns. This is a one-to-many relationship: a single term may span several columns. This method provides a user-friendly API for locating those columns.

(While we talk about columns here, this is probably most useful for indexing into other arrays that are derived from the design matrix, such as regression coefficients or covariance matrices.)

The columns_specifier argument can take a number of forms:

  • A term name
  • A column name
  • A Term object
  • An integer giving a raw index
  • A raw slice object

In all cases, a Python slice() object is returned, which can be used directly for indexing.

Example:

y, X = dmatrices("y ~ a", demo_data("y", "a", nlevels=3))
betas = np.linalg.lstsq(X, y)[0]
a_betas = betas[X.design_info.slice("a")]

(If you want to look up a single individual column by name, use design_info.column_name_indexes[name].)

classmethod from_array(array_like, default_column_prefix='column')

Find or construct a DesignInfo appropriate for a given array_like.

If the input array_like already has a .design_info attribute, then it will be returned. Otherwise, a new DesignInfo object will be constructed, using names either taken from the array_like (e.g., for a pandas DataFrame with named columns), or constructed using default_column_prefix.

This is how dmatrix() (for example) creates a DesignInfo object if an arbitrary matrix is passed in.

Parameters:
  • array_like – An ndarray or pandas container.
  • default_column_prefix – If it’s necessary to invent column names, then this will be used to construct them.
Returns:

a DesignInfo object
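
A hedged sketch (the exact default names are expected to be the prefix plus a column index):

import numpy as np
from patsy import DesignInfo

di = DesignInfo.from_array(np.ones((4, 2)))
di.column_names     # ['column0', 'column1']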

class patsy.DesignMatrix

A simple numpy array subclass that carries design matrix metadata.

design_info

A DesignInfo object containing metadata about this design matrix.

This class also defines a fancy __repr__ method with labeled columns. Otherwise it is identical to a regular numpy ndarray.

Warning

You should never check for this class using isinstance(). Limitations of the numpy API mean that it is impossible to prevent the creation of numpy arrays that have type DesignMatrix, but that are not actually design matrices (and such objects will behave like regular ndarrays in every way). Instead, check for the presence of a .design_info attribute – this will be present only on “real” DesignMatrix objects.

static __new__(input_array, design_info=None, default_column_prefix='column')

Create a DesignMatrix, or cast an existing matrix to a DesignMatrix.

A call like:

DesignMatrix(my_array)

will convert an arbitrary array_like object into a DesignMatrix.

The return from this function is guaranteed to be a two-dimensional ndarray with a real-valued floating point dtype, and a .design_info attribute which matches its shape. If the design_info argument is not given, then one is created via DesignInfo.from_array() using the given default_column_prefix.

Depending on the input array, it is possible this will pass through its input unchanged, or create a view.
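
A hedged sketch of casting a plain nested list to a DesignMatrix:

from patsy import DesignMatrix

dm = DesignMatrix([[1, 2], [3, 4]], default_column_prefix="x")
dm.shape                        # (2, 2), with a floating point dtype
dm.design_info.column_names     # ['x0', 'x1']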

Stateful transforms

Patsy comes with a number of stateful transforms built in:

patsy.center(x)

A stateful transform that centers input data, i.e., subtracts the mean.

If input has multiple columns, centers each column separately.

Equivalent to standardize(x, rescale=False)

patsy.standardize(x, center=True, rescale=True, ddof=0)

A stateful transform that standardizes input data, i.e. it subtracts the mean and divides by the sample standard deviation.

Either centering or rescaling or both can be disabled by use of keyword arguments. The ddof argument controls the delta degrees of freedom when computing the standard deviation (cf. numpy.std()). The default of ddof=0 produces the maximum likelihood estimate; use ddof=1 if you prefer the square root of the unbiased estimate of the variance.

If input has multiple columns, standardizes each column separately.

Note

This function computes the mean and standard deviation using a memory-efficient online algorithm, making it suitable for use with large incrementally processed data-sets.

patsy.scale(x, center=True, rescale=True, ddof=0)

An alias for standardize(), for R compatibility.
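
A hedged sketch of how the statefulness matters in practice (data invented): the mean and standard deviation are computed from the data seen at build time and then remembered.

import numpy as np
from patsy import dmatrix, build_design_matrices

train = {"x": np.array([1.0, 2.0, 3.0, 4.0])}
mat = dmatrix("standardize(x)", train)      # remembers the mean/std of train["x"]

# At prediction time, the *training* mean/std are applied to the new data:
new = build_design_matrices([mat.design_info.builder],
                            {"x": np.array([10.0])})[0]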

Finally, this is not itself a stateful transform, but it’s useful if you want to define your own:

patsy.stateful_transform(class_)

Create a stateful transform callable object from a class that fulfills the stateful transform protocol.
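
A hedged sketch of the protocol (the class itself is invented for illustration; the method names follow the Stateful transforms chapter):

import numpy as np
import patsy

class MyDemean(object):
    """Subtract the mean of all data seen during the memorization passes."""
    def __init__(self):
        self._total = 0.0
        self._count = 0
    def memorize_chunk(self, x):
        x = np.asarray(x, dtype=float)
        self._total += x.sum()
        self._count += x.size
    def memorize_finish(self):
        self._mean = self._total / self._count
    def transform(self, x):
        return np.asarray(x, dtype=float) - self._mean

my_demean = patsy.stateful_transform(MyDemean)
# my_demean(x) can now be used inside formulas, e.g. dmatrix("my_demean(x)", data)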

Handling categorical data

class patsy.Treatment(reference=None)

Treatment coding (also known as dummy coding).

This is the default coding.

For reduced-rank coding, one level is chosen as the “reference”, and its mean behaviour is represented by the intercept. Each column of the resulting matrix represents the difference between the mean of one level and this reference level.

For full-rank coding, classic “dummy” coding is used, and each column of the resulting matrix represents the mean of the corresponding level.

The reference level defaults to the first level, or can be specified explicitly.

# reduced rank
In [1]: dmatrix("C(a, Treatment)", balanced(a=3))
Out[1]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Treatment)[T.a2]  C(a, Treatment)[T.a3]
          1                      0                      0
          1                      1                      0
          1                      0                      1
  Terms:
    'Intercept' (column 0)
    'C(a, Treatment)' (columns 1:3)

# full rank
In [2]: dmatrix("0 + C(a, Treatment)", balanced(a=3))
Out[2]: 
DesignMatrix with shape (3, 3)
  C(a, Treatment)[a1]  C(a, Treatment)[a2]  C(a, Treatment)[a3]
                    1                    0                    0
                    0                    1                    0
                    0                    0                    1
  Terms:
    'C(a, Treatment)' (columns 0:3)

# Setting a reference level
In [3]: dmatrix("C(a, Treatment(1))", balanced(a=3))
Out[3]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Treatment(1))[T.a1]  C(a, Treatment(1))[T.a3]
          1                         1                         0
          1                         0                         0
          1                         0                         1
  Terms:
    'Intercept' (column 0)
    'C(a, Treatment(1))' (columns 1:3)

In [4]: dmatrix("C(a, Treatment('a2'))", balanced(a=3))
Out[4]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Treatment('a2'))[T.a1]  C(a, Treatment('a2'))[T.a3]
          1                            1                            0
          1                            0                            0
          1                            0                            1
  Terms:
    'Intercept' (column 0)
    "C(a, Treatment('a2'))" (columns 1:3)

Equivalent to R contr.treatment. The R documentation suggests that using Treatment(reference=-1) will produce contrasts that are “equivalent to those produced by many (but not all) SAS procedures”.

class patsy.Diff

Backward difference coding.

This coding scheme is useful for ordered factors, and compares the mean of each level with the preceding level. So you get the second level minus the first, the third level minus the second, etc.

For full-rank coding, a standard intercept term is added (which gives the mean value for the first level).

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Diff)", balanced(a=3))
Out[1]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Diff)[D.a1]  C(a, Diff)[D.a2]
          1          -0.66667          -0.33333
          1           0.33333          -0.33333
          1           0.33333           0.66667
  Terms:
    'Intercept' (column 0)
    'C(a, Diff)' (columns 1:3)

# Full rank
In [2]: dmatrix("0 + C(a, Diff)", balanced(a=3))
Out[2]: 
DesignMatrix with shape (3, 3)
  C(a, Diff)[D.a1]  C(a, Diff)[D.a2]  C(a, Diff)[D.a3]
                 1          -0.66667          -0.33333
                 1           0.33333          -0.33333
                 1           0.33333           0.66667
  Terms:
    'C(a, Diff)' (columns 0:3)

class patsy.Poly(scores=None)

Orthogonal polynomial contrast coding.

This coding scheme treats the levels as ordered samples from an underlying continuous scale, whose effect takes an unknown functional form which is Taylor-decomposed into the sum of linear, quadratic, etc. components.

For reduced-rank coding, you get a linear column, a quadratic column, etc., up to one less than the number of levels.

For full-rank coding, the same scheme is used, except that the zero-order constant polynomial is also included. I.e., you get an intercept column included as part of your categorical term.

By default the levels are treated as equally spaced, but you can override this by providing a value for the scores argument.

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Poly)", balanced(a=4))
Out[1]: 
DesignMatrix with shape (4, 4)
  Intercept  C(a, Poly).Linear  C(a, Poly).Quadratic  C(a, Poly).Cubic
          1           -0.67082                   0.5          -0.22361
          1           -0.22361                  -0.5           0.67082
          1            0.22361                  -0.5          -0.67082
          1            0.67082                   0.5           0.22361
  Terms:
    'Intercept' (column 0)
    'C(a, Poly)' (columns 1:4)

# Full rank
In [2]: dmatrix("0 + C(a, Poly)", balanced(a=3))
Out[2]: 
DesignMatrix with shape (3, 3)
  C(a, Poly).Constant  C(a, Poly).Linear  C(a, Poly).Quadratic
                    1           -0.70711               0.40825
                    1           -0.00000              -0.81650
                    1            0.70711               0.40825
  Terms:
    'C(a, Poly)' (columns 0:3)

# Explicit scores
In [3]: dmatrix("C(a, Poly([1, 2, 10]))", balanced(a=3))
Out[3]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Poly([1, 2, 10])).Linear  C(a, Poly([1, 2, 10])).Quadratic
          1                       -0.47782                           0.66208
          1                       -0.33447                          -0.74485
          1                        0.81229                           0.08276
  Terms:
    'Intercept' (column 0)
    'C(a, Poly([1, 2, 10]))' (columns 1:3)

This is equivalent to R’s contr.poly. (But note that in R, reduced rank encodings are always dummy-coded, regardless of what contrast you have set.)

class patsy.Sum(omit=None)

Deviation coding (also known as sum-to-zero coding).

Compares the mean of each level to the mean-of-means. (In a balanced design, compares the mean of each level to the overall mean.)

For full-rank coding, a standard intercept term is added.

One level must be omitted to avoid redundancy; by default this is the last level, but this can be adjusted via the omit argument.

Warning

There are multiple definitions of ‘deviation coding’ in use. Make sure this is the one you expect before trying to interpret your results!

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Sum)", balanced(a=4))
Out[1]: 
DesignMatrix with shape (4, 4)
  Intercept  C(a, Sum)[S.a1]  C(a, Sum)[S.a2]  C(a, Sum)[S.a3]
          1                1                0                0
          1                0                1                0
          1                0                0                1
          1               -1               -1               -1
  Terms:
    'Intercept' (column 0)
    'C(a, Sum)' (columns 1:4)

# Full rank
In [2]: dmatrix("0 + C(a, Sum)", balanced(a=4))
Out[2]: 
DesignMatrix with shape (4, 4)
  C(a, Sum)[mean]  C(a, Sum)[S.a1]  C(a, Sum)[S.a2]  C(a, Sum)[S.a3]
                1                1                0                0
                1                0                1                0
                1                0                0                1
                1               -1               -1               -1
  Terms:
    'C(a, Sum)' (columns 0:4)

# Omit a different level
In [3]: dmatrix("C(a, Sum(1))", balanced(a=3))
Out[3]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Sum(1))[S.a1]  C(a, Sum(1))[S.a3]
          1                   1                   0
          1                  -1                  -1
          1                   0                   1
  Terms:
    'Intercept' (column 0)
    'C(a, Sum(1))' (columns 1:3)

In [4]: dmatrix("C(a, Sum('a1'))", balanced(a=3))
Out[4]: 
DesignMatrix with shape (3, 3)
  Intercept  C(a, Sum('a1'))[S.a2]  C(a, Sum('a1'))[S.a3]
          1                     -1                     -1
          1                      1                      0
          1                      0                      1
  Terms:
    'Intercept' (column 0)
    "C(a, Sum('a1'))" (columns 1:3)

This is equivalent to R’s contr.sum.

class patsy.Helmert

Helmert contrasts.

Compares the second level with the first, the third with the average of the first two, and so on.

For full-rank coding, a standard intercept term is added.

Warning

There are multiple definitions of ‘Helmert coding’ in use. Make sure this is the one you expect before trying to interpret your results!

Examples:

# Reduced rank
In [1]: dmatrix("C(a, Helmert)", balanced(a=4))
Out[1]: 
DesignMatrix with shape (4, 4)
  Intercept  C(a, Helmert)[H.a2]  C(a, Helmert)[H.a3]  C(a, Helmert)[H.a4]
          1                   -1                   -1                   -1
          1                    1                   -1                   -1
          1                    0                    2                   -1
          1                    0                    0                    3
  Terms:
    'Intercept' (column 0)
    'C(a, Helmert)' (columns 1:4)

# Full rank
In [2]: dmatrix("0 + C(a, Helmert)", balanced(a=4))
Out[2]: 
DesignMatrix with shape (4, 4)
  Columns:
    ['C(a, Helmert)[H.intercept]',
     'C(a, Helmert)[H.a2]',
     'C(a, Helmert)[H.a3]',
     'C(a, Helmert)[H.a4]']
  Terms:
    'C(a, Helmert)' (columns 0:4)
  (to view full data, use np.asarray(this_obj))

This is equivalent to R’s contr.helmert.

class patsy.ContrastMatrix(matrix, column_suffixes)

A simple container for a matrix used for coding categorical factors.

Attributes:

matrix

A 2d ndarray, where each column corresponds to one column of the resulting design matrix, and each row contains the entries for a single categorical variable level. Usually n-by-n for a full rank coding or n-by-(n-1) for a reduced rank coding, though other options are possible.

column_suffixes

A list of strings to be appended to the factor name, to produce the final column names. E.g. for treatment coding the entries will look like "[T.level1]".
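
A hedged sketch of building a coding by hand and using it via C() in a formula (the identity matrix here is just for illustration):

import numpy as np
from patsy import dmatrix, balanced, ContrastMatrix

contrast = ContrastMatrix(np.eye(3), ["[a1]", "[a2]", "[a3]"])
dmatrix("0 + C(a, contrast)", balanced(a=3))
# Columns: 'C(a, contrast)[a1]', 'C(a, contrast)[a2]', 'C(a, contrast)[a3]'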

Spline regression

patsy.bs(x, df=None, knots=None, degree=3, include_intercept=False, lower_bound=None, upper_bound=None)

Generates a B-spline basis for x, allowing non-linear fits. The usual usage is something like:

y ~ 1 + bs(x, 4)

to fit y as a smooth function of x, with 4 degrees of freedom given to the smooth.

Parameters:
  • df – The number of degrees of freedom to use for this spline. The return value will have this many columns. You must specify at least one of df and knots.
  • knots – The interior knots to use for the spline. If unspecified, then equally spaced quantiles of the input data are used. You must specify at least one of df and knots.
  • degree – The degree of the spline to use.
  • include_intercept – If True, then the resulting spline basis will span the intercept term (i.e., the constant function). If False (the default) then this will not be the case, which is useful for avoiding overspecification in models that include multiple spline terms and/or an intercept term.
  • lower_bound – The lower exterior knot location.
  • upper_bound – The upper exterior knot location.

A spline with degree=0 is piecewise constant with breakpoints at each knot, and the default knot positions are quantiles of the input. So if you find yourself in the situation of wanting to quantize a continuous variable into equal-sized bins with a constant effect across each bin, you can use bs(x, num_bins, degree=0).

Similarly, a spline with degree=1 is piecewise linear with breakpoints at each knot.

The default is degree=3, which gives a cubic b-spline.

This is a stateful transform (for details see Stateful transforms). If knots, lower_bound, or upper_bound are not specified, they will be calculated from the data and then the chosen values will be remembered and re-used for prediction from the fitted model.
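
A hedged sketch of that fit/predict pattern (data invented; requires scipy):

import numpy as np
from patsy import dmatrix, build_design_matrices

np.random.seed(0)
train = {"x": np.random.uniform(0, 10, size=50)}
mat = dmatrix("bs(x, df=4)", train)      # knots chosen from quantiles of train["x"]

# Prediction re-uses the knots remembered at fit time:
new = build_design_matrices([mat.design_info.builder],
                            {"x": np.linspace(1, 9, 5)})[0]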

Using this function requires that scipy be installed.

Note

This function is very similar to the R function of the same name. In cases where both return output at all (e.g., R’s bs will raise an error if degree=0, while patsy’s will not), they should produce identical output given identical input and parameter settings.

Warning

I’m not sure what the proper handling of points outside the lower/upper bounds should be, so for now attempting to evaluate a spline basis at such points produces an error. Patches gratefully accepted.

New in version 0.2.0.

Working with formulas programmatically

class patsy.Term(factors)

The interaction between a collection of factor objects.

This is one of the basic types used in representing formulas, and corresponds to an expression like "a:b:c" in a formula string. For details, see How formulas work and Model specification for experts and computers.

Terms are hashable and compare by value.

Attributes:

factors

A tuple of factor objects.

patsy.INTERCEPT

This is a pre-instantiated zero-factors Term object representing the intercept, useful for making your code clearer. Do remember though that this is not a singleton object, i.e., you should compare against it using ==, not is.

class patsy.LookupFactor(varname, force_categorical=False, contrast=None, levels=None, origin=None)

A simple factor class that looks up a named entry in the given data.

Useful for programmatically constructing formulas, and as a simple example of the factor protocol. For details see Model specification for experts and computers.

Example:

dmatrix(ModelDesc([], [Term([LookupFactor("x")])]), {"x": [1, 2, 3]})

Parameters:
  • varname – The name of this variable; used as a lookup key in the passed in data dictionary/DataFrame/whatever.
  • force_categorical – If True, then treat this factor as categorical. (Equivalent to using C() in a regular formula, but of course you can’t do that with a LookupFactor.)
  • contrast – If given, the contrast to use; see C(). (Requires force_categorical=True.)
  • levels – If given, the categorical levels; see C(). (Requires force_categorical=True.)
  • origin – Either None, or the Origin of this factor for use in error reporting.

New in version 0.2.0: The force_categorical and related arguments.

class patsy.EvalFactor(code, eval_env, origin=None)

A factor class that executes arbitrary Python code and supports stateful transforms.

Parameters:
  • code – A string containing a Python expression, that will be evaluated to produce this factor’s value.
  • eval_env – The EvalEnvironment where code will be evaluated.

This is the standard factor class that is used when parsing formula strings and implements the standard stateful transform processing. See Stateful transforms and Model specification for experts and computers.

Two EvalFactor objects are considered equal (e.g., for purposes of redundancy detection) if they use the same evaluation environment and they contain the same token stream. Basically this means that the source code must be identical except for whitespace:

env = EvalEnvironment.capture()
assert EvalFactor("a + b", env) == EvalFactor("a+b", env)
assert EvalFactor("a + b", env) != EvalFactor("b + a", env)

class patsy.ModelDesc(lhs_termlist, rhs_termlist)

A simple container representing the termlists parsed from a formula.

This is a simple container object which has exactly the same representational power as a formula string, but is a Python object instead. You can construct one by hand, and pass it to functions like dmatrix() or incr_dbuilder() that are expecting a formula string, but without having to do any messy string manipulation. For details see Model specification for experts and computers.

Attributes:

lhs_termlist
rhs_termlist

Two termlists representing the left- and right-hand sides of a formula, suitable for passing to design_matrix_builders().
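
A hedged sketch of building the equivalent of "y ~ x" without any string manipulation (note that the intercept must be listed explicitly when constructing a ModelDesc by hand):

from patsy import ModelDesc, Term, LookupFactor, INTERCEPT, dmatrices

desc = ModelDesc([Term([LookupFactor("y")])],
                 [INTERCEPT, Term([LookupFactor("x")])])
y, X = dmatrices(desc, {"y": [1, 2, 3], "x": [4, 5, 6]})
X.design_info.column_names    # ['Intercept', 'x']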

Working with the Python execution environment

class patsy.EvalEnvironment(namespaces, flags=0)

Represents a Python execution environment.

Encapsulates a namespace for variable lookup and a set of __future__ flags.

add_outer_namespace(namespace)

Expose the contents of a dict-like object to the encapsulated environment.

The given namespace will be checked last, after all existing namespace lookups have failed.

classmethod capture(eval_env=0, reference=0)

Capture an execution environment from the stack.

If eval_env is already an EvalEnvironment, it is returned unchanged. Otherwise, we walk up the stack by eval_env + reference steps and capture that function’s evaluation environment.

For eval_env=0 and reference=0, the default, this captures the stack frame of the function that calls capture(). If eval_env + reference is 1, then we capture that function’s caller, etc.

This somewhat complicated calling convention is designed to be convenient for functions which want to capture their caller’s environment by default, but also allow explicit environments to be specified. See the second example.

Example:

x = 1
this_env = EvalEnvironment.capture()
assert this_env["x"] == 1
def child_func():
    return EvalEnvironment.capture(1)
this_env_from_child = child_func()
assert this_env_from_child["x"] == 1

Example:

# This function can be used like:
#   my_model(formula_like, data)
#     -> evaluates formula_like in caller's environment
#   my_model(formula_like, data, eval_env=1)
#     -> evaluates formula_like in caller's caller's environment
#   my_model(formula_like, data, eval_env=my_env)
#     -> evaluates formula_like in environment 'my_env'
def my_model(formula_like, data, eval_env=0):
    eval_env = EvalEnvironment.capture(eval_env, reference=1)
    return model_setup_helper(formula_like, data, eval_env)

This is how dmatrix() works.

eval(expr, source_name='<string>', inner_namespace={})

Evaluate some Python code in the encapsulated environment.

Parameters:
  • expr – A string containing a Python expression.
  • source_name – A name for this string, for use in tracebacks.
  • inner_namespace – A dict-like object that will be checked first when expr attempts to access any variables.
Returns:

The value of expr.

namespace

A dict-like object that can be used to look up variables accessible from the encapsulated environment.
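
A hedged sketch of eval() with an inner namespace taking precedence over the captured environment:

from patsy import EvalEnvironment

x = 10
env = EvalEnvironment.capture()
env.eval("x + 1")                            # 11 -- `x` found in the captured namespace
env.eval("x + 1", inner_namespace={"x": 1})  # 2  -- inner namespace checked first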

Building design matrices

patsy.design_matrix_builders(termlists, data_iter_maker, NA_action='drop')

Construct several DesignMatrixBuilders from termlists.

This is one of Patsy’s fundamental functions. This function and build_design_matrices() together form the API to the core formula interpretation machinery.

Parameters:
  • termlists – A list of termlists, where each termlist is a list of Term objects which together specify a design matrix.
  • data_iter_maker – A zero-argument callable which returns an iterator over dict-like data objects. This must be a callable rather than a simple iterator because sufficiently complex formulas may require multiple passes over the data (e.g. if there are nested stateful transforms).
  • NA_action – An NAAction object or string, used to determine what values count as ‘missing’ for purposes of determining the levels of categorical factors.
Returns:

A list of DesignMatrixBuilder objects, one for each termlist passed in.

This function performs zero or more iterations over the data in order to sniff out any necessary information about factor types, set up stateful transforms, pick column names, etc.

See How formulas work for details.

New in version 0.2.0: The NA_action argument.

class patsy.DesignMatrixBuilder

This is an opaque class that represents Patsy’s knowledge about how to build a design matrix. You get these objects from design_matrix_builders(), and you pass them to build_design_matrices().

design_info

This attribute gives metadata about the matrices that this builder object can produce, in the form of a DesignInfo object.

subset(which_terms)

Create a new DesignMatrixBuilder that includes only a subset of the terms that this object does.

For example, if builder has terms x, y, and z, then:

builder2 = builder.subset(["x", "z"])

will return a new builder that will return design matrices with only the columns corresponding to the terms x and z. After we do this, then in general these two expressions will return the same thing (here we assume that x, y, and z each generate a single column of the output):

build_design_matrices([builder], data)[0][:, [0, 2]]
build_design_matrices([builder2], data)[0]

However, a critical difference is that in the second case, data need not contain any values for y. This is very useful when doing prediction using a subset of a model, in which situation R usually forces you to specify dummy values for y.

If using a formula to specify the terms to include, remember that like any formula, the intercept term will be included by default, so use 0 or -1 in your formula if you want to avoid this.

Parameters:
  • which_terms – The terms which should be kept in the new DesignMatrixBuilder. If this is a string, then it is parsed as a formula, and then the names of the resulting terms are taken as the terms to keep. If it is a list, then it can contain a mixture of term names (as strings) and Term objects.

patsy.build_design_matrices(builders, data, NA_action='drop', return_type='matrix', dtype=dtype('float64'))

Construct several design matrices from DesignMatrixBuilder objects.

This is one of Patsy’s fundamental functions. This function and design_matrix_builders() together form the API to the core formula interpretation machinery.

Parameters:
  • builders – A list of DesignMatrixBuilders specifying the design matrices to be built.
  • data – A dict-like object which will be used to look up data.
  • NA_action – What to do with rows that contain missing values. You can "drop" them, "raise" an error, or for customization, pass an NAAction object. See NAAction for details on what values count as ‘missing’ (and how to alter this).
  • return_type – Either "matrix" or "dataframe". See below.
  • dtype – The dtype of the returned matrix. Useful if you want to use single-precision or extended-precision.

This function returns either a list of DesignMatrix objects (for return_type="matrix") or a list of pandas.DataFrame objects (for return_type="dataframe"). In the latter case, the DataFrames will preserve any (row) indexes that were present in the input, which may be useful for time-series models etc. In any case, all returned design matrices will have .design_info attributes containing the appropriate DesignInfo objects.

Unlike design_matrix_builders(), this function takes only a simple data argument, not any kind of iterator. That’s because this function doesn’t need a global view of the data – everything that depends on the whole data set is already encapsulated in the builders. If you are incrementally processing a large data set, simply call this function for each chunk.

New in version 0.2.0: The NA_action argument.
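
A hedged sketch of the two-function core API, using LookupFactor terms (data invented):

from patsy import (design_matrix_builders, build_design_matrices,
                   Term, LookupFactor, INTERCEPT)

data = {"x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0]}

def iter_maker():
    yield data

termlists = [[Term([LookupFactor("y")])],                # outcome
             [INTERCEPT, Term([LookupFactor("x")])]]     # predictors
builders = design_matrix_builders(termlists, iter_maker)
y, X = build_design_matrices(builders, data)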

Missing values

class patsy.NAAction(on_NA='drop', NA_types=['None', 'NaN'])

An NAAction object defines a strategy for handling missing data.

“NA” is short for “Not Available”, and is used to refer to any value which is somehow unmeasured or unavailable. In the long run, it is devoutly hoped that numpy will gain first-class missing value support. Until then, we work around this lack as best we’re able.

There are two parts to this: First, we have to determine what counts as missing data. For numerical data, the default is to treat NaN values (e.g., numpy.nan) as missing. For categorical data, the default is to treat NaN values, and also the Python object None, as missing. (This is consistent with how pandas does things, so if you’re already using None/NaN to mark missing data in your pandas DataFrames, you’re good to go.)

Second, we have to decide what to do with any missing data when we encounter it. One option is to simply discard any rows which contain missing data from our design matrices (drop). Another option is to raise an error (raise). A third option would be to simply let the missing values pass through into the returned design matrices. However, this last option is not yet implemented, because of the lack of any standard way to represent missing values in arbitrary numpy matrices; we’re hoping numpy will get this sorted out before we standardize on anything ourselves.

You can control how patsy handles missing data through the NA_action= argument to functions like build_design_matrices() and dmatrix(). If all you want to do is to choose between drop and raise behaviour, you can pass one of those strings as the NA_action= argument directly. If you want more fine-grained control over how missing values are detected and handled, then you can create an instance of this class, or your own object that implements the same interface, and pass that as the NA_action= argument instead.

The NAAction constructor takes the following arguments:

Parameters:
  • on_NA – How to handle missing values. The default is "drop", which removes all rows from all matrices which contain any missing values. Also available is "raise", which raises an exception when any missing values are encountered.
  • NA_types

    Which rules are used to identify missing values, as a list of strings. Allowed values are:

    • "None": treat the None object as missing in categorical data.
    • "NaN": treat floating point NaN values as missing in categorical and numerical data.

New in version 0.2.0.
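
A hedged sketch of the drop/raise behaviours (data invented):

import numpy as np
from patsy import dmatrix, NAAction

data = {"x": [1.0, np.nan, 3.0]}
dmatrix("x", data)              # 2 rows: the NaN-containing row is dropped
dmatrix("x", data, NA_action=NAAction(NA_types=["NaN"]))  # explicit object, same default drop
# dmatrix("x", data, NA_action="raise")   # would raise a PatsyError instead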

handle_NA(values, is_NAs, origins)

Takes a set of factor values that may have NAs, and handles them appropriately.

Parameters:
  • values – A list of ndarray objects representing the data. These may be 1- or 2-dimensional, and may be of varying dtype. All will have the same number of rows (or entries, for 1-d arrays).
  • is_NAs – A list with the same number of entries as values, containing boolean ndarray objects that indicate which rows contain NAs in the corresponding entry in values.
  • origins – A list with the same number of entries as values, containing information on the origin of each value. If we encounter a problem with some particular value, we use the corresponding entry in origins as the origin argument when raising a PatsyError.
Returns:

A list of new values (which may have a differing number of rows).

is_categorical_NA(obj)

Return True if obj is a categorical NA value.

Note that here obj is a single scalar value.

is_numerical_NA(arr)

Returns a 1-d mask array indicating which rows in an array of numerical values contain at least one NA value.

Note that here arr is a numpy array or pandas DataFrame.

Linear constraints

class patsy.LinearConstraint(variable_names, coefs, constants=None)

A linear constraint in matrix form.

This object represents a linear constraint of the form Ax = b.

Usually you won’t be constructing these by hand, but instead get them as the return value from DesignInfo.linear_constraint().

coefs

A 2-dimensional ndarray with float dtype, representing A.

constants

A 2-dimensional single-column ndarray with float dtype, representing b.

variable_names

A list of strings giving the names of the variables being constrained. (Used only for consistency checking.)

Origin tracking

class patsy.Origin(code, start, end)

This represents the origin of some object in some string.

For example, if we have an object x1_obj that was produced by parsing the x1 in the formula "y ~ x1:x2", then we conventionally keep track of that relationship by doing:

x1_obj.origin = Origin("y ~ x1:x2", 4, 6)

Then later if we run into a problem, we can do:

raise PatsyError("invalid factor", x1_obj)

and we’ll produce a nice error message like:

PatsyError: invalid factor
    y ~ x1:x2
        ^^

Origins are compared by value, and are hashable.

caretize(indent=0)

Produces a user-readable two line string indicating the origin of some code. Example:

y ~ x1:x2
    ^^

If optional argument ‘indent’ is given, then both lines will be indented by this much. The returned string does not have a trailing newline.

classmethod combine(origin_objs)

Class method for combining a set of Origins into one large Origin that spans them.

Example usage: if we wanted to represent the origin of the “x1:x2” term, we could do Origin.combine([x1_obj, x2_obj]).

Single argument is an iterable, and each element in the iterable should be either:

  • An Origin object
  • None
  • An object that has a .origin attribute which fulfills the above criteria.

Returns either an Origin object, or None.

relevant_code()

Extracts and returns the span of the original code represented by this Origin. Example: x1.