numpy_backend

class pyhf.tensor.numpy_backend.numpy_backend(**kwargs)[source]

Bases: object

NumPy backend for pyhf
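
A minimal sketch of activating this backend, mirroring the set-up used in the method examples below (the printed class path follows the module path shown above):

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> type(pyhf.tensorlib)
<class 'pyhf.tensor.numpy_backend.numpy_backend'>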

Methods

__init__(**kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

abs(tensor)[source]
astensor(tensor_in, dtype='float')[source]

Convert to a NumPy array.

Parameters

tensor_in (Number or Tensor) – Tensor object

Returns

A multi-dimensional, fixed-size homogeneous array.

Return type

numpy.ndarray
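
A short usage sketch in the style of the other examples (output assumes the default dtype='float', so the values are converted to floats):

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> pyhf.tensorlib.astensor([1, 2, 3])
array([1., 2., 3.])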

boolean_mask(tensor, mask)[source]
clip(tensor_in, min_value, max_value)[source]

Clips (limits) the tensor values to be within a specified min and max.

Example

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> a = pyhf.tensorlib.astensor([-2, -1, 0, 1, 2])
>>> pyhf.tensorlib.clip(a, -1, 1)
array([-1., -1.,  0.,  1.,  1.])
Parameters
  • tensor_in (tensor) – The input tensor object

  • min_value (scalar or tensor or None) – The minimum value to clip to

  • max_value (scalar or tensor or None) – The maximum value to clip to

Returns

A clipped tensor

Return type

numpy.ndarray

concatenate(sequence, axis=0)[source]

Join a sequence of arrays along an existing axis.

Parameters
  • sequence – sequence of tensors

  • axis – dimension along which to concatenate

Returns

the concatenated tensor

Return type

numpy.ndarray
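
A usage sketch in the style of the other examples (output assumes the default axis=0 and the float conversion done by astensor):

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> a = pyhf.tensorlib.astensor([1, 2])
>>> b = pyhf.tensorlib.astensor([3, 4])
>>> pyhf.tensorlib.concatenate([a, b])
array([1., 2., 3., 4.])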

divide(tensor_in_1, tensor_in_2)[source]
einsum(subscripts, *operands)[source]

Evaluates the Einstein summation convention on the operands.

Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way to compute such summations. The best way to understand this function is to try the examples below, which show how many common NumPy functions can be implemented as calls to einsum.

Parameters
  • subscripts (str) – Specifies the subscripts for summation

  • operands (list of array_like) – The tensors for the operation

Returns

the calculation based on the Einstein summation convention

Return type

tensor
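
As an illustrative sketch, a matrix transpose expressed with the summation convention via the subscripts 'ij->ji' (output formatting assumes the NumPy backend):

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> a = pyhf.tensorlib.astensor([[1, 2], [3, 4]])
>>> pyhf.tensorlib.einsum('ij->ji', a)
array([[1., 3.],
       [2., 4.]])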

exp(tensor_in)[source]
gather(tensor, indices)[source]
isfinite(tensor)[source]
log(tensor_in)[source]
normal(x, mu, sigma)[source]

The probability density function of the Normal distribution evaluated at x, given a mean of mu and a standard deviation of sigma.

Example

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> pyhf.tensorlib.normal(0.5, 0., 1.)
0.3520653267642995
Parameters
  • x (tensor or float) – The value at which to evaluate the Normal distribution p.d.f.

  • mu (tensor or float) – The mean of the Normal distribution

  • sigma (tensor or float) – The standard deviation of the Normal distribution

Returns

Value of Normal(x|mu, sigma)

Return type

NumPy float

normal_cdf(x, mu=0, sigma=1)[source]

The cumulative distribution function of the Normal distribution.

Example

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> pyhf.tensorlib.normal_cdf(0.8)
0.7881446014166034
Parameters
  • x (tensor or float) – The observed value of the random variable to evaluate the CDF for

  • mu (tensor or float) – The mean of the Normal distribution

  • sigma (tensor or float) – The standard deviation of the Normal distribution

Returns

The CDF

Return type

NumPy float

normal_logpdf(x, mu, sigma)[source]
ones(shape)[source]
outer(tensor_in_1, tensor_in_2)[source]
poisson(n, lam)[source]

The continuous approximation, using \(n! = \Gamma\left(n+1\right)\), to the probability mass function of the Poisson distribution evaluated at n given the parameter lam.

Example

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> pyhf.tensorlib.poisson(5., 6.)
0.16062314104797995
Parameters
  • n (tensor or float) – The value at which to evaluate the approximation to the Poisson distribution p.m.f. (the observed number of events)

  • lam (tensor or float) – The mean of the Poisson distribution p.m.f. (the expected number of events)

Returns

Value of the continuous approximation to Poisson(n|lam)

Return type

NumPy float

poisson_logpdf(n, lam)[source]
power(tensor_in_1, tensor_in_2)[source]
product(tensor_in, axis=None)[source]
reshape(tensor, newshape)[source]
shape(tensor)[source]
simple_broadcast(*args)[source]

Broadcast a sequence of 1-dimensional arrays.

Example

>>> import pyhf
>>> pyhf.set_backend(pyhf.tensor.numpy_backend())
>>> pyhf.tensorlib.simple_broadcast(
...   pyhf.tensorlib.astensor([1]),
...   pyhf.tensorlib.astensor([2, 3, 4]),
...   pyhf.tensorlib.astensor([5, 6, 7]))
[array([1., 1., 1.]), array([2., 3., 4.]), array([5., 6., 7.])]
Parameters

args (Array of Tensors) – Sequence of arrays

Returns

The sequence broadcast together.

Return type

list of Tensors

sqrt(tensor_in)[source]
stack(sequence, axis=0)[source]
sum(tensor_in, axis=None)[source]
tolist(tensor_in)[source]
where(mask, tensor_in_1, tensor_in_2)[source]
zeros(shape)[source]