Theano supports any kind of Python object, but its focus is support for symbolic matrix expressions. When you type,
>>> x = T.fmatrix()
the x is a TensorVariable instance. The T.fmatrix object itself is an instance of TensorType. Theano knows what type of variable x is because x.type points back to T.fmatrix.
This chapter explains the various ways of creating tensor variables, the attributes and methods of TensorVariable and TensorType, and various basic symbolic math and arithmetic that Theano supports for tensor variables.
Theano provides a list of predefined tensor types that can be used to create tensor variables. Variables can be named to facilitate debugging, and all of these constructors accept an optional name argument. For example, the following each produce a TensorVariable instance that stands for a 0-dimensional ndarray of integers with the name 'myvar':
>>> x = scalar('myvar', dtype='int32')
>>> x = iscalar('myvar')
>>> x = TensorType(dtype='int32', broadcastable=())('myvar')
These are the simplest and often-preferred methods for creating symbolic variables in your code. By default, they produce floating-point variables (with dtype determined by config.floatX, see floatX), so if you use these constructors it is easy to switch your code between different levels of floating-point precision.
Return a Variable for a 0-dimensional ndarray
Return a Variable for a 1-dimensional ndarray
Return a Variable for a 2-dimensional ndarray in which the number of rows is guaranteed to be 1.
Return a Variable for a 2-dimensional ndarray in which the number of columns is guaranteed to be 1.
Return a Variable for a 2-dimensional ndarray
Return a Variable for a 3-dimensional ndarray
Return a Variable for a 4-dimensional ndarray
The following TensorType instances are provided in the theano.tensor module. They are all callable, and accept an optional name argument. So for example:
from theano.tensor import *
x = dmatrix() # creates one Variable with no name
x = dmatrix('x') # creates one Variable with name 'x'
xyz = dmatrix('xyz') # creates one Variable with name 'xyz'
Constructor  dtype  ndim  shape  broadcastable 

bscalar  int8  0  ()  () 
bvector  int8  1  (?,)  (False,) 
brow  int8  2  (1,?)  (True, False) 
bcol  int8  2  (?,1)  (False, True) 
bmatrix  int8  2  (?,?)  (False, False) 
btensor3  int8  3  (?,?,?)  (False, False, False) 
btensor4  int8  4  (?,?,?,?)  (False, False, False, False) 
wscalar  int16  0  ()  () 
wvector  int16  1  (?,)  (False,) 
wrow  int16  2  (1,?)  (True, False) 
wcol  int16  2  (?,1)  (False, True) 
wmatrix  int16  2  (?,?)  (False, False) 
wtensor3  int16  3  (?,?,?)  (False, False, False) 
wtensor4  int16  4  (?,?,?,?)  (False, False, False, False) 
iscalar  int32  0  ()  () 
ivector  int32  1  (?,)  (False,) 
irow  int32  2  (1,?)  (True, False) 
icol  int32  2  (?,1)  (False, True) 
imatrix  int32  2  (?,?)  (False, False) 
itensor3  int32  3  (?,?,?)  (False, False, False) 
itensor4  int32  4  (?,?,?,?)  (False, False, False, False) 
lscalar  int64  0  ()  () 
lvector  int64  1  (?,)  (False,) 
lrow  int64  2  (1,?)  (True, False) 
lcol  int64  2  (?,1)  (False, True) 
lmatrix  int64  2  (?,?)  (False, False) 
ltensor3  int64  3  (?,?,?)  (False, False, False) 
ltensor4  int64  4  (?,?,?,?)  (False, False, False, False) 
dscalar  float64  0  ()  () 
dvector  float64  1  (?,)  (False,) 
drow  float64  2  (1,?)  (True, False) 
dcol  float64  2  (?,1)  (False, True) 
dmatrix  float64  2  (?,?)  (False, False) 
dtensor3  float64  3  (?,?,?)  (False, False, False) 
dtensor4  float64  4  (?,?,?,?)  (False, False, False, False) 
fscalar  float32  0  ()  () 
fvector  float32  1  (?,)  (False,) 
frow  float32  2  (1,?)  (True, False) 
fcol  float32  2  (?,1)  (False, True) 
fmatrix  float32  2  (?,?)  (False, False) 
ftensor3  float32  3  (?,?,?)  (False, False, False) 
ftensor4  float32  4  (?,?,?,?)  (False, False, False, False) 
cscalar  complex64  0  ()  () 
cvector  complex64  1  (?,)  (False,) 
crow  complex64  2  (1,?)  (True, False) 
ccol  complex64  2  (?,1)  (False, True) 
cmatrix  complex64  2  (?,?)  (False, False) 
ctensor3  complex64  3  (?,?,?)  (False, False, False) 
ctensor4  complex64  4  (?,?,?,?)  (False, False, False, False) 
zscalar  complex128  0  ()  () 
zvector  complex128  1  (?,)  (False,) 
zrow  complex128  2  (1,?)  (True, False) 
zcol  complex128  2  (?,1)  (False, True) 
zmatrix  complex128  2  (?,?)  (False, False) 
ztensor3  complex128  3  (?,?,?)  (False, False, False) 
ztensor4  complex128  4  (?,?,?,?)  (False, False, False, False) 
There are several constructors that can produce multiple variables at once. These are not frequently used in practice, but are often used in tutorial examples to save space!
Return one or more scalar variables.
Return one or more vector variables.
Return one or more row variables.
Return one or more col variables.
Return one or more matrix variables.
Each of these plural constructors accepts an integer or several strings. If an integer is provided, the method will return that many Variables; if strings are provided, it will create one Variable for each string, using the string as the Variable’s name. For example:
from theano.tensor import *
x, y, z = dmatrices(3) # creates three matrix Variables with no names
x, y, z = dmatrices('x', 'y', 'z') # creates three matrix Variables named 'x', 'y' and 'z'
If you would like to construct a tensor variable with a non-standard broadcasting pattern, or a larger number of dimensions, you’ll need to create your own TensorType instance. You create such an instance by passing the dtype and broadcasting pattern to the constructor. For example, you can create your own 5-dimensional tensor type:
>>> dtensor5 = TensorType('float64', (False,)*5)
>>> x = dtensor5()
>>> z = dtensor5('z')
You can also redefine some of the provided types and they will interact correctly:
>>> my_dmatrix = TensorType('float64', (False,)*2)
>>> x = my_dmatrix() # allocate a matrix variable
>>> my_dmatrix == dmatrix # this compares True
See TensorType for more information about creating new types of Tensor.
Another way of creating a TensorVariable (a TensorSharedVariable, to be precise) is by calling shared():
x = shared(numpy.random.randn(3,4))
This will return a shared variable whose .value is a numpy ndarray. The number of dimensions and dtype of the Variable are inferred from the ndarray argument. The argument to shared will not be copied, and subsequent changes will be reflected in x.value.
For additional information, see the shared() documentation.
Finally, when you use a numpy ndarray or a Python number together with TensorVariable instances in arithmetic expressions, the result is a TensorVariable. What happens to the ndarray or the number? Theano requires that the inputs to all expressions be Variable instances, so Theano automatically wraps them in a TensorConstant.
Note
Theano makes a copy of any ndarray that you use in an expression, so subsequent changes to that ndarray will not have any effect on the Theano expression.
For numpy ndarrays the dtype is given, but the broadcastable pattern must be inferred. The TensorConstant is given a type with a matching dtype, and a broadcastable pattern with a True for every shape dimension that is 1.
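The rule for inferring a broadcastable pattern from an ndarray can be sketched in NumPy (the helper name `infer_broadcastable` is hypothetical, just for this illustration): every dimension of length 1 becomes True (broadcastable).

```python
import numpy as np

# Sketch of how a broadcastable pattern is inferred from an ndarray:
# a dimension is marked broadcastable (True) exactly when its length is 1.
def infer_broadcastable(arr):
    return tuple(dim == 1 for dim in arr.shape)

a = np.zeros((1, 5))        # row-like: first dimension has length 1
b = np.zeros((3, 1, 4))     # middle dimension has length 1
print(infer_broadcastable(a))   # (True, False)
print(infer_broadcastable(b))   # (False, True, False)
```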
For Python numbers, the broadcastable pattern is () but the dtype must be inferred. Python integers are stored in the smallest dtype that can hold them, so small constants like 1 are stored in a bscalar. Likewise, Python floats are stored in an fscalar if fscalar suffices to hold them perfectly, and in a dscalar otherwise.
Note
When config.floatX==float32 (see config), then Python floats are stored instead as single-precision floats.
For fine control of this rounding policy, see theano.tensor.basic.autocast_float.
Turn an argument x into a TensorVariable or TensorConstant.
Many tensor Ops run their arguments through this function as preprocessing. It passes through TensorVariable instances, and tries to wrap other objects into TensorConstant.
When x is a Python number, the dtype is inferred as described above.
When x is a list or tuple, it is passed through numpy.asarray.
If the ndim argument is not None, it must be an integer and the output will be broadcasted if necessary in order to have this many dimensions.
Return type:  TensorVariable or TensorConstant 

The Type class used to mark Variables that stand for numpy.ndarray values (numpy.memmap, which is a subclass of numpy.ndarray, is also allowed). Recalling the tutorial, the purple box in the tutorial’s graph-structure figure is an instance of this class.
A tuple of True/False values, one for each dimension. True in position ‘i’ indicates that at evaluation-time, the ndarray will have size 1 in that ‘i’th dimension. Such a dimension is called a broadcastable dimension (see Broadcasting in Theano vs. Numpy).
The broadcastable pattern indicates both the number of dimensions and whether a particular dimension must have length 1.
Here is a table mapping some broadcastable patterns to what they mean:
pattern  interpretation 

[]  scalar 
[True]  1D scalar (vector of length 1) 
[True, True]  2D scalar (1x1 matrix) 
[False]  vector 
[False, False]  matrix 
[False] * n  nD tensor 
[True, False]  row (1xN matrix) 
[False, True]  column (Mx1 matrix) 
[False, True, False]  A Mx1xP tensor (a) 
[True, False, False]  A 1xNxP tensor (b) 
[False, False, False]  A MxNxP tensor (pattern of a + b) 
For dimensions in which broadcasting is False, the length of this dimension can be 1 or more. For dimensions in which broadcasting is True, the length of this dimension must be 1.
When two arguments to an elementwise operation (like addition or subtraction) have a different number of dimensions, the broadcastable pattern is expanded to the left, by padding with True. For example, a vector’s pattern, [False], could be expanded to [True, False], and would behave like a row (1xN matrix). In the same way, a matrix ([False, False]) would behave like a 1xNxP tensor ([True, False, False]).
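NumPy applies the same left-padding rule at run time, so the behavior described above can be checked with a short NumPy sketch:

```python
import numpy as np

# A vector (pattern [False]) added to a matrix is expanded on the left
# to [True, False], so it behaves like a row (1xN matrix).
m = np.arange(6).reshape(2, 3)    # shape (2, 3)
v = np.array([10, 20, 30])        # shape (3,) -> treated as (1, 3)
print(m + v)
# [[10 21 32]
#  [13 24 35]]
```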
If we wanted to create a type representing a matrix that would broadcast over the middle dimension of a 3dimensional tensor when adding them together, we would define it like this:
>>> middle_broadcaster = TensorType('complex64', [False, True, False])
The number of dimensions that a Variable’s value will have at evaluation-time. This must be known when we are building the expression graph.
A string indicating the numerical type of the ndarray for which a Variable of this Type is standing.
The dtype attribute of a TensorType instance can be any of the following strings.
dtype  domain  bits 

'int8'  signed integer  8 
'int16'  signed integer  16 
'int32'  signed integer  32 
'int64'  signed integer  64 
'uint8'  unsigned integer  8 
'uint16'  unsigned integer  16 
'uint32'  unsigned integer  32 
'uint64'  unsigned integer  64 
'float32'  floating point  32 
'float64'  floating point  64 
'complex64'  complex  64 (two float32) 
'complex128'  complex  128 (two float64) 
If you wish to use a type of tensor which is not already available (for example, a 5D tensor) you can build an appropriate type by instantiating TensorType.
The result of symbolic operations typically has this type.
See _tensor_py_operators for most of the attributes and methods you’ll want to call.
Python and numpy numbers are wrapped in this type.
See _tensor_py_operators for most of the attributes and methods you’ll want to call.
This type is returned by shared() when the value to share is a numpy ndarray.
See _tensor_py_operators for most of the attributes and methods you’ll want to call.
This mixin class adds convenient attributes, methods, and support for Python operators (see Operator Support).
A reference to the TensorType instance describing the sort of values that might be associated with this variable.
The number of dimensions of this tensor. Aliased to TensorType.ndim.
The numeric type of this tensor. Aliased to TensorType.dtype.
Returns a view of this tensor that has been reshaped as in numpy.reshape. If the shape is a Variable argument, then you might need to use the optional ndim parameter to declare how many elements the shape has, and therefore how many dimensions the reshaped Variable will have.
See reshape().
Returns a view of this tensor with permuted dimensions. Typically the pattern will include the integers 0, 1, ... ndim-1, and any number of ‘x’ characters in dimensions where this tensor should be broadcasted.
A few examples of patterns and their effect:
 (‘x’) -> make a 0d (scalar) into a 1d vector
 (0, 1) -> identity for 2d vectors
 (1, 0) -> inverts the first and second dimensions
 (‘x’, 0) -> make a row out of a 1d vector (N to 1xN)
 (0, ‘x’) -> make a column out of a 1d vector (N to Nx1)
 (2, 0, 1) -> AxBxC to CxAxB
 (0, ‘x’, 1) -> AxB to Ax1xB
 (1, ‘x’, 0) -> AxB to Bx1xA
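A few of these patterns can be re-created in NumPy, where integer entries correspond to axis permutation and ‘x’ to inserting a length-1 axis with np.expand_dims (this NumPy sketch is illustrative, not Theano’s implementation):

```python
import numpy as np

a = np.zeros((4, 5))
v = np.zeros(5)

row = np.expand_dims(v, 0)       # ('x', 0): N -> 1xN
col = np.expand_dims(v, 1)       # (0, 'x'): N -> Nx1
mid = np.expand_dims(a, 1)       # (0, 'x', 1): AxB -> Ax1xB
swap = np.expand_dims(a.T, 1)    # (1, 'x', 0): AxB -> Bx1xA
print(row.shape, col.shape, mid.shape, swap.shape)
# (1, 5) (5, 1) (4, 1, 5) (5, 1, 4)
```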
Returns a view of this tensor with ndim dimensions, whose shape for the first ndim-1 dimensions will be the same as self, and shape in the remaining dimension will be expanded to fit in all the data from self.
See flatten().
Transpose of this tensor.
>>> x = T.zmatrix()
>>> y = 3+.2j * x.T
Note
In numpy and in Theano, the transpose of a vector is exactly the same vector! Use reshape or dimshuffle to turn your vector into a row or column matrix.
To reorder the dimensions of a variable, to insert or remove broadcastable dimensions, see _tensor_py_operators.dimshuffle().
Returns an lvector representing the shape of x.
Parameters: 


Return type:  variable with x’s dtype, but ndim dimensions 
Note
This function can infer the length of a symbolic newshape in some cases, but if it cannot and you do not provide the ndim, then this function will raise an Exception.
Reshape x by left-padding the shape with n_ones 1s. Note that all these new dimensions will be broadcastable. To make them non-broadcastable, see unbroadcast().
Parameters:  x (any TensorVariable (or compatible)) – variable to be reshaped 

Reshape x by right-padding the shape with n_ones 1s. Note that all these new dimensions will be broadcastable. To make them non-broadcastable, see unbroadcast().
Parameters:  x (any TensorVariable (or compatible)) – variable to be reshaped 

Make x impossible to broadcast in the specified axes. For example, unbroadcast(x, 0) will make the first dimension of x unbroadcastable.
Make x broadcastable in the specified axes. For example, addbroadcast(x, 0) will make the first dimension of x broadcastable. When performing the function, if the length of x along that dimension is not 1, a ValueError will be raised.
Similar to reshape(), but the shape is inferred from the shape of x.
Parameters: 


Return type:  variable with same dtype as x and outdim dimensions 
Returns:  variable with the same shape as x in the leading outdim1 dimensions, but with all remaining dimensions of x collapsed into the last dimension. 
For example, if we flatten a tensor of shape (2,3,4,5) with flatten(x, outdim=2), then we’ll have the same (2-1=1) leading dimensions (2,), and the remaining dimensions are collapsed. So the output in this example would have shape (2, 60).
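In NumPy the same collapse can be written as a reshape with -1 in the last position, which is a convenient way to check the expected shape:

```python
import numpy as np

# flatten(x, outdim=2) keeps the first outdim-1 dimensions and collapses
# the rest into one trailing dimension.
x = np.zeros((2, 3, 4, 5))
flat2 = x.reshape(x.shape[0], -1)   # NumPy analogue of flatten(x, outdim=2)
print(flat2.shape)   # (2, 60)
```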
Parameters:  x – tensor that has same shape as output 

Returns a tensor filled with 0s that has same shape as x.
Parameters:  x – tensor that has same shape as output 

Returns a tensor filled with 1s that has same shape as x.
Parameters: 


Create a matrix by filling the shape of a with b
Parameters: 


Returns:  An array where all elements are equal to zero, except for the kth diagonal, whose values are equal to one. 
Parameters:  x – tensor 

Returns:  A tensor of same shape as x that is filled with 0s everywhere except for the main diagonal, whose values are equal to one. The output will have same dtype as x. 
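These behaviors mirror NumPy’s np.eye; the following NumPy sketch (variable names are just for this example) shows a shifted diagonal and an identity pattern matching another array’s shape and dtype:

```python
import numpy as np

e = np.eye(3, k=1)   # ones on the first super-diagonal, zeros elsewhere
print(e)
# [[0. 1. 0.]
#  [0. 0. 1.]
#  [0. 0. 0.]]

x = np.zeros((4, 4), dtype='float32')
ident = np.eye(x.shape[0], x.shape[1], dtype=x.dtype)  # like identity_like(x)
print(ident.dtype)   # float32
```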
Return a Tensor with all the arguments stacked up into a single Tensor (of rank one greater).
Parameters:  tensors – one or more tensors of the same rank 

Returns:  A tensor such that rval[0] == tensors[0], rval[1] == tensors[1], etc. 
>>> x0 = T.scalar()
>>> x1 = T.scalar()
>>> x2 = T.scalar()
>>> x = T.stack(x0, x1, x2)
>>> # x.ndim == 1, is a vector of length 3.
Parameters: 


>>> x0 = T.fmatrix()
>>> x1 = T.ftensor3()
>>> x2 = T.fvector()
>>> x = T.concatenate([x0, x1[0], T.shape_padright(x2)], axis=1)
>>> # x.ndim == 2
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the maximum 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  maximum of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis along which to compute the maximum 
Parameter :  keepdims  (boolean) If this is set to True, the axis which is reduced is left in the result as a dimension with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  the index of the maximum value along a given axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis along which to compute the maximum 
Parameter :  keepdims  (boolean) If this is set to True, the axis which is reduced is left in the result as a dimension with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  the maximum value along a given axis and its index. 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the minimum 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  minimum of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis along which to compute the minimum 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  the index of the minimum value along a given axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the sum 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  sum of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the product 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  product of every term in x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the mean 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  mean value of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the variance 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  variance of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to compute the standard deviation 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  standard deviation of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to apply ‘bitwise and’ 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  bitwise and of x along axis 
Parameter :  x  symbolic Tensor (or compatible) 

Parameter :  axis  axis or axes along which to apply bitwise or 
Parameter :  keepdims  (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor. 
Returns :  bitwise or of x along axis 
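The keepdims flag on all of these reductions works as in NumPy: reduced axes are kept with length 1, so the result broadcasts correctly against the original tensor. A NumPy sketch:

```python
import numpy as np

x = np.arange(12.).reshape(3, 4)
m = x.mean(axis=1, keepdims=True)   # shape (3, 1) instead of (3,)
centered = x - m                    # broadcasts cleanly against x
print(m.shape, centered.shape)      # (3, 1) (3, 4)
```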
Like numpy, Theano distinguishes between basic and advanced indexing. Theano fully supports basic indexing (see numpy’s basic indexing).
Advanced indexing is almost entirely unsupported (for now). The one sort of advanced indexing that is supported is the retrieval of the c[i]’th element of each row of a matrix x:
>>> x = T.fmatrix()
>>> c = T.lvector()
>>> x[T.arange(c.shape[0]), c]
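The same indexing pattern has identical semantics in NumPy, which makes its behavior easy to inspect with concrete values:

```python
import numpy as np

# Pick element c[i] from row i of x.
x = np.array([[1., 2., 3.],
              [4., 5., 6.]])
c = np.array([2, 0])
picked = x[np.arange(c.shape[0]), c]
print(picked)   # [3. 4.]
```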
Index assignment is not supported. If you want to do something like a[5] = b or a[5] += b, see set_subtensor() and inc_subtensor() below.
Many Python operators are supported.
>>> a, b = T.itensor3(), T.itensor3() # example inputs
>>> a + 3 # T.add(a, 3) > itensor3
>>> 3 - a # T.sub(3, a)
>>> a * 3.5 # T.mul(a, 3.5) > ftensor3 or dtensor3 (depending on casting)
>>> 2.2 / a # T.truediv(2.2, a)
>>> 2.2 // a # T.intdiv(2.2, a)
>>> 2.2**a # T.pow(2.2, a)
>>> b % a # T.mod(b, a)
>>> a & b # T.and_(a,b) bitwise and (alias T.bitwise_and)
>>> a ^ 1 # T.xor(a,1) bitwise xor (alias T.bitwise_xor)
>>> a | b # T.or_(a,b) bitwise or (alias T.bitwise_or)
>>> ~a # T.invert(a) bitwise invert (alias T.bitwise_not)
In-place operators are not supported. Theano’s graph optimizations will determine which intermediate values to use for in-place computations. If you would like to update the value of a shared variable, consider using the updates argument to theano.function().
Cast any tensor x to a Tensor of the same shape, but with a different numerical type dtype.
This is not a reinterpret cast, but a coercion cast, similar to numpy.asarray(x, dtype=dtype).
import theano.tensor as T
x_as_float = T.matrix()
x_as_int = T.cast(x_as_float, 'int32')
Attempting to cast a complex value to a real value is ambiguous and will raise an exception. Use real(), imag(), abs(), or angle().
Return the real (not imaginary) components of Tensor x. For noncomplex x this function returns x.
Return the imaginary components of Tensor x. For noncomplex x this function returns zeros_like(x).
Parameter:  a  symbolic Tensor (or compatible) 

Parameter:  b  symbolic Tensor (or compatible) 
Return type:  symbolic Tensor 
Returns:  a symbolic tensor representing the application of the logical elementwise operator. 
Note
Theano has no boolean dtype. Instead, all boolean tensors are represented in 'int8'.
Here is an example with the lessthan operator.
import theano.tensor as T
x,y = T.dmatrices('x','y')
z = T.le(x,y)
Returns a symbolic 'int8' tensor representing the result of logical lessthan (a<b).
Also available using syntax a < b
Returns a symbolic 'int8' tensor representing the result of logical greaterthan (a>b).
Also available using syntax a > b
Returns a variable representing the result of logical less than or equal (a<=b).
Also available using syntax a <= b
Returns a variable representing the result of logical greater or equal than (a>=b).
Also available using syntax a >= b
Returns a variable representing the result of logical equality (a==b).
Returns a variable representing the result of logical inequality (a!=b).
Returns a variable representing a switch between ift and iff based on the condition cond. This is the theano equivalent of numpy.where.
Parameter: cond  symbolic Tensor (or compatible) 
Parameter: ift  symbolic Tensor (or compatible) 
Parameter: iff  symbolic Tensor (or compatible) 
Return type: symbolic Tensor
import theano.tensor as T
a,b = T.dmatrices('a','b')
x,y = T.dmatrices('x','y')
z = T.switch(T.lt(a,b), x, y)
Alias for switch. where is the numpy name.
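Since switch follows numpy.where semantics, the elementwise selection can be checked directly in NumPy:

```python
import numpy as np

# Elementwise: take x where a < b, otherwise take y (like T.switch(T.lt(a, b), x, y)).
a = np.array([1., 5.])
b = np.array([4., 2.])
x = np.array([10., 20.])
y = np.array([30., 40.])
z = np.where(a < b, x, y)
print(z)   # [10. 40.]
```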
Return a variable representing x, but with all elements greater than max clipped to max and all elements less than min clipped to min.
Normal broadcasting rules apply to each of x, min, and max.
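NumPy’s clip has the same semantics, including broadcasting of the bounds, so a quick NumPy sketch shows the expected behavior:

```python
import numpy as np

# Every element below min is raised to min; every element above max is
# lowered to max.
x = np.array([-3.0, 0.5, 7.0])
print(np.clip(x, 0.0, 1.0))   # [0.  0.5 1. ]
```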
Parameter:  a  symbolic Tensor of integer type. 

Parameter:  b  symbolic Tensor of integer type. 
Note
The bitwise operators must have an integer type as input.
The bitwise not (invert) takes only one parameter.
Return type:  symbolic Tensor with corresponding dtype. 

Returns a variable representing the result of the bitwise and.
Returns a variable representing the result of the bitwise or.
Returns a variable representing the result of the bitwise xor.
Returns a variable representing the result of the bitwise not.
Alias for and_. bitwise_and is the numpy name.
Alias for or_. bitwise_or is the numpy name.
Alias for xor. bitwise_xor is the numpy name.
Alias for invert. invert is the numpy name.
Here is an example using the bitwise and_ via the & operator:
import theano.tensor as T
x,y = T.imatrices('x','y')
z = x & y
Returns a variable representing the absolute value of a, i.e. |a|.
Note
Can also be accessed with abs(a).
Returns a variable representing the angular component of the complex-valued Tensor a.
Returns a variable representing the exponential of a, ie e^a.
Returns a variable representing the maximum element by element of a and b
Returns a variable representing the minimum element by element of a and b
Returns a variable representing the negation of a (also -a).
Returns a variable representing the inverse of a, ie 1.0/a. Also called reciprocal.
Returns a variable representing the base e, 2 or 10 logarithm of a.
Returns a variable representing the sign of a.
Returns a variable representing the ceiling of a (for example ceil(2.1) is 3).
Returns a variable representing the floor of a (for example floor(2.9) is 2).
Returns a variable representing the rounding of a in the same dtype as a. Implemented rounding mode are half_away_from_zero and half_to_even.
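The difference between the two rounding modes only shows up on halfway cases; a NumPy sketch of both (the half-away-from-zero formula here is a common illustrative construction, not Theano’s implementation):

```python
import numpy as np

x = np.array([2.5, 3.5, -2.5])

half_to_even = np.round(x)                           # banker's rounding
half_away = np.sign(x) * np.floor(np.abs(x) + 0.5)   # half away from zero

print(half_to_even)   # [ 2.  4. -2.]
print(half_away)      # [ 3.  4. -3.]
```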
Shorthand for cast(round(a, mode), 'int64').
Returns a variable representing the square of a, ie a^2.
Returns a variable representing the square root of a, i.e. a^0.5.
Returns a variable representing the trigonometric functions of a (cosine, sine and tangent).
Returns a variable representing the hyperbolic trigonometric functions of a (hyperbolic cosine, sine and tangent).
Broadcasting is a mechanism which allows tensors with different numbers of dimensions to be added or multiplied together by (virtually) replicating the smaller tensor along the dimensions that it is lacking.
Broadcasting is the mechanism by which a scalar may be added to a matrix, a vector to a matrix or a scalar to a vector.
Broadcasting a row matrix. T and F respectively stand for True and False and indicate along which dimensions we allow broadcasting.
If the second argument were a vector, its shape would be (2,) and its broadcastable pattern (F,). They would be automatically expanded to the left to match the dimensions of the matrix (adding 1 to the shape and T to the pattern), resulting in (1, 2) and (T, F). It would then behave just like the example above.
Unlike numpy which does broadcasting dynamically, Theano needs to know, for any operation which supports broadcasting, which dimensions will need to be broadcasted. When applicable, this information is given in the Type of a Variable.
See also:
Parameters: 


Return type:  symbolic matrix or vector 
Returns:  the inner product of X and Y. 
Parameters: 


Return type:  symbolic matrix 
Returns:  vectorvector outer product 
Given two tensors a and b, tensordot computes a generalized dot product over the provided axes. Theano’s implementation reduces all expressions to matrix or vector dot products and is based on code from Tijmen Tieleman’s gnumpy (http://www.cs.toronto.edu/~tijmen/gnumpy.html).
Parameters: 


Returns:  a tensor with shape equal to the concatenation of a’s shape (less any dimensions that were summed over) and b’s shape (less any dimensions that were summed over). 
Return type:  symbolic tensor 
It may be helpful to consider an example to see what tensordot does. Theano’s implementation is identical to NumPy’s. Here a has shape (2, 3, 4) and b has shape (5, 6, 4, 3). The axes to sum over are [[1, 2], [3, 2]] – note that a.shape[1] == b.shape[3] and a.shape[2] == b.shape[2]; these axes are compatible. The resulting tensor will have shape (2, 5, 6) – the dimensions that are not being summed:
import numpy as np

a = np.random.random((2,3,4))
b = np.random.random((5,6,4,3))

# tensordot
c = np.tensordot(a, b, [[1,2],[3,2]])

# loop replicating tensordot
a0, a1, a2 = a.shape
b0, b1, _, _ = b.shape
cloop = np.zeros((a0, b0, b1))

# loop over non-summed indices -- these exist
# in the tensor product.
for i in range(a0):
    for j in range(b0):
        for k in range(b1):
            # loop over summed indices -- these don't exist
            # in the tensor product.
            for l in range(a1):
                for m in range(a2):
                    cloop[i,j,k] += a[i,l,m] * b[j,k,m,l]

np.allclose(c, cloop)  # True
This specific implementation avoids a loop by transposing a and b such that the summed axes of a are last and the summed axes of b are first. The resulting arrays are reshaped to 2 dimensions (or left as vectors, if appropriate) and a matrix or vector dot product is taken. The result is reshaped back to the required output dimensions.
In an extreme case, no axes may be specified. The resulting tensor will have shape equal to the concatenation of the shapes of a and b:
c = np.tensordot(a, b, 0)
print(a.shape) #(2,3,4)
print(b.shape) #(5,6,4,3)
print(c.shape) #(2,3,4,5,6,4,3)
See the documentation of numpy.tensordot for more examples.
Parameters: 


This function computes the dot product between the two tensors, by iterating over the first dimension using scan. For 3D inputs of shapes (dim1, dim2, dim3) and (dim1, dim3, dim4), it returns a tensor of shape (dim1, dim2, dim4). Example:
>>> first = T.tensor3('first')
>>> second = T.tensor3('second')
>>> result = batched_dot(first, second)
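The per-batch matrix products that batched_dot performs can be checked against NumPy’s np.matmul, which multiplies matching matrices along the leading dimension:

```python
import numpy as np

# For 3D inputs, each c[i] is the matrix product a[i] @ b[i].
a = np.random.rand(5, 2, 3)
b = np.random.rand(5, 3, 4)
c = np.matmul(a, b)
print(c.shape)   # (5, 2, 4)
```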
Note :  This is a subset of numpy.einsum, which we do not provide for now. Note also that numpy’s einsum is slower than dot or tensordot: http://mail.scipy.org/pipermail/numpy-discussion/2012-October/064259.html 

Parameters: 

Returns:  tensor of products 
Return symbolic gradients for one or more variables with respect to some cost.
For more information about how automatic differentiation works in Theano, see gradient. For information on how to implement the gradient of a certain Op, see grad().
Parameters: 


Return type:  variable or list of variables (matching wrt) 
Returns:  gradients of the cost with respect to each of the wrt terms 
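grad builds a symbolic derivative expression; a common numerical sanity check for any such gradient is central finite differences, sketched here in NumPy-free Python (the helper name fd_grad is hypothetical):

```python
# Central finite-difference approximation of df/dx, useful for checking
# a symbolic gradient numerically. For f(x) = x**2, df/dx = 2*x.
def fd_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(fd_grad(lambda v: v ** 2, 3.0))   # approximately 6.0
```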
See the gradient tutorial for the R op documentation.
Partial list of ops without support for Rop:
 All sparse ops
 All linear algebra ops.
 PermuteRowElements
 Tile
 AdvancedSubtensor
 TensorDot
 Outer
 Prod
 MulwithoutZeros
 ProdWithoutZeros
 CAReduce (for max, ... done for the MaxAndArgmax op)
 MaxAndArgmax (only for matrix on axis 0 or 1)