Guide
The scan function provides the basic functionality needed to do loops
in Theano. Scan comes with many bells and whistles, which we will introduce
by way of examples.
Simple loop with accumulation: Computing A**k
Assume that, given k, you want to get A**k using a loop.
More precisely, if A is a tensor, you want to compute
A**k elementwise. The Python/NumPy code might look like:
result = 1
for i in xrange(k):
    result = result * A
There are three things here that we need to handle: the initial value
assigned to result, the accumulation of results in result, and
the unchanging variable A. Unchanging variables are passed to scan as
non_sequences. Initialization occurs in outputs_info, and the accumulation
happens automatically.
The equivalent Theano code would be:
import theano
import theano.tensor as T

k = T.iscalar("k")
A = T.vector("A")

# Symbolic description of the result
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
                              outputs_info=T.ones_like(A),
                              non_sequences=A,
                              n_steps=k)

# We only care about A**k, but scan has provided us with A**1 through A**k.
# Discard the values that we don't care about. Scan is smart enough to
# notice this and not waste memory saving them.
final_result = result[-1]

# compiled function that returns A**k
power = theano.function(inputs=[A, k], outputs=final_result, updates=updates)

print power(range(10), 2)
print power(range(10), 4)
Let us go through the example line by line. We first constructed a
function (using a lambda expression) that, given prior_result and
A, returns prior_result * A. The order of parameters is fixed by scan:
the output of the prior call to fn (or the initial value, on the first step)
is the first parameter, followed by all non-sequences.
Next we initialize the output as a tensor with the same shape and dtype as A,
filled with ones. We give A to scan as a non-sequence parameter and
specify the number of steps k to iterate over our lambda expression.
Scan returns a tuple containing our result (result) and a
dictionary of updates (empty in this case). Note that result
is not the final value, but a matrix containing the value of A**i
for each of the k steps. We want the last value (after k steps) so we compile
a function to return just that. Note that there is an optimization that,
at compile time, detects that you use only the last value of the
result and ensures that scan does not store the intermediate values.
So do not worry if A and k are large.
Iterating over the first dimension of a tensor: Calculating a polynomial
In addition to looping a fixed number of times, scan can iterate over
the leading dimension of tensors (similar to Python’s for x in a_list).
The tensor(s) to be looped over should be provided to scan using the
sequences keyword argument.
Here’s an example that builds a symbolic calculation of a polynomial
from a list of its coefficients:
import numpy
import theano
import theano.tensor as T

coefficients = T.vector("coefficients")
x = T.scalar("x")

max_coefficients_supported = 10000

# Generate the components of the polynomial
components, updates = theano.scan(fn=lambda coefficient, power, free_variable: coefficient * (free_variable ** power),
                                  outputs_info=None,
                                  sequences=[coefficients, T.arange(max_coefficients_supported)],
                                  non_sequences=x)
# Sum them up
polynomial = components.sum()

# Compile a function
calculate_polynomial = theano.function(inputs=[coefficients, x], outputs=polynomial)

# Test
test_coefficients = numpy.asarray([1, 0, 2], dtype=numpy.float32)
test_value = 3
print calculate_polynomial(test_coefficients, test_value)
print 1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2)
There are a few things to note here.
First, we calculate the polynomial by first generating each of its terms, and
then summing them at the end. (We could also have accumulated them along the way, and then
taken the last one, which would have been more memory-efficient, but this is an example.)
Second, since there is no accumulation of results, we can set outputs_info to None. This indicates
to scan that it doesn’t need to pass the prior result to fn.
The general order of function parameters to fn is:
sequences (if any), prior result(s) (if needed), non-sequences (if any)
(a minimal sketch illustrating this ordering follows these notes).
Third, there’s a handy trick used to simulate Python’s enumerate: simply include
T.arange among the sequences.
Fourth, given multiple sequences of uneven lengths, scan will truncate to the shortest of them.
This makes it safe to pass a very long arange, which we need to do for generality, since
arange must have its length specified at creation time.
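Here is a minimal sketch of that parameter ordering (the names step, xs and c are illustrative, not from the original text): fn receives the sequence slice first, the prior result second, and the non-sequence last:
import numpy
import theano
import theano.tensor as T

xs = T.vector("xs")   # a sequence
c = T.scalar("c")     # a non-sequence

# argument order: sequence slice, prior output, non-sequence
def step(x_t, acc_tm1, c):
    return acc_tm1 + c * x_t

outputs, updates = theano.scan(fn=step,
                               sequences=xs,
                               outputs_info=T.as_tensor_variable(numpy.asarray(0, xs.dtype)),
                               non_sequences=c)
scaled_sum = theano.function([xs, c], outputs[-1])
print scaled_sum(numpy.arange(4, dtype=theano.config.floatX), 2.0)  # 12.0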
Simple accumulation into a scalar, ditching lambda
Although this example would seem almost self-explanatory, it stresses a
pitfall to be careful of: the initial output state supplied as
outputs_info must have the same shape as the output variable
generated at each iteration step and, moreover, must not involve an implicit
downcast of the latter.
import numpy as np
import theano
import theano.tensor as T

up_to = T.iscalar("up_to")

# define a named function, rather than using lambda
def accumulate_by_adding(arange_val, sum_to_date):
    return sum_to_date + arange_val
seq = T.arange(up_to)

# An unauthorized implicit downcast from the dtype of 'seq', to that of
# 'T.as_tensor_variable(0)' which is of dtype 'int8' by default would occur
# if this instruction were to be used instead of the next one:
# outputs_info = T.as_tensor_variable(0)

outputs_info = T.as_tensor_variable(np.asarray(0, seq.dtype))
scan_result, scan_updates = theano.scan(fn=accumulate_by_adding,
                                        outputs_info=outputs_info,
                                        sequences=seq)
triangular_sequence = theano.function(inputs=[up_to], outputs=scan_result)

# test
some_num = 15
print triangular_sequence(some_num)
print [n * (n + 1) // 2 for n in xrange(some_num)]
Another simple example
Unlike some of the prior examples, this one is hard to reproduce except by using scan.
This takes a sequence of array indices, and values to place there,
and a “model” output array (whose shape and dtype will be mimicked),
and produces a sequence of arrays with the shape and dtype of the model,
with all values set to zero except at the provided array indices.
location = T.imatrix("location")
values = T.vector("values")
output_model = T.matrix("output_model")

def set_value_at_position(a_location, a_value, output_model):
    zeros = T.zeros_like(output_model)
    zeros_subtensor = zeros[a_location[0], a_location[1]]
    return T.set_subtensor(zeros_subtensor, a_value)

result, updates = theano.scan(fn=set_value_at_position,
                              outputs_info=None,
                              sequences=[location, values],
                              non_sequences=output_model)
assign_values_at_positions = theano.function(inputs=[location, values, output_model], outputs=result)

# test
test_locations = numpy.asarray([[1, 1], [2, 3]], dtype=numpy.int32)
test_values = numpy.asarray([42, 50], dtype=numpy.float32)
test_output_model = numpy.zeros((5, 5), dtype=numpy.float32)
print assign_values_at_positions(test_locations, test_values, test_output_model)
This demonstrates that you can introduce new Theano variables into a scan function.
Multiple outputs, several tap values: Recurrent Neural Network with Scan
The examples above showed simple uses of scan. However, scan also supports
referring not only to the prior result and the current sequence value, but
also to results further back than one step.
This is needed, for example, to implement an RNN using scan. Assume
that our RNN is defined as follows:

x(t) = tanh(W x(t - 1) + W_in_1 u(t) + W_in_2 u(t - 4) + W_feedback y(t - 1))
y(t) = W_out x(t - 3)

Note that this network is far from a classical recurrent neural
network and might be useless. The reason we define it as such
is to better illustrate the features of scan.
In this case we have one sequence u that we need to iterate over,
and two outputs x and y. To implement this with scan we first
construct a function that computes one iteration step:
def oneStep(u_tm4, u_t, x_tm3, x_tm1, y_tm1, W, W_in_1, W_in_2, W_feedback, W_out):
    x_t = T.tanh(theano.dot(x_tm1, W) +
                 theano.dot(u_t, W_in_1) +
                 theano.dot(u_tm4, W_in_2) +
                 theano.dot(y_tm1, W_feedback))
    y_t = theano.dot(x_tm3, W_out)
    return [x_t, y_t]
As a naming convention for the variables, we use a_tmb to mean a at
t-b and a_tpb to mean a at t+b.
Note the order in which the parameters are given and in which the
result is returned. Try to respect chronological order among
the taps (time slices of sequences or outputs). What is crucial for scan
is that the variables representing the different time taps appear in the
same order as the one in which these taps are given, since this is how
scan figures out which variable stands for which slice. Given that we have
all the Theano variables needed, we construct our RNN as follows:
u = T.matrix()    # it is a sequence of vectors
x0 = T.matrix()   # initial state of x has to be a matrix, since
                  # it has to cover x[-3]
y0 = T.vector()   # y0 is just a vector since scan has only to provide
                  # y[-1]

([x_vals, y_vals], updates) = theano.scan(fn=oneStep,
                                          sequences=dict(input=u, taps=[-4, 0]),
                                          outputs_info=[dict(initial=x0, taps=[-3, -1]), y0],
                                          non_sequences=[W, W_in_1, W_in_2, W_feedback, W_out])
# for the second output y, scan adds the default tap -1 automatically
Now x_vals and y_vals are symbolic variables pointing to the
sequence of x and y values generated by iterating over u. The
taps entries give scan exact information about which
slices are needed. Note that if we want to use x[t-k] we do
not need to also have x[t-(k-1)], x[t-(k-2)], ..., but when applying
the compiled function, the numpy array given to represent this sequence
should be large enough to cover these values. Assume that we compile the
above function and give as u the array uvals = [0,1,2,3,4,5,6,7,8].
Abusing notation, scan will consider uvals[0] as u[-4], and
will start scanning from uvals[4] towards the end.
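To see these taps in action end to end, here is a hedged sketch (the dimensions n_in, n_hid and n_out and the random initialization are illustrative, not from the original text) that creates the weights, builds the loop with oneStep as above, compiles it, and runs it on a 9-step input:
import numpy
import theano
import theano.tensor as T

n_in, n_hid, n_out = 3, 4, 2   # illustrative sizes
rng = numpy.random.RandomState(0)

def shared_w(*shape):
    return theano.shared(rng.uniform(size=shape).astype(theano.config.floatX))

W = shared_w(n_hid, n_hid)
W_in_1 = shared_w(n_in, n_hid)
W_in_2 = shared_w(n_in, n_hid)
W_feedback = shared_w(n_out, n_hid)
W_out = shared_w(n_hid, n_out)

u = T.matrix()
x0 = T.matrix()
y0 = T.vector()

([x_vals, y_vals], updates) = theano.scan(fn=oneStep,
                                          sequences=dict(input=u, taps=[-4, 0]),
                                          outputs_info=[dict(initial=x0, taps=[-3, -1]), y0],
                                          non_sequences=[W, W_in_1, W_in_2, W_feedback, W_out])
rnn = theano.function([u, x0, y0], [x_vals, y_vals], updates=updates)

# u[0] plays the role of u[-4], so a 9-row input yields 5 scan steps
xs, ys = rnn(rng.uniform(size=(9, n_in)).astype(theano.config.floatX),
             numpy.zeros((3, n_hid), dtype=theano.config.floatX),
             numpy.zeros(n_out, dtype=theano.config.floatX))
print xs.shape, ys.shape   # (5, 4) (5, 2)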
Using shared variables - Gibbs sampling
Another useful feature of scan is that it can handle shared variables.
For example, if we want to implement a Gibbs chain of length 10 we would do
the following:
W = theano.shared(W_values)        # we assume that ``W_values`` contains the
                                   # initial values of your weight matrix
bvis = theano.shared(bvis_values)
bhid = theano.shared(bhid_values)

trng = T.shared_randomstreams.RandomStreams(1234)

def OneStep(vsample):
    hmean = T.nnet.sigmoid(theano.dot(vsample, W) + bhid)
    hsample = trng.binomial(size=hmean.shape, n=1, p=hmean)
    vmean = T.nnet.sigmoid(theano.dot(hsample, W.T) + bvis)
    return trng.binomial(size=vsample.shape, n=1, p=vmean,
                         dtype=theano.config.floatX)

sample = theano.tensor.vector()

values, updates = theano.scan(OneStep, outputs_info=sample, n_steps=10)

gibbs10 = theano.function([sample], values[-1], updates=updates)
Note that although we use the shared variables W, bvis and bhid, we
do not iterate over them, so scan doesn’t need to know
anything in particular about them, just that they are used inside the
function applied at each step. Hence you do not need to pass them as
arguments: scan will find them on its own and add them to the graph. Of
course, if you wish to (and it is good practice), you can pass them when
you call scan (they would go in the list of non-sequence inputs).
The second, and probably most crucial, observation is that the updates
dictionary becomes important in this case. It links a shared variable
with its updated value after k steps. In this case it tells how the
random streams get updated after 10 iterations. If you do not pass this
updates dictionary to your function, you will always get the same 10
sets of random numbers. You can even use the updates dictionary
afterwards. Look at this example:
a = theano.shared(1)
values, updates = theano.scan(lambda: {a: a+1}, n_steps=10)
In this case the lambda expression does not require any input parameters
and returns an update dictionary which tells how a should be updated
after each step of scan. If we write:
b = a + 1
c = updates[a] + 1
f = theano.function([], [b, c], updates=updates)

print f()       # [array(2), array(12)]
print a.value   # 11
We will see that because b does not use the updated version of
a, it will be 2, c will be 12, while a.value is 11.
If we call the function again, b will become 12, c will be 22
and a.value 21.
If we do not pass the updates dictionary to the function, then
a.value will always remain 1, b will always be 2 and c
will always be 12.
Conditional ending of Scan
Scan can also be used as a repeat-until block. In such a case scan
will stop when either the maximal number of iterations is reached, or the
provided condition evaluates to True.
As an example, we will compute all powers of two smaller than some provided
value max_value.
def power_of_2(previous_power, max_value):
    return previous_power * 2, theano.scan_module.until(previous_power * 2 > max_value)

max_value = T.scalar()
values, _ = theano.scan(power_of_2,
                        outputs_info=T.constant(1.),
                        non_sequences=max_value,
                        n_steps=1024)

f = theano.function([max_value], values)

print f(45)
As you can see, in order to terminate on a condition, the only thing required
is that the inner function power_of_2 also return the condition,
wrapped in the class theano.scan_module.until. The condition has to be
expressed in terms of the arguments of the inner function (in this case
previous_power and max_value).
As a rule, scan always expects the condition to be the last thing returned
by the inner function; otherwise an error will be raised.
Reference
This module provides the Scan Op.
Scanning is a general form of recurrence, which can be used for looping.
The idea is that you scan a function along some input sequence, producing
an output at each time-step that can be seen (but not modified) by the
function at the next time-step. (Technically, the function can see the
previous K time-steps of your outputs and L time-steps (from the past and
future) of your inputs.)
So for example, sum() could be computed by scanning the z+x_i
function over a list, given an initial state of z=0.
Special cases:
- A reduce operation can be performed by returning only the last
output of a scan.
- A map operation can be performed by applying a function that
ignores previous steps of the outputs.
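As a hedged illustration of the reduce special case above (the names zs and acc are illustrative), a running sum can be written as a scan whose last output is kept; a map would instead set outputs_info to None and ignore the accumulator:
import numpy
import theano
import theano.tensor as T

zs = T.vector("zs")
# accumulate z + x_i along the sequence, starting from z = 0
partial_sums, _ = theano.scan(lambda z_i, acc: acc + z_i,
                              sequences=zs,
                              outputs_info=T.as_tensor_variable(numpy.asarray(0, zs.dtype)))
total = theano.function([zs], partial_sums[-1])   # reduce: keep only the last step
print total(numpy.arange(5, dtype=theano.config.floatX))   # 10.0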
Often a for-loop can be expressed as a scan() operation, and scan is
the closest that Theano comes to looping. The advantage of using scan
over for loops is that it allows the number of iterations to be a part of
the symbolic graph.
The Scan Op should typically be used by calling any of the following
functions: scan(), map(), reduce(), foldl(),
foldr().
theano.map(fn, sequences, non_sequences=None, truncate_gradient=-1, go_backwards=False, mode=None, name=None)
Similar behaviour to Python’s map.

Parameters:
- fn – The function that map applies at each iteration step
(see scan for more info).
- sequences – List of sequences over which map iterates
(see scan for more info).
- non_sequences – List of arguments passed to fn. map will
not iterate over these arguments (see scan for
more info).
- truncate_gradient – See scan.
- go_backwards – Boolean value that decides the direction of
iteration. True means that sequences are parsed
from the end towards the beginning, while False
is the other way around.
- mode – See scan.
- name – See scan.
theano.reduce(fn, sequences, outputs_info, non_sequences=None, go_backwards=False, mode=None, name=None)
Similar behaviour to Python’s reduce.

Parameters:
- fn – The function that reduce applies at each iteration step
(see scan for more info).
- sequences – List of sequences over which reduce iterates
(see scan for more info).
- outputs_info – List of dictionaries describing the outputs of
reduce (see scan for more info).
- non_sequences – List of arguments passed to fn. reduce will
not iterate over these arguments (see scan for
more info).
- go_backwards – Boolean value that decides the direction of
iteration. True means that sequences are parsed
from the end towards the beginning, while False
is the other way around.
- mode – See scan.
- name – See scan.
theano.foldl(fn, sequences, outputs_info, non_sequences=None, mode=None, name=None)
Similar behaviour to Haskell’s foldl.

Parameters:
- fn – The function that foldl applies at each iteration step
(see scan for more info).
- sequences – List of sequences over which foldl iterates
(see scan for more info).
- outputs_info – List of dictionaries describing the outputs of
foldl (see scan for more info).
- non_sequences – List of arguments passed to fn. foldl will
not iterate over these arguments (see scan for
more info).
- mode – See scan.
- name – See scan.
theano.foldr(fn, sequences, outputs_info, non_sequences=None, mode=None, name=None)
Similar behaviour to Haskell’s foldr.

Parameters:
- fn – The function that foldr applies at each iteration step
(see scan for more info).
- sequences – List of sequences over which foldr iterates
(see scan for more info).
- outputs_info – List of dictionaries describing the outputs of
foldr (see scan for more info).
- non_sequences – List of arguments passed to fn. foldr will
not iterate over these arguments (see scan for
more info).
- mode – See scan.
- name – See scan.
theano.scan(fn, sequences=None, outputs_info=None, non_sequences=None, n_steps=None, truncate_gradient=-1, go_backwards=False, mode=None, name=None, profile=False)
This function constructs and applies a Scan op to the provided
arguments.
Parameters:
- fn –
fn is a function that describes the operations involved in one
step of scan. fn should construct variables describing the
output of one iteration step. It should expect as input Theano
variables representing all the slices of the input sequences
and previous values of the outputs, as well as all other arguments
given to scan as non_sequences. The order in which scan passes
these variables to fn is the following:
- all time slices of the first sequence
- all time slices of the second sequence
- ...
- all time slices of the last sequence
- all past slices of the first output
- all past slices of the second output
- ...
- all past slices of the last output
- all other arguments (the list given as non_sequences to scan)
The order of the sequences is the same as the one in the list
sequences given to scan. The order of the outputs is the same
as the order of outputs_info. For any sequence or output, the
order of the time slices is the same as the one in which they have
been given as taps. For example, if one writes the following:
scan(fn, sequences=[dict(input=Sequence1, taps=[-3, 2, -1]),
                    Sequence2,
                    dict(input=Sequence3, taps=3)],
     outputs_info=[dict(initial=Output1, taps=[-3, -5]),
                   dict(initial=Output2, taps=None),
                   Output3],
     non_sequences=[Argument1, Argument2])
fn should expect the following arguments in this given order:
- Sequence1[t-3]
- Sequence1[t+2]
- Sequence1[t-1]
- Sequence2[t]
- Sequence3[t+3]
- Output1[t-3]
- Output1[t-5]
- Output3[t-1]
- Argument1
- Argument2
The list of non_sequences can also contain shared variables
used in the function, though scan is able to figure those
out on its own, so they can be skipped. For clarity of the
code, we nonetheless recommend providing them to scan. To some extent,
scan can also figure out other non-sequences (not shared),
even if not passed to scan (but used by fn). A simple example of
this would be:
import theano.tensor as TT

W = TT.matrix()
W_2 = W**2

def f(x):
    return TT.dot(x, W_2)
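As a hedged continuation of this sketch (the scan call itself is illustrative, not from the original text): W_2 is used inside f but is never passed to scan, and scan still finds it in the graph:
s = TT.matrix()
# W_2 is picked up automatically even though only s is passed as a sequence
results, updates = theano.scan(f, sequences=s)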
The function fn is expected to return two things. One is a list of
outputs ordered in the same order as outputs_info, with the
difference that there should be only one output variable per
output initial state (even if no tap value is used). Second,
fn should return an update dictionary (that tells how to
update any shared variable after each iteration step). The
dictionary can optionally be given as a list of tuples. There is
no constraint on the order of these two lists: fn can return
either (outputs_list, update_dictionary) or
(update_dictionary, outputs_list) or just one of the two (in
case the other is empty).
To use scan as a while loop, the user needs to change the
function fn such that it also returns a stopping condition.
To do so, the condition has to be wrapped in an until class.
The condition should be returned as a third element, for example:
...
return [y1_t, y2_t], {x: x + 1}, theano.scan_module.until(x < 50)
Note that a number of steps (considered here as the maximum
number of steps) is still required even though a condition is
passed (it is used to allocate memory if needed).
- sequences –
sequences is the list of Theano variables or dictionaries
describing the sequences scan has to iterate over. If a
sequence is given wrapped in a dictionary, a set of optional
information can be provided about it. The dictionary
should have the following keys:
- input (mandatory) – Theano variable representing the
sequence.
- taps – Temporal taps of the sequence required by fn.
They are provided as a list of integers, where a value k
implies that at iteration step t scan will pass to fn
the slice t+k. The default value is [0].
Any Theano variable in the list sequences is automatically
wrapped into a dictionary where taps is set to [0].
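For instance, here is a hedged sketch (not from the original text) that uses two taps on one sequence to compute the first differences x[t] - x[t-1]:
import numpy
import theano
import theano.tensor as T

x = T.vector("x")
# taps=[-1, 0]: at step t, fn receives x[t-1] first and x[t] second, in tap order
diffs, _ = theano.scan(lambda x_tm1, x_t: x_t - x_tm1,
                       sequences=dict(input=x, taps=[-1, 0]))
first_diff = theano.function([x], diffs)
print first_diff(numpy.asarray([1., 4., 9., 16.], dtype=theano.config.floatX))   # [ 3.  5.  7.]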
- outputs_info –
outputs_info is the list of Theano variables or dictionaries
describing the initial state of the outputs computed
recurrently. When an initial state is given as a dictionary,
optional information can be provided about the output corresponding
to that initial state. The dictionary should have the following
keys:
- initial – Theano variable that represents the initial
state of a given output. In case the output is not computed
recursively (think of a map) and does not require an initial
state, this field can be skipped. Given that (only) the previous
time step of the output is used by fn, the initial state
should have the same shape as the output and should not
involve a downcast of the data type of the output. If multiple
time taps are used, the initial state should have one extra
dimension that covers all the possible taps. For example,
if we use -5, -2 and -1 as past taps, at step 0
fn will require (by an abuse of notation) output[-5],
output[-2] and output[-1]. These will be given by
the initial state, which in this case should have the shape
(5,) + output.shape. If this variable containing the initial
state is called init_y, then init_y[0] corresponds to
output[-5], init_y[1] corresponds to output[-4],
init_y[2] corresponds to output[-3], init_y[3]
corresponds to output[-2], and init_y[4] corresponds to
output[-1]. While this order might seem strange, it comes
naturally from splitting an array at a given point. Assume that
we have an array x, and we choose k to be time step
0. Then our initial state would be x[:k], while the
output will be x[k:]. Looking at this split, elements in
x[:k] are ordered exactly like those in init_y.
- taps – Temporal taps of the output that will be passed to
fn. They are provided as a list of negative integers,
where a value k implies that at iteration step t scan
will pass to fn the slice t+k.
scan will follow this logic if partial information is given:
- If an output is not wrapped in a dictionary, scan will wrap
it in one assuming that you use only the last step of the output
(i.e. it makes your tap value list equal to [-1]).
- If you wrap an output in a dictionary and you do not provide any
taps but you provide an initial state it will assume that you are
using only a tap value of -1.
- If you wrap an output in a dictionary but you do not provide any
initial state, it assumes that you are not using any form of
taps.
- If you provide a None instead of a variable or an empty
dictionary, scan assumes that you will not use any taps for
this output (as, for example, in the case of a map).
If outputs_info is an empty list or None, scan assumes
that no tap is used for any of the outputs. If information is
provided just for a subset of the outputs, an exception is
raised (because there is no convention on how scan should map
the provided information to the outputs of fn).
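As a hedged sketch of the multiple-taps rule above (not from the original text): with taps -2 and -1, the initial state needs a leading dimension of 2, ordered oldest first, as in this Fibonacci example:
import numpy
import theano
import theano.tensor as T

n_steps = T.iscalar("n_steps")
# init_f[0] stands for output[-2] and init_f[1] for output[-1]
init_f = T.as_tensor_variable(numpy.asarray([0, 1], dtype="int64"))
fibs, _ = theano.scan(lambda f_tm2, f_tm1: f_tm2 + f_tm1,
                      outputs_info=dict(initial=init_f, taps=[-2, -1]),
                      n_steps=n_steps)
fibonacci = theano.function([n_steps], fibs)
print fibonacci(8)   # [ 1  2  3  5  8 13 21 34]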
- non_sequences – non_sequences is the list of arguments that are passed to
fn at each step. One can opt to exclude variables
used in fn from this list, as long as they are part of the
computational graph, though for clarity we encourage not to do so.
- n_steps – n_steps is the number of steps to iterate, given as an int
or Theano scalar. If any of the input sequences do not have
enough elements, scan will raise an error. If the value is 0, the
outputs will have 0 rows. If the value is negative, scan
will run backwards in time. If the go_backwards flag is already
set and n_steps is also negative, scan will run forward
in time. If n_steps is not provided, scan will figure
out the number of steps it should run given its input sequences.
- truncate_gradient – truncate_gradient is the number of steps to use in truncated
BPTT. If you compute gradients through a scan op, they are
computed using backpropagation through time. By providing a
value different from -1, you choose to use truncated BPTT instead
of classical BPTT, where you go back only truncate_gradient
steps in time.
- go_backwards – go_backwards is a flag indicating if scan should go
backwards through the sequences. If you think of each sequence
as indexed by time, making this flag True would mean that
scan goes back in time, namely that for any sequence it
starts from the end and goes towards 0.
- name – When profiling scan, it is crucial to provide a name for any
instance of scan. The profiler will produce an overall
profile of your code as well as profiles for the computation of
one step of each instance of scan. The name of the instance
appears in those profiles and can greatly help to disambiguate
information.
- mode – It is recommended to leave this argument set to None, especially
when profiling scan (otherwise the results are not going to
be accurate). If you prefer the computations of one step of
scan to be done differently than the entire function, you
can use this parameter to describe how the computations in this
loop are done (see theano.function for details about
possible values and their meaning).
- profile – Flag or string. If true, or a non-empty string, a
profile object will be created and attached to the inner graph of
scan. If profile is True, the profile object will have the
name of the scan instance; otherwise it will have the passed string.
The profile object collects (and prints) information only when the
inner graph is run with the new CVM linker (with other modes and
linkers this argument has no effect).
Return type: tuple
Returns: a tuple of the form (outputs, updates); outputs is either a
Theano variable or a list of Theano variables representing the
outputs of scan (in the same order as in
outputs_info). updates is a subclass of dictionary
specifying the update rules for all shared variables used in scan.
This dictionary should be passed to theano.function when
you compile your function. The difference from a normal
dictionary is that keys are validated to be SharedVariables
and the addition of those dictionaries is validated to be consistent.