TensorFlow: Upstream changes to git. · nipengmath/tensorflow@6ec6362 · GitHub

Commit 6ec6362 (1 parent: f483e39)
Author: Vijay Vasudevan

TensorFlow: Upstream changes to git.

Changes:
- Update a lot of documentation, installation instructions, requirements, etc.
- Add RNN models directory for recurrent neural network examples to go along
  with the tutorials.

Base CL: 107290480


47 files changed: +4223 −160 lines

CONTRIBUTING.md

Lines changed: 11 additions & 0 deletions

```diff
@@ -15,3 +15,14 @@ Follow either of the two links above to access the appropriate CLA and instructi
 
 ***NOTE***: Only original source code from you and other people that have signed the CLA can be accepted into the main repository.
 
+## Contributing code
+
+We currently use Gerrit to host and handle code changes to TensorFlow. The main
+site is
+[https://tensorflow-review.googlesource.com/](https://tensorflow-review.googlesource.com/).
+See Gerrit [docs](https://gerrit-review.googlesource.com/Documentation/) for
+information on how Gerrit's code review system works.
+
+We are currently working on improving our external acceptance process, so
+please be patient with us as we work out the details.
+
```

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -14,7 +14,7 @@ variety of other domains, as well.
 # Download and Setup
 
 For detailed installation instructions, see
-[here](g3doc/get_started/os_setup.md).
+[here](tensorflow/g3doc/get_started/os_setup.md).
 
 ## Binary Installation
 
```

tensorflow/g3doc/api_docs/python/framework.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -27,7 +27,7 @@
 * [class tf.RegisterShape](#RegisterShape)
 * [class tf.TensorShape](#TensorShape)
 * [class tf.Dimension](#Dimension)
-* [tf.op_scope(*args, **kwds)](#op_scope)
+* [tf.op_scope(values, name, default_name)](#op_scope)
 * [tf.get_seed(op_seed)](#get_seed)
 
 
@@ -235,7 +235,7 @@ def my_func(pred, tensor):
 
 - - -
 
-#### tf.Graph.device(*args, **kwds) {#Graph.device}
+#### tf.Graph.device(device_name_or_function) {#Graph.device}
 
 Returns a context manager that specifies the default device to use.
 
@@ -287,7 +287,7 @@ with g.device(matmul_on_gpu):
 
 - - -
 
-#### tf.Graph.name_scope(*args, **kwds) {#Graph.name_scope}
+#### tf.Graph.name_scope(name) {#Graph.name_scope}
 
 Returns a context manager that creates hierarchical names for operations.
 
@@ -611,7 +611,7 @@ the default graph.
 
 - - -
 
-#### tf.Graph.gradient_override_map(*args, **kwds) {#Graph.gradient_override_map}
+#### tf.Graph.gradient_override_map(op_type_map) {#Graph.gradient_override_map}
 
 EXPERIMENTAL: A context manager for overriding gradient functions.
 
@@ -2023,7 +2023,7 @@ The value of this dimension, or None if it is unknown.
 
 - - -
 
-### tf.op_scope(*args, **kwds) <div class="md-anchor" id="op_scope">{#op_scope}</div>
+### tf.op_scope(values, name, default_name) <div class="md-anchor" id="op_scope">{#op_scope}</div>
 
 Returns a context manager for use when defining a Python op.
 
```
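The `tf.op_scope(values, name, default_name)` signature documented above is the pattern used when defining composite Python ops. A minimal usage sketch, assuming the TF 0.x API this diff documents (`my_op` is a hypothetical example op, not part of the commit):

```python
import tensorflow as tf

def my_op(a, b, name=None):
    # op_scope checks that `a` and `b` are convertible to tensors and yields
    # a uniquified name ("MyOp", "MyOp_1", ...) to use for the result.
    with tf.op_scope([a, b], name, "MyOp") as scope:
        a = tf.convert_to_tensor(a, name="a")
        b = tf.convert_to_tensor(b, name="b")
        return tf.add(a, b, name=scope)
```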

tensorflow/g3doc/api_docs/python/index.md

Lines changed: 0 additions & 1 deletion

```diff
@@ -267,7 +267,6 @@
 * [`depthwise_conv2d`](nn.md#depthwise_conv2d)
 * [`dropout`](nn.md#dropout)
 * [`embedding_lookup`](nn.md#embedding_lookup)
-* [`embedding_lookup_sparse`](nn.md#embedding_lookup_sparse)
 * [`fixed_unigram_candidate_sampler`](nn.md#fixed_unigram_candidate_sampler)
 * [`in_top_k`](nn.md#in_top_k)
 * [`l2_loss`](nn.md#l2_loss)
```

tensorflow/g3doc/api_docs/python/nn.md

Lines changed: 30 additions & 90 deletions

```diff
@@ -35,7 +35,6 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
 * [tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)](#softmax_cross_entropy_with_logits)
 * [Embeddings](#AUTOGENERATED-embeddings)
 * [tf.nn.embedding_lookup(params, ids, name=None)](#embedding_lookup)
-* [tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, name=None, combiner='mean')](#embedding_lookup_sparse)
 * [Evaluation](#AUTOGENERATED-evaluation)
 * [tf.nn.top_k(input, k, name=None)](#top_k)
 * [tf.nn.in_top_k(predictions, targets, k, name=None)](#in_top_k)
```
```diff
@@ -130,17 +129,18 @@ sum is unchanged.
 By default, each element is kept or dropped independently. If `noise_shape`
 is specified, it must be
 [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-to the shape of `x`, and only dimensions with `noise_shape[i] == x.shape[i]`
-will make independent decisions. For example, if `x.shape = [b, x, y, c]` and
-`noise_shape = [b, 1, 1, c]`, each batch and channel component will be
+to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`
+will make independent decisions. For example, if `shape(x) = [k, l, m, n]`
+and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be
 kept independently and each row and column will be kept or not kept together.
 
 ##### Args:
 
 
 * <b>x</b>: A tensor.
-* <b>keep_prob</b>: Float probability that each element is kept.
-* <b>noise_shape</b>: Shape for randomly generated keep/drop flags.
+* <b>keep_prob</b>: A Python float. The probability that each element is kept.
+* <b>noise_shape</b>: A 1-D `Tensor` of type `int32`, representing the
+  shape for randomly generated keep/drop flags.
 * <b>seed</b>: A Python integer. Used to create a random seed.
   See [`set_random_seed`](constant_op.md#set_random_seed) for behavior.
 * <b>name</b>: A name for this operation (optional).
```
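As context for the `noise_shape` semantics described in this hunk, a NumPy sketch of the broadcast behavior (illustrative only, not the TensorFlow kernel):

```python
import numpy as np

def dropout_sketch(x, keep_prob, noise_shape, seed=None):
    """Mimics tf.nn.dropout's noise_shape broadcasting, per the doc above."""
    rng = np.random.RandomState(seed)
    # One random draw per element of noise_shape; broadcasting then reuses
    # each draw across every dimension where noise_shape[i] == 1.
    keep = rng.uniform(size=noise_shape) < keep_prob
    # Kept elements are scaled by 1 / keep_prob so the expected sum is unchanged.
    return np.where(keep, x / keep_prob, 0.0)

x = np.ones([2, 3, 3, 4])                         # shape(x) = [k, l, m, n]
y = dropout_sketch(x, 0.5, [2, 1, 1, 4], seed=0)  # noise_shape = [k, 1, 1, n]
# Each (batch, channel) pair is kept or dropped as one whole 3x3 slab.
```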
```diff
@@ -247,10 +247,10 @@ are as follows. If the 4-D `input` has shape
 `[batch, in_height, in_width, ...]` and the 4-D `filter` has shape
 `[filter_height, filter_width, ...]`, then
 
-    output.shape = [batch,
-                    (in_height - filter_height + 1) / strides[1],
-                    (in_width - filter_width + 1) / strides[2],
-                    ...]
+    shape(output) = [batch,
+                     (in_height - filter_height + 1) / strides[1],
+                     (in_width - filter_width + 1) / strides[2],
+                     ...]
 
     output[b, i, j, :] =
         sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, ...] *
@@ -262,7 +262,7 @@ vectors. For `depthwise_conv_2d`, each scalar component `input[b, i, j, k]`
 is multiplied by a vector `filter[di, dj, k]`, and all the vectors are
 concatenated.
 
-In the formula for `output.shape`, the rounding direction depends on padding:
+In the formula for `shape(output)`, the rounding direction depends on padding:
 
 * `padding = 'SAME'`: Round down (only full size windows are considered).
 * `padding = 'VALID'`: Round up (partial windows are included).
@@ -411,7 +411,7 @@ In detail, the output is
 
 for each tuple of indices `i`. The output shape is
 
-    output.shape = (value.shape - ksize + 1) / strides
+    shape(output) = (shape(value) - ksize + 1) / strides
 
 where the rounding direction depends on padding:
```
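The `shape(output)` formula above can be sanity-checked with a few lines of arithmetic. A sketch that implements the formula exactly as stated in this doc (rounding down for `'SAME'`, up for `'VALID'`), purely for illustration:

```python
import math

def conv_output_size(in_size, filter_size, stride, padding):
    """Spatial output size per the shape(output) formula in this doc."""
    exact = (in_size - filter_size + 1) / stride
    # Per the text above: 'SAME' rounds down, 'VALID' rounds up.
    return math.floor(exact) if padding == "SAME" else math.ceil(exact)

# 10x10 input, 3x3 filter, stride 3: (10 - 3 + 1) / 3 = 8 / 3
print(conv_output_size(10, 3, 3, "VALID"))  # ceil(8/3)  -> 3
print(conv_output_size(10, 3, 3, "SAME"))   # floor(8/3) -> 2
```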

```diff
@@ -722,103 +722,43 @@ and the same dtype (either `float32` or `float64`).
 
 ## Embeddings <div class="md-anchor" id="AUTOGENERATED-embeddings">{#AUTOGENERATED-embeddings}</div>
 
-TensorFlow provides several operations that help you compute embeddings.
+TensorFlow provides library support for looking up values in embedding
+tensors.
 
 - - -
 
 ### tf.nn.embedding_lookup(params, ids, name=None) <div class="md-anchor" id="embedding_lookup">{#embedding_lookup}</div>
 
-Return a tensor of embedding values by looking up "ids" in "params".
+Looks up `ids` in a list of embedding tensors.
 
-##### Args:
-
-
-* <b>params</b>: List of tensors of the same shape. A single tensor is
-  treated as a singleton list.
-* <b>ids</b>: Tensor of integers containing the ids to be looked up in
-  'params'. Let P be len(params). If P > 1, then the ids are
-  partitioned by id % P, and we do separate lookups in params[p]
-  for 0 <= p < P, and then stitch the results back together into
-  a single result tensor.
-* <b>name</b>: Optional name for the op.
-
-##### Returns:
-
-  A tensor of shape ids.shape + params[0].shape[1:] containing the
-  values params[i % P][i] for each i in ids.
-
-##### Raises:
-
-
-* <b>ValueError</b>: if some parameters are invalid.
+This function is used to perform parallel lookups on the list of
+tensors in `params`. It is a generalization of
+[`tf.gather()`](array_ops.md#gather), where `params` is interpreted
+as a partition of a larger embedding tensor.
 
+If `len(params) > 1`, each element `id` of `ids` is partitioned between
+the elements of `params` by computing `p = id % len(params)`, and is
+then used to look up the slice `params[p][id // len(params), ...]`.
 
-- - -
-
-### tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, name=None, combiner='mean') <div class="md-anchor" id="embedding_lookup_sparse">{#embedding_lookup_sparse}</div>
-
-Computes embeddings for the given ids and weights.
-
-This op assumes that there is at least one id for each row in the dense tensor
-represented by sp_ids (i.e. there are no rows with empty features), and that
-all the indices of sp_ids are in canonical row-major order.
-
-It also assumes that all id values lie in the range [0, p0), where p0
-is the sum of the size of params along dimension 0.
+The results of the lookup are then concatenated into a dense
+tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
 
 ##### Args:
 
 
-* <b>params</b>: A single tensor representing the complete embedding tensor,
-  or a list of P tensors all of same shape except for the first dimension,
-  representing sharded embedding tensors. In the latter case, the ids are
-  partitioned by id % P, and we do separate lookups in params[p] for
-  0 <= p < P, and then stitch the results back together into a single
-  result tensor. The first dimension is allowed to vary as the vocab
-  size is not necessarily a multiple of P.
-* <b>sp_ids</b>: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
-  where N is typically batch size and M is arbitrary.
-* <b>sp_weights</b>: either a SparseTensor of float / double weights, or None to
-  indicate all weights should be taken to be 1. If specified, sp_weights
-  must have exactly the same shape and indices as sp_ids.
-* <b>name</b>: Optional name for the op.
-* <b>combiner</b>: A string specifying the reduction op. Currently "mean" and "sum"
-  are supported.
-  "sum" computes the weighted sum of the embedding results for each row.
-  "mean" is the weighted sum divided by the total weight.
+* <b>params</b>: A list of tensors with the same shape and type.
+* <b>ids</b>: A `Tensor` with type `int32` containing the ids to be looked
+  up in `params`.
+* <b>name</b>: A name for the operation (optional).
 
 ##### Returns:
 
-  A dense tensor representing the combined embeddings for the
-  sparse ids. For each row in the dense tensor represented by sp_ids, the op
-  looks up the embeddings for all ids in that row, multiplies them by the
-  corresponding weight, and combines these embeddings as specified.
-
-  In other words, if
-    shape(combined params) = [p0, p1, ..., pm]
-  and
-    shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
-  then
-    shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
-
-  For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
-
-    [0, 0]: id 1, weight 2.0
-    [0, 1]: id 3, weight 0.5
-    [1, 0]: id 0, weight 1.0
-    [2, 3]: id 1, weight 3.0
-
-  with combiner="mean", then the output will be a 3x20 matrix where
-    output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
-    output[1, :] = params[0, :] * 1.0
-    output[2, :] = params[1, :] * 3.0
+  A `Tensor` with the same type as the tensors in `params`.
 
 ##### Raises:
 
 
-* <b>TypeError</b>: If sp_ids is not a SparseTensor, or if sp_weights is neither
-  None nor SparseTensor.
-* <b>ValueError</b>: If combiner is not one of {"mean", "sum"}.
+* <b>ValueError</b>: If `params` is empty.
 
 
 
```
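The mod-based partitioning rule in the rewritten docstring can be stated directly in NumPy. A minimal sketch of the lookup (illustrative, not the TensorFlow implementation):

```python
import numpy as np

def embedding_lookup_sketch(params, ids):
    """Looks up each id via params[id % P][id // P, ...], per the doc above."""
    p = len(params)
    rows = [params[i % p][i // p] for i in ids]
    # Results are concatenated into a dense tensor of shape
    # shape(ids) + shape(params)[1:].
    return np.stack(rows)

# A vocabulary of 6 embedding rows sharded across two tensors by id % 2:
shard0 = np.array([[0., 0.], [2., 2.], [4., 4.]])  # holds ids 0, 2, 4
shard1 = np.array([[1., 1.], [3., 3.], [5., 5.]])  # holds ids 1, 3, 5
print(embedding_lookup_sketch([shard0, shard1], [0, 3, 4]))
# [[0. 0.]
#  [3. 3.]
#  [4. 4.]]
```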

tensorflow/g3doc/api_docs/python/state_ops.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -23,7 +23,7 @@ accepted by [`tf.convert_to_tensor`](framework.md#convert_to_tensor).
 * [Sharing Variables](#AUTOGENERATED-sharing-variables)
 * [tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None, trainable=True, collections=None)](#get_variable)
 * [tf.get_variable_scope()](#get_variable_scope)
-* [tf.variable_scope(*args, **kwds)](#variable_scope)
+* [tf.variable_scope(name_or_scope, reuse=None, initializer=None)](#variable_scope)
 * [tf.constant_initializer(value=0.0)](#constant_initializer)
 * [tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None)](#random_normal_initializer)
 * [tf.truncated_normal_initializer(mean=0.0, stddev=1.0, seed=None)](#truncated_normal_initializer)
@@ -896,7 +896,7 @@ Returns the current variable scope.
 
 - - -
 
-### tf.variable_scope(*args, **kwds) <div class="md-anchor" id="variable_scope">{#variable_scope}</div>
+### tf.variable_scope(name_or_scope, reuse=None, initializer=None) <div class="md-anchor" id="variable_scope">{#variable_scope}</div>
 
 Returns a context for variable scope.
 
```
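A short usage sketch for the expanded `tf.variable_scope(name_or_scope, reuse=None, initializer=None)` signature listed above, following the variable-sharing pattern these docs describe:

```python
import tensorflow as tf

with tf.variable_scope("foo", initializer=tf.constant_initializer(0.0)):
    # Created as "foo/v"; picks up the scope's default initializer.
    v = tf.get_variable("v", shape=[1])

with tf.variable_scope("foo", reuse=True):
    # reuse=True retrieves the existing "foo/v" instead of creating it.
    v1 = tf.get_variable("v", shape=[1])

assert v1 is v
```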

tensorflow/g3doc/get_started/os_setup.md

Lines changed: 36 additions & 3 deletions

````diff
@@ -36,7 +36,33 @@ Install TensorFlow (only CPU binary version is currently available).
 $ sudo pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
 ```
 
-### Try your first TensorFlow program
+## Docker-based installation
+
+We also support running TensorFlow via [Docker](http://docker.com/), which lets
+you avoid worrying about setting up dependencies.
+
+First, [install Docker](http://docs.docker.com/engine/installation/). Once
+Docker is up and running, you can start a container with one command:
+
+```sh
+$ docker run -it b.gcr.io/tensorflow/tensorflow
+```
+
+This will start a container with TensorFlow and all its dependencies already
+installed.
+
+### Additional images
+
+The default Docker image above contains just a minimal set of libraries for
+getting up and running with TensorFlow. We also have several other containers,
+which you can use in the `docker run` command above:
+
+* `b.gcr.io/tensorflow/tensorflow-full`: Contains a complete TensorFlow source
+  installation, including all utilities needed to build and run TensorFlow. This
+  makes it easy to experiment directly with the source, without needing to
+  install any of the dependencies described above.
+
+## Try your first TensorFlow program
 
 ```sh
 $ python
````

````diff
@@ -133,6 +159,13 @@ $ sudo apt-get install python-numpy swig python-dev
 In order to build TensorFlow with GPU support, both Cuda Toolkit 7.0 and CUDNN
 6.5 V2 from NVIDIA need to be installed.
 
+TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.5. Supported cards include but are not limited to:
+
+* NVidia Titan
+* NVidia Titan X
+* NVidia K20
+* NVidia K40
+
 ##### Download and install Cuda Toolkit 7.0
 
 https://developer.nvidia.com/cuda-toolkit-70
````

````diff
@@ -227,7 +260,7 @@ Notes : You need to install
 Follow installation instructions [here](http://docs.scipy.org/doc/numpy/user/install.html).
 
 
-### Create the pip package and install
+### Create the pip package and install {#create-pip}
 
 ```sh
 $ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
````

````diff
@@ -238,7 +271,7 @@ $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
 $ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
 ```
 
-### Train your first TensorFlow neural net model
+## Train your first TensorFlow neural net model
 
 From the root of your source tree, run:
 
````

tensorflow/g3doc/how_tos/adding_an_op/index.md

Lines changed: 3 additions & 4 deletions

````diff
@@ -127,10 +127,9 @@ To do this for the `ZeroOut` op, add the following to `zero_out.cc`:
 REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU), ZeroOutOp);
 ```
 
-TODO: instructions or pointer to building TF
-
-At this point, the Tensorflow system can reference and use the Op when
-requested.
+Once you
+[build and reinstall TensorFlow](../../get_started/os_setup.md#create-pip), the
+Tensorflow system can reference and use the Op when requested.
 
 ## Generate the client wrapper <div class="md-anchor" id="AUTOGENERATED-generate-the-client-wrapper">{#AUTOGENERATED-generate-the-client-wrapper}</div>
 ### The Python Op wrapper <div class="md-anchor" id="AUTOGENERATED-the-python-op-wrapper">{#AUTOGENERATED-the-python-op-wrapper}</div>
````
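For readers following the tutorial: `ZeroOut` copies its input tensor, keeping only the first element and zeroing the rest. A NumPy sketch of the behavior the registered kernel implements (illustrative only; the real implementation is the C++ `ZeroOutOp` referenced above):

```python
import numpy as np

def zero_out_sketch(x):
    """Keep element 0 of the flattened input, zero everything else."""
    out = np.zeros_like(x)
    out.flat[0] = x.flat[0]
    return out

print(zero_out_sketch(np.array([1, 2, 3, 4])))  # [1 0 0 0]
```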

tensorflow/g3doc/how_tos/variables/index.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -101,7 +101,7 @@ w_twice = tf.Variable(weights.initialized_value() * 0.2, name="w_twice")
 The convenience function `tf.initialize_all_variables()` adds an Op to
 initialize *all variables* in the model. You can also pass it an explicit list
 of variables to initialize. See the
-[Variables Documentation](../../api_docs/python/state_op.md) for more options,
+[Variables Documentation](../../api_docs/python/state_ops.md) for more options,
 including checking if variables are initialized.
 
 ## Saving and Restoring
```
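A minimal sketch of the initialization pattern this page describes, using the TF 0.x API names documented here:

```python
import tensorflow as tf

weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")

# Add a single Op that initializes all variables in the model.
init_op = tf.initialize_all_variables()

with tf.Session() as sess:
    # Variables must be initialized before any other Op uses them.
    sess.run(init_op)
```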
