
Commit 27c7dd7

Author: Vijay Vasudevan (committed)
Merge pull request tensorflow#765 from dongjoon-hyun/fix_typos_in_tutorials. Closes tensorflow#765
2 parents 0b1db1c + ec98c71 · commit 27c7dd7

File tree: 3 files changed, +6 −6 lines

tensorflow/g3doc/tutorials/mnist/tf/index.md

Lines changed: 1 addition & 1 deletion
@@ -390,7 +390,7 @@ summary_writer = tf.train.SummaryWriter(FLAGS.train_dir,
 ```
 
 Lastly, the events file will be updated with new summary values every time the
-`summary_op` is run and the ouput passed to the writer's `add_summary()`
+`summary_op` is run and the output passed to the writer's `add_summary()`
 function.
 
 ```python
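
For context on the pattern this hunk touches, here is a minimal sketch of the summary-writing loop the tutorial describes, using the era's `tf.train.SummaryWriter` API visible in the hunk header. The training names (`train_op`, `loss`, `feed_dict`, `max_steps`, `sess`) are assumed stand-ins for the tutorial's own variables, not quoted from it:

```python
import tensorflow as tf

# Sketch only: train_op, loss, feed_dict, max_steps, sess and FLAGS.train_dir
# are assumed to be defined as in the surrounding tutorial.
summary_op = tf.merge_all_summaries()          # old-API call that merges all summary nodes
summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, graph_def=sess.graph_def)

for step in range(max_steps):
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
    if step % 100 == 0:
        # Run summary_op and hand its output to the writer; this is the call
        # that appends new summary values to the events file.
        summary_str = sess.run(summary_op, feed_dict=feed_dict)
        summary_writer.add_summary(summary_str, step)
```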
tensorflow/g3doc/tutorials/recurrent/index.md

Lines changed: 3 additions & 3 deletions
@@ -10,7 +10,7 @@ for an introduction to recurrent neural networks and LSTMs in particular.
 
 In this tutorial we will show how to train a recurrent neural network on
 a challenging task of language modeling. The goal of the problem is to fit a
-probabilistic model which assigns probablities to sentences. It does so by
+probabilistic model which assigns probabilities to sentences. It does so by
 predicting next words in a text given a history of previous words. For this
 purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular
 benchmark for measuring quality of these models, whilst being small and
@@ -80,7 +80,7 @@ of unrolled steps.
 This is easy to implement by feeding inputs of length `num_steps` at a time and
 doing backward pass after each iteration.
 
-A simplifed version of the code for the graph creation for truncated
+A simplified version of the code for the graph creation for truncated
 backpropagation:
 
 ```python
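
The diff cuts off just before the tutorial's code block, so here is a hedged sketch of the idea in this hunk: the cell is unrolled for `num_steps` steps and a backward pass is run after each such chunk. The names (`lstm_cell`, `word_embeddings`, `batch_size`, `num_steps`) are assumptions, and the module path for the cell constructor differs across TensorFlow versions:

```python
# Hedged sketch, not the tutorial's code: unroll for num_steps and carry the
# state forward so the next chunk starts where this one ended (truncated BPTT).
state = initial_state = tf.zeros([batch_size, lstm_cell.state_size])

outputs = []
for t in range(num_steps):
    # One time step; gradients will only flow back through this unrolled chunk.
    output, state = lstm_cell(word_embeddings[:, t, :], state)
    outputs.append(output)

# Kept so the next num_steps chunk can be fed this state as its initial state.
final_state = state
```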
@@ -129,7 +129,7 @@ word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids)
 The embedding matrix will be initialized randomly and the model will learn to
 differentiate the meaning of words just by looking at the data.
 
-### Loss Fuction
+### Loss Function
 
 We want to minimize the average negative log probability of the target words:
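
For reference, the "average negative log probability of the target words" introduced by the last context line is conventionally written (this is the standard formulation, not quoted from the diff) as:

$$\mathrm{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\mathrm{target}_i}$$

where $N$ is the number of predicted words; the perplexity usually reported for PTB language models is then $e^{\mathrm{loss}}$.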
tensorflow/g3doc/tutorials/seq2seq/index.md

Lines changed: 2 additions & 2 deletions
@@ -50,7 +50,7 @@ This basic architecture is depicted below.
 Each box in the picture above represents a cell of the RNN, most commonly
 a GRU cell or an LSTM cell (see the [RNN Tutorial](../../tutorials/recurrent/index.md)
 for an explanation of those). Encoder and decoder can share weights or,
-as is more common, use a different set of parameters. Mutli-layer cells
+as is more common, use a different set of parameters. Multi-layer cells
 have been successfully used in sequence-to-sequence models too, e.g. for
 translation [Sutskever et al., 2014](http://arxiv.org/abs/1409.3215).
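
As a reminder of what one of the "boxes" in this hunk computes, here is a small NumPy sketch of a single GRU cell step, using one common sign convention for the update gate; it is illustrative only and not code from the tutorial:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU cell step: new hidden state from input x and previous state h."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde

# Tiny usage example with random parameters (input dim 3, hidden dim 4).
rng = np.random.default_rng(0)
dims = [(4, 3), (4, 4), (4,)] * 3
params = [rng.standard_normal(d) for d in dims]
h = np.zeros(4)
for x in rng.standard_normal((5, 3)):   # run 5 steps of a toy sequence
    h = gru_step(x, h, params)
print(h)
```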
@@ -203,7 +203,7 @@ sentence with a special PAD symbol. Then we'd need only one seq2seq model,
 for the padded lengths. But on shorter sentence our model would be inefficient,
 encoding and decoding many PAD symbols that are useless.
 
-As a compromise between contructing a graph for every pair of lengths and
+As a compromise between constructing a graph for every pair of lengths and
 padding to a single length, we use a number of *buckets* and pad each sentence
 to the length of the bucket above it. In `translate.py` we use the following
 default buckets.
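
To make the bucketing compromise in this hunk concrete, here is a hedged Python sketch: each (source, target) pair is padded up to the smallest bucket it fits in, so only a handful of graph shapes are needed. The bucket sizes and `PAD_ID` below are illustrative assumptions, not necessarily the defaults from `translate.py` (which the diff does not show):

```python
# Illustrative bucket sizes (source_len, target_len); the real defaults in
# translate.py may differ.
buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]
PAD_ID = 0  # assumed id of the special PAD symbol

def bucket_and_pad(source_ids, target_ids):
    """Pad a (source, target) pair up to the smallest bucket it fits in."""
    for bucket_id, (src_size, tgt_size) in enumerate(buckets):
        if len(source_ids) <= src_size and len(target_ids) <= tgt_size:
            src = source_ids + [PAD_ID] * (src_size - len(source_ids))
            tgt = target_ids + [PAD_ID] * (tgt_size - len(target_ids))
            return bucket_id, src, tgt
    raise ValueError("sentence pair longer than the largest bucket")

# Example: a 4-token source and 6-token target fall into the first bucket.
print(bucket_and_pad([3, 7, 12, 9], [5, 8, 2, 11, 4, 6]))
```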
