8000 Fix incorrect formula in Perceptron.ipynb of lesson 3 · github-dask/AI-For-Beginners@ef6570a · GitHub

Commit ef6570a

Author: caizhi.wcz (committed)
Commit message: Fix incorrect formula in Perceptron.ipynb of lesson 3
1 parent 544115f · commit ef6570a

File tree: 1 file changed (+2 −2 lines)


lessons/3-NeuralNetworks/03-Perceptron/Perceptron.ipynb

Lines changed: 2 additions & 2 deletions
@@ -166,7 +166,7 @@
 " \\end{cases} \\\\\n",
 "$$\n",
 "\n",
-"However, a generic linear model should also have a bias, i.e. ideally we should compute $y$ as $y=f(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x})+\\mathbf{b}$. To simplify our model, we can get rid of this bias term by adding one more dimension to our input features, which always equals to 1:"
+"However, a generic linear model should also have a bias, i.e. ideally we should compute $y$ as $y=f(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x}+\\mathbf{b})$. To simplify our model, we can get rid of this bias term by adding one more dimension to our input features, which always equals to 1:"
 ]
 },
 {
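The bias trick the corrected line describes (moving $\mathbf{b}$ inside $f(\cdot)$ and then absorbing it into the weights via a constant-1 feature) can be sketched in NumPy; the feature values below are purely illustrative:

```python
import numpy as np

# Toy feature matrix (values are illustrative): 4 samples, 2 features each.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])

# Instead of computing f(w^T x + b), append a constant feature 1 to every
# sample so the bias becomes just another weight:
#   f(w'^T x')  with  w' = [w; b],  x' = [x; 1].
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])

print(X_aug.shape)  # (4, 3)
print(X_aug[0])     # [1. 2. 1.]
```

With this augmentation, training only ever has to learn a single weight vector, which is exactly why the notebook drops the explicit bias term.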
@@ -215,7 +215,7 @@
 " \n",
 "We will use the process of **gradient descent**. Starting with some initial random weights $\\mathbf{w}^{(0)}$, we will adjust weights on each step of the training using the gradient of $E$:\n",
 "\n",
-"$$\\mathbf{w}^{\\tau + 1}=\\mathbf{w}^{\\tau} - \\eta \\nabla E(\\mathbf{w}) = \\mathbf{w}^{\\tau} + \\eta \\mathbf{x}_{n} t_{n}$$\n",
+"$$\\mathbf{w}^{\\tau + 1}=\\mathbf{w}^{\\tau} - \\eta \\nabla E(\\mathbf{w}) = \\mathbf{w}^{\\tau} + \\eta\\sum_{n \\in \\mathcal{M}}\\mathbf{x}_{n} t_{n}$$\n",
 "\n",
 "where $\\eta$ is a **learning rate**, and $\\tau\\in\\mathbb{N}$ - number of iteration.\n",
 "\n",

0 commit comments
