No duplicate chunk names · codingbooks/tidy-text-mining@03ae263 · GitHub

Commit 03ae263

No duplicate chunk names
1 parent 92cac72 commit 03ae263

File tree

1 file changed: +2 -2 lines changed

07-tweet-archives.Rmd

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ One type of text that gets plenty of attention is text shared online via Twitter
 
 An individual can download their own Twitter archive by following [directions available on Twitter's website](https://support.twitter.com/articles/20170160). We each downloaded ours and will now open them up. Let's use the lubridate package to convert the string timestamps to date-time objects and initially take a look at our tweeting patterns overall (Figure \@ref(fig:setup)).
 
-```{r setup, fig.width=7, fig.height=7, fig.cap="All tweets from our accounts"}
+```{r tweets, fig.width=7, fig.height=7, fig.cap="All tweets from our accounts"}
 library(lubridate)
 library(ggplot2)
 library(dplyr)
@@ -51,7 +51,7 @@ In the call to `unnest_tokens()`, we unnest using the specialized `"tweets"` tokenizer
 
 Because we have kept text such as hashtags and usernames in the dataset, we can't use a simple `anti_join()` to remove stop words. Instead, we can take the approach shown in the `filter()` line that uses `str_detect()` from the stringr package.
 
-```{r tidytweets, dependson = "setup"}
+```{r tidytweets, dependson = "tweets"}
 library(tidytext)
 library(stringr)
 
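Why this rename matters: knitr refuses to knit a document containing two chunks with the same label, and the `dependson` chunk option refers to other chunks by label, so renaming the `setup` chunk to `tweets` also means updating the downstream `dependson = "setup"` reference; those are exactly the two lines changed above. (The prose's `\@ref(fig:setup)` figure reference is also derived from the chunk label and is not updated in this commit.) A minimal sketch of the label and `dependson` pattern (the chunk bodies here are placeholders, not the book's code):

```{r tweets, cache = TRUE}
# Upstream chunk: builds the object that later chunks use.
tweets <- data.frame(text = c("a tweet", "another tweet"))
```

```{r tidytweets, cache = TRUE, dependson = "tweets"}
# dependson must name the upstream chunk's label exactly; when the
# cached "tweets" chunk changes, this chunk's cache is invalidated too.
nrow(tweets)
```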
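The renamed `tweets` chunk loads lubridate, ggplot2, and dplyr; per the surrounding prose, it converts the archive's string timestamps to date-time objects and plots overall tweeting patterns. A sketch of that conversion, assuming a made-up `raw_tweets` data frame with a `timestamp` column standing in for the downloaded archives:

```r
library(lubridate)
library(ggplot2)
library(dplyr)

# Hypothetical stand-in for the CSV inside a downloaded Twitter archive.
raw_tweets <- tibble(
  timestamp = c("2016-01-01 12:30:05", "2016-06-15 08:01:44")
)

# ymd_hms() parses the character timestamps into POSIXct date-times.
tweets <- raw_tweets %>%
  mutate(timestamp = ymd_hms(timestamp))

# Overall tweeting pattern: tweet counts binned over time.
ggplot(tweets, aes(x = timestamp)) +
  geom_histogram(bins = 20)
```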
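The `tidytweets` chunk is where the stop word handling described above happens: tokens such as hashtags and @usernames were deliberately kept, so instead of a plain `anti_join()` the words are filtered with a negated `%in%` against tidytext's `stop_words` lexicon plus a `str_detect()` test. A sketch of that `filter()` line, with made-up sample tokens and an assumed `"[a-z]"` pattern (the exact pattern in the book may differ):

```r
library(tidytext)
library(stringr)
library(dplyr)

# Hypothetical tokens as a tweet-aware tokenizer might leave them:
# hashtags and usernames survive as single tokens.
tidy_tweets <- tibble(word = c("#rstats", "@juliasilge", "the", "love", "2016"))

tidy_tweets %>%
  filter(!word %in% stop_words$word,   # drops ordinary stop words ("the")
         str_detect(word, "[a-z]"))    # drops tokens with no letters ("2016")
```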