Fix My English in README · potatochip/corenlp-python@12d4134 · GitHub


Commit 12d4134

Fix My English in README
1 parent 33888c3 commit 12d4134

File tree

1 file changed: +5 −5 lines

README.md

Lines changed: 5 additions & 5 deletions
@@ -1,7 +1,7 @@
 # A Python wrapper for the Java Stanford Core NLP tools
 ---------------------------
 
-This is a fork of Dustin Smith's [stanford-corenlp-python](https://github.com/dasmith/stanford-corenlp-python). A Python interface to [Stanford CoreNLP](http://nlp.stanford.edu/software/corenlp.shtml). It can either be python package, or run as a JSON-RPC server.
+This is a fork of Dustin Smith's [stanford-corenlp-python](https://github.com/dasmith/stanford-corenlp-python), a Python interface to [Stanford CoreNLP](http://nlp.stanford.edu/software/corenlp.shtml). It can be used either as a Python package or as a JSON-RPC server.
 
 ## Edited
 * Update to Stanford CoreNLP v3.2.0
@@ -24,7 +24,7 @@ To use this program you must [download](http://nlp.stanford.edu/software/corenlp
 
 In other words:
 
-sudo pip install jsonrpclib pexpect unidecode # unidecode is optional
+sudo pip install pexpect unidecode jsonrpclib # jsonrpclib is optional
 git clone https://bitbucket.org/torotoki/corenlp-python.git
 cd corenlp-python
 wget http://nlp.stanford.edu/software/stanford-corenlp-full-2013-06-20.zip
@@ -120,9 +120,9 @@ Not to use JSON-RPC, load the module instead:
 from corenlp import StanfordCoreNLP
 corenlp_dir = "stanford-corenlp-full-2013-06-20/"
 corenlp = StanfordCoreNLP(corenlp_dir) # wait a few minutes...
-corenlp.parse("Parse it")
+corenlp.raw_parse("Parse it")
 
-If you need to parse long texts (more than 30-50 sentences), you have to use a batch_parse() function. It reads text files from input directory and returns a generator object of dictionaries parsed each file results:
+If you need to parse long texts (more than 30-50 sentences), you must use the `batch_parse` function. It reads text files from an input directory and returns a generator of dictionaries with the parse results for each file:
 
 from corenlp import batch_parse
 corenlp_dir = "stanford-corenlp-full-2013-06-20/"
@@ -134,7 +134,7 @@ The function uses XML output feature of Stanford CoreNLP, and you can take all i
 
 parsed = batch_parse(raw_text_directory, corenlp_dir, raw_output=True)
 
-(note: The function requires xmltodict now, you must install it by `sudo pip install xmltodict`)
+(note: the function now requires xmltodict; you should install it with `sudo pip install xmltodict`)
 
 ## Developer
 * Hiroyoshi Komatsu [hiroyoshi.komat@gmail.com]
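
Taken together, the snippets touched by this diff cover both ways of using the package directly. The sketch below simply stitches them into one script; it is a minimal sketch based only on the snippets above, and `raw_text_directory`, the `print` calls, and the combined import line are illustrative assumptions rather than part of the commit. The structure of the returned dictionaries is not spelled out in this diff.

```python
# Minimal sketch combining the snippets shown in this diff.
# Assumptions: the CoreNLP distribution has been unzipped into
# corenlp_dir, and raw_text_directory is a placeholder folder of
# plain-text files; neither name comes from the commit itself.
from corenlp import StanfordCoreNLP, batch_parse

corenlp_dir = "stanford-corenlp-full-2013-06-20/"

# Package usage: start the parser (slow, since it loads the CoreNLP
# models), then parse one string at a time with raw_parse.
corenlp = StanfordCoreNLP(corenlp_dir)  # wait a few minutes...
result = corenlp.raw_parse("Parse it")
print(result)

# Long texts (more than 30-50 sentences): batch_parse reads every text
# file in raw_text_directory and yields one parsed result per file.
raw_text_directory = "sample_raw_text/"
for parsed_file in batch_parse(raw_text_directory, corenlp_dir):
    print(parsed_file)
```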
