Minor README update · JHnlp/stanford-corenlp-python@160440e · GitHub

Commit 160440e

Minor README update
1 parent 65bbde6 commit 160440e

File tree

1 file changed: +5 −5 lines changed

README.md

Lines changed: 5 additions & 5 deletions
@@ -3,7 +3,7 @@
 This is a Python wrapper for Stanford University's NLP group's Java-based [CoreNLP tools](http://nlp.stanford.edu/software/corenlp.shtml). It can either be imported as a module or run as a JSON-RPC server. Because it uses many large trained models (requiring 3GB RAM on 64-bit machines and usually a few minutes loading time), most applications will probably want to run it as a server.

-* Python interface to Stanford CoreNLP tools: tagging, phrase-structure parsing, dependency parsing, named entity resolution, and coreference resolution.
+* Python interface to Stanford CoreNLP tools: tagging, phrase-structure parsing, dependency parsing, [named-entity resolution](http://en.wikipedia.org/wiki/Named-entity_recognition), and [coreference resolution](http://en.wikipedia.org/wiki/Coreference).
 * Runs a JSON-RPC server that wraps the Java server and outputs JSON.
 * Outputs parse trees which can be used by [nltk](http://nltk.googlecode.com/svn/trunk/doc/howto/tree.html).
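The `parsetree` values this wrapper returns are Penn-style bracketed strings, which is why they plug straight into nltk's `Tree` type. As a rough standalone illustration of the format (a sketch, not part of the wrapper; `parse_tree` is a hypothetical helper name, and nltk's `Tree.fromstring` does the same job):

```python
# Minimal sketch of reading a Penn-bracketed parse string such as the
# `parsetree` values this wrapper returns. Each node becomes a
# (label, children) pair; leaf tokens stay plain strings.
def parse_tree(s):
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def read(i):
        # tokens[i] must be "(", followed by the node label, then children
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
            else:
                child, i = tokens[i], i + 1
            children.append(child)
        return (label, children), i + 1

    tree, _ = read(0)
    return tree

tree = parse_tree("(ROOT (S (VP (NP (INTJ (UH Hello)) (NP (NN world)))) (. !)))")
print(tree[0])  # root label: ROOT
```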

@@ -42,7 +42,7 @@ Assuming you are running on port 8080, the code in `client.py` shows an example
     result = loads(server.parse("Hello world. It is so beautiful"))
     print "Result", result

-That returns a dictionary containing the keys `sentences` and (when applicable) `corefs`. The key `sentences` contains a list of dictionaries for each sentence, which contain `parsetree`, `text`, `tuples` containing the dependencies, and `words`, containing information about parts of speech, NER, etc:
+That returns a dictionary containing the keys `sentences` and `coref`. The key `sentences` contains a list of dictionaries for each sentence, which contain `parsetree`, `text`, `tuples` containing the dependencies, and `words`, containing information about parts of speech, recognized named entities, etc:

     {u'sentences': [{u'parsetree': u'(ROOT (S (VP (NP (INTJ (UH Hello)) (NP (NN world)))) (. !)))',
      u'text': u'Hello world!',
@@ -104,13 +104,13 @@ That returns a dictionary containing the keys `sentences` and (when applicable)
      u'PartOfSpeech': u'.'}]]}],
  u'coref': [[[[u'It', 1, 0, 0, 1], [u'Hello world', 0, 1, 0, 2]]]]}

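The result above is plain dictionaries and lists, so it can be navigated with ordinary indexing. A sketch using a trimmed-down literal copied from the sample output (the `words` attribute dicts are abbreviated here to `PartOfSpeech` only):

```python
# Trimmed copy of the sample result above; only the fields used below are
# kept, and each word's attribute dict is reduced to PartOfSpeech.
result = {
    "sentences": [
        {"parsetree": "(ROOT (S (VP (NP (INTJ (UH Hello)) (NP (NN world)))) (. !)))",
         "text": "Hello world!",
         "words": [["Hello", {"PartOfSpeech": "UH"}],
                   ["world", {"PartOfSpeech": "NN"}],
                   ["!", {"PartOfSpeech": "."}]]},
    ],
    "coref": [[[["It", 1, 0, 0, 1], ["Hello world", 0, 1, 0, 2]]]],
}

# One (token, POS) pair per word of the first sentence.
pos_tags = [(word, attrs["PartOfSpeech"])
            for word, attrs in result["sentences"][0]["words"]]
print(pos_tags)  # [('Hello', 'UH'), ('world', 'NN'), ('!', '.')]
```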
-To use it in a regular script or to edit/debug it (because errors via RPC are opaque), load the module instead:
+To use it in a regular script (useful for debugging), load the module instead:

     from corenlp import *
     corenlp = StanfordCoreNLP()  # wait a few minutes...
     corenlp.parse("Parse this sentence.")

-The server, `StanfordCoreNLP()`, takes an optional argument `corenlp_path` which specifies the relative path to the jar files. The default value is `StanfordCoreNLP(corenlp_path="./stanford-corenlp-full-2014-08-27/")`.
+The server, `StanfordCoreNLP()`, takes an optional argument `corenlp_path` which specifies the path to the jar files. The default value is `StanfordCoreNLP(corenlp_path="./stanford-corenlp-full-2014-08-27/")`.

 ## Coreference Resolution

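The `coref` value in the sample output can be unpacked the same way. A sketch, assuming each chain is a list of (mention, antecedent) pairs whose first element is the mention text; the meaning of the numeric fields (sentence index and token span) is an assumption, not spelled out in this diff:

```python
# Copied from the sample output above. Assumed structure: a list of chains,
# each chain a list of [mention, antecedent] pairs; mention[0] is the text,
# and the numeric fields are assumed to be sentence/token indices.
coref = [[[["It", 1, 0, 0, 1], ["Hello world", 0, 1, 0, 2]]]]

links = []
for chain in coref:
    for mention, antecedent in chain:
        links.append((mention[0], antecedent[0]))
print(links)  # [('It', 'Hello world')]
```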
@@ -158,4 +158,4 @@ I gratefully welcome bug fixes and new features. If you have forked this reposi
158158

159159
## Related Projects
160160

161-
Maintainers of the Core NLP library at Stanford keep an [updated list of wrappers and extensions](http://nlp.stanford.edu/software/corenlp.shtml#Extensions).
161+
Maintainers of the Core NLP library at Stanford keep an [updated list of wrappers and extensions](http://nlp.stanford.edu/software/corenlp.shtml#Extensions). See Brendan O'Connor's [https://github.com/brendano/stanford_corenlp_pywrapper] for a different socket-based approach.

0 commit comments