This is a Python wrapper for Stanford University's NLP group's Java-based [CoreNLP tools](http://nlp.stanford.edu/software/corenlp.shtml).

It requires [pexpect](http://www.noah.org/wiki/pexpect) and (optionally) [unidecode](http://pypi.python.org/pypi/Unidecode) to handle non-ASCII text. This script includes and uses code from [jsonrpc](http://www.simple-is-better.org/rpc/) and [python-progressbar](http://code.google.com/p/python-progressbar/).
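
As a quick illustration of what unidecode does with non-ASCII input (this snippet is not part of the wrapper itself):

    # unidecode transliterates non-ASCII text into a plain-ASCII approximation
    from unidecode import unidecode

    print(unidecode(u"Kožušček déjà vu"))  # -> 'Kozuscek deja vu'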

This is a simple script I created after having problems using other Python wrappers for Stanford's dependency parser. First, the JPype approach used in [stanford-parser-python](http://projects.csail.mit.edu/spatial/Stanford_Parser) had trouble initializing a JVM on two separate computers. Next, I discovered I could not use a [Jython solution](http://blog.gnucom.cc/2010/using-the-stanford-parser-with-jython/) because the Python modules I needed did not work in Jython.

It runs the Stanford CoreNLP jar in a separate process, communicates with the Java process through its command-line interface, and makes assumptions about the parser's output format in order to parse it into a Python dict and transfer it as JSON. The parser will break if that output changes significantly, but it has been tested on **CoreNLP tools version 1.3.1**, released 2012-04-09.
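
To make that concrete, here is a minimal sketch of the idea, assuming the 2012-04-09 jar directory and CoreNLP's interactive `NLP>` prompt (the path, sentence, and timeout are illustrative assumptions, not the wrapper's exact code):

    import pexpect

    # Java expands the "*" classpath wildcard itself (Java 6+); no shell needed.
    child = pexpect.spawn(
        'java -cp "stanford-corenlp-2012-04-09/*" '
        'edu.stanford.nlp.pipeline.StanfordCoreNLP')
    child.expect("NLP> ", timeout=600)  # loading the models can take a while
    child.sendline("Stanford University is located in California.")
    child.expect("NLP> ")               # annotation appears before the next prompt
    raw_output = child.before           # raw text to scrape into a Python dict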

## Download and Usage

In other words:

    sudo pip install pexpect unidecode   # unidecode is optional