readme update · afcarl/stanford-corenlp-python@668289e · GitHub
Commit 668289e

readme update

1 parent c286d62

File tree

2 files changed: +28 −16 lines changed

README.md

Lines changed: 23 additions & 15 deletions
```diff
@@ -43,26 +43,26 @@ Assuming you are running on port 8080, the code in `client.py` shows an example
 
 Produces a list with a parsed dictionary for each sentence:
 
-    Result [{"text": "hello world",
-             "tuples": [("amod", "world", "hello")],
-             "words": {"world": {"NamedEntityTag": "O",
-                                 "CharacterOffsetEnd": "11",
-                                 "Lemma": "world",
-                                 "PartOfSpeech": "NN",
-                                 "CharacterOffsetBegin": "6"},
-                       "hello": {"NamedEntityTag": "O",
-                                 "CharacterOffsetEnd": "5",
-                                 "Lemma": "hello",
-                                 "PartOfSpeech": "JJ",
-                                 "CharacterOffsetBegin": "0"}}}]
-
-
-To use it in a regular script or to edit/debug, load the module instead:
+    Result [{'text': 'hello world',
+             'tuples': [['amod', 'world', 'hello']],
+             'words': [['hello', {'NamedEntityTag': 'O', 'CharacterOffsetEnd': '5', 'CharacterOffsetBegin': '0', 'PartOfSpeech': 'JJ', 'Lemma': 'hello'}],
+                       ['world', {'NamedEntityTag': 'O', 'CharacterOffsetEnd': '11', 'CharacterOffsetBegin': '6', 'PartOfSpeech': 'NN', 'Lemma': 'world'}]]}]
+
+To use it in a regular script or to edit/debug (since errors via RPC are opaque), load the module instead:
 
     from corenlp import *
     corenlp = StanfordCoreNLP()
     corenlp.parse("Parse an imperative sentence, damnit!")
 
+I also added a function called **parse_imperative** that introduces a dummy pronoun to overcome the problems that dependency parsers have with imperative statements.
+
+    corenlp.parse("stop smoking")
+    >> [{"text": "stop smoking", "tuples": [["nn", "smoking", "stop"]], "words": [["stop", {"NamedEntityTag": "O", "CharacterOffsetEnd": "4", "Lemma": "stop", "PartOfSpeech": "NN", "CharacterOffsetBegin": "0"}], ["smoking", {"NamedEntityTag": "O", "CharacterOffsetEnd": "12", "Lemma": "smoking", "PartOfSpeech": "NN", "CharacterOffsetBegin": "5"}]]}]
+
+    corenlp.parse_imperative("stop smoking")
+    >> [{"text": "stop smoking", "tuples": [["xcomp", "stop", "smoking"]], "words": [["stop", {"NamedEntityTag": "O", "CharacterOffsetEnd": "8", "Lemma": "stop", "PartOfSpeech": "VBP", "CharacterOffsetBegin": "4"}], ["smoking", {"NamedEntityTag": "O", "CharacterOffsetEnd": "16", "Lemma": "smoke", "PartOfSpeech": "VBG", "CharacterOffsetBegin": "9"}]]}]
+
 
 <!--
 ## Adding WordNet
```
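The dummy-pronoun trick described in the added README text can be sketched roughly as follows. This is an illustrative assumption about how such a function might work, not the wrapper's actual implementation; the helper name, the choice of "You" as the dummy token, and the cleanup details are all hypothetical.

```python
# Hypothetical sketch of a dummy-pronoun workaround for imperatives:
# prepend a subject so the dependency parser sees a full clause
# ("You stop smoking") instead of a noun phrase, then drop the dummy
# token from the returned structure. Not the wrapper's real code.

def parse_imperative_sketch(parse, sentence, dummy="You"):
    result = parse(dummy + " " + sentence)
    for s in result:
        s["text"] = sentence  # report the original sentence, not the padded one
        # drop the dummy token and any dependency tuples that mention it
        s["words"] = [w for w in s["words"] if w[0] != dummy]
        s["tuples"] = [t for t in s["tuples"] if dummy not in t[1:]]
    return result
```

Note that in the README's example output above, the character offsets of `parse_imperative` still reflect the padded sentence, so this sketch deliberately leaves the offsets untouched.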
```diff
@@ -74,3 +74,11 @@ Download WordNet-3.0 Prolog: http://wordnetcode.princeton.edu/3.0/WNprolog-3.0.
 If you think there may be a problem with this wrapper, first ensure you can run the Java program:
 
     java -cp stanford-corenlp-2010-11-12.jar:stanford-corenlp-models-2010-11-06.jar:xom-1.2.6.jar:xom.jar:jgraph.jar:jgrapht.jar -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP -props default.properties
+
+
+# TODO
+
+- Parse and resolve coreferences
+- Mutex on parser
+- have pyexpect eat up dead chars after timeout (before next parse after a timeout)
+
```
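The "Mutex on parser" TODO matters because the wrapper drives a single Java process through pexpect, so concurrent `parse()` calls would interleave their I/O on the same pipe. A minimal sketch of one way to serialize access; the class shape and lock placement here are assumptions, not committed code:

```python
import threading

class LockedParser(object):
    """Hypothetical wrapper that serializes access to one parser process.

    Only one thread at a time may talk to the underlying pexpect-driven
    Java process; others block on the mutex until it is free.
    """

    def __init__(self, parser):
        self._parser = parser
        self._lock = threading.Lock()

    def parse(self, text):
        # Hold the lock for the full request/response round trip so the
        # process's input and output are never interleaved across threads.
        with self._lock:
            return self._parser.parse(text)
```

A coarse per-process lock like this trades throughput for safety; a pool of parser processes would be the next step if parallel parsing were needed.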

client.py

Lines changed: 5 additions & 1 deletion
```diff
@@ -1,7 +1,11 @@
 import jsonrpc
+from simplejson import loads
 server = jsonrpc.ServerProxy(jsonrpc.JsonRpc20(),
                              jsonrpc.TransportTcpIp(addr=("127.0.0.1", 8080)))
 
 # call a remote-procedure
-result = server.parse("hello world")
+result = loads(server.parse("hello world"))
 print "Result", result
+~
+
+
```
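The `loads` change above is needed because the JSON-RPC server returns the parse serialized as a JSON string, not as Python objects. A small illustration using the stdlib `json` module (`simplejson.loads` behaves the same way; the sample string is just the README's example result):

```python
import json

# The server hands back the parse as a raw JSON string...
raw = '[{"text": "hello world", "tuples": [["amod", "world", "hello"]]}]'

# ...so the client must decode it before indexing into the result.
result = json.loads(raw)
print(result[0]["text"])          # -> hello world
print(result[0]["tuples"][0][0])  # -> amod
```

Without the decode step, `result` would be a plain string and expressions like `result[0]["text"]` would fail.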
