
Paper: Modeling Syntactic Knowledge With Neuro-Symbolic Computation

Authors: Hilton Alers-Valentín 1; Sandiway Fong 2; and J. Vega-Riveros 3

Affiliations: 1 Linguistics and Cognitive Science, University of Puerto Rico-Mayagüez, Puerto Rico; 2 Department of Linguistics, University of Arizona-Tucson, U.S.A.; 3 Electrical and Computer Engineering, University of Puerto Rico-Mayagüez, Puerto Rico

Keyword(s): Minimalist Syntax, Parser, Lexicon, Structural Ambiguity, Cognitive Modeling, Computational Linguistics, Natural Language Processing, Symbolic Computation, Neural Networks, Explainable Artificial Intelligence.

Abstract: To overcome the limitations of prevailing NLP methods, a Hybrid-Architecture Symbolic Parser and Neural Lexicon (HASPNeL) system is proposed to detect structural ambiguity by producing as many syntactic representations as there are interpretations for an utterance. HASPNeL comprises a symbolic-AI feature-unification parser, a lexicon generated through manual classification and machine learning, and a neural network encoder which tags each lexical item in a synthetic corpus and estimates likelihoods for each utterance's interpretation with respect to the corpus. Language variation is accounted for by lexical adjustments in feature specifications and minimal parameter settings. Unlike purely probabilistic systems, HASPNeL's neuro-symbolic architecture will perform grammaticality judgements of utterances that do not correspond to the rankings of probabilistic systems; have a greater degree of system stability, as it is not susceptible to perturbations in the training data; detect lexical and structural ambiguity by producing all possible grammatical representations regardless of their presence in the training data; eliminate the effects of diminishing returns, as it does not require massive amounts of annotated data, which are unavailable for underrepresented languages; avoid overparameterization and potential overfitting; test current syntactic theory by implementing a Minimalist grammar formalism; and model human language competence by satisfying conditions of learnability, evolvability, and universality.
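As a rough illustration of two ideas summarized in the abstract (feature unification and the enumeration of all readings of a structurally ambiguous string), the short Python sketch below unifies flat feature structures and lists both attachments of a classic PP-attachment ambiguity. It is a minimal, hypothetical example under assumed data structures, not the HASPNeL parser itself; all function and variable names are invented for illustration.

    # Illustrative sketch only -- NOT the HASPNeL implementation described in the paper.

    def unify(f1: dict, f2: dict):
        """Unify two flat feature structures; return None on a feature clash."""
        result = dict(f1)
        for key, value in f2.items():
            if key in result and result[key] != value:
                return None  # clash, e.g. number disagreement
            result[key] = value
        return result

    # Agreement succeeds: singular determiner with a singular noun.
    print(unify({"cat": "D", "num": "sg"}, {"num": "sg"}))   # {'cat': 'D', 'num': 'sg'}
    # Agreement fails: plural determiner with a singular noun.
    print(unify({"cat": "D", "num": "pl"}, {"num": "sg"}))   # None

    def pp_attachments(verb: str, obj: str, pp: str):
        """Enumerate both readings of a V NP PP string (structural ambiguity)."""
        yield (verb, (obj, pp))   # PP modifies the object NP
        yield ((verb, obj), pp)   # PP modifies the VP

    # "I saw the man with the telescope": two representations, one per interpretation.
    for parse in pp_attachments("saw", "the man", "with the telescope"):
        print(parse)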

CC BY-NC-ND 4.0

Paper citation in several formats:
Alers-Valentín, H.; Fong, S. and Vega-Riveros, J. (2023). Modeling Syntactic Knowledge With Neuro-Symbolic Computation. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-623-1; ISSN 2184-433X, SciTePress, pages 608-616. DOI: 10.5220/0011718500003393

@conference{icaart23,
author={Hilton Alers{-}Valentín and Sandiway Fong and J. Vega{-}Riveros},
title={Modeling Syntactic Knowledge With Neuro-Symbolic Computation},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
year={2023},
pages={608-616},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011718500003393},
isbn={978-989-758-623-1},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - Modeling Syntactic Knowledge With Neuro-Symbolic Computation
SN - 978-989-758-623-1
IS - 2184-433X
AU - Alers-Valentín, H.
AU - Fong, S.
AU - Vega-Riveros, J.
PY - 2023
SP - 608
EP - 616
DO - 10.5220/0011718500003393
PB - SciTePress