NAACL 2019 https://github.com/wabyking/qnn https://mp.weixin.qq.com/s/NbLnGQD4TjlbMGOXbfa8pg
query is the Wikipedia title concatenated with section titles; the answer is the Wikipedia page text
query_0 -> multiple reformulators -> search results -> aggregator
use RL to train reformulator agents (state is the retrieved documents; reward r is )
aggregator as meta-agent
multiple word senses reside in linear superposition within the word embedding, and simple sparse coding can recover vectors that approximately capture the senses.
a polysemous word's vector is a linear combination of the vectors of its monosemous senses
linearity assertion: linear structure emerges from a highly nonlinear embedding process
word sense induction via sparse coding (see the sketch after this block)
generative model: a random walk across micro-topics (discourses)
there exists a linear relationship between the vector of a word and the vectors of the words in its contexts
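A minimal sketch of that sparse-coding recovery with scikit-learn's dictionary learning; the random matrix stands in for real word embeddings, and the atom count and sparsity level here are toy values (the paper uses on the order of 2000 atoms with about 5 nonzero coefficients per word).

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))     # stand-in for real word embeddings (rows = words)

# Each word vector is approximated as a sparse combination of a few learned
# "discourse atoms"; the atoms act as candidate sense vectors.
dl = DictionaryLearning(n_components=40, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, random_state=0)
codes = dl.fit_transform(X)         # sparse coefficients: which atoms a word uses
atoms = dl.components_              # (40, 100) candidate sense/discourse vectors
print(codes.shape, (codes != 0).sum(axis=1).mean())  # ~5 active atoms per word
```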
tree-to-tree transduction method for sentence compression
entity link: https://spaces.ac.cn/archives/6919
LAMBADA dataset: long-range dependencies; predict the last word of a passage
wikitext-2 train: "https://s3.amazonaws.com/datasets.huggingface.co/wikitext-2/train.txt", valid: "https://s3.amazonaws.com/datasets.huggingface.co/wikitext-2/valid.txt"
wikitext-103 train: "https://s3.amazonaws.com/datasets.huggingface.co/wikitext-103/wiki.train.tokens", valid: "https://s3.amazonaws.com/datasets.huggingface.co/wikitext-103/wiki.valid.tokens"
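A quick way to pull and inspect these files, assuming the S3 URLs above still resolve:

```python
import requests

# URLs copied verbatim from the notes above.
urls = {
    "wikitext-2/train.txt": "https://s3.amazonaws.com/datasets.huggingface.co/wikitext-2/train.txt",
    "wikitext-103/wiki.valid.tokens": "https://s3.amazonaws.com/datasets.huggingface.co/wikitext-103/wiki.valid.tokens",
}
for name, url in urls.items():
    text = requests.get(url).text
    print(name, f"{len(text.split()):,} whitespace tokens")
```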
average length: 300
SQuAD 2.0: unanswerable questions
[AI2 Elementary School Science Questions] 1,080 questions
[NTCIR QA Lab] university entrance exams
[CLEF QA Track]
- arc classification
- arc prediction (binary)
Preposition supersense disambiguation
STREUSLE 4.0 corpus
Grammatical error detection
Document-level Generation and Translation
head and dependent
the structure must be connected, have a designated root node, and be acyclic or planar.
each word has a single head.
there is a single root node from which one can follow a unique directed path to each of the words in the sentence.
an arc is projective if there is a path from the head to every word that lies between the head and the dependent in the sentence.
a tree is projective if it can be drawn with no crossing edges (see the sketch below).
non-projective structures are seen in languages with flexible word order.
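A small check of the crossing-edges definition, assuming a heads-array representation (heads[d] is the head index of word d, with 0 as ROOT):

```python
def is_projective(heads):
    """heads[d] = head of word d; index 0 is ROOT (heads[0] ignored).
    A tree is projective iff no two arcs cross when drawn above the sentence."""
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads) if d != 0]
    for i, (l1, r1) in enumerate(arcs):
        for l2, r2 in arcs[i + 1:]:
            # two arcs cross iff exactly one endpoint lies strictly inside the other
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                return False
    return True

print(is_projective([0, 2, 0, 2]))      # True: arcs 2->1, ROOT->2, 2->3
print(is_projective([0, 3, 4, 0, 3]))   # False: arcs (1,3) and (2,4) cross
```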
Translation, text, audio
Graph-based dependency parsing
score a tree as the sum of its edge weights
find the maximum spanning tree; greedily picking the best incoming edge per vertex can leave cycles
to eliminate cycles, Chu-Liu/Edmonds (1965, 1967), sketched in code below:
for each vertex, the incoming edge with maximum weight is chosen
subtract the maximum incoming weight from each of a vertex's incoming edges
select a cycle and collapse it into a single new node
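A sketch of those three steps, assuming a dense score matrix with node 0 as ROOT; the per-vertex weight subtraction appears as the "gain" used when contracting a cycle.

```python
import numpy as np

def find_cycle(heads):
    """Return the set of nodes on a cycle of the head-pointer graph, or None."""
    n = len(heads)
    color = [0] * n          # 0 = unvisited, 1 = on current path, 2 = done
    color[0] = 2             # ROOT is never part of a cycle
    for start in range(1, n):
        if color[start]:
            continue
        path, u = [], start
        while color[u] == 0:
            color[u] = 1
            path.append(u)
            u = heads[u]
        if color[u] == 1:                      # walked back onto the current path
            return set(path[path.index(u):])
        for v in path:
            color[v] = 2
    return None

def chu_liu_edmonds(score):
    """Maximum spanning arborescence rooted at node 0.
    score[h, d] = weight of arc h -> d; returns heads[d] for every node."""
    n = score.shape[0]
    score = score.astype(float).copy()
    np.fill_diagonal(score, -np.inf)           # no self-loops
    score[:, 0] = -np.inf                      # no arcs into ROOT
    heads = np.argmax(score, axis=0)           # step 1: best incoming arc per node
    heads[0] = 0
    cycle = find_cycle(heads)
    if cycle is None:
        return heads
    cyc = sorted(cycle)
    rest = [u for u in range(n) if u not in cycle]
    idx = {u: i for i, u in enumerate(rest)}   # contracted cycle gets index m
    m = len(rest)
    new_score = np.full((m + 1, m + 1), -np.inf)
    for a in rest:
        for b in rest:
            new_score[idx[a], idx[b]] = score[a, b]
    best_in, best_out = {}, {}
    for u in rest:
        # step 2: entering the cycle at v is worth its gain over the cycle arc into v
        gains = [score[u, v] - score[heads[v], v] for v in cyc]
        j = int(np.argmax(gains))
        new_score[idx[u], m], best_in[u] = gains[j], cyc[j]
        vals = [score[v, u] for v in cyc]      # best arc leaving the cycle toward u
        j = int(np.argmax(vals))
        new_score[m, idx[u]], best_out[u] = vals[j], cyc[j]
    sub = chu_liu_edmonds(new_score)           # step 3: recurse on contracted graph
    result = heads.copy()
    for u in rest:
        h = sub[idx[u]]
        result[u] = best_out[u] if h == m else rest[h]
    enter = rest[sub[m]]                       # external head chosen for the cycle
    result[best_in[enter]] = enter             # break exactly one cycle arc
    return result

S = np.array([[0.,  5,  1, 1],
              [0.,  0, 10, 5],
              [0., 10,  0, 5],
              [0.,  8,  8, 0]])
print(chu_liu_edmonds(S))   # [0 0 1 1]: ROOT -> w1, w1 -> w2, w1 -> w3
```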
higher-order dependency parsing
Zhang and McDonald 2012
n vectors chosen completely at random from a unit Gaussian distribution in ℝ^m: if m ≫ n, with high probability the result is an approximate Pythagorean embedding. The reason is that in high dimensions, (1) vectors drawn from a unit Gaussian distribution have length very close to 1 with high probability; and (2) when m ≫ n, a set of n unit Gaussian vectors will likely be close to mutually orthogonal. Initialize with a completely random tree embedding, and in addition pick a special random vector for each vertex; then at each step, move each child node so that it is closer to its parent's location plus the child's special vector.
> [A Structural Probe for Finding Syntax in Word Representations](https://nlp.stanford.edu/pubs/hewitt2019structural.pdf) https://github.com/john-hewitt/structural-probes
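A quick numerical check of claims (1) and (2); drawing each coordinate as N(0, 1/m) is an assumption chosen so the expected squared length is 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 4096                                      # m >> n
V = rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))   # coordinates N(0, 1/m)

norms = np.linalg.norm(V, axis=1)
print(norms.mean(), norms.std())                     # (1) lengths concentrate near 1
dots = (V @ V.T)[~np.eye(n, dtype=bool)]
print(np.abs(dots).max())                            # (2) dot products near 0
# Pythagorean identity for (near-)orthogonal unit vectors: ||u - v||^2 ~ 2
print(np.linalg.norm(V[0] - V[1]) ** 2)
```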
The idea in Gibbs sampling is to generate posterior samples by sweeping through each variable (or block of variables) to sample from its conditional distribution with the remaining variables fixed to their current values.
namesake (Markov 1906)
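A minimal sketch of that sweep on a toy target: a standard bivariate Gaussian with correlation rho, chosen because each full conditional is itself Gaussian, x | y ~ N(rho*y, 1 - rho^2).

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
    """Alternate sampling x | y and y | x for a standard bivariate normal."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    out = np.empty((n_samples, 2))
    for t in range(n_samples):
        x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))  # sample x | y
        y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))  # sample y | x
        out[t] = x, y
    return out

draws = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(draws[1000:].T))  # after burn-in, correlation is close to 0.8
```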
Sutskever et al. 2011
Mikolov et al. 2012, Sennrich et al. 2015
char and word level (Kang et al. 2011) (Ling et al. 2015) (Kim et al. 2016)
unsupervised morphology with log-bilinear model (Botha and Blunsom 2014) (Jozefowicz et al. 2016)
Brown et al. 1992
Chung et al. 2016
Luong and Manning 2016
combine count-based and neural LMs (Neubig and Dyer 2016)
char-based and word-based to translate text and code (Ling et al. 2016)
copy when UNK (Merity et al. 2016)
Phonetics and phonology
dataset: Room-to-Room
heavy NP Shift
verb + NP + PP (temporal adjunct): if the NP is heavy enough, it shifts to the end, so the PP appears right after the verb
phrasal verbs and particle shift
give the habit up / give up the habit
give the woman the object / give the object to the woman
double object / prepositional object
the woman’s house / the house of the woman
one change to the rule: choose RIGHTARC only if it produces a correct relation given the reference parse and all dependents of the word at the top of the stack have already been assigned (sketch below)
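A sketch of that modified rule as an arc-standard training oracle; the data structures here (a ref_heads dict and attached counts) are assumptions of this sketch.

```python
def oracle(stack, ref_heads, attached):
    """Arc-standard training oracle (sketch). ref_heads[d] = gold head of word d;
    attached[w] = number of w's gold dependents already attached."""
    n_gold = {w: sum(1 for h in ref_heads.values() if h == w) for w in ref_heads}
    if len(stack) >= 2:
        s1, s2 = stack[-1], stack[-2]          # top two words on the stack
        if ref_heads.get(s2) == s1:
            return "LEFTARC"
        # RIGHTARC only once every gold dependent of s1 has been attached:
        # RIGHTARC pops s1, which can never receive dependents afterwards.
        if ref_heads.get(s1) == s2 and attached.get(s1, 0) == n_gold.get(s1, 0):
            return "RIGHTARC"
    return "SHIFT"
```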
combine simple features using feature templates of the form location.property
clausal argument relations, which describe syntactic roles with respect to a predicate (often a verb)
modifier relations, which categorize the ways that words can modify their heads.
| Clausal Argument Relations | Description |
| --- | --- |
| XCOMP | open clausal complement |

| Nominal Modifier Relations | Description |
| --- | --- |
| CASE | prepositions, postpositions, and other case markers |
Institute for the Study of the Ancient World, the ISAW Library, NYU Center for Humanities, NYU Division of Libraries, NYU Center for Ancient Studies, and NYU Department of Classics
includes generative grammars of the type described by Chomsky (1957)
examples isolate a violation, so they tend to be unacceptable for a single identifiable reason.
the artificial learner must not be exposed to any knowledge of language that could not plausibly be part of the input to a human learner
100k most frequent words in the British National Corpus
in-domain: a training set (8,551 examples), a development set (527), and a test set (530); 17 sources
out-of-domain: a development set (516) and a test set (533); 6 sources
aggregate rating, yielding an average agreement of 93%
The encoders are trained on 100-200 million tokens, which is within a factor of ten of the number of tokens human learners are exposed to during language acquisition
subject-verb number agreement
highly sensitive to recent but irrelevant nouns
LSTM (50 units) with one-hot encoded input: 0.83% error rate
nouns-only baseline: 4.5%
distance between nouns and verbs
attractor: last intervening noun
count of attractors
word representations learned the concepts of singular and plural from scratch
word-by-word activation visualization
verb inflection task: the verb's singular form is supplied before the verb (this gives a cue to the syntactic clause boundary)
grammaticality judgment: humans rarely receive explicit ungrammaticality signals
training only on the hard subset (sentences with attractors)
Agreement attraction errors in humans are much more common when the attractor is plural than when it is singular
verbs whose forms resemble plural nouns can be mistaken for nouns, e.g., "The ship that the player drives has a very high speed."
limitation of the number prediction task, which jointly evaluates the model’s ability to identify the subject and its ability to assign the correct number to noun phrases
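A minimal PyTorch sketch of the number-prediction setup above, using the 50-unit LSTM from the notes; the embedding layer stands in for the one-hot input and is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class NumberPredictor(nn.Module):
    """Read the words preceding a verb and predict the verb's number."""
    def __init__(self, vocab_size, dim=50, hidden=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # stands in for one-hot input
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)             # {singular, plural}

    def forward(self, prefix):                      # prefix: (batch, seq) token ids
        h, _ = self.lstm(self.embed(prefix))
        return self.out(h[:, -1])                   # classify from the final state

model = NumberPredictor(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (4, 12)))   # 4 prefixes of 12 tokens
print(logits.shape)                                 # torch.Size([4, 2])
```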
Negative licensors: not, none
Negative polarity items: any, ever
word-synchronous beam search (Stern et al., 2017)
Poverty of the Stimulus argument.
purely data-driven learning is not powerful enough to explain the richness and uniformity of human grammars
knowledge embedding: TransE (sketch below)
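A minimal TransE sketch: relations act as translations in embedding space, and training pushes gold triples to lower energy ||h + r - t|| than corrupted ones by a margin. Dimensions, margin, and the toy indices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def energy(self, h, r, t):
        # score a (head, relation, tail) triple: small => plausible
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

def margin_loss(model, pos, neg, margin=1.0):
    # pos/neg: tuples of (h, r, t) index tensors; neg corrupts head or tail
    return torch.relu(margin + model.energy(*pos) - model.energy(*neg)).mean()

model = TransE(n_entities=1000, n_relations=20)
h, r, t = torch.tensor([3]), torch.tensor([5]), torch.tensor([7])
t_bad = torch.tensor([42])                          # corrupted tail
print(margin_loss(model, (h, r, t), (h, r, t_bad)))
```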
masked language modeling and next-sentence prediction as pre-training objectives
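A sketch of the masked-LM corruption recipe (the standard 80/10/10 split over the ~15% of positions chosen as targets); the toy vocabulary is an assumption.

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat"]       # toy vocabulary (assumption)

def mask_for_mlm(tokens, p=0.15):
    """~15% of positions become prediction targets; of those,
    80% -> [MASK], 10% -> random token, 10% left unchanged."""
    inputs, targets = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < p:
            targets[i] = tok                     # the model must recover this token
            roll = random.random()
            if roll < 0.8:
                inputs[i] = MASK
            elif roll < 0.9:
                inputs[i] = random.choice(VOCAB)
            # else: keep the original token
    return inputs, targets

print(mask_for_mlm("the cat sat on the mat".split()))
```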