Collecting Commonsense Inferences from Text
Ernest Davis, Cognitum 2016, July 11, 2016
TACIT: Toward Annotating Commonsense Inferences in Text
First text: Theft of the Mona Lisa
On a mundane morning in late summer in Paris, the impossible happened. The Mona Lisa vanished. On Sunday evening, August 20, 1911, Leonardo da Vinci's best-known painting was hanging in her usual place on the wall of the Salon Carré between Correggio's Mystical Marriage and Titian's Allegory of Alfonso d'Avalos. On Tuesday morning, when the Louvre reopened to the public, she was gone. Within hours of the discovery of the empty frame, stashed behind a radiator, the story broke in an extra edition of Le Temps, the leading morning newspaper. Incredulous reporters from local papers and international news services converged on the museum.
Second text: Speciation
In allopatric speciation (from the Greek allos, other, and patra, homeland) gene flow is interrupted when a population is divided into geographically isolated subpopulations. For example, the water level in a lake may subside, resulting in two or more smaller lakes that are now home to separated populations (see Figure 24.5a). Or a river may change course and divide a population of animals that cannot cross it.
Outline
Goal and related work
Some example inferences
Annotation schema
What has been done
How you can help
The way forward
High-level goal
Find out what commonsense inferences are needed to understand text. Avoid “looking for the keys under the streetlight”.
General approach: Systematically annotate texts with all the commonsense inferences needed to understand them.
Streetlight problem
Logicist approaches: Knowledge that is easy to formalize.
Web mining: Easy to mine.
Crowdsourcing: Seems interesting to MTurkers.
RTE: Small-scale, sentence-level inferences.
CYC: ??, as usual.
TACIT’s own streetlight problems
Verbalizable knowledge.
Emphasis on well-defined problems of exegesis could obscure the big picture: What is the mood of the text? What is the point? What is the viewpoint of the author? Is the author reliable?
Easy to miss implicit inferences; e.g. the current state misses important temporal inferences.
English-specific issues.
State of the art in commonsense reasoning
Taxonomic knowledge is in good shape: large, very high-quality taxonomies and enormous, quite high-quality taxonomies.
Temporal knowledge: Abstract representation is largely solved (SitCalc, event calculus, continuous time; a standard axiom is sketched below). Connecting language to representation is partially solved. Annotation of text is difficult and imperfect.
No other commonsense domains are in good shape (spatial, physical, psychology, social, etc.).
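As a point of reference (not part of TACIT itself), the kind of abstract temporal representation meant here can be illustrated with the standard event-calculus axiom: a fluent holds at a time if some earlier event initiated it and nothing has clipped it in between.

$$\mathit{HoldsAt}(f,t) \leftarrow \mathit{Happens}(e,t_1) \wedge \mathit{Initiates}(e,f,t_1) \wedge t_1 < t \wedge \neg\mathit{Clipped}(t_1,f,t)$$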
Selected related work
Schank’s group’s work; Mike Dyer, In-Depth Understanding.
CYC’s original goal of encoding background knowledge for 400 encyclopedia articles.
RTE (Dagan et al. 2006).
Semantic annotation of texts, e.g. TimeML, PropBank.
LoBue and Yates (2011), “Types of Commonsense Knowledge Needed for Recognizing Textual Entailment”.
Hobbs and Gordon, naïve psychology.
Sample Inferences
On a mundane morning in late summer in Paris, the impossible happened. The Mona Lisa vanished.
“The impossible happened” is hyperbole.
“In Paris” semantically modifies “happened”, not “morning”.
The Mona Lisa did not actually vanish; it mysteriously became absent.
More inferences
“The Mona Lisa vanished” and “the impossible happened” are the same event.
The event of the Mona Lisa being absent was not expected by the museum administration.
For the 7 sentences of text, I have enumerated 34 such inferences.
Annotations for Inference 3
Inference: In "The Mona Lisa vanished", "vanished" is metaphorical, not literal. What is meant is "The Mona Lisa became absent from its proper place".
Specific text being explicated: "The Mona Lisa vanished"
Background: Physical objects rarely literally vanish.
Category of inference: ( Existence ; Event = Mona Lisa became absent ; )
Domain: Spatial and physical knowledge
Example 1 (cntd)
Linguistic Significance: Interpret non-literal text.
Question: What actually happened to the Mona Lisa?
Right answer: The Mona Lisa unexpectedly became missing from its usual place.
Wrong answer: The Mona Lisa became invisible.
Feasibility: Feasible.
Comment: Detecting the impossibility of literally vanishing is reasonably easy on a feature match. The metaphorical use is very common and could be in the lexicon. (The OED mentions the figurative use but does not explain it.)
Second text: Speciation
In allopatric speciation (from the Greek allos, other, and patra, homeland) gene flow is interrupted when a population is divided into geographically isolated subpopulations. For example, the water level in a lake may subside, resulting in two or more smaller lakes that are now home to separated populations (see Figure 24.5a). Or a river may change course and divide a population of animals that cannot cross it.
Example 2
Inference 6: The new lakes are smaller than the original lake.
Specific text being explicated: "two or more smaller lakes"
Background: In the process described in Inference 5, each of the new separate regions is a proper subset of the original region. If region A is a proper subset of region B, then A is smaller than B. (Inference 5 inferred that the region occupied by the new lakes is a subset of the old lake.) A formal rendering of the second rule is sketched after Example 2.
Domain: Spatial and physical knowledge.
Example 2 (cntd)
Linguistic Significance: Find case filler.
Question: The passage refers to "two or more smaller lakes". What are these lakes smaller than?
Right answer: They are smaller than the original lake.
Wrong answer: They are smaller than one another.
Wrong answer: They are smaller than most lakes.
Wrong answer: They are smaller than the subpopulations.
Wrong answer: They are smaller than the homeland.
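The second background fact in Inference 6 is simple enough to state formally. A possible first-order rendering (purely illustrative; the annotations currently record background knowledge only in English) is:

$$\forall A,B \;\; \big(\mathit{region}(A) \wedge \mathit{region}(B) \wedge A \subsetneq B\big) \rightarrow \mathit{size}(A) < \mathit{size}(B)$$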
Annotation schema
Text being explicated
Background knowledge
Domain
  Six general categories: spatial and physical; naïve biology; naïve psychology; social relations; specialized knowledge; conventions of discourse and narrative
  21 lower-level categories
Annotation schema (cntd)
Linguistic significance: non-literal text, find case filler, lexical disambiguation, syntactic disambiguation, coreference resolution, etc.
Category of inference: Operator(args)
  Entity categories: Aspect, Event, Object, Person, Proposition, SpeechAct, State, Other
  21 relation categories: Authorized, Believe, CausalRelation, ContentOf, Emotion, Ethics, …
Compare: example of contrasting text
(One possible machine-readable rendering of a full record is sketched below.)
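To make the schema concrete, here is one way a single record (Inference 3 on the Mona Lisa text) might be held as a data structure. This is only an illustrative Python sketch with hypothetical field names; it is not the project's actual XML format.

```python
# Hypothetical in-memory rendering of one TACIT annotation record.
# Field names are invented for illustration; the real XML schema may differ.
inference_3 = {
    "inference": 'In "The Mona Lisa vanished", "vanished" is metaphorical, '
                 'not literal: the painting became absent from its proper place.',
    "text_explicated": "The Mona Lisa vanished",
    "background": ["Physical objects rarely literally vanish."],
    "category": {"operator": "Existence",
                 "args": {"Event": "Mona Lisa became absent"}},
    "domain": "Spatial and physical knowledge",
    "linguistic_significance": "Interpret non-literal text",
    "question": "What actually happened to the Mona Lisa?",
    "right_answers": ["The Mona Lisa unexpectedly became missing "
                      "from its usual place."],
    "wrong_answers": ["The Mona Lisa became invisible."],
    "feasibility": "Feasible",
}
```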
What has been done
6 narrative (newspaper) texts and 3 biology texts annotated!
171 inferences characterized!
XML format defined!
Annotators’ Manual written!
3 students involved!
How you can help
Difficult to train annotators.
No formal representation for the content of inferences or background knowledge.
Inferences are hard to individuate, particularly in the biology domain.
In biology, some texts seem to require that you know most of the content before you can understand the text.
Only intelligible if you know most of it
By transporting fluid throughout the body, the circulatory system functionally connects the aqueous environment of the body cells to the organs that exchange gasses, absorb nutrients, and dispose of wastes. In mammals, for example, oxygen from inhaled air diffuses across only two layers of cells in the lungs before reaching the blood. The circulatory system, powered by the heart, then carries the oxygen-rich blood to all parts of the body. As the blood streams throughout the body tissues in tiny blood vessels, oxygen in the blood diffuses only a short distance before encountering the fluid that directly bathes the cells.
How you can help (cntd)
No way to evaluate the answers.
Inter-annotator agreement can be measured:
  For the discrete fields, e.g. category of inference and linguistic significance (a minimal sketch follows this slide)
  Not for the amorphous aspects, e.g. individuation of inferences and background knowledge
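A minimal sketch of how agreement on one discrete field might be measured with Cohen's kappa, assuming scikit-learn is available; the labels below are invented for illustration.

```python
# Illustrative only: chance-corrected agreement (Cohen's kappa) on a
# discrete annotation field such as "domain". All labels are made up.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["spatial-physical", "psychology", "spatial-physical", "social",
               "biology", "spatial-physical", "discourse", "psychology",
               "spatial-physical", "biology"]
annotator_2 = ["spatial-physical", "psychology", "biology", "social",
               "biology", "spatial-physical", "psychology", "psychology",
               "spatial-physical", "biology"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa on the domain field: {kappa:.2f}")
```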
How you can help (cntd)
What would the annotations be used for?
  A guide for developing a knowledge-enriched NLP system. But the gaps are large.
  A training set for ML. But (a) it would have to be huge; (b) what would be the purpose of the output of the ML?
  Run statistics. Pretty pointless.
  Serve as a test set for CYC.
Why would anyone fund it? Why would a serious student want to work on it?
The way forward
Multiple levels of annotators:
  Experts characterize inferences and background knowledge.
  Trained annotators validate inferences and background, characterize linguistic significance, and categorize inferences.
  Naïve subjects generate questions and answers, both based on inferences and based purely on text.
Encouraging
Fei-Fei Li’s success with Visual Genome in getting rich image annotations from MTurkers is encouraging.
Clearly, this is an art, and requires a fair amount of work.
The multiple-cycle system could be adapted.
Way forward (cntd)
Use existing resources and tools:
  NL tools such as dependency parsing (a minimal example follows this slide)
  Semantic annotations: TimeML, PropBank
Systematize forms of inference (e.g. TimeML asks for all implicit temporal relations).
Tie to gaps and errors in existing technology.
Develop symbolic representations for as much as possible.
Proof of concept for improving technology.
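For instance, an off-the-shelf dependency parser already recovers some of the surface structure the annotations build on. A minimal sketch with spaCy, assuming the library and its small English model are installed:

```python
# Illustrative only: dependency parse of a sentence from the first text.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("The Mona Lisa vanished.")

for token in doc:
    # token text, its dependency label, and the head it attaches to
    print(f"{token.text:10} {token.dep_:12} -> {token.head.text}")
```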
Thank you!
Erik Mueller for suggestions about the slides.
Leora Morgenstern, Peter Clark, and Gary Marcus for discussions about the projects.
Casey Lorimer, Rajat Ram Suresh, and Kara Tong for working on the annotations.
http://www.cs.nyu.edu/faculty/davise/annotate/Tacit.html