K

 

 

KB
= knowledge base
knowledge base
See knowledge representation language.
knowledge representation
See knowledge representation language.
knowledge representation language
The term knowledge representation language (KRL) is used to refer to the language used by a particular system to encode the knowledge. The collection of knowledge used by the system is referred to as a knowledge base (KB).
KRL
= knowledge representation language

 

L

 

 

lambda reduction
The process of applying a lambda-expression to its argument (in general, arguments, but the examples we've seen in COMP9414 have all been single-argument lambda-expressions). A lambda-expression is a formula of the form (lambda ?x P(?x)), in an Allen-like notation, or lambda(X, p(X)) in a Prolog-ish notation. P(?x) (or p(X)) signifies a formula involving the variable ?x (or X). The lambda-expression can be viewed as a function to be applied to an argument. The result of applying lambda(X, p(X)) to an argument a is p(a) - that is, the formula p(X) with all instances of the variable X replaced by a. To use a more clearly NLP-flavoured example, if we apply lambda(X, eat1(l1, X, pizza1)) to mary1 we get eat1(l1, mary1, pizza1).

Prolog code for lambda-reduction is:

lambda_reduce(lambda(X, Predicate), Argument, Predicate) :-
     X = Argument.
Applying this to an actual example:
: lambda_reduce(
         lambda(X, eats(e1, X, the1(p1, pizza1))),
         name(m1, 'Mary'),
         Result) ?
X = name(m1, 'Mary')
Result = eats(e1, name(m1, 'Mary'), the1(p1, pizza1))
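The Prolog version above gets the substitution for free from unification. The same reduction step can also be sketched as explicit substitution; here is a minimal Python analogue, where the nested-tuple encoding of terms is my own illustrative convention, not Allen's notation:

```python
def lambda_reduce(lam, arg):
    """Apply a lambda-expression ('lambda', var, body) to an argument
    by substituting arg for every occurrence of var in the body."""
    tag, var, body = lam
    assert tag == 'lambda'
    return substitute(body, var, arg)

def substitute(term, var, arg):
    # A term is either an atom (a string) or a tuple (functor, arg1, ...).
    if term == var:
        return arg
    if isinstance(term, tuple):
        return tuple(substitute(t, var, arg) for t in term)
    return term

# lambda(X, eats(e1, X, the1(p1, pizza1))) applied to name(m1, 'Mary'):
lam = ('lambda', 'X', ('eats', 'e1', 'X', ('the1', 'p1', 'pizza1')))
result = lambda_reduce(lam, ('name', 'm1', 'Mary'))
# result == ('eats', 'e1', ('name', 'm1', 'Mary'), ('the1', 'p1', 'pizza1'))
```

Unlike the Prolog clause, which binds X once via unification, this version walks the body and rewrites each occurrence of the variable explicitly.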
language generated by a grammar
The language generated by a grammar is the set of all sentences that can be derived from the start symbol S of the grammar using the grammar rules. Less formally, it is the set of all sentences that "follow from" or are consistent with the grammar rules.
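For a finite language, the set of derivable sentences can be enumerated directly from the rules. Below is a small Python sketch; the toy grammar and its lexicon are invented for illustration only:

```python
from itertools import product

# A toy context-free grammar: non-terminals map to lists of right-hand sides.
rules = {
    'S':  [['NP', 'VP']],
    'NP': [['N']],
    'VP': [['V'], ['V', 'NP']],
    'N':  [['cats'], ['dogs']],
    'V':  [['sleep'], ['chase']],
}

def generate(symbol):
    """Yield every terminal word-list derivable from symbol."""
    if symbol not in rules:          # a terminal word
        yield [symbol]
        return
    for rhs in rules[symbol]:
        for parts in product(*(list(generate(s)) for s in rhs)):
            yield [w for part in parts for w in part]

language = sorted(' '.join(words) for words in generate('S'))
# 2 NPs x (2 + 2*2) VPs = 12 sentences, e.g. 'cats chase dogs'
```

This only terminates because the toy grammar has no recursion; most real grammars generate infinite languages, so one enumerates sentences up to a length bound instead.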
left-to-right parsing
Parsing that processes the words of the sentence from left to right (i.e. from beginning to end), as opposed to right-to-left (or end-to-beginning) parsing. Logically it may not matter which direction parsing proceeds in, and the parser will work, eventually, in either direction. However, right-to-left parsing is likely to be less intuitive than left-to-right. If the sentence is damaged (e.g. by the presence of a mis-spelled word) it may help to use a parsing algorithm that incorporates both left-to-right and right-to-left strategies, to allow one to parse material to the right of the error.
lemma
A set of lexemes with the same stem, the same major part-of-speech, and the same word-sense. E.g. {cat, cats}.
lexeme
Fancy name for a word, including any suffix or prefix. Contrast free and bound morphemes.
lexical functional grammar
A grammatical formalism, not covered in COMP9414.
lexical generation probability
The probability that a particular lexical category (in context or out of context) will give rise to a particular word. For example, in a system with a very small lexicon, there might be only two nouns, say cat and dog. Given a corpus of sentences using this lexicon, one could count the number of times that the two words cat and dog occurred (as nouns), say ncats and ndogs. Then the lexical generation probability for cat as a noun would be ncats/(ncats+ndogs), written symbolically as Pr(cat | N).
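The counting involved can be sketched in a few lines of Python; the tiny tagged corpus here is hypothetical, chosen so the arithmetic is easy to check:

```python
from collections import Counter

# Hypothetical tagged corpus: (word, part-of-speech) pairs.
tagged = [('cat', 'N'), ('dog', 'N'), ('cat', 'N'),
          ('cat', 'N'), ('barks', 'V'), ('dog', 'N')]

pair_counts = Counter(tagged)                  # counts of (word, tag)
tag_counts = Counter(tag for _, tag in tagged) # counts of each tag

def lex_gen_prob(word, tag):
    """Pr(word | tag): the fraction of this tag's occurrences
    that are realised as this word."""
    return pair_counts[(word, tag)] / tag_counts[tag]

print(lex_gen_prob('cat', 'N'))   # 3 cats out of 5 nouns -> 0.6
```

With ncats = 3 and ndogs = 2 this is exactly ncats/(ncats+ndogs) from the definition above.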
lexical insertion rule
A rule of a grammar (particularly a context-free grammar) of the form X → w, where w is a single word. In most lexicons, all the lexical insertion rules for a particular word are "collapsed" into a single lexical entry, like

 

"pig": N V ADJ.

"pig" is familiar as a N, but also occurs as a verb ("Jane pigged herself on pizza") and an adjective, in the phrase "pig iron", for example.

lexical symbol, lexical category
Synonymous with part-of-speech (POS). Also called a pre-terminal symbol. A kind of non-terminal symbol of a grammar - a non-terminal is a lexical symbol if it can appear in a lexical insertion rule. Examples are N, V, ADJ, PREP, INTERJ, ADV. Non-examples include NP, VP, PP and S (these are phrasal, not lexical, non-terminals). The term lexical category signifies the collection of all words that belong to a particular lexical symbol, for example, the collection of all Nouns or the collection of all ADJectives.

 

Contrast with phrasal category.

lexicon
A lexicon is a collection of information about the words of a language, in particular the lexical categories to which they belong. A lexicon is usually structured as a collection of lexical entries, like ("pig" N V ADJ). "pig" is familiar as an N, but also occurs as a verb ("Jane pigged herself on pizza") and an adjective, in the phrase "pig iron", for example. In practice, a lexical entry will include further information about the roles the word plays, such as feature information - for example, whether a verb is transitive, intransitive, ditransitive, etc., and what form the verb takes (e.g. present participle, past tense, etc.).
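One common way to structure such entries is as a map from words to their categories plus features. The sketch below is a minimal Python illustration; the feature names ('subcat' and its values) are invented for the example, not a standard:

```python
# A minimal lexicon: each entry lists the word's lexical categories
# and (optionally) per-category feature information.
lexicon = {
    'pig':  {'cats': ['N', 'V', 'ADJ'],
             'features': {'V': {'subcat': 'reflexive'}}},
    'eats': {'cats': ['V'],
             'features': {'V': {'subcat': 'transitive'}}},
}

def categories(word):
    """Return the lexical categories of a word ([] if not in the lexicon)."""
    entry = lexicon.get(word)
    return entry['cats'] if entry else []
```

A lookup like categories('pig') then plays the role of the collapsed entry ("pig" N V ADJ): one entry standing in for the three lexical insertion rules N → pig, V → pig, ADJ → pig.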
LFG
= lexical functional grammar
local discourse context
The local discourse context, or just local context includes the syntactic and semantic analysis of the preceding sentence, together with a list of objects mentioned in the sentence that could be antecedents for later pronouns and definite noun phrases. Thus the local context is used for the reference stage of NLP. See also history list.
logical form
Logical forms are expressions in a special language, resembling FOPC (first order predicate calculus) and used to encode the meanings (out of context) of NLP sentences. The logical form language used in the book by James Allen includes:

 

terms
constants or expressions that describe objects: fido1, jack1
predicates
constants or expressions that describe relations or properties, like bites1. Each predicate has an associated number of arguments - bites1 is binary.
propositions
a predicate followed by the appropriate number of arguments: bites1(fido1, jack1), dog1(fido1) - Fido is a dog. More complex propositions can be constructed using logical operators not(loves1(sue1, jack1)), &(bites1(fido1, jack1), dog1(fido1)).
quantifiers
English has some precise quantifier-like words: some, all, each, every, the, a, as well as vague ones: most, many, a few. The logical form language has quantifiers to encode the meanings of each quantifier-like word.
variables
are needed because of the quantifiers, and because, while the words in a sentence in many cases give us the types of the objects, states and events being discussed, it is not until a later stage of processing (reference) that we know to what instances of those types the words refer.

Variables in the logical form language, unlike in FOPC, persist beyond the "scope" of the quantifier. E.g. A man came in. He went to the table. The first sentence introduces a new object of type man1. The He in the second sentence refers to this object.

NL quantifiers are typically restricted in the range of objects that the variable ranges over. In Most dogs bark the variable in the most1 quantifier is restricted to dog1 objects: most1(d1 : dog1(d1), barks1(d1)).

predicate operators
A predicate operator takes a predicate as an argument and produces a new predicate. For example, we can take a predicate like cat1 (a unary predicate true of a single object of type cat1) and apply the predicate operator plur that converts singular predicates into the corresponding plural predicate plur(cat1), which is true of any set of cats with more than one member.
modal operators
Modal operators are used to represent certain verbs like believe, know, want, that express attitudes to other propositions, and for tense, and other purposes. Sue believes Jack is happy becomes
believe(sue1, happy1(jack1))

With tenses, we use the modal operators pres, past, fut, as in:

pres(sees1)(john1, fido1)
past(sees1)(john1, fido1)
fut(sees1)(john1, fido1)
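Logical-form expressions like these are just nested terms, so they are easy to represent and print programmatically. A small Python sketch, reusing an ad-hoc nested-tuple encoding (my convention, not Allen's):

```python
def show(term):
    """Render a nested-tuple logical form in the glossary's notation.
    A term is an atom (string) or a tuple (head, arg1, ...), where the
    head may itself be complex, as with modal operators like past(sees1)."""
    if isinstance(term, tuple):
        head, *args = term
        return show(head) + '(' + ', '.join(show(a) for a in args) + ')'
    return term

# past(sees1)(john1, fido1): the modal operator applies to the predicate,
# and the resulting predicate applies to the arguments.
print(show((('past', 'sees1'), 'john1', 'fido1')))
# -> past(sees1)(john1, fido1)
```

The same encoding covers the earlier examples, e.g. show(('believe', 'sue1', ('happy1', 'jack1'))) renders Sue believes Jack is happy as believe(sue1, happy1(jack1)).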

logical operator
The operators and, or, not, ⇒ (implies), and ⇔ (is equivalent to). and is sometimes written as &. They are used to connect propositions to make larger propositions: e.g.
is-blue(sky1) and is-green(grass1) or can-fly(pig1)

