Parsing Engine
Packages that use Event

| Package | Description |
|---|---|
| `danbikel.parser` | Provides the core framework of this extensible statistical parsing engine. |
| `danbikel.parser.ms` | Default package for model structure classes (subclasses of `ProbabilityStructure`). |
Uses of Event in danbikel.parser

Subinterfaces of Event in danbikel.parser

| Modifier | Interface and Description |
|---|---|
| `interface` | `MutableEvent`: Provides additional methods to those of `Event` that permit modification of the event object. |
| `interface` | `Subcat`: Specification for a collection of required arguments to be generated by a parser, also known as a subcategorization frame. |
Classes in danbikel.parser that implement Event

| Modifier | Class and Description |
|---|---|
| `class` | `AbstractEvent`: A convenience class that simply implements the `equals` method, as specified by the contract in `equals(Object)`. |
| `class` | `BrokenSubcatBag`: A "broken" version of `SubcatBag` that precisely reflects the details specified in Collins' thesis (used for a "clean-room" implementation). |
| `class` | `SexpEvent`: Represents an event composed of one or more `Sexp` objects. |
| `class` | `SexpSubcatEvent`: Represents an event composed of zero or more `Sexp` objects and zero or one `Subcat` object. |
| `class` | `SubcatBag`: Provides a bag implementation of subcat requirements (a bag is a set that allows multiple occurrences of the same item). |
| `class` | `SubcatList`: Implements subcats whose requirements must be met in the order in which they were added to the subcat (the strictest form of subcat). |
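The contrast between `SubcatBag` and `SubcatList` above is one of matching semantics: a bag ignores order and tracks only multiplicities, while a list requires requirements to be satisfied in insertion order. The sketch below illustrates that distinction with hypothetical stand-in methods; it uses plain `String` labels and is not the actual `SubcatBag`/`SubcatList` implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SubcatSemantics {
    // List semantics: requirements must be met in the exact order added
    // (the strictest interpretation of a subcat frame).
    static boolean listSatisfies(List<String> required, List<String> generated) {
        return required.equals(generated);
    }

    // Bag semantics: order is irrelevant; only the multiplicity of each
    // requirement matters (a bag is a set allowing duplicates).
    static boolean bagSatisfies(List<String> required, List<String> generated) {
        return count(required).equals(count(generated));
    }

    private static Map<String, Long> count(List<String> items) {
        Map<String, Long> m = new HashMap<>();
        for (String s : items) m.merge(s, 1L, Long::sum);
        return m;
    }

    public static void main(String[] args) {
        List<String> required  = List.of("NP-A", "NP-A", "SBAR-A");
        List<String> generated = List.of("SBAR-A", "NP-A", "NP-A");
        System.out.println(bagSatisfies(required, generated));  // true: order ignored
        System.out.println(listSatisfies(required, generated)); // false: order enforced
    }
}
```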
Methods in danbikel.parser that return Event

| Modifier and Type | Method and Description |
|---|---|
| `protected static Event` | `Model.canonicalizeEvent(Event event, FlexibleMap canonical)`: First canonicalizes the information in the specified event (a `Sexp`, or a `Subcat` and a `Sexp`), then returns a canonical version of the event itself, copying it into the map if necessary. |
| `Event` | `BrokenSubcatBag.copy()`: Returns a deep copy of this subcat bag. |
| `Event` | `Event.copy()`: Returns a deep copy of this event of the same run-time type. |
| `Event` | `SexpEvent.copy()`: Returns a deep copy of this event, which simply means creating a new instance with a deep copy of the backing `Sexp`, via the `Sexp.deepCopy` method. |
| `Event` | `SexpSubcatEvent.copy()`: Returns a deep copy of this event, using `SexpEvent.copy` to copy the backing `Sexp` and `Event.copy` to copy the backing `Subcat`, if there is one. |
| `Event` | `SubcatBag.copy()`: Returns a deep copy of this subcat bag. |
| `Event` | `SubcatList.copy()` |
| `Event` | `Transition.future()`: Gets the future event of this transition object. |
| `abstract Event` | `ProbabilityStructure.getFuture(TrainerEvent trainerEvent, int backOffLevel)`: Extracts the future for the specified back-off level from the specified trainer event. |
| `abstract Event` | `ProbabilityStructure.getHistory(TrainerEvent trainerEvent, int backOffLevel)`: Extracts the history context for the specified back-off level from the specified trainer event. |
| `Event` | `Transition.history()`: Gets the history event of this transition object. |
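The `copy()` entries above all share one contract: the method returns a deep copy with the same run-time type, so mutating the copy never affects the original. A minimal sketch of that contract, using hypothetical `MiniEvent`/`MiniSexpEvent` stand-ins (a `List<String>` plays the role of the backing `Sexp`; these are not the library's classes):

```java
import java.util.ArrayList;
import java.util.List;

interface MiniEvent {
    // Contract: returns a deep copy of this event of the same run-time type.
    MiniEvent copy();
}

final class MiniSexpEvent implements MiniEvent {
    final List<String> components; // stands in for the backing Sexp

    MiniSexpEvent(List<String> components) {
        this.components = new ArrayList<>(components); // defensive deep copy
    }

    @Override
    public MiniEvent copy() {
        // New instance backed by a fresh copy of the components.
        return new MiniSexpEvent(components);
    }
}

public class CopyContract {
    public static void main(String[] args) {
        MiniSexpEvent original = new MiniSexpEvent(List.of("NP", "VP"));
        MiniSexpEvent dup = (MiniSexpEvent) original.copy();
        dup.components.add("PP");                       // mutate only the copy
        System.out.println(original.components.size()); // 2: original untouched
        System.out.println(dup.getClass() == original.getClass()); // true: same run-time type
    }
}
```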
Methods in danbikel.parser with parameters of type Event

| Modifier and Type | Method and Description |
|---|---|
| `protected static Event` | `Model.canonicalizeEvent(Event event, FlexibleMap canonical)`: First canonicalizes the information in the specified event (a `Sexp`, or a `Subcat` and a `Sexp`), then returns a canonical version of the event itself, copying it into the map if necessary. |
| `static double[]` | `AnalyzeDisns.getLogProbDisn(Model model, int level, Event hist, Set futures, double[] disn, Transition tmpTrans)`: Returns the smoothed log-probability distribution for the specified history at the specified back-off level in the specified model. |
| `protected void` | `InterpolatedKnesserNeyModel.precomputeProbs(MapToPrimitive.Entry transEntry, double[] lambdas, double[] estimates, Transition[] transitions, Event[] histories, int lastLevel)` |
| `protected void` | `Model.precomputeProbs(MapToPrimitive.Entry transEntry, double[] lambdas, double[] estimates, Transition[] transitions, Event[] histories, int lastLevel)`: Precomputes the probabilities and smoothing values for the `Transition` object contained as a key within the specified map entry, where the value is the count of the transition. |
| `protected void` | `InterpolatedKnesserNeyModel.precomputeProbs(TrainerEvent event, Transition[] transitions, Event[] histories)`: Deprecated. This method is called by `Model.precomputeProbs(CountsTable,Filter)`, which is also deprecated. |
| `protected void` | `Model.precomputeProbs(TrainerEvent event, Transition[] transitions, Event[] histories)`: Deprecated. This method is called by `Model.precomputeProbs(CountsTable,Filter)`, which is also deprecated. |
| `static void` | `PrintDisn.printLogProbDisn(PrintWriter writer, ModelCollection mc, Model model, int level, Event hist, Set futures, Transition tmpTrans)`: Prints the log-probability distribution of the specified event at the specified back-off level of the specified model to the specified writer. |
| `boolean` | `ProbabilityStructure.removeFuture(int backOffLevel, Event future)`: Indicates that `Model.cleanup()`, which is invoked at the end of `Model.deriveCounts(CountsTable,danbikel.util.Filter,double,danbikel.util.FlexibleMap)`, can safely remove the specified event from the `Model` object's internal counts tables, as the event is not applicable to any of the probabilities for which the model will produce an estimate. |
| `boolean` | `ProbabilityStructure.removeHistory(int backOffLevel, Event history)`: Indicates that `Model.cleanup()`, which is invoked at the end of `Model.deriveCounts`, can safely remove the specified event from the `Model` object's internal counts tables, as the event is not applicable to any of the probabilities for which the model will produce an estimate. |
| `void` | `Transition.setFuture(Event future)`: Sets the future event of this transition. |
| `void` | `Transition.setHistory(Event history)`: Sets the history event of this transition. |
| `protected void` | `InterpolatedKnesserNeyModel.storePrecomputedProbs(double[] lambdas, double[] estimates, Transition[] transitions, Event[] histories, int lastLevel)` |
| `protected void` | `Model.storePrecomputedProbs(double[] lambdas, double[] estimates, Transition[] transitions, Event[] histories, int lastLevel)`: Stores the specified smoothing values (lambdas) and smoothed probability estimates in the `Model.precomputedProbs` and `Model.smoothingParams` map arrays. |
Constructors in danbikel.parser with parameters of type Event

| Constructor and Description |
|---|
| `Transition(Event future, Event history)`: Constructs this transition with the specified future and history events. |
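A `Transition` pairs a history (the conditioning context) with a future (the outcome being predicted), i.e. the two sides of an estimate p(future | history). The sketch below mirrors the constructor and accessor signatures listed above with a hypothetical stand-in class (plain `Object` events, not the library's `Event` type):

```java
public class TransitionSketch {
    static final class Transition {
        private Object future, history;

        // Mirrors Transition(Event future, Event history) above.
        Transition(Object future, Object history) {
            this.future = future;
            this.history = history;
        }

        Object future()  { return future; }   // the event being predicted
        Object history() { return history; }  // the conditioning context
        void setFuture(Object f)  { future = f; }
        void setHistory(Object h) { history = h; }
    }

    public static void main(String[] args) {
        // E.g., a head-generation transition: predict "VP" given parent "S".
        Transition t = new Transition("VP", "S");
        System.out.println(t.future() + " given " + t.history()); // VP given S
        t.setFuture("NP"); // transitions are mutable, as the setters above show
        System.out.println(t.future()); // NP
    }
}
```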
Uses of Event in danbikel.parser.ms

Methods in danbikel.parser.ms that return Event

| Modifier and Type | Method and Description |
|---|---|
| `Event` | `BrokenLeftSubcatModelStructure.getFuture(TrainerEvent trainerEvent, int backOffLevel)`: Gets the future being predicted, conditioning on this subcat event. |
| `Event` | `BrokenLexPriorModelStructure.getFuture(TrainerEvent trainerEvent, int backOffLevel)`: Returns an event whose two components are the word and part of speech for which a marginal probability is being computed. |
| `Event` | `BrokenModWordModelStructure.getFuture(TrainerEvent trainerEvent, int backOffLevel)`: Returns an event whose sole component is the word being generated as the head of some modifier nonterminal. |
| `Event` | `BrokenRightSubcatModelStructure.getFuture(TrainerEvent trainerEvent, int backOffLevel)`: Gets the future being predicted, conditioning on this subcat event. |
| `Event` | `BrokenTopLexModelStructure.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `GapModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `HeadModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `LeftSubcatModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `LeftSubcatModelStructure2.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `LexPriorModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure2.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure3.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure4.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure5.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure6.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure7.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure8.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure9.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure2.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure3.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure4.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure5.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure6.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure7.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure8.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure9.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `NonterminalPriorModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `RightSubcatModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `RightSubcatModelStructure2.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TagModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TagModelStructure2.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TopLexModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TopNonterminalModelStructure1.getFuture(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `BrokenLexPriorModelStructure.getHistory(TrainerEvent trainerEvent, int backOffLevel)`: As this model simulates unconditional probabilities using relative-frequency estimation, this method returns a history whose sole component is a dummy object that is the same regardless of the "future" being estimated. |
| `Event` | `BrokenModWordModelStructure.getHistory(TrainerEvent trainerEvent, int backOffLevel)`: Returns the history event corresponding to the specified back-off level. |
| `Event` | `BrokenTopLexModelStructure.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `GapModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `HeadModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `LexPriorModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure2.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure3.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure4.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure6.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure7.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure8.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModNonterminalModelStructure9.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure2.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure3.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure4.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure5.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure6.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure7.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure8.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `ModWordModelStructure9.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `NonterminalPriorModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `SubcatModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)`: Returns a history for the specified back-off level, according to the following zero-indexed list of history events. |
| `Event` | `SubcatModelStructure2.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TagModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TagModelStructure2.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TopLexModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
| `Event` | `TopNonterminalModelStructure1.getHistory(TrainerEvent trainerEvent, int backOffLevel)` |
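Every `getHistory` overload above maps a trainer event and a back-off level to a history context: level 0 conditions on the most information, and each higher level drops conditioning features to yield a coarser, better-estimated context. A minimal sketch of that scheme, using illustrative feature names (tag, parent nonterminal, head word) rather than the actual components of any model structure class:

```java
import java.util.List;

public class BackOffSketch {
    // Level 0 conditions on everything; each higher back-off level removes
    // features, so sparser histories still receive usable counts.
    static List<String> getHistory(String tag, String parent, String headWord,
                                   int backOffLevel) {
        switch (backOffLevel) {
            case 0:  return List.of(tag, parent, headWord); // richest context
            case 1:  return List.of(tag, parent);           // drop the head word
            default: return List.of(tag);                   // last level: tag alone
        }
    }

    public static void main(String[] args) {
        for (int level = 0; level < 3; level++)
            System.out.println(level + ": " + getHistory("NN", "NP", "dog", level));
    }
}
```

The per-level histories are what `Model` interpolates over when smoothing: estimates from coarser levels back off the sparse, fully conditioned level-0 estimate.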
Methods in danbikel.parser.ms with parameters of type Event

| Modifier and Type | Method and Description |
|---|---|
| `boolean` | `ModWordModelStructure2.removeFuture(int backOffLevel, Event future)` |
| `boolean` | `ModWordModelStructure4.removeFuture(int backOffLevel, Event future)` |
| `boolean` | `ModWordModelStructure6.removeFuture(int backOffLevel, Event future)` |
| `boolean` | `ModWordModelStructure7.removeFuture(int backOffLevel, Event future)` |
| `boolean` | `ModWordModelStructure8.removeFuture(int backOffLevel, Event future)` |
| `boolean` | `BrokenModWordModelStructure.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p̂(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModNonterminalModelStructure2.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModNonterminalModelStructure4.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModNonterminalModelStructure6.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModNonterminalModelStructure7.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModNonterminalModelStructure8.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModNonterminalModelStructure9.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModWordModelStructure2.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModWordModelStructure4.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModWordModelStructure5.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModWordModelStructure6.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModWordModelStructure7.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `ModWordModelStructure8.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `TagModelStructure1.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `TagModelStructure2.removeHistory(int backOffLevel, Event history)`: In order to gather statistics for words that appear as the head of the entire sentence when estimating p(w \| t), the trainer "fakes" a modifier event, as though the root node of the observed tree was seen to modify the magical +TOP+ node. |
| `boolean` | `TopLexModelStructure1.removeHistory(int backOffLevel, Event history)` |