org.apache.lucene.analysis
public class: Token
java.lang.Object
   org.apache.lucene.util.AttributeImpl
      org.apache.lucene.analysis.Token

All Implemented Interfaces:
    TermAttribute, Cloneable, PositionIncrementAttribute, OffsetAttribute, PayloadAttribute, TypeAttribute, FlagsAttribute, Attribute, Serializable

A Token is an occurrence of a term from the text of a field. It consists of a term's text, the start and end offset of the term in the text of the field, and a type string.

The start and end offsets permit applications to re-associate a token with its source text, e.g., to display highlighted query terms in a document browser, or to show matching text fragments in a KWIC display, etc.

The type is a string, assigned by a lexical analyzer (a.k.a. tokenizer), naming the lexical or syntactic class that the token belongs to. For example an end of sentence marker token might be implemented with type "eos". The default token type is "word".

A Token can optionally have metadata (a.k.a. Payload) in the form of a variable length byte array. Use TermPositions#getPayloadLength() and TermPositions#getPayload(byte[], int) to retrieve the payloads from the index.
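As a minimal sketch of attaching such metadata at analysis time (assuming Lucene 2.9/3.0, where Payload lives in org.apache.lucene.index; the byte values are invented):

```java
import org.apache.lucene.analysis.Token;
import org.apache.lucene.index.Payload;

public class PayloadExample {
    public static void main(String[] args) {
        // A token with term text "example" spanning offsets 0..7.
        Token token = new Token("example", 0, 7);

        // Attach a variable-length byte array as metadata; the bytes here
        // are arbitrary illustration values.
        token.setPayload(new Payload(new byte[] { 0x01, 0x02 }));

        // The payload travels with the token through indexing; at search
        // time it is read back via TermPositions#getPayload(byte[], int).
        Payload p = token.getPayload();
        System.out.println(p.length()); // number of payload bytes
    }
}
```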

NOTE: As of 2.9, Token implements all Attribute interfaces that are part of core Lucene and can be found in the {@code tokenattributes} subpackage. Even though it is no longer necessary to use Token with the new TokenStream API, it can still be used as a convenience class that implements all Attributes, which is especially useful for easily switching from the old to the new TokenStream API.

Tokenizers and TokenFilters should try to re-use a Token instance when possible for best performance, by implementing the TokenStream#incrementToken() API. Failing that, to create a new Token you should first use one of the constructors that start with null text. To load the token from a char[] use #setTermBuffer(char[], int, int). To load from a String use #setTermBuffer(String) or #setTermBuffer(String, int, int). Alternatively you can get the Token's termBuffer by calling either #termBuffer(), if you know that your text is shorter than the capacity of the termBuffer, or #resizeTermBuffer(int), if there is any possibility that you may need to grow the buffer. Fill in the characters of your term into this buffer, with String#getChars(int, int, char[], int) if loading from a String, or with System#arraycopy(Object, int, Object, int, int), and finally call #setTermLength(int) to set the length of the term text. See LUCENE-969 for details.
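The char[] loading path described above can be sketched as follows (the term text is made up; the methods used are the ones documented on this page):

```java
import org.apache.lucene.analysis.Token;

public class TermBufferExample {
    public static void main(String[] args) {
        // Start from a Token with null text and only the offsets set.
        Token token = new Token(0, 6);

        String source = "lucene";
        // Grow the buffer first, since the text may exceed its capacity.
        char[] buffer = token.resizeTermBuffer(source.length());
        // Fill the characters in, then record how many are valid.
        source.getChars(0, source.length(), buffer, 0);
        token.setTermLength(source.length());

        System.out.println(token.term()); // "lucene"
    }
}
```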

Typical Token reuse patterns:
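One such pattern, sketched as a hedged illustration (the term strings and offsets are invented; the point is that a single Token instance is refilled via reinit rather than allocated per term):

```java
import org.apache.lucene.analysis.Token;

public class ReuseExample {
    public static void main(String[] args) {
        // One reusable Token instance for the whole stream.
        Token reusable = new Token();

        String[] terms = { "foo", "bar" };
        int offset = 0;
        for (String t : terms) {
            // reinit(String, int, int) replaces the term text and offsets
            // in place, avoiding a new Token allocation per term.
            reusable.reinit(t, offset, offset + t.length());
            offset += t.length() + 1;
            System.out.println(reusable.term() + " @ " + reusable.startOffset());
        }
    }
}
```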


Nested Class Summary:
public static final class  Token.TokenAttributeFactory  Expert: Creates a TokenAttributeFactory returning Token as the instance for the basic attributes, and calls the given delegate factory for all other attributes.
Field Summary
public static final  String DEFAULT_TYPE     
public static final  AttributeFactory TOKEN_ATTRIBUTE_FACTORY    Convenience factory that returns Token as the implementation for the basic attributes and returns the default impl (with "Impl" appended) for all other attributes.
    since: 3.0
 
Constructor:
 public Token() 
 public Token(int start,
    int end) 
 public Token(int start,
    int end,
    String typ) 
    Constructs a Token with null text and start & end offsets plus the Token type.
    Parameters:
    start - start offset in the source text
    end - end offset in the source text
    typ - the lexical type of this Token
 public Token(int start,
    int end,
    int flags) 
 public Token(String text,
    int start,
    int end) 
    Constructs a Token with the given term text, and start & end offsets. The type defaults to "word." NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text.
    Parameters:
    text - term text
    start - start offset
    end - end offset
 public Token(String text,
    int start,
    int end,
    String typ) 
    Constructs a Token with the given text, start and end offsets, & type. NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text.
    Parameters:
    text - term text
    start - start offset
    end - end offset
    typ - token type
 public Token(String text,
    int start,
    int end,
    int flags) 
    Constructs a Token with the given term text, start and end offsets, and flags. NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text.
    Parameters:
    text -
    start -
    end -
    flags - token type bits
 public Token(char[] startTermBuffer,
    int termBufferOffset,
    int termBufferLength,
    int start,
    int end) 
Method from org.apache.lucene.analysis.Token Summary:
clear,   clone,   clone,   copyTo,   endOffset,   equals,   getFlags,   getPayload,   getPositionIncrement,   hashCode,   reinit,   reinit,   reinit,   reinit,   reinit,   reinit,   reinit,   reinit,   reinit,   resizeTermBuffer,   setEndOffset,   setFlags,   setOffset,   setPayload,   setPositionIncrement,   setStartOffset,   setTermBuffer,   setTermBuffer,   setTermBuffer,   setTermLength,   setType,   startOffset,   term,   termBuffer,   termLength,   toString,   type
Methods from org.apache.lucene.util.AttributeImpl:
clear,   clone,   copyTo,   equals,   hashCode,   toString
Methods from java.lang.Object:
clone,   equals,   finalize,   getClass,   hashCode,   notify,   notifyAll,   toString,   wait,   wait,   wait
Method from org.apache.lucene.analysis.Token Detail:
 public  void clear() 
    Resets the term text, payload, flags, positionIncrement, startOffset, endOffset, and token type to their defaults.
 public Object clone() 
 public Token clone(char[] newTermBuffer,
    int newTermOffset,
    int newTermLength,
    int newStartOffset,
    int newEndOffset) 
    Makes a clone, but replaces the term buffer & start/end offset in the process. This is more efficient than doing a full clone (and then calling setTermBuffer) because it saves a wasted copy of the old termBuffer.
 public  void copyTo(AttributeImpl target) 
 public final int endOffset() 
    Returns this Token's ending offset, one greater than the position of the last character corresponding to this token in the source text. The length of the token in the source text is (endOffset - startOffset).
 public boolean equals(Object obj) 
 public int getFlags() 
    EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long.

    Get the bitset for any bits that have been set. This is completely distinct from #type(), although they do share similar purposes. The flags can be used to encode information about the token for use by other org.apache.lucene.analysis.TokenFilters.
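    A minimal sketch of round-tripping flag bits (the bit assignments here are invented; Lucene itself attaches no meaning to individual bits):

```java
import org.apache.lucene.analysis.Token;

public class FlagsExample {
    public static void main(String[] args) {
        // Hypothetical bit meanings, defined by the analysis chain.
        final int KEYWORD = 1 << 0;
        final int STEMMED = 1 << 1;

        Token token = new Token("jumps", 0, 5);
        token.setFlags(KEYWORD | STEMMED);

        // A downstream TokenFilter can test individual bits.
        boolean stemmed = (token.getFlags() & STEMMED) != 0;
        System.out.println(stemmed); // true
    }
}
```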

 public Payload getPayload() 
    Returns this Token's payload.
 public int getPositionIncrement() 
    Returns the position increment of this Token.
 public int hashCode() 
 public  void reinit(Token prototype) 
    Copy the prototype token's fields into this one. Note: Payloads are shared.
 public  void reinit(Token prototype,
    String newTerm) 
    Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.
 public Token reinit(String newTerm,
    int newStartOffset,
    int newEndOffset) 
 public Token reinit(String newTerm,
    int newStartOffset,
    int newEndOffset,
    String newType) 
 public  void reinit(Token prototype,
    char[] newTermBuffer,
    int offset,
    int length) 
    Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.
 public Token reinit(char[] newTermBuffer,
    int newTermOffset,
    int newTermLength,
    int newStartOffset,
    int newEndOffset) 
 public Token reinit(String newTerm,
    int newTermOffset,
    int newTermLength,
    int newStartOffset,
    int newEndOffset) 
 public Token reinit(char[] newTermBuffer,
    int newTermOffset,
    int newTermLength,
    int newStartOffset,
    int newEndOffset,
    String newType) 
 public Token reinit(String newTerm,
    int newTermOffset,
    int newTermLength,
    int newStartOffset,
    int newEndOffset,
    String newType) 
 public char[] resizeTermBuffer(int newSize) 
 public  void setEndOffset(int offset) 
    Set the ending offset.
 public  void setFlags(int flags) 
 public  void setOffset(int startOffset,
    int endOffset) 
    Set the starting and ending offset.
 public  void setPayload(Payload payload) 
    Sets this Token's payload.
 public  void setPositionIncrement(int positionIncrement) 
    Set the position increment. This determines the position of this token relative to the previous Token in a TokenStream, used in phrase searching.

    The default value is one.

    Some common uses for this are:

    • Set it to zero to put multiple terms in the same position. This is useful if, e.g., a word has multiple stems. Searches for phrases including either stem will match. In this case, all but the first stem's increment should be set to zero: the increment of the first instance should be one. Repeating a token with an increment of zero can also be used to boost the scores of matches on that token.
    • Set it to values greater than one to inhibit exact phrase matches. If, for example, one does not want phrases to match across removed stop words, then one could build a stop word filter that removes stop words and also sets the increment to the number of stop words removed before each non-stop word. Then exact phrase queries will only match when the terms occur with no intervening stop words.
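    The zero-increment case in the first bullet can be sketched as follows (the word/stem pair is made up; the first token keeps the default increment of one, and the stem is stacked at the same position):

```java
import org.apache.lucene.analysis.Token;

public class PositionIncrementExample {
    public static void main(String[] args) {
        // "ran" and its stem "run" share source offsets 0..3.
        Token first = new Token("ran", 0, 3);
        Token stem = new Token("run", 0, 3);

        // Increment 0 places the stem at the same position as "ran",
        // so phrase queries match against either form.
        stem.setPositionIncrement(0);

        System.out.println(first.getPositionIncrement()); // 1 (default)
        System.out.println(stem.getPositionIncrement());  // 0
    }
}
```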
 public  void setStartOffset(int offset) 
    Set the starting offset.
 public final  void setTermBuffer(String buffer) 
    Copies the contents of buffer into the termBuffer array.
 public final  void setTermBuffer(char[] buffer,
    int offset,
    int length) 
    Copies the contents of buffer, starting at offset for length characters, into the termBuffer array.
 public final  void setTermBuffer(String buffer,
    int offset,
    int length) 
    Copies the contents of buffer, starting at offset and continuing for length characters, into the termBuffer array.
 public final  void setTermLength(int length) 
    Set number of valid characters (length of the term) in the termBuffer array. Use this to truncate the termBuffer or to synchronize with external manipulation of the termBuffer. Note: to grow the size of the array, use #resizeTermBuffer(int) first.
 public final  void setType(String type) 
    Set the lexical type.
 public final int startOffset() 
    Returns this Token's starting offset, the position of the first character corresponding to this token in the source text. Note that the difference between endOffset() and startOffset() may not be equal to #termLength , as the term text may have been altered by a stemmer or some other filter.
 public final String term() 
    Returns the Token's term text. This method has a performance penalty because the text is stored internally in a char[]. If possible, use #termBuffer() and #termLength() directly instead. If you really need a String, use this method, which is nothing more than a convenience call to new String(token.termBuffer(), 0, token.termLength()).
 public final char[] termBuffer() 
    Returns the internal termBuffer character array which you can then directly alter. If the array is too small for your token, use #resizeTermBuffer(int) to increase it. After altering the buffer be sure to call #setTermLength to record the number of valid characters that were placed into the termBuffer.
 public final int termLength() 
    Return number of valid characters (length of the term) in the termBuffer array.
 public String toString() 
 public final String type() 
    Returns this Token's lexical type. Defaults to "word".