brails.processors.vlm_image_classifier.clip.simple_tokenizer module

class brails.processors.vlm_image_classifier.clip.simple_tokenizer.SimpleTokenizer(bpe_path: str = default_bpe())

Bases: object

bpe(token)
decode(tokens)
encode(text)
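
A minimal usage sketch (exact token ids and the BPE segmentation depend on the bundled bpe_simple_vocab_16e6.txt.gz vocabulary located by default_bpe()):

>>> from brails.processors.vlm_image_classifier.clip.simple_tokenizer import SimpleTokenizer
>>> tokenizer = SimpleTokenizer()                      # loads the BPE vocabulary from default_bpe()
>>> tokens = tokenizer.encode("A photo of a building")
>>> text = tokenizer.decode(tokens)                    # round-trips to the cleaned, lower-cased text
>>> merged = tokenizer.bpe("building")                 # space-joined BPE subwords for a single word
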
brails.processors.vlm_image_classifier.clip.simple_tokenizer.basic_clean(text)
brails.processors.vlm_image_classifier.clip.simple_tokenizer.bytes_to_unicode()

Returns a mapping between utf-8 bytes and corresponding unicode strings. The reversible BPE codes work on unicode strings, which means the vocabulary needs a large number of unicode characters to avoid UNK tokens. At something like a 10B-token dataset you end up needing around 5K characters for decent coverage, which is a significant fraction of a typical 32K BPE vocabulary. To avoid that, this function builds lookup tables between utf-8 bytes and unicode strings, while also avoiding whitespace/control characters that the BPE code cannot handle.
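
The mapping is an ordinary dict covering all 256 byte values and is reversible; a brief sketch:

>>> from brails.processors.vlm_image_classifier.clip.simple_tokenizer import bytes_to_unicode
>>> byte_encoder = bytes_to_unicode()                  # utf-8 byte value -> printable unicode character
>>> len(byte_encoder)
256
>>> byte_decoder = {v: k for k, v in byte_encoder.items()}   # inverted for decoding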

brails.processors.vlm_image_classifier.clip.simple_tokenizer.default_bpe()
brails.processors.vlm_image_classifier.clip.simple_tokenizer.get_pairs(word)

Return the set of symbol pairs in a word, where the word is represented as a tuple of symbols (symbols being variable-length strings).
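
For example, on a word already split into symbols (with the end-of-word marker '</w>' on the last symbol):

>>> from brails.processors.vlm_image_classifier.clip.simple_tokenizer import get_pairs
>>> pairs = get_pairs(('b', 'u', 'i', 'l', 'd', 'i', 'n', 'g</w>'))
>>> ('i', 'n') in pairs
True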

brails.processors.vlm_image_classifier.clip.simple_tokenizer.whitespace_clean(text)
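
In the upstream CLIP tokenizer, basic_clean fixes text-encoding artifacts and unescapes HTML entities, while whitespace_clean collapses runs of whitespace and strips the result; a hedged sketch assuming the same behavior here:

>>> from brails.processors.vlm_image_classifier.clip.simple_tokenizer import whitespace_clean
>>> whitespace_clean("a  photo\tof\n a building ")
'a photo of a building'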