simpler neural network language models, so multilayer perceptrons and so on; it really introduces the language modeling framework, and then here in this video we're going to focus on the Transformer neural network itself. Okay, so I created a new Google Colab Jupyter notebook here, and this will allow me to later easily share this code that we're going to develop together with you, so you can follow along; this will be in the video description later. Now here I've just done some preliminaries: I downloaded the dataset, the tiny Shakespeare dataset, at this URL, and you can see that it's about a 1 megabyte file.
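In code, that preliminary step amounts to something like the following sketch; the exact URL isn't quoted in the transcript, so this assumes the commonly used tiny Shakespeare mirror from the char-rnn repo:

```python
# Download the tiny Shakespeare dataset (~1 MB) to input.txt.
# The URL is an assumption: the commonly used char-rnn mirror.
import urllib.request

url = ("https://raw.githubusercontent.com/karpathy/char-rnn/"
       "master/data/tinyshakespeare/input.txt")
urllib.request.urlretrieve(url, "input.txt")
```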
Then here I open the input.txt file and just read in all the text as a string, and we see that we are working with roughly 1 million characters. The first 1,000 characters, if we just print them out, are basically what you would expect: this is the first 1,000 characters of the tiny Shakespeare dataset, roughly up to here. So far so good.
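A minimal sketch of what those cells look like:

```python
# Read the entire file into a single Python string.
with open("input.txt", "r", encoding="utf-8") as f:
    text = f.read()

print(len(text))    # roughly 1 million characters
print(text[:1000])  # the first 1,000 characters
```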
Next we're going to take this text, and the text is a sequence of characters in Python, so when I call the set constructor on it I'm just going to get the set of all the characters that occur in this text. Then I call list on that to create a list of those characters instead of just a set, so that I have an ordering (an arbitrary ordering), and then I sort it. So basically we get all the characters that occur in the entire dataset, and they're sorted.
Now the number of them is going to be our vocabulary size; these are the possible elements of our sequences. We see that when I print the characters here, there are 65 of them in total: there's a space character, then all kinds of special characters, and then capital and lowercase letters. So that's our vocabulary, and those are the possible characters that the model can see or emit.
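The cells just described boil down to a sketch like this:

```python
# All unique characters that occur in the text, in sorted order.
chars = sorted(list(set(text)))
vocab_size = len(chars)

print("".join(chars))  # space, punctuation, then upper- and lowercase letters
print(vocab_size)      # 65 for tiny Shakespeare
```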
Okay, so next we would like to develop some strategy to tokenize the input text. When people say "tokenize", they mean converting the raw text, as a string, into some sequence of integers according to some vocabulary of possible elements. As an example, here we are going to be building a character-level language model, so we're simply going to be translating individual characters into integers. Let me show you a chunk of code that does that for us.
So we're building both the encoder and the decoder, and let me just talk through what's happening here. When we encode an arbitrary text like "hi there", we receive a list of integers that represents that string, so for example 46, 47, etc. We also have the reverse mapping, so we can take this list and decode it to get back the exact same string. So it's really just a translation to integers and back, for an arbitrary string, and for us it is done at the character level.
Now, the way this is achieved is that we just iterate over all the characters here and create a lookup table from each character to its integer and vice versa. Then, to encode some string, we simply translate all the characters individually, and to decode it back we use the reverse mapping and concatenate all of it.
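Putting that together, the encoder/decoder is a sketch like the following, using typical names like stoi and itos for the lookup tables; the exact integers depend on the sorted vocabulary above:

```python
# Lookup tables in both directions between characters and integers.
stoi = {ch: i for i, ch in enumerate(chars)}  # string to integer
itos = {i: ch for i, ch in enumerate(chars)}  # integer to string

encode = lambda s: [stoi[c] for c in s]         # string -> list of ints
decode = lambda l: "".join(itos[i] for i in l)  # list of ints -> string

print(encode("hi there"))          # e.g. [46, 47, ...] with this vocabulary
print(decode(encode("hi there")))  # round-trips to "hi there"
```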
Now, this is only one of many possible encodings, or many possible tokenizers, and it's a very simple one, but there are many other schemas that people have come up with in practice. For example, Google uses SentencePiece: SentencePiece will also encode text into integers, but in a different schema and using a different vocabulary. SentencePiece is a subword tokenizer, and what that means is that you're not encoding entire words, but you're also not encoding individual characters; it's at a subword unit level, and that's
usually what's adopted in practice. For example, OpenAI also has this library, tiktoken.
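As a rough illustration of the subword idea (not part of this notebook; assumes the tiktoken package is installed):

```python
# Subword tokenization with tiktoken's GPT-2 byte-pair encoding,
# for contrast with the 65-character scheme above.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
print(enc.n_vocab)                         # 50257 tokens instead of 65 characters
print(enc.encode("hi there"))              # far fewer integers per string
print(enc.decode(enc.encode("hi there")))  # round-trips to "hi there"
```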