
NLP: RNN and Transformers

Backpropagation Through Time

Long Short-Term Memory

  1. Delete information from the context that is no longer needed: Forget Gate f

$$ f_t = \sigma (U_f h_{t-1} + W_f x_t) $$

$$ k_t = c_{t-1} \odot f_t $$

  1. Compute the actual information we need to extract from the previous hidden state and the current input

$$ g_t = \tanh (U_g h_{t-1} + W_g x_t) $$

  1. Select the information to add to the current context: Add Gate i

$$ i_t = \sigma (U_i h_{t-1} + W_i x_t) $$

$$ j_t = g_{t} \odot i_t $$

  1. Get new context vector

$$ c_t = j_t + k_t $$

  1. Output Gate o: decide what information is required for the current hidden state (as opposed to what information needs to be preserved for future decisions)

$$ o_t = \sigma (U_o h_{t-1} + W_o x_t) $$

$$ h_t = o_t \odot \tanh (c_t) $$
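
The equations above map directly onto a few lines of code. Below is a minimal sketch of a single LSTM step, assuming row-vector inputs and randomly initialized weights; the names U, W and the toy dimensions are hypothetical placeholders, not any library API.

import torch

def lstm_step(x_t, h_prev, c_prev, U, W):
    # U and W hold one weight matrix per gate: f (forget), g (candidate), i (add), o (output)
    f_t = torch.sigmoid(h_prev @ U['f'].T + x_t @ W['f'].T)  # forget gate
    k_t = c_prev * f_t                                       # drop stale context
    g_t = torch.tanh(h_prev @ U['g'].T + x_t @ W['g'].T)     # candidate information
    i_t = torch.sigmoid(h_prev @ U['i'].T + x_t @ W['i'].T)  # add gate
    j_t = g_t * i_t                                          # selected new information
    c_t = j_t + k_t                                          # new context vector
    o_t = torch.sigmoid(h_prev @ U['o'].T + x_t @ W['o'].T)  # output gate
    h_t = o_t * torch.tanh(c_t)                              # new hidden state
    return h_t, c_t

# toy dimensions (hypothetical)
d_in, d_h = 4, 3
U = {k: torch.randn(d_h, d_h) for k in 'fgio'}
W = {k: torch.randn(d_h, d_in) for k in 'fgio'}
h_t, c_t = lstm_step(torch.randn(1, d_in), torch.zeros(1, d_h), torch.zeros(1, d_h), U, W)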

Gated Recurrent Units

GRUs ease the training burden by dispensing with a separate context vector and by reducing the number of gates to two:

  1. a reset gate, $r$: decide which aspects of the previous hidden state are relevant to the current context and what can be ignored.

$$ r_t = \sigma (U_r h_{t-1} + W_r x_t) $$

Then compute an intermediate representation for the new hidden state at time $t$:

$$ \tilde h_t = \tanh (U(r_t \odot h_{t-1}) + Wx_t) $$

  1. an update gate, $z$: determine which aspects of the new intermediate representation will be used directly and which aspects of the previous state need to be preserved for future use

$$ z_t = \sigma (U_z h_{t-1} + W_z x_t) $$

$$ h_t = (1- z_t)h_{t-1} + z_t \tilde h_t $$
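
A minimal sketch of a single GRU step under the same assumptions (row vectors, hypothetical weight shapes):

import torch

def gru_step(x_t, h_prev, U_r, W_r, U_z, W_z, U, W):
    # reset gate: which aspects of the previous hidden state stay relevant
    r_t = torch.sigmoid(h_prev @ U_r.T + x_t @ W_r.T)
    # intermediate (candidate) hidden state for time t
    h_tilde = torch.tanh((r_t * h_prev) @ U.T + x_t @ W.T)
    # update gate: blend the previous state with the candidate state
    z_t = torch.sigmoid(h_prev @ U_z.T + x_t @ W_z.T)
    h_t = (1 - z_t) * h_prev + z_t * h_tilde
    return h_t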

Example: Text Classification

import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        ## MARK: work with nn.RNN, nn.GRU, nn.LSTM
        self.rnn = nn.GRU(input_size, hidden_size, num_layers, batch_first=True) 
        # Batch x Seq_len x embedding_size (input_size)
        self.fc = nn.Linear(hidden_size, num_classes)
    def forward(self, inputs):

        ## MARK: init hidden state (h0)
        hidden = torch.zeros(self.num_layers, inputs.size(0), self.hidden_size,
                             device=inputs.device)
        # if LSTM, also need to init the cell state (c0):
        # cell = torch.zeros(self.num_layers, inputs.size(0), self.hidden_size, device=inputs.device)

        out, hidden = self.rnn(inputs, hidden) # out: B x S x hidden_size
        # out, (hidden, cell) = self.rnn(inputs, (hidden, cell))

        ## MARK: only need the last output for sentence classification
        out = out[:,-1,:] # out: B x hidden_size
        out = self.fc(out)
        return out
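
A possible usage sketch for the class above, with hypothetical sizes (a batch of 32 sequences, 20 tokens each, 100-dimensional embeddings, 2 classes):

model = RNN(input_size=100, hidden_size=128, num_layers=2, num_classes=2)
dummy_batch = torch.randn(32, 20, 100)  # Batch x Seq_len x embedding_size
logits = model(dummy_batch)             # shape: 32 x 2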

Attention mechanism

Consider an encoder-decoder network, where the decoder computes its hidden state as

$$ h_i^d = g(\hat y_{i-1}, h_{i-1}^d, c_i) $$

  1. computing the context vector $c_i$: first compute a vector of scores that capture the relevance of each encoder hidden state to the decoder state captured in $h_{i-1}^d$. That is, at each step $i$ during decoding, we compute $score(h_{i-1}^d, h_j^e)$ for each encoder state $j$. Recall similarity:

$$ score(h_{i-1}^d, h_j^e) = h_{i-1}^d \cdot h_j^e $$

  1. make a more robust similarity score by adding learnable weights $W_s$:

$$ score(h_{i-1}^d, h_j^e) = h_{i-1}^d W_s h_j^e $$

  1. normalize the scores

$$ \begin{aligned} \alpha_{ij} &= \operatorname{softmax} (score(h_{i-1}^d, h_j^e)) \cr &= \frac {\exp (score(h_{i-1}^d, h_j^e))} {\sum_k \exp (score(h_{i-1}^d, h_k^e))} \end{aligned} $$

  1. finally, given $\alpha$, compute the context vector as a weighted sum of the encoder hidden states:

$$ c_i = \sum_j \alpha_{ij}h_j^e $$

/images/nlp/encoder-decoder.png
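
Putting the steps together, here is a minimal sketch of dot-product attention for a single decoder step; the sentence length and hidden size are arbitrary, and the learnable $W_s$ is omitted (plain dot-product scores):

import torch
import torch.nn.functional as F

d = 16                           # hidden size (hypothetical)
enc_states = torch.randn(7, d)   # h_j^e for a 7-token source sentence
dec_state = torch.randn(d)       # h_{i-1}^d

scores = enc_states @ dec_state                     # score(h_{i-1}^d, h_j^e) for every j
alpha = F.softmax(scores, dim=0)                    # normalized attention weights
c_i = (alpha.unsqueeze(1) * enc_states).sum(dim=0)  # context vector c_i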

Self-Attention

  1. create three vectors from each of the encoder's input vectors (a Query vector, a Key vector, and a Value vector) by multiplying the word embedding X by three trained weight matrices

    /images/nlp/self-attention-1.png

  2. calculate a score by taking the dot product of the Query with the Key

  3. divide the scores by the square root of the dimension of the Key vector (for more stable gradients), then pass the result through a softmax.

  4. multiply each Value vector by the softmax score

  5. sum up the weighted value vectors

/images/nlp/self-attention-2.png
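
A minimal sketch of steps 1-5 for a whole sentence at once; the embedding matrix X and the projection matrices are hypothetical random tensors:

import torch
import torch.nn.functional as F

seq_len, d_model, d_k = 5, 32, 8      # hypothetical sizes
X = torch.randn(seq_len, d_model)     # word embeddings of the sentence
W_q, W_k, W_v = (torch.randn(d_model, d_k) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v   # step 1: Query, Key, Value vectors
scores = Q @ K.T / d_k ** 0.5         # steps 2-3: scaled dot-product scores
weights = F.softmax(scores, dim=-1)   # step 3: softmax over the keys
Z = weights @ V                       # steps 4-5: weighted sum of the Values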

  1. multi-head attention: lets the model focus on different regions and gives multiple “representation subspaces”

/images/nlp/self-attention-3.png

Transformer

A transformer with a stack of two encoders and two decoders looks like this:

/images/nlp/transformers.png

Positional encoding

The transformer uses a positional encoding vector to represent the order of the sequence. It follows a specific pattern that the model learns, which helps it determine the position of each word, or the distance between different words in the sequence.

Let $t$ be the position in an input sentence, $\overrightarrow{p_{t}} \in \mathbb{R}^d$ be its encoding, and $d$ be the encoding dimension; then

$$ \overrightarrow{p_{t}}^{(i)}=f(t)^{(i)}:= \begin{cases} \sin (\omega_{k} \cdot t), & \text { if } i=2 k \cr \cos (\omega_{k} \cdot t), & \text { if } i=2 k+1 \end{cases} $$

where

$$ \omega_{k}=\frac{1}{10000^{2 k / d}} $$

Imagine that the positional embedding looks like this:

$$ \overrightarrow{p_{t}}=\left[\begin{array}{c} \sin \left(\omega_{1} \cdot t\right) \cr \cos \left(\omega_{1} \cdot t\right) \cr \sin \left(\omega_{2} \cdot t\right) \cr \cos \left(\omega_{2} \cdot t\right) \cr \vdots \cr \sin \left(\omega_{d / 2} \cdot t\right) \cr \cos \left(\omega_{d / 2} \cdot t\right) \end{array}\right]_{d \times 1} $$

Word embedding + Positional encoding:
For every word $w_{t}$ in a sentence, the corresponding embedding fed to the model is calculated as follows:

$$ \psi^{\prime}\left(w_{t}\right)=\psi\left(w_{t}\right)+\overrightarrow{p_{t}} $$

To make this summation possible, keep $d_{\text{word embed}} = d_{\text{pos embed}}$.
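
A minimal sketch of the sinusoidal encoding above, assuming an even dimension $d$; it returns the matrix whose rows are $\overrightarrow{p_t}$ for positions 0 to max_len - 1:

import torch

def positional_encoding(max_len, d):
    # omega_k = 1 / 10000^(2k/d) for k = 0 .. d/2 - 1
    k = torch.arange(d // 2, dtype=torch.float)
    omega = 1.0 / 10000 ** (2 * k / d)
    t = torch.arange(max_len, dtype=torch.float).unsqueeze(1)  # positions as a column
    pe = torch.zeros(max_len, d)
    pe[:, 0::2] = torch.sin(t * omega)  # even indices i = 2k
    pe[:, 1::2] = torch.cos(t * omega)  # odd indices  i = 2k + 1
    return pe

# added to the word embeddings: psi'(w_t) = psi(w_t) + p_t, so d must equal the embedding size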

Encoder:

  • Self-attention: computes the dependencies between the words within src or trg itself (whereas the attention in the earlier tutorials computes the dependencies between words in src and words in trg)

    • Convert each input token to a word embedding (w2v)
    • Multiply the embedding by the three weight matrices (Wq, Wk, Wv) to get the three (Query, Key, Value) vectors q, k, v; multiply this position's q with its own k and every other token's k to get the self-attention scores
    • Divide the scores by a constant (default 8) for more stable gradients, then feed them into a softmax to get this token's weights over itself and every other token
    • Multiply the weights at every position by the corresponding v and sum them up to get the self-attention output Z at this position

    $$A(Q,K,V)= \operatorname{softmax} \left( \frac{QK^{T}}{\sqrt{d_{k}}} \right)V=Z$$

  • Multi-Headed Attention - extends the model's ability to focus on different positions:

    • Repeat the self-attention process above 8 times, i.e. initialize 8 sets of weight matrices (Wq, Wk, Wv) at the start, obtaining 8 sets of Q, K, V matrices and, through the computation above, 8 Z matrices
    • Concatenate the 8 Z matrices and multiply by another weight matrix Wo to obtain the final Z matrix
  • Positional Encoding - represents the order of the sequence; the positions of src are fed into an embedding layer

  • Layer Normalization - addresses the difficulty of training multi-layer networks: by passing the information from the previous layer to the next without loss while normalizing the features to mean 0 and standard deviation 1, it makes training easier (a sketch of this sub-layer follows the list)
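
A minimal sketch of the self-attention plus layer-normalization sub-layer described above, using PyTorch's built-in nn.MultiheadAttention and nn.LayerNorm; the sizes and the residual-then-normalize ordering are assumptions in line with the original Transformer, not code taken from this post:

import torch
import torch.nn as nn

d_model, n_heads, seq_len, batch = 512, 8, 10, 2  # hypothetical sizes
self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
layer_norm = nn.LayerNorm(d_model)

x = torch.randn(batch, seq_len, d_model)  # src embeddings + positional encoding
# self-attention: Query, Key and Value all come from the same sequence x
attn_out, attn_weights = self_attn(x, x, x)
# residual connection followed by layer normalization
x = layer_norm(x + attn_out)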

Decoder

Similar to the Encoder, but with one more Multi-Headed Attention layer than the Encoder: one layer models the dependencies between the words within src or trg itself, and the other computes the dependencies between words in src and words in trg.

  • Mask
    • Padding Mask - used by both the Encoder and the Decoder; it has the same shape as the batch of padded sequences, with the padded positions set to 0 and everything else set to 1
    • Sequence/Subsequent Mask - used by the Decoder so that it cannot see future information (the Decoder's output should depend only on the outputs before time t, not on t or anything after t); this is handled with a lower-triangular matrix, which should be combined with the decoder's padding mask (see the sketch after this list)
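
A minimal sketch of how the two masks can be built and combined for the decoder; pad_idx and the broadcast shapes are assumptions in line with common PyTorch implementations:

import torch

def make_trg_mask(trg, pad_idx=1):
    # padding mask: 1 where the token is real, 0 at the padded positions (B x 1 x 1 x T)
    pad_mask = (trg != pad_idx).unsqueeze(1).unsqueeze(2)
    # subsequent mask: lower-triangular, so position t only attends to positions <= t (T x T)
    trg_len = trg.shape[1]
    sub_mask = torch.tril(torch.ones(trg_len, trg_len)).bool()
    # combine: a position is visible only if it is both unpadded and not in the future
    return pad_mask & sub_mask  # B x 1 x T x T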

PositionwiseFeedforwardLayer:

  1. after the attention operation, first apply a fully connected layer to transform from hid_dim to pf_dim (e.g. 512 to 2048)
  2. apply a ReLU activation function (BERT uses a GELU activation instead)
  3. apply dropout
  4. apply another fully connected layer to transform from pf_dim back to hid_dim (see the sketch below)
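
A minimal sketch of the four steps; the hid_dim, pf_dim, and dropout defaults are hypothetical:

import torch
import torch.nn as nn

class PositionwiseFeedforwardLayer(nn.Module):
    def __init__(self, hid_dim=512, pf_dim=2048, dropout=0.1):
        super().__init__()
        self.fc_1 = nn.Linear(hid_dim, pf_dim)  # step 1: hid_dim -> pf_dim
        self.fc_2 = nn.Linear(pf_dim, hid_dim)  # step 4: pf_dim -> hid_dim
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        x = torch.relu(self.fc_1(x))  # step 2: ReLU (BERT uses GELU instead)
        x = self.dropout(x)           # step 3: dropout
        return self.fc_2(x)           # step 4: back to hid_dim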
