CLIP

CLIP is a surprisingly simple idea: scrape paired images and texts from the web, label each matched pair as True and every mismatched combination as False, which yields the table shown in the figure. The ground truth (GT) lies on the diagonal.

[Figure: CLIP architecture]
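
To make that table concrete, here is a tiny sketch (the batch size of 4 is an arbitrary choice): the target table is simply the identity matrix, True where image i was scraped together with text i, False everywhere else.

import numpy as np

n = 4  # hypothetical minibatch of 4 image-text pairs scraped from the web
# entry (i, j) is True iff image i and text j arrived as a pair
targets = np.eye(n, dtype=bool)
print(targets)  # True on the diagonal (the GT), False off it (the negatives)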

The loss is just as simple: in the similarity table, each row holds the GT text for one image and each column holds the GT image for one text, so both directions share the same diagonal labels. This gives the paper's pseudocode below:

# image_encoder - ResNet or Vision Transformer
# text_encoder - CBOW or Text Transformer
# I[n, h, w, c] - minibatch of aligned images
# T[n, l] - minibatch of aligned texts
# W_i[d_i, d_e] - learned proj of image to embed
# W_t[d_t, d_e] - learned proj of text to embed
# t - learned temperature parameter
# extract feature representations of each modality
I_f = image_encoder(I) #[n, d_i]
T_f = text_encoder(T) #[n, d_t]
# joint multimodal embedding [n, d_e]
I_e = l2_normalize(np.dot(I_f, W_i), axis=1)
T_e = l2_normalize(np.dot(T_f, W_t), axis=1)
# scaled pairwise cosine similarities [n, n]
logits = np.dot(I_e, T_e.T) * np.exp(t)
# symmetric loss function
labels = np.arange(n)
loss_i = cross_entropy_loss(logits, labels, axis=0)
loss_t = cross_entropy_loss(logits, labels, axis=1)
loss = (loss_i + loss_t)/2
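
For reference, here is a minimal runnable version of the same loss in plain numpy. This is only a sketch under stated assumptions, not OpenAI's implementation: the random features stand in for real encoder outputs, the dimensions are made up, and cross_entropy_loss is defined so that its axis argument lines up with the pseudocode above (with labels = arange(n), both directions pick out the diagonal entries). The 0.07 temperature initialization follows the paper.

import numpy as np

def l2_normalize(x, axis=1):
    # scale each row to unit length so the dot products below are cosines
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_entropy_loss(logits, labels, axis):
    # softmax cross-entropy, with the softmax normalized along `axis`
    shifted = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
    idx = np.arange(len(labels))
    # with labels = arange(n), both axes select the diagonal log-probabilities
    picked = log_probs[labels, idx] if axis == 0 else log_probs[idx, labels]
    return -picked.mean()

rng = np.random.default_rng(0)
n, d_i, d_t, d_e = 8, 512, 256, 128      # made-up sizes
I_f = rng.normal(size=(n, d_i))          # stand-in for image_encoder(I)
T_f = rng.normal(size=(n, d_t))          # stand-in for text_encoder(T)
W_i = rng.normal(size=(d_i, d_e))        # learned image projection
W_t = rng.normal(size=(d_t, d_e))        # learned text projection
t = np.log(1 / 0.07)                     # temperature init from the paper

I_e = l2_normalize(I_f @ W_i)
T_e = l2_normalize(T_f @ W_t)
logits = I_e @ T_e.T * np.exp(t)         # [n, n] scaled cosine similarities

labels = np.arange(n)
loss = (cross_entropy_loss(logits, labels, axis=0)
        + cross_entropy_loss(logits, labels, axis=1)) / 2
print(loss)

Because the batch is its own source of negatives, this is just a symmetric InfoNCE loss: each image must pick out its own caption among the n texts, and each text must pick out its own image.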

The paper itself is very easy to follow, so most analysis articles about it are easy to follow as well. Here is one from Zhihu: [[https://zhuanlan.zhihu.com/p/486857682|a Zhihu introduction to CLIP]]