s8frbroy committed · Commit e0de5ec · verified · 1 Parent(s): 597de0d

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -82,7 +82,7 @@ These inputs are short enough to fit within the model’s 512-token limit — no
  The cited-paper encoder was trained jointly with the query-talk encoder under a **dual-encoder contrastive framework** inspired by Dense Passage Retrieval (Karpukhin et al., 2020).
 
  Each talk $Ti$ and paper $Rj$ is encoded into embeddings $fT(Ti)$ and $fR(Rj)$.
- Their dot-product similarity $sij = fT(Ti)·fR(Rj)$ is optimized using a sigmoid-based binary loss supporting multiple positives per query:
+ Their dot-product similarity $s_{ij} = f_T(T_i) \cdot f_R(R_j)$ is optimized using a sigmoid-based binary loss supporting multiple positives per query:
 
  $$
  L = - \sum_i [y_i \log \sigma(s_i) + (1 - y_i)\log(1 - \sigma(s_i))]
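
The loss in the hunk above is ordinary binary cross-entropy applied to the sigmoid of each talk-paper dot product, with one label per pair so a talk can have several positive papers. Below is a minimal PyTorch sketch of that idea; it is not part of the commit, and the function name, tensor shapes, and embedding dimension are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sigmoid_contrastive_loss(talk_emb, paper_emb, labels):
    """Sigmoid-based binary loss over dot-product similarities.

    talk_emb:  (num_talks, dim)  embeddings f_T(T_i) from the query-talk encoder
    paper_emb: (num_papers, dim) embeddings f_R(R_j) from the cited-paper encoder
    labels:    (num_talks, num_papers) binary matrix y_ij, 1 where paper j is a
               positive for talk i (multiple positives per query are allowed)
    """
    # s_ij = f_T(T_i) · f_R(R_j)
    scores = talk_emb @ paper_emb.T
    # binary_cross_entropy_with_logits applies sigma(s_ij) internally
    return F.binary_cross_entropy_with_logits(scores, labels.float())

# Toy usage with random embeddings (illustrative only)
talks = torch.randn(4, 768)
papers = torch.randn(6, 768)
y = torch.zeros(4, 6)
y[0, 1] = y[0, 3] = 1.0   # talk 0 has two positive papers
loss = sigmoid_contrastive_loss(talks, papers, y)
```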