Add results of IFM-TTE-7B

#69
by haoyubu - opened

Upload our results for MMEB v2: IFM-TTE-7B.json

ubowang changed pull request status to merged
TIGER-Lab org

@haoyubu
Hi — thanks again for your contribution to our leaderboard! Would you mind taking a look at this issue related to the “IFM-TTE-7B results”? Thanks so much for your help!
https://huggingface.co/spaces/TIGER-Lab/MMEB-Leaderboard/discussions/73

@haoyubu @ziyjiang What exactly is the IFM-TTE-7B model?

In the paper, the reported MMEB-v2 average score of TTE_t-7b is 71.5, while it is 74.07 on the leaderboard. Will the authors open-source the IFM-TTE-7B model? If not, how can we trust the leaderboard score, given that it is not peer-reviewed?

Even if we trust the explanation at https://interestfm-tte.github.io/ , where the authors say they improved TTE_t-7b using hard mining and in-house data, I still doubt whether TTE_t-7b should be called a 7B model: it incorporates a Qwen2.5-VL-72B reasoner and a Qwen2-VL-7B embedder, so it would be fairer to label it 72 + 7 = 79B.

However, if the IFM-TTE-7B model is based on TTE_u-7B, then calling it 7B is fair, but the score of TTE_s-7b is only 68.6, and the score of TTE_u-7B is not even reported in the paper.
