---
license: apache-2.0
language:
- en
metrics:
- accuracy
- pearsonr
base_model:
- openai/clip-vit-base-patch16
tags:
- IQA
---
<div align="center">
    <a href="https://arxiv.org/abs/2508.14475"><img src="https://img.shields.io/badge/Arxiv-preprint-red"></a>
    <a href="https://pxf0429.github.io/FGResQ/"><img src="https://img.shields.io/badge/Homepage-green"></a>
    <a href="https://huggingface.co/spaces/orpheus0429/FGResQ"><img src="https://img.shields.io/badge/πŸ€—%20Hugging%20Face-Spaces-blue"></a>
    <a href='https://github.com/sxfly99/FGResQ/stargazers'><img src='https://img.shields.io/github/stars/sxfly99/FGResQ.svg?style=social'></a>
    
</div>

<h1 align="center">Fine-grained Image Quality Assessment for Perceptual Image Restoration</h1>

<div align="center">
    <a href="https://github.com/sxfly99">Xiangfei Sheng</a><sup>1*</sup>,
    <a href="https://github.com/pxf0429">Xiaofeng Pan</a><sup>1*</sup>,
    <a href="https://github.com/yzc-ippl">Zhichao Yang</a><sup>1</sup>,
    <a href="https://faculty.xidian.edu.cn/cpf/">Pengfei Chen</a><sup>1</sup>,
    <a href="https://web.xidian.edu.cn/ldli/">Leida Li</a><sup>1#</sup>
</div>

<div align="center">
  <sup>1</sup>School of Artificial Intelligence, Xidian University
</div>

<div align="center">
<sup>*</sup>Equal contribution. <sup>#</sup>Corresponding author. 
</div>


<div align="center">
  <img src="FGResQ.png" width="800"/>
</div>

<div style="font-family: sans-serif; margin-bottom: 2em;">
    <h2 style="border-bottom: 1px solid #eaecef; padding-bottom: 0.3em; margin-bottom: 1em;">πŸ“° News</h2>
    <ul style="list-style-type: none; padding-left: 0;">
        <li style="margin-bottom: 0.8em;">
            <strong>[2025-11-19]</strong> The model is now available on the <a href="https://huggingface.co/orpheus0429/FGResQ">HuggingFace Hub</a>. A live demo is also available on <a href="https://huggingface.co/spaces/orpheus0429/FGResQ">HuggingFace Spaces</a> for you to try it out directly in your browser.
        </li>
        <li style="margin-bottom: 0.8em;">
            <strong>[2025-11-08]</strong> πŸŽ‰πŸŽ‰πŸŽ‰ Our paper, "Fine-grained Image Quality Assessment for Perceptual Image Restoration", has been accepted to appear at AAAI 2026!
        </li>
        <li style="margin-bottom: 0.8em;">
            <strong>[2025-08-20]</strong> Code and pre-trained models for FGResQ released.
        </li>
    </ul>
</div>


## Quick Start

This guide will help you get started with the FGResQ inference code.

### 1. Installation

First, clone the repository and install the required dependencies.

```bash
git clone https://github.com/sxfly99/FGResQ.git
cd FGResQ
pip install -r requirements.txt
```

### 2. Download Pre-trained Weights

You can download the pre-trained model weights from the following link:
[**Download Weights (Google Drive)**](https://drive.google.com/drive/folders/10MVnAoEIDZ08Rek4qkStGDY0qLiWUahJ?usp=drive_link), [**(Baidu Netdisk)**](https://pan.baidu.com/s/1a2IZbr_PrgZYCbUbjKLykA?pwd=9ivu) or [**(HuggingFace)**](https://huggingface.co/orpheus0429/FGResQ)

Place the downloaded files in the `weights` directory.

- `FGResQ.pth`: The main model for quality scoring and ranking.
- `Degradation.pth`: The weights for the degradation-aware task branch.

Create the `weights` directory if it doesn't exist and place the files inside.

```
FGResQ/
|-- weights/
|   |-- FGResQ.pth
|   |-- Degradation.pth
|-- model/
|   |-- FGResQ.py
|-- requirements.txt
|-- README.md
```

## Usage

The `FGResQ` model provides two main functionalities: scoring a single image and comparing a pair of images.

### Initialize the Scorer

First, import and initialize the `FGResQ` inference engine.

```python
from model.FGResQ import FGResQ

# Path to the main model weights
model_path = "weights/FGResQ.pth"

# or use HuggingFace Model
# from huggingface_hub import hf_hub_download
# model_path = hf_hub_download(
#     repo_id="orpheus0429/FGResQ",
#     filename="weights/FGResQ.pth"
# )

# Initialize the inference engine
model = FGResQ(model_path=model_path)
```

### 1. Single Image Input Mode: Quality Scoring

You can get a quality score for a single image. The score typically ranges from 0 to 1, where a higher score indicates better quality.

```python
image_path = "path/to/your/image.jpg"
quality_score = model.predict_single(image_path)
print(f"The quality score for the image is: {quality_score:.4f}")
```
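In practice, single-image scoring is often used to pick the best of several restored candidates of the same scene. Here is a minimal sketch; the `rank_restorations` helper and the example scores are ours for illustration, and only `predict_single` comes from the API above:

```python
def rank_restorations(scores):
    """Return image paths sorted by quality score, best first."""
    return sorted(scores, key=scores.get, reverse=True)

# With FGResQ (assumes the model and image files exist):
# scores = {p: model.predict_single(p) for p in candidates}

# Standalone demonstration with made-up scores:
scores = {"deblur_a.jpg": 0.41, "deblur_b.jpg": 0.87, "deblur_c.jpg": 0.63}
print(rank_restorations(scores)[0])  # path of the best-scoring candidate
```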

### 2. Pairwise Image Input Mode: Quality Ranking

You can also compare two images to determine which one has better quality.

```python
image_path1 = "path/to/image1.jpg"
image_path2 = "path/to/image2.jpg"

comparison_result = model.predict_pair(image_path1, image_path2)

# The result includes a human-readable comparison and raw probabilities
print(f"Comparison: {comparison_result['comparison']}")
# Example output: "Comparison: Image 1 is better"

print(f"Raw output probabilities: {comparison_result['comparison_raw']}")
# Example output: "[0.8, 0.1, 0.1]" (Probabilities for Image1 > Image2, Image2 > Image1, Image1 β‰ˆ Image2)
```
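If you work with the raw probabilities directly, a small argmax helper can turn them back into a verdict. This is a sketch assuming the three probabilities are ordered as documented above; the helper name and the label strings (other than "Image 1 is better", which appears in the example output) are ours:

```python
LABELS = ("Image 1 is better", "Image 2 is better", "Similar quality")

def interpret(probs):
    """Map [p(img1 > img2), p(img2 > img1), p(img1 ~ img2)] to a verdict."""
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

verdict, confidence = interpret([0.8, 0.1, 0.1])
print(f"{verdict} (p={confidence:.2f})")
```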
## Citation

If you find this work useful, please cite our paper!

```bibtex
@article{sheng2025fgresq,
  title={Fine-grained Image Quality Assessment for Perceptual Image Restoration},
  author={Sheng, Xiangfei and Pan, Xiaofeng and Yang, Zhichao and Chen, Pengfei and Li, Leida},
  journal={arXiv preprint arXiv:2508.14475},
  year={2025}
}
```