Spaces: Running on Zero
	Update app.py
app.py CHANGED
@@ -35,6 +35,14 @@ models_supported = {
             attn_implementation={"decoder": "flash_attention_2", "encoder": "eager"},
         ),
     ],
+    "mobser-small-v0.1": [
+        NougatProcessor.from_pretrained("TawasulAI/Mobser-Small-V0.1"),
+        VisionEncoderDecoderModel.from_pretrained(
+            "TawasulAI/Mobser-Small-V0.1",
+            torch_dtype=torch.bfloat16,
+            attn_implementation={"decoder": "flash_attention_2", "encoder": "eager"},
+        ),
+    ],
 }
 
 
@@ -121,8 +129,9 @@ There are three models available:
 - [arabic-small-nougat](https://huggingface.co/MohamedRashad/arabic-small-nougat): A small model that is faster but less accurate (a finetune from [facebook/nougat-small](https://huggingface.co/facebook/nougat-small)).
 - [arabic-base-nougat](https://huggingface.co/MohamedRashad/arabic-base-nougat): A base model that is more accurate but slower (a finetune from [facebook/nougat-base](https://huggingface.co/facebook/nougat-base)).
 - [arabic-large-nougat](https://huggingface.co/MohamedRashad/arabic-large-nougat): The largest of the three (Made from scratch using [riotu-lab/Aranizer-PBE-86k](https://huggingface.co/riotu-lab/Aranizer-PBE-86k) tokenizer and a larger transformer decoder model).
+- [mobser-small-v0.1](https://huggingface.co/TawasulAI/Mobser-Small-V0.1): A finetune built by the [TawasulAI](https://huggingface.co/TawasulAI) team to push the boundary of what's possible further.
 
-**Disclaimer**: 
+**Disclaimer**: Models can hallucinate text and are not perfect. Please double check the output if you care about accuracy the most.
 """
 
 example_images = list(Path(__file__).parent.glob("*.jpeg"))

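The commit extends the `models_supported` registry, where each key maps to a `[processor, model]` pair that the app looks up by name at inference time. A minimal sketch of that registry-lookup pattern (the `get_model_pair` helper and the string stand-ins below are hypothetical; in app.py the values are real `NougatProcessor` and `VisionEncoderDecoderModel` instances loaded from the Hub):

```python
# Hypothetical sketch of the registry-lookup pattern used in app.py:
# each key in models_supported maps to a [processor, model] pair.
# Lightweight string stand-ins replace the real NougatProcessor /
# VisionEncoderDecoderModel objects so the pattern runs anywhere.

def get_model_pair(registry, name):
    """Return the (processor, model) pair for a supported model name."""
    if name not in registry:
        raise KeyError(f"Unsupported model {name!r}; choose one of {sorted(registry)}")
    processor, model = registry[name]
    return processor, model

# Stand-in registry mirroring keys before and after this commit.
models_supported = {
    "arabic-small-nougat": ["small-processor", "small-model"],
    "mobser-small-v0.1": ["mobser-processor", "mobser-model"],
}

processor, model = get_model_pair(models_supported, "mobser-small-v0.1")
print(processor, model)  # prints "mobser-processor mobser-model"
```

Keeping all pairs in one dict keyed by display name means a Gradio dropdown can be populated directly from `models_supported` and new models, like the one added here, become selectable with a single dict entry.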