Commit 933f58b · 1 Parent(s): c16b237
Update README.md
README.md CHANGED
````diff
@@ -25,7 +25,9 @@ BigBird relies on **block sparse attention** instead of normal attention (i.e. B
 Here is how to use this model to get the features of a given text in PyTorch:
 
 ```python
-from transformers import BigBirdPegasusForConditionalGeneration,
+from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-bigpatent")
 
 # by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
 model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent")
```
````
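The commit completes the previously dangling import (`from transformers import BigBirdPegasusForConditionalGeneration,` ended in a trailing comma) and loads the tokenizer the example needs to turn raw text into model inputs. Below is a minimal sketch of how the patched snippet might be exercised end to end; the sample `text` and the generate/decode calls are illustrative assumptions, not part of the committed README.

```python
# Sketch of using the snippet as patched by this commit.
# The sample `text` and the generation/decoding steps below are
# illustrative additions, not part of the committed README.
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-bigpatent")

# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent")

text = "Replace me by any text you'd like."    # hypothetical input
inputs = tokenizer(text, return_tensors="pt")  # tokenize to PyTorch tensors
summary_ids = model.generate(**inputs)         # seq2seq generation
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```

Since BigBirdPegasus is a seq2seq model, the model object alone cannot consume raw text; adding `AutoTokenizer` is what makes the README example self-contained.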