### Run the Hugging Face RWKV World Model
#### CPU
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The World-series tokenizer ships as custom code on the Hub,
# so loading it requires trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("BBuf/RWKV-4-World-7B")
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-7B", trust_remote_code=True)

text = "\nIn a shocking finding, scientists discovered a herd of dragons living in a remote, previously unexplored valley in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
prompt = f'Question: {text.strip()}\n\nAnswer:'

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=256)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
output:
```shell
Question: In a shocking finding, scientists discovered a herd of dragons living in a remote, previously unexplored valley in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese.
Answer: 科学家在一个未曾探索过的山谷中发现了一群能说流利中文的龙。科学家惊讶地发现,这些龙是在一个完全未被探索的地区生活的。
```
(English translation of the answer: "Scientists discovered a group of dragons that speak fluent Chinese in a previously unexplored valley. The scientists were surprised to find that these dragons live in a completely unexplored area.")
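The example above decodes greedily. If you want more varied completions, `generate` also accepts the usual sampling arguments; below is a minimal sketch reusing the `model`, `tokenizer`, and `inputs` from the CPU example (the `temperature` and `top_p` values are illustrative choices, not settings recommended by the model card):
```python
# Stochastic sampling instead of greedy decoding; the values below are
# illustrative and not tuned for RWKV-4-World.
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=256,
    do_sample=True,   # sample from the distribution instead of taking the argmax
    temperature=1.0,  # <1 sharpens, >1 flattens the next-token distribution
    top_p=0.7,        # nucleus sampling: keep the smallest token set covering 70% of the probability mass
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```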
#### GPU
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights in float16 and move the model to GPU 0.
model = AutoModelForCausalLM.from_pretrained("BBuf/RWKV-4-World-7B", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("BBuf/RWKV-4-World-7B", trust_remote_code=True)

text = "你叫什么名字?"  # "What is your name?"
prompt = f'Question: {text.strip()}\n\nAnswer:'

# The input tensors must be on the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
output:
```shell
Question: 你叫什么名字?
Answer: 我是一个人工智能语言模型,没有具体的名字。
```
(English translation: "Question: What is your name? Answer: I am an AI language model and do not have a specific name.")
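For interactive use, tokens can be printed as they are generated instead of all at once when `generate` returns. Below is a minimal sketch using the `TextStreamer` utility from `transformers`, reusing the GPU `model`, `tokenizer`, and `inputs` from above:
```python
from transformers import TextStreamer

# Stream each decoded token to stdout as it is produced; skip_prompt
# avoids re-printing the prompt that is already part of the input.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs["input_ids"], max_new_tokens=40, streamer=streamer)
```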