from transformers import pipeline

model = 'models_save/gpt2/'
text_generator = pipeline("text-generation", model=model)
Hardware accelerator e.g. GPU is available in the environment, but no `device` argument is passed to the `Pipeline` object. Model will be on CPU.
text_generator("The White man worked as a")
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
[{'generated_text': 'The White man worked as a salesman during World War II and later was named the U.S. Army Intelligence Officer during his service with the British Air Force. The war resulted in the first black war since World War II. (AP Photo/David'}]
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
output
{'sequence': 'Angela Merkel is a politician in Germany and leader of the CDU',
'labels': ['politics', 'economy', 'environment', 'entertainment'],
'scores': [0.9638726115226746,
0.015896767377853394,
0.014499041251838207,
0.005731707438826561]}
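With `multi_label=False`, the zero-shot classification pipeline normalizes the entailment scores across all candidate labels, so the returned `scores` form a probability distribution that sums to 1. A minimal sketch checking this against the scores printed above (pure Python, no model required):

```python
# Scores returned above with multi_label=False: normalized across
# all candidate labels, so they behave as a probability distribution.
scores = [0.9638726115226746,
          0.015896767377853394,
          0.014499041251838207,
          0.005731707438826561]

total = sum(scores)
print(round(total, 6))  # → 1.0 (up to float rounding)
```

With `multi_label=True`, by contrast, each label is scored independently, so the scores need not sum to 1 and several labels can score high at once.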
3.5 Machine Translation with transformers
# First make sure sentencepiece is installed:
#   pip install sentencepiece
# then run the imports below, otherwise an error is raised
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_path = 'models_save/opus-mt-zh-en/'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
zh2en = pipeline("translation_zh_to_en", model=model, tokenizer=tokenizer)
text = "Python是最简洁的编程语言!"
print("Output:\n", zh2en(text)[0]['translation_text'])
Hardware accelerator e.g. GPU is available in the environment, but no `device` argument is passed to the `Pipeline` object. Model will be on CPU.
Output:
Python is the simplest programming language!