Technology Sharing

Example of Self-Instruct Prompt

2024-07-08


  1. Manually construct a small seed set of human-written tasks to serve as prompts.
  2. Each iteration, sample a few tasks from the pool as few-shot examples in the input, and use the LLM to generate new instructions.
  3. Filter out low-quality instructions and refine those worth keeping.
  4. Feed the generated instructions into the LLM to create input-output instances.
  5. Fine-tune the LLM on the resulting instruction dataset.
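Steps 4 and 5 come down to turning each surviving instruction into an (instruction, output) pair and serializing the pairs for fine-tuning. A minimal sketch, where `complete` is a placeholder for any prompt-to-text model call (the helper names here are hypothetical, not from the original Self-Instruct paper):

```python
import json

def build_instance_prompt(instruction: str) -> str:
    # Step 4: ask the model to produce an output for a generated instruction.
    return f"Task: {instruction}\nOutput:"

def to_finetune_record(instruction: str, output: str) -> str:
    # Step 5: one JSON-lines record per (instruction, output) pair,
    # a common format for supervised fine-tuning datasets.
    return json.dumps({"instruction": instruction, "output": output})

def build_dataset(instructions, complete):
    # `complete` is any callable prompt -> text; in practice it would wrap
    # a model.generate() call like the one in the step 2 code below.
    return [to_finetune_record(t, complete(build_instance_prompt(t)))
            for t in instructions]
```

The resulting JSON-lines records would then be written to disk and used as the fine-tuning corpus.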

Step 2, LLM generation:

import random
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a pre-trained language model
model_name = "bigcode/starcoderbase-1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Seed tasks (simplified for demonstration)
seed_tasks = [
    "Write a function to calculate the factorial of a number.",
    "Create a class to represent a bank account.",
    "Implement a binary search algorithm."
]

def generate_instruction(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    # Decode only the newly generated tokens; decoding outputs[0] directly
    # would echo the prompt back as part of the "new" task
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

def self_instruct(num_iterations):
    generated_tasks = []
    
    for _ in range(num_iterations):
        # Sample existing tasks
        sampled_tasks = random.sample(seed_tasks + generated_tasks, min(3, len(seed_tasks) + len(generated_tasks)))
        
        # Create a prompt for generating new instructions
        prompt = "Generate a new programming task based on these examples:\n\n"
        prompt += "\n".join(sampled_tasks)
        prompt += "\n\nNew task:"
        
        # Generate a new instruction
        new_task = generate_instruction(prompt)
        
        # In practice, you would filter and refine the generated task here
        
        generated_tasks.append(new_task)
    
    return generated_tasks

# Run Self-Instruct
new_tasks = self_instruct(5)
for i, task in enumerate(new_tasks, 1):
    print(f"Task {i}: {task}")

Step 3, filtering:

Manually define rules to filter out low-quality instructions (an LLM can also serve as a judge).

The goal is to ensure both quality and diversity:

  • Filter out instructions that are too short or too long
  • Filter out instructions containing keywords unsuitable for language models (e.g. "image", "graph", "file", "plot")
  • Filter out instructions starting with punctuation
  • Filter out instructions starting with non-English characters
  • Filter out instructions that have high ROUGE-L similarity (above 0.7) with any existing instruction in the task pool
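The rules above can be implemented in a few lines of Python. This is a sketch under some assumptions: whitespace tokenization, an F-measure ROUGE-L computed via longest common subsequence, and the 0.7 similarity threshold from the last bullet; the function and variable names are illustrative:

```python
import string

BAD_KEYWORDS = {"image", "graph", "file", "plot"}

def lcs_len(a, b):
    # Longest common subsequence length over token lists (basis of ROUGE-L)
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(a, b):
    # ROUGE-L F-measure with simple whitespace tokenization
    ta, tb = a.lower().split(), b.lower().split()
    lcs = lcs_len(ta, tb)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(ta), lcs / len(tb)
    return 2 * p * r / (p + r)

def keep(instruction, pool, min_words=3, max_words=150):
    words = instruction.split()
    if not (min_words <= len(words) <= max_words):
        return False                       # too short or too long
    if any(k in instruction.lower() for k in BAD_KEYWORDS):
        return False                       # unsuitable for a text-only model
    if instruction[0] in string.punctuation:
        return False                       # starts with punctuation
    if not instruction[0].isascii() or not instruction[0].isalpha():
        return False                       # starts with a non-English character
    # reject near-duplicates of anything already in the task pool
    return all(rouge_l(instruction, t) <= 0.7 for t in pool)
```

A generated task is accepted only if it passes every rule; `pool` holds the instructions already accepted, so the ROUGE-L check keeps the dataset diverse as it grows.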