LLM Parameters
Like any machine learning model, large language models have several parameters that control the variance of the generated text output. We have started a multi-part series to explain the impact of these parameters in detail. We will conclude by striking the perfect balance in content generation using all of the parameters discussed throughout the series.
Welcome to the first part, where we discuss the best-known parameter, "Temperature."
Temperature
If the goal is to control the randomness of the predictions, then temperature is the knob for you. Lower temperature values make the output more deterministic, while higher values allow more diverse results, making the output more creative.
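Mechanically, temperature is a divisor applied to the model's logits right before the softmax that turns them into next-token probabilities: lower values sharpen the distribution around the most likely tokens, higher values flatten it. Here is a minimal sketch with made-up logit values (illustrative only, not tied to any real model) that makes the effect visible:

import torch

# Toy logits for four candidate next tokens (illustrative values only)
logits = torch.tensor([2.0, 1.0, 0.5, 0.1])

for temperature in (0.3, 0.9, 1.5):
    # Dividing the logits by the temperature before softmax sharpens the
    # distribution when T < 1 and flattens it as T grows.
    probs = torch.softmax(logits / temperature, dim=-1)
    print(f"T={temperature}: {[round(p, 3) for p in probs.tolist()]}")

At T=0.3 almost all of the probability mass sits on the top token, which is why sampling looks nearly deterministic; at higher values the mass spreads out and less likely tokens start getting picked.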
Now let's look at temperature in action with an actual model, using the following code and output. To keep the illustration simple, we chose Hugging Face transformers and the GPT-2 model in particular.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load GPT-2 model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Add pad token to tokenizer (GPT-2 doesn't have it by default)
tokenizer.pad_token = tokenizer.eos_token

# Function to generate response with varying temperature
def generate_with_temperature(prompt, temperature):
    inputs = tokenizer(prompt, return_tensors='pt', padding=True)

    # Set the attention_mask and pad_token_id
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs['attention_mask'],
        do_sample=True,
        max_length=200,
        temperature=temperature,
        pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Prompt on which we get the content generated based on temperature
prompt = "What are some of the best strategies within NFL Fantasy"

# Test temperature variations
print("Temperature 0.3 (Less deterministic but coherent):\n", generate_with_temperature(prompt, temperature=0.3))
print("\nTemperature 0.5 (Balanced):\n", generate_with_temperature(prompt, temperature=0.5))
print("\nTemperature 0.9 (More creative):\n", generate_with_temperature(prompt, temperature=0.9))
Output:
python test_temperature.py
Temperature 0.3 (Less deterministic but coherent):
What are some of the best strategies within NFL Fantasy Football?
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next
Temperature 0.5 (Balanced):
What are some of the best strategies within NFL Fantasy Football?
1. Pick up the first pick and use it to draft a QB
In the past, I've mentioned that I tend to pick up the first pick in the first round. The best way to pick up the first pick is to take the first pick and then use it to draft a QB. I find that the more I draft the QB, the more I draft the QB. This is why I like to draft the first pick in the first round.
2. Draft a QB for the first time
This is an important one, but it's not the only one. Drafting a QB for the first time is a great way to start your fantasy season.
3. Draft a QB for the first time
This is the most important one. Drafting a QB for the first time is the most important one. Drafting a QB for the first time is the most important one.
Temperature 0.9 (More creative):
What are some of the best strategies within NFL Fantasy?
If you are looking for players that will be good for you, here is an updated list of key stats, which you can find on our official website:
All players were ranked in the top 10 fantasy players. These players are all high-rated defensive backs or running backs with good play across all phases of their careers. The players above were ranked from 5-5 for total points scored.
The chart below will allow you to visualize the players in your league.
All players have 5.5 sacks, 5 sacks and 2.5 tackles for loss on the season. They have a combined 11.3 sacks with a 4.6, 1.6 and 2.1 yards per carry average, respectively.
Each player has three touchdowns. The three touchdowns are tied for the top five fantasy points with 3 points in an entire game. The three touchdowns are tied for the top ten points with 2 points
Let's understand the output:
- Low Temperature (0.3): The model focuses on the most probable word choices. If accuracy and consistency matter to you, a temperature in this range is a good choice. However, keep in mind that the model can get stuck repeating similar phrases, as is the case with our output here (see the sketch after this list for one way to curb that).
- Medium Temperature (0.5): This temperature strikes a balance between coherence and creativity. It is a good starting point if you want a fair amount of variation without losing structure. As you can see in the output, some balance has been added, but you can still spot repetition in the result.
- High Temperature (0.9): This temperature pushes the LLM to be as creative as possible. As you can see, this output differs from the previous two, bringing a lot of randomness and variation into the content.
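As noted in the first bullet above, low temperature alone tends to loop. If you want to keep a low temperature but curb the repetition, generate() also accepts repetition controls such as no_repeat_ngram_size and repetition_penalty. A minimal sketch with illustrative values (these extra knobs are not part of the original example, and the numbers are not tuned):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer("What are some of the best strategies within NFL Fantasy",
                   return_tensors='pt', padding=True)

# Low temperature plus repetition controls: no_repeat_ngram_size blocks any
# 3-gram from appearing twice, and repetition_penalty mildly down-weights
# tokens that have already been generated.
outputs = model.generate(
    inputs.input_ids,
    attention_mask=inputs['attention_mask'],
    do_sample=True,
    max_length=200,
    temperature=0.3,
    no_repeat_ngram_size=3,
    repetition_penalty=1.2,
    pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))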
The example above establishes a basic understanding of temperature. Now let's look at it in a bit more detail through two use cases: "Creative Story Generation" and "Technical Explanation."
Let's walk through the following code to understand how temperature affects these two use cases.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load GPT-2 model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Add pad token to tokenizer (GPT-2 doesn't have it by default)
tokenizer.pad_token = tokenizer.eos_token

# Function to generate response based on temperature
def generate_with_temperature(prompt, temperature, max_length=200):
    inputs = tokenizer(prompt, return_tensors='pt', padding=True)
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs['attention_mask'],
        do_sample=True,
        max_length=max_length,
        temperature=temperature,  # Only focusing on temperature
        pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

### USE CASE 1: CREATIVE STORY GENERATION ###
def creative_story_generation():
    prompt = "Once upon a time, in a distant galaxy, there was a spaceship called Voyager."

    # Negative Impact: Low temperature for creative writing (too deterministic, repetitive)
    print("\n=== Creative Story with Low Temperature (0.2) - Negative Impact: ===")
    low_temp_story = generate_with_temperature(prompt, temperature=0.2)
    print(low_temp_story)

    # Perfect Impact: High temperature for creative writing (more creative and varied)
    print("\n=== Creative Story with High Temperature (0.9) - Perfect Impact: ===")
    high_temp_story = generate_with_temperature(prompt, temperature=0.9)
    print(high_temp_story)

### USE CASE 2: TECHNICAL EXPLANATION ###
def technical_explanation():
    prompt = "Explain how blockchain works in simple terms."

    # Negative Impact: High temperature for technical writing (too creative, inaccurate)
    print("\n=== Technical Explanation with High Temperature (0.9) - Negative Impact: ===")
    high_temp_explanation = generate_with_temperature(prompt, temperature=0.9)
    print(high_temp_explanation)

    # Perfect Impact: Optimal temperature for technical writing (accurate and focused)
    print("\n=== Technical Explanation with Adjusted Temperature (0.7) - Perfect Impact: ===")
    perfect_temp_explanation = generate_with_temperature(prompt, temperature=0.7)
    print(perfect_temp_explanation)

# Run both use cases
creative_story_generation()
technical_explanation()
Output:
python temperature_impact.py
=== Creative Story with Low Temperature (0.2) - Negative Impact: ===
Once upon a time, in a distant galaxy, there was a spaceship called Voyager. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been
=== Creative Story with High Temperature (0.9) - Perfect Impact: ===
Once upon a time, in a distant galaxy, there was a spaceship called Voyager. The ship seemed to have been flying in space as well, like the spaceship in the movie "The Voyage Home". The Captain of the Voyager was Captain Jean-Luc Picard.
In the Star Trek movies, this is true. But in the movie, on Voyager, our hero is not Jean-Luc Picard. Jean-Luc, the villain, has no desire to lead us to vengeance against the dying star.
But what about Star Trek VI: The Undiscovered Country…
In a scene that's been in development for years (one you'll almost certainly see in an upcoming Star Trek film), in the middle of the movie (one you won't see until later in the film), we see that Picard is no longer in the Star Trek universe as is the story, to be played by the same actor who played Lieutenant Dax (who was in the movie), but Picard himself.
=== Technical Explanation with High Temperature (0.9) - Negative Impact: ===
Explain how blockchain works in simple terms.
Blockchain can be used to determine if the system is trustworthy and to prevent fraud, even if the system is used in a completely different manner. Blockchain can also be used to help determine how the system is run and to ensure that its operation is efficient.
This way all your trust in the system can be verified by your actions, and you can have full control over it. When you are not trusting a computer, it can be easy to get a hold of a server and then just change the software, allowing you to control and monitor transactions with the help of the blockchain. If your business uses distributed storage then it is easy to have more control over your activities.
What do I need to understand about blockchain?
To understand how blockchain works and how you can use it properly, you must first understand how blockchain works.
Bitcoins are digital tokens, created at the start of each generation that are used to
=== Technical Explanation with Adjusted Temperature (0.7) - Perfect Impact: ===
Explain how blockchain works in simple terms.
What are the key differences between Bitcoin and Ethereum?
Blockchain is a cryptographic protocol. It can be used to create any type of transaction. It is used to store data and create new entities. It is used as a system of communication in blockchain systems.
In Ethereum, the transaction is recorded, stored, and used to perform the transaction. It is a way to transfer information. The transaction is called a "blockchain."
Since the blockchain is used for many things, it is easy to understand how the technology works. The most important difference is that Ethereum uses the blockchain to create an interface to the Internet of Things. It is this interface that allows for data exchange and the creation of new entities.
Because of this, it is possible to perform the transactions on the blockchain. So, what is the difference between Bitcoin and Ethereum?
The Bitcoin and Ethereum blockchain is a distributed ledger.
Now let's pause and analyze the output for creative story generation and technical explanation, based on the temperature settings and how each output was affected. We will also note how a temperature setting that works perfectly for one use case does exactly the opposite for another.
Creative Story Generation
- Low Temperature (Negative Impact): As you can see, the story output is highly repetitive and lacks variety. This result is unsatisfactory for a creative task, and the extreme repetitiveness caused by the model's inability to introduce new, novel ideas makes it undesirable for storytelling.
- High Temperature (Perfect Impact): As you can see from the output, the story takes interesting directions and is very creative. The output also adds multiple threads to the story, which makes it varied, imaginative, and well suited to innovative storytelling.
Technical Explanation
- High Temperature (Negative Impact): It is important to remember that maintaining factual accuracy matters a great deal for a use case like a technical explanation. High temperature introduces a lot of randomness and less probable words into the generated content, making it unsatisfactory for technical writing. The same can be inferred from the output above, which is too vague and includes irrelevant ideas.
- Adjusted Temperature (Perfect Impact): We tuned the temperature to a setting that strikes the right balance for generating technical content. As you can see, the output is much better organized now. At this setting, the model avoids the repetition it shows at low temperatures and does not lose coherence the way it does at high temperatures (one way to package this per-use-case choice in code is sketched after this list).
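One illustrative way to capture this per-use-case tuning is a small preset map that reuses the generate_with_temperature helper from the script above. The values below are judgment calls based on the runs in this article, not universal constants:

# Hypothetical presets derived from the observations above.
TEMPERATURE_PRESETS = {
    "creative_story": 0.9,         # favor variety and surprising continuations
    "technical_explanation": 0.7,  # stay focused without looping
}

def generate_for_use_case(prompt, use_case):
    # Fall back to a balanced middle value when the use case is unknown.
    temperature = TEMPERATURE_PRESETS.get(use_case, 0.5)
    return generate_with_temperature(prompt, temperature)

print(generate_for_use_case("Explain how blockchain works in simple terms.",
                            "technical_explanation"))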
Conclusion
You have now seen the ways temperature can affect content generation and which temperature setting fits which use case. Also, note that adjusting temperature is not the end of the story for content generation; you will also have to tune other parameters. We will look at all of those in the upcoming articles in this series.
Source:
https://dzone.com/articles/decoding-llm-parameters-temperature