LLM Parameters
Like any machine learning model, large language models expose several parameters that control the variance of the generated text output. We have started a multi-part series to explain the impact of these parameters in detail. We will conclude the series by striking the right balance in content generation using all of the parameters it discusses.
Welcome to the first part, where we discuss the best-known parameter, "Temperature".
Temperature
If your goal is to control the randomness of predictions, temperature is the parameter for you. Lower temperature values make the output more deterministic, while higher values make it more creative by allowing diverse results.
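Under the hood, temperature works by dividing the model's logits before the softmax step: values below 1 sharpen the next-token distribution toward the most likely token, while values above 1 flatten it. Here is a minimal, self-contained sketch of that scaling (the logit values are made up for illustration, not real GPT-2 outputs):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide each logit by the temperature before exponentiating.
    # T < 1 sharpens the distribution; T > 1 flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]

print(softmax_with_temperature(logits, 0.3))  # near-deterministic: mass piles onto the top token
print(softmax_with_temperature(logits, 1.0))  # plain softmax
print(softmax_with_temperature(logits, 2.0))  # flatter: low-probability tokens gain mass
```

Sampling from the sharpened distribution is what makes low-temperature output predictable; sampling from the flattened one is what makes high-temperature output varied.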
Let's see temperature in action using the following code and output. To demonstrate its impact in a simple way, we chose to use Hugging Face Transformers and, specifically, the GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load GPT-2 model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
# Add pad token to tokenizer (GPT-2 doesn't have it by default)
tokenizer.pad_token = tokenizer.eos_token
# Function to generate response with varying temperature
def generate_with_temperature(prompt, temperature):
    inputs = tokenizer(prompt, return_tensors='pt', padding=True)
    # Set the attention_mask and pad_token_id
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs['attention_mask'],
        do_sample=True,
        max_length=200,
        temperature=temperature,
        pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Prompt on which we get the content generated based on temperature
prompt = "What are some of the best strategies within NFL Fantasy"
# Test temperature variations
print("Temperature 0.3 (More deterministic, coherent):\n", generate_with_temperature(prompt, temperature=0.3))
print("\nTemperature 0.5 (Balanced):\n", generate_with_temperature(prompt, temperature=0.5))
print("\nTemperature 0.9 (More creative):\n", generate_with_temperature(prompt, temperature=0.9))
Output:
python test_temperature.py
Temperature 0.3 (More deterministic, coherent):
What are some of the best strategies within NFL Fantasy Football?
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next NFL season. I've seen great strategies for predicting the next NFL season.
I've seen a lot of great strategies for predicting the next
Temperature 0.5 (Balanced):
What are some of the best strategies within NFL Fantasy Football?
1. Pick up the first pick and use it to draft a QB
In the past, I've mentioned that I tend to pick up the first pick in the first round. The best way to pick up the first pick is to take the first pick and then use it to draft a QB. I find that the more I draft the QB, the more I draft the QB. This is why I like to draft the first pick in the first round.
2. Draft a QB for the first time
This is an important one, but it's not the only one. Drafting a QB for the first time is a great way to start your fantasy season.
3. Draft a QB for the first time
This is the most important one. Drafting a QB for the first time is the most important one. Drafting a QB for the first time is the most important one.
Temperature 0.9 (More creative):
What are some of the best strategies within NFL Fantasy?
If you are looking for players that will be good for you, here is an updated list of key stats, which you can find on our official website:
All players were ranked in the top 10 fantasy players. These players are all high-rated defensive backs or running backs with good play across all phases of their careers. The players above were ranked from 5-5 for total points scored.
The chart below will allow you to visualize the players in your league.
All players have 5.5 sacks, 5 sacks and 2.5 tackles for loss on the season. They have a combined 11.3 sacks with a 4.6, 1.6 and 2.1 yards per carry average, respectively.
Each player has three touchdowns. The three touchdowns are tied for the top five fantasy points with 3 points in an entire game. The three touchdowns are tied for the top ten points with 2 points
Let's break down the output:
- Low Temperature (0.3): The model sticks to the most probable word choices. If precision and consistency matter to you, use a temperature in this range. However, keep in mind that the model may get stuck repeating similar phrases, as it does in our output here.
- Medium Temperature (0.5): This temperature strikes a balance between coherence and creativity. It's a great middle ground if you want some variety without losing structure. As you can see in the output, some balance has been added, but you can still notice aspects of repetition.
- High Temperature (0.9): This temperature pushes the LLM to be as creative as possible. As you can see, this output differs from the previous two, introducing plenty of randomness and variety into the content.
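The repetition at low temperature and the variety at high temperature follow directly from how sampling works. The toy simulation below makes the effect visible without loading a model; it is a sketch with made-up logits over a four-token vocabulary, not real GPT-2 values:

```python
import math
import random

def sample_token(logits, temperature, rng):
    # Temperature-scaled softmax, then one weighted draw from the result.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits over a toy 4-token vocabulary
logits = [4.0, 2.0, 1.0, 0.5]
rng = random.Random(0)  # fixed seed so the sketch is reproducible

low_t = [sample_token(logits, 0.2, rng) for _ in range(20)]
high_t = [sample_token(logits, 1.5, rng) for _ in range(20)]
print("T=0.2 samples:", low_t)   # heavily dominated by token 0 -- repetitive
print("T=1.5 samples:", high_t)  # noticeably more variety
```

At T=0.2 nearly every draw lands on the top token, mirroring the repeated sentences we saw above; at T=1.5 the other tokens get a real chance, mirroring the varied output.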
The example above establishes a foundational understanding of temperature. Now let's look at temperature in a bit more detail through two use cases: "Creative Story Generation" and "Technical Explanation".
Let's examine these with the following code to understand how temperature affects each of them.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load GPT-2 model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
# Add pad token to tokenizer (GPT-2 doesn't have it by default)
tokenizer.pad_token = tokenizer.eos_token
# Function to generate response based on temperature
def generate_with_temperature(prompt, temperature, max_length=200):
    inputs = tokenizer(prompt, return_tensors='pt', padding=True)
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs['attention_mask'],
        do_sample=True,
        max_length=max_length,
        temperature=temperature,  # Only focusing on temperature
        pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
### USE CASE 1: CREATIVE STORY GENERATION ###
def creative_story_generation():
    prompt = "Once upon a time, in a distant galaxy, there was a spaceship called Voyager."
    # Negative Impact: Low temperature for creative writing (too deterministic, repetitive)
    print("\n=== Creative Story with Low Temperature (0.2) - Negative Impact: ===")
    low_temp_story = generate_with_temperature(prompt, temperature=0.2)
    print(low_temp_story)
    # Perfect Impact: High temperature for creative writing (more creative and varied)
    print("\n=== Creative Story with High Temperature (0.9) - Perfect Impact: ===")
    high_temp_story = generate_with_temperature(prompt, temperature=0.9)
    print(high_temp_story)
### USE CASE 2: TECHNICAL EXPLANATION ###
def technical_explanation():
    prompt = "Explain how blockchain works in simple terms."
    # Negative Impact: High temperature for technical writing (too creative, inaccurate)
    print("\n=== Technical Explanation with High Temperature (0.9) - Negative Impact: ===")
    high_temp_explanation = generate_with_temperature(prompt, temperature=0.9)
    print(high_temp_explanation)
    # Perfect Impact: Optimal temperature for technical writing (accurate and focused)
    print("\n=== Technical Explanation with Adjusted Temperature (0.7) - Perfect Impact: ===")
    perfect_temp_explanation = generate_with_temperature(prompt, temperature=0.7)
    print(perfect_temp_explanation)
# Run both use cases
creative_story_generation()
technical_explanation()
Output:
python temperature_impact.py
=== Creative Story with Low Temperature (0.2) - Negative Impact: ===
Once upon a time, in a distant galaxy, there was a spaceship called Voyager. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been brought back from the dead by the gods. It was a spaceship that had been
=== Creative Story with High Temperature (0.9) - Perfect Impact: ===
Once upon a time, in a distant galaxy, there was a spaceship called Voyager. The ship seemed to have been flying in space as well, like the spaceship in the movie "The Voyage Home". The Captain of the Voyager was Captain Jean-Luc Picard.
In the Star Trek movies, this is true. But in the movie, on Voyager, our hero is not Jean-Luc Picard. Jean-Luc, the villain, has no desire to lead us to vengeance against the dying star.
But what about Star Trek VI: The Undiscovered Country…
In a scene that's been in development for years (one you'll almost certainly see in an upcoming Star Trek film), in the middle of the movie (one you won't see until later in the film), we see that Picard is no longer in the Star Trek universe as is the story, to be played by the same actor who played Lieutenant Dax (who was in the movie), but Picard himself.
=== Technical Explanation with High Temperature (0.9) - Negative Impact: ===
Explain how blockchain works in simple terms.
Blockchain can be used to determine if the system is trustworthy and to prevent fraud, even if the system is used in a completely different manner. Blockchain can also be used to help determine how the system is run and to ensure that its operation is efficient.
This way all your trust in the system can be verified by your actions, and you can have full control over it. When you are not trusting a computer, it can be easy to get a hold of a server and then just change the software, allowing you to control and monitor transactions with the help of the blockchain. If your business uses distributed storage then it is easy to have more control over your activities.
What do I need to understand about blockchain?
To understand how blockchain works and how you can use it properly, you must first understand how blockchain works.
Bitcoins are digital tokens, created at the start of each generation that are used to
=== Technical Explanation with Adjusted Temperature (0.7) - Perfect Impact: ===
Explain how blockchain works in simple terms.
What are the key differences between Bitcoin and Ethereum?
Blockchain is a cryptographic protocol. It can be used to create any type of transaction. It is used to store data and create new entities. It is used as a system of communication in blockchain systems.
In Ethereum, the transaction is recorded, stored, and used to perform the transaction. It is a way to transfer information. The transaction is called a "blockchain."
Since the blockchain is used for many things, it is easy to understand how the technology works. The most important difference is that Ethereum uses the blockchain to create an interface to the Internet of Things. It is this interface that allows for data exchange and the creation of new entities.
Because of this, it is possible to perform the transactions on the blockchain. So, what is the difference between Bitcoin and Ethereum?
The Bitcoin and Ethereum blockchain is a distributed ledger.
Now let's analyze the output for creative story generation and technical explanation based on the temperature settings and see how each output was affected. Notice, too, how a temperature setting that works perfectly for one use case has the opposite effect on the other.
Creative Story Generation
- Low Temperature (Negative Impact): As you can see, the story output is highly repetitive and lacks variety. This result is unsatisfactory for a creative task; the extreme repetition, caused by the LLM's inability to introduce new and innovative ideas, makes the output undesirable for storytelling.
- High Temperature (Perfect Impact): As you can see from the output, the story takes interesting directions and is highly creative. The output also adds many elements to the story, making it varied, imaginative, and well suited to innovative storytelling.
Technical Explanation
- High Temperature (Negative Impact): It's important to remember that factual accuracy matters a great deal for a use case like a technical explanation. High temperature introduces a lot of randomness and less probable words into the generated content, making it unsuitable for technical writing. You can see this in the output above, which is vague and includes irrelevant ideas.
- Adjusted Temperature (Perfect Impact): We adjusted the temperature to a value that strikes the right balance for technical content generation. As you can see, the output is much more polished now. With this setting, the model avoids the repetition seen at lower temperatures without losing coherence as it does at higher ones.
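As a practical takeaway, these findings can be collected into a small lookup of starting temperatures per use case. The presets below are illustrative defaults drawn from the experiments in this article, not universal recommendations:

```python
# Illustrative starting points based on the experiments above,
# not universal recommendations -- always validate on your own task.
TEMPERATURE_PRESETS = {
    "creative_story": 0.9,         # favour variety and imagination
    "technical_explanation": 0.7,  # stay focused and coherent
}

def pick_temperature(use_case, default=0.5):
    # Fall back to the balanced middle ground for unlisted use cases.
    return TEMPERATURE_PRESETS.get(use_case, default)

print(pick_temperature("creative_story"))  # 0.9
print(pick_temperature("chat_summary"))    # unlisted, falls back to 0.5
```

Wrapping the choice in a helper like this keeps the temperature decision in one place when an application serves several kinds of generation requests.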
Conclusion
You have now seen the ways temperature can influence content generation and which temperature setting suits each use case. Also, note that adjusting the temperature is not the only lever in content generation; you will need to tune other parameters as well. We will cover all of that in the upcoming articles of this series.
Source:
https://dzone.com/articles/decoding-llm-parameters-temperature