from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
from transformers import GenerationConfig
1 Introduction
In this article I will perform dialogue summarization using generative AI. We will explore how the input text affects the output of the model, and perform prompt engineering to direct it towards the task we need. By comparing zero shot, one shot, and few shot inferences, we will take the first steps towards prompt engineering and see how it can enhance the generative output of Large Language Models.
2 Load Libraries
Let’s load the Python libraries we will use: the datasets library, plus the Large Language Model (LLM), tokenizer, and generation configuration classes from transformers.
3 Summarize Dialogue without Prompt Engineering
In this use case, we will be generating a summary of a dialogue with the pre-trained Large Language Model (LLM) FLAN-T5 from Hugging Face. The list of available models in the Hugging Face transformers package can be found here.
Let’s load some simple dialogues from the DialogSum Hugging Face dataset. This dataset contains 10,000+ dialogues with corresponding manually labeled summaries and topics.
= "knkarthick/dialogsum"
huggingface_dataset_name
= load_dataset(huggingface_dataset_name) dataset
Downloading and preparing dataset csv/knkarthick--dialogsum to /root/.cache/huggingface/datasets/knkarthick___csv/knkarthick--dialogsum-391706c81424fc80/0.0.0/6954658bab30a358235fa864b05cf819af0e179325c740e4bc853bcc7ec513e1...
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/knkarthick___csv/knkarthick--dialogsum-391706c81424fc80/0.0.0/6954658bab30a358235fa864b05cf819af0e179325c740e4bc853bcc7ec513e1. Subsequent calls will reuse this data.
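To get a feel for the structure of the data, we can print the DatasetDict object and inspect the available splits and columns:

print(dataset)
# Expect a DatasetDict with 'train', 'validation' and 'test' splits,
# each carrying 'id', 'dialogue', 'summary' and 'topic' columns.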
Print a couple of dialogues with their baseline summaries.
example_indices = [40, 200]

dash_line = '-'.join('' for x in range(100))

for i, index in enumerate(example_indices):
    print(dash_line)
    print('Example ', i + 1)
    print(dash_line)
    print('INPUT DIALOGUE:')
    print(dataset['test'][index]['dialogue'])
    print(dash_line)
    print('BASELINE HUMAN SUMMARY:')
    print(dataset['test'][index]['summary'])
    print(dash_line)
    print()
---------------------------------------------------------------------------------------------------
Example 1
---------------------------------------------------------------------------------------------------
INPUT DIALOGUE:
#Person1#: What time is it, Tom?
#Person2#: Just a minute. It's ten to nine by my watch.
#Person1#: Is it? I had no idea it was so late. I must be off now.
#Person2#: What's the hurry?
#Person1#: I must catch the nine-thirty train.
#Person2#: You've plenty of time yet. The railway station is very close. It won't take more than twenty minutes to get there.
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# is in a hurry to catch a train. Tom tells #Person1# there is plenty of time.
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
Example 2
---------------------------------------------------------------------------------------------------
INPUT DIALOGUE:
#Person1#: Have you considered upgrading your system?
#Person2#: Yes, but I'm not sure what exactly I would need.
#Person1#: You could consider adding a painting program to your software. It would allow you to make up your own flyers and banners for advertising.
#Person2#: That would be a definite bonus.
#Person1#: You might also want to upgrade your hardware because it is pretty outdated now.
#Person2#: How can we do that?
#Person1#: You'd probably need a faster processor, to begin with. And you also need a more powerful hard disc, more memory and a faster modem. Do you have a CD-ROM drive?
#Person2#: No.
#Person1#: Then you might want to add a CD-ROM drive too, because most new software programs are coming out on Cds.
#Person2#: That sounds great. Thanks.
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
Let’s now load the FLAN-T5 model, creating an instance of the AutoModelForSeq2SeqLM class with the .from_pretrained() method.
model_name = 'google/flan-t5-base'

model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
To perform encoding and decoding, we need to work with text in a tokenized form. Tokenization is the process of splitting texts into smaller units that can be processed by the LLM.
Now we download the tokenizer for the FLAN-T5 model using the AutoTokenizer.from_pretrained() method. The parameter use_fast switches on the fast tokenizer. At this stage, there is no need to go into the details of that, but you can find the tokenizer parameters in the documentation.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
Test the tokenizer by encoding and decoding a simple sentence:
= "What time is it, Tom?"
sentence
= tokenizer(sentence, return_tensors='pt')
sentence_encoded
= tokenizer.decode(
sentence_decoded "input_ids"][0],
sentence_encoded[=True
skip_special_tokens
)
print('ENCODED SENTENCE:')
print(sentence_encoded["input_ids"][0])
print('\nDECODED SENTENCE:')
print(sentence_decoded)
ENCODED SENTENCE:
tensor([ 363, 97, 19, 34, 6, 3059, 58, 1])
DECODED SENTENCE:
What time is it, Tom?
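If you are curious what those token IDs stand for, the tokenizer can also map them back to the individual subword pieces; this is a small optional check:

# Inspect the subword pieces behind each token ID.
print(tokenizer.convert_ids_to_tokens(sentence_encoded["input_ids"][0]))
# The SentencePiece tokenizer marks word starts with '▁';
# the trailing </s> is the end-of-sequence token (ID 1).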
Now it’s time to explore how well the base LLM summarizes a dialogue without any prompt engineering. Prompt engineering is the act of changing the prompt (input) to improve the model’s response for a given task.
for i, index in enumerate(example_indices):
    dialogue = dataset['test'][index]['dialogue']
    summary = dataset['test'][index]['summary']

    inputs = tokenizer(dialogue, return_tensors='pt')
    output = tokenizer.decode(
        model.generate(
            inputs["input_ids"],
            max_new_tokens=50,
        )[0],
        skip_special_tokens=True
    )

    print(dash_line)
    print('Example ', i + 1)
    print(dash_line)
    print(f'INPUT PROMPT:\n{dialogue}')
    print(dash_line)
    print(f'BASELINE HUMAN SUMMARY:\n{summary}')
    print(dash_line)
    print(f'MODEL GENERATION - WITHOUT PROMPT ENGINEERING:\n{output}\n')
---------------------------------------------------------------------------------------------------
Example 1
---------------------------------------------------------------------------------------------------
INPUT PROMPT:
#Person1#: What time is it, Tom?
#Person2#: Just a minute. It's ten to nine by my watch.
#Person1#: Is it? I had no idea it was so late. I must be off now.
#Person2#: What's the hurry?
#Person1#: I must catch the nine-thirty train.
#Person2#: You've plenty of time yet. The railway station is very close. It won't take more than twenty minutes to get there.
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# is in a hurry to catch a train. Tom tells #Person1# there is plenty of time.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - WITHOUT PROMPT ENGINEERING:
Person1: It's ten to nine.
---------------------------------------------------------------------------------------------------
Example 2
---------------------------------------------------------------------------------------------------
INPUT PROMPT:
#Person1#: Have you considered upgrading your system?
#Person2#: Yes, but I'm not sure what exactly I would need.
#Person1#: You could consider adding a painting program to your software. It would allow you to make up your own flyers and banners for advertising.
#Person2#: That would be a definite bonus.
#Person1#: You might also want to upgrade your hardware because it is pretty outdated now.
#Person2#: How can we do that?
#Person1#: You'd probably need a faster processor, to begin with. And you also need a more powerful hard disc, more memory and a faster modem. Do you have a CD-ROM drive?
#Person2#: No.
#Person1#: Then you might want to add a CD-ROM drive too, because most new software programs are coming out on Cds.
#Person2#: That sounds great. Thanks.
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - WITHOUT PROMPT ENGINEERING:
#Person1#: I'm thinking of upgrading my computer.
You can see that the model’s guesses make some sense, but it doesn’t seem to be sure what task it is supposed to accomplish; it seems to just make up the next sentence in the dialogue. Prompt engineering can help here.
4 Summarize Dialogue with an Instruction Prompt
Prompt engineering is an important concept in using foundation models for text generation. You can check out this blog from Amazon Science for a quick introduction to prompt engineering.
4.1 Zero Shot Inference with an Instruction Prompt
In order to instruct the model to perform a task - summarizing a dialogue - you can take the dialogue and convert it into an instruction prompt. This is often called zero shot inference. You can check out this blog from AWS for a quick description of what zero shot learning is and why it is an important concept for LLMs.
Let’s wrap the dialogue in a descriptive instruction and see how the generated text will change:
for i, index in enumerate(example_indices):
    dialogue = dataset['test'][index]['dialogue']
    summary = dataset['test'][index]['summary']

    prompt = f"""
Summarize the following conversation.

{dialogue}

Summary:
"""

    # Input constructed prompt instead of the dialogue.
    inputs = tokenizer(prompt, return_tensors='pt')
    output = tokenizer.decode(
        model.generate(
            inputs["input_ids"],
            max_new_tokens=50,
        )[0],
        skip_special_tokens=True
    )

    print(dash_line)
    print('Example ', i + 1)
    print(dash_line)
    print(f'INPUT PROMPT:\n{prompt}')
    print(dash_line)
    print(f'BASELINE HUMAN SUMMARY:\n{summary}')
    print(dash_line)
    print(f'MODEL GENERATION - ZERO SHOT:\n{output}\n')
---------------------------------------------------------------------------------------------------
Example 1
---------------------------------------------------------------------------------------------------
INPUT PROMPT:
Summarize the following conversation.
#Person1#: What time is it, Tom?
#Person2#: Just a minute. It's ten to nine by my watch.
#Person1#: Is it? I had no idea it was so late. I must be off now.
#Person2#: What's the hurry?
#Person1#: I must catch the nine-thirty train.
#Person2#: You've plenty of time yet. The railway station is very close. It won't take more than twenty minutes to get there.
Summary:
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# is in a hurry to catch a train. Tom tells #Person1# there is plenty of time.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - ZERO SHOT:
The train is about to leave.
---------------------------------------------------------------------------------------------------
Example 2
---------------------------------------------------------------------------------------------------
INPUT PROMPT:
Summarize the following conversation.
#Person1#: Have you considered upgrading your system?
#Person2#: Yes, but I'm not sure what exactly I would need.
#Person1#: You could consider adding a painting program to your software. It would allow you to make up your own flyers and banners for advertising.
#Person2#: That would be a definite bonus.
#Person1#: You might also want to upgrade your hardware because it is pretty outdated now.
#Person2#: How can we do that?
#Person1#: You'd probably need a faster processor, to begin with. And you also need a more powerful hard disc, more memory and a faster modem. Do you have a CD-ROM drive?
#Person2#: No.
#Person1#: Then you might want to add a CD-ROM drive too, because most new software programs are coming out on Cds.
#Person2#: That sounds great. Thanks.
Summary:
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - ZERO SHOT:
#Person1#: I'm thinking of upgrading my computer.
This is much better! But the model still does not pick up on the nuance of the conversations.
Further explorations:
- We could experiment with the prompt text and see how the inferences change. Will the inferences change if we end the prompt with just an empty string instead of Summary:?
- We could also try to rephrase the beginning of the prompt text from Summarize the following conversation. to something different, and see how it influences the generated output (a starting point is sketched below).
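Here is a minimal sketch for these experiments that tries a few alternative instruction phrasings on the first example. The variant wordings are our own inventions for illustration, not official FLAN-T5 templates:

# Try a few alternative instruction phrasings (illustrative, not official templates).
dialogue = dataset['test'][example_indices[0]]['dialogue']

instruction_variants = [
    "Summarize the following conversation.",
    "Write a short summary of this dialogue.",
    "What is the conversation below about?",
]

for instruction in instruction_variants:
    prompt = f"{instruction}\n\n{dialogue}\n\nSummary:"
    inputs = tokenizer(prompt, return_tensors='pt')
    output = tokenizer.decode(
        model.generate(inputs["input_ids"], max_new_tokens=50)[0],
        skip_special_tokens=True
    )
    print(dash_line)
    print(f'INSTRUCTION: {instruction!r}\nOUTPUT: {output}')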
4.2 Zero Shot Inference with the Prompt Template from FLAN-T5
Let’s use a slightly different prompt. FLAN-T5 has many prompt templates that are published for certain tasks here. In the following code, we will use one of the pre-built FLAN-T5 prompts:
for i, index in enumerate(example_indices):
    dialogue = dataset['test'][index]['dialogue']
    summary = dataset['test'][index]['summary']

    prompt = f"""
Dialogue:

{dialogue}

What was going on?
"""

    inputs = tokenizer(prompt, return_tensors='pt')
    output = tokenizer.decode(
        model.generate(
            inputs["input_ids"],
            max_new_tokens=50,
        )[0],
        skip_special_tokens=True
    )

    print(dash_line)
    print('Example ', i + 1)
    print(dash_line)
    print(f'INPUT PROMPT:\n{prompt}')
    print(dash_line)
    print(f'BASELINE HUMAN SUMMARY:\n{summary}\n')
    print(dash_line)
    print(f'MODEL GENERATION - ZERO SHOT:\n{output}\n')
---------------------------------------------------------------------------------------------------
Example 1
---------------------------------------------------------------------------------------------------
INPUT PROMPT:
Dialogue:
#Person1#: What time is it, Tom?
#Person2#: Just a minute. It's ten to nine by my watch.
#Person1#: Is it? I had no idea it was so late. I must be off now.
#Person2#: What's the hurry?
#Person1#: I must catch the nine-thirty train.
#Person2#: You've plenty of time yet. The railway station is very close. It won't take more than twenty minutes to get there.
What was going on?
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# is in a hurry to catch a train. Tom tells #Person1# there is plenty of time.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - ZERO SHOT:
Tom is late for the train.
---------------------------------------------------------------------------------------------------
Example 2
---------------------------------------------------------------------------------------------------
INPUT PROMPT:
Dialogue:
#Person1#: Have you considered upgrading your system?
#Person2#: Yes, but I'm not sure what exactly I would need.
#Person1#: You could consider adding a painting program to your software. It would allow you to make up your own flyers and banners for advertising.
#Person2#: That would be a definite bonus.
#Person1#: You might also want to upgrade your hardware because it is pretty outdated now.
#Person2#: How can we do that?
#Person1#: You'd probably need a faster processor, to begin with. And you also need a more powerful hard disc, more memory and a faster modem. Do you have a CD-ROM drive?
#Person2#: No.
#Person1#: Then you might want to add a CD-ROM drive too, because most new software programs are coming out on Cds.
#Person2#: That sounds great. Thanks.
What was going on?
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - ZERO SHOT:
#Person1#: You could add a painting program to your software. #Person2#: That would be a bonus. #Person1#: You might also want to upgrade your hardware. #Person1#
Notice that this prompt from FLAN-T5 did help a bit, but the model still struggles to pick up on the nuance of the conversation. This is what we will try to solve with few shot inferencing.
5 Summarize Dialogue with One Shot and Few Shot Inference
One shot and few shot inference are the practices of providing an LLM with one or more full examples of prompt-response pairs that match your task, before the actual prompt that you want completed. This is called “in-context learning” and puts the model into a state where it understands your specific task. You can read more about it in this blog from Hugging Face.
5.1 One Shot Inference
Let’s build a function that takes a list of example_indices_full, generates a prompt with full examples, and then appends at the end the prompt which we want the model to complete (example_index_to_summarize). We will use the same FLAN-T5 prompt template as in the earlier section.
def make_prompt(example_indices_full, example_index_to_summarize):
    prompt = ''
    for index in example_indices_full:
        dialogue = dataset['test'][index]['dialogue']
        summary = dataset['test'][index]['summary']

        # The stop sequence '{summary}\n\n\n' is important for FLAN-T5.
        # Other models may have their own preferred stop sequence.
        prompt += f"""
Dialogue:

{dialogue}

What was going on?
{summary}


"""

    dialogue = dataset['test'][example_index_to_summarize]['dialogue']

    prompt += f"""
Dialogue:

{dialogue}

What was going on?
"""

    return prompt
Construct the prompt to perform one shot inference:
example_indices_full = [40]
example_index_to_summarize = 200

one_shot_prompt = make_prompt(example_indices_full, example_index_to_summarize)

print(one_shot_prompt)
Dialogue:
#Person1#: What time is it, Tom?
#Person2#: Just a minute. It's ten to nine by my watch.
#Person1#: Is it? I had no idea it was so late. I must be off now.
#Person2#: What's the hurry?
#Person1#: I must catch the nine-thirty train.
#Person2#: You've plenty of time yet. The railway station is very close. It won't take more than twenty minutes to get there.
What was going on?
#Person1# is in a hurry to catch a train. Tom tells #Person1# there is plenty of time.
Dialogue:
#Person1#: Have you considered upgrading your system?
#Person2#: Yes, but I'm not sure what exactly I would need.
#Person1#: You could consider adding a painting program to your software. It would allow you to make up your own flyers and banners for advertising.
#Person2#: That would be a definite bonus.
#Person1#: You might also want to upgrade your hardware because it is pretty outdated now.
#Person2#: How can we do that?
#Person1#: You'd probably need a faster processor, to begin with. And you also need a more powerful hard disc, more memory and a faster modem. Do you have a CD-ROM drive?
#Person2#: No.
#Person1#: Then you might want to add a CD-ROM drive too, because most new software programs are coming out on Cds.
#Person2#: That sounds great. Thanks.
What was going on?
Now we pass this prompt to perform the one shot inference:
summary = dataset['test'][example_index_to_summarize]['summary']

inputs = tokenizer(one_shot_prompt, return_tensors='pt')
output = tokenizer.decode(
    model.generate(
        inputs["input_ids"],
        max_new_tokens=50,
    )[0],
    skip_special_tokens=True
)
print(dash_line)
print(f'BASELINE HUMAN SUMMARY:\n{summary}\n')
print(dash_line)
print(f'MODEL GENERATION - ONE SHOT:\n{output}')
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - ONE SHOT:
#Person1 wants to upgrade his system. #Person2 wants to add a painting program to his software. #Person1 wants to add a CD-ROM drive.
5.2 Few Shot Inference
Let’s explore few shot inference by adding two more full dialogue-summary pairs to your prompt.
example_indices_full = [40, 80, 120]
example_index_to_summarize = 200

few_shot_prompt = make_prompt(example_indices_full, example_index_to_summarize)

print(few_shot_prompt)
Dialogue:
#Person1#: What time is it, Tom?
#Person2#: Just a minute. It's ten to nine by my watch.
#Person1#: Is it? I had no idea it was so late. I must be off now.
#Person2#: What's the hurry?
#Person1#: I must catch the nine-thirty train.
#Person2#: You've plenty of time yet. The railway station is very close. It won't take more than twenty minutes to get there.
What was going on?
#Person1# is in a hurry to catch a train. Tom tells #Person1# there is plenty of time.
Dialogue:
#Person1#: May, do you mind helping me prepare for the picnic?
#Person2#: Sure. Have you checked the weather report?
#Person1#: Yes. It says it will be sunny all day. No sign of rain at all. This is your father's favorite sausage. Sandwiches for you and Daniel.
#Person2#: No, thanks Mom. I'd like some toast and chicken wings.
#Person1#: Okay. Please take some fruit salad and crackers for me.
#Person2#: Done. Oh, don't forget to take napkins disposable plates, cups and picnic blanket.
#Person1#: All set. May, can you help me take all these things to the living room?
#Person2#: Yes, madam.
#Person1#: Ask Daniel to give you a hand?
#Person2#: No, mom, I can manage it by myself. His help just causes more trouble.
What was going on?
Mom asks May to help to prepare for the picnic and May agrees.
Dialogue:
#Person1#: Hello, I bought the pendant in your shop, just before.
#Person2#: Yes. Thank you very much.
#Person1#: Now I come back to the hotel and try to show it to my friend, the pendant is broken, I'm afraid.
#Person2#: Oh, is it?
#Person1#: Would you change it to a new one?
#Person2#: Yes, certainly. You have the receipt?
#Person1#: Yes, I do.
#Person2#: Then would you kindly come to our shop with the receipt by 10 o'clock? We will replace it.
#Person1#: Thank you so much.
What was going on?
#Person1# wants to change the broken pendant in #Person2#'s shop.
Dialogue:
#Person1#: Have you considered upgrading your system?
#Person2#: Yes, but I'm not sure what exactly I would need.
#Person1#: You could consider adding a painting program to your software. It would allow you to make up your own flyers and banners for advertising.
#Person2#: That would be a definite bonus.
#Person1#: You might also want to upgrade your hardware because it is pretty outdated now.
#Person2#: How can we do that?
#Person1#: You'd probably need a faster processor, to begin with. And you also need a more powerful hard disc, more memory and a faster modem. Do you have a CD-ROM drive?
#Person2#: No.
#Person1#: Then you might want to add a CD-ROM drive too, because most new software programs are coming out on Cds.
#Person2#: That sounds great. Thanks.
What was going on?
Now we pass this prompt to perform a few shot inference:
summary = dataset['test'][example_index_to_summarize]['summary']

inputs = tokenizer(few_shot_prompt, return_tensors='pt')
output = tokenizer.decode(
    model.generate(
        inputs["input_ids"],
        max_new_tokens=50,
    )[0],
    skip_special_tokens=True
)
print(dash_line)
print(f'BASELINE HUMAN SUMMARY:\n{summary}\n')
print(dash_line)
print(f'MODEL GENERATION - FEW SHOT:\n{output}')
Token indices sequence length is longer than the specified maximum sequence length for this model (819 > 512). Running this sequence through the model will result in indexing errors
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
---------------------------------------------------------------------------------------------------
MODEL GENERATION - FEW SHOT:
#Person1 wants to upgrade his system. #Person2 wants to add a painting program to his software. #Person1 wants to upgrade his hardware.
In this case, few shot did not provide much of an improvement over one shot inference. Anything above five or six shots typically will not help much, either. We also need to make sure that we do not exceed the model’s input context length, which in our case is 512 tokens; anything above the context length will be ignored.
However, we can see that feeding in at least one full example (one shot) provides the model with more information and qualitatively improves the summary overall.
Further explorations:
- We could choose different dialogues by changing the indices in the example_indices_full list and the example_index_to_summarize value.
- We could change the number of shots. We must be sure to stay within the model’s 512-token context length, however (the sketch below shows how to check this).
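Before adding more shots, it is worth checking how close each prompt gets to that limit. A minimal sketch, assuming tokenizer.model_max_length reports the 512-token limit for this checkpoint:

# Check how the prompt grows with the number of shots,
# relative to the model's context window.
for shots in ([40], [40, 80], [40, 80, 120]):
    prompt = make_prompt(shots, example_index_to_summarize)
    n_tokens = len(tokenizer(prompt)["input_ids"])
    print(f'{len(shots)} shot(s): {n_tokens} tokens (limit: {tokenizer.model_max_length})')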
6 Generative Configuration Parameters for Inference
We can change the configuration parameters of the generate() method to see a different output from the LLM. So far the only parameter that we have been setting was max_new_tokens=50, which defines the maximum number of tokens to generate. A full list of available parameters can be found in the Hugging Face Generation documentation.
A convenient way of organizing the configuration parameters is to use the GenerationConfig class.
Let’s change the configuration parameters to investigate their influence on the output.
By setting the parameter do_sample=True, we activate sampling-based decoding strategies, which pick the next token from a probability distribution over the entire vocabulary. We can then adjust the output by changing temperature and other parameters (such as top_k and top_p).
generation_config = GenerationConfig(max_new_tokens=50)
# generation_config = GenerationConfig(max_new_tokens=10)
# generation_config = GenerationConfig(max_new_tokens=50, do_sample=True, temperature=0.1)
# generation_config = GenerationConfig(max_new_tokens=50, do_sample=True, temperature=0.5)
# generation_config = GenerationConfig(max_new_tokens=50, do_sample=True, temperature=1.0)

inputs = tokenizer(few_shot_prompt, return_tensors='pt')
output = tokenizer.decode(
    model.generate(
        inputs["input_ids"],
        generation_config=generation_config,
    )[0],
    skip_special_tokens=True
)
print(dash_line)
print(f'MODEL GENERATION - FEW SHOT:\n{output}')
print(dash_line)
print(f'BASELINE HUMAN SUMMARY:\n{summary}\n')
---------------------------------------------------------------------------------------------------
MODEL GENERATION - FEW SHOT:
#Person1 wants to upgrade his system. #Person2 wants to add a painting program to his software. #Person1 wants to upgrade his hardware.
---------------------------------------------------------------------------------------------------
BASELINE HUMAN SUMMARY:
#Person1# teaches #Person2# how to upgrade software and hardware in #Person2#'s system.
Note:
- Choosing max_new_tokens=10 will make the output text too short, so the dialogue summary will be cut off.
- Setting do_sample=True and changing the temperature value gives more flexibility in the output (see the sketch below).
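To see top_k and top_p in action alongside temperature, here is a small sketch comparing a few decoding configurations on the few shot prompt; the parameter values are arbitrary illustrations, and sampled outputs will vary from run to run:

# Compare a few decoding configurations (values chosen for illustration).
configs = {
    'greedy': GenerationConfig(max_new_tokens=50),
    'temperature=1.0': GenerationConfig(max_new_tokens=50, do_sample=True, temperature=1.0),
    'top_k=50': GenerationConfig(max_new_tokens=50, do_sample=True, top_k=50),
    'top_p=0.9': GenerationConfig(max_new_tokens=50, do_sample=True, top_p=0.9),
}

for name, config in configs.items():
    output = tokenizer.decode(
        model.generate(inputs["input_ids"], generation_config=config)[0],
        skip_special_tokens=True
    )
    print(f'{name}:\n{output}\n')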
As we can see, prompt engineering can take you a long way for this use case, but it has limitations. A method like fine-tuning can help further, which I will look at in a future article.
7 Acknowledgements
I’d like to express my thanks to the wonderful Generative AI with Large Language Models course by DeepLearning.ai and AWS, which I completed, and to acknowledge the use of some images and other materials from the course in this article.