This is copied from the Azure samples GitHub repo because those .ipynb files are, puzzlingly, blocked on some corporate networks.

Azure OpenAI ChatCompletion API

The source of this is Azure's GitHub.

# import os module & the OpenAI Python library for calling the OpenAI API
# please make sure you have installed required libraries via pip install -r requirements.txt
import os
import openai
import json
import tiktoken

# Load config values
with open(r'config.json') as config_file:
    config_details = json.load(config_file)
    
# Setting up the deployment name
chatgpt_model_name = config_details['CHATGPT_MODEL'] 

# This is set to `azure`
openai.api_type = "azure"

# The API key for your Azure OpenAI resource.
openai.api_key = os.getenv("OPENAI_API_KEY")

# The base URL for your Azure OpenAI resource. e.g. "https://<your resource name>.openai.azure.com"
openai.api_base = config_details['OPENAI_API_BASE']

# Currently the Chat Completions API has the following version available: 2023-03-15-preview
openai.api_version = config_details['OPENAI_API_VERSION']
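The notebook never shows `config.json` itself. A minimal sketch consistent with the keys read above — the values below are placeholders, not real deployment names or endpoints — might look like this:

```python
import json

# Hypothetical config.json contents: the keys match what the notebook reads;
# the values are illustrative placeholders only.
sample_config = {
    "CHATGPT_MODEL": "gpt-35-turbo",  # your Azure OpenAI deployment name
    "OPENAI_API_BASE": "https://<your resource name>.openai.azure.com",
    "OPENAI_API_VERSION": "2023-03-15-preview",
}

with open("config.json", "w") as config_file:
    json.dump(sample_config, config_file, indent=2)

# Re-load it the same way the notebook does, to confirm the shape round-trips.
with open("config.json") as config_file:
    config_details = json.load(config_file)

print(config_details["CHATGPT_MODEL"])
```

The API key is deliberately not stored in the file; the notebook reads it from the `OPENAI_API_KEY` environment variable instead.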

# Create the system message for ChatGPT

base_system_message = "You are a helpful assistant."

system_message = base_system_message.strip()
print(system_message)

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"):
    encoding = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # every message follows <|im_start|>{role/name}\n{content}<|im_end|>\n
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":  # if there's a name, the role is omitted
                num_tokens += -1  # role is always required and always 1 token
    num_tokens += 2  # every reply is primed with <|im_start|>assistant
    return num_tokens
# Defining a function to send the prompt to the ChatGPT model
# More info : https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions
def send_message(messages, model_name, max_response_tokens=500):
    response = openai.ChatCompletion.create(
        engine=model_name,
        messages=messages,
        temperature=0.5,
        max_tokens=max_response_tokens,
        top_p=0.9,
        frequency_penalty=0,
        presence_penalty=0,
    )
    return response['choices'][0]['message']['content']

# Defining a function to print out the conversation in a readable format
def print_conversation(messages):
    for message in messages:
        print(f"[{message['role'].upper()}]")
        print(message['content'])
        print()


# This is the first user message that will be sent to the model. Feel free to update this.
user_message = "I want to write a blog post about the impact of AI on the future of work."
# Create the list of messages. role can be "system", "user", or "assistant"
messages=[
    {"role": "system", "content": system_message},
    {"role": "user", "name":"example_user", "content": user_message}
]
token_count = num_tokens_from_messages(messages)
print(f"Token count: {token_count}")
# Returns: Token count: 37
max_response_tokens = 500

response = send_message(messages, chatgpt_model_name, max_response_tokens)
messages.append({"role": "assistant", "content": response})

print_conversation(messages)

[SYSTEM] You are a helpful assistant.

[USER] I want to write a blog post about the impact of AI on the future of work.

[ASSISTANT] That sounds like an interesting topic! Here are some points you could consider including in your blog post:

  1. The rise of automation and how it is changing the workforce
  2. The potential benefits of AI in the workplace, such as increased productivity and efficiency
  3. The potential drawbacks of AI, such as job displacement and the need for retraining
  4. The importance of ethical considerations in the development and implementation of AI in the workplace
  5. The role of humans in the future of work and how they can work alongside AI to achieve better outcomes.

I hope this helps! Let me know if you need any further assistance.

Continue the conversation

When working with the ChatGPT model, it’s your responsibility to make sure you stay within the token limits of the model. The model can handle a maximum of 4096 tokens, and this includes the tokens in the prompt as well as the max_tokens you’re requesting from the model. If you exceed these limits, the model will return an error.

You should also consider the trade-off between maintaining more of the conversation history and the cost/latency that you’ll incur by including those tokens in the prompt. Shorter prompts are cheaper and faster. The amount of the previous conversation you include also makes a difference in how the model responds.

In this notebook, we’ll show two strategies for managing the conversation history when working with the ChatGPT model.

Option 1: Keep the conversation within a given token limit
Option 2: Keep the conversation within a given number of turns

Keep the conversation within a given token limit

overall_max_tokens is the maximum number of tokens that you want to include in the prompt. The maximum this can be set to is 4096, but you can also consider reducing it to cut the cost and latency of the request.


overall_max_tokens = 4096
prompt_max_tokens = overall_max_tokens - max_response_tokens

user_message = "The target audience for the blog post should be business leaders working in the tech industry."
#user_message = "Let's talk about generative AI and keep the tone informational but also friendly."
#user_message = "Show me a few more examples"
messages.append({"role": "user", "content": user_message})

token_count = num_tokens_from_messages(messages)
print(f"Token count: {token_count}")

# remove first message while over the token limit
while token_count > prompt_max_tokens:
    messages.pop(0)
    token_count = num_tokens_from_messages(messages)

response = send_message(messages, chatgpt_model_name, max_response_tokens)

messages.append({"role": "assistant", "content": response})
print_conversation(messages)

Token count: 191

[SYSTEM] You are a helpful assistant.

[USER] I want to write a blog post about the impact of AI on the future of work.

[ASSISTANT] That sounds like an interesting topic! Here are some points you could consider including in your blog post:

  1. The rise of automation and how it is changing the workforce
  2. The potential benefits of AI in the workplace, such as increased productivity and efficiency
  3. The potential drawbacks of AI, such as job displacement and the need for retraining
  4. The importance of ethical considerations in the development and implementation of AI in the workplace
  5. The role of humans in the future of work and how they can work alongside AI to achieve better outcomes.

I hope this helps! Let me know if you need any further assistance.

[USER] The target audience for the blog post should be business leaders working in the tech industry.

[ASSISTANT] Great! In that case, you may want to tailor your blog post to address the specific concerns and interests of business leaders in the tech industry. Here are some additional points you could consider including:

  1. The competitive advantage that AI can provide to businesses in the tech industry
  2. The potential impact of AI on business models and revenue streams
  3. The importance of investing in AI research and development to stay ahead of the competition
  4. The need for a strategic approach to integrating AI into business operations
  5. The potential for AI to improve customer experiences and drive innovation.

By addressing these topics, you can provide valuable insights and guidance to business leaders in the tech industry who are looking to harness the power of AI to drive growth and success.
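The notebook names a second strategy — keeping the conversation within a given number of turns — but only demonstrates the token-limit approach. A minimal sketch of Option 2, assuming the same messages structure as above (the trim_to_max_turns helper and its max_turns value are illustrative, not from the original sample):

```python
# Option 2 (sketch): keep the system message plus only the last `max_turns`
# user/assistant exchanges before sending the prompt.
def trim_to_max_turns(messages, max_turns=3):
    # Preserve any system messages; they should be sent with every request.
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    # One turn = one user message plus one assistant reply -> 2 entries.
    return system + turns[-max_turns * 2:]

# Toy conversation: a system message followed by 5 user/assistant exchanges.
history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(5):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_to_max_turns(history, max_turns=2)
print(len(trimmed))  # 1 system message + 2 turns * 2 messages = 5
```

Unlike Option 1, this bounds the prompt by conversation shape rather than by exact token count, so you would still want a token check before sending if individual messages can be long.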

Azure OpenAI Completion API

The source of this is Azure's GitHub.


# import os module & the OpenAI Python library for calling the OpenAI API
# please make sure you have installed required libraries via pip install -r requirements.txt
import os
import openai
import json
# Load config values
with open(r'config.json') as config_file:
    config_details = json.load(config_file)
    
# Setting up the deployment name
chatgpt_model_name = config_details['CHATGPT_MODEL']

# This is set to `azure`
openai.api_type = "azure"

# The API key for your Azure OpenAI resource.
openai.api_key = os.getenv("OPENAI_API_KEY")

# The base URL for your Azure OpenAI resource. e.g. "https://<your resource name>.openai.azure.com"
openai.api_base = config_details['OPENAI_API_BASE']

# The Azure OPENAI API version.
openai.api_version = config_details['OPENAI_API_VERSION']

base_system_message = """
You are a marketing writing assistant. You help come up with creative content ideas and content like marketing emails, blog posts, tweets, ad copy, listicles, product FAQs, and product descriptions. 
You write in a friendly yet professional tone and you can tailor your writing style that best works for a user-specified audience. 

Additional instructions:
- Make sure you understand your user's audience so you can best write the content.
- Ask clarifying questions when you need additional information. Examples include asking about the audience or medium for the content.
- Don't write any content that could be harmful.
- Don't write any content that could be offensive or inappropriate.
- Don't write any content that speaks poorly of any product or company.
"""

system_message = f"<|im_start|>system\n{base_system_message.strip()}\n<|im_end|>"
print(system_message)

<|im_start|>system You are a marketing writing assistant. You help come up with creative content ideas and content like marketing emails, blog posts, tweets, ad copy, listicles, product FAQs, and product descriptions. You write in a friendly yet professional tone and you can tailor your writing style that best works for a user-specified audience.

Additional instructions:

  • Make sure you understand your user’s audience so you can best write the content.
  • Ask clarifying questions when you need additional information. Examples include asking about the audience or medium for the content.
  • Don’t write any content that could be harmful.
  • Don’t write any content that could be offensive or inappropriate.
  • Don’t write any content that speaks poorly of any product or company.
<|im_end|>

# Defining a function to create the prompt from the system message and the messages
# The function assumes `messages` is a list of dictionaries with `sender` and `text` keys
# Example: messages = [{"sender": "user", "text": "I want to write a blog post about my company."}]
def create_prompt(system_message, messages):
    prompt = system_message
    for message in messages:
        prompt += f"\n<|im_start|>{message['sender']}\n{ message['text']}\n<|im_end|>"
    prompt += "\n<|im_start|>assistant\n"
    return prompt
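As a quick sanity check (not part of the original sample), the helper above can be exercised on a toy message list to see the ChatML-style string it builds. The block below repeats the function definition so it runs standalone:

```python
# Self-contained copy of create_prompt from above, exercised on a toy input.
def create_prompt(system_message, messages):
    prompt = system_message
    for message in messages:
        prompt += f"\n<|im_start|>{message['sender']}\n{message['text']}\n<|im_end|>"
    prompt += "\n<|im_start|>assistant\n"
    return prompt

toy_system = "<|im_start|>system\nYou are a helpful assistant.\n<|im_end|>"
toy_messages = [{"sender": "user", "text": "Hello!"}]

prompt = create_prompt(toy_system, toy_messages)
print(prompt)
```

Note that the prompt ends with an opening `<|im_start|>assistant` tag and no closing tag: the model continues from there, and the `stop=['<|im_end|>']` argument used later tells the Completion API where the assistant's turn ends.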
import tiktoken 

# Defining a function to estimate the number of tokens in a prompt
def estimate_tokens(prompt):
    cl100k_base = tiktoken.get_encoding("cl100k_base") 

    enc = tiktoken.Encoding( 
        name="chatgpt",  
        pat_str=cl100k_base._pat_str, 
        mergeable_ranks=cl100k_base._mergeable_ranks, 
        special_tokens={ 
            **cl100k_base._special_tokens, 
            "<|im_start|>": 100264, 
            "<|im_end|>": 100265
        } 
    ) 

    tokens = enc.encode(prompt,  allowed_special={"<|im_start|>", "<|im_end|>"})
    return len(tokens)

# Estimate the number of tokens in the system message. Tokens in the system message will be sent in every request.
token_count = estimate_tokens(system_message)
print("Token count: {}".format(token_count))
# Returns: Token count: 152
# Defining a function to send the prompt to the ChatGPT model
def send_message(prompt, model_name, max_response_tokens=500):
    response = openai.Completion.create(
        engine=model_name,
        prompt=prompt,
        temperature=0.5,
        max_tokens=max_response_tokens,
        top_p=0.9,
        frequency_penalty=0,
        presence_penalty=0,
        stop=['<|im_end|>']
    )
    return response['choices'][0]['text'].strip()

# Defining a function to print out the conversation in a readable format
def print_conversation(messages):
    for message in messages:
        print(f"[{message['sender'].upper()}]")
        print(message['text'])
        print()

# 3.0 Start the conversation
# This is the first message that will be sent to the model. Feel free to update this.
user_message = "I want to write a blog post about the impact of AI on the future of work."
# Create the list of messages. Sender can be either "user" or "assistant"
messages = [{"sender": "user", "text": user_message}]

# Create the full prompt
prompt = create_prompt(system_message, messages)

print(prompt)

<|im_start|>system You are a marketing writing assistant. You help come up with creative content ideas and content like marketing emails, blog posts, tweets, ad copy, listicles, product FAQs, and product descriptions. You write in a friendly yet professional tone and you can tailor your writing style that best works for a user-specified audience.

Additional instructions:

  • Make sure you understand your user’s audience so you can best write the content.
  • Ask clarifying questions when you need additional information. Examples include asking about the audience or medium for the content.
  • Don’t write any content that could be harmful.
  • Don’t write any content that could be offensive or inappropriate.
  • Don’t write any content that speaks poorly of any product or company.
<|im_end|>
<|im_start|>user
I want to write a blog post about the impact of AI on the future of work.
<|im_end|>
<|im_start|>assistant

token_count = estimate_tokens(prompt)
print(f"Token count: {token_count}")
# returns Token count: 179
max_response_tokens = 500

response = send_message(prompt, chatgpt_model_name, max_response_tokens)
messages.append({"sender": "assistant", "text": response})
print_conversation(messages)

[USER] I want to write a blog post about the impact of AI on the future of work.

[ASSISTANT] Great idea! Before we start, can you tell me more about your target audience? Are they professionals in a specific industry or the general public? This information will help me tailor the tone and language of the post to best engage and inform your readers.

Continue the conversation

When working with the ChatGPT model, it’s your responsibility to make sure you stay within the token limits of the model. The model can handle a maximum of 4096 tokens, and this includes the tokens in the prompt as well as the max_tokens you’re requesting from the model. If you exceed these limits, the model will return an error.

You should also consider the trade-off between maintaining more of the conversation history and the cost/latency that you’ll incur by including those tokens in the prompt. Shorter prompts are cheaper and faster. The amount of the previous conversation you include also makes a difference in how the model responds.

In this notebook, we’ll show two strategies for managing the conversation history when working with the ChatGPT model.

Keep the conversation within a given token limit

overall_max_tokens is the maximum number of tokens that you want to include in the prompt. The maximum this can be set to is 4096, but you can also consider reducing it to cut the cost and latency of the request.


overall_max_tokens = 4096
prompt_max_tokens = overall_max_tokens - max_response_tokens
# You can continue the conversation below by editing the user_message and running the cell as many times as you would like.

user_message = "The target audience for the blog post should be business leaders working in the tech industry."
#user_message = "Let's talk about generative AI and keep the tone informational but also friendly."
#user_message = "Show me a few more examples"
messages.append({"sender": "user", "text": user_message})

prompt = create_prompt(system_message, messages)
token_count = estimate_tokens(prompt)
print(f"Token count: {token_count}")

# remove first message while over the token limit
while token_count > prompt_max_tokens:
    messages.pop(0)
    prompt = create_prompt(system_message, messages)
    token_count = estimate_tokens(prompt)

response = send_message(prompt, chatgpt_model_name, max_response_tokens)

messages.append({"sender": "assistant", "text": response})
print_conversation(messages)
# Token count: 256

[USER] I want to write a blog post about the impact of AI on the future of work.

[ASSISTANT] Great idea! Before we start, can you tell me more about your target audience? Are they professionals in a specific industry or the general public? This information will help me tailor the tone and language of the post to best engage and inform your readers.

[USER] The target audience for the blog post should be business leaders working in the tech industry.

[ASSISTANT] Got it! Here’s a draft for the blog post:

Title: The Impact of AI on the Future of Work: What Business Leaders in the Tech Industry Need to Know

Introduction: Artificial intelligence (AI) is changing the way we live and work. With the rise of automation and machine learning, many jobs are becoming obsolete, while new ones are being created. As a business leader in the tech industry, it’s important to understand the impact of AI on the future of work, so you can stay ahead of the curve and prepare your organization for the changes to come.

Body:

  1. The Benefits of AI in the Workplace: In this section, we’ll discuss the ways in which AI is already being used to improve efficiency, productivity, and accuracy in the workplace. From chatbots to predictive analytics, there are many benefits to incorporating AI into your organization.

  2. The Challenges of AI in the Workplace: While AI has many benefits, it also presents a number of challenges. One of the biggest concerns is the potential loss of jobs due to automation. We’ll explore this issue in depth, as well as other challenges such as data privacy and security.

  3. The Future of Work: In this section, we’ll discuss what the future of work might look like with the continued rise of AI. We’ll explore the types of jobs that are likely to be impacted the most, as well as the skills that will be in high demand. We’ll also look at how organizations can prepare for this future, including investing in employee training and development.

Conclusion: AI is already having a significant impact on the workplace, and this is only going to continue in the years to come. As a business leader in the tech industry, it’s important to stay informed about the latest developments in AI and to be proactive in preparing your organization for the changes to come. By doing so, you can ensure that your organization remains competitive and successful in the years ahead.