Using OpenAI with Python

To install the OpenAI Python library:

!pip install openai
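If you need to match one of the two library versions covered below, you can pin the version when installing (the exact pin here is illustrative):

!pip install openai==0.27.0   # or: !pip install "openai>=1.0.0" for the newer client-based interface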

The library needs to be configured with your account's secret key.

You can either set it as the OPENAI_API_KEY environment variable in your shell before starting Python:

export OPENAI_API_KEY='sk-...'

Or, set openai.api_key to its value:

import openai
openai.api_key = "sk-..."
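Hard-coding the key works for quick experiments, but a common alternative is to read it from the environment at runtime. A minimal sketch, assuming python-dotenv is installed and a local .env file contains OPENAI_API_KEY:

import os
import openai
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # copies variables from .env into the process environment
openai.api_key = os.environ["OPENAI_API_KEY"]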

With the OpenAI library v0.27.0 (the legacy interface):

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0, # this is the degree of randomness of the model's output 
    )
    return response.choices[0].message["content"]

With the OpenAI library v1.0.0 and later:

client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0
    )
    return response.choices[0].message.content

Simple prompting

response = get_completion("The capital for France is")
    print(response)

If you want the model to work with the individual characters in a string (e.g. counting the 'r's in raspberry), put a hyphen between each character so that the characters are tokenized separately.

response = get_completion("How many 'r's are there in r-a-s-b-e-r-r-y")

You can separate the prompts or messages for the system, user, and assistant roles. System messages typically take the form 'You are a [role]'; user messages carry the instructions. You can use assistant messages to tell the model what it previously said if you want to continue a conversation; a brief sketch of this appears after the example below.

def get_completion_from_messages(messages, 
                                 model="gpt-3.5-turbo", 
                                 temperature=0, 
                                 max_tokens=500):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature, # this is the degree of randomness of the model's output
        max_tokens=max_tokens, # the maximum number of tokens the model can output
    )
    return response.choices[0].message["content"]
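The helper above uses the legacy v0.27.0 interface. With the v1.0.0 client created earlier, an equivalent helper (the name get_completion_from_messages_v1 is just illustrative) would mirror the same pattern:

def get_completion_from_messages_v1(messages,
                                    model="gpt-3.5-turbo",
                                    temperature=0,
                                    max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,  # the maximum number of tokens the model can output
    )
    return response.choices[0].message.content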
messages = [
    {'role': 'system',
     'content': """You are an assistant who responds in the style of Miss Piggy."""},
    {'role': 'user',
     'content': """write me a very short article about a factory toenail"""},
]
response = get_completion_from_messages(messages, temperature=1)
print(response)
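To continue the conversation, you could append the model's reply as an assistant message and then add a new user message. A minimal sketch (the follow-up prompt is hypothetical):

# Reuse the earlier messages, adding the previous reply as an 'assistant' turn
# so the model has the conversational context for the next instruction.
followup_messages = messages + [
    {'role': 'assistant', 'content': response},
    {'role': 'user', 'content': "Now make it even shorter."},  # hypothetical follow-up
]
followup = get_completion_from_messages(followup_messages, temperature=1)
print(followup)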

Source: Building Systems with the ChatGPT API