How to call the OpenAI API from a Jupyter Notebook

openai
jupyter
api
llm
Author

Christian Wittmann

Published

January 27, 2024

Exploring Large Language Models (LLMs) through their web-based user interfaces (WebUIs) is indeed insightful, particularly for experimenting with various prompt engineering techniques. However, accessing LLMs via their API unlocks many additional possibilities. This approach not only allows you to craft your own applications but also enables the integration of LLMs into existing solutions. The use cases are endless: you can leverage LLMs for constructing comprehensive datasets, automating content creation, enhancing user interaction with natural language, personalizing user experiences, and more. API access essentially opens the door to a more tailored LLM experience, beyond just chatting with it.

To demonstrate how to access the OpenAI API for text generation, I created a Jupyter Notebook with all the steps, from installing the necessary Python packages, via managing access keys, to calling the API with some examples. While you can perform all the steps in the Jupyter Notebook, in this blog post I would like to explore the concepts and take a look at what lies between the lines of code, including my biggest learning: how chatting with an LLM actually works.

This is the first blog post of a series in which I am reworking the Hackers' Guide by Jeremy Howard and the accompanying notebook.

Dalle: Calling the OpenAI API on a Mac

Setup

Before we can start calling the OpenAI API, we need to set up a few things:

  • Installing Python packages
  • Getting an API key
  • Securely storing the API key

Installation

If you have not done so already, pip install the openai package:

pip install openai
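
Since the refactored API calls later in this post require version 1.x of the openai package, it may be worth checking which version you have installed, for example:

import openai

# The chat.completions calls shown below require openai >= 1.0.0
print(openai.__version__)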

Generate API key

To be able to access the OpenAI API, you need an API access key. You can obtain/generate the API key on the OpenAI website, as also explained in the docs.

How to securely store your API key

Since you do not want to put your API key into a Jupyter notebook, it is recommended that you store the API key in your Python environment using python-dotenv:

pip install python-dotenv

Using dotenv, you store your API key in an environment file which you can easily access from within your Jupyter notebook. Here is a quick example, using an example file foobar.env with the following content:

# Example
FOO="BAR"

You can import the variables like this:

from dotenv import dotenv_values

foobar_config = dotenv_values("foobar.env")
print(foobar_config)
OrderedDict([('FOO', 'BAR')])

In real life, the usage looks like this, leveraging the environment variables from the os package:

from dotenv import load_dotenv
import os

load_dotenv("foobar.env")  # This loads the .env file into the environment

foo_env_value = os.getenv('FOO')
print(foo_env_value)  # This will also print "BAR"
BAR

The final step towards real-life usage is to use .env instead of foobar.env. Therefore, you need to add the following section to your .env file:

# Open AI
OPENAI_API_KEY="My API Key"

Once you load the .env file, you are in business to call the OpenAI API:

from dotenv import load_dotenv
import os

load_dotenv(".env")
True
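
If you want to verify that the key was actually loaded without printing the secret itself, a minimal sanity check could look like this:

import os

# Fail early if the key is missing, without echoing the secret
assert os.getenv("OPENAI_API_KEY") is not None, "OPENAI_API_KEY not found in environment"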

Important Note: Make sure the .env file is not published to GitHub by adding .env to your .gitignore file:

echo ".env" >> .gitignore 

afterwards:

git add .gitignore 
git commit -m "Updated .gitignore to ignore .env files"
git push

Calling the API

Since the publication of Jeremy's Hackers' Guide, the OpenAI API has changed. Therefore, the original code needed some minor refactoring, essentially two things:

  • Replace ChatCompletion.create with chat.completions.create
  • Replace c['choices'][0]['message']['content'] with c.choices[0].message.content

#from openai import ChatCompletion,Completion  # old pre-1.0 imports
from openai import chat  # module-level client, picks up OPENAI_API_KEY from the environment

aussie_sys = "You are an Aussie LLM that uses Aussie slang and analogies whenever possible."

#c = ChatCompletion.create(  # old call
c = chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": aussie_sys},
              {"role": "user", "content": "What is money?"}])

#c['choices'][0]['message']['content']  # old dict-style access
c.choices[0].message.content
'Money, mate, is like the fuel that powers your financial engine. It\'s the cold, hard cash and digital digits you use to buy stuff, pay your bills, and live your life. It\'s a medium of exchange that keeps the economic gears churning. Think of it as the "dollarydoos" that keep the economic barbie cookin\'!'
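
As an aside, the pattern the OpenAI documentation promotes nowadays is to instantiate an explicit client object instead of using the module-level shortcut. A sketch of the equivalent call (assuming OPENAI_API_KEY is set in your environment, as above):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment by default

c = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": aussie_sys},  # aussie_sys as defined above
              {"role": "user", "content": "What is money?"}])

print(c.choices[0].message.content)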

Note for Enhanced Readability

To improve the readability of the model responses in the notebook, especially if they contain long lines of text or code, you may want to enable word wrap in your development environment.

For Visual Studio Code Users:

  • Open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P).
  • Search for Preferences: Open Settings (JSON) and select it.
  • Add "notebook.wordWrap": "on" to your settings.
  • Save the settings.json file.

Enabling word wrap will make long lines of code or text wrap to the next line, fitting within the cell’s width and eliminating the need for horizontal scrolling.

Learnings

My biggest take-away from this implementation is the realization of how the chat with an LLM actually works. It is surprisingly simple, yet I had not realized it before: the chat with an LLM is stateless, which means that ChatGPT does not keep a session open with you. Instead, the whole chat history is passed to the model as context with every new prompt. This is how the model knows what you have been talking about and can answer follow-up questions. In the example below, the earlier exchange (including a hand-written assistant reply) is simply included in the messages list:

c = chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": aussie_sys},
              {"role": "user", "content": "What is money?"},
              {"role": "assistant", "content": "Well, mate, money is like kangaroos actually."},
              {"role": "user", "content": "Really? In what way?"}])

c.choices[0].message.content
"Ah, glad you asked! Money, just like kangaroos, is all about value and trading, you see. Just as kangaroos hop around, money hops from one person to another in exchange for goods and services. It's the key to getting what you need and want in this modern world. Just like kangaroos in the outback, money roams around the economy, jumpin' here and there, makin' things happen. It's the backbone of our economic system, mate!"

Once I had understood this, I started interacting with ChatGPT differently:

  • Instead of having long chats which drifted from topic to topic, I try to keep the chats more focused. If the topic changes too much, I open up a new chat.
  • I go back more frequently to prompts which did not yield the desired result, i.e. I edit the prompt instead of asking ChatGPT to correct something. This way you can keep undesired results out of the conversation, where they would otherwise be stuck as context.

Overall, I think the technical implementation was quite easy, and the docs nicely guided me to dive one level deeper than in Jeremy's original notebook. Learning more about the inner mechanics of how chatting with an LLM actually works was the best part of this project.