Unlocking the Full Potential of Language Models with ChatGPT Plugins

Welcome to the digital playground! Today we’re excited to talk about ChatGPT plugins and how they can enhance your experience with language models. OpenAI has added initial support for plugins to ChatGPT: tools designed specifically to give language models access to up-to-date information, run computations, or use third-party services. With plugins, users can unlock a vast range of use cases and join a community shaping the future of human-AI interaction.

Language models today are still limited, and the only information they can learn from is their training data. This information can be out-of-date and is one-size-fits-all across applications. Plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data. In response to a user’s explicit request, plugins can also enable language models to perform safe, constrained actions on their behalf, increasing the usefulness of the system overall.

OpenAI is rolling out plugins in a gradual, iterative manner to ensure safety and alignment challenges are addressed, starting with a small set of users and expanding access as it learns more. Plugin developers invited off the waitlist can use OpenAI’s documentation to build a plugin for ChatGPT. The first plugins have been created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.
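
To make the development flow concrete, here is a minimal sketch of what a plugin backend might look like. It assumes the documented ai-plugin.json manifest format and uses a hypothetical to-do list API served with FastAPI; the field values, route names, and port are illustrative, not taken from OpenAI’s documentation.

```python
# Minimal sketch of a hypothetical ChatGPT plugin backend.
# Requires: pip install fastapi uvicorn
from fastapi import FastAPI

app = FastAPI()

# In-memory store for the illustrative to-do plugin (hypothetical example).
TODOS: list[str] = []

# Manifest that ChatGPT fetches to learn what the plugin does and where
# its OpenAPI description lives (field values below are placeholders).
PLUGIN_MANIFEST = {
    "schema_version": "v1",
    "name_for_human": "TODO Plugin",
    "name_for_model": "todo",
    "description_for_human": "Manage a simple to-do list.",
    "description_for_model": "Plugin for adding and listing to-do items.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "http://localhost:3333/openapi.json"},
    "logo_url": "http://localhost:3333/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "http://example.com/legal",
}


@app.get("/.well-known/ai-plugin.json")
def manifest():
    # ChatGPT retrieves the plugin manifest from this well-known path.
    return PLUGIN_MANIFEST


@app.post("/todos")
def add_todo(item: str):
    # An endpoint the model can call on the user's behalf.
    TODOS.append(item)
    return {"todos": TODOS}


@app.get("/todos")
def list_todos():
    return {"todos": TODOS}

# Run with: uvicorn plugin_server:app --port 3333
# FastAPI serves the OpenAPI description at /openapi.json automatically.
```

ChatGPT reads the manifest and the linked OpenAPI description to decide when and how to call the plugin’s endpoints on the user’s behalf.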

OpenAI is also hosting two plugins itself: a web browser and a code interpreter. The web browser plugin allows language models to read information from the internet, expanding the range of content they can discuss. The code interpreter plugin provides a working Python interpreter in a sandboxed, firewalled execution environment, and supports tasks such as solving mathematical problems, data analysis and visualization, and converting files between formats.
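
As a purely illustrative example, the snippet below shows the kind of task one might hand to the code interpreter: summarizing an uploaded CSV, plotting a column, and converting the file to another format. The file name and column names are hypothetical, and pandas and matplotlib are assumed to be available in the sandbox.

```python
# Illustrative only: the kind of analysis a user might ask the code
# interpreter to run, not the sandbox implementation itself.
import pandas as pd
import matplotlib.pyplot as plt

# Load an uploaded CSV and print summary statistics.
df = pd.read_csv("sales.csv")  # hypothetical uploaded file
print(df.describe())

# Visualize one column against another and save the chart.
df.plot(x="month", y="revenue", kind="bar")  # hypothetical columns
plt.tight_layout()
plt.savefig("revenue_by_month.png")

# Convert the same data to another format (CSV -> JSON).
df.to_json("sales.json", orient="records")
```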

Connecting language models to external tools introduces new opportunities as well as significant new risks. Plugins offer the potential to tackle several challenges associated with large language models, including “hallucinations,” keeping up with recent events, and accessing (with permission) proprietary information sources. With explicit access to external data, language models can ground their responses in evidence-based references. At the same time, plugins raise new safety challenges, since a model could take harmful or unintended actions through them.

From day one, OpenAI has put several safeguards in place to mitigate these risks, including red-teaming exercises, strict network controls that keep the sandboxed code interpreter from reaching the external internet, and resource limits on each session. As its safety systems improve, OpenAI plans to let developers using its models integrate plugins into their own applications beyond ChatGPT.

OpenAI invites users to try the code interpreter integration and discover other useful tasks. Researchers interested in studying safety risks or mitigations in this area are encouraged to make use of OpenAI’s Researcher Access Program, and developers and researchers can submit plugin-related safety and capability evaluations as part of the recently open-sourced Evals framework.

In conclusion, ChatGPT plugins offer a new way to enhance language models with up-to-date information, computation, and third-party services. OpenAI is rolling them out gradually and iteratively so that safety and alignment challenges can be addressed along the way, and with everyone’s help the result can be something both useful and safe.

Author: Nardeep Singh
