OpenAI Unveils Customizable ChatGPT Assistants: Enhancing Conversations with Tailored Precision and Efficiency


OpenAI recently introduced the ability to save customized versions of ChatGPT as pre-configured assistants, and we have had the opportunity to test it. The new feature, officially called GPTs, is technically a simple extension, yet it holds significant potential. There are, however, a few caveats.

Previously, for Plus users, it was possible to configure personal preferences, ensuring they were applied to every new chat. For instance, users could inform ChatGPT about their location, preferred language, response style, whether they preferred shorter or longer replies, formal or informal language, among other settings. While these configurations could also be adjusted directly in each chat, predefining them proved more convenient.
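When working with the API directly, these standing preferences correspond to a system message that is sent ahead of the conversation. A minimal sketch, assuming the Chat Completions message format (role/content dictionaries); the preference text itself is a hypothetical example:

```python
# Hypothetical standing preferences, expressed as a system message
# in the Chat Completions message format.
PREFERENCES = (
    "Location: Berlin. Answer in English. "
    "Keep replies short and use informal language."
)

# The system message comes first; the user's actual question follows.
messages = [
    {"role": "system", "content": PREFERENCES},
    {"role": "user", "content": "What time zone am I in?"},
]
```

The model sees the preferences on every request, which is why predefining them once is more convenient than repeating them in each chat.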

GPT Lacks Memory

A distinctive aspect of GPT, often overlooked by many users, is that the GPT models have no memory of past conversations or users. To them, each message stands alone without any context, and they respond based solely on the trained model, independent of prior inquiries. In the case of ChatGPT (unlike the GPT model itself), a front-end instance retains the chat history, encompassing both user messages and GPT responses. This entire history is automatically prepended to each new message and sent to GPT.

This process creates a cohesive chat history, enabling context-based message responses. When using the API directly with custom programs, users previously had to manually replicate this behavior if desired. While not overly complex, this behavior initially surprises many when they begin working with the API because it differs from the expected behavior of ChatGPT.
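The behavior described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual implementation; `send_to_api` is a hypothetical stand-in for the real API call (e.g. `client.chat.completions.create(...)`):

```python
def send_to_api(messages):
    # Placeholder for the actual model call; a real implementation would
    # pass the full messages list to the Chat Completions endpoint.
    return f"(reply to: {messages[-1]['content']})"

class Chat:
    """Client-side chat history, replicating what the ChatGPT front end does."""

    def __init__(self, system_prompt):
        # The system prompt carries any pre-configured instructions.
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        # Append the new user message, send the ENTIRE history so the
        # model has context, then store the reply for the next turn.
        self.messages.append({"role": "user", "content": user_text})
        reply = send_to_api(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

Each call to `ask` grows the history by two messages, which is exactly why long conversations send ever more data with every request.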

Similarly, one can now precondition ChatGPT or provide crucial additional information beforehand without needing to retrain the model. Companies have been leveraging this by introducing vector databases and embedding models, furnishing potentially helpful supplementary information to GPT beforehand. This allows GPT to better respond to queries within that context. However, this significantly inflates the input volume, and as a chat history grows longer, the data volume accompanying each new request increases.
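The retrieval step behind such setups can be sketched as follows. This is a toy illustration of the general technique, assuming a real embedding model and vector database in place of the hand-written vectors and list used here:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=1):
    # Rank stored documents by similarity to the query embedding
    # and return the text of the top k matches.
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

def build_prompt(question, context_docs):
    # Prepend the retrieved context so the model can answer from it
    # without any retraining.
    context = "\n".join(context_docs)
    return f"Use the following context:\n{context}\n\nQuestion: {question}"
```

Every retrieved document is pasted into the prompt, which is precisely where the inflated input volume mentioned above comes from.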

These units are counted in tokens: the API tallies the tokens consumed, and billing is based on the tokens used for input and output. Thus, OpenAI receives payment proportional to the resources utilized, meaning users pay more as their consumption increases. This applies to API usage; a different payment model applies when using ChatGPT via the web or smartphone app: a monthly fixed fee for the Plus subscription, irrespective of specific usage. This is a crucial point we'll revisit shortly.
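The billing arithmetic is straightforward. A small sketch using hypothetical per-1,000-token prices (the figures below are illustrative, not OpenAI's actual rates):

```python
def api_cost(input_tokens, output_tokens, in_price_per_1k, out_price_per_1k):
    # Cost grows linearly with tokens on both the input and output side,
    # which is why long chat histories become progressively more expensive.
    return (input_tokens / 1000) * in_price_per_1k + (output_tokens / 1000) * out_price_per_1k

# Example: 2,000 input tokens and 500 output tokens at hypothetical
# rates of $0.01 / $0.03 per 1,000 tokens.
cost = api_cost(2000, 500, 0.01, 0.03)
```

Because the entire chat history counts as input on every request, the input side of this formula dominates as a conversation grows.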


Carl Woodrow
A seasoned tech enthusiast and writer, Carl delves deep into emerging technologies, offering insightful analysis and reviews on the latest gadgets and trends.