
OpenAI Unveils GPT-4 Turbo And Fine-Tuning Program For GPT-4


OpenAI, the leading AI research lab, made a significant announcement today at its first-ever developer conference. The company introduced GPT-4 Turbo, an upgraded version of its popular text-generating AI model, GPT-4. OpenAI claims that GPT-4 Turbo is not only more powerful but also more cost-effective than its predecessor.

Key Takeaway

OpenAI introduces GPT-4 Turbo, an enhanced version of its text-generating AI model, GPT-4. GPT-4 Turbo offers increased power and a more cost-effective pricing structure. It boasts a larger context window and a more recent knowledge base. OpenAI also launches a fine-tuning program for GPT-4 and raises the token rate limit for customers. These updates aim to provide developers with improved text-generation capabilities and enhance the AI model’s accuracy and performance.

GPT-4 Turbo: More Power at a Lower Cost

GPT-4 Turbo comes in two variations: one that solely analyzes text and another that understands the context of both text and images. The text-analyzing model is available for preview via an API starting today, and both versions are expected to be generally available in the coming weeks.
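
For developers who want to try the preview, a request looks much like any other Chat Completions call. The sketch below is illustrative only; the model identifier gpt-4-1106-preview is an assumption based on OpenAI's naming pattern and may differ from what a given account exposes.

```python
# Minimal sketch of calling the GPT-4 Turbo preview through the
# OpenAI Python SDK (v1.x). The model name is an assumption; check
# OpenAI's model list for the identifier available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed GPT-4 Turbo preview identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize today's OpenAI DevDay announcements."},
    ],
)

print(response.choices[0].message.content)
```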

GPT-4 Turbo is priced at $0.01 per 1,000 input tokens (roughly 750 words) and $0.03 per 1,000 output tokens. Tokens are fragments of raw text; the word “fantastic,” for instance, might be split into “fan,” “tas,” and “tic.” Pricing for the image-processing version of GPT-4 Turbo depends on image size, with a 1080×1080-pixel image costing $0.00765, according to OpenAI.
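
To put those numbers in perspective, the per-request cost is simple arithmetic on the token counts returned with each response. The helper below is a minimal sketch using the text-only GPT-4 Turbo prices quoted above, not an official billing calculator.

```python
# Back-of-the-envelope cost estimate for a text-only GPT-4 Turbo call,
# using the prices quoted above ($0.01 / 1K input, $0.03 / 1K output).
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 2,000-token prompt with a 500-token completion
print(f"${estimate_cost(2000, 500):.4f}")  # -> $0.0350
```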

OpenAI notes, “We optimized performance so we’re able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.”

Enhancements and Benefits of GPT-4 Turbo

GPT-4 Turbo brings several improvements over its predecessor. A key enhancement is a more recent knowledge base: the model was trained on data up to April 2023, allowing it to answer questions about more recent events, whereas GPT-4 was trained on web data only up to September 2021.

In terms of context window, GPT-4 Turbo offers a significant boost. The context window refers to the amount of preceding text the model considers before generating additional text. GPT-4 Turbo boasts a 128,000-token context window, four times larger than GPT-4’s context window and the largest among commercially available models.
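
For practical purposes, the question a 128,000-token window raises is simply how many tokens a given document occupies. One way to check, sketched below, is OpenAI's open-source tiktoken library with the cl100k_base encoding used by the GPT-4 family; whether the new model uses exactly that encoding is an assumption, and the file name is a placeholder.

```python
# Rough token count for a document, to see whether it fits within the
# 128,000-token context window. cl100k_base is the encoding used by the
# GPT-4 family; assuming it also applies to GPT-4 Turbo.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

with open("long_report.txt", encoding="utf-8") as f:  # hypothetical document
    text = f.read()

num_tokens = len(encoding.encode(text))
print(f"{num_tokens} tokens; fits in 128K window: {num_tokens <= 128_000}")
```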

GPT-4 Turbo also introduces a “JSON mode,” which ensures the model responds with valid JSON, a feature particularly useful for web apps that exchange structured data. New parameters also make completions more reproducible and let the model return log probabilities for the tokens it generates.
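
In practice, JSON mode is an extra request parameter. The sketch below shows how the feature is exposed as a response_format argument in the Chat Completions API; the model identifier and prompt are placeholders, and the prompt itself should mention JSON for the mode to be accepted.

```python
# Sketch of a JSON-mode request. The response_format parameter asks the
# model to emit syntactically valid JSON; the prompt should itself
# mention JSON. The model name is an assumed preview identifier.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List three GPT-4 Turbo features as JSON."},
    ],
)

data = json.loads(response.choices[0].message.content)  # valid JSON, safe to parse
print(data)
```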

OpenAI stated, “GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats.”

GPT-4 Fine-Tuning Program

In addition to the launch of GPT-4 Turbo, OpenAI also announced an experimental access program for fine-tuning GPT-4. This program involves more oversight and guidance from OpenAI teams compared to the fine-tuning program for GPT-3.5, owing to technical challenges.

OpenAI’s blog post acknowledges that preliminary results indicate GPT-4 fine-tuning requires more effort to achieve meaningful improvements compared to the substantial gains seen with GPT-3.5 fine-tuning.
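
For reference, the fine-tuning workflow itself mirrors the one already used for GPT-3.5: upload a JSONL file of example conversations, then start a job. The sketch below assumes an organization has been granted experimental GPT-4 fine-tuning access; the base-model name and training file are placeholders.

```python
# Sketch of the fine-tuning workflow (same API shape used for GPT-3.5).
# Assumes experimental GPT-4 fine-tuning access has been granted;
# "gpt-4-0613" is a placeholder for whatever base model is made available.
from openai import OpenAI

client = OpenAI()

# training.jsonl: one {"messages": [...]} chat example per line (hypothetical file)
training_file = client.files.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4-0613",
)
print(job.id, job.status)
```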

Token Rate Limit Increase

OpenAI also announced that the tokens-per-minute rate limit for all paying GPT-4 customers has been doubled. Pricing, however, remains unchanged: $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for the GPT-4 model with an 8,000-token context window, or $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens for GPT-4 with a 32,000-token context window.
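
Even with the doubled limit, bursty workloads can still hit the tokens-per-minute ceiling, so clients typically retry with exponential backoff. The snippet below is one common pattern, not an OpenAI-prescribed one.

```python
# Simple exponential backoff around a chat call, for when a burst of
# requests exceeds the tokens-per-minute limit. Illustrative only.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def chat_with_backoff(messages, model="gpt-4", max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(delay)
            delay *= 2  # double the wait after each rate-limit error
    raise RuntimeError("Rate limited after repeated retries")
```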
