Since OpenAI's announcement in September regarding its first-ever global developer conference, "OpenAI DevDay," the event has garnered widespread attention and speculation in the tech industry. Despite CEO Sam Altman's assurance on social media that there won't be a new release of a major model like GPT-4.5/5, his description of "some amazing new things" on the horizon has left the community buzzing with excitement.
The event took place on November 6, 2023, in San Francisco, USA, gathering hundreds of developers from around the world for a first look at new tools and an exchange of ideas with the OpenAI team.
In November 2022, OpenAI launched ChatGPT, causing quite a sensation worldwide. In March of this year, GPT-4 was released, and it remains the most powerful model in the industry to date. Over the following months, OpenAI added voice and vision features. Today, roughly two million developers worldwide are building a wide range of applications on OpenAI's API, and over 92% of Fortune 500 companies use its products.
Given this context, developers worldwide are eager to learn about OpenAI's latest updates and what new offerings are on the horizon. The opening ceremony was hosted by Sam Altman, who had just an hour for his speech, but the content he unveiled was nothing short of groundbreaking.
GPT-4 Turbo
The first major announcement of the day was the upgraded GPT-4 Turbo. Over the past year, OpenAI has actively gathered feedback from developers worldwide, resulting in six significant enhancements to GPT-4 Turbo:
1. Increased Context Length: While GPT-4 supported 8k tokens and, in specific cases, 32k tokens, GPT-4 Turbo can handle an impressive context length of up to 128k tokens, roughly equivalent to a 300-page book.
2. Enhanced Control: The new JSON Mode ensures the model responds with valid JSON, simplifying API calls. A reproducible-outputs feature is also rolling out, returning consistent outputs when the same seed and parameters are used, and in the coming weeks the API will add the ability to view log probabilities for output tokens.
3. Expanded Knowledge: While GPT-4's knowledge was limited to data up until 2021, GPT-4 Turbo's knowledge extends up to April 2023 and will continue to be updated. The platform will enable knowledge retrieval, allowing developers to integrate external documents or database information.
4. New Modalities: OpenAI also introduced vision support in GPT-4 Turbo, the DALL·E 3 API, and a text-to-speech API. These additions let applications accept image inputs, generate images and designs, and produce natural, human-like speech from text, with six preset voice options.
5. Customization: Model fine-tuning is extended to the 16k version, and active developers are invited to participate in GPT-4 fine-tuning experiments.
6. Higher Rate Limits: Tokens-per-minute limits for existing GPT-4 customers are doubling, allowing them to achieve more, and users can request further rate-limit changes from within their API accounts.
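Several of the controls above are plain request parameters. The sketch below assembles a hypothetical chat-completion payload combining JSON Mode and a fixed seed; the parameter names mirror those announced at DevDay, while the model name, seed value, and prompt are illustrative assumptions, not a definitive client implementation.

```python
def build_request(user_prompt: str, seed: int = 42) -> dict:
    """Assemble a chat-completion payload using JSON Mode and a fixed seed.

    Parameter names follow the DevDay announcements; the model name and
    seed value here are illustrative, not prescriptive.
    """
    return {
        "model": "gpt-4-1106-preview",               # 128k-context GPT-4 Turbo
        "response_format": {"type": "json_object"},  # JSON Mode: valid JSON out
        "seed": seed,                                # reproducible outputs
        "messages": [
            {"role": "system", "content": "Reply only in JSON."},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_request("List three uses of a 128k context window.")
```

In a real application this dictionary would be passed to the chat-completions endpoint; building it separately makes the new knobs easy to see.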
Price Reduction
One of the most common and forceful pieces of developer feedback was that the service was expensive, so GPT-4 Turbo has undergone significant price reductions: 1 cent per 1,000 prompt tokens and 3 cents per 1,000 completion tokens. Overall, users can expect roughly a two-thirds cost reduction compared with GPT-4. Other products have also seen price drops, and speed has improved.
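As a quick sanity check on that arithmetic, here is a minimal cost comparison. It assumes GPT-4's published 8k pricing of 3 cents per 1,000 prompt tokens and 6 cents per 1,000 completion tokens against Turbo's 1 and 3 cents; the sample workload is made up for illustration.

```python
def cost_usd(prompt_tokens: int, completion_tokens: int,
             prompt_rate: float, completion_rate: float) -> float:
    """Cost in USD, given per-1,000-token rates for prompt and completion."""
    return (prompt_tokens / 1000) * prompt_rate \
         + (completion_tokens / 1000) * completion_rate

# A prompt-heavy workload: 100k prompt tokens, 10k completion tokens.
gpt4 = cost_usd(100_000, 10_000, 0.03, 0.06)   # GPT-4 8k rates (assumed)
turbo = cost_usd(100_000, 10_000, 0.01, 0.03)  # GPT-4 Turbo rates
print(f"GPT-4: ${gpt4:.2f}, Turbo: ${turbo:.2f}")
```

For this prompt-heavy mix the saving works out to about 64%, consistent with the roughly two-thirds figure above; completion-heavy workloads save somewhat less, since completion tokens dropped only by half.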
GPTs
GPTs are custom versions of ChatGPT designed for specific purposes, such as programming, creating presentations, or simply for entertainment. The range of applications is extensive, and these GPTs can be made available for others to use.
What's especially enticing is the low, if not almost negligible, entry barrier. During the presentation, it was demonstrated that developers can build GPTs using a conversational approach, opening up limitless possibilities. On stage, Sam Altman built a GPT that provided business advice for startup founders in just a matter of minutes, answering questions coherently.
In the near future, OpenAI plans to introduce the GPTs marketplace, resembling Apple's App Store. Users will be able to shop for GPTs they need, and the revenue will be shared between the platform and developers. It's not hard to imagine a vast GPTs ecosystem emerging, providing users with a variety of convenient and intelligent services.
Assistants API
The Assistants API is designed to let developers build agent-like assistant experiences inside their own applications, akin to having a personal assistant on call. Assistants are configured much like the GPTs described above, using natural-language instructions, and conversations are managed through persistent threads.
Additionally, the Assistants API supports retrieval, which pulls in timely or domain-specific information; a code interpreter that writes and runs Python code in a sandboxed environment; and improved function calling, which can invoke multiple functions in a single turn.
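The lifecycle behind that description can be sketched as a sequence of REST calls: create an assistant with its tools, open a thread, append the user's message, start a run, then read back the reply. The endpoint paths below follow the Assistants API as announced, but the payload details, names, and model are illustrative assumptions rather than a working client.

```python
def assistants_flow(name: str, instructions: str, user_message: str) -> list:
    """Return the ordered (method, path, payload) calls for one assistant turn.

    Placeholders like {thread_id} would be filled in from earlier responses
    by a real client; this sketch only captures the shape of the flow.
    """
    return [
        ("POST", "/v1/assistants", {               # 1. define the assistant
            "name": name,
            "instructions": instructions,
            "model": "gpt-4-1106-preview",
            "tools": [{"type": "retrieval"}, {"type": "code_interpreter"}],
        }),
        ("POST", "/v1/threads", {}),               # 2. open a persistent thread
        ("POST", "/v1/threads/{thread_id}/messages",
         {"role": "user", "content": user_message}),  # 3. add the user's message
        ("POST", "/v1/threads/{thread_id}/runs",
         {"assistant_id": "{assistant_id}"}),      # 4. run; tools fire as needed
        ("GET", "/v1/threads/{thread_id}/messages", {}),  # 5. read the reply
    ]

calls = assistants_flow("Travel helper", "Answer travel questions.",
                        "Plan a weekend in Kyoto.")
```

Because the thread persists server-side, step 3 onward can repeat for each follow-up message without resending the whole conversation history.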
Conclusion
OpenAI's recent announcements are both practical and user-oriented. Over the past year, it's clear that large-scale models are getting closer to real-world applications. AI is on the verge of becoming a technological and social revolution that will transform the world in many ways. This future doesn't rely solely on major model companies but also on countless developers and users. The future is bright, full of promise, and open to collective exploration and development.