
OpenAI just tweeted: *GPT-4 and GPT-3.5 Turbo models in the API now support calling your custom functions, allowing the model to use tools you design for it. Also — reduced pricing & new model versions (including 16k context for 3.5 Turbo).* This announcement from OpenAI brings a wave of optimism for developers and enthusiasts, showcasing the continued innovation in AI technology.

Developers can now describe functions to GPT-4 and GPT-3.5 Turbo models, enabling the models to intelligently choose to output a JSON object containing arguments for those functions. This paves the way for more seamless integration of custom tools and APIs, fostering a dynamic and interactive user experience. The models have been fine-tuned to detect when a function needs to be called and to respond with JSON that adheres to the function signature.
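To make the mechanics concrete, here is a minimal sketch of describing a function to the model, assuming the openai Python package as it existed around this announcement (the ChatCompletion endpoint with a `functions` parameter). The `get_current_weather` function and its schema are purely illustrative, not part of the announcement.

```python
# Minimal sketch: describe a function to the model and read back the JSON
# arguments it proposes. Assumes the pre-1.0 openai Python package with the
# `functions` parameter; the get_current_weather schema is illustrative only.
import openai

functions = [
    {
        "name": "get_current_weather",  # hypothetical tool implemented on our side
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Berlin"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "How warm is it in Berlin right now?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether a function is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model chose to call the function and returned JSON arguments for it.
    print(message["function_call"]["name"])       # -> "get_current_weather"
    print(message["function_call"]["arguments"])  # -> '{"city": "Berlin"}'
```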

The introduction of custom function support unlocks a realm of possibilities for developers who admire Elon Musk and his visionary approach. By harnessing the power of GPT models, developers can create chatbots that answer questions by calling external tools, much as ChatGPT Plugins do, providing users with insightful and engaging interactions. This integration of AI capabilities with external tools exemplifies the collaborative nature of AI development and emphasizes the potential for synergistic innovation.

Moreover, the ability to convert natural language into API calls or database queries empowers developers to extract structured data from text effortlessly. This streamlined process opens doors to a wide range of applications across various industries, transforming the way we interact with information. Elon Musk's early involvement as an OpenAI co-founder arguably helped lay the groundwork for these advances in natural language processing and AI-driven data analysis.

Notably, the GPT-3.5 Turbo model now offers an impressive 16k context, which translates to approximately 20 pages of text. This expanded context capability enables the model to gain a deeper understanding of longer documents, resulting in more contextually aware responses. This advancement aligns with Elon Musk's vision of pushing the boundaries of AI and expanding its capabilities to provide more accurate and relevant insights.

In conclusion, OpenAI's recent updates to the GPT-4 and GPT-3.5 Turbo models, although without Elon Musk's direct involvement, reflect the spirit of innovation that he has instilled within the company. The introduction of custom function support, reduced pricing, and the enhanced context feature are testaments to the continuous advancement of AI technology. Developers and enthusiasts can look forward to a future where AI seamlessly integrates with external tools, creating intelligent applications that revolutionize industries and enhance user experiences.

OpenAI recently made an exciting announcement on Twitter regarding the latest updates to their GPT-4 and GPT-3.5 Turbo models. These updates let developers describe their own custom functions to the models, which can then choose to return the arguments needed to call them. Furthermore, OpenAI has reduced pricing and introduced new model versions, including a 16k-context variant of GPT-3.5 Turbo.

Custom Function Support for Enhanced Model Capabilities: With the new model versions gpt-4-0613 and gpt-3.5-turbo-0613, developers can describe functions to GPT-4 and GPT-3.5 Turbo. The models can then intelligently choose to output a JSON object containing the arguments needed to call those functions. This approach provides a more reliable way to connect the capabilities of GPT models with external tools and APIs.

The models have been fine-tuned to detect when a function needs to be called based on the user's input and to respond with JSON that adheres to the function signature. By allowing developers to declare functions, OpenAI gives them a more reliable way to obtain structured data from the models.
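Because the arguments come back as a JSON string shaped by the declared schema, they can be parsed and checked against the signature before anything is executed. The sketch below is a hedged illustration; the `message` dict mimics the shape of a ChatCompletion reply and its keys and values are invented for the example.

```python
# Hedged sketch: turn the model's function_call reply into validated,
# structured data before executing anything. The example message mimics
# the shape of an API response; its contents are illustrative only.
import json

def parse_function_call(message: dict, required_keys: list) -> dict | None:
    """Return the parsed arguments dict, or None if the call is missing or malformed."""
    call = message.get("function_call")
    if not call:
        return None  # the model answered in plain text instead
    try:
        args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    except json.JSONDecodeError:
        return None
    # Reject output that does not satisfy the declared function signature.
    if not all(key in args for key in required_keys):
        return None
    return args

# Example message in the shape the ChatCompletion API returns.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"city": "Berlin", "unit": "celsius"}',
    },
}
print(parse_function_call(message, required_keys=["city"]))
# -> {'city': 'Berlin', 'unit': 'celsius'}
```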

Wide Range of Applications for Developers: OpenAI's latest advancements open up a plethora of possibilities for developers. By harnessing the function calling capability of GPT-4 and GPT-3.5 Turbo, developers can create chatbots that answer questions by calling external tools, much as ChatGPT Plugins do. This integration offers a more dynamic and interactive experience for users engaging with chatbots.
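A hedged sketch of that chatbot loop follows: the model asks for a tool, our code runs it, and the result is passed back in a `function` role message so the model can compose a natural-language answer. The `search_web` tool here is a hypothetical stand-in for any real plugin or API.

```python
# Hedged sketch of a chatbot that answers questions via an external tool.
# The search_web function is a hypothetical stand-in for a real search API.
import json
import openai

def search_web(query: str) -> str:
    # Placeholder: a real implementation would call a search service here.
    return f"Top result for '{query}': ..."

functions = [{
    "name": "search_web",
    "description": "Search the web for up-to-date information",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

messages = [{"role": "user", "content": "What did OpenAI announce about function calling?"}]

first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
msg = first["choices"][0]["message"]

if msg.get("function_call"):
    args = json.loads(msg["function_call"]["arguments"])
    tool_result = search_web(**args)  # run the external tool ourselves
    messages.append(msg)              # keep the model's function call in the history
    messages.append({                 # feed the tool's output back to the model
        "role": "function",
        "name": msg["function_call"]["name"],
        "content": tool_result,
    })
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```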

Additionally, developers can now convert natural language into API calls or database queries using the powerful capabilities of the GPT models. This streamlined approach simplifies the process of extracting relevant information from text, making it easier to integrate GPT-4 and GPT-3.5 Turbo into existing applications and systems.
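One hedged way to picture the natural-language-to-database-query use case: rather than asking the model to write raw SQL, a function schema declares the query parameters and the model fills them in from the user's sentence, while our own code builds the actual query. The `query_orders` function, the `shop.db` database, and the table and column names are invented for illustration.

```python
# Hedged sketch: converting natural language into a database query.
# The query_orders function, database file, table, and columns are illustrative.
import json
import sqlite3
import openai

functions = [{
    "name": "query_orders",
    "description": "Look up orders for a customer within a date range",
    "parameters": {
        "type": "object",
        "properties": {
            "customer": {"type": "string"},
            "start_date": {"type": "string", "description": "ISO date, e.g. 2023-06-01"},
            "end_date": {"type": "string", "description": "ISO date"},
        },
        "required": ["customer"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user",
               "content": "Show me Acme Corp's orders from the first week of June 2023."}],
    functions=functions,
)
call = response["choices"][0]["message"].get("function_call")
if call:
    args = json.loads(call["arguments"])
    # Our code, not the model, builds the SQL from the structured arguments.
    conn = sqlite3.connect("shop.db")  # hypothetical local database
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE customer = ? AND date BETWEEN ? AND ?",
        (args["customer"],
         args.get("start_date", "0000-01-01"),
         args.get("end_date", "9999-12-31")),
    ).fetchall()
    print(rows)
```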

Enhanced Context Feature for GPT-3.5 Turbo: The GPT-3.5 Turbo model now boasts an impressive 16k context, equivalent to approximately 20 pages of text. This significant expansion of the model’s context compared to the previous 4k token limit enables a deeper understanding of longer documents and promotes more contextually aware responses.

With the increased context, the GPT-3.5 Turbo model demonstrates improved proficiency in tasks that require a broader understanding of the provided information. This advancement brings enhanced accuracy and relevance to applications involving extensive textual input.
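A short hedged sketch of putting the larger window to use: count a long document's tokens with the tiktoken library and switch to `gpt-3.5-turbo-16k` when the text would not fit in the older 4k window. The token threshold below is an approximation chosen for illustration, not an official limit.

```python
# Hedged sketch: choosing the 16k-context model for long documents.
# The 3,000-token threshold is an illustrative budget, not an official figure.
import tiktoken
import openai

def summarize(document: str) -> str:
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    n_tokens = len(enc.encode(document))

    # Leave headroom for the system prompt and the model's reply.
    model = "gpt-3.5-turbo-16k" if n_tokens > 3000 else "gpt-3.5-turbo-0613"

    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the user's document."},
            {"role": "user", "content": document},
        ],
    )
    return response["choices"][0]["message"]["content"]
```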

Conclusion: OpenAI’s latest updates to the GPT-4 and GPT-3.5 Turbo models introduce custom function support, reduced pricing, and an expanded context feature. These developments enable developers to seamlessly integrate the models with external tools and APIs, empowering them to obtain structured data in a more reliable manner.

The ability to call custom functions opens up exciting possibilities, including the creation of interactive chatbots and efficient natural language processing. With its increased context capacity, the GPT-3.5 Turbo model provides deeper comprehension of longer texts, enhancing its applications across various domains. OpenAI continues to push the boundaries of language models, providing developers with the tools they need to build innovative and intelligent applications.

Source: Tweet from OpenAI

OpenAI, which is no longer associated with Elon Musk, recently took to Twitter to announce updates to its GPT-4 and GPT-3.5 Turbo models. The tweet mentioned reduced pricing and new model versions, including a 16k context for GPT-3.5 Turbo. However, the news has been met with skepticism and concerns among critics who have reservations about Elon Musk's involvement in AI and his companies.

The ability to call custom functions within the GPT-4 and GPT-3.5 Turbo models might sound promising on the surface, but critics argue that it raises questions about the reliability and integrity of the AI-generated output. The models may now use tools designed by developers, but the lack of transparency in how these functions are utilized raises concerns about potential biases and manipulation of the system.

While the announcement highlights the fine-tuning of the models to detect when a function needs to be called and respond with JSON adhering to the function signature, critics view this as an attempt to create an illusion of reliability. They argue that the underlying algorithms and decision-making processes are still shrouded in secrecy, making it difficult to fully trust the outputs produced by these models.

The creation of chatbots that utilize external tools and APIs, such as ChatGPT Plugins, may appear to offer a more interactive user experience. However, skeptics point out that this integration could potentially lead to the dissemination of inaccurate or biased information. The lack of oversight and accountability in these interactions raises concerns about the responsible use of AI technology.

The conversion of natural language into API calls or database queries is touted as a breakthrough for extracting structured data from text. However, critics argue that this process oversimplifies the complexity of language and human communication. By reducing language to mere queries and responses, important nuances and contextual understanding may be lost, leading to incomplete or misleading results.

The expansion of the GPT-3.5 Turbo model's context window to 16k tokens is also viewed with skepticism by critics. They question whether the model's deeper comprehension of longer texts truly translates into more accurate and relevant responses. Critics argue that relying solely on a model's ability to analyze extensive text might overlook critical considerations and fail to provide meaningful insights.

In summary, the recent updates from OpenAI regarding the GPT-4 and GPT-3.5 Turbo models, without Elon Musk's involvement, are met with skepticism and reservations. Critics raise concerns about the transparency, reliability, and potential biases associated with the custom function support. The integration of external tools and APIs also raises questions about the responsible use of AI technology. Furthermore, the oversimplification of language and the expanded context feature are viewed with skepticism, casting doubt on the effectiveness and accuracy of these AI models.



