Revolutionizing AI: OpenAI's New Agent API Enhancements

OpenAI "Agents API" (computer use, web search, multi-agent, open-source!)

Estimated read time: 1:20


    Summary

    OpenAI introduced new tools for developers with their "Agents API" launch, aiming to simplify the creation of autonomous systems. The platform offers built-in tools like web search, file search, and computer use, allowing agents to operate independently and efficiently. It pairs multimodal reasoning with a streamlined structure that brings previously disparate APIs together. With added features like metadata filtering, direct search endpoints, and the launch of an open-source SDK, developers can now easily build robust agentic applications. The open-source Agents SDK also enables orchestration of multiple agents, making complex tasks easier to handle. OpenAI's efforts are set to make 2025 the "Year of the Agent," with systems that execute tasks rather than just answering queries.

      Highlights

      • OpenAI unveils groundbreaking Agents API designed for building autonomous systems efficiently. πŸŽ‰
      • The Web Search Tool employs a custom fine-tuned model for accurate information retrieval. πŸ”
      • File Search Tool now includes metadata filtering and direct search endpoints, enhancing data handling. πŸ“š
      • Box AI utilizes OpenAI's improvements to manage and extract insights from unstructured data. πŸ—ƒοΈ
      • The new Computer Use Tool provides capabilities for controlling various computing environments. πŸ’»
      • OpenAI is pushing forward with multiple agent orchestration via their new open-source Agents SDK. 🌟
      • The Responses API allows integrated tool use, making agent applications more flexible. πŸ› οΈ
      • OpenAI's commitment extends beyond chat assistance to actual task execution in 2025. πŸš€
      • Excitement around the seamless integration of tools and response functionalities for developers. πŸ”—

      Key Takeaways

      • OpenAI's Agents API introduces tools for autonomous operation, including web, file, and computer use tools. πŸš€
      • The new API supports complex multi-step tasks through advanced reasoning and multi-modal capabilities. 🧠
      • Developers can orchestrate agents efficiently with the open-source Agents SDK. βš™οΈ
      • The web search tool utilizes a fine-tuned model, providing up-to-date and factual outputs. 🌐
      • With metadata filtering and direct search endpoints, file management becomes a breeze for developers. πŸ“‚
      • The API enables seamless personal stylist applications using file and web search tools. πŸ§₯
      • OpenAI is committed to enhancing developer experience with its new Responses API. πŸ’‘
      • 2025 is projected to be the 'Year of the Agent' with capabilities beyond question answering. πŸ“…

      Overview

      OpenAI has recently rolled out its latest advancements in AI, focused on offering developers enhanced tools to create autonomous systems. With the introduction of the Agents API, OpenAI is aiming to simplify the process of building systems that can act independently to perform tasks. The API includes tools such as web search, file search, and computer use, all designed to enhance operational capabilities.

        This new API promises a robust framework for integrating complex workflows involving autonomous agents, thanks to its multi-modal capabilities and advanced reasoning skills. OpenAI also introduced an open-source SDK, allowing developers to orchestrate multiple agents for intricate tasks. This development aims to revolutionize how AI is built and deployed, bringing together different APIs for a streamlined and efficient infrastructure.

          With an eye on the future, OpenAI is marking 2025 as the 'Year of the Agent'. The initiative aims to move AI from merely answering questions to executing detailed tasks in the real world. Feedback gathered from the launch is paving the way for continued development and new opportunities for AI developers.

            Chapters

            • 00:00 - 00:30: Introduction to OpenAI's New Agents API OpenAI has introduced new agent functionality through its API, which was discussed during a recent live stream. These tools are designed to help developers create reliable and useful agents. 'Agent' refers to a system capable of acting independently to perform tasks on behalf of the user. The live stream explained that these updates aim at improving ease of development for such autonomous systems.
            • 00:30 - 03:00: Overview of New Agent Functionality This chapter provides an overview of the new agent features, which enhance an agent's ability to operate autonomously by integrating tools and memory. Key components include Operator, which browses the web and executes tasks online, and deep research, which creates detailed reports on any chosen topic. The chapter notes that while Operator is usable, it often encounters performance issues.
            • 02:30 - 03:30: Introduction to OpenAI Team The chapter introduces the OpenAI team and highlights their work on deep research and its significance. It describes how OpenAI tools are used in real-world scenarios and the positive feedback they have received. OpenAI is now working to launch these tools and more in the API for developers, guided by conversations with developers around the world and by the readiness of the models.
            • 03:30 - 07:00: Built-In Tools Announcement The chapter discusses the readiness of advanced models, highlighting their ability to execute complex, multi-step workflows necessary for agents. The focus is on the need for infrastructure development around these models to fully utilize their capabilities, using the project Manus as an example of these advanced systems in action.
            • 07:00 - 08:30: Box AI Integration The chapter discusses the challenges developers face when trying to integrate AI and code execution environments. It highlights the need to build a robust architecture around core models and addresses issues related to the current necessity for developers to piece together various low-level APIs from disparate sources, which can feel inefficient and fragile. The introduction of a series of tools aims to streamline this process and enhance the development experience.
            • 08:30 - 10:30: Computer Use Tool Overview The chapter introduces key members of the team involved in developing and launching the new tools, including Ilan from the developer experience team, Steve from the API team, and Nick from the API product team.
            • 10:30 - 13:00: Responses API Introduction The chapter introduces the 'web search tool,' which enhances AI models by enabling internet access to ensure their responses are current and factual. This tool transforms AI from a static information source into a dynamic one capable of real-time learning and improvement.
            • 17:00 - 20:00: Agents SDK and Swarm Rebranding The chapter discusses the Agents SDK and Swarm rebranding, featuring the web search tool used by ChatGPT for retrieving large amounts of data. The tool, powered by a fine-tuned GPT-4o or 4o-mini under the hood, excels at identifying relevant information from the web and citing it clearly. Its performance is highlighted using a benchmark called SimpleQA.
            • 25:00 - 28:00: Wrap-Up and Future Plans In the final chapter titled 'Wrap-Up and Future Plans', the transcript discusses insights about the limitations and enhancements of web search tools. Specifically, it highlights that although cutting-edge language models like GPT-4.5, GPT-4o, o1, and o3-mini do not excel at simple question-and-answer accuracy on their own, their performance improves significantly when paired with search, reaching a state-of-the-art accuracy score of 90%. This achievement illustrates the potential and future direction of integrating search capabilities into AI models to improve their efficacy.
            • 27:00 - 28:00: Conclusion and Future Outlook The chapter titled 'Conclusion and Future Outlook' discusses the introduction of two new features in the file search tool. Launched last year in the Assistants API, the tool lets developers upload, chunk, and embed documents for retrieval-augmented generation (RAG). The first new feature is metadata filtering, which lets users add attributes to their files, making it easier to filter documents based on specific criteria. This enhancement aims to improve the efficiency of handling and organizing documents. The chapter indicates continual development in tools designed to help developers manage data more effectively.

            OpenAI "Agents API" (computer use, web search, multi-agent, open-source!) Transcription

            • 00:00 - 00:30 all right, OpenAI just finished their live stream. they have a bunch of new agent functionality, specifically through the API. let's watch it together and I'll give you my thoughts. we're excited to launch a bunch of new tools that make it easy for developers to build reliable and useful agents. now when we say agent, we mean a system that can act independently to do tasks on your behalf. I like that definition. so, a system that can act independently and accomplish things on your behalf, I think that's a really good way to explain it.
            • 00:30 - 01:00 usually it also includes stuff like: it can do things autonomously, it has tools, it has memory, and obviously it's built around a core model. but I really like that definition as well. the first is Operator, which can browse the web and do things for you on the web. the second is deep research, which can create detailed reports for you on any topic you want. by the way, Operator is kind of barely usable. it's okay, but it doesn't really work a lot of the time.
            • 01:00 - 01:30 deep research, on the other hand, is phenomenal. I use it all the time, it provides a lot of value to me, and there are a lot of real-world use cases that I'm using it for day-to-day. now, the feedback for those has been fantastic, but we want to now launch those tools and more in the API to developers. so we've spent the last couple of months going around talking to developers all over the world about how we can make it easy for them to build agents, and what we've heard is that the models are ready.
            • 01:30 - 02:00 so with advanced reasoning and multimodal understanding, our models can now do the kind of complex multi-step workflows that agents need. yeah, and I think that's a key point, maybe even underemphasized in this video: the models are ready. look at Manus. Manus is an incredible project, built on likely Claude, but the core intelligence in these models is good enough, and now we need to build out the infrastructure around the models.
            • 02:00 - 02:30 the tool use, like MCP servers, local environments so it can actually write code, execute code, test, and create directories. that's another thing Manus does really well, and so the architecture around the core model is really what needs to be built out now. but on the other hand, developers feel like they're having to cobble together different low-level APIs from different sources. it's difficult, it's slow, it often feels brittle. so today we're really excited to bring that together into a series of tools and a new
            • 02:30 - 03:00 API and an open-source SDK to make this a lot easier. so with that, let me introduce the team. yeah, hi, I'm Ilan, I'm an engineer on the developer experience team. I'm Steve, I'm an engineer on the API team. and I'm Nick, I work on the API product team. so let's dive into all the stuff that we're launching today. like Kevin mentioned, we have three new built-in tools, we have a new API, and an open-source SDK. starting off with the built-in tools.
            • 03:00 - 03:30 the first tool that we're announcing today is called the web search tool. the web search tool allows our models to access information from the internet so that your responses and the output that you get are up to date and factual. yeah, I mean, that is the most basic tool that any AI system needs: the ability to search the web, because then they go from being a static, completely frozen-in-time source of information to actually being able to have real-time information, to learn, to get better. and all of this requires
            • 03:30 - 04:00 being able to search the web. the web search tool is the same tool that powers ChatGPT search, and it's powered by a fine-tuned model under the hood. so this is a fine-tuned GPT-4o or 4o-mini that is really good at looking at large amounts of data retrieved from the web, finding the relevant pieces of information, and then clearly citing it in its response. you can see this in a benchmark that measures these kinds of things, which is called SimpleQA.
            • 04:00 - 04:30 so that's really interesting, I did not know that the web search tool had a custom fine-tuned model under it. and here, look at this. so this is SimpleQA, simple question-and-answer accuracy, and even cutting-edge models, GPT-4.5, GPT-4o, o1, o3-mini, don't score very well, but when you have these models enhanced with search they become much better: 4o with search hits a state-of-the-art score of 90%.
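            For reference, here is a minimal sketch of what calling the built-in web search tool looks like through the new API. The tool type name "web_search_preview" and the gpt-4o model follow OpenAI's launch documentation; treat the exact identifiers as subject to change.

            ```python
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            # One call: the model decides when to search the web and attaches
            # citations as annotations on its output message.
            response = client.responses.create(
                model="gpt-4o",
                tools=[{"type": "web_search_preview"}],
                input="What was a positive news story from today?",
            )

            print(response.output_text)  # convenience field: concatenated text output
            ```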
            • 04:30 - 05:00 so that's the first tool. Steve, do you want to tell us about the second one? yeah, the second tool is actually my favorite tool, and this is the file search tool. now, we launched the file search tool last year in the Assistants API as a way for developers to upload, chunk, and embed their documents, and then really easily do RAG over those documents. now we're really excited to be launching two new features in the file search tool today. the first is metadata filtering. with metadata filtering you can add attributes to your files to be able to easily filter them down to just the ones that are the most relevant for your query.
            • 05:00 - 05:30 the second is a direct search endpoint, so now you can directly search your vector stores without your queries being filtered through the model first. nice. so you have web search for the public data, file search for the private data. all right, both of those are very useful tools, web search and private search, and now you can have metadata, so for example you can likely tag your files or directories and just more easily search them.
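            As a rough sketch of those two features: attributes go on when you attach a file to a vector store, and the direct search endpoint queries the store with no model in the loop. The IDs and attribute names below are hypothetical, and the filter shape follows the launch docs.

            ```python
            from openai import OpenAI

            client = OpenAI()

            # Metadata filtering: attach filterable attributes when adding a
            # file to a vector store (IDs here are placeholders).
            client.vector_stores.files.create(
                vector_store_id="vs_123",
                file_id="file_abc",
                attributes={"username": "ilan", "category": "style_notes"},
            )

            # Direct search endpoint: query the vector store directly,
            # without routing the query through a model first.
            results = client.vector_stores.search(
                vector_store_id="vs_123",
                query="winter jackets",
                filters={"type": "eq", "key": "username", "value": "ilan"},
            )
            for hit in results.data:
                print(hit.score, hit.filename)
            ```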
            • 05:30 - 06:00 one of the launch partners of the Agents SDK was Box, also the sponsor of today's video. they're doing some really cool stuff with the Agents SDK, enabling enterprises to search, query, and extract insights from unstructured data stored in Box. let me tell you more. introducing Box AI from Box. every business sits on top of an immense amount of unstructured data, and yet the true potential of all of this data remains largely untapped. the problem is analyzing all of that unstructured data is really, really difficult.
            • 06:00 - 06:30 until now. that's where Box AI comes in. with Box AI, developers and businesses can leverage the latest breakthroughs in AI to automate document processing workflows, extract insights from content, build custom AI agents to work on that content, and so much more. and Box AI works with all of the leading model providers, so you can always be sure you're using the latest AI with your content. use it to extract key metadata fields from contracts, invoices, financial documents, résumés, and more to automate workflows.
            • 06:30 - 07:00 you can also ask questions of any of the content you have within the Box ecosystem, such as sales presentations or long research reports. and if you're a developer, leverage Box AI's API to build really cool automations and applications right on top of your own content. Box AI handles the entire RAG pipeline for you. do all of this while maintaining the highest levels of security, compliance, and data governance that over 115,000 enterprises trust.
            • 07:00 - 07:30 unlock the power of your content with intelligent content management by Box. thanks again to Box, now back to the video. and then the third tool that we are launching is the computer use tool. the computer use tool is Operator in the API, but it allows you to control the computers that you are operating. this could be a virtual machine, it could be a legacy application that just has a graphical user interface and you have no API access to it. if you want to automate those kinds of tasks and build applications on them, you can use the computer use tool, which comes with the computer use model.
            • 07:30 - 08:00 so again, really cool. computer use is useful, and it really makes me think about Manus again. obviously I've just gotten done testing Manus like crazy, so I'm thinking about it a lot, and Manus takes all of these different things and puts them all together in a really nice way. and that's what we're seeing here: a framework, an API, where you can essentially build all of these things yourself. so it's giving you the ability to control a computer. maybe you want to spin up a new environment per session, very similar to what Manus does.
            • 08:00 - 08:30 then you go out and search the web for all the up-to-date information about whatever your task is, and then you can store all that information locally. you can write notes and you can write code and have all of that stored in files on your local computer, in your containerized environment, whatever it is. so this is the same model that is used by Operator in ChatGPT. it has SOTA benchmarks on OSWorld, WebArena, and WebVoyager. early user feedback on the CUA model and the tool has been super, super positive, so I'm really excited to see what all of you build with it.
            • 08:30 - 09:00 all right, so those are the three tools. and while we were building these tools and thinking of getting them out, we also wanted to take a first-principles approach to designing the best API for these tools. we released chat completions, I think, in March 2023 alongside GPT-3.5 Turbo, and every single API interaction at that time was just text in and text out. since then we've introduced multimodality, so you have images, you have audio.
            • 09:00 - 09:30 we're introducing tools today, and you also have products like o1 Pro, deep research, and Operator that make multiple model turns and multiple tool calls behind the scenes. so we wanted to build an API primitive that is flexible enough that it supports multiple turns and it supports tools, and we're calling this new API the responses API. and to show you the responses API, I'm going to hand it over to Steve. cool, let's go ahead and take a look at the responses API. so if you've used chat completions before, this
            • 09:30 - 10:00 will look really familiar to you. so the completions endpoint is the standard on the web for AI right now. any model that you're using through an API is likely using the OpenAI standard, and that is the completions endpoint standard. you select some context, you pick a model, and you get a response. it's pretty simple. it's pretty simple, and it's always hilarious... or maybe not, I don't know.
            • 10:00 - 10:30 so to demonstrate the power of the responses API, we're going to be building sort of a personal stylist assistant. so let's start off by giving it some instructions: you are a personal stylist. you're only typing in front of like 50,000 people right now, don't worry about it. cool, and we'll get rid of this and we'll say: what are some of the latest trends? so nothing really different from what you would be doing with the completions endpoint. the only difference is, you can see right here, we're using responses instead.
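            In code, the switch is essentially one method name. A minimal sketch, using the same stylist setup from the demo; the parameter names follow the launch docs.

            ```python
            from openai import OpenAI

            client = OpenAI()

            # Same ingredients as chat completions (model, system-style
            # instructions, user input), sent to the new responses endpoint.
            response = client.responses.create(
                model="gpt-4o",
                instructions="You are a personal stylist.",
                input="What are some of the latest trends?",
            )

            print(response.output_text)
            ```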
            • 10:30 - 11:00 but they're going to expand on that, let's watch. let's see what it says. okay, cool, great. but no personal stylist assistant is complete unless it understands what its users like, so in order to demonstrate this we've created a vector store that has some entries, almost some diary entries, of what people on the team have been wearing. that's not weird at all. it's not weird at all, I would just let it happen.
            • 11:00 - 11:30 we've kind of been following people around the office and kind of understanding what they've been up to. there's a whole team on it. yeah, so go ahead and add the file search tool. all right, so this is now new: you can insert tools directly in the responses API call, and as we're seeing here, they're using the type file search tool and vector store IDs, which is where you can actually specifically call out a vector store that you want to use, and then they have filters. so these are the metadata filters, I believe.
            • 11:30 - 12:00 and I'll copy in my vector store ID. yep, okay, and here I can actually filter down the files in this vector store to just the ones that are relevant to the person that we want to style. so in this case let's start with Ilan, we'll go ahead and filter down to his username. yeah, so that's the metadata filter that they were alluding to earlier. and we'll come back here and we'll refresh.
            • 12:00 - 12:30 and we'll say: can you briefly summarize what Ilan likes to wear? I often ask ChatGPT this question. yeah, but it never knows, and now it can actually tell you. cool, so Ilan has a distinct and consistent style characterized by Miami chic. that's really awesome. and you can see the file search call right there: what does Ilan like to wear, Ilan's clothing preferences, style summary, fashion choices.
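            Here is roughly what that call looks like with the file search tool and a metadata filter inline; the vector store ID and the username attribute are placeholders from the demo.

            ```python
            from openai import OpenAI

            client = OpenAI()

            response = client.responses.create(
                model="gpt-4o",
                instructions="You are a personal stylist.",
                input="Can you briefly summarize what this user likes to wear?",
                tools=[{
                    "type": "file_search",
                    "vector_store_ids": ["vs_123"],  # placeholder store ID
                    "filters": {                     # metadata filter on file attributes
                        "type": "eq",
                        "key": "username",
                        "value": "ilan",
                    },
                }],
            )

            print(response.output_text)
            ```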
            • 12:30 - 13:00 and yeah, so you can see the actual call right there inline. the file search tool is a great way to bring information about your users into your application, but in order to create a really good application for this personal stylist, we want to be able to bring in fresh data from around the web, so that we have both the newest information and also stuff that's really relevant to your users. so in order to demonstrate that, I'll add the web search tool. you can also add data about where your user is. so let's try with somebody else.
            • 13:00 - 13:30 Kevin, are you going to be taking any trips anytime soon? let's say Tokyo. okay, cool, Tokyo. so I'll put in Tokyo here and we'll swap in Kevin. and the responses API is really cool because it can do multiple things at once: it can call the file search tool, it can call the web search tool, and it can give you a final answer, all in one API response. so in order to tell it exactly what we want, let's give it some instructions. and it'd be good if I knew how to code. well, great, you say you're an engineer here.
            • 13:30 - 14:00 yeah, well, I'm in training. so what we want the model to do is, when it's asked to recommend products, use the file search tool to understand what Kevin likes and then use the web search tool to find a store near him where he can buy something that he might be interested in. all right, so that's pretty cool. you define the tools, and it's interesting that they're still just using GPT-4o, and then in the instructions you explicitly call out how you want the API to use the tools: use the file search tool to get the user preferences, then use the web search tool to find stores near them.
            • 14:00 - 14:30 all of this information is likely stored in the file search tool, so you can see there's information about KW, Kevin Weil. let's keep watching. so let's go back and say: find me a jacket that I would like, nearby.
            • 14:30 - 15:00 and what the model will do is issue a file search tool call to understand what kinds of things Kevin likes to wear, and then issue a web search tool call to go and find stuff that Kevin would like based on where he is. so the model was able to, just in the scope of one API call, find a bunch of Patagonia stores in Tokyo just for you, Kevin, which actually corresponds to Kevin's preferences. he's been wearing a lot of Patagonia around the office.
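            Put together, the multi-tool call from the demo looks something like this sketch. The user_location hint on the web search tool and the attribute filter follow the launch docs; the IDs and attribute values are placeholders.

            ```python
            from openai import OpenAI

            client = OpenAI()

            response = client.responses.create(
                model="gpt-4o",
                instructions=(
                    "When asked to recommend products, use the file search tool to "
                    "understand the user's preferences, then use the web search tool "
                    "to find a store near them where they can buy something they'd like."
                ),
                input="Find me a jacket that I would like, nearby.",
                tools=[
                    {
                        "type": "file_search",
                        "vector_store_ids": ["vs_123"],  # placeholder
                        "filters": {"type": "eq", "key": "username", "value": "kevin"},
                    },
                    {
                        "type": "web_search_preview",
                        "user_location": {"type": "approximate", "city": "Tokyo"},
                    },
                ],
            )

            # response.output holds the file_search call, the web_search call, and
            # the final message, all produced by a single API request.
            print(response.output_text)
            ```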
            • 15:00 - 15:30 but no personal stylist assistant would be complete unless it could actually go and make purchases on your behalf, so to do that, let's demonstrate the computer use tool. we'll go ahead and add this. we're using the computer use preview model and the computer use preview tool, and we will ask... all right, so that's pretty cool. you do have to specify display height and display width because, of course, if it's supposed to click around, it needs to know what the bounds of its environment are. but yeah, computer use, now through the API.
            • 15:30 - 16:00 find my friend Kevin a new Patagonia jacket. what's your favorite color, Kevin? let's go with black. can't have too many black Patagonia jackets. and what the model will do is ask us for a screenshot, and we have a Docker container running locally on this computer, and we will go ahead and send that screenshot to the model. it will look at the state of the computer and issue another action: click. that's really cool. so it's operating-system agnostic, and I believe it's browser agnostic.
            • 16:00 - 16:30 it should just work as long as you provide it with the display height and width, because then it can overlay coordinates on top of the screenshot and guess where the cursor actually needs to go: drag, move, type. and then we will execute that action, take another screenshot, send it back to the model, and it will continue in this fashion until it feels that it's completed the task and then return a final answer. so while this is kind of going and doing its thing, we'll hand it back to Nick.
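            That screenshot loop is the core of the computer use integration. Below is a condensed sketch following the shapes in OpenAI's computer-use guide at launch; execute_and_screenshot is a hypothetical stand-in for your own environment code (e.g. clicking inside that local Docker container and capturing the screen).

            ```python
            import base64
            from openai import OpenAI

            client = OpenAI()

            computer_tool = {
                "type": "computer_use_preview",
                "display_width": 1024,   # the model plans clicks in these coordinates
                "display_height": 768,
                "environment": "browser",
            }

            response = client.responses.create(
                model="computer-use-preview",
                tools=[computer_tool],
                input="Find my friend Kevin a new black Patagonia jacket.",
                truncation="auto",
            )

            while True:
                calls = [item for item in response.output if item.type == "computer_call"]
                if not calls:
                    break  # no more actions: the model has returned its final answer
                call = calls[0]
                # execute_and_screenshot is hypothetical: perform call.action (click,
                # type, scroll, ...) in your VM/container, then capture the screen.
                png_bytes = execute_and_screenshot(call.action)
                response = client.responses.create(
                    model="computer-use-preview",
                    tools=[computer_tool],
                    previous_response_id=response.id,
                    truncation="auto",
                    input=[{
                        "type": "computer_call_output",
                        "call_id": call.call_id,
                        "output": {
                            "type": "computer_screenshot",
                            "image_url": "data:image/png;base64,"
                            + base64.b64encode(png_bytes).decode(),
                        },
                    }],
                )

            print(response.output_text)
            ```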
            • 16:30 - 17:00 yeah, awesome. so these are some really cool tools and a really flexible API for you to build agents, and you have amazing building blocks to do that now. but for those of you who have built more complex applications, like say you're building a customer support agent, it's not always about just having one agent that's sort of the personal stylist. you also have some agentic application that's doing your refunds, another thing that's answering customer support FAQ queries, something else that's dealing with orders and billing, etc. and to make these applications easy to build, we released an SDK last year called Swarm, and Swarm made it easy to do agent orchestration.
            • 17:00 - 17:30 this was supposed to be an experimental and educational thing, but so many of you took it to production anyway, so you're kind of forcing our hand over here. and so we've decided to take Swarm, make it production-ready, add a bunch of new features, and rebrand it as the Agents SDK. so, I hadn't actually used Swarm. I know a lot of people have tried it out; I didn't actually use it, so this is kind of new to me. Ilan helped build Swarm, so I'm going to hand it to him to tell you more about how it works. yeah, thanks, Nick.
            • 17:30 - 18:00 so, in my time at OpenAI I've spent a lot of time working with enterprises and builders to help them build out agentic experiences, and I've seen firsthand how pretty simple ideas can actually grow in complexity when you actually go to implement them. and so the idea with the Agents SDK is to keep simple ideas simple to implement while allowing you to build more complex and robust ideas in a pretty straightforward and simple way.
            • 18:00 - 18:30 so let's take a look at what Steve had before in the demo, but implemented using the Agents SDK. it's going to look very similar at first. we have our agent defined here, we have some instructions, and we also have both of the tools, the file search tool and the web search tool, that we had before. is this using responses under the hood? yeah, so by default this is using the responses API, but we actually support multiple vendors: anything that fits the chat completions shape can work with the Agents SDK. nice.
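            That agent definition looks roughly like this in the SDK (pip install openai-agents); the names follow the SDK's public interface, with a placeholder vector store ID.

            ```python
            from agents import Agent, FileSearchTool, Runner, WebSearchTool

            stylist_agent = Agent(
                name="Stylist",
                instructions="You are a personal stylist.",
                tools=[
                    WebSearchTool(),
                    FileSearchTool(vector_store_ids=["vs_123"]),  # placeholder ID
                ],
            )

            result = Runner.run_sync(stylist_agent, "What are some of the latest trends?")
            print(result.final_output)
            ```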
            • 18:30 - 19:00 so during the practice runs we actually accidentally ordered many, many Patagonias. I'm sorry. I understand. what's the problem? we're helping you here. we want to return some of them, and to do that I could usually just add in a returns tool and add more to this prompt and get it to work. but the problem with that is you start to mix all of this business logic, which makes your agents a little bit harder to test. and so this is the power of multiple agents: you can actually separate your concerns and develop and test them separately.
            • 19:00 - 19:30 so let's actually introduce an agent specifically to deal with returns. I'm going to load mine in, and great. this all feels very familiar; it feels very similar to how CrewAI works. so the Agents SDK is essentially a code framework to build out multiple agents and allow them to work together. now, obviously I'm quite biased towards CrewAI, not only because I've been using it for a while and I do think they're the best, but I'm also an investor. so it's interesting to see this.
            • 19:30 - 20:00 I love competition, and I love the fact that, and they're about to say this, this is open source now. let's keep watching. we have our agent from before, but you can see there's also this new agent here, the customer support agent, and I've defined a couple of tools for it to use: get past orders and submit refund request. and you might notice these are just regular Python functions. this is actually a feature that people really loved in Swarm and that we brought over to the Agents SDK.
            • 20:00 - 20:30 we'll take your Python functions, look at the type signatures, and automatically generate the JSON schema that the models need to perform those function calls. and then once they do, we actually run the code and return the results, so you can just define these functions as they are.
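            A sketch of those function tools, using the SDK's function_tool decorator; the bodies are stubbed, since the real demo talks to a backend we can't see.

            ```python
            from agents import Agent, function_tool

            @function_tool
            def get_past_orders(username: str) -> str:
                """Look up a user's past orders."""
                return "3x Patagonia jacket (black)"  # stubbed for this sketch

            @function_tool
            def submit_refund_request(order_id: str) -> str:
                """Submit a refund request for a past order."""
                return f"Refund requested for {order_id}"  # stubbed

            # The SDK reads the type signatures and docstrings and generates the
            # JSON schema the model needs to call these functions.
            support_agent = Agent(
                name="Customer Support",
                instructions="Help users with returns and refunds.",
                tools=[get_past_orders, submit_refund_request],
            )
            ```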
            • 20:30 - 21:00 now we have our two agents, right? we have the stylist agent and we have the customer support refunds agent. so how do we interact with both of them as a user? this is where the notion of handoffs comes in. a handoff is actually a pretty simple idea, but it's pretty powerful: you have one conversation where one agent is handling it, and then it hands it off to another. you keep the entire conversation the same, but behind the scenes you just swap out the instructions and the tools. and this gives you a way to triage conversations and load in the correct context for each part of the conversation.
            • 21:00 - 21:30 yeah, so this feels again very familiar. it just allows you to define multiple agents, have each agent be very specialized in the things that it can do, the description of what it is, the tools it can use, and so on, and then you can have kind of a manager agent, a dispatch agent, they call it a triage agent, and that agent coordinates between the other agents. so what we've done here is created this triage agent that can hand off to the stylist agent or the customer support agent. so enough talking, let's actually see this in action. I'm going to save.
            • 21:30 - 22:00 you know, I think we may have ordered one too many Patagonias, can you help me return one? I don't understand. I know, I'm so sorry, I can get you one later. so what just happened here is it started off, remember we're starting with the triage agent, by transferring to the customer support agent, and this is just a function call that I'll show you in a second. and then the customer support agent proactively called the past orders function, where we can see all of Kevin's Patagonias.
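            Wiring that up is one more agent with a handoffs list; a sketch reusing the two agents defined in the snippets above.

            ```python
            from agents import Agent, Runner

            # The triage agent routes the conversation; a handoff keeps the
            # conversation but swaps the instructions and tools behind the scenes.
            triage_agent = Agent(
                name="Triage",
                instructions=(
                    "Route styling questions to the Stylist and returns or refunds "
                    "to Customer Support."
                ),
                handoffs=[stylist_agent, support_agent],  # agents defined above
            )

            result = Runner.run_sync(
                triage_agent,
                "I think we ordered one too many Patagonias. Can you help me return one?",
            )
            print(result.final_output)
            ```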
            • 22:00 - 22:30 I think you'll be okay. cool. so to actually see what happened behind the scenes, usually you might need to add some debugging statements by hand, but one of the things that the Agents SDK brings right out of the box is monitoring and tracing. so I'm going to go over to the tracing UI that we have on our platform to actually take a look at what just happened. these are some of the previous runs that we've had. I'm just refreshing the page, and we can see the last one, and on this last one you can see exactly what happened: we started with a triage agent, which we sent a request to, it made a handoff, and then it switched over.
            • 22:30 - 23:00 and I must say that this UI is very clean. I really like it, it's very easy to see what's going on. and if you want similar functionality to be able to trace your agents, not only in the OpenAI Agents SDK but in other platforms like CrewAI as well, check out my friends over at AgentOps. they're awesome, and they're not sponsoring this video. let's keep watching. we can see what the original input was.
            • 23:00 - 23:30 and handoffs are first-class objects in this dashboard, so you can see not only which agent we actually handed off to but also any agents it had as options that it did not hand off to, which is actually a really useful feature for debugging. afterward, once we're in the customer support agent, you can see the get past orders function call with any input params, here there were none, and then the output, which again is just all of Kevin's very monotonous history. and then finally we get to the end, where you get a response.
            • 23:30 - 24:00 and so these are some of the features that you get right out of the box with the Agents SDK. there are a few more: we also have built-in guardrails that you can enable, we have lifecycle events, and importantly, this is an open-source framework, so we're going to keep building it out. and you can install it right now: you can just do pip install openai-agents, and we'll have one for JavaScript coming soon. so, very cool, open source. definitely thank you to OpenAI for open-sourcing this.
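            End to end, the hello-world of the SDK is only a few lines, and tracing comes along automatically: runs show up in the platform's trace UI without any extra instrumentation.

            ```python
            # pip install openai-agents
            from agents import Agent, Runner

            agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

            result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
            print(result.final_output)
            # Each run is traced out of the box and appears in the Traces dashboard.
            ```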
            • 24:00 - 24:30 let's actually perform the refund. so, you know what, I'm sorry Kevin, get rid of all of them. oh, what am I going to wear? Kevin's going to be cold. yeah, let's see. now, I believe because it's open source you don't actually need to use an OpenAI model, which is really nice. I like to use multiple models; some models are better at certain things than others.
            • 24:30 - 25:00 I do believe, because it is a full OpenAI project, that most likely it's going to work best with OpenAI models, but that's one of those things you're probably just going to have to test yourself. there we go, it takes a while to return so many Patagonias. and so what happens under the hood? how do you debug this? how do you understand more about what's going on? yeah, so all of that we can do back in the tracing UI. so this is a pretty nice, straightforward way to build out these experiences. awesome, pass it back to you. I'm so excited for all of you to have access to all of these tools,
            • 25:00 - 25:30 and before we wrap up I wanted to make two additional points. first, we've introduced the responses API, but the chat completions API is not going away. we're going to continue supporting it with new models and capabilities. there will be certain capabilities that require built-in tool use, and there will be certain models and agentic products that we release in the future that will require them, and those will be available in the responses API only. responses API features are a superset of what chat completions supports.
            • 25:30 - 26:00 so whenever you decide to migrate over, it should be a pretty straightforward migration for you, and we hope you love the developer experience of responses, because we put a lot of thought into it. the second point I wanted to make was around the Assistants API. we built the Assistants API based on all the great feedback that we got from all of our beta users, and we wouldn't be here without all the learnings that we had during the Assistants API phase.
            • 26:00 - 26:30 we are going to be adding more features to the responses API so that it can support everything that the Assistants API can do, and once that happens we'll be sharing a migration guide that makes it really easy for all of you to migrate your applications from Assistants to responses without any loss of functionality or data. we'll give you ample time to move things over, and once we're done with that, we plan to sunset the Assistants API sometime in 2026. we'll be sharing a lot more details about this offline as well. but yeah,
            • 26:30 - 27:00 that's it for me, I'll hand it over to Kevin to wrap us up. awesome. well, we're super excited to announce the responses API and the idea that we can take a single powerful API and bring together a whole bunch of different tools, from RAG and file search to web search to CUA and our Operator computer use APIs. now, you can count on us to continue building powerful new models and to bring more intelligence and more powerful tools to help you build better agents.
            • 27:00 - 27:30 2025 is going to be the year of the agent. it's the year that ChatGPT and our developer tools go from just answering questions to actually doing things for you out in the real world. we're super excited about that. we're just getting started, we know you are too, and we can't wait to see what you build. all right, so that's it. 2025 is definitely the year of the agent, especially this week: between Manus and now OpenAI's responses API endpoint, there's just so much more coming, I bet. so if you enjoyed this video, please consider giving a like and subscribe,
            • 27:30 - 28:00 and I'll see you in the next one.