In the realm of AI, efficiency and precision are paramount. LangChain once stood as a crucial bridge, offering integrations and Retrieval-Augmented Generation (RAG). Yet, the landscape shifts. OpenAI Agents emerged, learning in stride with human users. AWS Bedrock followed, a bastion of business-oriented AI, offering secure, codeless integration of generative models.
In the vein of Hemingway’s style, clear and direct, I’ll share my view on the contrasting landscapes of OpenAI Agents, AWS Bedrock, and LangChain.
OpenAI Agents, invoked through APIs, are virtual entities that can perceive and interact with their environment (in a manner akin to lidar, one might say) and manipulate objects in their digital space. You can create your own AI assistants within applications that can execute tasks (even in multi-agent scenarios), interpret code, retrieve information, and call functions, informed by models, tools, and an expansive well of knowledge. The evolution of these agents is epitomized by GPT-4 Turbo, a model that stands out in its ability to learn and iterate with users. They are sculpted to respond, learn, and collaborate, evolving through use, engaging in creative and technical endeavors alongside human users.
Imagine building an AI assistant to help script dialogue for a video game. With OpenAI Agents, one could leverage the Assistants API to create an assistant that iterates with game writers, suggesting character dialogue and story arcs, and integrating feedback to refine its outputs.
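As a rough sketch, such an assistant could be set up like this. The assistant name is invented, and the model identifier and tool type are assumptions to check against the current Assistants API documentation; the actual API call requires an OpenAI API key.

```python
# Hypothetical configuration for a dialogue-writing assistant.
ASSISTANT_CONFIG = {
    "name": "DialogueSmith",   # invented name for this example
    "model": "gpt-4-turbo",    # assumed model identifier
    "instructions": (
        "You help game writers draft character dialogue. "
        "Suggest lines in each character's voice and revise on feedback."
    ),
    # The retrieval tool lets the assistant consult uploaded style guides
    # and existing scripts; tool names vary across API versions.
    "tools": [{"type": "retrieval"}],
}

def create_dialogue_assistant():
    """Create the assistant via the OpenAI SDK (needs an API key)."""
    from openai import OpenAI  # imported lazily so the sketch loads without the SDK
    client = OpenAI()
    return client.beta.assistants.create(**ASSISTANT_CONFIG)
```

Writers would then open a thread per scene, post draft lines as messages, and run the assistant to get suggestions back, iterating in the same thread so feedback accumulates.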
AWS Bedrock Agents
AWS Bedrock is a user-friendly, comprehensive platform for building generative AI apps. In Bedrock you fine-tune large Foundation Models (FMs) with your own data through an intuitive visual interface, without coding skills. Bedrock provides fully managed agents that can dynamically invoke APIs to execute complex business tasks, extending the reasoning capabilities of FMs to orchestrate and carry out specific actions. AWS Bedrock supports RAG (Retrieval-Augmented Generation) to enhance the power of FMs with proprietary data, thus amplifying their domain-specific knowledge. The difference between fine-tuning and RAG is that fine-tuning happens beforehand, while RAG happens at response time, when the model retrieves information from a vectorized content store.
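The "RAG happens at response time" point can be shown with a toy sketch: at prompt time the most relevant snippet is pulled from a vector store and prepended to the prompt. The embeddings here are fake three-dimensional vectors and the store is two hard-coded snippets; a real system would use a proper embedding model and vector database.

```python
import math

# Toy vector store: (embedding, text snippet) pairs.
STORE = [
    ([0.9, 0.1, 0.0], "Refund requests are processed within 14 days."),
    ([0.0, 0.8, 0.2], "Our warranty covers manufacturing defects for 2 years."),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def augment_prompt(query, query_vec):
    # Retrieve the snippet whose embedding is closest to the query's,
    # then splice it into the prompt sent to the model.
    _, snippet = max(STORE, key=lambda item: cosine(item[0], query_vec))
    return f"Context: {snippet}\n\nQuestion: {query}"

prompt = augment_prompt("How long do refunds take?", [1.0, 0.0, 0.0])
# The refund snippet wins the similarity match and lands in the prompt.
```

Fine-tuning, by contrast, would bake this knowledge into the model's weights before any prompt arrives.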
Data security is a cornerstone of AWS Bedrock, offering HIPAA and GDPR compliance, ensuring data encryption at rest and in transit, and allowing the use of private keys for additional security layers. A notable advantage of AWS Bedrock is its variety of leading FMs from Amazon and other prominent AI firms, all available to experiment with through a single API, which hastens development and offers the flexibility to stay current with minimal code adjustments. You can integrate generative AI capabilities into applications, streamlining development processes while ensuring privacy and security.
Returning to our AI assistant: to achieve a similar function in AWS Bedrock, you start by selecting a foundation model (FM) suited for language tasks. In an hour you can try two or three; some will work better than others. You fine-tune this FM with your own dataset, perhaps including existing game scripts or style guides, through Bedrock’s visual interface, without writing code. Then you build agents that make API calls to this model and to your company’s systems, to weave the AI-generated content into the game’s development pipeline, mindful of your workflows and work units.
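A hedged sketch of what the model-invocation step looks like in code, via Bedrock's single API. The model ID and request body follow Anthropic Claude's Bedrock schema as I know it; other FMs expect different request shapes, and the actual call requires AWS credentials and model access.

```python
import json

MODEL_ID = "anthropic.claude-v2"  # assumed ID; swap in whichever FM you selected

def build_request(line_request: str, temperature: float = 0.7) -> str:
    """Build the JSON body for a Claude-style Bedrock invocation."""
    return json.dumps({
        "prompt": f"\n\nHuman: Draft game dialogue: {line_request}\n\nAssistant:",
        "max_tokens_to_sample": 300,
        "temperature": temperature,  # one of the parameters you can expose in a UI
    })

def invoke(body: str) -> str:
    """Send the request with boto3 (needs AWS credentials and model access)."""
    import boto3  # imported lazily so the sketch loads without the SDK
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(resp["body"].read())["completion"]
```

Trying a different FM means changing `MODEL_ID` and the body schema, not the surrounding integration code, which is what makes side-by-side experiments quick.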
OpenAI Agents vs AWS Bedrock Agents
You can craft AI-powered dialogue with both platforms. In comparison, OpenAI’s agents shine in custom interactions and user engagement, providing a tailored experience. They are designed to learn and adapt, showing a degree of flexibility within their operational parameters. They will feel human. For example, I created a script to inject a full code repo, and the agent detected on its own that it had accidentally included my example, recognized this was not what we wanted, and reprocessed the data. I also got it to act as a software architect, comparing options and offering convincing technical arguments.
AWS Bedrock, conversely, emphasizes ease of use and security in its design, catering to businesses looking to integrate AI without extensive technical investment, and ensuring that data used for customization is not repurposed or shared. It stands out for its ease of integration, extensive customization without the need for coding, and robust security measures. In less than an hour I created a UI to ask questions of different models, change parameters like the model’s temperature, and integrate it with my company’s systems. Bedrock takes a strategic, forward-looking approach of collaboration, spreading horizontally across FMs from multiple AI companies, whereas OpenAI is a single company. In response, OpenAI’s upcoming GPT Store also leverages community work.
Released in October 2022, LangChain carved its niche by filling a gap. At the time, Large Language Models (LLMs) were isolated entities with a knowledge cut-off. LangChain enabled connection to databases and various sources, and simplified AI development for a wide range of applications, from chatbots to complex Natural Language Processing (NLP) tasks. Its arsenal includes memory modules and agents capable of executing code or querying databases. It stood out for its speed to prototype apps, saving valuable time and resources.
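For flavor, the kind of glue LangChain made fast to prototype looks roughly like this. The sketch uses the LLMChain-style API of that era; LangChain's interfaces change frequently, so treat the class names and import paths as assumptions, and note that running the chain needs an OpenAI key.

```python
# Prompt template that splices retrieved context into the question,
# the connective tissue LangChain provided before native agents existed.
TEMPLATE = (
    "Answer using the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)

def build_chain():
    """Assemble a prompt-plus-LLM chain (assumed era-specific imports)."""
    from langchain.prompts import PromptTemplate  # imported lazily
    from langchain.llms import OpenAI
    from langchain.chains import LLMChain
    prompt = PromptTemplate(
        input_variables=["context", "question"], template=TEMPLATE
    )
    return LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
```

A few lines like these stood in for the plumbing that OpenAI Assistants and Bedrock agents now provide natively.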
The emergence of native agents in OpenAI and AWS may overshadow the utility LangChain once offered, as they provide quicker and more comprehensive solutions that already streamline the development and integration of AI within a broader range of contexts and, especially in the case of AWS Bedrock, with a focus on security and compliance that is critical for enterprise-level applications. However, it’s still possible to integrate LangChain and create agents that run on Bedrock, as in this example by fellow AWS Community Builder Banjo Obayomi.
As new features and tools come out every month, it’s not the right time for strategic commitments and custom code. I decided not to invest my team’s time and company resources in LangChain anymore, but I’m using both OpenAI and AWS Bedrock services. It’s not feasible to run GPT-4 on AWS, or I’d have my perfect set.