Anthropic Model Context Protocol: How to give wings to local models
Make your Chat and RAG application Safe with AWS Bedrock Guardrails
With the increasing adoption of Large Language Models (LLMs) in production for Chat and RAG, it is increasingly important to ensure safe and controlled interactions. Today, we’ll dive deep into LLM guardrails – what they are, how they…
Contextual Retrieval: a powerful RAG technique that your wallet will like
How Prompt Caching Works: A Deep Dive into Optimizing AI Efficiency
How to make AWS Bedrock LangChain chains faster using Bedrock Cross-region inference
You have created an application using LangChain and AWS Bedrock, and you are wondering how to get better performance and resilience? Say no more: in this blog post we will look at AWS Bedrock’s newest features:…
How Moshi Works: A Simple Guide to Open-Source Real-Time Voice LLMs
You’ve probably heard a lot about large language models (LLMs) these days—OpenAI’s GPT models, Google’s Bard, or maybe even Meta’s LLaMA. But what if I told you there’s a model that takes things to the next level by making these…
How OpenAI o1 works in a simple way and why it matters for RAG and Agentic 🤯
You want a simple explanation of how OpenAI’s newest model, o1, works and why it is a revolution? You also want to know why it matters for RAG and Agentic? Say no more, this is exactly…
Simple domain specific Corrective RAG with LangChain and LangGraph
If you are using RAG in your use cases, at some point you will notice that most answers are not domain-specific but depend only on your vector stores. In this post, we are going to see a…
Simple Agentic RAG for Multi Vector stores with LangChain and LangGraph
When beginning with RAG and vector store creation, one question will come up soon: how can you choose the correct vector store for each user in a simple way? If you have this question, then you are in the right place…