During Ignite 2018, Microsoft announced the general availability of the Multi-Master feature in Azure Cosmos DB, which gives you more control over data redundancy and elastic scalability by allowing multiple write and read instances for your data across different regions. What is Multi-Master, essentially? Multi-master is a capability provided as part of Cosmos […]
Read more →
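
As a rough sketch of what the feature involves on the provisioning side (not taken from the post), an account with multiple write regions can be created through the Azure CLI along these lines; the account and resource-group names are placeholders, and the exact --locations syntax varies between CLI versions:

# Assumed example, not the post's commands: create an account with
# writes enabled in two regions. All names below are placeholders.
az cosmosdb create \
  --name my-multimaster-account \
  --resource-group my-resource-group \
  --enable-multiple-write-locations true \
  --locations regionName=eastus failoverPriority=0 isZoneRedundant=false \
  --locations regionName=westeurope failoverPriority=1 isZoneRedundant=false
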
Azure Cosmos DB–Reserved Capacity
Azure Cosmos DB is a planet-scale global document database that has been available to Azure customers on a pay-as-you-go basis. Reserved Capacity is a new long-term, pre-paid billing commitment through which customers can get discounted pricing. Azure Cosmos DB reserved capacity helps you save money by pre-paying for Azure Cosmos DB resources for a period […]
Read more →

Azure Cosmos DB – 429 Too Many Requests
Recently, while doing performance testing on one of the APIs interacting with Cosmos DB, I encountered a problem: the Azure Cosmos DB APIs started returning HTTP status code 429. HTTP status code 429 indicates that too many requests have been received or that the request rate is too large. This error would happen when we have concurrent […]
Read more →
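
Independently of which SDK the API uses, the usual way to cope with throttling is to wait for the interval the service suggests and then retry. Below is a minimal Bash sketch of that pattern, not from the post; the URL is a placeholder, Cosmos DB itself reports the suggested wait in an x-ms-retry-after-ms response header, and the official SDKs can perform these retries for you:

# Assumed illustration of a retry-on-429 loop; not the post's code.
url="https://example-account.documents.azure.com/"   # placeholder endpoint
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o /dev/null -D /tmp/headers.txt -w '%{http_code}' "$url")
  if [ "$status" != "429" ]; then
    echo "Finished with HTTP $status after $attempt attempt(s)"
    break
  fi
  # Fall back to a 1-second wait if no Retry-After header is present.
  delay=$(grep -i '^retry-after:' /tmp/headers.txt | tr -d '\r' | awk '{print $2}')
  echo "Throttled (429); sleeping ${delay:-1}s before retrying"
  sleep "${delay:-1}"
done
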
Azure Database for MariaDB: Public Preview

During Ignite 2018, Microsoft announced the availability of MariaDB support in the Azure Database services. Today it has been opened for public preview for all Azure customers. What is MariaDB? MariaDB is a community-developed fork of the MySQL relational database management system intended to remain free under the GNU GPL. Development is led by some […]
Read more →

Azure Cosmos DB–Setting Up New Database using Azure CLI–Sample
The purpose of this article is to help you with a few commands to provision a new Azure Cosmos DB database instance through the Azure CLI or Azure Cloud Shell. Here is the snippet: <# This Bash script should help you create an Azure Cosmos DB instance using the Azure CLI with a bare minimal configuration #> export […]
Read more →
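
The full script is in the post; as a rough idea of the shape of such a script with the current CLI (resource names, region, and throughput below are placeholders, and the syntax may differ from the post's original snippet):

# Assumed sketch, not the post's script: bare-minimum account, database
# and container. All names are placeholders; the account name must be
# globally unique.
export resourceGroup=my-resource-group
export accountName=my-cosmos-account

az group create --name "$resourceGroup" --location eastus
az cosmosdb create --name "$accountName" --resource-group "$resourceGroup"
az cosmosdb sql database create \
  --account-name "$accountName" \
  --resource-group "$resourceGroup" \
  --name SampleDB
az cosmosdb sql container create \
  --account-name "$accountName" \
  --resource-group "$resourceGroup" \
  --database-name SampleDB \
  --name SampleContainer \
  --partition-key-path /id \
  --throughput 400
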
Document Chunking Strategies: Optimizing RAG Retrieval Quality

Introduction: RAG systems live or die by their chunking strategy. Chunk too large and you waste context window space with irrelevant content. Chunk too small and you lose semantic coherence, making it hard for the LLM to understand context. The right chunking strategy depends on your document types, query patterns, and retrieval approach. This guide […]
Read more →
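
As a toy illustration of the simplest strategy the guide covers, fixed-size chunks with overlap can be produced even from the shell; the file name and sizes are placeholders, and production pipelines normally split on sentence or section boundaries rather than raw word counts:

# Assumed toy example: split a document into fixed-size word chunks
# that overlap, so context is not lost at chunk boundaries.
input=document.txt   # placeholder file
chunk_size=200       # words per chunk
overlap=40           # words shared between consecutive chunks

tr -s '[:space:]' '\n' < "$input" | awk -v size="$chunk_size" -v overlap="$overlap" '
  { words[NR] = $0 }
  END {
    step = size - overlap
    for (start = 1; start <= NR; start += step) {
      chunk = ""
      for (i = start; i < start + size && i <= NR; i++)
        chunk = chunk (chunk == "" ? "" : " ") words[i]
      printf "--- chunk %d (words %d-%d) ---\n%s\n", ++n, start, i - 1, chunk
      if (i > NR) break
    }
  }'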