Vector Search


Binary Quantization & Rescoring: 96% Less Memory, Faster Search

We are excited to share that several new vector quantization capabilities are now available in public preview in MongoDB Atlas Vector Search: support for binary quantized vector ingestion, automatic scalar quantization, and automatic binary quantization and rescoring. Together with our recently released support for scalar quantized vector ingestion, these capabilities will empower developers to scale semantic search and generative AI applications more cost-effectively. For a primer on vector quantization, check out our previous blog post.

Enhanced developer experience with native quantization in Atlas Vector Search

Effective quantization methods, specifically scalar and binary quantization, can now be performed automatically in Atlas Vector Search. This makes it easier and more cost-effective for developers to use Atlas Vector Search to unlock a wide range of applications, particularly those requiring over a million vectors. With the new “quantization” index definition parameters, developers can choose to use full-fidelity vectors by specifying “none,” or they can quantize vector embeddings by specifying the desired quantization type: “scalar” or “binary” (Figure 1). This native quantization capability supports vector embeddings from any model provider, as well as MongoDB’s BinData float32 vector subtype.

Figure 1: New index definition parameters for specifying automatic quantization type in Atlas Vector Search

Scalar quantization, which converts each floating-point value into an integer, is generally used when it's crucial to maintain search accuracy on par with full-precision vectors. Binary quantization, which converts each floating-point value into a single bit (0 or 1), is better suited to scenarios where storage and memory efficiency are paramount and a slight reduction in search accuracy is acceptable. If you’re interested in learning more about this process, check out our documentation.
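To make the new parameter concrete, here is a minimal sketch of an index definition using the “quantization” option described above. The field layout follows the Atlas Vector Search vector-index format; the field name `embedding`, the 1024-dimension count, and the cosine similarity choice are illustrative assumptions.

```python
def vector_index_definition(path: str, dims: int, quantization: str = "none") -> dict:
    """Build an Atlas Vector Search index definition.

    `quantization` is one of "none", "scalar", or "binary", as shown in Figure 1.
    """
    if quantization not in ("none", "scalar", "binary"):
        raise ValueError("quantization must be 'none', 'scalar', or 'binary'")
    return {
        "fields": [
            {
                "type": "vector",
                "path": path,            # document field holding the embedding (illustrative)
                "numDimensions": dims,   # must match your embedding model's output size
                "similarity": "cosine",  # illustrative choice
                "quantization": quantization,
            }
        ]
    }

# Request automatic binary quantization (with rescoring handled by Atlas):
definition = vector_index_definition("embedding", 1024, quantization="binary")
```

Against a live cluster, this definition could then be passed to an index-creation call (for example via the driver's search-index APIs or the Atlas UI); the dict above is only the definition payload.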
Binary quantization with rescoring: Balance cost and accuracy

Compared to scalar quantization, binary quantization further reduces memory usage, leading to lower costs and improved scalability, but also to a decline in search accuracy. To mitigate this, when “binary” is chosen in the “quantization” index parameter, Atlas Vector Search adds an automatic rescoring step: a subset of the top binary vector search results is re-ranked using their full-precision counterparts, ensuring that the final results remain highly accurate despite the initial vector compression. Empirical evidence demonstrates that incorporating a rescoring step when working with binary quantized vectors can dramatically enhance search accuracy, as shown in Figure 2 below.

Figure 2: Combining binary quantization and rescoring retains up to 95% search accuracy

And as Figure 3 shows, in our tests binary quantization reduced processing memory requirements by 96% while retaining up to 95% search accuracy and improving query performance.

Figure 3: Improvements in Atlas Vector Search with the use of vector quantization

It’s worth noting that even though the quantized vectors are used for indexing and search, the full-fidelity vectors are still stored on disk to support rescoring. Retaining the full-fidelity vectors also enables developers to perform exact vector search for experimental, high-precision use cases, such as evaluating the search accuracy of quantized vectors produced by different embedding model providers. For more on evaluating the accuracy of quantized vectors, please see our documentation.

So how can developers make the most of vector quantization? Here are some example use cases that can be made more efficient and scaled effectively with quantized vectors:

- Massive knowledge bases can be used efficiently and cost-effectively for analysis and insight-oriented use cases, such as content summarization and sentiment analysis.
- Unstructured data like customer reviews, articles, audio, and video can be processed and analyzed at much larger scale, lower cost, and faster speed.
- Quantized vectors can enhance the performance of retrieval-augmented generation (RAG) applications: efficient processing sustains query performance over large knowledge bases, while the cost advantage enables a more scalable, robust RAG system, resulting in better customer and employee experiences.
- Developers can easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB’s flexible document model lets developers quickly deploy and compare embedding models’ results without rebuilding the index or provisioning an entirely new data model or set of infrastructure.
- The relevance of search results or context for large language models (LLMs) can be improved by incorporating larger volumes of vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded with the same or different models.

To get started with vector quantization in Atlas Vector Search, see the following developer resources:

- Documentation: Vector Quantization in Atlas Vector Search
- Documentation: How to Measure the Accuracy of Your Query Results
- Tutorial: How to Use Cohere's Quantized Vectors to Build Cost-effective AI Apps With MongoDB
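As a back-of-the-envelope check of the memory figures quoted above: binary quantization stores one bit per dimension instead of a 32-bit float, which by itself accounts for roughly the 96% reduction. The vector count and dimensionality below are illustrative, not taken from the benchmark.

```python
def index_memory_bytes(num_vectors: int, dims: int, bits_per_dim: int) -> float:
    """Raw memory needed to hold the vectors themselves (ignores index overhead)."""
    return num_vectors * dims * bits_per_dim / 8

# One million 1024-dimensional vectors (illustrative numbers):
full_fidelity = index_memory_bytes(1_000_000, 1024, 32)  # float32: 32 bits per dimension
binary = index_memory_bytes(1_000_000, 1024, 1)          # binary:  1 bit per dimension

# 1 - 1/32 = 0.96875, i.e. roughly 96% less memory for the in-memory index,
# while the full-fidelity vectors remain on disk for rescoring.
reduction = 1 - binary / full_fidelity
```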

December 12, 2024

Announcing Hybrid Search Support for LlamaIndex

MongoDB is excited to announce enhancements to our LlamaIndex integration. By combining MongoDB’s robust database capabilities with LlamaIndex’s innovative framework for context-augmented large language models (LLMs), the enhanced MongoDB-LlamaIndex integration unlocks new possibilities for generative AI development. Specifically, it supports vector search (powered by Atlas Vector Search), full-text search (powered by Atlas Search), and hybrid search, enabling developers to blend precise keyword matching with semantic search for more context-aware applications, depending on their use case.

Building AI applications with LlamaIndex

LlamaIndex is one of the world’s leading AI frameworks for building with LLMs. It streamlines the integration of external data sources, allowing developers to combine LLMs with relevant context from various data formats. This makes it ideal for building application features like retrieval-augmented generation (RAG), where accurate, contextual information is critical. LlamaIndex empowers developers to build smarter, more responsive AI systems while reducing the complexities involved in data handling and query management. Advantages of building with LlamaIndex include:

- Simplified data ingestion, with connectors that integrate structured databases, unstructured files, and external APIs, removing the need for manual processing or format conversion.
- Organization of data into structured indexes or graphs, significantly enhancing query efficiency and accuracy, especially when working with large or complex datasets.
- An advanced retrieval interface that responds to natural language prompts with contextually enhanced data, improving accuracy in tasks like question answering, summarization, or data retrieval.
- Customizable APIs that cater to all skill levels: high-level APIs enable quick data ingestion and querying for beginners, while lower-level APIs offer advanced users full control over connectors and query engines for more complex needs.
MongoDB's LlamaIndex integration

Developers can build powerful AI applications using LlamaIndex as a foundational AI framework alongside MongoDB Atlas as the long-term memory database. With MongoDB’s developer-friendly document model and powerful vector search capabilities in MongoDB Atlas, developers can easily store and search vector embeddings for building RAG applications. And because of MongoDB’s low-latency transactional persistence capabilities, developers can do much more with the MongoDB integration in LlamaIndex to build AI applications in an enterprise-grade manner. LlamaIndex's flexible architecture supports customizable storage components, allowing developers to leverage MongoDB Atlas as both a powerful vector store and a key-value store. Using Atlas Vector Search capabilities, developers can:

- Store and retrieve vector embeddings efficiently (llama-index-vector-stores-mongodb)
- Persist ingested documents (llama-index-storage-docstore-mongodb)
- Maintain index metadata (llama-index-storage-index-store-mongodb)
- Store key-value pairs (llama-index-storage-kvstore-mongodb)

Figure adapted from Liu, Jerry and Agarwal, Prakul (May 2023). “Build a ChatGPT with your Private Data using LlamaIndex and MongoDB”. Medium. https://medium.com/llamaindex-blog/build-a-chatgpt-with-your-private-data-using-llamaindex-and-mongodb-b09850eb154c

Adding hybrid and full-text search support

Developers may use different search approaches for different use cases. Full-text search retrieves documents by matching exact keywords or linguistic variations, making it efficient for quickly locating specific terms within large datasets, such as in legal document review where exact wording is critical. Vector search, on the other hand, finds content that is semantically similar, even if it does not contain the same keywords. Hybrid search combines full-text search with vector search to identify both exact matches and semantically similar content.
This approach is particularly valuable in advanced retrieval systems or AI-powered search engines, enabling results that are both precise and aligned with the needs of the end user. With this integration, it is simple for developers to try out powerful retrieval capabilities on their data and improve the accuracy of their AI applications. In the LlamaIndex integration, the MongoDBAtlasVectorSearch class is used for vector search. To enable full-text search, use VectorStoreQueryMode.TEXT_SEARCH in the same class; similarly, to use hybrid search, enable VectorStoreQueryMode.HYBRID. To learn more, check out the GitHub repository.

With the MongoDB-LlamaIndex integration’s support, developers no longer need to navigate the intricacies of Reciprocal Rank Fusion implementation or determine the optimal way to combine vector and text searches; we’ve taken care of the complexities for you. The integration also includes sensible defaults and robust support, ensuring that building advanced search capabilities into AI applications is easier than ever. This means that MongoDB handles the intricacies of storing and querying your vectorized data, so you can focus on building!

We’re excited for you to work with our LlamaIndex integration. Here are some resources to expand your knowledge on this topic:

- Check out how to get started with our LlamaIndex integration
- Build a content recommendation system using MongoDB and LlamaIndex with our helpful tutorial
- Experiment with building a RAG application with LlamaIndex, OpenAI, and our vector database
- Learn how to build with private data using LlamaIndex, guided by one of its co-founders
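To illustrate what the integration now automates, here is a minimal, standalone sketch of Reciprocal Rank Fusion (RRF), one common way to merge a full-text ranking with a vector ranking into a single hybrid result list. This is not the integration's internal code; the document IDs and the conventional constant k=60 are illustrative.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists: each document scores 1 / (k + rank) per list."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

text_results = ["doc1", "doc3", "doc5"]    # e.g. exact keyword matches
vector_results = ["doc2", "doc3", "doc1"]  # e.g. semantically similar matches
fused = reciprocal_rank_fusion([text_results, vector_results])
```

Documents that appear near the top of both lists (like doc1 and doc3 here) rise to the top of the fused ranking, which is exactly the behavior hybrid search aims for.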

October 17, 2024

Vector Quantization: Scale Search & Generative AI Applications

Update 12/12/2024: The upcoming vector quantization capabilities mentioned at the end of this blog post are now available in public preview:

- Support for ingestion and indexing of binary (int1) quantized vectors: gives developers the flexibility to choose and ingest the type of quantized vectors that best fits their requirements.
- Automatic quantization and rescoring: provides a native mechanism for scalar quantization and for binary quantization with rescoring, making it easier for developers to implement vector quantization entirely within Atlas Vector Search.

View the documentation to get started.

We are excited to announce a robust set of vector quantization capabilities in MongoDB Atlas Vector Search. These capabilities will reduce vector sizes while preserving performance, enabling developers to build powerful semantic search and generative AI applications with more scale and at a lower cost. In addition, unlike relational or niche vector databases, MongoDB’s flexible document model, coupled with quantized vectors, allows for greater agility in testing and deploying different embedding models quickly and easily. Support for scalar quantized vector ingestion is now generally available and will be followed by several new releases in the coming weeks. Read on to learn how vector quantization works, and visit our documentation to get started!

The challenges of large-scale vector applications

While the use of vectors has opened up a range of new possibilities, such as content summarization and sentiment analysis, natural language chatbots, and image generation, unlocking insights within unstructured data can require storing and searching through billions of vectors, which can quickly become infeasible.
Vectors are effectively arrays of floating-point numbers representing unstructured information in a way that computers can understand (a single application may store anywhere from a few hundred to billions of them), and as the number of vectors increases, so does the size of the index required to search over them. As a result, large-scale vector-based applications using full-fidelity vectors often have high processing costs and slow query times, hindering their scalability and performance.

Vector quantization for cost-effectiveness, scalability, and performance

Vector quantization, a technique that compresses vectors while preserving their semantic similarity, offers a solution to this challenge. Imagine converting a full-color image into grayscale to reduce storage space on a computer. This involves simplifying each pixel's color information by grouping similar colors into primary color channels or "quantization bins," and then representing each pixel with a single value from its bin. The binned values are then used to create a new grayscale image that is smaller but retains most of the original details, as shown in Figure 1.

Figure 1: Illustration of quantizing an RGB image into grayscale

Vector quantization works similarly: it shrinks full-fidelity vectors into fewer bits to significantly reduce memory and storage costs without compromising the important details. Maintaining this balance is critical, as search and AI applications need to deliver relevant insights to be useful. Two effective quantization methods are scalar (converting each floating-point value into an integer) and binary (converting each floating-point value into a single bit, 0 or 1). Current and upcoming quantization capabilities will empower developers to maximize the potential of Atlas Vector Search. The most impactful benefit of vector quantization is increased scalability and cost savings through reduced computing resources and efficient processing of vectors.
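The two methods just described can be sketched in a few lines. This toy version quantizes over the whole vector at once; production systems calibrate per dimension and handle many edge cases, so treat it purely as an illustration of the float-to-integer and float-to-bit conversions, not as Atlas's implementation.

```python
def scalar_quantize(vector: list[float]) -> list[int]:
    """Map each float onto an integer in 0..255, scaled by the vector's range."""
    lo, hi = min(vector), max(vector)
    scale = (hi - lo) / 255 or 1.0  # guard against a constant vector
    return [round((x - lo) / scale) for x in vector]

def binary_quantize(vector: list[float]) -> list[int]:
    """Keep only the sign of each component: one bit per dimension."""
    return [1 if x > 0 else 0 for x in vector]

v = [0.12, -0.45, 0.88, -0.07]
sq = scalar_quantize(v)  # small integers that preserve relative ordering
bq = binary_quantize(v)  # one bit per dimension
```

Scalar quantization keeps 8 bits of information per dimension (4x smaller than float32), while binary keeps just 1 bit (32x smaller), which is why the latter trades more accuracy for memory and benefits from rescoring.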
And when combined with Search Nodes (MongoDB’s dedicated infrastructure for independent scalability through workload isolation and memory-optimized infrastructure for semantic search and generative AI workloads), vector quantization can further reduce costs and improve performance, even at the highest volume and scale, unlocking more use cases.

"Cohere is excited to be one of the first partners to support quantized vector ingestion in MongoDB Atlas," said Nils Reimers, VP of AI Search at Cohere. "Embedding models, such as Cohere Embed v3, help enterprises see more accurate search results based on their own data sources. We’re looking forward to providing our joint customers with accurate, cost-effective applications for their needs."

In our tests, compared to full-fidelity vectors, BSON-type vectors (MongoDB’s JSON-like binary serialization format for efficient document storage) reduced storage size by 66% (from 41 GB to 14 GB). And as shown in Figures 2 and 3, the tests illustrate significant memory reduction (73% to 96% less) and latency improvements using quantized vectors, where scalar quantization preserves recall performance and binary quantization's recall performance is maintained with rescoring, a process of evaluating a small subset of the quantized outputs against full-fidelity vectors to improve the accuracy of the search results.

Figure 2: Significant storage reduction plus good recall and latency performance with quantization on different embedding models

Figure 3: Remarkable improvement in recall performance for binary quantization when combined with rescoring

In addition, thanks to the reduced cost, vector quantization enables more advanced multi-vector use cases that would previously have been too computationally taxing or cost-prohibitive to implement. For example, vector quantization can help users easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping.
MongoDB’s document model, coupled with quantized vectors, allows for greater agility at lower costs: the flexible document schema lets developers quickly deploy and compare embedding models’ results without rebuilding the index or provisioning an entirely new data model or set of infrastructure. It also helps users further improve the relevance of search results or context for large language models (LLMs) by incorporating vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded with the same or different models.

How to get started, and what’s next

Now, with support for the ingestion of scalar quantized vectors, developers can import and work with quantized vectors from their embedding model providers of choice (such as Cohere, Nomic, Jina, Mixedbread, and others) directly in Atlas Vector Search. Read the documentation and tutorial to get started. And in the coming weeks, additional vector quantization features will equip developers with a comprehensive toolset for building and optimizing applications with quantized vectors:

- Support for ingestion of binary quantized vectors will enable further reduction of storage space, allowing for greater cost savings and giving developers the flexibility to choose the type of quantized vectors that best fits their requirements.
- Automatic quantization and rescoring will provide native capabilities for scalar quantization as well as binary quantization with rescoring in Atlas Vector Search, making it easier for developers to take full advantage of vector quantization within the platform.

With support for quantized vectors in MongoDB Atlas Vector Search, you can build scalable, high-performing semantic search and generative AI applications with flexibility and cost-effectiveness. Check out our documentation and tutorial to get started, and head over to our quick-start guide to begin with Atlas Vector Search today.
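As a closing illustration of the rescoring process described above (a coarse search over compact binary codes, followed by re-ranking a small candidate set against the full-fidelity vectors kept on disk), here is a self-contained sketch. The two-stage flow mirrors the description in the post; the documents, bit codes, and scoring functions are illustrative assumptions, not Atlas internals.

```python
def hamming(a: list[int], b: list[int]) -> int:
    """Cheap distance between two binary codes: count of differing bits."""
    return sum(x != y for x, y in zip(a, b))

def dot(a: list[float], b: list[float]) -> float:
    """Full-precision similarity used in the rescoring stage."""
    return sum(x * y for x, y in zip(a, b))

def search_with_rescoring(query_full, query_bits, docs, num_candidates=3, top_k=1):
    """docs is a list of (doc_id, binary_code, full_fidelity_vector) tuples."""
    # Stage 1: coarse candidate selection using only the binary codes.
    candidates = sorted(docs, key=lambda d: hamming(query_bits, d[1]))[:num_candidates]
    # Stage 2: rescore the small candidate set against full-fidelity vectors.
    rescored = sorted(candidates, key=lambda d: dot(query_full, d[2]), reverse=True)
    return [doc_id for doc_id, _, _ in rescored[:top_k]]

docs = [
    ("docA", [1, 0, 1], [0.9, 0.1, 0.8]),
    ("docB", [1, 0, 0], [0.7, 0.0, 0.1]),
    ("docC", [0, 1, 0], [-0.2, 0.5, -0.3]),
]
top = search_with_rescoring([1.0, 0.0, 1.0], [1, 0, 1], docs, num_candidates=2, top_k=1)
```

Only `num_candidates` full-fidelity comparisons are performed, which is why rescoring recovers most of the lost accuracy while keeping the memory-light binary index in the hot path.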

October 7, 2024

MongoDB.local London 2024: Better Applications, Faster

Since kicking off MongoDB's 2024 series of events in April, we have connected with thousands of customers, partners, and community members in cities around the world, from Mexico City to Mumbai. Yesterday marked the 19th stop of the 2024 MongoDB.local tour, and we had a great time welcoming people from every industry to MongoDB.local London, where we discussed the latest technology trends, celebrated customer innovations, and unveiled product updates that make it easier than ever for developers to build next-generation applications.

Over the past year, MongoDB's more than 50,000 customers have told us that their needs are evolving. They are increasingly focused on three areas:

- Helping developers build faster and more efficiently
- Empowering teams to build AI-powered applications
- Moving from legacy systems to modern platforms

Across all of these areas there is a common need for a solid foundation: each one requires a resilient, scalable, secure, and highly performant database. The updates we shared at MongoDB.local London reflect these priorities. MongoDB is committed to building products that exceed our customers' most stringent requirements and provide the strongest possible foundation for building a wide range of applications, now and in the future. Indeed, at yesterday's event, Sahir Azam, MongoDB's Chief Product Officer, discussed the foundational role data plays in his keynote. He also shared the latest advance in our partner ecosystem: an AI solution powered by MongoDB, Amazon Web Services, and Anthropic that makes it easier for customers to deploy AI applications for customer service.
MongoDB 8.0: the best version of MongoDB yet

The biggest news at .local London was the general availability of MongoDB 8.0, which delivers significant performance improvements, lower scaling costs, and additional scalability, resilience, and data security capabilities for the world's most popular document database. Architectural optimizations in MongoDB 8.0 have significantly reduced memory usage and query times, and MongoDB 8.0 offers more efficient batch processing than previous versions. Specifically, MongoDB 8.0 delivers 36% better read throughput, 56% faster bulk writes, and 20% faster concurrent writes during data replication. In addition, MongoDB 8.0 can handle higher volumes of time series data and perform complex aggregations more than 200% faster, while using fewer resources and reducing costs. Last (but not least!), Queryable Encryption now supports range queries, keeping data secure while enabling powerful analytics. To learn more about the product announcements from MongoDB.local London, designed to accelerate application development, simplify AI innovation, and speed up developer upskilling, read on!

Accelerating application development

Improved scaling and elasticity in MongoDB Atlas

New improvements to the MongoDB Atlas control plane let customers scale clusters faster, respond to resource demands in real time, and optimize performance, all while reducing operational costs.
First, our new granular resource provisioning and scaling features, including independent shard scaling and extended storage and IOPS on Azure, allow customers to optimize resources precisely where they are needed. Second, Atlas customers will experience faster cluster scaling, with scaling times up to 50% faster, by scaling clusters in parallel by node type. Finally, MongoDB Atlas users will enjoy more responsive auto-scaling, with up to 5x improved responsiveness thanks to enhancements to our scaling algorithms and infrastructure. These improvements are rolling out to all Atlas customers, who should begin seeing the benefits immediately.

IntelliJ plugin for MongoDB

Announced in private preview, the MongoDB plugin for IntelliJ is designed to functionally improve how developers work with MongoDB in IntelliJ IDEA, one of the most popular IDEs among Java developers. The plugin allows Java developers to write and test Java queries faster, receive proactive performance insights, and reduce runtime errors right in their IDE. By deepening the integration between database and IDE, JetBrains and MongoDB have partnered to deliver a seamless experience for their shared user base and unlock their potential to build modern applications faster. Sign up for the private preview here.

MongoDB Copilot Participant for VS Code (public preview)

Now in public preview, the new MongoDB Participant for GitHub Copilot integrates domain-specific AI capabilities directly into a chat experience in the MongoDB extension for VS Code.
The participant is deeply integrated with the MongoDB extension, enabling it to generate accurate MongoDB queries (and export them to application code), describe collection schemas, and answer questions with up-to-date access to the MongoDB documentation, all without the developer having to leave their coding environment. These capabilities significantly reduce the need to switch context between domains, letting developers stay in their flow and focus on building innovative applications.

Multi-cluster support for the MongoDB Enterprise Kubernetes Operator

Ensure high availability, resilience, and scale for MongoDB deployments running in Kubernetes with added support for deploying MongoDB and Ops Manager across multiple Kubernetes clusters. Users can now deploy replica sets, sharded clusters (in public preview), and Ops Manager across local or geographically distributed Kubernetes clusters for better deployment resilience, greater flexibility, and improved disaster recovery. This approach enables multi-site availability, resilience, and scalability within Kubernetes, capabilities that were previously available only outside Kubernetes for MongoDB. To learn more, check out the documentation.

MongoDB Atlas Search and Vector Search are now available via the Atlas CLI and Docker

The local development experience for MongoDB Atlas is now available. Use the MongoDB Atlas CLI and Docker to build with MongoDB Atlas in your preferred local environment, and easily access features like Atlas Search and Atlas Vector Search throughout the software development lifecycle.
The Atlas CLI provides a unified, familiar terminal-based interface that lets you deploy and build with MongoDB Atlas in your favorite development environment, locally or in the cloud. If you develop with Docker, you can now also use Docker and Docker Compose to easily integrate Atlas into your local and continuous integration environments with the Atlas CLI. Say goodbye to repetitive tasks by automating the lifecycle of your development and test environments, and focus on building application features with full-text search, AI and semantic search, and more.

Simplifying AI innovation

Reduce costs and increase scale in Atlas Vector Search

We announced vector quantization capabilities in Atlas Vector Search. By reducing memory usage (by up to 96%) and speeding up vector retrieval, vector quantization lets customers build a wide range of AI and search applications at greater scale and lower cost. Available now, support for scalar quantized vector ingestion allows customers to import and work with quantized vectors from their embedding model providers of choice, directly in Atlas Vector Search. Soon, additional vector quantization capabilities, including automatic quantization, will give customers a comprehensive toolset for building and optimizing large-scale AI and search applications in Atlas Vector Search.

Additional integrations with popular AI frameworks

Ship your next AI-powered project faster with MongoDB, whatever your framework or LLM. AI technologies evolve rapidly.
That makes it important to build and iterate on high-performing applications quickly, using your preferred stack as your needs and the available technologies evolve. MongoDB's enhanced suite of integrations with LangChain, LlamaIndex, Microsoft Semantic Kernel, AutoGen, Haystack, Spring AI, the ChatGPT Retrieval Plugin, and more makes it easier than ever to build the next generation of applications on MongoDB.

Improving developer upskilling

New MongoDB learning badges

Faster to earn and more focused than a certification, MongoDB's free learning badges demonstrate your commitment to continuous learning and your willingness to prove your knowledge of a specific topic. Follow the learning path, pick up new skills, and earn a digital badge to display on LinkedIn. Check out the two new learning badges on generative AI!

- Building generative AI applications: learn to build innovative generative AI applications with Atlas Vector Search, including retrieval-augmented generation (RAG) applications.
- Deploying and evaluating generative AI applications: take your applications from creation to full deployment, focusing on performance optimization and evaluating results.

Learn more

To learn more about MongoDB's recent announcements and updates, visit our What's New page and all of our product update blog posts. Happy building!

October 3, 2024

MongoDB.local London 2024: Better Applications, Faster

Since we kicked off MongoDB's 2024 series of events in April, we have connected with thousands of customers, partners, and community members in cities around the world, from Mexico City to Mumbai. Yesterday, at the nineteenth stop of the 2024 MongoDB.local tour, we were delighted to welcome people from every industry to MongoDB.local London, where we discussed the latest technology trends, celebrated customer innovations, and unveiled product updates that make it remarkably simple for developers to build next-generation applications.

Over the past year, MongoDB's more than 50,000 customers have told us that their needs are changing. They are increasingly focused on three areas:

- Helping developers build faster and more efficiently
- Enabling teams to build AI-powered applications
- Moving from legacy systems to modern platforms

Across all of these areas, the common requirement is a solid foundation: each one calls for a resilient, scalable, secure, and highly performant database. The updates discussed at MongoDB.local London reflect these priorities. MongoDB is committed to ensuring that its products are built to meet, and exceed, our customers' most stringent requirements, and that they provide the strongest possible foundation for building a wide range of applications, now and in the future. During yesterday's event, Sahir Azam, MongoDB's Chief Product Officer, also discussed the fundamental role played by data in his keynote. He spoke as well about the latest advance in our partner ecosystem: an AI solution powered by MongoDB, Amazon Web Services, and Anthropic that makes it easier for customers to deploy generative AI customer-support applications.
MongoDB 8.0: the best version of MongoDB ever

The biggest news at .local London was the announcement of the general availability of MongoDB 8.0, which delivers significant performance improvements, lower scaling costs, and additional scalability, resilience, and data security capabilities for the world's most popular document database. Architectural optimizations in MongoDB 8.0 have significantly reduced memory usage and query times, and MongoDB 8.0 offers more efficient batch processing than previous versions. In particular, MongoDB 8.0 delivers 36% better read throughput, 56% faster bulk writes, and 20% faster concurrent writes during data replication. MongoDB 8.0 can also handle higher volumes of time series data and run complex aggregations more than 200% faster, with lower resource usage and costs. Last (but not least!), Queryable Encryption now supports range queries, keeping data secure while enabling powerful analytics. To learn more about the product announcements from MongoDB.local London, designed to accelerate application development, simplify AI innovation, and speed up developer upskilling, read on.

Accelerating application development

Improved scalability and elasticity in MongoDB Atlas

New improvements to the MongoDB Atlas control plane allow customers to scale clusters faster, respond to resource demands in real time, and optimize performance, all while reducing operational costs.
Innanzitutto, le nostre nuove funzionalità granulari di provisioning e scalabilità delle risorse, tra cui scalabilità indipendente degli shard, storage esteso e IOPS su Azure, consentono ai clienti di ottimizzare le risorse esattamente dove necessario. In secondo luogo, i clienti Atlas sperimenteranno una scalabilità dei cluster più rapida con tempi di scalabilità fino al 50% più rapidi, scalando i cluster in parallelo per tipo di nodo. Infine, gli utenti di MongoDB Atlas godranno di una scalabilità automatica più reattiva, con un miglioramento di 5 volte della reattività grazie ai miglioramenti dei nostri algoritmi e della nostra infrastruttura di scalabilità. Questi miglioramenti sono stati estesi a tutti i clienti Atlas, che dovrebbero iniziare immediatamente a riscontrare i vantaggi. Plugin IntelliJ per MongoDB Annunciato in anteprima privata, il plugin MongoDB per IntelliJ è progettato per migliorare funzionalmente il modo in cui gli sviluppatori lavorano con MongoDB in IntelliJ IDEA, uno degli IDE più popolari tra gli sviluppatori Java. Il plugin consente agli sviluppatori Java aziendali di scrivere e testare le query Java più velocemente, ricevere insight proattivi sulle prestazioni e ridurre gli errori di runtime direttamente nel loro IDE. Migliorando l'integrazione tra database e IDE, JetBrains e MongoDB hanno instaurato una collaborazione per offrire un'esperienza senza interruzioni per la loro base di utenti condivisa e consentire loro di creare applicazioni moderne più velocemente. Iscriviti all'anteprima privata qui . Copilot MongoDB Participant per VS Code (public preview) Ora in public preview, il nuovo MongoDB Participant for GitHub Copilot integra le funzionalità di AI specifiche del dominio direttamente con un'esperienza simile a una chat nell'estensione MongoDB per VS Code . 
Il participant è profondamente integrato con l'estensione MongoDB, consentendo la generazione di query MongoDB accurate (e l'esportazione nel codice dell'applicazione), la descrizione di schemi di raccolta e la risposta alle domande con accesso aggiornato alla documentazione di MongoDB, senza richiedere allo sviluppatore di lasciare il proprio ambiente di codifica. Queste funzionalità riducono in modo significativo la necessità di passare da un dominio all'altro, consentendo agli sviluppatori di rimanere nel loro flusso e di concentrarsi sulla creazione di applicazioni innovative. Supporto multicluster per l'operatore MongoDB Enterprise Kubernetes Garantire elevata disponibilità, resilienza e scalabilità per le distribuzioni di MongoDB in esecuzione in Kubernetes tramite un supporto aggiuntivo per l'implementazione di MongoDB e Ops Manager su più cluster Kubernetes. Ora gli utenti hanno la possibilità di implementare ReplicaSets, Sharded Clusters (in public preview) e Ops Manager su cluster Kubernetes locali o distribuiti geograficamente per una maggiore resilienza di implementazione, flessibilità e disaster recovery. Questo approccio consente la disponibilità, la resilienza e la scalabilità su più siti all'interno di Kubernetes, funzionalità che in precedenza erano disponibili solo al di fuori di Kubernetes per MongoDB. Per saperne di più, consulta la documentazione . MongoDB Atlas Search e Vector Search sono ora generalmente disponibili tramite Atlas CLI e Docker L'esperienza di sviluppo locale per MongoDB Atlas è ora disponibile a livello generale. Usa MongoDB Atlas CLI e Docker per creare con MongoDB Atlas nel tuo ambiente locale preferito e accedi facilmente a funzionalità come Atlas Search e Atlas Vector Search durante l'intero ciclo di vita dello sviluppo del software. 
L'Atlas CLI fornisce un'interfaccia unificata e familiare basata su terminale, che consente di implementare e creare con MongoDB Atlas nel tuo ambiente di sviluppo preferito, localmente o nel cloud. Se sviluppi con Docker, ora puoi anche utilizzare Docker e Docker Compose per integrare facilmente Atlas nei tuoi ambienti di integrazione locale e continua con Atlas CLI . Evita il lavoro ripetitivo automatizzando il ciclo di vita dei tuoi ambienti di sviluppo e di test e concentrati sulla creazione di funzionalità applicative con la ricerca full-text, l'AI e la ricerca semantica e altro ancora. Semplificare l'innovazione dell'AI Riduci i costi e aumenta la scalabilità in Atlas Vector Search Abbiamo annunciato le funzionalità di quantizzazione vettoriale in Atlas Vector Search . Riducendo la memoria (fino al 96%) e velocizzando il recupero dei vettori, la quantizzazione vettoriale consente ai clienti di creare un'ampia gamma di applicazioni di AI e di ricerca su scala più elevata e a costi inferiori. Generalmente disponibile ora, il supporto per l'inserimento di vettori quantizzati scalari consente ai clienti di importare e lavorare senza problemi con vettori quantizzati dai loro fornitori di modelli di incorporamento preferiti, direttamente in Atlas Vector Search. Presto, funzionalità aggiuntive di quantizzazione vettoriale, inclusa la quantizzazione automatica, forniranno ai clienti un set di strumenti completo per creare e ottimizzare applicazioni di AI e di ricerca su larga scala in Atlas Vector Search. Ulteriori integrazioni con i framework AI più diffusi Consegna più velocemente il tuo prossimo progetto basato sull'AI con MongoDB, indipendentemente dal framework o LLM prescelto. Le tecnologie AI stanno progredendo rapidamente e questo rende importante costruire e scalare rapidamente applicazioni performanti e utilizzare lo stack preferito in base all'evoluzione dei requisiti e delle tecnologie disponibili. 
La suite avanzata di integrazioni di MongoDB con LangChain, LlamaIndex, Microsoft Semantic Kernel, AutoGen, Haystack, Spring AI, ChatGPT Retrieval Plugin e altro ancora rende estremamente semplice la creazione della prossima generazione di applicazioni su MongoDB . Migliorare le competenze degli sviluppatori Nuovi badge di apprendimento MongoDB Più veloci da raggiungere e più mirati di una certificazione, i Learning Badge gratuiti di MongoDB dimostrano il tuo impegno nell'apprendimento continuo e nel dimostrare la tua conoscenza su un argomento specifico. Segui il percorso di apprendimento, acquisisci nuove competenze e ottieni un badge digitale da mostrare su LinkedIn. Scopri i due badge di apprendimento di AI generativa di nuova generazione! Creazione di app di AI generativa: impara a creare app di AI generativa innovative con Atlas Vector Search, comprese le app retrieval-augmented generation (RAG). Distribuzione e valutazione di applicazioni di AI generativa : porta le tue applicazioni dalla creazione alla distribuzione completa, concentrandoti sull'ottimizzazione delle prestazioni e sulla valutazione dei risultati. Ulteriori Informazioni Per saperne di più sui recenti annunci e aggiornamenti di prodotto di MongoDB, consulta la nostra pagina delle novità di prodotto e tutti i post del nostro blog sugli aggiornamenti dei prodotti . Buon sviluppo!

October 3, 2024

MongoDB.local London 2024: Better Applications, Faster

This post is also available in: Deutsch, Français, Español, Português, Italiano, 한국어, 简体中文.

Since we kicked off MongoDB’s series of 2024 events in April, we’ve connected with thousands of customers, partners, and community members in cities around the world—from Mexico City to Mumbai. Yesterday marked the nineteenth stop of the 2024 MongoDB.local tour, and we had a blast welcoming folks across industries to MongoDB.local London, where we discussed the latest technology trends, celebrated customer innovations, and unveiled product updates that make it easier than ever for developers to build next-gen applications.

Over the past year, MongoDB’s more than 50,000 customers have been telling us that their needs are changing. They’re increasingly focused on three areas:

- Helping developers build faster and more efficiently
- Empowering teams to create AI-powered applications
- Moving from legacy systems to modern platforms

Across these areas, there’s a common need for a solid foundation: each requires a resilient, scalable, secure, and highly performant database. The updates we shared at MongoDB.local London reflect these priorities. MongoDB is committed to ensuring that our products are built to exceed our customers’ most stringent requirements, and that they provide the strongest possible foundation for building a wide range of applications, now and in the future.

Indeed, during yesterday’s event, Sahir Azam, MongoDB’s Chief Product Officer, discussed the foundational role data plays in his keynote address. He also shared the latest advancement from our partner ecosystem: an AI solution powered by MongoDB, Amazon Web Services, and Anthropic that makes it easier for customers to deploy gen AI customer care applications.
MongoDB 8.0: The best version of MongoDB ever

The biggest news at .local London was the general availability of MongoDB 8.0, which delivers significant performance improvements, reduces scaling costs, and adds further scalability, resilience, and data security capabilities to the world’s most popular document database.

Architectural optimizations in MongoDB 8.0 have significantly reduced memory usage and query times, and MongoDB 8.0 has more efficient batch processing capabilities than previous versions. Specifically, MongoDB 8.0 features 36% better read throughput, 56% faster bulk writes, and 20% faster concurrent writes during data replication. In addition, MongoDB 8.0 can handle higher volumes of time series data and can perform complex aggregations more than 200% faster, with lower resource usage and costs. Last (but hardly least!), Queryable Encryption now supports range queries, ensuring data security while enabling powerful analytics.

For more on MongoDB.local London’s product announcements—which are designed to accelerate application development, simplify AI innovation, and speed developer upskilling—please read on!

Accelerating application development

Improved scaling and elasticity in MongoDB Atlas

New enhancements to MongoDB Atlas’s control plane allow customers to scale clusters faster, respond to resource demands in real time, and optimize performance, all while reducing operational costs.

First, our new granular resource provisioning and scaling features, including independent shard scaling and extended storage and IOPS on Azure, allow customers to optimize resources precisely where needed. Second, Atlas customers will experience faster cluster scaling, with up to 50% quicker scaling times achieved by scaling clusters in parallel by node type. Finally, MongoDB Atlas users will enjoy more responsive auto-scaling, with a 5x improvement in responsiveness thanks to enhancements in our scaling algorithms and infrastructure.
These enhancements are being rolled out to all Atlas customers, who should start seeing the benefits immediately.

IntelliJ plugin for MongoDB

Announced in private preview, the MongoDB for IntelliJ Plugin is designed to enhance how developers work with MongoDB in IntelliJ IDEA, one of the most popular IDEs among Java developers. The plugin allows enterprise Java developers to write and test Java queries faster, receive proactive performance insights, and reduce runtime errors, right in their IDE.

By deepening the database-to-IDE integration, JetBrains and MongoDB have partnered to deliver a seamless experience for their shared user base and unlock their potential to build modern applications faster. Sign up for the private preview here.

MongoDB Copilot Participant for VS Code (public preview)

Now in public preview, the new MongoDB Participant for GitHub Copilot integrates domain-specific AI capabilities directly into a chat-like experience in the MongoDB Extension for VS Code.

The participant is deeply integrated with the MongoDB extension, enabling it to generate accurate MongoDB queries (and export them to application code), describe collection schemas, and answer questions with up-to-date access to MongoDB documentation, all without requiring the developer to leave their coding environment. These capabilities significantly reduce context switching between domains, enabling developers to stay in their flow and focus on building innovative applications.

Multicluster support for the MongoDB Enterprise Kubernetes Operator

Ensure high availability, resilience, and scale for MongoDB deployments running in Kubernetes through added support for deploying MongoDB and Ops Manager across multiple Kubernetes clusters.
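Multi-cluster deployments are configured declaratively through the operator’s custom resources. As a rough sketch only (based on the operator’s MongoDBMultiCluster resource; cluster names and member counts are illustrative placeholders, and required fields such as Ops Manager credentials are omitted for brevity), a replica set spread across two Kubernetes clusters might be described like this:

```yaml
# Hypothetical sketch: a MongoDB replica set spanning two Kubernetes clusters.
# Cluster names and member counts are placeholders; see the operator docs
# for the full, authoritative resource specification.
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: multi-cluster-replica-set
spec:
  type: ReplicaSet
  version: "8.0.0"
  clusterSpecList:
    - clusterName: cluster-1.example.com   # two members in the first cluster
      members: 2
    - clusterName: cluster-2.example.com   # one member in the second
      members: 1
```

Spreading members across clusters this way is what gives a deployment multi-site resilience: losing one Kubernetes cluster leaves a majority of the replica set running elsewhere.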
Users can now deploy ReplicaSets, Sharded Clusters (in public preview), and Ops Manager across local or geographically distributed Kubernetes clusters for greater deployment resilience, flexibility, and disaster recovery. This approach enables multi-site availability, resilience, and scalability within Kubernetes—capabilities that were previously available only outside of Kubernetes for MongoDB. To learn more, check out the documentation.

MongoDB Atlas Search and Vector Search are now generally available via the Atlas CLI and Docker

The local development experience for MongoDB Atlas is now generally available. Use the MongoDB Atlas CLI and Docker to build with MongoDB Atlas in your preferred local environment, and easily access features like Atlas Search and Atlas Vector Search throughout the entire software development lifecycle.

The Atlas CLI provides a unified, familiar terminal-based interface that lets you deploy and build with MongoDB Atlas in your preferred development environment, locally or in the cloud. If you build with Docker, you can now also use Docker and Docker Compose to easily integrate Atlas into your local and continuous integration environments with the Atlas CLI. Avoid repetitive work by automating the lifecycle of your development and testing environments, and focus on building application features with full-text search, AI and semantic search, and more.

Simplifying AI innovation

Reduce costs and increase scale in Atlas Vector Search

We announced vector quantization capabilities in Atlas Vector Search. By reducing memory usage (by up to 96%) and making vectors faster to retrieve, vector quantization allows customers to build a wide range of AI and search applications at higher scale and lower cost. Generally available now, support for scalar quantized vector ingestion lets customers seamlessly import and work with quantized vectors from their embedding model providers of choice, directly in Atlas Vector Search.
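Where does the 96% figure come from? Quantization shrinks each vector component from a 32-bit float down to an 8-bit integer (scalar quantization) or a single bit (binary quantization), and one bit per dimension is roughly 97% smaller than thirty-two. The NumPy sketch below illustrates the arithmetic on a 1536-dimensional embedding; it is a conceptual illustration only, not Atlas Vector Search’s internal implementation.

```python
import numpy as np

def scalar_quantize(v: np.ndarray) -> tuple[np.ndarray, float, float]:
    """Map each float32 component onto one of 256 uint8 buckets (4x smaller)."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / 255.0
    q = np.round((v - lo) / scale).astype(np.uint8)
    return q, lo, scale  # lo and scale are kept so values can be approximated back

def binary_quantize(v: np.ndarray) -> np.ndarray:
    """Keep only the sign of each component: 1 bit instead of 32."""
    return np.packbits(v > 0)

rng = np.random.default_rng(0)
embedding = rng.standard_normal(1536).astype(np.float32)  # typical embedding width

q8, lo, scale = scalar_quantize(embedding)
q1 = binary_quantize(embedding)

print(embedding.nbytes)  # 6144 bytes at full fidelity (1536 x 4 bytes)
print(q8.nbytes)         # 1536 bytes after scalar quantization (75% smaller)
print(q1.nbytes)         # 192 bytes after binary quantization (~96.9% smaller)
```

The binary form trades accuracy for that compression, which is why rescoring the top candidates against the retained full-fidelity vectors matters at search time.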
Coming soon, additional vector quantization features, including automatic quantization, will equip customers with a comprehensive toolset for building and optimizing large-scale AI and search applications in Atlas Vector Search.

Additional integrations with popular AI frameworks

Ship your next AI-powered project faster with MongoDB, no matter your framework or LLM of choice. AI technologies are advancing rapidly, making it important to build and scale performant applications quickly, and to use your preferred stack as your requirements and the available technologies evolve. MongoDB’s enhanced suite of integrations with LangChain, LlamaIndex, Microsoft Semantic Kernel, AutoGen, Haystack, Spring AI, the ChatGPT Retrieval Plugin, and more makes it easier than ever to build the next generation of applications on MongoDB.

Advancing developer upskilling

New MongoDB Learning Badges

Faster to earn and more targeted than a certification, MongoDB’s free Learning Badges show your commitment to continuous learning and prove your knowledge of a specific topic. Follow the learning path, gain new skills, and get a digital badge to show off on LinkedIn. Check out the two new gen AI learning badges:

- Building gen AI Apps: Learn to create innovative gen AI apps with Atlas Vector Search, including retrieval-augmented generation (RAG) apps.
- Deploying and Evaluating gen AI Apps: Take your apps from creation to full deployment, focusing on optimizing performance and evaluating results.

Learn more

To learn more about MongoDB’s recent product announcements and updates, check out our What’s New product announcements page and all of our blog posts about product updates. Happy building!

October 3, 2024

MongoDB.local London 2024: Mejores aplicaciones, más rápidas

Desde que iniciamos el serial de eventos de MongoDB en abril 2024, nos conectamos con miles de clientes, socios y afiliados a la comunidad en ciudades de todo el mundo, desde Ciudad de México hasta Mumbai. Ayer marcó la decimonovena atajada de la gira MongoDB.local 2024, y nos divertimos mucho al dar la bienvenida a personas de todos los sectores a MongoDB.local London, donde discutimos las últimas tendencias tecnológicas, celebramos las innovaciones de los clientes y presentamos actualizaciones de productos que hacen que sea más fácil que nunca para los desarrolladores crear aplicaciones de próxima generación. Durante el último año, los más de 50,000 clientes de MongoDB nos dijeron que sus necesidades están cambiando. Se centran cada vez más en tres áreas: Ayudar a los desarrolladores a crear de forma más rápida y eficiente Capacitar a los equipos para crear aplicaciones impulsadas por IA Pasar de sistemas heredados a plataformas modernas En todas estas áreas, existe una necesidad común de una base de datos estable: cada una requiere una base de datos resistente, escalable, segura y de alto rendimiento. Las actualizaciones que compartimos en MongoDB.local London reflejan estas prioridades. MongoDB se compromete a garantizar que nuestros productos estén diseñados para superar los requisitos más estrictos de nuestros clientes y que proporcionen la base más estable posible para crear una amplia gama de aplicaciones, ahora y en el futuro. De hecho, durante el evento, Sahir Azam, Director de Producto de MongoDB, discutió el papel fundamental que juegan los datos en su discurso de apertura. También compartió los últimos avances de nuestro socio ecosistema, una solución de IA impulsada MongoDB por, Amazon Web Services y Anthropic que facilita a los clientes la implementación de aplicaciones de atención al cliente de IA de generación. 
MongoDB 8.0: La mejor versión de MongoDB hasta la fecha Las noticias más importantes en .local Londres fue la disponibilidad general de MongoDB 8.0 , que proporciona mejoras significativas en el rendimiento, costos de escalado reducidos y agrega escalabilidad, resiliencia y capacidades de seguridad de datos adicionales a la base de datos de documentos más popular del mundo. Las optimizaciones arquitectónicas de MongoDB 8.0 redujeron significativamente el uso de memoria y los tiempos de consulta, y MongoDB 8.0 tiene capacidades de procesamiento por lotes más eficientes que las versiones anteriores. En concreto, MongoDB 8.0 presenta un rendimiento de lectura un 36% mejor, escrituras masivas un 56% más rápidas y escrituras concurrentes un 20% más rápidas durante la replicación de datos. Además, MongoDB 8.0 puede manejar volúmenes más altos de datos de series temporales y puede realizar agregaciones complejas más de un 200% más rápido, con menor uso de recursos y costos. Por último (¡pero no menos importante!), Queryable Encryption ahora admite consultas de rango, lo que garantiza la seguridad de los datos al tiempo que permite un análisis potente. Para obtener más información sobre los anuncios de productos de MongoDB.local London, que están diseñados para acelerar el desarrollo de aplicaciones, simplificar la innovación en IA y acelerar la mejora de las habilidades de los desarrolladores, ¡siga leyendo! Aceleramiento del desarrollo de aplicaciones Mejora del escalamiento y la elasticidad de las capacidades MongoDB Atlas Las nuevas mejoras en el plano de control de MongoDB Atlas permiten a los clientes escalar clústeres más rápido, responder a las demandas de recursos en tiempo real y optimizar el rendimiento, al tiempo que reducen los costos operativos. 
En primer lugar, nuestro nuevo aprovisionamiento de recursos granular y escalamiento característico, que incluye el escalamiento de shard independiente y el almacenamiento extendido e IOPS en Azure, permiten a los clientes optimizar los recursos precisamente donde sea necesario. En segundo lugar, Atlas clientes experimentarán un escalamiento cluster más rápido con tiempos de escalamiento hasta un 50% más rápidos por escalamiento cluster en paralelo por tipo de nodo. Por último, MongoDB Atlas usuarios disfrutarán de un autoescalamiento más responsive, con una mejora de 5 veces en la capacidad de respuesta gracias a las mejoras en nuestros algoritmos de escalamiento e infraestructura. Estas mejoras se están implementando para todos los clientes de Atlas, que deberían comenzar a ver los beneficios de inmediato. Plugin de IntelliJ para MongoDB Anunciado en versión preliminar privada, el complemento MongoDB para IntelliJ está diseñado para mejorar funcionalmente la forma en que los desarrolladores trabajan con MongoDB en IntelliJ IDEA, uno de los IDE más populares entre los desarrolladores de Java. El complemento permite a los desarrolladores empresariales de Java escribir y probar consultas de Java más rápido, recibir información proactiva sobre el rendimiento y reducir los errores de tiempo de ejecución directamente en su IDE. Al mejorar la integración de la base de datos al IDE, JetBrains y MongoDB tienen la capacidad de ofrecer una experiencia fluida para su base de usuarios compartida y desbloquear su potencial para crear aplicaciones modernas más rápido. Registrar para la vista previa privada aquí . Participante de copiloto de MongoDB para VS Code (versión preliminar pública) Ahora en versión preliminar pública, el nuevo MongoDB Participant para Github Copilot integra capacidades de IA específicas del dominio directamente con una experiencia similar a un chat en la extensión MongoDB para VS Code . 
El participante está profundamente integrado con la extensión MongoDB, lo que permite la generación de consultas de MongoDB precisas (y exportarlas al código de la aplicación), describir collection esquema, y responder preguntas con acceso actualizado a MongoDB documentación sin necesidad de que el desarrollador abandone su entorno de codificación. Estas capacidades reducen significativamente la necesidad de cambiar de contexto entre dominios, lo que permite a los desarrolladores mantener en su flujo y centrar en la creación de aplicaciones innovadoras. Compatibilidad con varios clústeres para el operador de Kubernetes de MongoDB Enterprise Garantice el alta disponibilidad, la resiliencia y la escalabilidad de las implementaciones de MongoDB que se ejecutan en Kubernetes a través de una asistencia técnica adicional para implementar MongoDB y Ops Manager en múltiples Kubernetes cluster. Los usuarios ahora tienen la capacidad de implementar ReplicaSets, cluster fragmentado (en versión preliminar pública) y Ops Manager en distributed Kubernetes cluster local o geográfica para una mayor resistencia de implementación, flexibilidad y recuperación ante desastres. Este enfoque permite la disponibilidad, la resiliencia y la escalabilidad de varios sitios dentro de Kubernetes, capacidades que anteriormente solo estaban disponibles fuera de Kubernetes para MongoDB. Para obtener más información, consulte la documentación . MongoDB Atlas Search y Vector Search ahora están disponibles de forma general a través de Atlas CLI y Docker La experiencia de desarrollo local de MongoDB Atlas ya está disponible para el público en general. Emplee la MongoDB Atlas CLI y el Docker para crear con MongoDB Atlas en su entorno local preferido y acceder fácilmente a características como la Atlas búsqueda y el de Atlas Vector Search a lo largo de todo el ciclo de vida del desarrollo de software. 
La CLI de Atlas proporciona una interfaz unificada y familiar basada en terminal que le permite implementar y compilar con MongoDB Atlas en su entorno de desarrollo preferido, localmente o en la nube. Si compilas con Docker, ahora también puedes usar Docker y Docker Compose para integrar fácilmente Atlas en tus entornos de integración local y continua con la CLI de Atlas . Evite el trabajo repetitivo automatizando el ciclo de vida de sus entornos de desarrollo y pruebas y concentrar en crear características de aplicaciones con búsqueda de texto completo, IA y búsqueda semántica, y más. Simplificación de la innovación en IA Reduzca los costos y aumente la escala en Atlas Vector Search Anunciamos capacidades de cuantificación vectorial en Atlas Vector Search . Al reducir la memoria (hasta en un 96%) y hacer que los vectores sean más rápidos de recuperar, la cuantificación vectorial permite a los clientes crear una amplia gama de aplicaciones de AI y búsqueda a mayor escala y menor costo. La asistencia técnica para la ingesta de vectores cuantificados escalares permite a los clientes importar y trabajar sin problemas con vectores cuantificados de los proveedores de modelos de incrustación de su elección, directamente en Atlas Vector Search. Próximamente, la cuantización vectorial adicional característica, incluida la cuantificación automática, equipará a los clientes con un conjunto completo de herramientas para crear y optimizar aplicaciones de búsqueda e IA a gran escala en Atlas Vector Search. Integraciones adicionales con el popular marco de IA Envíe su próximo proyecto impulsado por IA más rápido con MongoDB, sin importar su marco o LLM de elección. Las tecnologías de AI avanzan rápidamente, por lo que es importante crear y escalar aplicaciones de alto rendimiento rápidamente, y emplear su pila preferida a medida que evolucionan sus requisitos y las tecnologías disponibles. 
El conjunto mejorado de integraciones de MongoDB con LangChain, LlamaIndex, Microsoft Semantic Kernel, AutoGen, Haystack, Spring AI, ChatGPT Retrieval Plugin y más hace que sea más fácil que nunca crear la próxima generación de aplicaciones en MongoDB . Avance de la mejora de las competencias de los desarrolladores Nuevas insignias de aprendizaje de MongoDB Más rápidas de lograr y más específicas que una certificación, las insignias de aprendizaje gratis de MongoDB muestran su compromiso con el aprendizaje continuo y con demostrar sus conocimientos sobre un tema específico. Siga la ruta de aprendizaje, adquiera nuevas habilidades y obtenga una insignia digital para presumir en LinkedIn. ¡Echa un vistazo a las dos insignias de aprendizaje de IA de nueva generación! Creación de aplicaciones de IA de generación: aprenda a crear aplicaciones de IA de generación innovadoras con Atlas Vector Search, incluidas las aplicaciones de generación aumentada de recuperación (RAG). Despliegue y evaluación de gen AI Apps: Lleve sus aplicaciones desde la creación hasta el despliegue completo, centrar en la optimización del rendimiento y la evaluación de los resultados. Obténga más información Para obtener más información sobre los anuncios y actualizaciones recientes de productos de MongoDB, consulte nuestra página de anuncios de productos, novedades y todas nuestras publicaciones de blog sobre actualizaciones de productos . ¡Feliz desarrollo!

October 3, 2024

MongoDB.local London 2024:更快、更好的应用程序

自今年 4 月启动 MongoDB 2024 系列活动以来,我们已与世界各地的数千名客户、合作伙伴和社区成员建立了联系,包括墨西哥城到孟买等城市。昨天是 2024 年 MongoDB.local 巡回活动的第十九站,我们很高兴欢迎各行各业的人们来到 MongoDB.local 伦敦大会。在那里我们讨论了最新的技术趋势,庆祝了客户创新,并发布了产品更新,使开发者可以比以往更轻松地构建下一代应用程序。 在过去的一年里,MongoDB 的 50,000 多名客户一直在告诉我们他们的需求正在发生变化。他们越来越关注三个领域: 全面助力开发者更快、更高效地构建应用 帮助团队创建 AI 赋能的应用程序 从传统系统迁移到现代平台 这些领域都需要一个坚实的基础:每个领域都需要一个有弹性、可扩展、安全和高性能的数据库。 我们在 MongoDB.local 伦敦大会上分享的更新反映了这些优先事项。MongoDB 致力于确保我们的产品超越客户最严格的要求,并为现在和将来构建广泛的应用程序奠定最坚实的基础。 在昨天的活动中,MongoDB 的首席产品官 Sahir Azam 在他的主题演讲中讨论了数据所发挥的基础作用。他还分享了我们合作伙伴生态系统的最新进展, 这是一个由 MongoDB、Amazon Web Services 和 Anthropic 提供支持的 AI 解决方案,使客户能够更轻松地部署生成式人工智能客户服务应用程序。 MongoDB 8.0:有史以来最好的 MongoDB 版本 .local 伦敦大会上的最大新闻是 MongoDB 8.0 正式发布。它显著提高了性能,降低了扩展成本,并为全球最受欢迎的文档数据库增加了额外的可扩展性、韧性和数据安全功能。 MongoDB 8.0 的架构优化大大降低了内存使用率和查询时间,而且 MongoDB 8.0 比以前的版本具有更高效的批处理能力。具体来说,MongoDB 8.0 的读取吞吐量提高了 36%,批量写入速度提高了 56%,数据复制期间的并发写入速度提高了 20%。此外,MongoDB 8.0 可以处理更大量的时间序列数据,并且可以将执行复杂聚合的速度提高 200% 以上,同时降低资源使用量和成本。最后, Queryable Encryption 现在支持范围查询,确保数据安全,同时实现强大的分析功能。 如需详细了解 MongoDB.local 伦敦大会的产品公告(旨在加速应用程序开发、简化 AI 创新和加快开发者的技能提升),请继续阅读! 
加快应用程序开发 提高 MongoDB Atlas 功能的扩展性和弹性 MongoDB Atlas 控制平面的全新增强功能使客户能够更快地扩展集群、实时响应资源需求并优化性能,同时降低运营成本。 首先,我们新的颗粒度资源预配和扩展功能(包括 Azure 上的独立分片扩展和扩展存储及 IOPS)允许客户在需要时精确优化资源。其次,通过按节点类型并行扩展集群,Atlas 客户将体验到更快的集群扩展,扩展时间最多可缩短 50%。 最后,MongoDB Atlas 用户将享受响应更快的自动扩展,由于我们的扩展算法和基础设施的增强,响应速度将提高 5 倍。这些增强功能正在向所有 Atlas 客户推出,他们将立即开始受益。 适用于 MongoDB 的 IntelliJ 插件 MongoDB for IntelliJ 插件已在私人预览版中发布,旨在从功能上增强开发者在 IntelliJ IDEA(Java 开发者中最流行的 IDE 之一)中使用 MongoDB 的方式。该插件允许企业 Java 开发者在其 IDE 中更快地编写和测试 Java 查询、获得主动的性能洞察并减少运行时错误。 通过增强数据库到 IDE 的集成,JetBrains 和 MongoDB 合作,为其共享用户群提供无缝体验,并释放他们更快构建现代应用程序的潜力。 在此处注册个人预览版 。 MongoDB Copilot Participant for VS Code (公共预览版) 目前,新的 MongoDB Participant for GitHub Copilot 已进入公开预览阶段,它将特定领域的 AI 功能直接与 MongoDB Extension for VS Code 中的聊天式体验相结合。 参与者与 MongoDB 扩展深度集成,允许生成准确的 MongoDB 查询(并将其导出到应用程序代码)、描述集合模式并通过访问最新的 MongoDB 文档来回答问题,而无需开发者离开他们的编码环境。这些功能大大减少了域之间的上下文切换,使开发者能够保持工作流,专注于构建创新应用程序。 MongoDB Enterprise Kubernetes Operator 的多集群支持 通过增加对跨多个 Kubernetes 集群部署 MongoDB 和 Ops Manager 的支持,确保在 Kubernetes 中运行的 MongoDB 部署具有高可用性、韧性和扩展性。 用户现在可以在本地或地理分布的 Kubernetes 集群中部署 ReplicaSet、分片集群(公开预览版)和 Ops Manager,以实现更高的部署韧性、灵活性和灾难恢复能力。这种方法可以在 Kubernetes 中实现多站点可用性、韧性和可扩展性,这些功能以前仅在 Kubernetes 之外为 MongoDB 提供。要了解更多信息, 请查看文档 。 MongoDB Atlas Search 和 Vector Search 现已通过 Atlas CLI 和 Docker 正式发布 MongoDB Atlas 的本地开发体验现已公开。使用 MongoDB Atlas CLI 和 Docker 在您喜欢的本地环境中构建 MongoDB Atlas,并在整个软件开发生命周期中轻松访问 Atlas Search 和 Atlas Vector Search 等功能。Atlas CLI 提供了统一且熟悉的基于终端的界面,允许您在您喜欢的开发环境中(本地或云端)使用 MongoDB Atlas 进行部署和构建。 如果您使用 Docker 进行构建,现在还可以使用 Docker 和 Docker Compose 通过 Atlas CLI 轻松地将 Atlas 集成到您的本地和持续集成环境中。通过自动化开发和测试环境的生命周期来避免重复性工作,并专注于通过全文搜索、 AI 和语义搜索等功能构建应用程序功能。 简化 AI 创新 在 Atlas Vector Search 中降低成本并扩大规模 我们宣布了 Atlas Vector Search 中的向量量化功能。通过减少内存(最多减少 96%)和提高向量检索速度,向量量化使客户能够以更高的规模和更低的成本构建各种人工智能和搜索应用。 标量量化向量导入功能现已全面推出,客户能够直接在 Atlas Vector Search 中操作,从其选择的嵌入模型中无缝地导入和使用量化向量。自动量化等其他向量量化功能也将很快上线,客户将能够利用 Atlas Vector Search 全面的工具集,构建和优化各种大规模 AI 和搜索应用程序。 与流行 AI 框架的额外集成 无论您选择什么框架或 LLM,都可以使用 MongoDB 更快地交付您的下一个 AI 项目。AI 
技术正在迅速发展,因此快速构建和扩展高性能应用程序,并随着您的需求和可用技术的发展而使用您首选的堆栈非常重要。 MongoDB 与 LangChain、LlamaIndex、Microsoft Semantic Kernel、AutoGen、Haystack、Spring AI、ChatGPT 检索插件等增强的集成套件,使得在 MongoDB 上构建下一代应用程序变得前所未有的简单 。 促进开发者技能提升 新的 MongoDB 学习徽章 与认证相比, MongoDB 的免费学习徽章更快获得 ,也更有针对性,它表明您致力于不断学习,并证明您对特定主题的了解。遵循学习路径,掌握新技能,并获得可在 LinkedIn 上炫耀的数字徽章。 查看两个新的生成式人工智能学习徽章! 构建生成式人工智能应用 :学习使用 Atlas Vector Search 创建创新的生成式人工智能应用,包括检索增强生成 (RAG) 应用。 部署和评估生成式人工智能应用 :从创建到全面部署应用程序,专注于优化性能和评估结果。 了解详情 要了解有关 MongoDB 最新产品公告和更新的更多信息,请查看我们的新产品 公告页面以 及有关产品 更新的所有博客文章 。快乐构建!

October 3, 2024

MongoDB.local London 2024: Bessere Anwendungen, schneller

Seit wir im April die MongoDB-Veranstaltungsreihe 2024 gestartet haben, haben wir Kontakte zu Tausenden von Kunden, Partnern und Community-Mitgliedern in Städten auf der ganzen Welt geknüpft – von Mexiko-Stadt bis Mumbai. Gestern war der 19. Stopp der 2024 MongoDB.local Tour, und wir hatten viel Spaß, Leute aus allen Branchen bei MongoDB.local London zu begrüßen, wo wir die neuesten Technologietrends diskutierten, Kundeninnovationen feierten und Produktaktualisierungen vorstellten, die es Entwicklern einfacher denn je machen, Anwendungen der nächsten Generation zu erstellen. Im letzten Jahr haben uns über 50.000 MongoDB-Kunden mitgeteilt, dass sich ihre Anforderungen ändern. Sie konzentrieren sich zunehmend auf drei Bereiche: Entwicklern helfen, schneller und effizienter zu entwickeln Teams befähigen, KI-gestützte Anwendungen zu erstellen Umstellung von Altsystemen auf moderne Plattformen In all diesen Bereichen besteht ein gemeinsamer Bedarf an einer soliden Grundlage: Alle benötigen eine belastbare, skalierbare, sichere und hochleistungsfähige Datenbank. Die Updates, die wir bei MongoDB.local London geteilt haben, spiegeln diese Prioritäten wider. MongoDB setzt sich dafür ein, dass die Produkte so konzipiert sind, dass sie die strengsten Anforderungen unserer Kunden übertreffen und die bestmögliche Grundlage für die Entwicklung einer breiten Palette von Anwendungen bieten – jetzt und in der Zukunft. Sahir Azam, Chief Product Officer von MongoDB, erläuterte während der gestrigen Veranstaltung in seinem Hauptvortrag die grundlegende Rolle, die Daten spielen. Er stellte außerdem die neuesten Fortschritte in unserem Partner-Ökosystem vor: eine KI-Lösung auf Basis von MongoDB, Amazon Web Services und Anthropic , die es Kunden erleichtert, KI-Anwendungen für den Kundendienst einzusetzen. MongoDB 8.0: Die beste Version von MongoDB aller Zeiten Die größten Neuigkeiten bei .local In London wurde MongoDB 8.0 allgemein verfügbar gemacht. 
The new release delivers major performance improvements, reduces scaling costs, and adds scalability, resilience, and data security capabilities to the world's most popular document database. Architectural optimizations in MongoDB 8.0 have significantly reduced memory usage and query times, and MongoDB 8.0 offers more efficient batch processing than previous versions. Specifically, MongoDB 8.0 features 36% better read throughput, 56% faster bulk writes, and 20% faster concurrent writes during data replication. In addition, MongoDB 8.0 can handle larger volumes of time series data and can perform complex aggregations more than 200% faster, with lower resource usage and costs. Last (but not least!), Queryable Encryption now supports range queries, ensuring data security while enabling powerful analytics. Read on to learn more about the product announcements from MongoDB.local London, which are designed to accelerate application development, simplify AI innovation, and speed developer upskilling. Accelerating application development Improved scaling and elasticity in MongoDB Atlas New enhancements to MongoDB Atlas's control plane enable customers to scale clusters faster, respond to resource demands in real time, and optimize performance, all while reducing operational costs. First, with our new granular resource provisioning and scaling capabilities, including independent shard scaling and extended storage and IOPS on Azure, customers can optimize resources precisely where they are needed.
Second, Atlas customers benefit from faster cluster scaling, with up to 50% quicker scaling times achieved by scaling clusters in parallel by node type. And finally, MongoDB Atlas users can enjoy more responsive auto-scaling, with a 5x improvement in responsiveness thanks to enhancements to our scaling algorithms and infrastructure. These improvements are being rolled out to all Atlas customers, who can start seeing the benefits immediately. IntelliJ plugin for MongoDB Announced in private preview, the MongoDB for IntelliJ plugin is designed to functionally enhance the way developers work with MongoDB in IntelliJ IDEA, one of the most popular IDEs among Java developers. The plugin enables enterprise Java developers to write and test Java queries faster, receive proactive performance insights, and reduce runtime errors right in their IDE. By enhancing the database-to-IDE integration, JetBrains and MongoDB have partnered to deliver a seamless experience for their shared user base and unlock their potential to build modern applications faster. Sign up for the private preview here . MongoDB Copilot Participant for VS Code (public preview) Now in public preview, the new MongoDB Participant for GitHub Copilot integrates domain-specific AI capabilities directly into a chat-like experience in the MongoDB extension for VS Code . The participant is deeply integrated with the MongoDB extension, enabling it to generate accurate MongoDB queries (and export them to application code), describe collection schemas, and answer questions with up-to-date access to MongoDB documentation, all without the developer having to leave their coding environment.
These capabilities significantly reduce the need for context switching between domains, allowing developers to stay in their flow and focus on building innovative applications. Multi-cluster support for the MongoDB Enterprise Kubernetes Operator Ensure high availability, resilience, and scale for MongoDB deployments running in Kubernetes with added support for deploying MongoDB and Ops Manager across multiple Kubernetes clusters. Users can now deploy replica sets, sharded clusters (in public preview), and Ops Manager across local or geographically distributed Kubernetes clusters for greater deployment resilience, flexibility, and disaster recovery. This approach enables multi-site availability, resilience, and scale within Kubernetes, capabilities previously available only outside of Kubernetes for MongoDB. To learn more, check out the documentation . MongoDB Atlas Search and Vector Search are now generally available via the Atlas CLI and Docker The local development experience for MongoDB Atlas is now generally available. Use the MongoDB Atlas CLI and Docker to build with MongoDB Atlas in your preferred local environment, and easily access capabilities like Atlas Search and Atlas Vector Search throughout the entire software development lifecycle. The Atlas CLI provides a unified, familiar terminal-based interface that lets you deploy and build with MongoDB Atlas in your preferred development environment, locally or in the cloud. If you build with Docker, you can now also use Docker and Docker Compose to easily integrate Atlas into your local and continuous integration environments with the Atlas CLI.
Avoid repetitive work by automating the lifecycle of your development and testing environments, and focus on building application features with full-text search, AI and semantic search, and more. Simplifying AI innovation Reduce costs and increase scale in Atlas Vector Search We announced vector quantization capabilities in Atlas Vector Search. By reducing memory (by up to 96%) and making vector retrieval faster, vector quantization enables customers to build a wide range of AI and search applications at greater scale and lower cost. Now generally available, support for scalar quantized vector ingestion lets customers seamlessly import and work with quantized vectors from their embedding model providers of choice, directly in Atlas Vector Search. Coming soon, additional vector quantization capabilities, including automatic quantization, will equip customers with a comprehensive toolset for building and optimizing large-scale AI and search applications in Atlas Vector Search. Additional integrations with popular AI frameworks Ship your next AI-powered project faster with MongoDB, regardless of your framework or LLM of choice. AI technologies are advancing rapidly, so it is important to be able to quickly build and scale performant applications and to use your preferred stack as your requirements and the available technologies evolve. With MongoDB's enhanced suite of integrations with LangChain, LlamaIndex, Microsoft Semantic Kernel, AutoGen, Haystack, Spring AI, the ChatGPT Retrieval Plugin, and more, it's easier than ever to build the next generation of applications on MongoDB .
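To make the memory savings behind quantization concrete, here is a minimal, illustrative Python sketch of scalar (float32 to int8) and binary (float32 to 1-bit) quantization. It is a toy model of the idea only, not Atlas's implementation: storing one bit per dimension instead of 32 is where the roughly 96% memory reduction quoted above comes from.

```python
def scalar_quantize(vec):
    """Map each float to an 8-bit integer over the vector's value range."""
    lo, hi = min(vec), max(vec)
    step = (hi - lo) / 255 if hi > lo else 1.0
    return [round((x - lo) / step) for x in vec], lo, step

def scalar_dequantize(codes, lo, step):
    """Approximate reconstruction of the original floats."""
    return [lo + c * step for c in codes]

def binary_quantize(vec):
    """Keep only the sign of each dimension: 1 bit instead of 32."""
    return [1 if x > 0 else 0 for x in vec]

vec = [0.12, -0.40, 0.33, -0.05]
codes, lo, step = scalar_quantize(vec)
approx = scalar_dequantize(codes, lo, step)
bits = binary_quantize(vec)

# float32 = 32 bits/dim; int8 = 8 bits/dim (4x smaller);
# binary = 1 bit/dim (32x smaller, i.e. ~97% less memory).
print(bits)  # [1, 0, 1, 0]
```

Scalar quantization keeps each value within one quantization step of the original, which is why search accuracy stays close to full precision; binary quantization discards magnitude entirely, which is why the rescoring step described earlier matters.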
Accelerating developer upskilling New MongoDB Learning Badges Faster to earn and more targeted than a certification, MongoDB's free Learning Badges show your commitment to continuous learning and prove your knowledge of a specific topic. Follow the learning path, gain new skills, and get a digital badge to show off on LinkedIn. Check out the two new gen AI Learning Badges! Building gen AI Apps : Learn how to build innovative gen AI apps with Atlas Vector Search, including retrieval-augmented generation (RAG) apps. Deploying and Evaluating gen AI Apps : Take your apps from creation to full deployment, focusing on optimizing performance and evaluating results. Read more To learn more about MongoDB's latest product announcements and updates, visit our What's New page and all of our product updates blog posts . Happy building!

October 3, 2024


Top Use Cases for Text, Vector, and Hybrid Search

Search is how we discover new things. Whether you’re looking for a pair of new shoes, the latest medical advice, or insights into corporate data, search provides the means to unlock the truth. Search habits—and the accompanying end-user expectations—have evolved along with changes to the search experiences offered by consumer apps like Google and Amazon. The days of the standard 10 blue links may well be behind us, as new paradigms like vector search and generative AI (gen AI) have upended long-held search norms. But are all forms of search created equal, or should we be seeking out the right “flavor” of search for specific jobs? In this blog post, we will define and dig into the various flavors of search, including text, vector and AI-powered search, and hybrid search, and discuss when to use each, with sample use cases where one type of search might be superior to the others. Information retrieval revolutionized with text search The concept of text search has been baked into user behavior since the early days of the web, with rudimentary text-box entry and 10 blue link results ranked by text relevance to the initial query. This behavior and its associated business model have produced trillions in revenue and have become one of the fiercest battlegrounds across the internet . Text search allows users to quickly find specific information within a large set of data by entering keywords or phrases. When a query is entered, the text search engine scans through indexed documents to locate and retrieve the most relevant results based on the keywords. Text search is a good solution for queries requiring exact matches, where the overarching meaning isn't as critical. Some of the most common uses include: Catalog and content search: Using the search bar to find specific products or content based on keywords from customer inquiries.
For example, a customer searching for "size 10 men trainers" or “installation guide” can instantly find the exact items they’re looking for, like how Nextar tapped into Atlas Search to enable physical retailers to create online catalogs. In-application search: This is well-suited for organizations with straightforward offerings that want to make it easier for users to locate key resources but don’t require advanced features like semantic retrieval or contextual re-ranking. For instance, if a user searches for "songs key of G," they can quickly receive relevant materials. This streamlines asset retrieval, allowing users to focus on the task at hand, and boosts overall satisfaction. For a company like Yousician , Atlas Search enabled their 20 million monthly active users to tackle their music lessons with ease. Customer 360: Unifying data from different sources to create a single, holistic view. Consolidated information such as user preferences, purchase history, and interaction data can be used to enhance business visibility and simplify the management, retrieval, and aggregation of user data. Consider a support agent searching for all information related to customer “John Doe." They can quickly access relevant attributes and interaction history, ensuring more accurate and efficient service. Helvetia achieved this after migrating to MongoDB and using Atlas Search to deliver a single, 360-degree real-time view across all customer touchpoints and insurance products. AI and a new paradigm with vector search With advances in technology, vector search has emerged to help solve the challenge of providing relevant results even when the user may not know what they’re looking for. Vector search allows you to take any type of media or content, convert it into a vector using machine learning algorithms, and then search to find results similar to the target term.
The similarity aspect is determined by converting your data into numerical high-dimensional vectors, and then calculating the distance between them to determine relevance: the closer the vectors, the higher the relevance. There is a wide range of practical, powerful use cases powered by vector search—notably semantic search and retrieval-augmented generation (RAG) for gen AI. Semantic search focuses on meaning and prioritizes user intent by deciphering not just what users type but why they're searching, in order to provide more accurate and context-oriented search results. Some examples of semantic search include: Content/knowledge base search: Vast amounts of organizational data, structured and unstructured, with hidden insights, can benefit significantly from semantic search. Questions like “What’s our remote work policy?” can return accurate results even when the source materials do not contain the “remote” keyword, but instead use “return to office,” “hybrid,” or other related terms. A real-world example of content search is the National Film and Sound Archive of Australia , which uses Atlas Vector Search to power semantic search across petabytes of text, audio, and visual content in its collections. Recommendation engines: Understanding users’ interests and intent is a strong competitive advantage, as when Netflix provides a personalized selection of shows and movies based on your watch history, or Amazon recommends products based on your purchase history. This is particularly powerful in e-commerce, media & entertainment, financial services, and product/service-oriented industries where the customer experience tightly influences the bottom line. A success story is Delivery Hero , which leverages vector search-powered real-time recommendations to increase customer satisfaction and revenue. Anomaly detection: Identifying and preventing fraud, security breaches, and other system anomalies is paramount for all organizations.
By grouping similar vectors and using vector search to identify outliers, potential threats can be detected early, enabling timely responses. Companies like VISO TRUST and Extrac are among the innovators that build their core offerings using semantic search for security and risk management. With the rise of large language models (LLMs), vector search is increasingly becoming essential in gen AI application development. It augments LLMs by providing domain-specific context outside of what the LLMs “know,” ensuring the relevance and accuracy of the gen AI output. In this case, the semantic search outputs are used to enhance RAG. By providing relevant information from a vector database, vector search helps the RAG model generate responses that are more contextually relevant. By grounding the generated text in factual information, vector search helps reduce hallucinations and improve the accuracy of the response. Some common RAG applications are chatbots and virtual assistants, which provide users with relevant responses and carry out tasks based on the user query, delivering an enhanced user experience. Two real-world examples of such chatbot implementations are from our customers Okta and Kovai . Another popular application is using RAG to help generate content like articles, blog posts, scripts, code, and more, based on user prompts or data. This significantly accelerates content production, allowing organizations including Novo Nordisk and Scalestack to save time and produce content at scale, all at an accuracy level that was not possible without RAG. Beyond RAG, an emerging use of vector search is in agentic systems . Such a system is an architecture encompassing one or more AI agents with autonomous decision-making capabilities, able to access and use various system components and resources to achieve defined objectives while adapting to environmental feedback.
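The distance-based retrieval that underpins all of these use cases can be illustrated with a small, self-contained sketch. It uses cosine similarity over toy 3-dimensional "embeddings"; a real deployment would use model-generated vectors with hundreds or thousands of dimensions and an approximate nearest neighbor index rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    """Higher is closer: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; a real system would produce these with an embedding model.
docs = {
    "remote work policy":  [0.9, 0.1, 0.0],
    "hybrid office rules": [0.8, 0.3, 0.1],
    "quarterly earnings":  [0.0, 0.2, 0.9],
}

def vector_search(query_vec, k=2):
    """Rank documents by cosine similarity to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# A query like "What's our work-from-home policy?" embeds near the
# first two documents even though it shares no keywords with them.
print(vector_search([0.85, 0.2, 0.05]))
# ['remote work policy', 'hybrid office rules']
```

This is exactly why semantic search can match "return to office" content to a "remote work" query: relevance is computed from vector proximity, not shared keywords.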
Vector search enables efficient and semantically meaningful information retrieval in these systems, facilitating relevant context for LLMs, optimized tool selection, semantic understanding, and improved relevance ranking.

Hybrid search: The best of both worlds

Hybrid search combines the strengths of text search with the advanced capabilities of vector search to deliver more accurate and relevant search results. It shines in scenarios that demand both precision (where text search excels) and recall (where vector search excels), and where user queries range from simple to complex, spanning both keyword and natural language queries. Hybrid search delivers a more comprehensive, flexible information retrieval process, helping RAG models access a wider range of relevant information. For example, in a customer support context, hybrid search can ensure that the RAG model retrieves not only documents containing exact keywords but also semantically similar content, resulting in more informative and helpful responses. Hybrid search can also reduce information overload by prioritizing the most relevant results, allowing RAG models to focus on the most critical information and leading to faster, more accurate responses and a better user experience.

Powering your AI and search applications with MongoDB

As your organization continues to innovate in the rapidly evolving technology ecosystem, building robust AI and search applications that support customer, employee, and stakeholder experiences can deliver powerful competitive advantages. With MongoDB, you can efficiently deploy full-text search, vector search, and hybrid search capabilities. Start building today: simplify your developer experience while increasing impact with MongoDB’s fully managed, secure vector database, integrated with a vast AI partner ecosystem that includes all major cloud providers, generative AI model providers, and system integrators.
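The result-merging step at the heart of hybrid search can be sketched with reciprocal rank fusion (RRF), one common method for combining a keyword ranking with a vector ranking. This is an illustrative sketch, not necessarily the specific fusion algorithm Atlas uses:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked result lists into one.

    Each document scores 1 / (k + rank) per list it appears in;
    k=60 is the conventional constant from the original RRF paper.
    Documents ranked well by either list rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from the two search modes.
text_results = ["doc_a", "doc_c", "doc_d"]     # exact-keyword (text) ranking
vector_results = ["doc_b", "doc_a", "doc_c"]   # semantic (vector) ranking

fused = reciprocal_rank_fusion([text_results, vector_results])
print(fused[0])  # → "doc_a", ranked highly by both search modes
```

Note how `doc_a`, which appears near the top of both lists, outranks documents that only one mode found, which is exactly the precision-plus-recall behavior hybrid search is after.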
Head over to our quick-start guide to get started with Atlas Vector Search today.

September 16, 2024