What is #Raft? Raft is a consensus algorithm for managing replicated data in distributed systems. Its goal is to ensure that a group of nodes can work together to maintain consistent, reliable data, and it serves as an alternative to the more complex #Paxos algorithm.

Raft includes leader election: one node in the cluster is elected leader, and the leader manages the replication of data entries to the other nodes. If the leader fails or becomes disconnected, a new leader is elected through a distributed consensus process. This approach eliminates the need for a separate #Zookeeper-like coordination system and provides automated high availability with strong consistency guarantees.

Raft vs. Paxos? Both algorithms work like a way for a group of friends to make decisions together, even when some are not present at the same time. The key differences lie in their complexity and ease of implementation; Raft has gained popularity specifically for its clarity.

Raft has been widely adopted and implemented in various distributed systems and platforms. Notable users and implementations include Redpanda Data (based on #Seastar), the #TiKV database, RedisRaft (under development), etcd, RethinkDB, Baidu's braft (built on the Apache Software Foundation's brpc), and others. At Arium, we use Raft to orchestrate our gaming machines.

For C++, #RaftLib is a good implementation, and eBay's #NuRaft and Redis's Raft (developed based on Willem-Hendrik Thiart's work) are good alternatives as well.

libraft: https://lnkd.in/d9AJd_fa
redislabs: https://lnkd.in/dz5Cdany
NuRaft: https://lnkd.in/dJW8sydM

#tech #programming #distributed #system #cpp
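To make the leader-election idea concrete, here is a toy sketch of the majority-vote step in Go. It is a simplification under stated assumptions: `requestVote` and `electLeader` are illustrative names invented for this example, and real Raft additionally compares terms and log indices before granting a vote.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// requestVote simulates a peer's response to a RequestVote RPC:
// here a peer grants its vote only if it has not voted this term.
// (Real Raft also checks the candidate's term and log freshness.)
func requestVote(peerAlreadyVoted bool) bool {
	return !peerAlreadyVoted
}

// electLeader runs one simplified election round over an n-node
// cluster and reports whether the candidate won a strict majority.
func electLeader(n int, peerVoted []bool) bool {
	votes := 1 // the candidate votes for itself
	for _, voted := range peerVoted {
		if requestVote(voted) {
			votes++
		}
	}
	return votes > n/2
}

func main() {
	// Raft staggers elections with randomized timeouts (e.g. 150-300 ms)
	// so that competing candidates rarely collide.
	timeout := 150*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
	fmt.Println("election timeout:", timeout)

	// 5-node cluster: the candidate plus 4 peers, none of which has voted.
	fmt.Println("won election:", electLeader(5, []bool{false, false, false, false}))
}
```

The randomized timeout is the key trick: whichever follower times out first becomes a candidate and usually wins before anyone else even starts an election.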
Pooya Eimandar’s Post
-
Grok-1, the large language model from xAI, is now open source under the Apache License 2.0. It's a big deal for developers and researchers, who can now dive into this 314-billion-parameter model and do some serious innovation. Let's see where this takes us. https://x.ai/blog/grok-os
-
At Cheerio AI 📣 , NGINX serves as our go-to tool for reverse proxying, load balancing, HTTP caching, and TLS/SSL termination. Recently, however, there has been some drama in the NGINX repository. NGINX is a neat product, but the essential features are in the paid version. For open-source developers like us, this creates a dilemma: when we seek to contribute new features to the open-source NGINX repository, we often find our pull requests rejected because the features already exist in, or are exclusive to, NGINX Plus. As a result, developers are forking the repository and maintaining their own versions with the additional features they want. What do you use at your organisation as an alternative to NGINX? #nginx #opensource #opensourcecommunity #github #loadbalancer #reverseproxy
-
🔍 Excited to share my latest blog post on SkipLists and lock-free concurrency in Rust! In this deep dive, we explore: 1. What makes SkipLists efficient for concurrent operations. 2. How Rust's crossbeam-skiplist crate implements lock-free algorithms. 3. The power of Compare-And-Swap operations and epoch-based reclamation. Whether you're building high-performance systems or just curious about advanced concurrent data structures, this post offers valuable insights into the mechanics behind efficient, scalable solutions. Check it out and let me know your thoughts! #Rust #Concurrency #DataStructures #PerformanceOptimization #SystemDesign
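The Compare-And-Swap primitive behind lock-free structures like crossbeam's SkipList is language-agnostic. A minimal sketch in Go using `sync/atomic` shows the read-compute-commit-retry loop on a simple counter (a deliberately tiny stand-in for a SkipList node update; `casIncrement` is an illustrative name, not any crossbeam API):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// casIncrement bumps the counter with a Compare-And-Swap loop:
// read the current value, compute the new one, and commit only if
// no other goroutine changed it in between. On a lost race, retry.
func casIncrement(counter *int64) {
	for {
		old := atomic.LoadInt64(counter)
		if atomic.CompareAndSwapInt64(counter, old, old+1) {
			return // our update was published atomically
		}
		// CAS failed: another goroutine won the race; loop and retry.
	}
}

func main() {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			casIncrement(&counter)
		}()
	}
	wg.Wait()
	// No mutex anywhere, yet no increment is lost.
	fmt.Println(atomic.LoadInt64(&counter))
}
```

A lock-free SkipList applies the same pattern to pointer swings between nodes, with epoch-based reclamation deciding when a detached node is safe to free.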
-
The latest update for #Elastic includes "From App #Search to #Elasticsearch — Tap into the future of search" and "Agentic RAG on Dell AI Factory with NVIDIA and Elasticsearch Vector Database". #Logging #DevOps https://lnkd.in/d3SsUnZ
Elastic (opsmatters.com)
-
New Post: GitHub’s latest AI tool that can automatically fix code vulnerabilities - https://lnkd.in/g5WZKtvG - It’s a bad day for bugs. Earlier today, Sentry announced its AI Autofix feature for debugging production code, and now, a few hours later, GitHub is launching the first beta of its code-scanning autofix feature for finding and fixing security vulnerabilities during the coding process. This new feature combines the real-time capabilities of GitHub’s … - #news #business #world -------------------------------------------------- Download: Stupid Simple CMS - https://lnkd.in/g4y9XFgR -------------------------------------------------- or download at SourceForge - https://lnkd.in/gNqB7dnp
GitHub’s latest AI tool that can automatically fix code vulnerabilities (shipwr3ck.com)
-
Something I hear in my conversations with developers every day is "we don't need Pinecone yet". While we're one of the few companies that can handle hundreds of billions of embeddings for the most demanding, high-volume workloads in the world, there is a ton of value at the smaller scale too. What they don't realize is that they can actually use Pinecone as much as they want for FREE! Most companies have use cases that are well under 100k vectors. Why not leverage the industry leader and avoid managing any of your own infrastructure? Do you really want to do a migration later when your project is a success? If you have a #RAG or semantic search use case, try us out! https://lnkd.in/eqTgyNvE #openai #genai #semanticsearch #vectordb #vectordatabase #search #mistral #langchain #llamaindex
Supporting our growing number of free users | Pinecone (pinecone.io)
-
I’ve recently written an article on creating an agent system where each of the agents can remember and learn from every interaction. The tech stack includes #ApacheKafka, #OpenSearch, #Valkey, and Amazon Bedrock models. I find the concept of long-term memory exciting, along with how we can achieve it with #RAG. Curious to hear if others have experience in this area or opinions about it. https://lnkd.in/e9S8yUhd
Developing memory-rich AI systems with Valkey™, OpenSearch® and RAG (aiven.io)
-
There is a new LLM & RAG framework in town that you must know about, as it might replace LangChain and LlamaIndex ↓

The new RAG framework aims to be the PyTorch library for LLM applications. It prioritizes:
- simplicity
- modularity
- robustness
- a readable codebase
...over the out-of-the-box one-liners that are hard to understand and extend, which we often encounter when working with LangChain and LlamaIndex.

𝗦𝗼, 𝘄𝗵𝗼 𝗶𝘀 𝘁𝗵𝗲 𝗻𝗲𝘄 𝗸𝗶𝗱 𝗶𝗻 𝘁𝗼𝘄𝗻? → 𝘓𝘪𝘨𝘩𝘵𝘙𝘈𝘎

It was initiated by Li Yin, who started implementing LLM solutions before they were cool and realized how limited one is when building custom solutions. She highlighted that each use case is unique in its data, business logic, and user experience; thus, no library can provide out-of-the-box solutions for all of them.

Ultimately, LightRAG aims to provide a robust, clean codebase you can 100% trust, understand, and extend, with the ultimate goal of quickly customizing your own LLM, RAG, and agent solutions.

If you are into LLMs, RAG, and agents, consider checking it out on GitHub and supporting it with a ⭐️ (or even contributing) ↓
🔗 𝘓𝘪𝘨𝘩𝘵𝘙𝘈𝘎: https://lnkd.in/d-fcDZ3A

#machinelearning #mlops #datascience

💡 Follow me for daily content on production ML and MLOps engineering.
-
🔄 𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆 𝘃𝘀. 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘀𝗺: 𝗠𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝘁𝗵𝗲 𝗔𝗿𝘁 𝗼𝗳 𝗠𝗼𝗱𝗲𝗿𝗻 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴

In the world of software development, terms like concurrency and parallelism are often used interchangeably, yet they represent distinct concepts with unique implications for how applications are designed and executed. Understanding these differences is crucial for optimizing performance and making informed architectural decisions.

𝗗𝗲𝗳𝗶𝗻𝗶𝗻𝗴 𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘀𝗺:

𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆: Concurrency is about dealing with multiple tasks at the same time but not necessarily executing them simultaneously. It’s like having multiple tabs open in your browser: you switch between them frequently, giving the illusion that they are all active at once.

𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘀𝗺: Parallelism, on the other hand, involves executing multiple tasks at the exact same time, typically using multiple processors or cores. Think of it as cooking a meal with several chefs working on different dishes simultaneously.

𝗞𝗲𝘆 𝗗𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀:

𝗘𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻: Concurrency involves multiple tasks making progress without necessarily running at the same time, often through time slicing. Parallelism achieves simultaneous task execution.

𝗣𝘂𝗿𝗽𝗼𝘀𝗲: Concurrency aims to improve responsiveness and resource utilization. Parallelism focuses on speeding up computational tasks by dividing them across multiple processors.

𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗦𝘄𝗶𝘁𝗰𝗵𝗶𝗻𝗴: Concurrency relies on context switching, where the system rapidly switches between tasks. Parallelism minimizes context switching by distributing tasks to separate cores.

𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀:

𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆: Ideal for applications requiring high responsiveness, such as user interfaces, real-time systems, and networked applications where tasks need to be managed simultaneously but not executed at the same time.

𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘀𝗺: Best suited for compute-intensive tasks like scientific simulations, data processing, and large-scale computations where tasks can be broken down into smaller, independent units.

𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗜𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀: Choosing between concurrency and parallelism depends on your application's needs. If your goal is to handle many tasks efficiently, focus on concurrency: a web server handling multiple requests concurrently can improve throughput. If your objective is to reduce the time to complete a task, leverage parallelism: data processing frameworks like Apache Spark use parallelism to process large datasets quickly.

Consider a video streaming service:

𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆: Ensures multiple users can interact with the platform simultaneously without lag.

𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘀𝗺: Accelerates video encoding by processing chunks of video in parallel, reducing the time required to make content available.

Here's a great talk on the subject by Rob Pike: https://lnkd.in/gg9zRTvW
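The distinction fits in a few lines of Go, the language of the linked talk: the goroutines below express the *concurrent* structure of the work, and the runtime supplies *parallelism* whenever more than one core is available. (The worker count and the summation task are arbitrary choices for illustration.)

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sumRange computes a partial sum: a small CPU-bound unit of work
// that can be carved out of the larger problem.
func sumRange(lo, hi int) int {
	total := 0
	for i := lo; i < hi; i++ {
		total += i
	}
	return total
}

func main() {
	// Concurrency: the program is structured as independent tasks.
	// Parallelism: with more than one core, the runtime actually
	// executes those tasks at the same time.
	fmt.Println("cores available:", runtime.GOMAXPROCS(0))

	const n = 1_000_000
	const workers = 4
	chunk := n / workers
	results := make([]int, workers)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			results[w] = sumRange(w*chunk, (w+1)*chunk)
		}(w)
	}
	wg.Wait()

	total := 0
	for _, r := range results {
		total += r
	}
	// Same answer as the sequential computation, regardless of how
	// many cores the scheduler actually used.
	fmt.Println("sum:", total)
}
```

Note that the code would still be correct on a single core: the concurrent structure is unchanged, only the parallel speedup disappears. That is Pike's point that concurrency is about structure, parallelism about execution.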
Google I/O 2012 - Go Concurrency Patterns (youtube.com)