| Characteristic | Description | Challenges | Solutions |
|---|---|---|---|
| Concurrency | Multiple nodes operating simultaneously | Coordination, synchronization | Locks, semaphores, consensus protocols |
| No Global Clock | Each node has its own clock | Ordering events, causality | Logical clocks, vector clocks |
| Independent Failures | Components can fail independently | System reliability, fault tolerance | Replication, redundancy, error recovery |
| Network Partitions | Communication failures between nodes | Data consistency, availability | Consensus protocols, conflict resolution |
| Heterogeneity | Different hardware, OS, and programming languages | Interoperability, communication | Standard protocols, middleware |
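
The "No Global Clock" row can be made concrete with a vector clock, one standard way to capture causality between events without synchronized physical clocks. A minimal sketch (class and node names are illustrative):

```python
# Sketch of a vector clock: each node keeps one counter per node and
# merges clocks on message receipt. Names here are illustrative.

class VectorClock:
    def __init__(self, node_id, num_nodes):
        self.node_id = node_id
        self.clock = [0] * num_nodes

    def tick(self):
        """Local event: increment this node's own entry."""
        self.clock[self.node_id] += 1

    def send(self):
        """Tick, then attach a copy of the clock to the outgoing message."""
        self.tick()
        return list(self.clock)

    def receive(self, other):
        """Merge the sender's clock (element-wise max), then tick."""
        self.clock = [max(a, b) for a, b in zip(self.clock, other)]
        self.tick()

def happened_before(a, b):
    """True if the event with clock a causally precedes the one with clock b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Node 0 sends a message to node 1; the send causally precedes the receive.
n0, n1 = VectorClock(0, 2), VectorClock(1, 2)
msg = n0.send()      # n0 clock becomes [1, 0]
n1.receive(msg)      # n1 clock becomes [1, 1]
```

Comparing two clocks then tells you whether one event could have influenced the other or whether the two are concurrent.
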

| Model | Description | Characteristics | Trade-offs |
|---|---|---|---|
| Strong Consistency | All nodes see same data at same time | Linearizability, sequential consistency | Strong guarantees, reduced availability and higher latency |
| Eventual Consistency | All nodes eventually converge to same state | Availability over consistency | High availability, potential inconsistency |
| Causal Consistency | Causally related operations appear in order | Maintains causality, relaxes global order | Balanced approach |
| Weak Consistency | No guarantees about data consistency | Best effort, fast responses | High performance, no recency guarantees |
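
Eventual consistency is easiest to see with a last-writer-wins (LWW) register, one simple convergence rule: replicas accept writes independently, then reconcile by keeping the write with the newest timestamp. A sketch under that assumption (timestamps here are caller-supplied logical values, not wall-clock time):

```python
# Sketch of a last-writer-wins register: two replicas diverge, then
# converge after exchanging state. Illustrative, not production code.

class LWWRegister:
    def __init__(self):
        self.value = None
        self.timestamp = -1

    def write(self, value, timestamp):
        # Keep only the newest write seen so far.
        if timestamp > self.timestamp:
            self.value, self.timestamp = value, timestamp

    def merge(self, other):
        """Anti-entropy step: adopt the other replica's state if newer."""
        self.write(other.value, other.timestamp)

# Two replicas accept writes independently, then reconcile.
a, b = LWWRegister(), LWWRegister()
a.write("x=1", timestamp=1)
b.write("x=2", timestamp=2)
a.merge(b)
b.merge(a)
# Both replicas now hold the later write, "x=2".
```

Merging is commutative and idempotent here, which is what lets replicas converge regardless of the order in which they exchange state.
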

| Algorithm | Approach | Strengths | Weaknesses | Use Cases |
|---|---|---|---|---|
| Raft | Leader-based consensus | Understandable, efficient | Brief unavailability during leader re-election | etcd, Consul, CockroachDB |
| Paxos | Message passing with quorums | Proven, fault-tolerant | Complex, hard to implement | Google Chubby, Spanner |
| Two-Phase Commit | Prepare and commit phases | Ensures atomicity | Blocking, single point of failure | Distributed databases |
| Three-Phase Commit | Non-blocking extension of 2PC | Reduces blocking issues | Complex; assumes bounded delays, unsafe under partitions | Distributed transactions |
| Byzantine Fault Tolerance | Handles malicious nodes | Secure against arbitrary failures | High resource overhead | Blockchain, security-critical systems |
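
The prepare and commit phases of two-phase commit can be sketched with in-process participants. This is a toy illustration of the protocol's control flow only; a real implementation also needs durable logs and timeouts, which is where the blocking weakness in the table comes from:

```python
# Sketch of two-phase commit (2PC): the coordinator collects votes in
# phase 1 and commits or aborts everywhere in phase 2. Names illustrative.

class Participant:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: vote, and promise to commit later if asked.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: a single "no" vote aborts the whole transaction.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()          # Phase 2: commit everywhere
        return "committed"
    for p in participants:
        p.abort()               # Phase 2: abort everywhere
    return "aborted"

ok = two_phase_commit([Participant(), Participant()])
bad = two_phase_commit([Participant(), Participant(can_commit=False)])
```

Note that atomicity comes from the unanimity requirement in phase 1: no participant commits unless every participant has promised it can.
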

| Architecture | Description | Advantages | Disadvantages | Examples |
|---|---|---|---|---|
| Client-Server | Centralized model with clients requesting services | Simple, centralized control | Single point of failure, limited scalability | Web applications, databases |
| Peer-to-Peer | Decentralized model with equal nodes | Scalable, resilient | Complex management, harder to secure | BitTorrent, blockchain |
| Microservices | Decomposed into small, independent services | Scalability, technology diversity | Complexity, network overhead | Netflix, Amazon, Uber |
| Master-Slave | Master coordinates work among slaves | Load distribution, simple coordination | Master is a bottleneck and single point of failure | Hadoop, MapReduce |
| Event-Driven | Components communicate through events | Loose coupling, scalability | Complex debugging, event ordering | Message queues, event sourcing |
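
The loose coupling of the event-driven row can be shown with a minimal in-process event bus: publishers and subscribers know only a topic name, never each other. A sketch (topic names and payloads are invented for illustration):

```python
# Minimal in-process event bus sketching the event-driven style:
# publishers emit to a topic; the bus fans out to subscribed handlers.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Publishers never call handlers directly; the bus decouples them.
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)
bus.publish("order.created", {"order_id": 42})
```

The same shape, with the bus replaced by a broker such as a message queue, gives the distributed version, and also introduces the debugging and ordering challenges noted in the table.
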

| Concept | Description | Techniques | Trade-offs |
|---|---|---|---|
| Partitioning | Distributing data across multiple nodes | Range, hash, consistent hashing | Scalability vs complexity |
| Replication | Duplicating data across nodes | Synchronous, asynchronous, semi-synchronous | Availability vs consistency |
| Sharding | Horizontal partitioning of database | Key-based, directory-based, consistent hashing | Scalability vs query complexity |
| Cache Coherence | Keeping cached data consistent | Write-through, write-back, invalidation | Performance vs consistency |
| Data Locality | Processing data near where it's stored | MapReduce, Hadoop, edge computing | Network efficiency vs load balancing |
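
Consistent hashing, which appears in both the Partitioning and Sharding rows, can be sketched as a sorted ring of hashed virtual nodes; a key is owned by the first virtual node clockwise from its hash. The hash function and vnode count below are arbitrary choices for illustration:

```python
# Sketch of consistent hashing with virtual nodes: adding or removing a
# node moves only the keys adjacent to it on the ring, not all keys.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` positions on the ring.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        # Any stable hash works; md5 is used here only for determinism.
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        """Walk clockwise to the first virtual node at or after the key's hash."""
        i = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.lookup("user:1234")   # deterministic owner for this key
```

Virtual nodes are what smooth out the load: with only one position per physical node, removing a node would dump its entire key range onto a single neighbor.
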

| Pattern | Description | Use Case | Benefits |
|---|---|---|---|
| MapReduce | Process large datasets in parallel | Big data processing, analytics | Scalability, fault tolerance |
| Actor Model | Concurrency through message passing | Concurrent, distributed systems | Isolation, concurrency |
| Service Discovery | Dynamic service location | Microservices, cloud systems | Dynamic scaling, resilience |
| Circuit Breaker | Prevent cascading failures | Microservices, API calls | Fault isolation, resilience |
| Load Balancing | Distribute requests across nodes | High-traffic applications | Scalability, availability |
| Leader Election | Select coordinator among nodes | Consensus, coordination | Coordination, fault tolerance |
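
The circuit-breaker row can be sketched as a failure counter that "opens" after consecutive failures and then fails fast instead of calling the flaky dependency. Thresholds and names below are illustrative; a production breaker also adds a timed half-open state for recovery probes:

```python
# Sketch of the circuit-breaker pattern: repeated failures trip the
# breaker, so later calls fail immediately and cannot cascade.

class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def is_open(self):
        return self.failures >= self.failure_threshold

    def call(self, func, *args):
        if self.is_open:
            # Fail fast: do not touch the dependency at all.
            raise CircuitOpenError("circuit open, failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0   # any success resets the count
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise RuntimeError("downstream unavailable")

for _ in range(2):          # two consecutive failures trip the breaker
    try:
        breaker.call(flaky)
    except RuntimeError:
        pass
```

After the loop the breaker is open, so further calls raise `CircuitOpenError` without ever reaching `flaky`, which is exactly the fault-isolation benefit listed in the table.
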

| Property | Description | Systems that prioritize it | Trade-offs |
|---|---|---|---|
| Consistency | All nodes see same data at same time | RDBMS, Spanner | May sacrifice availability |
| Availability | System remains operational despite failures | Cassandra, DynamoDB | May sacrifice consistency |
| Partition Tolerance | System continues despite network failures | All practical distributed systems | During a partition, must choose between C and A |
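
One concrete way systems tune this trade-off is quorum replication: with N replicas, writes acknowledged by W and reads from R are guaranteed to overlap whenever R + W > N, so every read quorum contains at least one replica with the latest write. A toy sketch with in-memory replicas (values and timestamps invented for illustration):

```python
# Sketch of quorum reads/writes: R + W > N forces the read and write
# sets to intersect, so a read always sees the newest acknowledged write.

N, W, R = 3, 2, 2          # R + W = 4 > N = 3, quorums intersect

replicas = [{"v": None, "ts": 0} for _ in range(N)]

def write(value, ts, targets):
    # Only W replicas need to acknowledge the write.
    for i in targets:
        replicas[i] = {"v": value, "ts": ts}

def read(targets):
    # Among R replicas, return the value with the highest timestamp.
    return max((replicas[i] for i in targets), key=lambda r: r["ts"])["v"]

write("new", ts=1, targets=[0, 1])   # W = 2 replicas updated
value = read(targets=[1, 2])         # R = 2; overlaps the write at replica 1
```

Lowering W or R (so R + W ≤ N) buys availability and latency at the cost of possibly stale reads, which is the Cassandra/DynamoDB end of the table above.
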